Building Real-Time Data Pipelines with Kafka, Telegraf, and InfluxDB 3

From pizza orders to real-time insights

When milliseconds matter and data never stops flowing, you need a pipeline that can handle high-velocity streaming data with reliability and scale. The modern streaming stack of Kafka, Telegraf, and InfluxDB 3 Core delivers exactly that.

To make this concrete, this post works through a fictitious use case: “Papa Giuseppe’s Pizzeria.” Every oven, prep station, and order in this pizza restaurant generates data. Our workflow looks like this:

1. Customers = Events

Every new order or oven temperature reading is an event.

2. Waitstaff = Kafka

Waiters take customer orders and pin them on the order board (Kafka topics). Kafka ensures no order is lost, even if the kitchen is overwhelmed. It scales easily during rush hours (peak traffic spikes).

3. Chef = Telegraf

The chef pulls orders from the board, organizes them, and prepares dishes. Along the way, the chef can convert measurements (Celsius to Fahrenheit), prioritize urgent meals, or batch similar orders. Telegraf acts as an intelligent data processor.

4. Order History Book = InfluxDB 3 Core

Every completed dish is recorded in the order history book, including details such as time, equipment used, and outcome. InfluxDB specializes in time series data, making it ideal for tracking oven performance, cook times, or order volumes.

5. Restaurant Manager = Dashboard (Custom Web App/Grafana, etc.)

Instead of flipping through raw order slips, the manager sees a live dashboard: active orders, efficiency scores, rush-hour spikes. This is where raw data becomes operational awareness.

By the way, the same pattern applies outside restaurants: Industrial IoT, DevOps monitoring, and financial systems all benefit from very similar architectures.

Demo app architecture


order process diagram

Sample app quick start

Download/clone the sample project from GitHub and follow the steps below to run the demo:

1. Run Core Services

docker-compose up -d influxdb3-core zookeeper kafka

2. Generate Token & Create Database

docker exec influxdb3-core influxdb3 create token --admin
docker exec influxdb3-core influxdb3 create database pizzeria_data --token "YOUR_TOKEN"

3. Launch Simulator & Dashboard

docker-compose --profile with-token up -d

Visit http://localhost:8080 to watch your pizza shop run in real time.
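Note that the token generated in step 2 has to reach the containers launched in step 3, since Telegraf and the dashboard read it as ${INFLUXDB_TOKEN}. A minimal sketch, assuming the compose file picks it up from the environment (the sample project may instead expect it in a .env file):

# Assumption: the compose file reads these variables from the shell environment
export INFLUXDB_TOKEN="YOUR_TOKEN"   # paste the token printed in step 2
export INFLUXDB_BUCKET="pizzeria_data"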

Kafka: The Food Order Simulator

Kafka serves as a reliable order board. Producers (our simulator) publish new events, and consumers (Telegraf) read them.

Simulator program (GitHub)

from kafka import KafkaProducer
import json, datetime

producer = KafkaProducer(
    bootstrap_servers='kafka:9092',
    value_serializer=lambda v: json.dumps(v).encode('utf-8')
)

# Example oven temperature event
event = {
    "measurement": "pizzeria_event",
    "equipment_id": "oven_1",
    "equipment_type": "pizza_oven",
    "event_type": "temperature_reading",
    "temperature": 450.5,
    "capacity_used": 3,
    "capacity_total": 4,
    "timestamp": datetime.datetime.utcnow().isoformat()
}

# Place order on the board (Kafka topic)
producer.send('pizzeria.events', value=event)
producer.flush()  # ensure the event is delivered before the script exits

Key Concept: Kafka maintains an ordered log of events. Producers write to topics and consumers read from them. This decouples systems: the oven doesn’t need to know who consumes its data.
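Because of that decoupling, any other process can read the same topic without touching the simulator. A minimal sketch of a standalone Python consumer (for illustration only; in the demo, Telegraf plays this role):

from kafka import KafkaConsumer
import json

# Hypothetical standalone consumer; the demo uses Telegraf instead
consumer = KafkaConsumer(
    'pizzeria.events',
    bootstrap_servers='kafka:9092',
    auto_offset_reset='earliest',  # start from the oldest retained event
    value_deserializer=lambda v: json.loads(v.decode('utf-8'))
)

for message in consumer:
    event = message.value
    print(f"{event['equipment_type']} {event['equipment_id']}: {event['event_type']}")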

Telegraf: The Chef

Telegraf consumes Kafka events, tags them, transforms the data if needed, and sends it downstream.

# Input: read events from Kafka
[[inputs.kafka_consumer]]
  brokers = ["kafka:9092"]
  topics = ["pizzeria.events"]
  data_format = "json"
  json_name_key = "measurement"
  tag_keys = ["equipment_id", "equipment_type", "location", "event_type"]

# Output: forward to InfluxDB
[[outputs.influxdb_v2]]
  urls = ["http://influxdb3-core:8181"]
  token = "${INFLUXDB_TOKEN}"
  organization = "${INFLUXDB_ORG}"
  bucket = "${INFLUXDB_BUCKET}"

Key Concept: Think of Telegraf as the chef in the kitchen. It takes raw orders, organizes them, and delivers ready-to-serve data to InfluxDB 3 Core.
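The config above only reads, tags, and forwards events, but the chef can also transform them in flight, as mentioned earlier (for example, Celsius to Fahrenheit). A minimal sketch of an optional Starlark processor; the temperature_celsius field is hypothetical, since the demo’s events already report Fahrenheit:

# Optional processor: derive a Fahrenheit field from a (hypothetical) Celsius field
[[processors.starlark]]
  namepass = ["pizzeria_event"]
  source = '''
def apply(metric):
    if "temperature_celsius" in metric.fields:
        c = metric.fields["temperature_celsius"]
        metric.fields["temperature_fahrenheit"] = c * 9.0 / 5.0 + 32.0
    return metric
'''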

InfluxDB 3 Core: The Order History Book

Once events are processed, they’re recorded in InfluxDB. Just like an order history book keeps a record of every dish served, InfluxDB maintains a precise timeline of events. You can query with SQL:

-- Current oven temps in last 10 minutes
SELECT equipment_id, temperature, time
FROM pizzeria_event
WHERE equipment_type = 'pizza_oven'
  AND time >= now() - interval '10 minutes'
ORDER BY time DESC;

-- Hourly order volume
SELECT DATE_TRUNC('hour', time) as hour, COUNT(*) as orders
FROM pizzeria_event
WHERE event_type = 'order_created'
  AND time >= now() - interval '24 hours'
GROUP BY hour
ORDER BY hour;

Key Concept: Time series queries ask what happened over time. InfluxDB makes this natural: group by time, filter by event type, spot anomalies.
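The same SQL can also be run programmatically. A minimal sketch using the influxdb3-python client; the host, token, and database values are assumptions matching the local demo setup:

from influxdb_client_3 import InfluxDBClient3

# Connection details assume the local demo and the token created earlier
client = InfluxDBClient3(
    host="http://localhost:8181",
    token="YOUR_TOKEN",
    database="pizzeria_data"
)

# query() returns a PyArrow table; to_pylist() turns it into plain dicts
table = client.query(
    """SELECT equipment_id, temperature, time
       FROM pizzeria_event
       WHERE equipment_type = 'pizza_oven'
         AND time >= now() - interval '10 minutes'
       ORDER BY time DESC""",
    language="sql"
)

for row in table.to_pylist():
    print(row["equipment_id"], row["temperature"], row["time"])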

Dashboard: Running the Restaurant

At http://localhost:8080, the demo runs a custom Python web dashboard (you could use Grafana instead) that shows:

  • Active and completed orders
  • Oven temperatures and efficiency
  • Rush-hour simulation (triples order load)
  • Equipment failure scenarios

The dashboard transforms raw data into operational awareness, much like a restaurant manager tracking performance in real time.
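As a rough sketch of how one of these panels might be backed by a query (the actual sample app may structure its endpoints differently), a single Flask route could expose recent order counts for the front end to poll:

from flask import Flask, jsonify
from influxdb_client_3 import InfluxDBClient3

app = Flask(__name__)

# Hypothetical wiring; the endpoint name and the 15-minute window are illustrative only
client = InfluxDBClient3(
    host="http://influxdb3-core:8181",
    token="YOUR_TOKEN",
    database="pizzeria_data"
)

@app.route("/api/recent_orders")
def recent_orders():
    table = client.query(
        """SELECT COUNT(*) AS orders
           FROM pizzeria_event
           WHERE event_type = 'order_created'
             AND time >= now() - interval '15 minutes'""",
        language="sql"
    )
    return jsonify(recent_orders=table.to_pylist()[0]["orders"])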

Order’s up

This Kafka + Telegraf + InfluxDB 3 data pipeline works efficiently, much like a well-organized restaurant: customers place orders, waitstaff record them, chefs prepare them, and managers track operations through an order history. This architecture can be extended to a wide range of real-time streaming use cases, including industrial IoT, DevOps monitoring, financial systems, and healthcare monitoring.