Azure Event Hubs and OpenObserve Integration

Powerful performance with an easy integration, powered by Telegraf, the open source data connector built by InfluxData.

Info: This is not the recommended configuration for real-time query at scale. For query and compression optimization, high-speed ingest, and high availability, you may want to consider Azure Event Hubs and InfluxDB.


Powerful Performance, Limitless Scale

Collect, organize, and act on massive volumes of high-velocity data. Any data is more valuable when you think of it as time series data. InfluxDB is the #1 time series platform, built to scale with Telegraf.

See Ways to Get Started

Input and output integration overview

The Azure Event Hubs Input Plugin allows Telegraf to consume data from Azure Event Hubs and Azure IoT Hub, enabling efficient data processing and monitoring of event streams from these cloud services.

This configuration pairs Telegraf’s HTTP output with OpenObserve’s native JSON ingestion API, turning any Telegraf agent into a first-class OpenObserve collector.

Integration details

Azure Event Hubs

This plugin serves as a consumer for Azure Event Hubs and Azure IoT Hub, allowing users to ingest data streams from these platforms efficiently. Azure Event Hubs is a highly scalable data streaming platform and event ingestion service capable of receiving and processing millions of events per second, while Azure IoT Hub enables secure device-to-cloud and cloud-to-device communication in IoT applications. The Event Hub Input Plugin interacts seamlessly with these services, providing reliable message consumption and stream processing capabilities. Key features include dynamic management of consumer groups, message tracking to prevent data loss, and customizable settings for prefetch counts, user agents, and metadata handling. This plugin is designed to support a range of use cases, including real-time telemetry data collection, IoT data processing, and integration with various data analysis and monitoring tools within the broader Azure ecosystem.
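As a quick sketch, the smallest useful configuration needs only a connection string (which must include the EntityPath) and a data format. The values below are illustrative placeholders; the fully annotated configuration appears in the Configuration section:

[[inputs.eventhub_consumer]]
  ## Illustrative placeholder; a real connection string includes the EventHubName (EntityPath)
  connection_string = "Endpoint=sb://example.servicebus.windows.net/;SharedAccessKeyName=listen;SharedAccessKey=<key>;EntityPath=myhub"

  ## Events are assumed to arrive as InfluxDB line protocol
  data_format = "influx"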

OpenObserve

OpenObserve is an open source observability platform written in Rust that stores data cost-effectively on object storage or local disk. It exposes REST endpoints such as /api/{org}/ingest/metrics/_json that accept batched metric documents conforming to a concise JSON schema, making it an attractive drop-in replacement for Loki or Elasticsearch stacks. The Telegraf HTTP output plugin streams metrics to arbitrary HTTP targets; when the data_format = "json" serializer is selected, Telegraf batches its metric objects into a payload that matches OpenObserve’s ingestion contract. The plugin supports configurable batch size, custom headers, TLS, and compression, allowing operators to authenticate with Basic or Bearer tokens and to enforce back-pressure without additional collectors. By reusing existing Telegraf agents already collecting system, application, or SNMP data, organizations can funnel rich telemetry into OpenObserve dashboards and SQL-like analytics with minimal overhead, enabling unified observability, long-term retention, and real-time alerting without vendor lock-in.
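For orientation, the Telegraf JSON serializer wraps each batch in a top-level metrics array, which is the shape OpenObserve’s _json endpoint ingests. The metric below is invented purely to illustrate the structure:

{
  "metrics": [
    {
      "name": "cpu",
      "tags": { "host": "edge-01", "cpu": "cpu-total" },
      "fields": { "usage_idle": 97.4, "usage_user": 1.8 },
      "timestamp": 1717430400000
    }
  ]
}

The timestamp resolution follows the serializer’s json_timestamp_units setting (milliseconds in the configuration below).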

Configuration

Azure Event Hubs

[[inputs.eventhub_consumer]]
  ## The default behavior is to create a new Event Hub client from environment variables.
  ## This requires one of the following sets of environment variables to be set:
  ##
  ## 1) Expected Environment Variables:
  ##    - "EVENTHUB_CONNECTION_STRING"
  ##
  ## 2) Expected Environment Variables:
  ##    - "EVENTHUB_NAMESPACE"
  ##    - "EVENTHUB_NAME"
  ##    - "EVENTHUB_KEY_NAME"
  ##    - "EVENTHUB_KEY_VALUE"

  ## 3) Expected Environment Variables:
  ##    - "EVENTHUB_NAMESPACE"
  ##    - "EVENTHUB_NAME"
  ##    - "AZURE_TENANT_ID"
  ##    - "AZURE_CLIENT_ID"
  ##    - "AZURE_CLIENT_SECRET"

  ## Uncommenting the option below will create an Event Hub client based solely on the connection string.
  ## This can either be the associated environment variable or hard coded directly.
  ## If this option is uncommented, environment variables will be ignored.
  ## Connection string should contain EventHubName (EntityPath)
  # connection_string = ""

  ## Set persistence directory to a valid folder to use a file persister instead of an in-memory persister
  # persistence_dir = ""

  ## Change the default consumer group
  # consumer_group = ""

  ## By default the consumer receives all messages present on the broker; alternative modes can be set below.
  ## The timestamp should be in https://github.com/toml-lang/toml#offset-date-time format (RFC 3339).
  ## The two options below only apply if no valid state is read from the in-memory or file persister (e.g. first run).
  # from_timestamp =
  # latest = true

  ## Set a custom prefetch count for the receiver(s)
  # prefetch_count = 1000

  ## Add an epoch to the receiver(s)
  # epoch = 0

  ## Change to set a custom user agent, "telegraf" is used by default
  # user_agent = "telegraf"

  ## To consume from a specific partition, set the partition_ids option.
  ## An empty array will result in receiving from all partitions.
  # partition_ids = ["0","1"]

  ## Max undelivered messages
  ## This plugin uses tracking metrics, which ensure messages are read to
  ## outputs before acknowledging them to the original broker to ensure data
  ## is not lost. This option sets the maximum messages to read from the
  ## broker that have not been written by an output.
  ##
  ## This value needs to be picked with awareness of the agent's
  ## metric_batch_size value as well. Setting max undelivered messages too high
  ## can result in a constant stream of data batches to the output, while
  ## setting it too low may prevent the broker's messages from ever being flushed.
  # max_undelivered_messages = 1000

  ## Set either option below to true to use a system property as timestamp.
  ## You have the choice between EnqueuedTime and IoTHubEnqueuedTime.
  ## It is recommended to use this setting when the data itself has no timestamp.
  # enqueued_time_as_ts = true
  # iot_hub_enqueued_time_as_ts = true

  ## Tags or fields to create from keys present in the application property bag.
  ## These could for example be set by message enrichments in Azure IoT Hub.
  # application_property_tags = []
  # application_property_fields = []

  ## Tag or field name to use for metadata
  ## By default all metadata is disabled
  # sequence_number_field = "SequenceNumber"
  # enqueued_time_field = "EnqueuedTime"
  # offset_field = "Offset"
  # partition_id_tag = "PartitionID"
  # partition_key_tag = "PartitionKey"
  # iot_hub_device_connection_id_tag = "IoTHubDeviceConnectionID"
  # iot_hub_auth_generation_id_tag = "IoTHubAuthGenerationID"
  # iot_hub_connection_auth_method_tag = "IoTHubConnectionAuthMethod"
  # iot_hub_connection_module_id_tag = "IoTHubConnectionModuleID"
  # iot_hub_enqueued_time_field = "IoTHubEnqueuedTime"

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "influx"
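As a concrete variant, an IoT Hub deployment whose device payloads carry no timestamps might enable the metadata options above like this. The environment variable reference is a placeholder and assumes Telegraf's ${VAR} substitution:

[[inputs.eventhub_consumer]]
  connection_string = "${EVENTHUB_CONNECTION_STRING}"

  ## Trust the IoT Hub enqueued time as the metric timestamp
  iot_hub_enqueued_time_as_ts = true

  ## Tag each metric with the originating device connection
  iot_hub_device_connection_id_tag = "IoTHubDeviceConnectionID"

  ## Device payloads are assumed to be JSON documents
  data_format = "json"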

OpenObserve

[[outputs.http]]
  ## OpenObserve JSON metrics ingestion endpoint
  url = "https://api.openobserve.ai/api/default/ingest/metrics/_json"

  ## Use POST to push batches
  method = "POST"

  ## Basic auth header (base64 encoded "username:password") plus the JSON
  ## content type OpenObserve expects
  headers = { Authorization = "Basic dXNlcjpwYXNzd29yZA==", "Content-Type" = "application/json" }

  ## Timeout for HTTP requests
  timeout = "10s"

  ## Force Telegraf to batch and serialize metrics as JSON
  data_format = "json"

  ## JSON serializer specific options
  json_timestamp_units = "1ms"

  ## Uncomment to override the agent-level batch size for this output
  # metric_batch_size = 5000
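Putting the two halves together, a minimal end-to-end agent that bridges Event Hubs into OpenObserve could look like the sketch below; the connection string and Basic auth credentials are placeholders:

[agent]
  interval = "10s"
  metric_batch_size = 1000
  flush_interval = "10s"

[[inputs.eventhub_consumer]]
  connection_string = "${EVENTHUB_CONNECTION_STRING}"
  data_format = "influx"

[[outputs.http]]
  url = "https://api.openobserve.ai/api/default/ingest/metrics/_json"
  method = "POST"
  headers = { Authorization = "Basic dXNlcjpwYXNzd29yZA==", "Content-Type" = "application/json" }
  data_format = "json"
  json_timestamp_units = "1ms"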

Input and output integration examples

Azure Event Hubs

  1. Real-Time IoT Device Monitoring: Use the Azure Event Hubs Plugin to monitor telemetry data from IoT devices like sensors and actuators. By streaming device data into monitoring dashboards, organizations can gain insight into system performance, track usage patterns, and quickly respond to irregularities. This setup allows for proactive management of devices, improving operational efficiency and reducing downtime.

  2. Event-Driven Data Processing Workflows: Leverage this plugin to trigger data processing workflows in response to events received from Azure Event Hubs. For instance, when a new event arrives, it can initiate data transformation, aggregation, or storage processes, allowing businesses to automate their workflows more effectively. This integration enhances responsiveness and streamlines operations across systems.

  3. Integration with Analytics Platforms: Implement the plugin to funnel event data into analytics platforms like Azure Synapse or Power BI. By integrating real-time streaming data into analytics tools, organizations can perform comprehensive data analysis, drive business intelligence efforts, and create interactive visualizations that inform decision-making.

  4. Cross-Platform Data Sync: Utilize the Azure Event Hubs Plugin to synchronize data streams across diverse systems or platforms. By consuming data from Azure Event Hubs and forwarding it to other systems like databases or cloud storage, organizations can maintain consistent and up-to-date information across their entire architecture, enabling cohesive data strategies.

OpenObserve

  1. Edge Device Health Mirror: Deploy Telegraf on thousands of industrial IoT devices to capture temperature, vibration, and power metrics, then use this output to push JSON batches to OpenObserve. Plant operators gain a real-time overview of machine health and can trigger maintenance based on anomalies without relying on heavyweight collectors.

  2. Blue-Green Deployment Canary: Attach a lightweight Telegraf sidecar to each Kubernetes release-candidate pod that scrapes /metrics and forwards container stats to a dedicated “canary” stream in OpenObserve. Continuous comparison of error rates between blue and green versions empowers the CI pipeline to auto-roll back poor performers within seconds.

  3. Multi-Tenant SaaS Billing Pipeline: Emit per-customer usage counters via Telegraf and tag them with tenant_id (see the sketch after this list); the HTTP plugin posts them to OpenObserve where SQL reports aggregate usage into invoices, eliminating separate metering services and simplifying compliance audits.

  4. Security Threat Scoring: Fuse Suricata events and host resource metrics in Telegraf, deliver them to OpenObserve’s analytics engine, and run stream-processing rules that correlate spikes in suspicious traffic with CPU saturation to produce an actionable threat score and automatically open tickets in a SOAR platform.
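For the billing pipeline in example 3, one simple way to attach tenant identity is a global tag applied to every metric the agent emits. The tag value here is hypothetical and would normally be templated per deployment:

[global_tags]
  ## Hypothetical static tenant identifier for this agent
  tenant_id = "tenant-042"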

Feedback

Thank you for being part of our community! If you have any general feedback or found any bugs on these pages, we welcome and encourage your input. Please submit your feedback in the InfluxDB community Slack.


Related Integrations

HTTP and InfluxDB Integration

The HTTP plugin collects metrics from one or more HTTP(S) endpoints. It supports various authentication methods and configuration options for data formats.

View Integration

Kafka and InfluxDB Integration

This plugin reads messages from Kafka and allows the creation of metrics based on those messages. It supports various configurations including different Kafka settings and message processing options.

View Integration

Kinesis and InfluxDB Integration

The Kinesis plugin allows for reading metrics from AWS Kinesis streams. It supports multiple input data formats and offers checkpointing features with DynamoDB for reliable message processing.

View Integration