Kafka and Sensu Integration

Powerful performance with an easy integration, powered by Telegraf, the open source data connector built by InfluxData.

Info: This is not the recommended configuration for real-time query at scale. For query and compression optimization, high-speed ingest, and high availability, you may want to consider Kafka and InfluxDB.

5B+ Telegraf downloads

#1 Time series database (Source: DB Engines)

1B+ Downloads of InfluxDB

2,800+ Contributors

Powerful Performance, Limitless Scale

Collect, organize, and act on massive volumes of high-velocity data. Any data is more valuable when you think of it as time series data with InfluxDB, the #1 time series platform built to scale with Telegraf.

See Ways to Get Started

Input and output integration overview

This plugin allows you to gather metrics from Kafka topics in real-time, enhancing data monitoring and collection capabilities within your Telegraf setup.

This plugin writes metrics events to Sensu via its HTTP events API, enabling seamless integration with the Sensu monitoring platform.

Integration details

Kafka

The Kafka Telegraf plugin reads data from Kafka topics and creates metrics using any of the supported input data formats. As a service input plugin, it listens continuously for incoming messages rather than polling at fixed intervals like standard input plugins. It works with a range of Kafka versions and can consume messages from specified topics, authenticate with SASL credentials, and manage message processing through offsets and consumer groups. This flexibility lets it handle a wide array of message formats and use cases, making it a valuable asset for applications that rely on Kafka for data ingestion.
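
For orientation before the full reference configuration below, a minimal sketch of the service-input essentials might look like the following; the broker address, topic, and consumer group are placeholder values, and every option shown is documented in the reference configuration:

[[inputs.kafka_consumer]]
  ## Placeholder broker and topic; adjust for your cluster.
  brokers = ["localhost:9092"]
  topics = ["telegraf"]

  ## Consumer group so multiple Telegraf instances can share partitions.
  consumer_group = "telegraf_metrics_consumers"

  ## Start from the oldest retained message the first time the group runs.
  offset = "oldest"

  ## Parse message payloads as InfluxDB line protocol.
  data_format = "influx"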

Sensu

This plugin writes metrics events to Sensu via its HTTP events API. Sensu is a monitoring system that enables users to collect, analyze, and manage metrics from various components in their infrastructure. The plugin facilitates the integration of Telegraf, a server agent for collecting and reporting metrics, with the Sensu monitoring platform. Users can configure settings such as backend and agent API URLs, API keys for authentication, and optional TLS settings. The plugin’s core functionality is centered around sending metric events, including check and entity specifications, to Sensu, allowing for comprehensive monitoring and alerting. The API reference provides extensive details about the events and metrics structure, ensuring users can efficiently leverage Sensu’s capabilities for observability and incident response.
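
For orientation, a minimal sketch that sends events to a Sensu backend might look like the following; the URL, API key variable, and check name are placeholders, and the full reference configuration appears in the Configuration section below:

[[outputs.sensu]]
  ## Placeholder backend URL; if omitted (along with agent_api_url), the plugin
  ## falls back to the local agent API.
  backend_api_url = "http://127.0.0.1:8080"

  ## Backend API key, read here from an environment variable (placeholder name).
  api_key = "${SENSU_API_KEY}"

  ## Name of the Sensu check attached to the generated events.
  [outputs.sensu.check]
    name = "telegraf"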

Configuration

Kafka


[[inputs.kafka_consumer]]
  ## Kafka brokers.
  brokers = ["localhost:9092"]

  ## Set the minimal supported Kafka version. Should be a version string such
  ## as "0.10.2.0" (four dot-separated components for 0.x releases) or "2.6.0"
  ## (three components for 1.0.0 and later). This setting enables the use of
  ## new Kafka features and APIs. Must be 0.10.2.0 (the default) or greater.
  ## Please check the list of supported versions at
  ## https://pkg.go.dev/github.com/Shopify/sarama#SupportedVersions
  ##   ex: kafka_version = "2.6.0"
  ##   ex: kafka_version = "0.10.2.0"
  # kafka_version = "0.10.2.0"

  ## Topics to consume.
  topics = ["telegraf"]

  ## Topic regular expressions to consume.  Matches will be added to topics.
  ## Example: topic_regexps = [ "*test", "metric[0-9A-z]*" ]
  # topic_regexps = [ ]

  ## When set, this tag will be added to all metrics with the topic as the value.
  # topic_tag = ""

  ## The list of Kafka message headers that should be passed as metric tags.
  ## Works only for Kafka version 0.11+; on lower versions the message headers
  ## are not available.
  # msg_headers_as_tags = []

  ## The name of the Kafka message header whose value should override the metric name.
  ## If the same header is specified both here and in the msg_headers_as_tags
  ## option, it will be excluded from the msg_headers_as_tags list.
  # msg_header_as_metric_name = ""

  ## Set metric(s) timestamp using the given source.
  ## Available options are:
  ##   metric -- do not modify the metric timestamp
  ##   inner  -- use the inner message timestamp (Kafka v0.10+)
  ##   outer  -- use the outer (compressed) block timestamp (Kafka v0.10+)
  # timestamp_source = "metric"

  ## Optional Client id
  # client_id = "Telegraf"

  ## Optional TLS Config
  # enable_tls = false
  # tls_ca = "/etc/telegraf/ca.pem"
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"
  ## Use TLS but skip chain & host verification
  # insecure_skip_verify = false

  ## Period between keep alive probes.
  ## Defaults to the OS configuration if not specified or zero.
  # keep_alive_period = "15s"

  ## SASL authentication credentials. These settings should typically be used
  ## with TLS encryption enabled.
  # sasl_username = "kafka"
  # sasl_password = "secret"

  ## Optional SASL mechanism:
  ## one of: OAUTHBEARER, PLAIN, SCRAM-SHA-256, SCRAM-SHA-512, GSSAPI
  ## (defaults to PLAIN)
  # sasl_mechanism = ""

  ## Used if sasl_mechanism is GSSAPI.
  # sasl_gssapi_service_name = ""
  ## One of: KRB5_USER_AUTH and KRB5_KEYTAB_AUTH
  # sasl_gssapi_auth_type = "KRB5_USER_AUTH"
  # sasl_gssapi_kerberos_config_path = "/"
  # sasl_gssapi_realm = "realm"
  # sasl_gssapi_key_tab_path = ""
  # sasl_gssapi_disable_pafxfast = false

  ## Used if sasl_mechanism is OAUTHBEARER.
  # sasl_access_token = ""

  ## SASL protocol version. When connecting to Azure EventHub set to 0.
  # sasl_version = 1

  ## Disable Kafka metadata full fetch.
  # metadata_full = false

  ## Name of the consumer group.
  # consumer_group = "telegraf_metrics_consumers"

  ## Compression codec represents the various compression codecs recognized by
  ## Kafka in messages.
  ##  0 : None
  ##  1 : Gzip
  ##  2 : Snappy
  ##  3 : LZ4
  ##  4 : ZSTD
  # compression_codec = 0

  ## Initial offset position; one of "oldest" or "newest".
  # offset = "oldest"

  ## Consumer group partition assignment strategy; one of "range", "roundrobin" or "sticky".
  # balance_strategy = "range"

  ## Maximum number of retries for metadata operations including
  ## connecting. Sets the Sarama library's Metadata.Retry.Max config value.
  ## If 0 or unset, the Sarama default of 3 is used.
  # metadata_retry_max = 0

  ## Type of retry backoff. Valid options: "constant", "exponential"
  # metadata_retry_type = "constant"

  ## Amount of time to wait before retrying. When metadata_retry_type is
  ## "constant", each retry is delayed this amount. When "exponential", the
  ## first retry is delayed this amount, and subsequent delays are doubled.
  ## If 0 or unset, the Sarama default of 250 ms is used.
  # metadata_retry_backoff = 0

  ## Maximum amount of time to wait before retrying when metadata_retry_type is
  ## "exponential". Ignored for other retry types. If 0, there is no backoff
  ## limit.
  # metadata_retry_max_duration = 0

  ## When set to true, this turns each bootstrap broker address into a set of
  ## IPs, then does a reverse lookup on each one to get its canonical hostname.
  ## This list of hostnames then replaces the original address list.
  # resolve_canonical_bootstrap_servers_only = false

  ## Strategy for making connections to Kafka brokers. Valid options: "startup",
  ## "defer". If set to "defer", the plugin is allowed to start before making a
  ## connection. This is useful if the broker may be down when Telegraf is
  ## started, but any typos in the broker setting will then cause connection
  ## failures without warning at startup.
  # connection_strategy = "startup"

  ## Maximum length of a message to consume, in bytes (default 0/unlimited);
  ## larger messages are dropped.
  max_message_len = 1000000

  ## Max undelivered messages
  ## This plugin uses tracking metrics, which ensure messages are read to
  ## outputs before acknowledging them to the original broker to ensure data
  ## is not lost. This option sets the maximum messages to read from the
  ## broker that have not been written by an output.
  ##
  ## This value needs to be picked with awareness of the agent's
  ## metric_batch_size value as well. Setting max undelivered messages too high
  ## can result in a constant stream of data batches to the output, while
  ## setting it too low may never flush the broker's messages.
  # max_undelivered_messages = 1000

  ## Maximum amount of time the consumer should take to process messages. If
  ## the debug log prints messages from sarama about 'abandoning subscription
  ## to [topic] because consuming was taking too long', increase this value to
  ## longer than the time taken by the output plugin(s).
  ##
  ## Note that the effective timeout could be between 'max_processing_time' and
  ## '2 * max_processing_time'.
  # max_processing_time = "100ms"

  ## The default number of message bytes to fetch from the broker in each
  ## request (default 1MB). This should be larger than the majority of
  ## your messages, or else the consumer will spend a lot of time
  ## negotiating sizes and not actually consuming. Similar to the JVM's
  ## `fetch.message.max.bytes`.
  # consumer_fetch_default = "1MB"

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options; read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "influx"
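
For a cluster that requires authentication, one possible combination of the TLS and SASL options above is sketched below; the hostname, CA path, and credentials are placeholder values:

[[inputs.kafka_consumer]]
  ## Placeholder broker listening on a TLS port.
  brokers = ["broker-1.example.com:9093"]
  topics = ["telegraf"]

  ## TLS plus SASL/SCRAM, using the options documented above.
  enable_tls = true
  tls_ca = "/etc/telegraf/ca.pem"
  sasl_mechanism = "SCRAM-SHA-256"
  sasl_username = "kafka"
  sasl_password = "secret"

  data_format = "influx"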

Sensu

[[outputs.sensu]]
  ## BACKEND API URL is the Sensu Backend API root URL to send metrics to
  ## (protocol, host, and port only). The output plugin will automatically
  ## append the corresponding backend API path
  ## (/api/core/v2/namespaces/:entity_namespace/events/:entity_name/:check_name).
  ##
  ## Backend Events API reference:
  ## https://docs.sensu.io/sensu-go/latest/api/events/
  ##
  ## AGENT API URL is the Sensu Agent API root URL to send metrics to
  ## (protocol, host, and port only). The output plugin will automatically
  ## append the corresponding agent API path (/events).
  ##
  ## Agent API Events API reference:
  ## https://docs.sensu.io/sensu-go/latest/api/events/
  ##
  ## NOTE: if backend_api_url and agent_api_url and api_key are set, the output
  ## plugin will use backend_api_url. If backend_api_url and agent_api_url are
  ## not provided, the output plugin will default to use an agent_api_url of
  ## http://127.0.0.1:3031
  ##
  # backend_api_url = "http://127.0.0.1:8080"
  # agent_api_url = "http://127.0.0.1:3031"

  ## API KEY is the Sensu Backend API token
  ## Generate a new API token via:
  ##
  ## $ sensuctl cluster-role create telegraf --verb create --resource events,entities
  ## $ sensuctl cluster-role-binding create telegraf --cluster-role telegraf --group telegraf
  ## $ sensuctl user create telegraf --group telegraf --password REDACTED
  ## $ sensuctl api-key grant telegraf
  ##
  ## For more information on Sensu RBAC profiles & API tokens, please visit:
  ## - https://docs.sensu.io/sensu-go/latest/reference/rbac/
  ## - https://docs.sensu.io/sensu-go/latest/reference/apikeys/
  ##
  # api_key = "${SENSU_API_KEY}"

  ## Optional TLS Config
  # tls_ca = "/etc/telegraf/ca.pem"
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"
  ## Use TLS but skip chain & host verification
  # insecure_skip_verify = false

  ## Timeout for HTTP message
  # timeout = "5s"

  ## HTTP Content-Encoding for write request body, can be set to "gzip" to
  ## compress body or "identity" to apply no encoding.
  # content_encoding = "identity"

  ## NOTE: Due to the way TOML is parsed, tables must be at the END of the
  ## plugin definition, otherwise additional config options are read as part of
  ## the table

  ## Sensu Event details
  ##
  ## Below are the event details to be sent to Sensu.  The main portions of the
  ## event are the check, entity, and metrics specifications. For more information
  ## on Sensu events and its components, please visit:
  ## - Events - https://docs.sensu.io/sensu-go/latest/reference/events
  ## - Checks -  https://docs.sensu.io/sensu-go/latest/reference/checks
  ## - Entities - https://docs.sensu.io/sensu-go/latest/reference/entities
  ## - Metrics - https://docs.sensu.io/sensu-go/latest/reference/events#metrics
  ##
  ## Check specification
  ## The check name is the name to give the Sensu check associated with the event
  ## created. This maps to check.metadata.name in the event.
  [outputs.sensu.check]
    name = "telegraf"

  ## Entity specification
  ## Configure the entity name and namespace, if necessary. This will be part of
  ## the entity.metadata in the event.
  ##
  ## NOTE: if the output plugin is configured to send events to a
  ## backend_api_url and entity_name is not set, the value returned by
  ## os.Hostname() will be used; if the output plugin is configured to send
  ## events to an agent_api_url, entity_name and entity_namespace are not used.
  # [outputs.sensu.entity]
  #   name = "server-01"
  #   namespace = "default"

  ## Metrics specification
  ## Configure the tags for the metrics that are sent as part of the Sensu event
  # [outputs.sensu.tags]
  #   source = "telegraf"

  ## Configure the handler(s) for processing the provided metrics
  # [outputs.sensu.metrics]
  #   handlers = ["influxdb","elasticsearch"]
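
Conversely, when Telegraf runs on a host that already has a Sensu agent, the configuration can target the local agent API instead; per the notes above, the API key applies only to the backend API and is omitted here, and the address shown is the documented default:

[[outputs.sensu]]
  ## Default local agent endpoint (used automatically if no URLs are set).
  agent_api_url = "http://127.0.0.1:3031"

  ## Check name attached to the generated events.
  [outputs.sensu.check]
    name = "telegraf"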

Input and output integration examples

Kafka

  1. Real-Time Data Processing: Use the Kafka plugin to feed live data from a Kafka topic into a monitoring system. This can be particularly useful for applications that require instant feedback on performance metrics or user activity, allowing businesses to react more swiftly to changing conditions in their environments. A configuration sketch of this pattern follows this list.

  2. Dynamic Metrics Collection: Leverage this plugin to dynamically adjust the metrics being captured based on events occurring within Kafka. For instance, by integrating with other services, users can have the plugin reconfigure itself on-the-fly, ensuring relevant metrics are always collected according to the needs of the business or application.

  3. Centralized Logging and Monitoring: Implement a centralized logging system using the Kafka Consumer Plugin to aggregate logs from multiple services into a unified monitoring dashboard. This setup can help identify issues across different services and improve overall system observability and troubleshooting capabilities.

  4. Anomaly Detection System: Combine Kafka with machine learning algorithms for real-time anomaly detection. By constantly analyzing streaming data, this setup can automatically identify unusual patterns, triggering alerts and mitigating potential issues more effectively.
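
As a sketch of the first use case, the following pairs the Kafka consumer with the Sensu output described on this page so that messages arriving on a topic are forwarded as Sensu metric events; the addresses, topic, API key variable, and check name are assumed placeholder values:

[[inputs.kafka_consumer]]
  ## Assumed local broker and example topic.
  brokers = ["localhost:9092"]
  topics = ["telegraf"]
  consumer_group = "telegraf_metrics_consumers"
  data_format = "influx"

[[outputs.sensu]]
  ## Assumed local Sensu backend and a placeholder API key variable.
  backend_api_url = "http://127.0.0.1:8080"
  api_key = "${SENSU_API_KEY}"

  ## Hypothetical check name for metrics arriving via Kafka.
  [outputs.sensu.check]
    name = "kafka-metrics"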

Sensu

  1. Real-Time Infrastructure Monitoring: Utilize the Sensu plugin to send performance metrics from various servers and services directly to Sensu. This real-time data flow enables teams to visualize infrastructure health, track resource usage, and receive immediate alerts for any anomalies detected. By centralizing monitoring through Sensu, organizations can create a holistic view of their systems and respond swiftly to issues.

  2. Automated Incident Response Workflows: Leverage the plugin to automatically trigger incident response workflows based on the metrics events sent to Sensu. For example, if CPU usage exceeds a defined threshold, the Sensu system can be configured to alert the operations team, which can then initiate automated remediation processes, reducing downtime and maintaining system reliability. This integration allows for proactive management of system resources; a configuration sketch for this scenario follows this list.

  3. Dynamic Scaling of Resources: Use the Sensu plugin to feed metrics into an auto-scaling system that adjusts resources based on demand. By tracking metrics like request load and resource utilization, organizations can automatically scale their infrastructure up or down, ensuring optimal performance and cost efficiency without manual intervention.

  4. Centralized Logging and Monitoring: Combine the Sensu plugin with logging tools to send logs and performance metrics to a centralized monitoring system. This comprehensive approach allows teams to correlate logs with metric events, providing deeper insights into system behavior and performance, which aids in troubleshooting and performance optimization over time.
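
As a sketch of the second scenario, Telegraf's built-in CPU input could feed the Sensu output, with a handler list telling Sensu which pipeline should process, and potentially alert on, the incoming metrics; the handler and check names are hypothetical, and the threshold logic itself would live on the Sensu side:

[[inputs.cpu]]
  ## Report aggregate CPU usage across all cores.
  totalcpu = true

[[outputs.sensu]]
  ## Assumed local Sensu agent endpoint.
  agent_api_url = "http://127.0.0.1:3031"

  [outputs.sensu.check]
    name = "cpu-usage"

  ## Hand the metrics to a hypothetical Sensu handler for alerting.
  [outputs.sensu.metrics]
    handlers = ["alert-pipeline"]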

Feedback

Thank you for being part of our community! If you have any general feedback or found any bugs on these pages, we welcome and encourage your input. Please submit your feedback in the InfluxDB community Slack.

Powerful Performance, Limitless Scale

Collect, organize, and act on massive volumes of high-velocity data. Any data is more valuable when you think of it as time series data with InfluxDB, the #1 time series platform built to scale with Telegraf.

See Ways to Get Started

Related Integrations

HTTP and InfluxDB Integration

The HTTP plugin collects metrics from one or more HTTP(S) endpoints. It supports various authentication methods and configuration options for data formats.

View Integration

Kafka and InfluxDB Integration

This plugin reads messages from Kafka and allows the creation of metrics based on those messages. It supports various configurations including different Kafka settings and message processing options.

View Integration

Kinesis and InfluxDB Integration

The Kinesis plugin allows for reading metrics from AWS Kinesis streams. It supports multiple input data formats and offers checkpointing features with DynamoDB for reliable message processing.

View Integration