Kubernetes and Apache Druid Integration
Powerful performance with an easy integration, powered by Telegraf, the open source data collection agent built by InfluxData.
Powerful Performance, Limitless Scale
Collect, organize, and act on massive volumes of high-velocity data. Any data is more valuable when you think of it as time series data with InfluxDB, the #1 time series platform built to scale with Telegraf.
See Ways to Get Started
Input and output integration overview
This plugin captures metrics for Kubernetes pods and containers by communicating with the Kubelet API.
This plugin allows Telegraf to send JSON-formatted metrics to Apache Druid over HTTP, enabling real-time ingestion for analytical queries on high-volume time-series data.
Integration details
Kubernetes
The Kubernetes input plugin interfaces with the Kubelet API to gather metrics for the pods and containers running on a single host, ideally deployed as part of a DaemonSet so that one Telegraf instance runs on each node in the cluster. Because each instance collects from its locally running kubelet, the data reflects the real-time state of that node. Kubernetes is a rapidly evolving project, so the plugin tracks the versions supported by the major cloud providers, maintaining compatibility across multiple releases within a limited time span. One important caveat is series cardinality: every pod, container, and label combination creates a new series, which can burden the database, so users are advised to apply filtering techniques and retention policies to manage this load effectively. Configuration options provide flexible customization of the plugin's behavior, making it straightforward to integrate into different monitoring setups.
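To make the cardinality advice concrete, here is a minimal sketch of a filtered input. The label name and namepass list are illustrative choices, not plugin defaults:
[[inputs.kubernetes]]
url = "http://127.0.0.1:10255"
## Only label keys listed here become tags; a single high-churn label
## (a build hash, a rollout ID) can otherwise multiply series counts.
# label_include = ["app"]
## Keep just the measurements you actually dashboard. The plugin also
## emits kubernetes_pod_network, kubernetes_pod_volume, and kubernetes_node.
# namepass = ["kubernetes_pod_container", "kubernetes_system_container"]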
Apache Druid
This configuration uses Telegraf’s HTTP output plugin with the json data format to send metrics directly to Apache Druid, a real-time analytics database designed for fast, ad hoc queries on high-ingest time-series data. Druid supports ingestion via HTTP POST to components such as the Tranquility service or its native ingestion endpoints. The JSON format is well suited to structuring Telegraf metrics as event-style records for Druid’s columnar, time-partitioned storage engine. Druid excels at powering interactive dashboards and exploratory queries across massive datasets, making it a strong choice for real-time observability and monitoring analytics when paired with Telegraf.
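As a rough sketch of what goes over the wire (the cpu metric below is just an illustration, not anything Druid requires), each Telegraf metric serialized with the json data format maps naturally onto Druid’s ingestion model:
data_format = "json"
## Each metric is then emitted as a flat JSON event, roughly:
## {"name":"cpu","tags":{"host":"node1"},"fields":{"usage_idle":92.1},"timestamp":1698765432}
## In a Druid ingestion spec, "timestamp" feeds the timestampSpec, keys under
## "tags" become dimensions, and keys under "fields" become metric columns.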
Configuration
Kubernetes
[[inputs.kubernetes]]
## URL for the kubelet, if empty read metrics from all nodes in the cluster
url = "http://127.0.0.1:10255"
## Use bearer token for authorization. ('bearer_token' takes priority)
## If both of these are empty, we'll use the default serviceaccount:
## at: /var/run/secrets/kubernetes.io/serviceaccount/token
##
## To re-read the token at each interval, please use a file with the
## bearer_token option. If given a string, Telegraf will always use that
## token.
# bearer_token = "/var/run/secrets/kubernetes.io/serviceaccount/token"
## OR
# bearer_token_string = "abc_123"
## Kubernetes Node Metric Name
## The default Kubernetes node metric name (i.e. kubernetes_node) is the same
## for the kubernetes and kube_inventory plugins. To avoid conflicts, set this
## option to a different value.
# node_metric_name = "kubernetes_node"
## Pod labels to be added as tags. An empty array for both include and
## exclude will include all labels.
# label_include = []
# label_exclude = ["*"]
## Set response_timeout (default 5 seconds)
# response_timeout = "5s"
## Optional TLS Config
# tls_ca = "/path/to/cafile"
# tls_cert = "/path/to/certfile"
# tls_key = "/path/to/keyfile"
## Use TLS but skip chain & host verification
# insecure_skip_verify = false
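Note that many current clusters disable the kubelet’s read-only port (10255) shown above. A hedged sketch for targeting the authenticated kubelet port instead, assuming a NODE_IP environment variable injected into the Telegraf pod via the downward API:
[[inputs.kubernetes]]
## Authenticated kubelet endpoint (HTTPS, port 10250)
url = "https://$NODE_IP:10250"
bearer_token = "/var/run/secrets/kubernetes.io/serviceaccount/token"
## Prefer trusting the cluster CA over skipping verification
tls_ca = "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
# insecure_skip_verify = true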
Apache Druid
[[outputs.http]]
## Druid ingestion endpoint (e.g., Tranquility, HTTP Ingest, or Kafka REST Proxy)
url = "http://druid-ingest.example.com/v1/post"
## Use POST method to send events
method = "POST"
## Data format for Druid ingestion (expects JSON format)
data_format = "json"
## Optional headers (may vary depending on Druid setup)
# [outputs.http.headers]
# Content-Type = "application/json"
# Authorization = "Bearer YOUR_API_TOKEN"
## Optional timeout and TLS settings
timeout = "10s"
# tls_ca = "/path/to/ca.pem"
# tls_cert = "/path/to/cert.pem"
# tls_key = "/path/to/key.pem"
# insecure_skip_verify = false
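Putting the two halves together, a minimal end-to-end sketch might look like the following. The endpoint is a placeholder, and json_timestamp_units is worth noting: Telegraf’s json serializer defaults to second-precision epoch timestamps, while Druid ingestion specs commonly expect milliseconds.
[agent]
interval = "10s"
[[inputs.kubernetes]]
url = "http://127.0.0.1:10255"
[[outputs.http]]
## Placeholder endpoint; substitute your Druid ingestion URL
url = "http://druid-ingest.example.com/v1/post"
method = "POST"
data_format = "json"
## Emit millisecond epochs so Druid's timestampSpec can parse them directly
json_timestamp_units = "1ms"
[outputs.http.headers]
Content-Type = "application/json"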
Input and output integration examples
Kubernetes
- Dynamic Resource Allocation Monitoring: By utilizing the Kubernetes plugin, teams can set up alerts for resource usage patterns across various pods and containers. This proactive monitoring approach enables automatic scaling of resources in response to specific thresholds, helping to optimize performance while minimizing costs during peak usage.
- Multi-tenancy Resource Isolation Analysis: Organizations using Kubernetes can leverage this plugin to track resource consumption per namespace. In a multi-tenant scenario, understanding resource allocation and usage across different teams is critical for ensuring fair access and performance guarantees, leading to better resource management strategies (see the routing sketch after this list).
- Real-time Health Dashboards: Integrate the data captured by the Kubernetes plugin into visualization tools like Grafana to create real-time dashboards. These dashboards provide insight into the overall health and performance of the Kubernetes environment, allowing teams to quickly identify and rectify issues across clusters, pods, and containers.
- Automated Incident Response Workflows: By combining the Kubernetes plugin with alert management systems, teams can automate incident response procedures based on real-time metrics. If a pod’s resource usage exceeds predefined limits, an automated workflow can trigger remediation actions, such as restarting the pod or reallocating resources, all of which help improve system resilience.
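For the multi-tenancy case above, a hedged sketch of per-namespace routing using Telegraf’s standard tagpass filter (the endpoint and the namespace pattern are illustrative assumptions):
[[outputs.http]]
url = "http://druid-ingest.example.com/v1/post"
method = "POST"
data_format = "json"
## Only metrics whose namespace tag matches pass through this output;
## the kubernetes input tags pod metrics with their namespace.
[outputs.http.tagpass]
namespace = ["team-a-*"]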
Apache Druid
- Real-Time Application Monitoring Dashboard: Use Telegraf to collect metrics from application servers and send them to Druid for immediate analysis and visualization in dashboards. Druid’s low-latency querying allows users to interactively explore system behavior in near real time.
- Security Event Aggregation: Aggregate and forward security-related metrics such as failed logins, port scans, or process anomalies to Druid. Analysts can build dashboards to monitor threat patterns and investigate incidents with millisecond-level granularity.
- IoT Device Analytics: Collect telemetry from edge devices via Telegraf and send it to Druid for fast, scalable processing. Druid’s time-partitioned storage and roll-up capabilities are ideal for handling billions of small JSON events from sensors or gateways.
- Web Traffic Behavior Exploration: Use Telegraf to capture web server metrics (e.g., requests per second, latency, error rates) and forward them to Druid, enabling teams to drill down into user behavior by region, device, or request type with subsecond query performance (a minimal sketch follows this list).
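For the web traffic example, a minimal sketch pairing the nginx input plugin with the same Druid output. The stub_status path depends on your server configuration, and the Druid URL remains a placeholder:
[[inputs.nginx]]
## nginx stub_status endpoint; exposes request and connection counters
urls = ["http://localhost/nginx_status"]
[[outputs.http]]
url = "http://druid-ingest.example.com/v1/post"
method = "POST"
data_format = "json"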
Feedback
Thank you for being part of our community! If you have any general feedback or found any bugs on these pages, we welcome and encourage your input. Please submit your feedback in the InfluxDB community Slack.
Related Integrations
HTTP and InfluxDB Integration
The HTTP plugin collects metrics from one or more HTTP(S) endpoints. It supports various authentication methods and configuration options for data formats.
View Integration
Kafka and InfluxDB Integration
This plugin reads messages from Kafka and allows the creation of metrics based on those messages. It supports various configurations including different Kafka settings and message processing options.
View Integration
Kinesis and InfluxDB Integration
The Kinesis plugin allows for reading metrics from AWS Kinesis streams. It supports multiple input data formats and offers checkpointing features with DynamoDB for reliable message processing.
View Integration