StatsD and M3DB Integration
Powerful performance with an easy integration, powered by Telegraf, the open source data connector built by InfluxData.
5B+ Telegraf downloads · #1 time series database (source: DB-Engines) · 1B+ downloads of InfluxDB · 2,800+ contributors
Powerful Performance, Limitless Scale
Collect, organize, and act on massive volumes of high-velocity data. Any data is more valuable when you think of it as time series data with InfluxDB, the #1 time series platform built to scale with Telegraf.
See Ways to Get Started
Input and output integration overview
The StatsD input plugin captures metrics by running a StatsD listener service in the background, allowing for comprehensive performance monitoring and metric aggregation.
This plugin allows Telegraf to stream metrics to M3DB using the Prometheus Remote Write protocol, enabling scalable ingestion through the M3 Coordinator.
Integration details
StatsD
The StatsD input plugin is designed to gather metrics by running a backgrounded StatsD listener service while Telegraf is active. It follows the StatsD message format established by the original Etsy implementation, which allows for various metric types including gauges, counters, sets, timings, histograms, and distributions. The plugin also parses tags and extends the standard protocol with features that accommodate InfluxDB’s tagging system. It handles messages sent over UDP or TCP, manages multiple metric types effectively, and offers advanced configuration options for metric handling, such as percentile calculation and data transformation templates. This flexibility empowers users to track application performance comprehensively, making it an essential tool for robust monitoring setups.
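To make the wire format concrete, the short Python sketch below sends a few example datagrams over UDP to the listener configured in the next section. The address, metric names, and tag values are illustrative assumptions, not part of the plugin itself; the comma-separated key=value tags are Telegraf’s extension to the original Etsy protocol, and the DogStatsD-style |# tags in the last line are only parsed when datadog_extensions is enabled.

# Minimal sketch: send StatsD datagrams to a Telegraf statsd listener over UDP.
# The address, metric names, and tags below are illustrative placeholders.
import socket

STATSD_ADDR = ("127.0.0.1", 8125)  # matches service_address = ":8125"
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send(datagram: str) -> None:
    # StatsD is fire-and-forget: no response is expected from the listener.
    sock.sendto(datagram.encode("utf-8"), STATSD_ADDR)

send("page_views:1|c")                                    # counter
send("request_latency:215|ms")                            # timing, aggregated into percentiles
send("users_online,region=us-west,service=payroll:42|g")  # gauge with Telegraf-style tags
send("checkout.errors:1|c|#env:prod,service:cart")        # DogStatsD tags (datadog_extensions = true)

With the delete_* options left at their defaults (true), the aggregates Telegraf reports each flush interval reflect only the datagrams received during that interval.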
M3DB
This configuration uses Telegraf’s HTTP output plugin with the prometheusremotewrite data format to send metrics directly to M3DB through the M3 Coordinator. M3DB is a distributed time series database designed for scalable, high-throughput metric storage. It supports ingestion of Prometheus remote write data via its Coordinator component, which manages translation and routing into the M3DB cluster. This approach enables organizations to collect metrics from systems that aren’t natively instrumented for Prometheus (e.g., Windows, SNMP, legacy systems) and ingest them efficiently into M3’s long-term, high-performance storage engine. The setup is ideal for high-scale observability stacks with Prometheus compatibility requirements.
Configuration
StatsD
[[inputs.statsd]]
## Protocol, must be "tcp", "udp4", "udp6" or "udp" (default=udp)
protocol = "udp"
## MaxTCPConnection - applicable when protocol is set to tcp (default=250)
max_tcp_connections = 250
## Enable TCP keep alive probes (default=false)
tcp_keep_alive = false
## Specifies the keep-alive period for an active network connection.
## Only applies to TCP sockets and will be ignored if tcp_keep_alive is false.
## Defaults to the OS configuration.
# tcp_keep_alive_period = "2h"
## Address and port to host UDP listener on
service_address = ":8125"
## The following configuration options control when telegraf clears its cache
## of previous values. If set to false, then telegraf will only clear its
## cache when the daemon is restarted.
## Reset gauges every interval (default=true)
delete_gauges = true
## Reset counters every interval (default=true)
delete_counters = true
## Reset sets every interval (default=true)
delete_sets = true
## Reset timings & histograms every interval (default=true)
delete_timings = true
## Enable aggregation temporality, which adds a temporality=delta or temporality=cumulative tag
## and a start_time field containing the start time of the metric accumulation.
## You should use this when using OpenTelemetry output.
# enable_aggregation_temporality = false
## Percentiles to calculate for timing & histogram stats.
percentiles = [50.0, 90.0, 99.0, 99.9, 99.95, 100.0]
## separator to use between elements of a statsd metric
metric_separator = "_"
## Parses tags in the datadog statsd format
## http://docs.datadoghq.com/guides/dogstatsd/
## deprecated in 1.10; use datadog_extensions option instead
parse_data_dog_tags = false
## Parses extensions to statsd in the datadog statsd format
## currently supports metrics and datadog tags.
## http://docs.datadoghq.com/guides/dogstatsd/
datadog_extensions = false
## Parses distributions metric as specified in the datadog statsd format
## https://docs.datadoghq.com/developers/metrics/types/?tab=distribution#definition
datadog_distributions = false
## Keep or drop the container id as tag. Included as optional field
## in DogStatsD protocol v1.2 if source is running in Kubernetes
## https://docs.datadoghq.com/developers/dogstatsd/datagram_shell/?tab=metrics#dogstatsd-protocol-v12
datadog_keep_container_tag = false
## Statsd data translation templates, more info can be read here:
## https://github.com/influxdata/telegraf/blob/master/docs/TEMPLATE_PATTERN.md
# templates = [
# "cpu.* measurement*"
# ]
## Number of UDP messages allowed to queue up, once filled,
## the statsd server will start dropping packets
allowed_pending_messages = 10000
## Number of worker threads used to parse the incoming messages.
# number_workers_threads = 5
## Number of timing/histogram values to track per-measurement in the
## calculation of percentiles. Raising this limit increases the accuracy
## of percentiles but also increases the memory usage and cpu time.
percentile_limit = 1000
## Maximum socket buffer size in bytes, once the buffer fills up, metrics
## will start dropping. Defaults to the OS default.
# read_buffer_size = 65535
## Max duration (TTL) for each metric to stay cached/reported without being updated.
# max_ttl = "10h"
## Sanitize name method
## By default, telegraf will pass names directly as they are received.
## However, upstream statsd now does sanitization of names which can be
## enabled by using the "upstream" method option. This option will replace
## white space with '_', replace '/' with '-', and remove characters not
## matching 'a-zA-Z_\-0-9\.;='.
# sanitize_name_method = ""
## Replace dots (.) with underscore (_) and dashes (-) with
## double underscore (__) in metric names.
# convert_names = false
## Convert all numeric counters to float
## Enabling this ensures that both counters and gauges are emitted
## as floats.
# float_counters = false
M3DB
# Configuration for sending metrics to M3
[[outputs.http]]
## URL is the address to send metrics to
url = "https://M3_HOST:M3_PORT/api/v1/prom/remote/write"
## HTTP Basic Auth credentials
username = "admin"
password = "password"
## Data format to output.
data_format = "prometheusremotewrite"
## Outgoing HTTP headers
[outputs.http.headers]
Content-Type = "application/x-protobuf"
Content-Encoding = "snappy"
X-Prometheus-Remote-Write-Version = "0.1.0"
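Once Telegraf is flushing, one way to confirm that samples are reaching the cluster is to run an instant PromQL query against the M3 Coordinator’s Prometheus-compatible read API. The Python sketch below is a minimal check under a few assumptions: the Coordinator’s default query port (7201), the same M3_HOST placeholder used above, and a metric name (users_online) that your pipeline actually produces; adjust all three for your deployment.

# Minimal verification sketch: run an instant PromQL query against the M3 Coordinator.
# M3_HOST, the 7201 query port, and the metric name are assumptions; adjust for your cluster.
import json
import urllib.parse
import urllib.request

M3_QUERY_URL = "http://M3_HOST:7201/api/v1/query"
params = urllib.parse.urlencode({"query": "users_online"})

with urllib.request.urlopen(f"{M3_QUERY_URL}?{params}") as resp:
    body = json.load(resp)

# The response mirrors the Prometheus HTTP API shape:
# {"status": "success", "data": {"resultType": "vector", "result": [...]}}
for series in body.get("data", {}).get("result", []):
    print(series["metric"], series["value"])

An empty result list usually points at the write path rather than the query, so checking Telegraf’s logs for HTTP output errors is a reasonable first step.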
Input and output integration examples
StatsD
- Real-time Application Performance Monitoring: Utilize the StatsD input plugin to monitor application performance metrics in real-time. By configuring your application to send various metrics to a StatsD server, teams can leverage this plugin to analyze performance bottlenecks, track user activity, and ensure resource optimization dynamically. The combination of historical and real-time metrics allows for proactive troubleshooting and enhances the responsiveness of issue resolution processes.
- Tracking User Engagement Metrics in Web Applications: Use the StatsD plugin to gather user engagement statistics, such as page views, click events, and interaction times. By sending these metrics to the StatsD server, businesses can derive valuable insights into user behavior, enabling them to make data-driven decisions to improve user experience and interface design based on quantitative feedback. This can significantly enhance the effectiveness of marketing strategies and product development efforts.
- Infrastructure Health Monitoring: Deploy the StatsD plugin to monitor the health of your server infrastructure by tracking metrics such as resource utilization, server response times, and network performance. With this setup, DevOps teams can gain detailed visibility into system performance, effectively anticipating issues before they escalate. This enables a proactive approach to infrastructure management, minimizing downtimes and ensuring optimal service delivery.
- Creating Comprehensive Service Dashboards: Integrate StatsD with visualization tools to create comprehensive dashboards that reflect the status and health of services across an architecture. For instance, combining data from multiple services logged through StatsD can transform raw metrics into actionable insights, showcasing system performance trends over time. This capability empowers stakeholders to maintain oversight and drive decisions based on visualized data sets, enhancing overall operational transparency.
M3DB
- Large-Scale Cloud Infrastructure Monitoring: Deploy Telegraf agents across thousands of virtual machines and containers to collect metrics and stream them into M3DB through the M3 Coordinator. This provides reliable, long-term visibility with minimal storage overhead and high availability.
- Legacy System Metrics Ingestion: Use Telegraf to gather metrics from older systems that lack native Prometheus exporters (e.g., Windows servers, SNMP devices) and forward them to M3DB via remote write. This bridges modern observability workflows with legacy infrastructure.
- Centralized App Telemetry Aggregation: Collect application-specific telemetry using Telegraf’s plugin ecosystem (e.g., exec, http, jolokia) and push it into M3DB for centralized storage and query via PromQL. This enables unified analytics across diverse data sources.
- Hybrid Cloud Observability: Install Telegraf agents on-prem and in the cloud to collect and remote-write metrics into a centralized M3DB cluster. This ensures consistent visibility across environments while avoiding the complexity of running Prometheus federation layers.
Feedback
Thank you for being part of our community! If you have any general feedback or found any bugs on these pages, we welcome and encourage your input. Please submit your feedback in the InfluxDB community Slack.
Related Integrations
HTTP and InfluxDB Integration
The HTTP plugin collects metrics from one or more HTTP(S) endpoints. It supports various authentication methods and configuration options for data formats.
View Integration
Kafka and InfluxDB Integration
This plugin reads messages from Kafka and allows the creation of metrics based on those messages. It supports various configurations including different Kafka settings and message processing options.
View Integration
Kinesis and InfluxDB Integration
The Kinesis plugin allows for reading metrics from AWS Kinesis streams. It supports multiple input data formats and offers checkpointing features with DynamoDB for reliable message processing.
View Integration