StatsD and Google Cloud Monitoring Integration
Powerful performance with an easy integration, powered by Telegraf, the open source data connector built by InfluxData.
5B+ Telegraf downloads
#1 Time series database (Source: DB Engines)
1B+ Downloads of InfluxDB
2,800+ Contributors
Powerful Performance, Limitless Scale
Collect, organize, and act on massive volumes of high-velocity data. Any data is more valuable when you think of it as time series data with InfluxDB, the #1 time series platform built to scale with Telegraf.
See Ways to Get Started
Input and output integration overview
The StatsD input plugin captures metrics from a StatsD server by running a listener service in the background, allowing for comprehensive performance monitoring and metric aggregation.
The Stackdriver plugin allows users to send metrics directly to a specified project in Google Cloud Monitoring, facilitating robust monitoring capabilities across their cloud resources.
Integration details
StatsD
The StatsD input plugin gathers metrics by running a backgrounded StatsD listener service while Telegraf is active. It accepts messages in the format established by the original Etsy implementation, covering gauges, counters, sets, timings, histograms, and distributions. The plugin also parses tags, extending the standard protocol to accommodate InfluxDB's tagging system. It can receive messages over UDP or TCP, manage many metrics concurrently, and offers advanced configuration options such as percentile calculation and data transformation templates. This flexibility lets users track application performance comprehensively, making it an essential tool for robust monitoring setups.
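StatsD metrics arrive as plain-text datagrams, one metric per line. The lines below are illustrative only (the metric names are made up): the first four use the original Etsy format for a counter, a timing, a gauge, and a set, while the last uses Telegraf's comma-separated extension to attach InfluxDB-style tags.
users.logins:1|c
api.request_time:320|ms
queue.depth:42|g
visitors.unique:1001|s
current.users,service=payroll,region=us-west:32|g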
Google Cloud Monitoring
This plugin writes metrics to a project in Google Cloud Monitoring, which used to be known as Stackdriver. Authentication is a prerequisite and can be achieved via service accounts or user credentials. The plugin is designed to group metrics by a namespace variable and metric key, facilitating organized data management. However, users are encouraged to use the official naming format for enhanced query efficiency. The plugin supports additional configurations for managing metric representation and allows tags to be treated as resource labels. Notably, it imposes certain restrictions on the data it can accept, such as not allowing string values or points that are out of chronological order.
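For the service-account route, a common approach (assumed here, not mandated by the plugin) is to rely on Google Application Default Credentials by pointing the standard GOOGLE_APPLICATION_CREDENTIALS environment variable at a key file before starting Telegraf; the file path below is hypothetical.
export GOOGLE_APPLICATION_CREDENTIALS="/etc/telegraf/gcp-service-account.json"
telegraf --config /etc/telegraf/telegraf.conf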
Configuration
StatsD
[[inputs.statsd]]
## Protocol, must be "tcp", "udp4", "udp6" or "udp" (default=udp)
protocol = "udp"
## MaxTCPConnection - applicable when protocol is set to tcp (default=250)
max_tcp_connections = 250
## Enable TCP keep alive probes (default=false)
tcp_keep_alive = false
## Specifies the keep-alive period for an active network connection.
## Only applies to TCP sockets and will be ignored if tcp_keep_alive is false.
## Defaults to the OS configuration.
# tcp_keep_alive_period = "2h"
## Address and port to host UDP listener on
service_address = ":8125"
## The following configuration options control when telegraf clears its cache
## of previous values. If set to false, then telegraf will only clear its
## cache when the daemon is restarted.
## Reset gauges every interval (default=true)
delete_gauges = true
## Reset counters every interval (default=true)
delete_counters = true
## Reset sets every interval (default=true)
delete_sets = true
## Reset timings & histograms every interval (default=true)
delete_timings = true
## Enable aggregation temporality, which adds a temporality=delta or temporality=cumulative tag
## and a start_time field containing the start time of the metric accumulation.
## You should use this when using the OpenTelemetry output.
# enable_aggregation_temporality = false
## Percentiles to calculate for timing & histogram stats.
percentiles = [50.0, 90.0, 99.0, 99.9, 99.95, 100.0]
## separator to use between elements of a statsd metric
metric_separator = "_"
## Parses tags in the datadog statsd format
## http://docs.datadoghq.com/guides/dogstatsd/
## deprecated in 1.10; use datadog_extensions option instead
parse_data_dog_tags = false
## Parses extensions to statsd in the datadog statsd format
## currently supports metrics and datadog tags.
## http://docs.datadoghq.com/guides/dogstatsd/
datadog_extensions = false
## Parses distributions metric as specified in the datadog statsd format
## https://docs.datadoghq.com/developers/metrics/types/?tab=distribution#definition
datadog_distributions = false
## Keep or drop the container id as tag. Included as optional field
## in DogStatsD protocol v1.2 if source is running in Kubernetes
## https://docs.datadoghq.com/developers/dogstatsd/datagram_shell/?tab=metrics#dogstatsd-protocol-v12
datadog_keep_container_tag = false
## Statsd data translation templates, more info can be read here:
## https://github.com/influxdata/telegraf/blob/master/docs/TEMPLATE_PATTERN.md
# templates = [
# "cpu.* measurement*"
# ]
## Number of UDP messages allowed to queue up, once filled,
## the statsd server will start dropping packets
allowed_pending_messages = 10000
## Number of worker threads used to parse the incoming messages.
# number_workers_threads = 5
## Number of timing/histogram values to track per-measurement in the
## calculation of percentiles. Raising this limit increases the accuracy
## of percentiles but also increases the memory usage and cpu time.
percentile_limit = 1000
## Maximum socket buffer size in bytes, once the buffer fills up, metrics
## will start dropping. Defaults to the OS default.
# read_buffer_size = 65535
## Max duration (TTL) for each metric to stay cached/reported without being updated.
# max_ttl = "10h"
## Sanitize name method
## By default, telegraf will pass names directly as they are received.
## However, upstream statsd now does sanitization of names which can be
## enabled by using the "upstream" method option. This option will replace
## white space with '_', replace '/' with '-', and remove characters not
## matching 'a-zA-Z_\-0-9\.;='.
# sanitize_name_method = ""
## Replace dots (.) with underscore (_) and dashes (-) with
## double underscore (__) in metric names.
# convert_names = false
## Convert all numeric counters to float
## Enabling this ensures that both counters and gauges are emitted as floats.
# float_counters = false
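With the listener above running on :8125, one quick way to verify it is receiving data is to push a test datagram over UDP, for example with netcat (exact flags vary between netcat variants, and the metric name is made up):
echo -n "deploys.test.myservice:1|c" | nc -u -w1 localhost 8125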
Google Cloud Monitoring
[[outputs.stackdriver]]
## GCP Project
project = "project-id"
## Quota Project
## Specifies the Google Cloud project that should be billed for metric ingestion.
## If omitted, the quota is charged to the service account’s default project.
## This is useful when sending metrics to multiple projects using a single service account.
## The caller must have the `serviceusage.services.use` permission on the specified project.
# quota_project = ""
## The namespace for the metric descriptor
## This is optional and users are encouraged to set the namespace as a
## resource label instead. If omitted it is not included in the metric name.
namespace = "telegraf"
## Metric Type Prefix
## The DNS name used with the metric type as a prefix.
# metric_type_prefix = "custom.googleapis.com"
## Metric Name Format
## Specifies the layout of the metric name, choose from:
## * path: 'metric_type_prefix_namespace_name_key'
## * official: 'metric_type_prefix/namespace_name_key/kind'
# metric_name_format = "path"
## Metric Data Type
## By default, telegraf will use whatever type the metric comes in as.
## However, for some use cases, forcing a specific data type may be preferred for values:
## * source: use whatever was passed in
## * double: preferred datatype to allow queries by PromQL.
# metric_data_type = "source"
## Tags as resource labels
## Tags defined in this option, when they exist, are added as a resource
## label and not included as a metric label. The values from tags override
## the values defined under the resource_labels config options.
# tags_as_resource_label = []
## Custom resource type
# resource_type = "generic_node"
## Override metric type by metric name
## Metric names matching the values here, globbing supported, will have the
## metric type set to the corresponding type.
# metric_counter = []
# metric_gauge = []
# metric_histogram = []
## NOTE: Due to the way TOML is parsed, tables must be at the END of the
## plugin definition, otherwise additional config options are read as part of
## the table
## Additional resource labels
# [outputs.stackdriver.resource_labels]
# node_id = "$HOSTNAME"
# namespace = "myapp"
# location = "eu-north0"
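Putting the two halves together, a minimal sketch of a Telegraf pipeline that listens for StatsD traffic and forwards it to Google Cloud Monitoring might look like the following; the project ID is a placeholder, and only options already shown above are used.
[[inputs.statsd]]
  protocol = "udp"
  service_address = ":8125"
  percentiles = [50.0, 90.0, 99.0]

[[outputs.stackdriver]]
  ## Replace with your GCP project ID
  project = "my-gcp-project"
  namespace = "telegraf"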
Input and output integration examples
StatsD
- Real-time Application Performance Monitoring: Utilize the StatsD input plugin to monitor application performance metrics in real time. By configuring your application to send various metrics to a StatsD server, teams can leverage this plugin to analyze performance bottlenecks, track user activity, and ensure resource optimization dynamically. The combination of historical and real-time metrics allows for proactive troubleshooting and enhances the responsiveness of issue resolution processes.
- Tracking User Engagement Metrics in Web Applications: Use the StatsD plugin to gather user engagement statistics, such as page views, click events, and interaction times (a configuration sketch follows this list). By sending these metrics to the StatsD server, businesses can derive valuable insights into user behavior, enabling them to make data-driven decisions to improve user experience and interface design based on quantitative feedback. This can significantly enhance the effectiveness of marketing strategies and product development efforts.
- Infrastructure Health Monitoring: Deploy the StatsD plugin to monitor the health of your server infrastructure by tracking metrics such as resource utilization, server response times, and network performance. With this setup, DevOps teams can gain detailed visibility into system performance, effectively anticipating issues before they escalate. This enables a proactive approach to infrastructure management, minimizing downtime and ensuring optimal service delivery.
- Creating Comprehensive Service Dashboards: Integrate StatsD with visualization tools to create comprehensive dashboards that reflect the status and health of services across an architecture. For instance, combining data from multiple services logged through StatsD can transform raw metrics into actionable insights, showcasing system performance trends over time. This capability empowers stakeholders to maintain oversight and drive decisions based on visualized data sets, enhancing overall operational transparency.
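As a sketch of the engagement-metrics scenario above, enabling the plugin's Datadog extensions lets applications emit DogStatsD-style tagged datagrams; the metric and tag names below are invented for illustration.
[[inputs.statsd]]
  service_address = ":8125"
  datadog_extensions = true
An application could then send datagrams such as the following, with the #-prefixed tags becoming tags on the resulting metrics:
page.views:1|c|#page:home,campaign:spring
click.events:1|c|#page:pricing,button:signup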
Google Cloud Monitoring
- Multi-Project Metric Aggregation: Use this plugin to send aggregated metrics from various applications across different projects into a single Google Cloud Monitoring project. This use case helps centralize metrics for teams managing multiple applications, providing a unified view for performance monitoring and enhancing decision-making. By configuring different quota projects for billing, organizations can ensure proper cost management while benefiting from a consolidated monitoring strategy.
- Anomaly Detection Setup: Integrate the plugin with a machine learning-based analytics tool that identifies anomalies in the collected metrics. Using the historical data provided by the plugin, the tool can learn normal baseline behavior and promptly alert the operations team when unusual patterns arise, enabling proactive troubleshooting and minimizing service disruptions.
- Dynamic Resource Labeling: Implement dynamic tagging by utilizing the tags_as_resource_label option to adaptively attach resource labels based on runtime conditions (see the sketch after this list). This setup allows metrics to provide context-sensitive information, such as varying environmental parameters or operational states, enhancing the granularity of monitoring and reporting without changing the fundamental metric structure.
- Custom Metric Visualization Dashboards: Leverage the data collected by the Google Cloud Monitoring output plugin to feed a custom metrics visualization dashboard using a third-party framework. By visualizing metrics in real time, teams can achieve better situational awareness, notably by correlating different metrics, improving operational decision-making, and streamlining performance management workflows.
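As a sketch of the dynamic resource labeling idea above (tag names and label values are hypothetical), the tags_as_resource_label option can promote selected tags to resource labels, with static fallbacks kept under resource_labels:
[[outputs.stackdriver]]
  project = "my-gcp-project"
  namespace = "telegraf"
  resource_type = "generic_node"
  tags_as_resource_label = ["node_id", "location"]
  ## Fallback values, overridden by matching tags when present
  [outputs.stackdriver.resource_labels]
    node_id = "unknown"
    location = "eu-north0"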
Feedback
Thank you for being part of our community! If you have any general feedback or found any bugs on these pages, we welcome and encourage your input. Please submit your feedback in the InfluxDB community Slack.
Related Integrations
HTTP and InfluxDB Integration
The HTTP plugin collects metrics from one or more HTTP(S) endpoints. It supports various authentication methods and configuration options for data formats.
View Integration
Kafka and InfluxDB Integration
This plugin reads messages from Kafka and allows the creation of metrics based on those messages. It supports various configurations including different Kafka settings and message processing options.
View Integration
Kinesis and InfluxDB Integration
The Kinesis plugin allows for reading metrics from AWS Kinesis streams. It supports multiple input data formats and offers checkpointing features with DynamoDB for reliable message processing.
View Integration