Jenkins and M3DB Integration
Powerful performance with an easy integration, powered by Telegraf, the open source data connector built by InfluxData.
5B+ Telegraf downloads
#1 time series database (Source: DB-Engines)
1B+ downloads of InfluxDB
2,800+ contributors
Powerful Performance, Limitless Scale
Collect, organize, and act on massive volumes of high-velocity data. Any data is more valuable when you think of it as time series data with InfluxDB, the #1 time series platform built to scale with Telegraf.
See Ways to Get Started
Input and output integration overview
The Jenkins plugin collects vital information regarding jobs and nodes from a Jenkins instance through its API, facilitating comprehensive monitoring and analysis.
This plugin allows Telegraf to stream metrics to M3DB using the Prometheus Remote Write protocol, enabling scalable ingestion through the M3 Coordinator.
Integration details
Jenkins
The Jenkins Telegraf plugin allows users to gather metrics from a Jenkins instance without needing to install any additional plugins on Jenkins itself. By utilizing the Jenkins API, the plugin retrieves information about nodes and jobs running in the Jenkins environment. This integration provides a comprehensive overview of the Jenkins infrastructure, including real-time metrics that can be used for monitoring and analysis. Key features include configurable filters for job and node selection, optional TLS security settings, and the ability to manage request timeouts and connection limits effectively. This makes it an essential tool for teams that rely on Jenkins for continuous integration and delivery, ensuring they have the insights they need to maintain optimal performance and reliability.
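As a rough illustration of what the plugin collects, the lines below show InfluxDB line protocol for the jenkins, jenkins_node, and jenkins_job measurements. The host names, tag values, and exact field set are placeholders; they will vary with your Jenkins setup and plugin version.
jenkins,port=8080,source=my-jenkins-instance busy_executors=4i,total_executors=8i 1735000000000000000
jenkins_node,node_name=worker-1,status=online,port=8080,source=my-jenkins-instance num_executors=2i,response_time=120i,memory_available=2147483648 1735000000000000000
jenkins_job,name=backend-build,result=SUCCESS,port=8080,source=my-jenkins-instance duration=184000i,result_code=0i 1735000000000000000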
M3DB
This configuration uses Telegraf’s HTTP output plugin with the prometheusremotewrite format to send metrics directly to M3DB through the M3 Coordinator. M3DB is a distributed time series database designed for scalable, high-throughput metric storage. It supports ingestion of Prometheus remote write data via its Coordinator component, which manages translation and routing into the M3DB cluster. This approach enables organizations to collect metrics from systems that aren’t natively instrumented for Prometheus (e.g., Windows, SNMP, legacy systems) and ingest them efficiently into M3’s long-term, high-performance storage engine. The setup is ideal for high-scale observability stacks with Prometheus compatibility requirements.
Configuration
Jenkins
[[inputs.jenkins]]
## The Jenkins URL in the format "scheme://host:port"
url = "http://my-jenkins-instance:8080"
# username = "admin"
# password = "admin"
## Set response_timeout
response_timeout = "5s"
## Optional TLS Config
# tls_ca = "/etc/telegraf/ca.pem"
# tls_cert = "/etc/telegraf/cert.pem"
# tls_key = "/etc/telegraf/key.pem"
## Use SSL but skip chain & host verification
# insecure_skip_verify = false
## Optional Max Job Build Age filter
## Default 1 hour, ignore builds older than max_build_age
# max_build_age = "1h"
## Optional Sub Job Depth filter
## Jenkins can have unlimited layers of sub jobs
## This config limits the layers pulled; the default value 0 means
## unlimited pulling until there are no more sub jobs
# max_subjob_depth = 0
## Optional Sub Job Per Layer
## In workflow-multibranch-plugin, each branch will be created as a sub job.
## This config will limit the calls to only the latest branches in each layer;
## empty will use the default value of 10
# max_subjob_per_layer = 10
## Jobs to include or exclude from gathering
## When using both lists, job_exclude has priority.
## Wildcards are supported: [ "jobA/*", "jobB/subjob1/*"]
# job_include = [ "*" ]
# job_exclude = [ ]
## Nodes to include or exclude from gathering
## When using both lists, node_exclude has priority.
# node_include = [ "*" ]
# node_exclude = [ ]
## Worker pool for the jenkins plugin only
## Leaving this field empty will use the default value of 5
# max_connections = 5
## When set to true, node labels are added as a comma-separated tag. If none
## are found, a tag with the value 'none' is used. Finally, if a label
## contains a comma, it is replaced with an underscore.
# node_labels_as_tag = false
M3DB
# Configuration for sending metrics to M3
[[outputs.http]]
## URL is the address to send metrics to
url = "https://M3_HOST:M3_PORT/api/v1/prom/remote/write"
## HTTP Basic Auth credentials
username = "admin"
password = "password"
## Data format to output.
data_format = "prometheusremotewrite"
## Outgoing HTTP headers
[outputs.http.headers]
Content-Type = "application/x-protobuf"
Content-Encoding = "snappy"
X-Prometheus-Remote-Write-Version = "0.1.0"
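For orientation, the two pieces above can be combined in a single telegraf.conf. The sketch below is a minimal, hedged starting point: the agent intervals, hostnames, and credentials are placeholders to replace with your own values.
# Minimal sketch: poll Jenkins and remote-write the results to M3DB via the M3 Coordinator
[agent]
## Placeholder collection and flush cadence
interval = "30s"
flush_interval = "30s"
[[inputs.jenkins]]
## Placeholder Jenkins URL and timeout (see the full option list above)
url = "http://my-jenkins-instance:8080"
response_timeout = "5s"
[[outputs.http]]
## Placeholder M3 Coordinator remote write endpoint and credentials
url = "https://M3_HOST:M3_PORT/api/v1/prom/remote/write"
username = "admin"
password = "password"
data_format = "prometheusremotewrite"
[outputs.http.headers]
Content-Type = "application/x-protobuf"
Content-Encoding = "snappy"
X-Prometheus-Remote-Write-Version = "0.1.0"
With the prometheusremotewrite format, each Telegraf field is typically exposed as a Prometheus series named measurement_field (for example, jenkins_job_duration), which is how these metrics appear when queried through M3’s PromQL-compatible endpoint.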
Input and output integration examples
Jenkins
- Continuous Integration Monitoring: Use the Jenkins plugin to monitor the performance of continuous integration pipelines by collecting metrics on job durations and failure rates. This can help teams identify bottlenecks in the pipeline and improve overall build efficiency.
- Resource Allocation Analysis: Leverage Jenkins node metrics to assess resource usage across different agents. By understanding how resources are allocated, teams can optimize their Jenkins architecture, potentially reallocating agents or adjusting job configurations for better performance.
- Job Execution Trends: Analyze historical job performance metrics to identify trends in job execution over time. With this data, teams can proactively address potential issues before they grow, making adjustments to the jobs or their configurations as needed.
- Alerting for Job Failures: Implement alerts that leverage the Jenkins job metrics to notify team members in case of job failures. This proactive approach can enhance operational awareness and speed up response times to failures, ensuring that critical jobs are monitored effectively (a focused configuration sketch follows this list).
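As a sketch of the monitoring and alerting scenarios above, the filter options below (with hypothetical job names) narrow collection to the pipelines that matter and to recent builds only:
[[inputs.jenkins]]
url = "http://my-jenkins-instance:8080"
## Ignore builds older than 15 minutes so alerts react to fresh failures
max_build_age = "15m"
## Hypothetical critical pipelines; wildcards cover their sub jobs
job_include = [ "backend-build/*", "deploy-production/*" ]
job_exclude = [ "*/sandbox" ]
## Tag node metrics with their labels for per-agent resource analysis
node_labels_as_tag = true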
M3DB
- Large-Scale Cloud Infrastructure Monitoring: Deploy Telegraf agents across thousands of virtual machines and containers to collect metrics and stream them into M3DB through the M3 Coordinator. This provides reliable, long-term visibility with minimal storage overhead and high availability.
- Legacy System Metrics Ingestion: Use Telegraf to gather metrics from older systems that lack native Prometheus exporters (e.g., Windows servers, SNMP devices) and forward them to M3DB via remote write. This bridges modern observability workflows with legacy infrastructure.
- Centralized App Telemetry Aggregation: Collect application-specific telemetry using Telegraf’s plugin ecosystem (e.g., exec, http, jolokia) and push it into M3DB for centralized storage and query via PromQL. This enables unified analytics across diverse data sources (see the sketch after this list).
- Hybrid Cloud Observability: Install Telegraf agents on-prem and in the cloud to collect and remote-write metrics into a centralized M3DB cluster. This ensures consistent visibility across environments while avoiding the complexity of running Prometheus federation layers.
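As a sketch of the telemetry-aggregation scenario, several inputs can share the same remote-write output. The exec command and HTTP endpoint below are hypothetical placeholders; only the output section mirrors the documented M3DB configuration.
[[inputs.exec]]
## Hypothetical script that emits metrics in InfluxDB line protocol
commands = ["/usr/local/bin/app_stats.sh"]
data_format = "influx"
[[inputs.http]]
## Hypothetical JSON status endpoint scraped by Telegraf
urls = ["http://app.internal:9090/status"]
data_format = "json"
[[outputs.http]]
url = "https://M3_HOST:M3_PORT/api/v1/prom/remote/write"
data_format = "prometheusremotewrite"
[outputs.http.headers]
Content-Type = "application/x-protobuf"
Content-Encoding = "snappy"
X-Prometheus-Remote-Write-Version = "0.1.0"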
Feedback
Thank you for being part of our community! If you have any general feedback or found any bugs on these pages, we welcome and encourage your input. Please submit your feedback in the InfluxDB community Slack.
Related Integrations
HTTP and InfluxDB Integration
The HTTP plugin collects metrics from one or more HTTP(S) endpoints. It supports various authentication methods and configuration options for data formats.
View Integration
Kafka and InfluxDB Integration
This plugin reads messages from Kafka and allows the creation of metrics based on those messages. It supports various configurations including different Kafka settings and message processing options.
View Integration
Kinesis and InfluxDB Integration
The Kinesis plugin allows for reading metrics from AWS Kinesis streams. It supports multiple input data formats and offers checkpointing features with DynamoDB for reliable message processing.
View Integration