Jenkins and Google Cloud Monitoring Integration
Powerful performance with an easy integration, powered by Telegraf, the open source data connector built by InfluxData.
Powerful Performance, Limitless Scale
Collect, organize, and act on massive volumes of high-velocity data. Any data is more valuable when you think of it as time series data with InfluxDB, the #1 time series platform built to scale with Telegraf.
See Ways to Get Started
Input and output integration overview
The Jenkins plugin collects vital information regarding jobs and nodes from a Jenkins instance through its API, facilitating comprehensive monitoring and analysis.
The Stackdriver plugin allows users to send metrics directly to a specified project in Google Cloud Monitoring, facilitating robust monitoring capabilities across their cloud resources.
Integration details
Jenkins
The Jenkins Telegraf plugin allows users to gather metrics from a Jenkins instance without needing to install any additional plugins on Jenkins itself. By utilizing the Jenkins API, the plugin retrieves information about nodes and jobs running in the Jenkins environment. This integration provides a comprehensive overview of the Jenkins infrastructure, including real-time metrics that can be used for monitoring and analysis. Key features include configurable filters for job and node selection, optional TLS security settings, and the ability to manage request timeouts and connection limits effectively. This makes it an essential tool for teams that rely on Jenkins for continuous integration and delivery, ensuring they have the insights they need to maintain optimal performance and reliability.
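As a concrete illustration of those filtering and timeout options, the sketch below narrows collection to a handful of jobs and skips stale builds. The URL, credentials, and job names are placeholders for this example; only options that appear in the reference configuration further down are used.
[[inputs.jenkins]]
## Hypothetical Jenkins instance and credentials
url = "https://ci.example.com:8080"
username = "telegraf"
password = "api-token"
response_timeout = "5s"
## Skip builds older than one hour and gather only the jobs of interest
## (job names here are placeholders)
max_build_age = "1h"
job_include = [ "deploy/*", "nightly-build" ]
job_exclude = [ "deploy/sandbox/*" ]
## Gather every agent except the built-in controller node
node_exclude = [ "master" ]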
Google Cloud Monitoring
This plugin writes metrics to a project in Google Cloud Monitoring (formerly known as Stackdriver). Authentication is a prerequisite and can be achieved via service accounts or user credentials. The plugin groups metrics by a namespace variable and metric key to keep data organized, though users are encouraged to use the official naming format for better query efficiency. It supports additional configuration for managing metric representation and allows tags to be treated as resource labels. Notably, it imposes certain restrictions on the data it can accept, such as not allowing string values or points that are out of chronological order.
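For example, a minimal output block that adopts the official naming format might look like the sketch below; the project ID is a placeholder, and the options mirror the reference configuration in the next section.
[[outputs.stackdriver]]
## Hypothetical GCP project ID
project = "my-monitoring-project"
## Keep the namespace and use the official metric name format,
## which is recommended for query efficiency
namespace = "telegraf"
metric_name_format = "official"
## Store values as doubles so they can be queried with PromQL
metric_data_type = "double"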
Configuration
Jenkins
[[inputs.jenkins]]
## The Jenkins URL in the format "schema://host:port"
url = "http://my-jenkins-instance:8080"
# username = "admin"
# password = "admin"
## Set response_timeout
response_timeout = "5s"
## Optional TLS Config
# tls_ca = "/etc/telegraf/ca.pem"
# tls_cert = "/etc/telegraf/cert.pem"
# tls_key = "/etc/telegraf/key.pem"
## Use TLS but skip chain & host verification
# insecure_skip_verify = false
## Optional Max Job Build Age filter
## Default 1 hour, ignore builds older than max_build_age
# max_build_age = "1h"
## Optional Sub Job Depth filter
## Jenkins can have an unlimited number of layers of sub jobs
## This config limits how many layers are pulled; the default value 0 means
## pulling continues until there are no more sub jobs
# max_subjob_depth = 0
## Optional Sub Job Per Layer
## In the workflow-multibranch-plugin, each branch is created as a sub job.
## This config limits collection to the latest branches in each layer;
## if empty, the default value of 10 is used
# max_subjob_per_layer = 10
## Jobs to include or exclude from gathering
## When using both lists, job_exclude has priority.
## Wildcards are supported: [ "jobA/*", "jobB/subjob1/*"]
# job_include = [ "*" ]
# job_exclude = [ ]
## Nodes to include or exclude from gathering
## When using both lists, node_exclude has priority.
# node_include = [ "*" ]
# node_exclude = [ ]
## Worker pool for the jenkins plugin only
## If empty, the default value of 5 is used
# max_connections = 5
## When set to true, node labels are added as a comma-separated tag. If none
## are found, a tag with the value of 'none' is used. If a label contains a
## comma, it is replaced with an underscore.
# node_labels_as_tag = false
Google Cloud Monitoring
[[outputs.stackdriver]]
## GCP Project
project = "project-id"
## Quota Project
## Specifies the Google Cloud project that should be billed for metric ingestion.
## If omitted, the quota is charged to the service account’s default project.
## This is useful when sending metrics to multiple projects using a single service account.
## The caller must have the `serviceusage.services.use` permission on the specified project.
# quota_project = ""
## The namespace for the metric descriptor
## This is optional and users are encouraged to set the namespace as a
## resource label instead. If omitted it is not included in the metric name.
namespace = "telegraf"
## Metric Type Prefix
## The DNS name used with the metric type as a prefix.
# metric_type_prefix = "custom.googleapis.com"
## Metric Name Format
## Specifies the layout of the metric name, choose from:
## * path: 'metric_type_prefix_namespace_name_key'
## * official: 'metric_type_prefix/namespace_name_key/kind'
# metric_name_format = "path"
## Metric Data Type
## By default, telegraf will use whatever type the metric comes in as.
## However, forcing a specific data type may be preferred for some use cases:
## * source: use whatever was passed in
## * double: preferred datatype to allow queries by PromQL.
# metric_data_type = "source"
## Tags as resource labels
## Tags defined in this option, when they exist, are added as a resource
## label and not included as a metric label. The values from tags override
## the values defined under the resource_labels config options.
# tags_as_resource_label = []
## Custom resource type
# resource_type = "generic_node"
## Override metric type by metric name
## Metric names matching the values here, globbing supported, will have the
## metric type set to the corresponding type.
# metric_counter = []
# metric_gauge = []
# metric_histogram = []
## NOTE: Due to the way TOML is parsed, tables must be at the END of the
## plugin definition, otherwise additional config options are read as part of
## the table
## Additional resource labels
# [outputs.stackdriver.resource_labels]
# node_id = "$HOSTNAME"
# namespace = "myapp"
# location = "eu-north0"
Input and output integration examples
Jenkins
- Continuous Integration Monitoring: Use the Jenkins plugin to monitor the performance of continuous integration pipelines by collecting metrics on job durations and failure rates, as shown in the end-to-end sketch after this list. This can help teams identify bottlenecks in the pipeline and improve overall build efficiency.
- Resource Allocation Analysis: Leverage Jenkins node metrics to assess resource usage across different agents. By understanding how resources are allocated, teams can optimize their Jenkins architecture, potentially reallocating agents or adjusting job configurations for better performance.
- Job Execution Trends: Analyze historical job performance metrics to identify trends in job execution over time. With this data, teams can proactively address potential issues before they grow, making adjustments to jobs or their configurations as needed.
- Alerting for Job Failures: Implement alerts that leverage the Jenkins job metrics to notify team members in case of job failures. This proactive approach can enhance operational awareness and speed up response times to failures, ensuring that critical jobs are monitored effectively.
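A minimal end-to-end pipeline for the continuous integration monitoring case above could pair the two plugins in a single Telegraf configuration, as sketched below. The Jenkins URL, credentials, and project ID are placeholders.
[[inputs.jenkins]]
## Hypothetical Jenkins instance
url = "https://ci.example.com:8080"
username = "telegraf"
password = "api-token"
response_timeout = "5s"
## Only look at recent builds to track job durations and failure rates
max_build_age = "1h"

[[outputs.stackdriver]]
## Hypothetical GCP project that receives the Jenkins metrics
project = "my-monitoring-project"
namespace = "jenkins"
metric_name_format = "official"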
Google Cloud Monitoring
- Multi-Project Metric Aggregation: Use this plugin to send aggregated metrics from various applications across different projects into a single Google Cloud Monitoring project. This use case helps centralize metrics for teams managing multiple applications, providing a unified view for performance monitoring and enhancing decision-making. By configuring different quota projects for billing, organizations can ensure proper cost management while benefiting from a consolidated monitoring strategy.
- Anomaly Detection Setup: Integrate the plugin with a machine learning-based analytics tool that identifies anomalies in the collected metrics. Using the historical data provided by the plugin, the tool can learn normal baseline behavior and promptly alert the operations team when unusual patterns arise, enabling proactive troubleshooting and minimizing service disruptions.
- Dynamic Resource Labeling: Implement dynamic tagging by utilizing the tags_as_resource_label option to adaptively attach resource labels based on runtime conditions (see the sketch after this list). This setup allows metrics to provide context-sensitive information, such as varying environmental parameters or operational states, enhancing the granularity of monitoring and reporting without changing the fundamental metric structure.
- Custom Metric Visualization Dashboards: Leverage the data collected by the Google Cloud Monitoring output plugin to feed a custom metrics visualization dashboard using a third-party framework. By visualizing metrics in real time, teams can achieve better situational awareness, notably by correlating different metrics, improving operational decision-making, and streamlining performance management workflows.
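For the dynamic resource labeling case, a sketch like the one below promotes selected tags to resource labels while keeping static fallbacks; the tag names, project ID, and label values are placeholders chosen for illustration.
[[outputs.stackdriver]]
## Hypothetical GCP project ID
project = "my-monitoring-project"
namespace = "telegraf"
## Promote the (hypothetical) "host" and "region" tags to resource labels;
## when present, their values override the static defaults below
tags_as_resource_label = ["host", "region"]
resource_type = "generic_node"
## Static resource labels used when the tags above are absent
## (tables must stay at the end of the plugin definition)
[outputs.stackdriver.resource_labels]
node_id = "$HOSTNAME"
location = "europe-north1"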
Feedback
Thank you for being part of our community! If you have any general feedback or found any bugs on these pages, we welcome and encourage your input. Please submit your feedback in the InfluxDB community Slack.
Related Integrations
HTTP and InfluxDB Integration
The HTTP plugin collects metrics from one or more HTTP(S) endpoints. It supports various authentication methods and configuration options for data formats.
View Integration
Kafka and InfluxDB Integration
This plugin reads messages from Kafka and allows the creation of metrics based on those messages. It supports various configurations including different Kafka settings and message processing options.
View Integration
Kinesis and InfluxDB Integration
The Kinesis plugin allows for reading metrics from AWS Kinesis streams. It supports multiple input data formats and offers checkpointing features with DynamoDB for reliable message processing.
View Integration