RabbitMQ and Parquet Integration
Powerful performance with an easy integration, powered by Telegraf, the open source data connector built by InfluxData.
5B+ Telegraf downloads · #1 time series database (Source: DB Engines) · 1B+ downloads of InfluxDB · 2,800+ contributors
Powerful Performance, Limitless Scale
Collect, organize, and act on massive volumes of high-velocity data with InfluxDB, the #1 time series platform built to scale with Telegraf. Any data is more valuable when you think of it as time series data.
See Ways to Get Started
Input and output integration overview
This plugin reads metrics from RabbitMQ servers, providing essential insights into the performance and state of the messaging system.
This plugin writes metrics to parquet files, utilizing a schema based on the metrics grouped by name. It supports file rotation and buffered writing for optimal performance.
Integration details
RabbitMQ
The RabbitMQ plugin for Telegraf allows users to gather metrics from RabbitMQ servers via the RabbitMQ Management Plugin. This capability is crucial for monitoring the performance and health of RabbitMQ instances, which are widely utilized for message queuing and processing in various applications. The plugin provides comprehensive insights into key RabbitMQ metrics, including message rates, queue depths, and node health statistics, thereby enabling operators to maintain optimal performance and robustness of their messaging infrastructure. Additionally, it supports secret-stores for managing sensitive credentials securely, making integration with existing systems smoother. Configuration options allow for flexibility in specifying the nodes, queues, and exchanges to monitor, providing valuable adaptability for diverse deployment scenarios.
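Under the hood, the plugin scrapes the RabbitMQ Management Plugin's HTTP API. As a rough illustration of the data involved, the Python sketch below queries two of the same endpoints directly; it assumes a local broker with the management plugin enabled and the default guest/guest credentials.
import json
import urllib.request
from base64 import b64encode

# Assumptions: local broker, management plugin on port 15672, guest/guest.
BASE_URL = "http://localhost:15672/api"
AUTH_HEADER = {"Authorization": "Basic " + b64encode(b"guest:guest").decode()}

def fetch(path):
    # Query one endpoint of the RabbitMQ Management HTTP API.
    req = urllib.request.Request(BASE_URL + path, headers=AUTH_HEADER)
    with urllib.request.urlopen(req, timeout=4) as resp:
        return json.load(resp)

# /api/overview backs the rabbitmq_overview measurement.
overview = fetch("/overview")
print("messages ready:", overview["queue_totals"].get("messages_ready", 0))

# /api/queues backs the rabbitmq_queue measurement.
for queue in fetch("/queues"):
    print(queue["name"], "depth:", queue.get("messages", 0))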
Parquet
The Parquet output plugin for Telegraf writes metrics to parquet files, a columnar storage format optimized for analytics. By default, the plugin groups metrics by name and writes each group to a single file; metrics whose schema does not align with the existing schema for that group are dropped. The plugin generates an Apache Arrow schema from all grouped metrics, so the schema reflects the union of all fields and tags. Writing is buffered: metrics are held in memory and flushed to disk in batches for efficiency. Because parquet files are only readable if they are closed properly, clean shutdown matters; improper closure can leave files unreadable. Additionally, the plugin supports file rotation after a specified time interval, which prevents overwriting existing files and avoids schema conflicts when a file with the same name already exists.
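Because the plugin derives one Arrow schema per metric name (the union of all tags and fields it has seen), it can be useful to inspect what actually landed on disk. A minimal sketch, assuming PyArrow is installed and that Telegraf wrote a file for the cpu metric (the file name here is illustrative):
import pyarrow.parquet as pq

# Illustrative file name; the plugin writes one file per metric-name group.
table = pq.read_table("cpu.parquet")

# The schema is the union of all tags and fields seen for this metric name.
print(table.schema)

# The timestamp column follows timestamp_field_name (default: "timestamp").
print(table.column("timestamp")[:5])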
Configuration
RabbitMQ
[[inputs.rabbitmq]]
## Management Plugin url. (default: http://localhost:15672)
# url = "http://localhost:15672"
## Tag added to rabbitmq_overview series; deprecated: use tags
# name = "rmq-server-1"
## Credentials
# username = "guest"
# password = "guest"
## Optional TLS Config
# tls_ca = "/etc/telegraf/ca.pem"
# tls_cert = "/etc/telegraf/cert.pem"
# tls_key = "/etc/telegraf/key.pem"
## Use TLS but skip chain & host verification
# insecure_skip_verify = false
## Optional request timeouts
## ResponseHeaderTimeout, if non-zero, specifies the amount of time to wait
## for a server's response headers after fully writing the request.
# header_timeout = "3s"
##
## client_timeout specifies a time limit for requests made by this client.
## Includes connection time, any redirects, and reading the response body.
# client_timeout = "4s"
## A list of nodes to gather as the rabbitmq_node measurement. If not
## specified, metrics for all nodes are gathered.
# nodes = ["rabbit@node1", "rabbit@node2"]
## A list of queues to gather as the rabbitmq_queue measurement. If not
## specified, metrics for all queues are gathered.
## Deprecated in 1.6: Use queue_name_include instead.
# queues = ["telegraf"]
## A list of exchanges to gather as the rabbitmq_exchange measurement. If not
## specified, metrics for all exchanges are gathered.
# exchanges = ["telegraf"]
## Metrics to include and exclude. Globs accepted.
## Note that an empty array for both will include all metrics
## Currently the following metrics are supported: "exchange", "federation", "node", "overview", "queue"
# metric_include = []
# metric_exclude = []
## Queues to include and exclude. Globs accepted.
## Note that an empty array for both will include all queues
# queue_name_include = []
# queue_name_exclude = []
## Federation upstreams to include and exclude specified as an array of glob
## pattern strings. Federation links can also be limited by the queue and
## exchange filters.
# federation_upstream_include = []
# federation_upstream_exclude = []
Parquet
[[outputs.parquet]]
## Directory to write parquet files in. If a file already exists the output
## will attempt to continue using the existing file.
# directory = "."
## Files are rotated after the time interval specified. When set to 0 no time
## based rotation is performed.
# rotation_interval = "0h"
## Timestamp field name
## Field name to use to store the timestamp. If set to an empty string, then
## the timestamp is omitted.
# timestamp_field_name = "timestamp"
Input and output integration examples
RabbitMQ
- Monitoring Queue Performance Metrics: Use the RabbitMQ plugin to keep track of queue performance over time. This involves setting up monitoring dashboards that visualize crucial queue metrics such as message rates, the number of consumers, and message delivery rates. With this information, teams can proactively address any bottlenecks or performance issues by analyzing trends and making data-informed decisions about scaling or optimizing their RabbitMQ configuration.
- Alerting on System Health: Integrate the RabbitMQ plugin with an alerting system to notify operational teams of potential issues within RabbitMQ instances. For example, if the number of unacknowledged messages reaches a critical threshold or if queues become overwhelmed, alerts can trigger, allowing for immediate investigation and swift remedial action to maintain the health of message flows. A toy version of this check appears after this list.
- Analyzing Message Processing Metrics: Employ the plugin to gather detailed metrics on message processing performance, such as the rates of messages published, acknowledged, and redelivered. By analyzing these metrics, teams can evaluate the efficiency of their message consumer applications and make adjustments to configuration or code where necessary, thereby enhancing overall system throughput and resilience.
- Cross-System Data Integration: Leverage the metrics collected by the RabbitMQ plugin to integrate data flows between RabbitMQ and other systems or services. For example, use the gathered metrics to drive automated workflows or analytics pipelines that utilize messages processed in RabbitMQ, enabling organizations to optimize workflows and enhance data agility across their ecosystems.
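As a toy version of the alerting check described above, the sketch below polls the same management API endpoint the plugin scrapes and flags a high count of unacknowledged messages. The endpoint is real; the threshold, credentials, and direct polling are illustrative, since a production setup would alert from the metrics Telegraf has already stored.
import json
import urllib.request
from base64 import b64encode

UNACKED_THRESHOLD = 10_000  # illustrative; tune to your workload

req = urllib.request.Request(
    "http://localhost:15672/api/overview",  # assumed local broker, guest/guest
    headers={"Authorization": "Basic " + b64encode(b"guest:guest").decode()},
)
with urllib.request.urlopen(req, timeout=4) as resp:
    totals = json.load(resp)["queue_totals"]

unacked = totals.get("messages_unacknowledged", 0)
if unacked > UNACKED_THRESHOLD:
    print(f"ALERT: {unacked} unacknowledged messages")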
Parquet
- Data Lake Ingestion: Utilize the Parquet plugin to store metrics from various sources into a data lake. By writing metrics in parquet format, you establish a standardized and efficient way to manage time-series data, enabling faster querying capabilities and seamless integration with analytics tools like Apache Spark or AWS Athena. This setup can significantly improve data retrieval times and analysis workflows.
- Long-term Storage of Metrics: Implement the Parquet plugin in a monitoring setup where metrics are collected over time from multiple applications. This allows for long-term storage of performance data in a compact format, making it cost-effective to store vast amounts of historical data while preserving the ability for quick retrieval and analysis later on. By archiving metrics in parquet files, organizations can maintain compliance and create detailed reports from historical performance trends.
- Analytics and Reporting: After writing metrics to parquet files, leverage tools like Apache Arrow or PyArrow to perform complex analytical queries directly on the files without needing to load all the data into memory. This can enhance reporting capabilities, allowing teams to generate insights and visualizations from large datasets efficiently, improving decision-making based on accurate, up-to-date performance metrics. A sketch of this pattern follows the list.
- Integrating with Data Warehouses: Use the Parquet plugin as part of a data integration pipeline that feeds into a modern data warehouse. By converting metrics to parquet format, the data can be easily ingested by systems like Snowflake or Google BigQuery, enabling powerful analytics and business intelligence capabilities that drive actionable insights from the collected metrics.
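To make the analytics and reporting pattern concrete, the sketch below uses pyarrow.dataset to filter and aggregate rotated parquet files without loading everything into memory. The directory, column names, and tag value are illustrative, and it assumes the directory holds files for a single metric name so their schemas agree.
import pyarrow.dataset as ds
import pyarrow.compute as pc

# Illustrative path: the plugin's output directory for one metric name.
dataset = ds.dataset("/var/lib/telegraf/parquet", format="parquet")

# The filter is pushed down, so only matching row groups are read.
table = dataset.to_table(
    columns=["timestamp", "messages"],       # illustrative column names
    filter=pc.field("queue") == "telegraf",  # illustrative tag filter
)
print("rows scanned:", table.num_rows)
print("max queue depth:", pc.max(table["messages"]).as_py())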
Feedback
Thank you for being part of our community! If you have any general feedback or found any bugs on these pages, we welcome and encourage your input. Please submit your feedback in the InfluxDB community Slack.
Related Integrations
HTTP and InfluxDB Integration
The HTTP plugin collects metrics from one or more HTTP(S) endpoints. It supports various authentication methods and configuration options for data formats.
View Integration
Kafka and InfluxDB Integration
This plugin reads messages from Kafka and allows the creation of metrics based on those messages. It supports various configurations including different Kafka settings and message processing options.
View Integration
Kinesis and InfluxDB Integration
The Kinesis plugin allows for reading metrics from AWS Kinesis streams. It supports multiple input data formats and offers checkpointing features with DynamoDB for reliable message processing.
View Integration