ActiveMQ and Librato Integration

Powerful performance with an easy integration, powered by Telegraf, the open source data connector built by InfluxData.

Info: This is not the recommended configuration for real-time query at scale. For query and compression optimization, high-speed ingest, and high availability, you may want to consider ActiveMQ and InfluxDB.

5B+ Telegraf downloads

#1 Time series database (Source: DB Engines)

1B+ Downloads of InfluxDB

2,800+ Contributors

Powerful Performance, Limitless Scale

Collect, organize, and act on massive volumes of high-velocity data with InfluxDB, the #1 time series platform, built to scale with Telegraf. Any data is more valuable when you think of it as time series data.

See Ways to Get Started

Input and output integration overview

The ActiveMQ Input Plugin collects metrics from the ActiveMQ message broker through its Console API, providing insights into the performance and status of message queues, topics, and subscribers.

The Librato plugin for Telegraf is designed to facilitate seamless integration with the Librato Metrics API, allowing for efficient metric reporting and monitoring.

Integration details

ActiveMQ

The ActiveMQ Input Plugin interfaces with the ActiveMQ Console API to gather metrics related to queues, topics, and subscribers. ActiveMQ, a widely used open-source message broker, supports various messaging protocols and provides a robust Web Console for management and monitoring. This plugin allows users to track essential metrics, including queue sizes, consumer counts, and message counts across different ActiveMQ entities, thereby enhancing observability within messaging systems. Users can configure parameters such as the WebConsole URL and basic authentication credentials to tailor the plugin to their environment. The collected metrics can be used to monitor the health and performance of messaging applications, facilitating proactive management and troubleshooting.
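
For orientation, here is a hedged sketch of what a single collection interval might report, shown in InfluxDB line protocol. The queue, topic, and subscriber names and all values are hypothetical, and exact tag and field names may vary by Telegraf version:

activemq_queues,name=orders,source=localhost,port=8161 size=12i,consumer_count=2i,enqueue_count=1540i,dequeue_count=1528i
activemq_topics,name=prices,source=localhost,port=8161 size=0i,consumer_count=3i,enqueue_count=8200i,dequeue_count=8200i
activemq_subscribers,client_id=app-1,subscription_name=sub-1,connection_id=conn-1,destination_name=orders,active=true pending_queue_size=0i,dispatched_queue_size=5i,dispatched_counter=1533i,enqueue_counter=1540i,dequeue_counter=1528i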

Librato

The Librato plugin enables Telegraf to send metrics to the Librato Metrics API. To authenticate, users must provide an api_user and api_token, which can be obtained from the Librato account settings. This integration allows for efficient monitoring and reporting of custom metrics within the Librato platform. The plugin offers a source_tag option for deriving the Librato source from a point tag; note, however, that it does not currently support sending the remaining point tags along with each metric. Any point value that cannot be converted to a float64 is skipped, so only valid numeric metrics are sent to Librato. The plugin also supports secret-store options for managing sensitive authentication credentials securely, facilitating best practices in credential management.
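
As a rough sketch of the secret-store pattern mentioned above (the store id and secret names are hypothetical; the @{store:key} references follow Telegraf's secret-store resolution syntax and require a Telegraf version with secret-store support):

[[secretstores.os]]
  ## Hypothetical OS keyring store holding the Librato credentials
  id = "librato_secrets"

[[outputs.librato]]
  ## Resolve credentials from the secret store instead of hard-coding them
  api_user  = "@{librato_secrets:api_user}"
  api_token = "@{librato_secrets:api_token}"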

Configuration

ActiveMQ

[[inputs.activemq]]
  ## ActiveMQ WebConsole URL
  url = "http://127.0.0.1:8161"

  ## Required ActiveMQ Endpoint
  ##   deprecated in 1.11; use the url option
  # server = "192.168.50.10"
  # port = 8161

  ## Credentials for basic HTTP authentication
  # username = "admin"
  # password = "admin"

  ## Required ActiveMQ webadmin root path
  # webadmin = "admin"

  ## Maximum time to receive response.
  # response_timeout = "5s"

  ## Optional TLS Config
  # tls_ca = "/etc/telegraf/ca.pem"
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"
  ## Use TLS but skip chain & host verification
  # insecure_skip_verify = false
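
With the configuration above, the plugin polls the console's XML pages; a hedged illustration of the endpoints involved, assuming the default webadmin root of admin and the URL shown above:

  ## Endpoints queried for queue, topic, and subscriber metrics (illustrative)
  # http://127.0.0.1:8161/admin/xml/queues.jsp
  # http://127.0.0.1:8161/admin/xml/topics.jsp
  # http://127.0.0.1:8161/admin/xml/subscribers.jsp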

Librato

[[outputs.librato]]
  ## Librato API Docs
  ## http://dev.librato.com/v1/metrics-authentication
  ## Librato API user
  api_user = "[email protected]" # required.
  ## Librato API token
  api_token = "my-secret-token" # required.
  ## Debug
  # debug = false
  ## Connection timeout.
  # timeout = "5s"
  ## Output source Template (same as graphite buckets)
  ## see https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md#graphite
  ## This template is used in librato's source (not metric's name)
  template = "host"
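
To make the template option concrete, a hedged illustration of how a point might map to a Librato gauge: the source is taken from the tag named by the template, while the exact gauge-naming behavior may vary by Telegraf version.

  ## Illustrative mapping (hypothetical point and values)
  ##   Telegraf point:  cpu,host=web-01 usage_idle=98.2
  ##   Librato gauge:   name ~ "cpu.usage_idle", source = "web-01", value = 98.2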

Input and output integration examples

ActiveMQ

  1. Proactive Queue Monitoring: Use the ActiveMQ plugin to monitor queue sizes in real time for a high-volume trading application. This implementation allows teams to receive alerts when queue sizes exceed a certain threshold, enabling rapid response to potential downtime caused by backlogs, thereby ensuring continuous availability of trading operations (a configuration sketch for this scenario follows this list).

  2. Performance Baselines and Anomaly Detection: Integrate this plugin with machine learning frameworks to establish performance baselines for message throughput. By analyzing historical data collected through this plugin, teams can flag anomalies in processing rates, leading to quicker identification of issues impacting service reliability and performance.

  3. Cross-Messaging System Analytics: Combine metrics from ActiveMQ with those from other messaging systems in a centralized dashboard. Users can visualize and compare performance data, such as enqueue and dequeue rates, providing valuable insights into the overall messaging architecture and assisting in optimizing the message flow between different brokers.

  4. Subscriber Performance Insights: Leverage the subscriber metrics collected by this plugin to analyze behavior patterns and optimize configuration for consumer applications. Understanding metrics such as dispatched queue size and counter values can guide adjustments to improve processing efficiency and resource allocation.
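
A minimal sketch of the first scenario above, assuming a local broker and hypothetical queue names; Telegraf's tagpass filter keeps only the queues of interest, and the actual threshold alerting would be handled downstream (for example, in the database or dashboarding layer):

[[inputs.activemq]]
  url = "http://127.0.0.1:8161"
  username = "admin"
  password = "admin"
  ## Keep only the queues backing the trading application (hypothetical names);
  ## tagpass must appear at the end of the plugin definition.
  [inputs.activemq.tagpass]
    name = ["orders", "executions"]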

Librato

  1. Real-time Application Monitoring: Utilize Librato to collect performance metrics from a web application in real time. This setup involves sending response times, error rates, and user interactions to Librato, allowing developers to monitor the application’s health and performance metrics closely. By analyzing these metrics, teams can quickly identify and address performance bottlenecks or application failures before they impact end users.

  2. Infrastructure Metrics Aggregation: Leverage this plugin to gather and send metrics from various infrastructure components, such as servers or containers, to Librato for centralized monitoring. Configuring the plugin to send CPU, memory usage, and disk I/O metrics enables system administrators to have a comprehensive view of infrastructure performance, assisting in capacity planning and resource optimization strategies (a configuration sketch for this scenario follows this list).

  3. Custom Metrics for Business Operations: Feed business-specific metrics, such as sales transactions or user sign-ups, to the Librato service using this plugin. By tracking these custom metrics, businesses can gain insights into their operational performance and make data-driven decisions to enhance their strategies, marketing efforts, or product development initiatives.

  4. Anomaly Detection in Metrics: Implement monitoring tools that utilize machine learning for anomaly detection. By continuously sending real-time metrics to Librato, teams can analyze trends and automatically flag unusual behavior, such as sudden spikes in latency or unusual traffic patterns, enabling timely intervention and troubleshooting.
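
A minimal sketch of the infrastructure-aggregation scenario above, pairing standard system inputs with the Librato output (credentials are placeholders and the input selection is illustrative):

[[inputs.cpu]]
  percpu = false
  totalcpu = true

[[inputs.mem]]

[[inputs.disk]]

[[outputs.librato]]
  api_user = "metrics@example.com"   ## placeholder
  api_token = "YOUR_API_TOKEN"       ## placeholder
  template = "host"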

Feedback

Thank you for being part of our community! If you have any general feedback or found any bugs on these pages, we welcome and encourage your input. Please submit your feedback in the InfluxDB community Slack.

Powerful Performance, Limitless Scale

Collect, organize, and act on massive volumes of high-velocity data with InfluxDB, the #1 time series platform, built to scale with Telegraf. Any data is more valuable when you think of it as time series data.

See Ways to Get Started

Related Integrations

HTTP and InfluxDB Integration

The HTTP plugin collects metrics from one or more HTTP(S) endpoints. It supports various authentication methods and configuration options for data formats.

View Integration

Kafka and InfluxDB Integration

This plugin reads messages from Kafka and allows the creation of metrics based on those messages. It supports various configurations including different Kafka settings and message processing options.

View Integration

Kinesis and InfluxDB Integration

The Kinesis plugin allows for reading metrics from AWS Kinesis streams. It supports multiple input data formats and offers checkpointing features with DynamoDB for reliable message processing.

View Integration