HAProxy and Apache Hudi Integration
Powerful performance with an easy integration, powered by Telegraf, the open source data connector built by InfluxData.
5B+ Telegraf downloads
#1 Time series database (Source: DB Engines)
1B+ Downloads of InfluxDB
2,800+ Contributors
Powerful Performance, Limitless Scale
Collect, organize, and act on massive volumes of high-velocity data. Any data is more valuable when you think of it as time series data with InfluxDB, the #1 time series platform built to scale with Telegraf.
See Ways to Get Started
Input and output integration overview
This plugin gathers and reports statistics from HAProxy, a popular open-source load balancer and proxy server, to help in monitoring and optimizing its performance.
This configuration writes metrics to Parquet files via Telegraf’s Parquet output plugin, preparing them for ingestion into Apache Hudi’s lakehouse architecture.
Integration details
HAProxy
The HAProxy plugin for Telegraf enables users to gather statistics directly from an HAProxy server via its stats socket or HTTP statistics page. HAProxy is a widely used software load balancer and proxy server that provides high availability and performance for TCP and HTTP applications. By integrating with HAProxy, this plugin allows users to monitor and analyze performance metrics such as active server counts, request rates, response codes, and session statuses in real time, facilitating better decision-making and proactive management of network resources. Key features include support for both HTTP and socket-based metrics collection, compatibility with basic authentication for secure access, and configurable field naming, allowing customization tailored to user preferences.
Apache Hudi
This configuration leverages Telegraf’s Parquet output plugin to serialize metrics into columnar Parquet files suitable for downstream ingestion by Apache Hudi. The plugin writes metrics grouped by metric name into files in a specified directory, buffering writes for efficiency and optionally rotating files on a timer. It enforces schema compatibility: metrics whose schema does not match the existing file’s schema are dropped, ensuring consistency. Apache Hudi can then consume these Parquet files via tools like DeltaStreamer or Spark jobs, enabling transactional ingestion, time-travel queries, and upserts on your time series data.
Configuration
HAProxy
[[inputs.haproxy]]
## List of stats endpoints. Metrics can be collected from both http and socket
## endpoints. Examples of valid endpoints:
## - http://myhaproxy.com:1936/haproxy?stats
## - https://myhaproxy.com:8000/stats
## - socket:/run/haproxy/admin.sock
## - /run/haproxy/*.sock
## - tcp://127.0.0.1:1936
##
## Server addresses not starting with 'http://', 'https://', 'tcp://' will be
## treated as possible sockets. When specifying local socket, glob patterns are
## supported.
servers = ["http://myhaproxy.com:1936/haproxy?stats"]
## By default, some of the fields are renamed from what haproxy calls them.
## Setting this option to true results in the plugin keeping the original
## field names.
# keep_field_names = false
## Optional TLS Config
# tls_ca = "/etc/telegraf/ca.pem"
# tls_cert = "/etc/telegraf/cert.pem"
# tls_key = "/etc/telegraf/key.pem"
## Use TLS but skip chain & host verification
# insecure_skip_verify = false
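Telegraf can only collect statistics that HAProxy actually exposes. For reference, a minimal haproxy.cfg sketch (the socket path and port 1936 mirror the examples above and are illustrative, not required values) that enables both the admin socket and the HTTP stats page could look like this:
# haproxy.cfg sketch (illustrative values only)
global
    stats socket /run/haproxy/admin.sock mode 660 level admin

listen stats
    bind :1936
    stats enable
    stats uri /haproxy?stats
    stats refresh 10s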
Apache Hudi
[[outputs.parquet]]
## Directory to write parquet files in. If a file already exists the output
## will attempt to continue using the existing file.
directory = "/var/lib/telegraf/hudi_metrics"
## File rotation interval (default is no rotation)
# rotation_interval = "1h"
## Buffer size before writing (default is 1000 metrics)
# buffer_size = 1000
## Optional: compression codec (snappy, gzip, etc.)
# compression_codec = "snappy"
## When grouping metrics, each metric name goes to its own file
## If a metric’s schema doesn’t match the existing schema, it will be dropped
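Once Telegraf has written Parquet files into this directory, a downstream Spark job can load them into a Hudi table. The PySpark sketch below is illustrative only: the file naming, target table location, and key fields (host, proxy, sv, timestamp) are assumptions, not behavior guaranteed by the plugin.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("telegraf-parquet-to-hudi")
    # Hudi's Spark bundle must be on the classpath, e.g. via --packages
    .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    .getOrCreate()
)

# Read the Parquet files Telegraf wrote for the 'haproxy' measurement (file naming assumed)
df = spark.read.parquet("/var/lib/telegraf/hudi_metrics/haproxy*.parquet")

hudi_options = {
    "hoodie.table.name": "haproxy_metrics",
    "hoodie.datasource.write.recordkey.field": "host,proxy,sv",  # assumed key columns
    "hoodie.datasource.write.precombine.field": "timestamp",     # assumed ordering column
    "hoodie.datasource.write.partitionpath.field": "host",
    "hoodie.datasource.write.operation": "upsert",
}

(df.write.format("hudi")
    .options(**hudi_options)
    .mode("append")
    .save("s3a://example-lakehouse/haproxy_metrics"))  # target base path is an example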
Input and output integration examples
HAProxy
- Dynamic Load Adjustment: Utilize the HAProxy plugin to monitor traffic patterns in real time, enabling automated adjustments to load balancing algorithms. By continuously gathering metrics on server loads and request rates, system administrators can dynamically allocate resources, ensuring that no single server becomes a bottleneck, thus enhancing overall application performance and availability.
- Historical Performance Analytics: Integrate this plugin with a time series database to collect HAProxy metrics over time, allowing you to analyze historical performance and traffic trends. This can facilitate predictive analysis and capacity planning, giving businesses insight into peak traffic times and helping to identify potential future resource needs.
- Alerting on Anomalies: Implement alerting workflows that trigger when unusual patterns appear in HAProxy metrics, such as sudden spikes in error rates or drops in request-handling capacity. By leveraging this plugin, operations teams can receive timely notifications, allowing for swift intervention and minimizing the impact of potential downtime on end users (a minimal alerting sketch follows this list).
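As a concrete illustration of the alerting scenario above, the following Python sketch assumes HAProxy metrics are also written to InfluxDB and uses the influxdb-client library to flag elevated 5xx rates. The bucket, token, field name (http_response.5xx), and threshold are example values, not requirements of the plugin.
from influxdb_client import InfluxDBClient

FLUX = '''
from(bucket: "telegraf")
  |> range(start: -5m)
  |> filter(fn: (r) => r._measurement == "haproxy" and r._field == "http_response.5xx")
  |> derivative(unit: 1m, nonNegative: true)
  |> mean()
'''

with InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org") as client:
    for table in client.query_api().query(FLUX):
        for record in table.records:
            rate = record.get_value()
            if rate is not None and rate > 10:  # example threshold: 10 errors/min
                print(f"ALERT: elevated 5xx rate on {record.values.get('proxy')}: {rate:.1f}/min")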
Apache Hudi
- Transactional Lakehouse Metrics: Buffer and write web service metrics as Parquet files for DeltaStreamer to ingest into Hudi, enabling upserts, ACID compliance, and time-travel queries on historical performance data.
- Edge Device Batch Analytics: Telegraf running on IoT gateways writes metrics to Parquet locally, where periodic Spark jobs ingest them into Hudi for long-term analytics and traceability.
- Schema-Enforced Abnormal Metric Handling: Use the Parquet plugin’s strict schema enforcement to drop malformed or unexpectedly changed metrics. Hudi ingestion then guarantees a consistent schema and data quality in downstream datasets.
- Data Platform Integration: Store Telegraf metrics as Parquet files in an S3/ADLS landing zone. Hudi’s Spark-based ingestion pipeline then loads them into a unified, queryable lakehouse alongside business events and logs.
Feedback
Thank you for being part of our community! If you have any general feedback or found any bugs on these pages, we welcome and encourage your input. Please submit your feedback in the InfluxDB community Slack.
Powerful Performance, Limitless Scale
Collect, organize, and act on massive volumes of high-velocity data. Any data is more valuable when you think of it as time series data with InfluxDB, the #1 time series platform built to scale with Telegraf.
See Ways to Get Started
Related Integrations
HTTP and InfluxDB Integration
The HTTP plugin collects metrics from one or more HTTP(S) endpoints. It supports various authentication methods and configuration options for data formats.
View Integration
Kafka and InfluxDB Integration
This plugin reads messages from Kafka and allows the creation of metrics based on those messages. It supports various configurations including different Kafka settings and message processing options.
View Integration
Kinesis and InfluxDB Integration
The Kinesis plugin allows for reading metrics from AWS Kinesis streams. It supports multiple input data formats and offers checkpointing features with DynamoDB for reliable message processing.
View Integration