Syslog and Apache Hudi Integration

Powerful performance with an easy integration, powered by Telegraf, the open source data connector built by InfluxData.

Note: This is not the recommended configuration for real-time query at scale. For query and compression optimization, high-speed ingest, and high availability, you may want to consider Syslog and InfluxDB.

Powerful Performance, Limitless Scale

Collect, organize, and act on massive volumes of high-velocity data. Any data is more valuable when you think of it as time series data with InfluxDB, the #1 time series platform built to scale with Telegraf.

See Ways to Get Started

Input and output integration overview

The Syslog plugin enables the collection of syslog messages from various sources using standard networking protocols. This functionality is critical for environments where systems need to be monitored and logged efficiently.

The Parquet output plugin writes metrics to Parquet files, preparing them for ingestion into Apache Hudi's lakehouse architecture.

Integration details

Syslog

The Syslog plugin for Telegraf captures syslog messages transmitted over protocols such as TCP, UDP, and TLS. It supports both RFC 5424 (the newer syslog protocol) and the older RFC 3164 (BSD syslog protocol). The plugin operates as a service input, starting a listener for incoming syslog messages; unlike traditional plugins, service inputs may not work with standard interval settings or CLI options like --once. It provides options for network configuration, socket permissions, message handling, and connection handling. Because tools such as rsyslog can forward logging messages to it, the plugin is a powerful way to collect and relay system logs in real time, integrating cleanly into monitoring and logging systems.
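
To exercise a listener like the one configured below, the following minimal sketch sends a single RFC 5424 message over TCP using octet-counting framing. It assumes Telegraf is listening on tcp://127.0.0.1:6514, as in the sample configuration; the message content and PRI value are illustrative.

import socket

# Minimal RFC 5424 message:
# <PRI>VERSION TIMESTAMP HOSTNAME APP-NAME PROCID MSGID STRUCTURED-DATA MSG
# PRI 165 = facility local4 (20 * 8) + severity notice (5); values are illustrative.
msg = '<165>1 2024-01-01T12:00:00Z myhost myapp 1234 ID47 - An application event'

# Octet-counting framing (RFC 5425 / RFC 6587): "<LENGTH> <MESSAGE>",
# where LENGTH is the octet count of the message.
frame = f"{len(msg.encode())} {msg}".encode()

with socket.create_connection(("127.0.0.1", 6514)) as sock:
    sock.sendall(frame)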

Apache Hudi

This configuration uses Telegraf's Parquet output plugin to serialize metrics into columnar Parquet files suitable for downstream ingestion by Apache Hudi. The plugin groups metrics by metric name and writes each group to its own file in a specified directory, buffering writes for efficiency and optionally rotating files on a timer. It enforces schema consistency: metrics whose schema is incompatible with the existing file are dropped. Apache Hudi can then consume these Parquet files via tools like DeltaStreamer or Spark jobs, enabling transactional ingestion, time-travel queries, and upserts on your time series data.
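
To illustrate the Hudi side, here is a minimal PySpark sketch that reads the Parquet files Telegraf produces and upserts them into a Hudi table. This is a sketch under assumptions, not a definitive pipeline: the table name, record key, precombine field, and output path are placeholders, and the Hudi Spark bundle must be on the cluster's classpath. DeltaStreamer is an alternative for continuous ingestion.

from pyspark.sql import SparkSession

# Assumes the Hudi Spark bundle is available, e.g. started with
# spark-submit --packages org.apache.hudi:hudi-spark3-bundle_2.12:<version>
spark = (
    SparkSession.builder
    .appName("telegraf-parquet-to-hudi")
    .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    .getOrCreate()
)

# Read the Parquet files written by the Telegraf output configured below.
df = spark.read.parquet("/var/lib/telegraf/hudi_metrics")

# Hypothetical table layout: adjust the record key and precombine field
# to fields your metrics actually carry.
hudi_options = {
    "hoodie.table.name": "telegraf_metrics",
    "hoodie.datasource.write.recordkey.field": "host",
    "hoodie.datasource.write.precombine.field": "timestamp",
    "hoodie.datasource.write.operation": "upsert",
}

df.write.format("hudi").options(**hudi_options).mode("append").save("/lake/telegraf_metrics")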

Configuration

Syslog

[[inputs.syslog]]
  ## Protocol, address and port to host the syslog receiver.
  ## If no host is specified, then localhost is used.
  ## If no port is specified, 6514 is used (RFC5425#section-4.1).
  ##   ex: server = "tcp://localhost:6514"
  ##       server = "udp://:6514"
  ##       server = "unix:///var/run/telegraf-syslog.sock"
  ## When using tcp, consider using 'tcp4' or 'tcp6' to force the usage of IPv4
  ## or IPV6 respectively. There are cases, where when not specified, a system
  ## may force an IPv4 mapped IPv6 address.
  server = "tcp://127.0.0.1:6514"

  ## Permission for unix sockets (only available on unix sockets)
  ## This setting may not be respected by some platforms. To safely restrict
  ## permissions it is recommended to place the socket into a previously
  ## created directory with the desired permissions.
  ##   ex: socket_mode = "777"
  # socket_mode = ""

  ## Maximum number of concurrent connections (only available on stream sockets like TCP)
  ## Zero means unlimited.
  # max_connections = 0

  ## Read timeout (only available on stream sockets like TCP)
  ## Zero means unlimited.
  # read_timeout = "0s"

  ## Optional TLS configuration (only available on stream sockets like TCP)
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key  = "/etc/telegraf/key.pem"
  ## Enables client authentication if set.
  # tls_allowed_cacerts = ["/etc/telegraf/clientca.pem"]

  ## Maximum socket buffer size (in bytes when no unit specified)
  ## For stream sockets, once the buffer fills up, the sender will start
  ## backing up. For datagram sockets, once the buffer fills up, metrics will
  ## start dropping. Defaults to the OS default.
  # read_buffer_size = "64KiB"

  ## Period between keep alive probes (only applies to TCP sockets)
  ## Zero disables keep alive probes. Defaults to the OS configuration.
  # keep_alive_period = "5m"

  ## Content encoding for message payloads
  ## Can be set to "gzip" for compressed payloads or "identity" for no encoding.
  # content_encoding = "identity"

  ## Maximum size of decoded packet (in bytes when no unit specified)
  # max_decompression_size = "500MB"

  ## Framing technique used for messages transport
  ## Available settings are:
  ##   octet-counting  -- see RFC5425#section-4.3.1 and RFC6587#section-3.4.1
  ##   non-transparent -- see RFC6587#section-3.4.2
  # framing = "octet-counting"

  ## The trailer to be expected in case of non-transparent framing (default = "LF").
  ## Must be one of "LF", or "NUL".
  # trailer = "LF"

  ## Whether to parse in best effort mode or not (default = false).
  ## By default best effort parsing is off.
  # best_effort = false

  ## The RFC standard to use for message parsing
  ## By default RFC5424 is used. RFC3164 only supports UDP transport (no streaming support)
  ## Must be one of "RFC5424", or "RFC3164".
  # syslog_standard = "RFC5424"

  ## Character to prepend to SD-PARAMs (default = "_").
  ## A syslog message can contain multiple parameters and multiple identifiers within structured data section.
  ## Eg., [id1 name1="val1" name2="val2"][id2 name1="val1" nameA="valA"]
  ## For each combination a field is created.
  ## Its name is created concatenating identifier, sdparam_separator, and parameter name.
  # sdparam_separator = "_"

Apache Hudi

[[outputs.parquet]]
  ## Directory to write parquet files in. If a file already exists the output
  ## will attempt to continue using the existing file.
  directory = "/var/lib/telegraf/hudi_metrics"

  ## File rotation interval (default is no rotation)
  # rotation_interval = "1h"

  ## Buffer size before writing (default is 1000 metrics)
  # buffer_size = 1000

  ## Optional: compression codec (snappy, gzip, etc.)
  # compression_codec = "snappy"

  ## When grouping metrics, each metric name goes to its own file
  ## If a metric’s schema doesn’t match the existing schema, it will be dropped
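
Because metrics with mismatched schemas are dropped rather than written, it can help to inspect the schema of the files Telegraf produces before pointing Hudi at them. A minimal sketch using pyarrow, assuming a hypothetical cpu.parquet file in the output directory:

import pyarrow.parquet as pq

# Hypothetical file name: the plugin writes one file per metric name,
# so a "cpu" metric would land in a file like this one.
table = pq.read_table("/var/lib/telegraf/hudi_metrics/cpu.parquet")

# Print the Arrow schema; Hudi ingestion expects this to stay stable
# across files for the same metric.
print(table.schema)
print(f"{table.num_rows} rows")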

Input and output integration examples

Syslog

  1. Centralized Log Management: Use the Syslog plugin to aggregate log messages from multiple servers into a central logging system. This setup can help in monitoring overall system health, troubleshooting issues effectively, and maintaining audit trails by collecting syslog data from different sources.

  2. Real-Time Alerting: Integrate the Syslog plugin with alerting tools to trigger real-time notifications when specific log patterns or errors are detected. For example, if a critical system error appears in the logs, an alert can be sent to the operations team, minimizing downtime and enabling proactive maintenance.

  3. Security Monitoring: Leverage the Syslog plugin for security monitoring by capturing logs from firewalls, intrusion detection systems, and other security devices. This logging capability enhances security visibility and helps in investigating potentially malicious activities by analyzing the captured syslog data.

  4. Application Performance Tracking: Use the Syslog plugin to monitor application performance by collecting logs from various applications. This integration helps in analyzing application behavior and performance trends, aiding in optimizing application processes and ensuring smoother operation.

Apache Hudi

  1. Transactional Lakehouse Metrics: Buffer and write web service metrics as Parquet files for DeltaStreamer to ingest into Hudi, enabling upserts, ACID compliance, and time-travel on historical performance data.

  2. Edge Device Batch Analytics: Telegraf running on IoT gateways writes metrics to Parquet locally, where periodic Spark jobs ingest them into Hudi for long-term analytics and traceability.

  3. Schema-Enforced Abnormal Metric Handling: Use Parquet plugin’s strict schema-dropping behavior to prevent malformed or unexpected metric changes. Hudi ingestion then guarantees consistent schema and data quality in downstream datasets.

  4. Data Platform Integration: Store Telegraf metrics as Parquet files in an S3/ADLS landing zone. Hudi’s Spark-based ingestion pipeline then loads them into a unified, queryable lakehouse with business events and logs.

Feedback

Thank you for being part of our community! If you have any general feedback or found any bugs on these pages, we welcome and encourage your input. Please submit your feedback in the InfluxDB community Slack.


Related Integrations

HTTP and InfluxDB Integration

The HTTP plugin collects metrics from one or more HTTP(S) endpoints. It supports various authentication methods and configuration options for data formats.

View Integration

Kafka and InfluxDB Integration

This plugin reads messages from Kafka and allows the creation of metrics based on those messages. It supports various configurations including different Kafka settings and message processing options.

View Integration

Kinesis and InfluxDB Integration

The Kinesis plugin allows for reading metrics from AWS Kinesis streams. It supports multiple input data formats and offers checkpointing features with DynamoDB for reliable message processing.

View Integration