OpenTelemetry and Databricks Integration

Powerful performance with an easy integration, powered by Telegraf, the open source data connector built by InfluxData.

Info: This is not the recommended configuration for real-time query at scale. For query and compression optimization, high-speed ingest, and high availability, you may want to consider OpenTelemetry and InfluxDB.

5B+ Telegraf downloads
#1 time series database (source: DB-Engines)
1B+ downloads of InfluxDB
2,800+ contributors

Powerful Performance, Limitless Scale

Collect, organize, and act on massive volumes of high-velocity data. Any data is more valuable when you think of it as time series data with InfluxDB, the #1 time series platform built to scale with Telegraf.

See Ways to Get Started

Input and output integration overview

This plugin receives traces, metrics, and logs from OpenTelemetry clients and agents via gRPC, enabling comprehensive observability of applications.

Use Telegraf’s HTTP output plugin to push metrics straight into a Databricks Lakehouse by calling the SQL Statement Execution API with a JSON-wrapped INSERT or volume PUT command.

Integration details

OpenTelemetry

The OpenTelemetry plugin receives telemetry data such as traces, metrics, and logs from clients and agents that implement OpenTelemetry, delivered over gRPC. Unlike standard plugins that collect metrics at defined intervals, it starts a gRPC service and listens for incoming telemetry. The OpenTelemetry ecosystem helps developers observe and understand application performance by providing a vendor-neutral way to instrument, generate, collect, and export telemetry data. Key features of this plugin include a configurable connection timeout, an adjustable maximum message size for incoming data, and options for choosing which span, log, and profile attributes become tags on the resulting measurements. With this flexibility, organizations can tailor their telemetry collection to precise observability requirements and integrate the data cleanly into systems such as InfluxDB.
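
Any OTLP-capable SDK, agent, or collector can feed this listener by exporting over gRPC to the service_address configured below. The following is a minimal sketch, assuming a Python application instrumented with the opentelemetry-sdk and opentelemetry-exporter-otlp-proto-grpc packages and a Telegraf listener on localhost:4317; the service and metric names are placeholders.

# Minimal sketch: export one counter metric over OTLP/gRPC to the Telegraf listener.
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import OTLPMetricExporter
from opentelemetry.sdk.resources import Resource

# Point the exporter at Telegraf's opentelemetry input (default 0.0.0.0:4317).
exporter = OTLPMetricExporter(endpoint="http://localhost:4317", insecure=True)
reader = PeriodicExportingMetricReader(exporter, export_interval_millis=10_000)
provider = MeterProvider(
    resource=Resource.create({"service.name": "demo-app"}),  # placeholder service name
    metric_readers=[reader],
)
metrics.set_meter_provider(provider)

meter = metrics.get_meter("demo")
requests_counter = meter.create_counter("app.requests", unit="1", description="Handled requests")
requests_counter.add(1, {"host": "web-01"})  # attributes become tags downstream

provider.shutdown()  # flush pending metrics before the process exits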

Databricks

This configuration turns Telegraf into a lightweight ingestion agent for the Databricks Lakehouse. It uses the Databricks SQL Statement Execution API 2.0, which accepts authenticated POST requests containing a JSON payload with a statement field. Each Telegraf flush renders a SQL INSERT (or, for file-based workflows, a PUT ... INTO /Volumes/... command) that lands the metrics in a Unity Catalog table or volume governed by Lakehouse security. Under the hood, Databricks stores successful inserts as Delta Lake transactions, providing ACID guarantees, time travel, and scalable analytics. Operators can point warehouse_id at any serverless or classic SQL warehouse, and authentication is handled with a personal access token (PAT) or service-principal token; no agents or JDBC drivers are required. Because Telegraf's HTTP output supports custom headers, batching, TLS, and proxy settings, the same pattern scales from edge IoT gateways to container sidecars, consolidating infrastructure telemetry, application logs, or business KPIs directly into the Lakehouse for BI, ML, and Lakehouse Monitoring. Unity Catalog volumes provide a governed staging layer when file uploads and COPY INTO are preferred, and the approach aligns with Databricks' recommended ingestion practices for partners and ISVs.
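
Each Telegraf flush ultimately amounts to an authenticated POST against this endpoint. The sketch below exercises the same call directly with Python's requests library, assuming DATABRICKS_HOST, DATABRICKS_TOKEN, and WAREHOUSE_ID environment variables and a hypothetical main.telemetry.cpu target table.

# Minimal sketch of the API call each Telegraf flush performs: POST a SQL statement
# to the Databricks SQL Statement Execution API.
import os
import requests

host = os.environ["DATABRICKS_HOST"]
token = os.environ["DATABRICKS_TOKEN"]

payload = {
    "statement": "INSERT INTO main.telemetry.cpu VALUES (current_timestamp(), 42.5, 'web-01')",
    "warehouse_id": os.environ["WAREHOUSE_ID"],
    "wait_timeout": "30s",  # block up to 30s for the statement to finish
}

resp = requests.post(
    f"https://{host}/api/2.0/sql/statements",
    headers={"Authorization": f"Bearer {token}"},
    json=payload,
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["status"]["state"])  # e.g. "SUCCEEDED"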

Configuration

OpenTelemetry

[[inputs.opentelemetry]]
  ## Override the default (0.0.0.0:4317) destination OpenTelemetry gRPC service
  ## address:port
  # service_address = "0.0.0.0:4317"

  ## Override the default (5s) new connection timeout
  # timeout = "5s"

  ## gRPC Maximum Message Size
  # max_msg_size = "4MB"

  ## Override the default span attributes to be used as line protocol tags.
  ## These are always included as tags:
  ## - trace ID
  ## - span ID
  ## Common attributes can be found here:
  ## - https://github.com/open-telemetry/opentelemetry-collector/tree/main/semconv
  # span_dimensions = ["service.name", "span.name"]

  ## Override the default log record attributes to be used as line protocol tags.
  ## These are always included as tags, if available:
  ## - trace ID
  ## - span ID
  ## Common attributes can be found here:
  ## - https://github.com/open-telemetry/opentelemetry-collector/tree/main/semconv
  ## When using InfluxDB for both logs and traces, be certain that log_record_dimensions
  ## matches the span_dimensions value.
  # log_record_dimensions = ["service.name"]

  ## Override the default profile attributes to be used as line protocol tags.
  ## These are always included as tags, if available:
  ## - profile_id
  ## - address
  ## - sample
  ## - sample_name
  ## - sample_unit
  ## - sample_type
  ## - sample_type_unit
  ## Common attributes can be found here:
  ## - https://github.com/open-telemetry/opentelemetry-collector/tree/main/semconv
  # profile_dimensions = []

  ## Override the default (prometheus-v1) metrics schema.
  ## Supports: "prometheus-v1", "prometheus-v2"
  ## For more information about the alternatives, read the Prometheus input
  ## plugin notes.
  # metrics_schema = "prometheus-v1"

  ## Optional TLS Config.
  ## For advanced options: https://github.com/influxdata/telegraf/blob/v1.18.3/docs/TLS.md
  ##
  ## Set one or more allowed client CA certificate file names to
  ## enable mutually authenticated TLS connections.
  # tls_allowed_cacerts = ["/etc/telegraf/clientca.pem"]
  ## Add service certificate and key.
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"

Databricks

[[outputs.http]]
  ## Databricks SQL Statement Execution API endpoint
  url = "https://${DATABRICKS_HOST}/api/2.0/sql/statements"

  ## Use POST to submit each Telegraf batch as a SQL request
  method = "POST"

  ## Personal-access token (PAT) for workspace or service principal
  headers = { Authorization = "Bearer ${DATABRICKS_TOKEN}" }

  ## Send JSON that wraps the metrics batch in a SQL INSERT (or PUT into a Volume)
  content_type = "application/json"

  ## Serialize metrics as JSON so they can be embedded in the SQL statement
  data_format = "json"
  json_timestamp_units = "1ms"

  ## Build the request body.  Telegraf replaces the template variables at runtime.
  ## Example inserts a row per metric into a Unity-Catalog table.
  body_template = """
  {
    \"statement\": \"INSERT INTO ${TARGET_TABLE} VALUES {{range .Metrics}}(from_unixtime({{.timestamp}}/1000), {{.fields.usage}}, '{{.tags.host}}'){{end}}\",
    \"warehouse_id\": \"${WAREHOUSE_ID}\"
  }
  """

  ## Optional: add batching limits or TLS settings
  # batch_size = 500
  # timeout     = "10s"
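
The INSERT rendered by body_template above expects a target table with a timestamp column, a numeric usage field, and a host tag. A minimal sketch of creating such a table with the databricks-sql-connector package follows; the connection details and the main.telemetry.cpu name are placeholders.

# Sketch: create the three-column table the example INSERT assumes, using the
# databricks-sql-connector package (pip install databricks-sql-connector).
from databricks import sql

with sql.connect(
    server_hostname="dbc-xxxxxxxx.cloud.databricks.com",  # placeholder workspace host
    http_path="/sql/1.0/warehouses/<warehouse_id>",       # placeholder SQL warehouse path
    access_token="<DATABRICKS_TOKEN>",                     # PAT or service-principal token
) as conn:
    with conn.cursor() as cur:
        cur.execute(
            """
            CREATE TABLE IF NOT EXISTS main.telemetry.cpu (
              event_time TIMESTAMP,
              usage      DOUBLE,
              host       STRING
            )
            """
        )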

Input and output integration examples

OpenTelemetry

  1. Unified Monitoring Across Services: Use the OpenTelemetry plugin to collect and consolidate telemetry data from various microservices within a Kubernetes environment. By instrumenting each service with OpenTelemetry, you can utilize this plugin to gather a holistic view of application performance and dependencies in real-time, enabling faster troubleshooting and improved reliability of complex systems.

  2. Enhanced Debugging with Traces: Implement this plugin to capture end-to-end traces of requests flowing through multiple services. For instance, when a user initiates a transaction that triggers several backend services, the OpenTelemetry plugin can record detailed traces that highlight performance bottlenecks, giving developers the necessary insights to debug issues and optimize their code.

  3. Dynamic Load Testing and Performance Monitoring: Leverage the capabilities of this plugin during load testing phases by collecting live metrics and traces under simulated higher loads. This approach helps to evaluate the resilience of the application components and identify potential performance degradations preemptively, ensuring a smooth user experience in production.

  4. Integrated Logging and Metrics for Real-Time Monitoring: Combine the OpenTelemetry plugin with logging frameworks to gather real-time logs alongside metric data, creating a powerful observability platform. For example, integrate it within a CI/CD pipeline to monitor builds and deployments, while collecting logs that help diagnose failures or performance issues in real-time.

Databricks

  1. Edge-to-Lakehouse Telemetry Pipe: Deploy Telegraf on factory PLCs to sample vibration metrics and post them every second to a serverless SQL warehouse. Delta tables power PowerBI dashboards that alert engineers when thresholds drift.
  2. Blue-Green CI/CD Rollout Metrics: Attach a Telegraf sidecar to each Kubernetes canary pod; it inserts container stats into a Unity Catalog table tagged by deployment_id, letting Databricks SQL compare error-rate percentiles and auto-rollback underperforming versions.
  3. SaaS Usage Metering: Insert per-tenant API-call counters via the HTTP plugin; a nightly Lakehouse query aggregates usage into invoices, eliminating custom metering micro-services.
  4. Security Forensics Lake: Upload JSON batches of Suricata IDS events to a Unity Catalog volume using PUT commands, then run COPY INTO for near-real-time enrichment with Delta Live Tables, producing a searchable threat-intel lake that joins network logs with user session data (see the sketch after this list).
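
The volume-staging pattern in example 4 can be sketched as follows, assuming the Databricks Files API is used to stage the batch and the Statement Execution API runs the COPY INTO; all catalog, schema, volume, file, and table names are placeholders, and the target table is assumed to already exist.

# Rough sketch: stage a JSON batch in a Unity Catalog volume, then load it with COPY INTO.
import os
import requests

host = os.environ["DATABRICKS_HOST"]
headers = {"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"}

# 1. Upload the batch file to a Unity Catalog volume via the Files API.
volume_file = "/Volumes/main/security/ids_staging/suricata_batch_001.json"  # placeholder path
with open("suricata_batch_001.json", "rb") as f:
    requests.put(
        f"https://{host}/api/2.0/fs/files{volume_file}",
        headers=headers,
        data=f,
        timeout=60,
    ).raise_for_status()

# 2. Load the staged files into a Delta table via the Statement Execution API.
copy_stmt = (
    "COPY INTO main.security.suricata_events "
    "FROM '/Volumes/main/security/ids_staging/' "
    "FILEFORMAT = JSON"
)
requests.post(
    f"https://{host}/api/2.0/sql/statements",
    headers=headers,
    json={"statement": copy_stmt, "warehouse_id": os.environ["WAREHOUSE_ID"], "wait_timeout": "50s"},
    timeout=90,
).raise_for_status()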

Feedback

Thank you for being part of our community! If you have any general feedback or found any bugs on these pages, we welcome and encourage your input. Please submit your feedback in the InfluxDB community Slack.


Related Integrations

HTTP and InfluxDB Integration

The HTTP plugin collects metrics from one or more HTTP(S) endpoints. It supports various authentication methods and configuration options for data formats.

View Integration

Kafka and InfluxDB Integration

This plugin reads messages from Kafka and allows the creation of metrics based on those messages. It supports various configurations including different Kafka settings and message processing options.

View Integration

Kinesis and InfluxDB Integration

The Kinesis plugin allows for reading metrics from AWS Kinesis streams. It supports multiple input data formats and offers checkpointing features with DynamoDB for reliable message processing.

View Integration