ctrlX Data Layer and Databricks Integration

Powerful performance and easy integration, powered by Telegraf, the open source data connector built by InfluxData.

Info: This is not the recommended configuration for real-time queries at scale. For query and compression optimization, high-speed ingest, and high availability, consider pairing the ctrlX Data Layer with InfluxDB.

5B+ Telegraf downloads
#1 time series database (source: DB-Engines)
1B+ downloads of InfluxDB
2,800+ contributors


Powerful Performance, Limitless Scale

Collect, organize, and act on massive volumes of high-velocity data. Any data is more valuable when you think of it as time series data with InfluxDB, the #1 time series platform built to scale with Telegraf.

See Ways to Get Started

Input and output integration overview

The ctrlX plugin is designed to gather data seamlessly from the ctrlX Data Layer middleware, widely used in industrial automation.

Use Telegraf’s HTTP output plugin to push metrics straight into a Databricks Lakehouse by calling the SQL Statement Execution API with a JSON-wrapped INSERT or volume PUT command.

Integration details

ctrlX Data Layer

The ctrlX Telegraf plugin gathers data from the ctrlX Data Layer, a communication middleware designed for professional automation applications. It connects to ctrlX CORE devices to collect and monitor metrics across industrial and building automation, robotics, and IoT. Configuration options cover connection settings, subscription properties, and sampling rates in detail, so the integration can be tailored to specific monitoring needs while leveraging the capabilities of the ctrlX platform.

Databricks

This configuration turns Telegraf into a lightweight ingestion agent for the Databricks Lakehouse. It leverages the Databricks SQL Statement Execution API 2.0, which accepts authenticated POST requests containing a JSON payload with a statement field. Each Telegraf flush dynamically renders a SQL INSERT (or, for file-based workflows, a PUT ... INTO /Volumes/... command) that lands the metrics in a Unity Catalog table or volume governed by Lakehouse security. Under the hood, Databricks stores successful inserts as Delta Lake transactions, enabling ACID guarantees, time travel, and scalable analytics.

Operators can point warehouse_id at any serverless or classic SQL warehouse, and all authentication is handled with a PAT or service-principal token; no agents or JDBC drivers are required. Because Telegraf's HTTP output supports custom headers, batching, TLS, and proxy settings, the same pattern scales from edge IoT gateways to container sidecars, consolidating infrastructure telemetry, application logs, or business KPIs directly into the Lakehouse for BI, ML, and Lakehouse Monitoring.

Unity Catalog volumes provide a governed staging layer when file uploads and COPY INTO are preferred, and the approach aligns with Databricks' recommended ingestion practices for partners and ISVs.
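
As a concrete sketch, a single Telegraf flush might translate into a request like the one below. The statement and warehouse_id fields are the required core of every SQL Statement Execution API call; the host, token, table name, and warehouse ID shown here are placeholders:

POST https://<workspace-host>/api/2.0/sql/statements
Authorization: Bearer <personal-access-token>
Content-Type: application/json

{
  "statement": "INSERT INTO main.telemetry.metrics VALUES (from_unixtime(1714060800000/1000), 42.5, 'edge-01')",
  "warehouse_id": "<warehouse-id>"
}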

Configuration

ctrlX Data Layer

[[inputs.ctrlx_datalayer]]
   ## Hostname or IP address of the ctrlX CORE Data Layer server
   ##  example: server = "localhost"        # Telegraf is running directly on the device
   ##           server = "192.168.1.1"      # Connect to ctrlX CORE remote via IP
   ##           server = "host.example.com" # Connect to ctrlX CORE remote via hostname
   ##           server = "10.0.2.2:8443"    # Connect to ctrlX CORE Virtual from development environment
   server = "localhost"

   ## Authentication credentials
   username = "boschrexroth"
   password = "boschrexroth"

   ## Use TLS but skip chain & host verification
   # insecure_skip_verify = false

   ## Timeout for HTTP requests. (default: "10s")
   # timeout = "10s"


   ## Create a ctrlX Data Layer subscription.
   ## It is possible to define multiple subscriptions per host. Each subscription can have its own
   ## sampling properties and a list of nodes to subscribe to.
   ## All subscriptions share the same credentials.
   [[inputs.ctrlx_datalayer.subscription]]
      ## The name of the measurement. (default: "ctrlx")
      measurement = "memory"

      ## Configure the ctrlX Data Layer nodes which should be subscribed.
      ## address - node address in ctrlX Data Layer (mandatory)
      ## name    - field name to use in the output (optional, default: base name of address)
      ## tags    - extra node tags to be added to the output metric (optional)
      ## Note: 
      ## Use either the inline notation or the bracketed notation, not both.
      ## The tags property is only supported in bracketed notation due to TOML parser restrictions.
      ## Examples:
      ## Inline notation 
      nodes=[
         {name="available", address="framework/metrics/system/memavailable-mb"},
         {name="used", address="framework/metrics/system/memused-mb"},
      ]
      ## Bracketed notation
      # [[inputs.ctrlx_datalayer.subscription.nodes]]
      #    name   ="available"
      #    address="framework/metrics/system/memavailable-mb"
      #    ## Define extra tags related to node to be added to the output metric (optional)
      #    [inputs.ctrlx_datalayer.subscription.nodes.tags]
      #       node_tag1="node_tag1"
      #       node_tag2="node_tag2"
      # [[inputs.ctrlx_datalayer.subscription.nodes]]
      #    name   ="used"
      #    address="framework/metrics/system/memused-mb"

      ## The switch "output_json_string" enables output of the measurement as JSON.
      ## That way it can be used in a subsequent processor plugin, e.g. the "Starlark Processor Plugin".
      # output_json_string = false

      ## Define extra tags related to subscription to be added to the output metric (optional)
      # [inputs.ctrlx_datalayer.subscription.tags]
      #    subscription_tag1 = "subscription_tag1"
      #    subscription_tag2 = "subscription_tag2"

      ## The interval in which messages shall be sent by the ctrlX Data Layer to this plugin. (default: 1s)
      ## Higher values reduce load on network by queuing samples on server side and sending as a single TCP packet.
      # publish_interval = "1s"

      ## The interval a "keepalive" message is sent if no change of data occurs. (default: 60s)
      ## Only used internally to detect broken network connections.
      # keep_alive_interval = "60s"

      ## The interval an "error" message is sent if an error was received from a node. (default: 10s)
      ## Higher values reduce load on output target and network in case of errors by limiting frequency of error messages.
      # error_interval = "10s"

      ## The interval that defines the fastest rate at which the node values should be sampled and values captured. (default: 1s)
      ## The sampling frequency should be adjusted to the dynamics of the signal to be sampled.
      ## Higher sampling frequencies increase load on the ctrlX Data Layer.
      ## The sampling frequency can be higher than the publish interval. Captured samples are put in a queue and sent in the publish interval.
      ## Note: The minimum sampling interval can be overruled by a global setting in the ctrlX Data Layer configuration ('datalayer/subscriptions/settings').
      # sampling_interval = "1s"

      ## The requested size of the node value queue. (default: 10)
      ## Relevant if more values are captured than can be sent.
      # queue_size = 10

      ## The behaviour of the queue if it is full. (default: "DiscardOldest")
      ## Possible values: 
      ## - "DiscardOldest"
      ##   The oldest value gets deleted from the queue when it is full.
      ## - "DiscardNewest"
      ##   The newest value gets deleted from the queue when it is full.
      # queue_behaviour = "DiscardOldest"

      ## The filter when a new value will be sampled. (default: 0.0)
      ## Calculation rule: If (abs(lastCapturedValue - newValue) > dead_band_value) capture(newValue).
      # dead_band_value = 0.0

      ## The conditions on which a sample should be captured and thus will be sent as a message. (default: "StatusValue")
      ## Possible values:
      ## - "Status"
      ##   Capture the value only, when the state of the node changes from or to error state. Value changes are ignored.
      ## - "StatusValue" 
      ##   Capture when the value changes or the node changes from or to error state.
      ##   See also 'dead_band_value' for what is considered as a value change.
      ## - "StatusValueTimestamp": 
      ##   Capture even if the value is the same, but the timestamp of the value is newer.
      ##   Note: This might lead to high load on the network because every sample will be sent as a message
      ##   even if the value of the node did not change.
      # value_change = "StatusValue"
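
With the memory subscription above, the plugin emits one metric per captured sample. A hedged illustration in InfluxDB line protocol follows; the exact tag set depends on the agent and subscription configuration:

memory,host=ctrlx-core,node=framework/metrics/system/memavailable-mb available=1219.5 1689250000000000000
memory,host=ctrlx-core,node=framework/metrics/system/memused-mb used=750.25 1689250000000000000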

Databricks

[[outputs.http]]
  ## Databricks SQL Statement Execution API endpoint
  url = "https://${DATABRICKS_HOST}/api/2.0/sql/statements"

  ## Use POST to submit each Telegraf batch as a SQL request
  method = "POST"

  ## Personal-access token (PAT) for workspace or service principal
  headers = { Authorization = "Bearer ${DATABRICKS_TOKEN}" }

  ## Send JSON that wraps the metrics batch in a SQL INSERT (or PUT into a Volume)
  content_type = "application/json"

  ## Serialize metrics as JSON so they can be embedded in the SQL statement
  data_format = "json"
  json_timestamp_units = "1ms"

  ## Build the request body. ${...} variables are filled from the environment when the
  ## config loads; the {{range}} template renders one row per metric in the batch,
  ## comma-separated so multi-metric batches produce valid SQL.
  ## Example inserts a row per metric into a Unity Catalog table.
  body_template = """
  {
    "statement": "INSERT INTO ${TARGET_TABLE} VALUES {{range $i, $m := .Metrics}}{{if $i}}, {{end}}(from_unixtime({{$m.timestamp}}/1000), {{$m.fields.usage}}, '{{$m.tags.host}}'){{end}}",
    "warehouse_id": "${WAREHOUSE_ID}"
  }
  """

  ## Optional: request timeout and TLS settings
  ## (batching is controlled by the agent-level metric_batch_size setting)
  # timeout = "10s"
  # insecure_skip_verify = false
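
The INSERT template above assumes a target table with three matching columns. A minimal sketch of the corresponding Unity Catalog DDL, created once from a notebook or SQL editor (the catalog, schema, and table names are placeholders that ${TARGET_TABLE} would point to):

CREATE TABLE IF NOT EXISTS main.telemetry.metrics (
  event_time TIMESTAMP,
  usage      DOUBLE,
  host       STRING
);

Once the table exists, each Telegraf flush appends rows as Delta Lake transactions, so downstream queries always see a consistent snapshot.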

Input and output integration examples

ctrlX Data Layer

  1. Industrial Automation Monitoring: Utilize this plugin to continuously monitor key performance indicators from a manufacturing system controlled by ctrlX CORE devices. By subscribing to specific data nodes that provide real-time metrics such as resource availability or machine uptime, manufacturers can dynamically adjust their operations for increased efficiency and minimal downtime.

  2. Energy Consumption Analysis: Collect energy consumption data from IoT-enabled ctrlX CORE platforms in a smart building setup. By analyzing trends and patterns in energy use, facility managers can optimize operating strategies, reduce energy costs, and support sustainability initiatives, making informed decisions about resource allocation and predictive maintenance.

  3. Predictive Maintenance for Robotics: Gather telemetry data from robotics applications deployed in warehousing environments. By monitoring vibration, temperature, and operational parameters in real-time, organizations can predict equipment failures before they occur, leading to reduced maintenance costs and enhanced robotic system uptime through timely interventions.

  4. Cross-Platform Data Integration: Feed data gathered from ctrlX CORE devices into a centralized cloud data warehouse using this plugin. By streaming real-time metrics to other systems, organizations can create a unified view of operational performance across manufacturing and operational systems, enabling data-driven decision-making across diverse platforms.

Databricks

  1. Edge-to-Lakehouse Telemetry Pipe: Deploy Telegraf on factory PLCs to sample vibration metrics and post them every second to a serverless SQL warehouse. Delta tables power Power BI dashboards that alert engineers when thresholds drift.
  2. Blue-Green CI/CD Rollout Metrics: Attach a Telegraf sidecar to each Kubernetes canary pod; it inserts container stats into a Unity Catalog table tagged by deployment_id, letting Databricks SQL compare error-rate percentiles and auto-rollback underperforming versions.
  3. SaaS Usage Metering: Insert per-tenant API-call counters via the HTTP plugin; a nightly Lakehouse query aggregates usage into invoices, eliminating custom metering microservices.
  4. Security Forensics Lake: Upload JSON batches of Suricata IDS events to a Unity Catalog volume using PUT commands, then run COPY INTO for near-real-time enrichment with Delta Live Tables, producing a searchable threat-intel lake that joins network logs with user session data.

Feedback

Thank you for being part of our community! If you have any general feedback or found any bugs on these pages, we welcome and encourage your input. Please submit your feedback in the InfluxDB community Slack.


Related Integrations

HTTP and InfluxDB Integration

The HTTP plugin collects metrics from one or more HTTP(S) endpoints. It supports various authentication methods and configuration options for data formats.

View Integration

Kafka and InfluxDB Integration

This plugin reads messages from Kafka and allows the creation of metrics based on those messages. It supports various configurations including different Kafka settings and message processing options.

View Integration

Kinesis and InfluxDB Integration

The Kinesis plugin allows for reading metrics from AWS Kinesis streams. It supports multiple input data formats and offers checkpointing features with DynamoDB for reliable message processing.

View Integration