ctrlX Data Layer and Parquet Integration
Powerful performance with an easy integration, powered by Telegraf, the open source data connector built by InfluxData.
5B+ Telegraf downloads · #1 time series database (source: DB-Engines) · 1B+ downloads of InfluxDB · 2,800+ contributors
Powerful Performance, Limitless Scale
Collect, organize, and act on massive volumes of high-velocity data. Any data is more valuable when you think of it as time series data, and InfluxDB is the #1 time series platform built to scale with Telegraf.
See Ways to Get Started
Input and output integration overview
The ctrlX Data Layer plugin gathers data from the ctrlX Data Layer middleware, which is widely used in industrial automation.
The Parquet plugin writes metrics to Parquet files, using one schema per group of metrics that share a name. It supports file rotation and buffered writing for better performance.
Integration details
ctrlX Data Layer
The ctrlX Telegraf plugin gathers data from the ctrlX Data Layer, a communication middleware for professional automation applications. It connects to ctrlX CORE devices to collect and monitor metrics from industrial and building automation, robotics, and IoT workloads. Configuration options cover connection settings, subscription properties, and sampling rates, so the integration can be tailored to specific monitoring needs while leveraging the capabilities of the ctrlX platform.
Parquet
The Parquet output plugin for Telegraf writes metrics to Parquet files, a columnar storage format optimized for analytics. By default, the plugin groups metrics by name and writes each group to its own file, generating an Apache Arrow schema that reflects the union of all fields and tags in the group; metrics whose schema does not align with the existing one are dropped. Writing is buffered: metrics are held in memory and flushed to disk in batches for efficiency. Parquet files must be closed properly to remain readable, so shut Telegraf down gracefully; an improperly closed file can be unreadable. The plugin also supports rotating files after a set time interval, and it avoids overwriting existing files and creating schema conflicts when a file with the same name already exists.
Configuration
ctrlX Data Layer
[[inputs.ctrlx_datalayer]]
## Hostname or IP address of the ctrlX CORE Data Layer server
## example: server = "localhost" # Telegraf is running directly on the device
## server = "192.168.1.1" # Connect to ctrlX CORE remote via IP
## server = "host.example.com" # Connect to ctrlX CORE remote via hostname
## server = "10.0.2.2:8443" # Connect to ctrlX CORE Virtual from development environment
server = "localhost"
## Authentication credentials
username = "boschrexroth"
password = "boschrexroth"
## Use TLS but skip chain & host verification
# insecure_skip_verify = false
## Timeout for HTTP requests. (default: "10s")
# timeout = "10s"
## Create a ctrlX Data Layer subscription.
## It is possible to define multiple subscriptions per host. Each subscription can have its own
## sampling properties and a list of nodes to subscribe to.
## All subscriptions share the same credentials.
[[inputs.ctrlx_datalayer.subscription]]
## The name of the measurement. (default: "ctrlx")
measurement = "memory"
## Configure the ctrlX Data Layer nodes which should be subscribed.
## address - node address in ctrlX Data Layer (mandatory)
## name - field name to use in the output (optional, default: base name of address)
## tags - extra node tags to be added to the output metric (optional)
## Note:
## Use either the inline notation or the bracketed notation, not both.
## The tags property is only supported in bracketed notation due to TOML parser restrictions
## Examples:
## Inline notation
nodes=[
{name="available", address="framework/metrics/system/memavailable-mb"},
{name="used", address="framework/metrics/system/memused-mb"},
]
## Bracketed notation
# [[inputs.ctrlx_datalayer.subscription.nodes]]
# name ="available"
# address="framework/metrics/system/memavailable-mb"
# ## Define extra tags related to node to be added to the output metric (optional)
# [inputs.ctrlx_datalayer.subscription.nodes.tags]
# node_tag1="node_tag1"
# node_tag2="node_tag2"
# [[inputs.ctrlx_datalayer.subscription.nodes]]
# name ="used"
# address="framework/metrics/system/memused-mb"
## The switch "output_json_string" enables output of the measurement as a JSON string.
## That way it can be used in a subsequent processor plugin, e.g. the "Starlark Processor Plugin".
# output_json_string = false
## Define extra tags related to subscription to be added to the output metric (optional)
# [inputs.ctrlx_datalayer.subscription.tags]
# subscription_tag1 = "subscription_tag1"
# subscription_tag2 = "subscription_tag2"
## The interval in which messages shall be sent by the ctrlX Data Layer to this plugin. (default: 1s)
## Higher values reduce network load by queuing samples on the server side and sending them as a single TCP packet.
# publish_interval = "1s"
## The interval a "keepalive" message is sent if no change of data occurs. (default: 60s)
## Only used internally to detect broken network connections.
# keep_alive_interval = "60s"
## The interval an "error" message is sent if an error was received from a node. (default: 10s)
## Higher values reduce load on output target and network in case of errors by limiting frequency of error messages.
# error_interval = "10s"
## The interval that defines the fastest rate at which the node values should be sampled and values captured. (default: 1s)
## The sampling frequency should be adjusted to the dynamics of the signal to be sampled.
## Higher sampling frequencies increase the load on the ctrlX Data Layer.
## The sampling frequency can be higher than the publish interval. Captured samples are queued and sent at the publish interval.
## Note: The minimum sampling interval can be overruled by a global setting in the ctrlX Data Layer configuration ('datalayer/subscriptions/settings').
# sampling_interval = "1s"
## The requested size of the node value queue. (default: 10)
## Relevant if more values are captured than can be sent.
# queue_size = 10
## The behaviour of the queue if it is full. (default: "DiscardOldest")
## Possible values:
## - "DiscardOldest"
## The oldest value gets deleted from the queue when it is full.
## - "DiscardNewest"
## The newest value gets deleted from the queue when it is full.
# queue_behaviour = "DiscardOldest"
## The dead band filter that determines when a new value is sampled. (default: 0.0)
## Calculation rule: If (abs(lastCapturedValue - newValue) > dead_band_value) capture(newValue).
# dead_band_value = 0.0
## The conditions on which a sample should be captured and thus will be sent as a message. (default: "StatusValue")
## Possible values:
## - "Status"
## Capture the value only, when the state of the node changes from or to error state. Value changes are ignored.
## - "StatusValue"
## Capture when the value changes or the node changes from or to error state.
## See also 'dead_band_value' for what is considered as a value change.
## - "StatusValueTimestamp":
## Capture even if the value is the same, but the timestamp of the value is newer.
## Note: This might lead to high load on the network because every sample will be sent as a message
## even if the value of the node did not change.
# value_change = "StatusValue"
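The sampling settings above interact: values can be captured faster than they are published, with intermediate samples held in the node value queue. As a minimal sketch (the measurement name, node address, and all values are illustrative assumptions, not defaults), a subscription that samples at 10 Hz but publishes once per second could look like this:
[[inputs.ctrlx_datalayer.subscription]]
measurement = "drive_telemetry" # hypothetical measurement name
sampling_interval = "100ms" # capture up to 10 samples per second
publish_interval = "1s" # send the queued samples once per second in a single packet
queue_size = 20 # headroom for the ~10 samples captured per publish interval
nodes = [
{name="velocity", address="motion/axs/Axis_1/state/values/actual/vel"}, # hypothetical node address
]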
Parquet
[[outputs.parquet]]
## Directory to write parquet files in. If a file already exists the output
## will attempt to continue using the existing file.
# directory = "."
## Files are rotated after the time interval specified. When set to 0 no time
## based rotation is performed.
# rotation_interval = "0h"
## Timestamp field name
## Field name to use to store the timestamp. If set to an empty string, then
## the timestamp is omitted.
# timestamp_field_name = "timestamp"
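Putting the two plugins together yields a complete collection pipeline. The following minimal sketch assumes Telegraf runs directly on the ctrlX CORE and writes the memory metrics from the subscription example above into hourly-rotated Parquet files; the output directory is an assumption, not a default:
[[inputs.ctrlx_datalayer]]
server = "localhost"
username = "boschrexroth"
password = "boschrexroth"
[[inputs.ctrlx_datalayer.subscription]]
measurement = "memory"
nodes = [
{name="available", address="framework/metrics/system/memavailable-mb"},
{name="used", address="framework/metrics/system/memused-mb"},
]
[[outputs.parquet]]
directory = "/var/lib/telegraf/parquet" # assumed path; any writable directory works
rotation_interval = "1h" # rotate hourly so closed files remain readable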
Input and output integration examples
ctrlX Data Layer
- Industrial Automation Monitoring: Utilize this plugin to continuously monitor key performance indicators from a manufacturing system controlled by ctrlX CORE devices. By subscribing to specific data nodes that provide real-time metrics such as resource availability or machine uptime, manufacturers can dynamically adjust their operations for increased efficiency and minimal downtime.
- Energy Consumption Analysis: Collect energy consumption data from IoT-enabled ctrlX CORE platforms in a smart building setup. By analyzing trends and patterns in energy use, facility managers can optimize operating strategies, reduce energy costs, and support sustainability initiatives, making informed decisions about resource allocation and predictive maintenance.
- Predictive Maintenance for Robotics: Gather telemetry data from robotics applications deployed in warehousing environments. By monitoring vibration, temperature, and operational parameters in real time, organizations can predict equipment failures before they occur, reducing maintenance costs and improving robotic system uptime through timely interventions.
- Cross-Platform Data Integration: Stream data gathered from ctrlX CORE devices into a centralized cloud data warehouse using this plugin. By forwarding real-time metrics to other systems, organizations can create a unified view of operational performance across manufacturing and operational systems, enabling data-driven decision-making across diverse platforms.
Parquet
- Data Lake Ingestion: Utilize the Parquet plugin to store metrics from various sources in a data lake. Writing metrics in Parquet format establishes a standardized, efficient way to manage time series data, enabling faster querying and seamless integration with analytics tools like Apache Spark or AWS Athena, which can significantly improve data retrieval times and analysis workflows.
- Long-term Storage of Metrics: Implement the Parquet plugin in a monitoring setup where metrics are collected over time from multiple applications. Storing performance data in a compact format makes it cost-effective to keep vast amounts of historical data while preserving quick retrieval and analysis. By archiving metrics in Parquet files, organizations can maintain compliance and build detailed reports from historical performance trends.
- Analytics and Reporting: After writing metrics to Parquet files, leverage tools like Apache Arrow or PyArrow to run analytical queries directly on the files without loading all the data into memory. This enhances reporting, allowing teams to generate insights and visualizations from large datasets efficiently and to base decisions on accurate, up-to-date performance metrics (a minimal PyArrow sketch follows this list).
- Integrating with Data Warehouses: Use the Parquet plugin as part of a data integration pipeline that feeds a modern data warehouse. Once converted to Parquet format, the data can be easily ingested by systems like Snowflake or Google BigQuery, enabling powerful analytics and business intelligence capabilities driven by the collected metrics.
Feedback
Thank you for being part of our community! If you have any general feedback or find any bugs on these pages, we welcome and encourage your input. Please submit your feedback in the InfluxDB community Slack.
Related Integrations
HTTP and InfluxDB Integration
The HTTP plugin collects metrics from one or more HTTP(S) endpoints. It supports various authentication methods and configuration options for data formats.
View Integration
Kafka and InfluxDB Integration
This plugin reads messages from Kafka and allows the creation of metrics based on those messages. It supports various configurations including different Kafka settings and message processing options.
View Integration
Kinesis and InfluxDB Integration
The Kinesis plugin allows for reading metrics from AWS Kinesis streams. It supports multiple input data formats and offers checkpointing features with DynamoDB for reliable message processing.
View Integration