OPC UA and Parquet Integration

Powerful performance with an easy integration, powered by Telegraf, the open source data connector built by InfluxData.

Info: This is not the recommended configuration for real-time query at scale. For query and compression optimization, high-speed ingest, and high availability, consider the OPC UA and InfluxDB integration.


Input and output integration overview

The OPC UA plugin provides an interface for retrieving data from OPC UA server devices, facilitating effective data collection and monitoring.

This plugin writes metrics to Parquet files, using a schema derived from the metrics grouped by name. It supports file rotation and buffered writing for performance.

Integration details

OPC UA

The OPC UA Plugin retrieves data from devices that communicate using the OPC UA protocol, allowing you to collect and monitor data from your OPC UA servers.
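
A minimal configuration sketch, assuming an unauthenticated test server at opc.tcp://localhost:4840 (the endpoint and node identifier are placeholders; the fully annotated reference follows below):

[[inputs.opcua]]
  ## Placeholder endpoint; point this at your own server.
  endpoint = "opc.tcp://localhost:4840"
  ## Unencrypted, anonymous access for a first test.
  security_policy = "None"
  security_mode = "None"
  auth_method = "Anonymous"
  ## One hypothetical numeric node in namespace 2.
  nodes = [
    {name="status", namespace="2", identifier_type="i", identifier="1001"},
  ]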

Parquet

The Parquet output plugin for Telegraf writes metrics to Parquet files, a columnar storage format optimized for analytics. By default, the plugin groups metrics by name, writing all metrics that share a name to the same file; metrics whose schema does not match that file's schema are dropped. The plugin generates an Apache Arrow schema from all grouped metrics, so the schema reflects the union of their fields and tags. Writes are buffered: metrics are held in memory and flushed to disk in batches for efficiency. Parquet files must be closed properly to be readable, so shut Telegraf down cleanly; improper closure can leave files unreadable. The plugin also supports rotating files after a set time interval, which prevents overwriting existing files and avoids schema conflicts when a file with the same name already exists.
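
Given the closure and file-collision caveats above, enabling time-based rotation is a sensible precaution. A minimal sketch (the directory and interval are illustrative):

[[outputs.parquet]]
  ## Assumed existing, writable directory.
  directory = "./parquet"
  ## Rotate hourly so finished files are closed cleanly and never overwritten.
  rotation_interval = "1h"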

Configuration

OPC UA


[[inputs.opcua]]
  ## Metric name
  # name = "opcua"
  #
  ## OPC UA Endpoint URL
  # endpoint = "opc.tcp://localhost:4840"
  #
  ## Maximum time allowed to establish a connection to the endpoint.
  # connect_timeout = "10s"
  #
  ## Maximum time allowed for a request over the established connection.
  # request_timeout = "5s"

  ## Maximum time that a session shall remain open without activity.
  # session_timeout = "20m"
  #
  ## Security policy, one of "None", "Basic128Rsa15", "Basic256",
  ## "Basic256Sha256", or "auto"
  # security_policy = "auto"
  #
  ## Security mode, one of "None", "Sign", "SignAndEncrypt", or "auto"
  # security_mode = "auto"
  #
  ## Path to cert.pem. Required when security mode or policy isn't "None".
  ## If cert path is not supplied, self-signed cert and key will be generated.
  # certificate = "/etc/telegraf/cert.pem"
  #
  ## Path to private key.pem. Required when security mode or policy isn't "None".
  ## If key path is not supplied, self-signed cert and key will be generated.
  # private_key = "/etc/telegraf/key.pem"
  #
  ## Authentication Method, one of "Certificate", "UserName", or "Anonymous".  To
  ## authenticate using a specific ID, select 'Certificate' or 'UserName'
  # auth_method = "Anonymous"
  #
  ## Username. Required for auth_method = "UserName"
  # username = ""
  #
  ## Password. Required for auth_method = "UserName"
  # password = ""
  #
  ## Option to select the metric timestamp to use. Valid options are:
  ##     "gather" -- uses the time of receiving the data in telegraf
  ##     "server" -- uses the timestamp provided by the server
  ##     "source" -- uses the timestamp provided by the source
  # timestamp = "gather"
  #
  ## Client trace messages
  ## When set to true, and debug mode enabled in the agent settings, the OPCUA
  ## client's messages are included in telegraf logs. These messages are very
  ## noisy, but essential for debugging issues.
  # client_trace = false
  #
  ## Include additional Fields in each metric
  ## Available options are:
  ##   DataType -- OPC-UA Data Type (string)
  # optional_fields = []
  #
  ## Node ID configuration
  ## name              - field name to use in the output
  ## namespace         - OPC UA namespace of the node (integer value 0 through 3)
  ## identifier_type   - OPC UA ID type (s=string, i=numeric, g=guid, b=opaque)
  ## identifier        - OPC UA ID (tag as shown in opcua browser)
  ## tags              - extra tags to be added to the output metric (optional); deprecated in 1.25.0; use default_tags
  ## default_tags      - extra tags to be added to the output metric (optional)
  ##
  ## Use either the inline notation or the bracketed notation, not both.
  #
  ## Inline notation (default_tags not supported yet)
  # nodes = [
  #   {name="", namespace="", identifier_type="", identifier="", tags=[["tag1", "value1"], ["tag2", "value2"]},
  #   {name="", namespace="", identifier_type="", identifier=""},
  # ]
  #
  ## Bracketed notation
  # [[inputs.opcua.nodes]]
  #   name = "node1"
  #   namespace = ""
  #   identifier_type = ""
  #   identifier = ""
  #   default_tags = { tag1 = "value1", tag2 = "value2" }
  #
  # [[inputs.opcua.nodes]]
  #   name = "node2"
  #   namespace = ""
  #   identifier_type = ""
  #   identifier = ""
  #
  ## Node Group
  ## Sets defaults so they aren't required in every node.
  ## Default values can be set for:
  ## * Metric name
  ## * OPC UA namespace
  ## * Identifier
  ## * Default tags
  ##
  ## Multiple node groups are allowed
  #[[inputs.opcua.group]]
  ## Group Metric name. Overrides the top level name.  If unset, the
  ## top level name is used.
  # name =
  #
  ## Group default namespace. If a node in the group doesn't set its
  ## namespace, this is used.
  # namespace =
  #
  ## Group default identifier type. If a node in the group doesn't set its
  ## identifier type, this is used.
  # identifier_type =
  #
  ## Default tags that are applied to every node in this group. Can be
  ## overwritten in a node by setting a different value for the tag name.
  ##   example: default_tags = { tag1 = "value1" }
  # default_tags = {}
  #
  ## Node ID Configuration.  Array of nodes with the same settings as above.
  ## Use either the inline notation or the bracketed notation, not both.
  #
  ## Inline notation (default_tags not supported yet)
  # nodes = [
  #  {name="node1", namespace="", identifier_type="", identifier=""},
  #  {name="node2", namespace="", identifier_type="", identifier=""},
  #]
  #
  ## Bracketed notation
  # [[inputs.opcua.group.nodes]]
  #   name = "node1"
  #   namespace = ""
  #   identifier_type = ""
  #   identifier = ""
  #   default_tags = { tag1 = "override1", tag2 = "value2" }
  #
  # [[inputs.opcua.group.nodes]]
  #   name = "node2"
  #   namespace = ""
  #   identifier_type = ""
  #   identifier = ""

  ## Enable workarounds required by some devices to work correctly
  # [inputs.opcua.workarounds]
    ## Set additional valid status codes, StatusOK (0x0) is always considered valid
  # additional_valid_status_codes = ["0xC0"]

  # [inputs.opcua.request_workarounds]
    ## Use unregistered reads instead of registered reads
  # use_unregistered_reads = false

Parquet

[[outputs.parquet]]
  ## Directory to write parquet files in. If a file already exists, the output
  ## will attempt to continue using the existing file.
  # directory = "."
  
  ## Files are rotated after the time interval specified. When set to 0 no time
  ## based rotation is performed.
  # rotation_interval = "0h"
  
  ## Timestamp field name
  ## Field name to use to store the timestamp. If set to an empty string, then
  ## the timestamp is omitted.
  # timestamp_field_name = "timestamp"
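
Putting the two halves together, a minimal end-to-end sketch that reads a single OPC UA node and writes it to Parquet; the endpoint, node identifier, and directory are assumptions to adapt:

[[inputs.opcua]]
  ## Placeholder server address.
  endpoint = "opc.tcp://localhost:4840"
  security_policy = "None"
  security_mode = "None"

  ## Hypothetical node ns=3;s=Temperature on the server.
  [[inputs.opcua.nodes]]
    name = "temperature"
    namespace = "3"
    identifier_type = "s"
    identifier = "Temperature"

[[outputs.parquet]]
  ## Assumed existing, writable directory.
  directory = "./parquet"
  rotation_interval = "1h"

With this configuration, readings are gathered from the node and grouped under the default opcua metric name, producing a Parquet file in ./parquet that rotates hourly.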

Input and output integration examples

OPC UA

  1. Basic Configuration: Set up the plugin with your OPC UA server endpoint and desired metrics. This allows Telegraf to start gathering metrics from the configured nodes.

  2. Node ID Setup: Use the configuration to specify specific nodes, such as temperature sensors, to monitor their values in real-time. For example, configure node ns=3;s=Temperature to gather temperature data directly.

  3. Group Configuration: Simplify monitoring multiple nodes by grouping them under a single configuration; this sets defaults for all nodes in that group, reducing redundancy in setup (see the sketch after this list).
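
A sketch of the group setup from example 3, assuming two string-identifier nodes in namespace 3 (the endpoint and all names are placeholders):

[[inputs.opcua]]
  ## Placeholder endpoint.
  endpoint = "opc.tcp://localhost:4840"

  [[inputs.opcua.group]]
    ## Group-level defaults applied to every node below.
    name = "plc_line1"
    namespace = "3"
    identifier_type = "s"
    default_tags = { line = "1" }
    ## These nodes inherit the namespace, identifier type, and tags above.
    nodes = [
      {name="temperature", identifier="Temperature"},
      {name="pressure", identifier="Pressure"},
    ]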

Parquet

  1. Data Lake Ingestion: Utilize the Parquet plugin to store metrics from various sources into a data lake. By writing metrics in parquet format, you establish a standardized and efficient way to manage time-series data, enabling faster querying capabilities and seamless integration with analytics tools like Apache Spark or AWS Athena. This setup can significantly improve data retrieval times and analysis workflows.

  2. Long-term Storage of Metrics: Implement the Parquet plugin in a monitoring setup where metrics are collected over time from multiple applications. This allows for long-term storage of performance data in a compact format, making it cost-effective to store vast amounts of historical data while preserving quick retrieval and analysis later on. By archiving metrics in parquet files, organizations can maintain compliance and create detailed reports from historical performance trends (see the sketch after this list).

  3. Analytics and Reporting: After writing metrics to parquet files, leverage tools like Apache Arrow or PyArrow to perform complex analytical queries directly on the files without needing to load all the data into memory. This can enhance reporting capabilities, allowing teams to generate insights and visualization from large datasets efficiently, thereby improving decision-making processes based on accurate, up-to-date performance metrics.

  4. Integrating with Data Warehouses: Use the Parquet plugin as part of a data integration pipeline that feeds into a modern data warehouse. By converting metrics to parquet format, the data can be easily ingested by systems like Snowflake or Google BigQuery, enabling powerful analytics and business intelligence capabilities that drive actionable insights from the collected metrics.
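
A sketch of the long-term storage setup from example 2, with an illustrative archive path and hourly rotation:

[[outputs.parquet]]
  ## Hypothetical archive location on a large volume.
  directory = "/var/lib/telegraf/parquet"
  ## Hourly rotation keeps individual files small and ensures they are closed cleanly.
  rotation_interval = "1h"
  ## Store timestamps under the default column name.
  timestamp_field_name = "timestamp"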

Feedback

Thank you for being part of our community! If you have any general feedback or find any bugs on these pages, we welcome and encourage your input. Please submit your feedback in the InfluxDB community Slack.

Powerful Performance, Limitless Scale

Collect, organize, and act on massive volumes of high-velocity data. Any data is more valuable when you think of it as time series data with InfluxDB, the #1 time series platform built to scale with Telegraf.

See Ways to Get Started

Related Integrations

HTTP and InfluxDB Integration

The HTTP plugin collects metrics from one or more HTTP(S) endpoints. It supports various authentication methods and configuration options for data formats.

View Integration

Kafka and InfluxDB Integration

This plugin reads messages from Kafka and allows the creation of metrics based on those messages. It supports various configurations including different Kafka settings and message processing options.

View Integration

Kinesis and InfluxDB Integration

The Kinesis plugin allows for reading metrics from AWS Kinesis streams. It supports multiple input data formats and offers checkpointing features with DynamoDB for reliable message processing.

View Integration