Modbus and Microsoft Fabric Integration
Powerful performance with an easy integration, powered by Telegraf, the open source data connector built by InfluxData.
5B+
Telegraf downloads
#1
Time series database
Source: DB Engines
1B+
Downloads of InfluxDB
2,800+
Contributors
Powerful Performance, Limitless Scale
Collect, organize, and act on massive volumes of high-velocity data with InfluxDB, the #1 time series platform built to scale with Telegraf. Any data is more valuable when you think of it as time series data.
See Ways to Get Started
Input and output integration overview
The Modbus plugin allows you to collect data from Modbus devices using various communication methods, enhancing your ability to monitor and control industrial processes.
The Microsoft Fabric plugin writes metrics to Real-Time Analytics services in Microsoft Fabric, enabling powerful data storage and analysis capabilities.
Integration details
Modbus
The Modbus plugin collects discrete inputs, coils, input registers, and holding registers via Modbus TCP or Modbus RTU/ASCII.
Microsoft Fabric
This plugin allows you to leverage Microsoft Fabric’s capabilities to store and analyze your Telegraf metrics. Eventhouse, the destination data store, is a high-performance, scalable store designed for real-time analytics; it allows you to ingest, store, and query large volumes of data with low latency. The plugin supports both events and metrics with versatile grouping options, and it provides various configuration parameters, including connection strings that specify details such as the data source, ingestion type, and which tables to use for storage. With support for streaming ingestion and Eventstreams, this plugin enables seamless integration and data flow into Microsoft’s analytics ecosystem, allowing for rich data querying and near-real-time processing.
Configuration
Modbus
[[inputs.modbus]]
  name = "Device"
  slave_id = 1
  timeout = "1s"
  configuration_type = "register"
  discrete_inputs = [
    { name = "start", address = [0]},
    { name = "stop", address = [1]},
    { name = "reset", address = [2]},
    { name = "emergency_stop", address = [3]},
  ]
  coils = [
    { name = "motor1_run", address = [0]},
    { name = "motor1_jog", address = [1]},
    { name = "motor1_stop", address = [2]},
  ]
  holding_registers = [
    { name = "power_factor", byte_order = "AB", data_type = "FIXED", scale=0.01, address = [8]},
    { name = "voltage", byte_order = "AB", data_type = "FIXED", scale=0.1, address = [0]},
    { name = "energy", byte_order = "ABCD", data_type = "FIXED", scale=0.001, address = [5,6]},
    { name = "current", byte_order = "ABCD", data_type = "FIXED", scale=0.001, address = [1,2]},
    { name = "frequency", byte_order = "AB", data_type = "UFIXED", scale=0.1, address = [7]},
    { name = "power", byte_order = "ABCD", data_type = "UFIXED", scale=0.1, address = [3,4]},
    { name = "firmware", byte_order = "AB", data_type = "STRING", address = [5, 6, 7, 8, 9, 10, 11, 12]},
  ]
  input_registers = [
    { name = "tank_level", byte_order = "AB", data_type = "INT16", scale=1.0, address = [0]},
    { name = "tank_ph", byte_order = "AB", data_type = "INT16", scale=1.0, address = [1]},
    { name = "pump1_speed", byte_order = "ABCD", data_type = "INT32", scale=1.0, address = [3,4]},
  ]
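The sample above focuses on the register definitions; in practice the plugin also needs to be told how to reach the device. A minimal sketch of the transport settings, assuming a Modbus TCP gateway at a hypothetical address (the commented lines show the serial RTU alternative; adjust addresses and serial parameters to your hardware):

[[inputs.modbus]]
  name = "Device"
  slave_id = 1
  timeout = "1s"

  ## Modbus TCP: reach the device through a TCP gateway (hypothetical address)
  controller = "tcp://192.168.0.10:502"

  ## Modbus RTU/ASCII alternative: talk to a serial adapter instead
  # controller = "file:///dev/ttyUSB0"
  # baud_rate = 9600
  # data_bits = 8
  # parity = "N"
  # stop_bits = 1
  # transmission_mode = "RTU"

  configuration_type = "register"
  ## ... register definitions as shown above ...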
Microsoft Fabric
[[outputs.microsoft_fabric]]
  ## The URI property of the resource on Azure
  connection_string = "https://trd-abcd.xx.kusto.fabric.microsoft.com;Database=kusto_eh;Table Name=telegraf_dump;Key=value"

  ## Client timeout
  # timeout = "30s"
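The connection string determines which Fabric service receives the metrics. A hedged sketch of the two targets mentioned above, with placeholder hostnames and keys; the Eventstream form assumes the Event Hub-compatible connection string exposed by an Eventstream custom endpoint, so verify the exact keys against the plugin documentation:

[[outputs.microsoft_fabric]]
  ## Eventhouse (KQL database) target, following the format shown above
  connection_string = "https://trd-abcd.xx.kusto.fabric.microsoft.com;Database=kusto_eh;Table Name=telegraf_dump;Key=value"

  ## Eventstream target (hypothetical example): an Event Hub-compatible
  ## connection string copied from the Eventstream's custom endpoint
  # connection_string = "Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=<key_name>;SharedAccessKey=<key>;EntityPath=<eventstream_name>"

  ## Client timeout
  # timeout = "30s"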
Input and output integration examples
Modbus
- Basic Usage: To read from a single device, configure it with the device name and IP address, specifying the slave ID and registers of interest.
- Multiple Requests: You can define multiple requests to fetch data from different Modbus slave devices in a single configuration by specifying multiple [[inputs.modbus.request]] sections, as shown in the sketch after this list.
- Data Processing: Utilize the scaling features to convert raw Modbus readings into useful metrics, adjusting for unit conversions as needed.
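A minimal sketch of such a request-based configuration, assuming two slave devices behind the same TCP gateway (addresses, names, and field layouts are illustrative only):

[[inputs.modbus]]
  name = "plant_devices"
  controller = "tcp://localhost:502"
  configuration_type = "request"

  ## Request against slave 1: two scaled holding registers
  [[inputs.modbus.request]]
    slave_id = 1
    byte_order = "ABCD"
    register = "holding"
    fields = [
      { address = 0, name = "voltage", type = "INT16", scale = 0.1 },
      { address = 1, name = "current", type = "INT16", scale = 0.001 },
    ]

  ## Request against slave 2: a single input register
  [[inputs.modbus.request]]
    slave_id = 2
    byte_order = "ABCD"
    register = "input"
    fields = [
      { address = 0, name = "tank_level", type = "INT16" },
    ]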
Microsoft Fabric
- Real-time Monitoring Dashboards: Utilize the Microsoft Fabric plugin to feed live metrics from your applications into a real-time dashboard on Microsoft Fabric. This allows teams to visualize key performance indicators instantly, enabling quick decision-making and timely responses to performance issues.
- Automated Data Ingestion from IoT Devices: Use this plugin in scenarios where metrics from IoT devices need to be ingested into Azure for analysis. Using the plugin’s capabilities, data can be streamed continuously, facilitating real-time analytics and reporting without complex coding efforts.
- Cross-Platform Data Aggregation: Leverage the plugin to consolidate metrics from multiple systems and applications into a single Azure Data Explorer table. This use case enables easier data management and analysis by centralizing disparate data sources within a unified analytics framework.
- Enhanced Event Transformation Workflows: Integrate the plugin with Eventstreams to facilitate real-time event ingestion and transformation. By configuring different metrics and partition keys, users can manipulate the flow of data as it enters the system, allowing for advanced processing before the data reaches its final destination.
Feedback
Thank you for being part of our community! If you have any general feedback or found any bugs on these pages, we welcome and encourage your input. Please submit your feedback in the InfluxDB community Slack.
Related Integrations
HTTP and InfluxDB Integration
The HTTP plugin collects metrics from one or more HTTP(S) endpoints. It supports various authentication methods and configuration options for data formats.
View Integration
Kafka and InfluxDB Integration
This plugin reads messages from Kafka and allows the creation of metrics based on those messages. It supports various configurations including different Kafka settings and message processing options.
View Integration
Kinesis and InfluxDB Integration
The Kinesis plugin allows for reading metrics from AWS Kinesis streams. It supports multiple input data formats and offers checkpointing features with DynamoDB for reliable message processing.
View Integration