IPMI Sensor and IoTDB Integration
Powerful performance with an easy integration, powered by Telegraf, the open source data connector built by InfluxData.
5B+
Telegraf downloads
#1
Time series database
Source: DB Engines
1B+
Downloads of InfluxDB
2,800+
Contributors
Powerful Performance, Limitless Scale
Collect, organize, and act on massive volumes of high-velocity data. Any data is more valuable when you think of it as time series data with InfluxDB, the #1 time series platform built to scale with Telegraf.
See Ways to Get Started
Input and output integration overview
The IPMI Sensor Plugin facilitates the collection of server health metrics directly from hardware via the IPMI protocol, querying sensor data from either local or remote systems.
This plugin saves Telegraf metrics to an Apache IoTDB backend, handling the session connection and data insertion.
Integration details
IPMI Sensor
The IPMI Sensor plugin is designed to gather bare-metal metrics via the command line utility ipmitool, which interfaces with the Intelligent Platform Management Interface (IPMI). This protocol provides management and monitoring capabilities for hardware components in server systems, allowing the retrieval of critical system health metrics such as temperature, fan speeds, and power supply status from both local and remote servers. When configured without specified servers, the plugin defaults to querying the local machine's sensor statistics using the ipmitool sdr command. For remote hosts, authentication is supported through username and password using the command format ipmitool -I lan -H SERVER -U USERID -P PASSW0RD sdr. This flexibility allows users to monitor systems effectively across various environments. The plugin also supports multiple sensor types, including chassis power status and DCMI power readings, catering to administrators who need real-time insight into server operations.
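As a brief sketch of a remote-host setup (the address and credentials below are placeholders, not defaults), the server list and sensor selection might look like this:

[[inputs.ipmi_sensor]]
## Placeholder BMC address and credentials -- replace with your own.
servers = ["USERID:PASSW0RD@lan(192.168.1.1)"]
## Collect the default sensor data records plus DCMI power readings.
sensors = ["sdr", "dcmi_power_reading"]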
IoTDB
Apache IoTDB (Database for Internet of Things) is an IoT-native database offering high performance for data management and analysis, deployable on the edge and in the cloud. Its lightweight architecture, high performance, and rich feature set make it a strong fit for massive data storage, high-speed data ingestion, and complex analytics in industrial IoT settings. IoTDB integrates deeply with Apache Hadoop, Spark, and Flink, further enhancing its capabilities for handling large-scale data and sophisticated processing tasks.
Configuration
IPMI Sensor
[[inputs.ipmi_sensor]]
## Specify the path to the ipmitool executable
# path = "/usr/bin/ipmitool"
## Use sudo
## Setting 'use_sudo' to true will make use of sudo to run ipmitool.
## Sudo must be configured to allow the telegraf user to run ipmitool
## without a password.
# use_sudo = false
## Servers
## Specify one or more servers via a url. If no servers are specified, local
## machine sensor stats will be queried. Uses the format:
## [username[:password]@][protocol[(address)]]
## e.g. root:passwd@lan(127.0.0.1)
# servers = ["USERID:PASSW0RD@lan(192.168.1.1)"]
## Session privilege level
## Choose from: CALLBACK, USER, OPERATOR, ADMINISTRATOR
# privilege = "ADMINISTRATOR"
## Timeout
## Timeout for the ipmitool command to complete.
# timeout = "20s"
## Metric schema version
## See the plugin readme for more information on schema versioning.
# metric_version = 1
## Sensors to collect
## Choose from:
## * sdr: default, collects sensor data records
## * chassis_power_status: collects the power status of the chassis
## * dcmi_power_reading: collects the power readings from the Data Center Management Interface
# sensors = ["sdr"]
## Hex key
## Optionally provide the hex key for the IPMI connection.
# hex_key = ""
## Cache
## If ipmitool should use a cache
## Using a cache can speed up collection times depending on your device.
# use_cache = false
## Path to the ipmitool cache file (defaults to OS temp dir)
## The provided path must exist and must be writable
# cache_path = ""
IoTDB
[[outputs.iotdb]]
## Configuration of IoTDB server connection
host = "127.0.0.1"
# port = "6667"
## Configuration of authentication
# user = "root"
# password = "root"
## Timeout to open a new session.
## A value of zero means no timeout.
# timeout = "5s"
## Configuration of type conversion for 64-bit unsigned int
## IoTDB currently DOES NOT support unsigned integers (version 13.x).
## 32-bit unsigned integers are safely converted into 64-bit signed integers by the plugin,
## however, this is not true for 64-bit values in general as overflows may occur.
## The following setting allows you to specify the handling of 64-bit unsigned integers.
## Available values are:
## - "int64" -- convert to 64-bit signed integers and accept overflows
## - "int64_clip" -- convert to 64-bit signed integers and clip the values on overflow to 9,223,372,036,854,775,807
## - "text" -- convert to the string representation of the value
# uint64_conversion = "int64_clip"
## Configuration of TimeStamp
## Timestamps are always saved as 64-bit integers. timestamp_precision specifies the unit of the timestamp.
## Available values:
## "second", "millisecond", "microsecond", "nanosecond"(default)
# timestamp_precision = "nanosecond"
## Handling of tags
## Tags are not fully supported by IoTDB.
## A guide with suggestions on how to handle tags can be found here:
## https://iotdb.apache.org/UserGuide/Master/API/InfluxDB-Protocol.html
##
## Available values are:
## - "fields" -- convert tags to fields in the measurement
## - "device_id" -- attach tags to the device ID
##
## For example, a metric named "root.sg.device" with the tags `tag1: "private"` and `tag2: "working"` and
## fields `s1: 100` and `s2: "hello"` will result in the following representations in IoTDB
## - "fields" -- root.sg.device, s1=100, s2="hello", tag1="private", tag2="working"
## - "device_id" -- root.sg.device.private.working, s1=100, s2="hello"
# convert_tags_to = "device_id"
## Handling of unsupported characters
## Some characters are not supported in path names, depending on the IoTDB version.
## A guide with suggestions on valid paths can be found here:
## for iotdb 0.13.x -> https://iotdb.apache.org/UserGuide/V0.13.x/Reference/Syntax-Conventions.html#identifiers
## for iotdb 1.x.x and above -> https://iotdb.apache.org/UserGuide/V1.3.x/User-Manual/Syntax-Rule.html#identifier
##
## Available values are:
## - "1.0", "1.1", "1.2", "1.3" -- enclose in `` the world having forbidden character
## such as @ $ # : [ ] { } ( ) space
## - "0.13" -- enclose in `` the world having forbidden character
## such as space
##
## Keep this section commented if you don't want to sanitize the path
# sanitize_tag = "1.3"
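Putting the two plugins together, a minimal end-to-end sketch might look like the following; the collection interval, IoTDB address, and credentials are illustrative assumptions rather than recommendations.

[agent]
## Poll the sensors every 30 seconds (illustrative choice).
interval = "30s"

[[inputs.ipmi_sensor]]
## No servers specified, so the local machine is queried via ipmitool sdr.
sensors = ["sdr"]

[[outputs.iotdb]]
## Assumed local IoTDB instance with default credentials.
host = "127.0.0.1"
port = "6667"
user = "root"
password = "root"
## Fold tags into the IoTDB device ID, as described in the tag handling notes above.
convert_tags_to = "device_id"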
Input and output integration examples
IPMI Sensor
- Centralized Monitoring Dashboard: Utilize the IPMI Sensor plugin to gather metrics from multiple servers and compile them into a centralized monitoring dashboard. This enables real-time visibility into server health across data centers. Administrators can track metrics like temperature and power usage, helping them make data-driven decisions about resource allocation, potential failures, and maintenance schedules.
- Automated Power Alerts: Incorporate the plugin into an alerting system that monitors chassis power status and triggers alerts when anomalies are detected (a configuration sketch follows this list). For instance, if the power status indicates a failure or if wattage values exceed expected thresholds, automated notifications can be sent to operations teams, ensuring prompt attention to hardware issues.
- Energy Consumption Analysis: Leverage the DCMI power readings collected via the plugin to analyze energy consumption patterns of hardware over time. By integrating these readings with analytics platforms, organizations can identify opportunities to reduce power usage, optimize efficiency, and potentially decrease operational costs in large server farms or cloud infrastructures.
- Health Check Automation: Schedule regular health checks by using the IPMI Sensor plugin to collect data from a fleet of servers. This data can be logged and compared against historical performance metrics to identify trends, outliers, or signs of impending hardware failure, allowing IT teams to take proactive measures and reduce downtime.
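For the power-alert scenario above, the collection side only needs the power-related sensor groups enabled; thresholding and notification would be handled by a downstream alerting tool. A minimal sketch:

[[inputs.ipmi_sensor]]
## Collect power status and DCMI readings in addition to the default records.
sensors = ["sdr", "chassis_power_status", "dcmi_power_reading"]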
IoTDB
- Real-Time IoT Monitoring: Utilize the IoTDB plugin to gather sensor data from various IoT devices and save it in an Apache IoTDB backend, facilitating real-time monitoring of environmental conditions such as temperature and humidity (see the sketch after this list). This use case enables organizations to analyze trends over time and make informed decisions based on historical data, while also utilizing IoTDB’s efficient storage and querying capabilities.
- Smart Agriculture Data Collection: Use the IoTDB plugin to collect metrics from smart agriculture sensors deployed in fields. By transmitting moisture levels, nutrient content, and atmospheric conditions to IoTDB, farmers can access detailed insights into optimal planting and watering schedules, thus improving crop yields and resource management.
- Energy Consumption Analytics: Leverage the IoTDB plugin to track energy consumption metrics from smart meters across a utility network. This integration enables analytics to identify peaks in usage and predict future consumption patterns, ultimately supporting energy conservation initiatives and improved utility management.
- Automated Industrial Equipment Monitoring: Use this plugin to gather operational metrics from machinery in a manufacturing plant and store them in IoTDB for analysis. This setup can help identify inefficiencies, predictive maintenance needs, and operational anomalies, ensuring optimal performance and minimizing unexpected downtimes.
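On the output side of the real-time monitoring scenario in the first item, one possible sketch is shown below; the tag handling and timestamp precision are illustrative choices, not requirements.

[[outputs.iotdb]]
## Assumed local IoTDB instance; adjust host and credentials for your deployment.
host = "127.0.0.1"
## Fold tags such as location or device name into the IoTDB device path.
convert_tags_to = "device_id"
## Millisecond precision is typically sufficient for environmental sensors.
timestamp_precision = "millisecond"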
Feedback
Thank you for being part of our community! If you have any general feedback or find any bugs on these pages, we welcome and encourage your input. Please submit your feedback in the InfluxDB community Slack.
Related Integrations
HTTP and InfluxDB Integration
The HTTP plugin collects metrics from one or more HTTP(S) endpoints. It supports various authentication methods and configuration options for data formats.
View Integration
Kafka and InfluxDB Integration
This plugin reads messages from Kafka and allows the creation of metrics based on those messages. It supports various configurations including different Kafka settings and message processing options.
View Integration
Kinesis and InfluxDB Integration
The Kinesis plugin allows for reading metrics from AWS Kinesis streams. It supports multiple input data formats and offers checkpointing features with DynamoDB for reliable message processing.
View Integration