IPMI Sensor and PostgreSQL Integration

Powerful performance with an easy integration, powered by Telegraf, the open source data connector built by InfluxData.

Note: This is not the recommended configuration for real-time query at scale. For query and compression optimization, high-speed ingest, and high availability, you may want to consider the IPMI and InfluxDB integration.

Powerful Performance, Limitless Scale

Collect, organize, and act on massive volumes of high-velocity data. Any data is more valuable when you think of it as time series data with InfluxDB, the #1 time series platform built to scale with Telegraf.

See Ways to Get Started

Input and output integration overview

The IPMI Sensor plugin collects server health metrics directly from hardware via the IPMI protocol, querying sensor data from local or remote systems.

The Telegraf PostgreSQL plugin allows you to efficiently write metrics to a PostgreSQL database while automatically managing the database schema.

Integration details

IPMI Sensor

The IPMI Sensor plugin gathers bare-metal metrics via the command-line utility ipmitool, which interfaces with the Intelligent Platform Management Interface (IPMI). This protocol provides management and monitoring capabilities for hardware components in server systems, allowing retrieval of critical health metrics such as temperature, fan speed, and power supply status from both local and remote servers. When no servers are specified, the plugin queries the local machine’s sensor statistics using the ipmitool sdr command. For remote hosts, authentication is supported through a username and password, using the command format ipmitool -I lan -H SERVER -U USERID -P PASSW0RD sdr. Beyond standard sensor data records, the plugin also supports chassis power status and DCMI power readings, giving administrators real-time insight into server operations.
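
As a minimal sketch, a remote-host configuration might look like the following, assuming a hypothetical BMC at 192.168.1.1 with placeholder credentials:

[[inputs.ipmi_sensor]]
  ## Hypothetical remote BMC; replace the address and credentials with your own.
  servers = ["USERID:PASSW0RD@lan(192.168.1.1)"]
  privilege = "ADMINISTRATOR"
  timeout = "20s"
  ## Collect standard sensor data records plus DCMI power readings.
  sensors = ["sdr", "dcmi_power_reading"]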

PostgreSQL

The PostgreSQL plugin writes metrics to a PostgreSQL (or PostgreSQL-compatible) database, providing robust schema management by automatically adding missing columns. The plugin is designed to integrate with monitoring solutions, allowing users to store and manage time series data efficiently. It offers configurable options for connection settings, concurrency, and error handling, and supports advanced features such as JSONB storage for tags and fields, foreign key tagging, templated schema modifications, and unsigned integer data types through the pguint extension.
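
As one illustration, a minimal sketch that stores all tags and fields as JSONB columns, assuming a placeholder connection string:

[[outputs.postgresql]]
  ## Hypothetical connection; adjust host, user, and dbname for your setup.
  connection = "host=localhost user=telegraf dbname=metrics"
  ## Collapse tags and fields into single JSONB columns instead of one column each.
  tags_as_jsonb = true
  fields_as_jsonb = true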

Configuration

IPMI Sensor

[[inputs.ipmi_sensor]]
  ## Specify the path to the ipmitool executable
  # path = "/usr/bin/ipmitool"

  ## Use sudo
  ## Setting 'use_sudo' to true will make use of sudo to run ipmitool.
  ## Sudo must be configured to allow the telegraf user to run ipmitool
  ## without a password.
  # use_sudo = false

  ## Servers
  ## Specify one or more servers via a url. If no servers are specified, local
  ## machine sensor stats will be queried. Uses the format:
  ##  [username[:password]@][protocol[(address)]]
  ##  e.g. root:passwd@lan(127.0.0.1)
  # servers = ["USERID:PASSW0RD@lan(192.168.1.1)"]

  ## Session privilege level
  ## Choose from: CALLBACK, USER, OPERATOR, ADMINISTRATOR
  # privilege = "ADMINISTRATOR"

  ## Timeout
  ## Timeout for the ipmitool command to complete.
  # timeout = "20s"

  ## Metric schema version
  ## See the plugin readme for more information on schema versioning.
  # metric_version = 1

  ## Sensors to collect
  ## Choose from:
  ##   * sdr: default, collects sensor data records
  ##   * chassis_power_status: collects the power status of the chassis
  ##   * dcmi_power_reading: collects the power readings from the Data Center Management Interface
  # sensors = ["sdr"]

  ## Hex key
  ## Optionally provide the hex key for the IPMI connection.
  # hex_key = ""

  ## Cache
  ## If ipmitool should use a cache
  ## Using a cache can speed up collection times depending on your device.
  # use_cache = false

  ## Path to the ipmitool cache file (defaults to OS temp dir)
  ## The provided path must exist and must be writable
  # cache_path = ""

PostgreSQL

# Publishes metrics to a postgresql database
[[outputs.postgresql]]
  ## Specify connection address via the standard libpq connection string:
  ##   host=... user=... password=... sslmode=... dbname=...
  ## Or a URL:
  ##   postgres://[user[:password]]@localhost[/dbname]?sslmode=[disable|verify-ca|verify-full]
  ## See https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING
  ##
  ## All connection parameters are optional. Environment vars are also supported.
  ## e.g. PGPASSWORD, PGHOST, PGUSER, PGDATABASE
  ## All supported vars can be found here:
  ##  https://www.postgresql.org/docs/current/libpq-envars.html
  ##
  ## Non-standard parameters:
  ##   pool_max_conns (default: 1) - Maximum size of connection pool for parallel (per-batch per-table) inserts.
  ##   pool_min_conns (default: 0) - Minimum size of connection pool.
  ##   pool_max_conn_lifetime (default: 0s) - Maximum age of a connection before closing.
  ##   pool_max_conn_idle_time (default: 0s) - Maximum idle time of a connection before closing.
  ##   pool_health_check_period (default: 0s) - Duration between health checks on idle connections.
  # connection = ""

  ## Postgres schema to use.
  # schema = "public"

  ## Store tags as foreign keys in the metrics table. Default is false.
  # tags_as_foreign_keys = false

  ## Suffix to append to table name (measurement name) for the foreign tag table.
  # tag_table_suffix = "_tag"

  ## Deny inserting metrics if the foreign tag can't be inserted.
  # foreign_tag_constraint = false

  ## Store all tags as a JSONB object in a single 'tags' column.
  # tags_as_jsonb = false

  ## Store all fields as a JSONB object in a single 'fields' column.
  # fields_as_jsonb = false

  ## Name of the timestamp column
  ## NOTE: Some tools (e.g. Grafana) require the default name so be careful!
  # timestamp_column_name = "time"

  ## Type of the timestamp column
  ## Currently, "timestamp without time zone" and "timestamp with time zone"
  ## are supported
  # timestamp_column_type = "timestamp without time zone"

  ## Templated statements to execute when creating a new table.
  # create_templates = [
  #   '''CREATE TABLE {{ .table }} ({{ .columns }})''',
  # ]

  ## Templated statements to execute when adding columns to a table.
  ## Set to an empty list to disable. Points containing tags for which there is no column will be skipped. Points
  ## containing fields for which there is no column will have the field omitted.
  # add_column_templates = [
  #   '''ALTER TABLE {{ .table }} ADD COLUMN IF NOT EXISTS {{ .columns|join ", ADD COLUMN IF NOT EXISTS " }}''',
  # ]

  ## Templated statements to execute when creating a new tag table.
  # tag_table_create_templates = [
  #   '''CREATE TABLE {{ .table }} ({{ .columns }}, PRIMARY KEY (tag_id))''',
  # ]

  ## Templated statements to execute when adding columns to a tag table.
  ## Set to an empty list to disable. Points containing tags for which there is no column will be skipped.
  # tag_table_add_column_templates = [
  #   '''ALTER TABLE {{ .table }} ADD COLUMN IF NOT EXISTS {{ .columns|join ", ADD COLUMN IF NOT EXISTS " }}''',
  # ]

  ## The postgres data type to use for storing unsigned 64-bit integer values (Postgres does not have a native
  ## unsigned 64-bit integer type).
  ## The value can be one of:
  ##   numeric - Uses the PostgreSQL "numeric" data type.
  ##   uint8 - Requires pguint extension (https://github.com/petere/pguint)
  # uint64_type = "numeric"

  ## When pool_max_conns > 1 and a temporary error occurs, the query is retried
  ## with an incremental backoff. This controls the maximum backoff duration.
  # retry_max_backoff = "15s"

  ## Approximate number of tag IDs to store in the in-memory cache (when using tags_as_foreign_keys).
  ## This is an optimization to skip inserting known tag IDs.
  ## Each entry consumes approximately 34 bytes of memory.
  # tag_cache_size = 100000

  ## Enable & set the log level for the Postgres driver.
  # log_level = "warn" # trace, debug, info, warn, error, none

Input and output integration examples

IPMI Sensor

  1. Centralized Monitoring Dashboard: Utilize the IPMI Sensor plugin to gather metrics from multiple servers and compile them into a centralized monitoring dashboard. This enables real-time visibility into server health across data centers. Administrators can track metrics like temperature and power usage, helping them make data-driven decisions about resource allocation, potential failures, and maintenance schedules.

  2. Automated Power Alerts: Incorporate the plugin into an alerting system that monitors chassis power status and triggers alerts when anomalies are detected. For instance, if the power status indicates a failure or if watt values exceed expected thresholds, automated notifications can be sent to operations teams, ensuring prompt attention to hardware issues.

  3. Energy Consumption Analysis: Leverage the DCMI power readings collected via the plugin to analyze energy consumption patterns of hardware over time (see the configuration sketch after this list). By integrating these readings with analytics platforms, organizations can identify opportunities to reduce power usage, optimize efficiency, and potentially decrease operational costs in large server farms or cloud infrastructures.

  4. Health Check Automation: Schedule regular health checks by using the IPMI Sensor Plugin to collect data from a fleet of servers. This data can be logged and compared against historical performance metrics to identify trends, outliers, or signs of impending hardware failure, allowing IT teams to take proactive measures and reduce downtime.
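
For the energy consumption scenario above, a minimal sketch that restricts collection to DCMI power readings across a small fleet (addresses and credentials are placeholders):

[[inputs.ipmi_sensor]]
  ## Hypothetical BMC addresses; replace with your own fleet.
  servers = [
    "USERID:PASSW0RD@lan(192.168.1.1)",
    "USERID:PASSW0RD@lan(192.168.1.2)",
  ]
  ## Collect only power readings for energy analysis.
  sensors = ["dcmi_power_reading"]
  ## Standard per-plugin Telegraf option to poll once a minute.
  interval = "60s"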

PostgreSQL

  1. Real-Time Analytics with Complex Queries: Leverage the PostgreSQL plugin to store metrics from various sources in a PostgreSQL database, enabling real-time analytics using complex queries. This setup can help data scientists and analysts uncover patterns and trends, as they manipulate relational data across multiple tables while utilizing PostgreSQL’s robust query optimization features. Specifically, users can create sophisticated reports with JOIN operations across different metric tables, revealing insights that would typically remain hidden in embedded systems.

  2. Integrating with TimescaleDB for Time-Series Data: Utilize the PostgreSQL plugin within a TimescaleDB instance to efficiently handle and analyze time-series data. By implementing hypertables, users can achieve greater performance and automatic partitioning of data over the time dimension (see the configuration sketch after this list). This integration allows users to run analytical queries over large amounts of time-series data while retaining the full power of PostgreSQL’s SQL queries, ensuring reliability and efficiency in metrics analysis.

  3. Data Versioning and Historical Analysis: Implement a strategy using the PostgreSQL plugin to maintain different versions of metrics over time. Users can set up an immutable data table structure where older versions of tables are retained, enabling easy historical analysis. This approach not only provides insights into data evolution but also aids compliance with data retention policies, ensuring that the historical integrity of the datasets remains intact.

  4. Dynamic Schema Management for Evolving Metrics: Use the plugin’s templating capabilities to create a dynamically changing schema that responds to metric variations. This use case allows organizations to adapt their data structure as metrics evolve, adding necessary fields and ensuring adherence to data integrity policies. By leveraging templated SQL commands, users can extend their database without manual intervention, facilitating agile data management practices.
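
For the TimescaleDB scenario above, a sketch using the plugin’s create_templates to convert each new metric table into a hypertable (assumes the TimescaleDB extension is installed; the connection string and chunk interval are placeholders):

[[outputs.postgresql]]
  connection = "host=localhost user=telegraf dbname=metrics"
  ## Create each measurement table, then register it as a hypertable.
  create_templates = [
    '''CREATE TABLE {{ .table }} ({{ .columns }})''',
    '''SELECT create_hypertable({{ .table|quoteLiteral }}, 'time', chunk_time_interval => INTERVAL '1h')''',
  ]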

Feedback

Thank you for being part of our community! If you have any general feedback or found any bugs on these pages, we welcome and encourage your input. Please submit your feedback in the InfluxDB community Slack.

Related Integrations

HTTP and InfluxDB Integration

The HTTP plugin collects metrics from one or more HTTP(S) endpoints. It supports various authentication methods and configuration options for data formats.

View Integration

Kafka and InfluxDB Integration

This plugin reads messages from Kafka and allows the creation of metrics based on those messages. It supports various configurations including different Kafka settings and message processing options.

View Integration

Kinesis and InfluxDB Integration

The Kinesis plugin allows for reading metrics from AWS Kinesis streams. It supports multiple input data formats and offers checkpointing features with DynamoDB for reliable message processing.

View Integration