Cisco Model-Driven Telemetry and TimescaleDB Integration

Powerful performance with an easy integration, powered by Telegraf, the open source data connector built by InfluxData.

Note: This is not the recommended configuration for real-time queries at scale. For query and compression optimization, high-speed ingest, and high availability, you may want to consider Cisco MDT and InfluxDB.

5B+ Telegraf downloads
#1 time series database (source: DB-Engines)
1B+ downloads of InfluxDB
2,800+ contributors


Powerful Performance, Limitless Scale

Collect, organize, and act on massive volumes of high-velocity data. Any data is more valuable when you think of it as time series data with InfluxDB, the #1 time series platform built to scale with Telegraf.

See Ways to Get Started

Input and output integration overview

The Cisco Model-Driven Telemetry (MDT) plugin facilitates the collection of telemetry data from Cisco networking platforms, utilizing gRPC and TCP transport mechanisms. This plugin is essential for users looking to implement advanced telemetry solutions for better insights and operational efficiency.

This output plugin delivers a reliable and efficient mechanism for routing Telegraf-collected metrics directly into TimescaleDB. By leveraging PostgreSQL’s robust ecosystem combined with TimescaleDB’s time series optimizations, it supports high-performance data ingestion and advanced querying capabilities.

Integration details

Cisco Model-Driven Telemetry

Cisco model-driven telemetry (MDT) is designed to provide a robust means of consuming telemetry data from various Cisco platforms, including IOS XR, IOS XE, and NX-OS. This plugin focuses on the efficient transport of telemetry data using either TCP or gRPC protocols, offering flexibility based on the network environment and requirements. The gRPC transport is particularly advantageous as it supports TLS for enhanced security through encryption and authentication. The plugin is compatible with a range of software versions on Cisco devices, enabling organizations to leverage telemetry capabilities across their network operations. It is especially useful for network monitoring and analytics, as it enables real-time data collection directly from Cisco devices, enhancing visibility into network performance, resource utilization, and operational metrics.
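
As a concrete starting point, the sketch below shows a minimal listener that accepts gRPC dial-out telemetry with server-side TLS enabled. The port and certificate paths are placeholders to adapt to your environment; the full set of options appears in the Configuration section.

[[inputs.cisco_telemetry_mdt]]
  ## gRPC is required for TLS support; "tcp" is the plaintext alternative.
  transport = "grpc"
  ## Port the routers dial out to (placeholder).
  service_address = ":57000"
  ## Server-side TLS; certificate paths are placeholders.
  tls_cert = "/etc/telegraf/cert.pem"
  tls_key = "/etc/telegraf/key.pem"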

TimescaleDB

TimescaleDB is an open source time series database built as an extension to PostgreSQL, designed to handle large-scale, time-oriented data efficiently. Launched in 2017, TimescaleDB emerged in response to the growing need for a robust, scalable solution that could manage vast volumes of data with high insert rates and complex queries. By leveraging PostgreSQL’s familiar SQL interface and enhancing it with specialized time series capabilities, TimescaleDB quickly gained popularity among developers looking to integrate time series functionality into existing relational databases. Its hybrid approach allows users to benefit from PostgreSQL’s flexibility, reliability, and ecosystem while providing optimized performance for time series data.

The database is particularly effective in environments that demand fast ingestion of data points combined with sophisticated analytical queries over historical periods. TimescaleDB offers innovative features such as hypertables, which transparently partition data into manageable chunks, and built-in continuous aggregates, both of which significantly improve query speed and resource efficiency.
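
To make those terms concrete, the SQL sketch below turns a plain PostgreSQL table into a hypertable and defines a continuous aggregate that maintains an hourly rollup. The table and column names are purely illustrative, not part of this integration.

-- Illustrative only: the table and column names are hypothetical.
CREATE TABLE ifstats (
  time       TIMESTAMPTZ NOT NULL,
  interface  TEXT,
  in_octets  BIGINT,
  out_octets BIGINT
);

-- Transparently partition the table into time-based chunks.
SELECT create_hypertable('ifstats', 'time');

-- Continuously maintained hourly rollup.
CREATE MATERIALIZED VIEW ifstats_hourly
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 hour', time) AS bucket,
       interface,
       avg(in_octets) AS avg_in_octets
FROM ifstats
GROUP BY bucket, interface;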

Configuration

Cisco Model-Driven Telemetry

[[inputs.cisco_telemetry_mdt]]
 ## Telemetry transport can be "tcp" or "grpc".  TLS is only supported when
 ## using the grpc transport.
 transport = "grpc"

 ## Address and port to host telemetry listener
 service_address = ":57000"

 ## gRPC maximum message size. The default is 4MB; increase it if needed.
 ## The value is stored as a uint32 and is limited to 4294967295.
 max_msg_size = 4000000

 ## Enable TLS; grpc transport only.
 # tls_cert = "/etc/telegraf/cert.pem"
 # tls_key = "/etc/telegraf/key.pem"

 ## Enable TLS client authentication and define allowed CA certificates; grpc
 ##  transport only.
 # tls_allowed_cacerts = ["/etc/telegraf/clientca.pem"]

 ## Define (for certain nested telemetry measurements with embedded tags) which fields are tags
 # embedded_tags = ["Cisco-IOS-XR-qos-ma-oper:qos/interface-table/interface/input/service-policy-names/service-policy-instance/statistics/class-stats/class-name"]

 ## Include the delete field in every telemetry message.
 # include_delete_field = false

 ## Specify custom name for incoming MDT source field.
 # source_field_name = "mdt_source"

 ## Define aliases to map telemetry encoding paths to simple measurement names
 [inputs.cisco_telemetry_mdt.aliases]
   ifstats = "ietf-interfaces:interfaces-state/interface/statistics"
 ## Define property transformations; refer to the README and
 ## https://pubhub.devnetcloud.com/media/dme-docs-9-3-3/docs/appendix/ for model details.
 [inputs.cisco_telemetry_mdt.dmes]
#    Global property transformation.
#    prop1 = "uint64 to int"
#    prop2 = "uint64 to string"
#    prop3 = "string to uint64"
#    prop4 = "string to int64"
#    prop5 = "string to float64"
#    auto-prop-xfrom = "auto-float-xfrom" # Transform any string property containing a float number to type float64
#    Per-path property transformation. Name is the telemetry path configured under sensor-group ("WORD Distinguished Name").
#    Per-path configuration is preferred, as it avoids property type collisions.
#    dnpath = '{"Name": "show ip route summary","prop": [{"Key": "routes","Value": "string"}, {"Key": "best-paths","Value": "string"}]}'
#    dnpath2 = '{"Name": "show processes cpu","prop": [{"Key": "kernel_percent","Value": "float"}, {"Key": "idle_percent","Value": "float"}, {"Key": "process","Value": "string"}, {"Key": "user_percent","Value": "float"}, {"Key": "onesec","Value": "float"}]}'
#    dnpath3 = '{"Name": "show processes memory physical","prop": [{"Key": "processname","Value": "string"}]}'

 ## Additional GRPC connection settings.
 [inputs.cisco_telemetry_mdt.grpc_enforcement_policy]
  ## GRPC permit keepalives without calls, set to true if your clients are
  ## sending pings without calls in-flight. This can sometimes happen on IOS-XE
  ## devices where the GRPC connection is left open but subscriptions have been
  ## removed, and adding subsequent subscriptions does not keep a stable session.
  # permit_keepalive_without_calls = false

  ## GRPC minimum timeout between successive pings, decreasing this value may
  ## help if this plugin is closing connections with ENHANCE_YOUR_CALM (too_many_pings).
  # keepalive_minimum_time = "5m"

TimescaleDB

# Publishes metrics to a TimescaleDB database
[[outputs.postgresql]]
  ## Specify connection address via the standard libpq connection string:
  ##   host=... user=... password=... sslmode=... dbname=...
  ## Or a URL:
  ##   postgres://[user[:password]]@localhost[/dbname]?sslmode=[disable|verify-ca|verify-full]
  ## See https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING
  ##
  ## All connection parameters are optional. Environment vars are also supported.
  ## e.g. PGPASSWORD, PGHOST, PGUSER, PGDATABASE
  ## All supported vars can be found here:
  ##  https://www.postgresql.org/docs/current/libpq-envars.html
  ##
  ## Non-standard parameters:
  ##   pool_max_conns (default: 1) - Maximum size of connection pool for parallel (per-batch per-table) inserts.
  ##   pool_min_conns (default: 0) - Minimum size of connection pool.
  ##   pool_max_conn_lifetime (default: 0s) - Maximum connection age before closing.
  ##   pool_max_conn_idle_time (default: 0s) - Maximum idle time of a connection before closing.
  ##   pool_health_check_period (default: 0s) - Duration between health checks on idle connections.
  # connection = ""

  ## Postgres schema to use.
  # schema = "public"

  ## Store tags as foreign keys in the metrics table. Default is false.
  # tags_as_foreign_keys = false

  ## Suffix to append to table name (measurement name) for the foreign tag table.
  # tag_table_suffix = "_tag"

  ## Deny inserting metrics if the foreign tag can't be inserted.
  # foreign_tag_constraint = false

  ## Store all tags as a JSONB object in a single 'tags' column.
  # tags_as_jsonb = false

  ## Store all fields as a JSONB object in a single 'fields' column.
  # fields_as_jsonb = false

  ## Name of the timestamp column
  ## NOTE: Some tools (e.g. Grafana) require the default name so be careful!
  # timestamp_column_name = "time"

  ## Type of the timestamp column
  ## Currently, "timestamp without time zone" and "timestamp with time zone"
  ## are supported
  # timestamp_column_type = "timestamp without time zone"

  ## Templated statements to execute when creating a new table.
  # create_templates = [
  #   '''CREATE TABLE {{ .table }} ({{ .columns }})''',
  # ]

  ## Templated statements to execute when adding columns to a table.
  ## Set to an empty list to disable. Points containing tags for which there is
  ## no column will be skipped. Points containing fields for which there is no
  ## column will have the field omitted.
  # add_column_templates = [
  #   '''ALTER TABLE {{ .table }} ADD COLUMN IF NOT EXISTS {{ .columns|join ", ADD COLUMN IF NOT EXISTS " }}''',
  # ]

  ## Templated statements to execute when creating a new tag table.
  # tag_table_create_templates = [
  #   '''CREATE TABLE {{ .table }} ({{ .columns }}, PRIMARY KEY (tag_id))''',
  # ]

  ## Templated statements to execute when adding columns to a tag table.
  ## Set to an empty list to disable. Points containing tags for which there is
  ## no column will be skipped.
  # tag_table_add_column_templates = [
  #   '''ALTER TABLE {{ .table }} ADD COLUMN IF NOT EXISTS {{ .columns|join ", ADD COLUMN IF NOT EXISTS " }}''',
  # ]

  ## The postgres data type to use for storing unsigned 64-bit integer values
  ## (Postgres does not have a native unsigned 64-bit integer type).
  ## The value can be one of:
  ##   numeric - Uses the PostgreSQL "numeric" data type.
  ##   uint8 - Requires pguint extension (https://github.com/petere/pguint)
  # uint64_type = "numeric"

  ## When using pool_max_conns > 1, and a temporary error occurs, the query is
  ## retried with an incremental backoff. This controls the maximum duration.
  # retry_max_backoff = "15s"

  ## Approximate number of tag IDs to store in the in-memory cache (when using
  ## tags_as_foreign_keys). This is an optimization to skip inserting known
  ## tag IDs. Each entry consumes approximately 34 bytes of memory.
  # tag_cache_size = 100000

  ## Cut column names at the given length to not exceed PostgreSQL's
  ## 'identifier length' limit (default: no limit)
  ## (see https://www.postgresql.org/docs/current/limits.html)
  ## Be careful to not create duplicate column names!
  # column_name_length_limit = 0

  ## Enable & set the log level for the Postgres driver.
  # log_level = "warn" # trace, debug, info, warn, error, none
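
When the TimescaleDB extension is installed in the target database, the create_templates option above can also convert each new measurement table into a hypertable at creation time. The sketch below follows the pattern documented in the plugin README; the connection string and chunk interval are placeholders to tune for your workload.

[[outputs.postgresql]]
  ## Placeholder connection string.
  connection = "host=localhost user=postgres sslmode=verify-full"

  ## Create each measurement table and immediately convert it to a hypertable.
  create_templates = [
    '''CREATE TABLE {{ .table }} ({{ .columns }})''',
    '''SELECT create_hypertable({{ .table|quoteLiteral }}, 'time', chunk_time_interval => INTERVAL '1h')''',
  ]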

Input and output integration examples

Cisco Model-Driven Telemetry

  1. Real-Time Network Monitoring: Utilize the Cisco MDT plugin to collect network performance metrics from Cisco routers and switches. By feeding telemetry data into a visualization tool, network operators can observe traffic trends, bandwidth usage, and error rates in real time. This proactive monitoring allows teams to swiftly address issues before they affect network performance, resulting in a more reliable service. (A minimal end-to-end pipeline sketch follows this list.)

  2. Automated Anomaly Detection: Integrate Cisco MDT with machine learning algorithms to create an automated anomaly detection system. By continuously analyzing telemetry data, the system can identify deviations from typical operational patterns, providing alerts for unusual conditions that may signify network problems or security threats, which can aid in maintaining operational integrity.

  3. Dynamic Configuration Management: Leveraging the telemetry data collected from Cisco devices, organizations can implement dynamic configuration management solutions that automatically adjust network settings based on current performance indicators. For instance, if the telemetry indicates high utilization on certain links, the system could dynamically route traffic to underutilized paths, optimizing resource usage.

  4. Enhanced Reporting and Analytics: Use the Cisco MDT plugin to feed detailed telemetry data into analytics platforms, enabling comprehensive reporting on network health and performance. Historical and real-time analysis can guide decision-making and strategic planning, helping organizations to allocate resources more effectively and understand their network’s operational landscape better.
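
A minimal end-to-end pipeline underpinning use cases like these pairs the MDT listener with the TimescaleDB output described earlier. The sketch below assumes gRPC dial-out on port 57000 and a placeholder connection string.

# Receive MDT over gRPC and write the metrics to TimescaleDB.
[[inputs.cisco_telemetry_mdt]]
  transport = "grpc"
  service_address = ":57000"   # placeholder port

[[outputs.postgresql]]
  ## Placeholder connection string.
  connection = "host=timescale.example.com user=telegraf dbname=metrics"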

TimescaleDB

  1. Real-Time IoT Data Ingestion: Use the plugin to collect and store sensor data from thousands of IoT devices in real time. This setup facilitates immediate analysis, helping organizations monitor operational efficiency and respond quickly to changing conditions.

  2. Cloud Application Performance Monitoring: Leverage the plugin to feed detailed performance metrics from distributed cloud applications into TimescaleDB. This integration supports real-time dashboards and alerts, enabling teams to swiftly identify and mitigate performance bottlenecks.

  3. Historical Data Analysis and Reporting: Implement a system where long-term metrics are stored in TimescaleDB for comprehensive historical analysis. This approach allows businesses to perform trend analysis, generate detailed reports, and make data-driven decisions based on archived time series data (see the query sketch after this list).

  4. Adaptive Alerting and Anomaly Detection: Integrate the plugin with automated anomaly detection workflows. By continuously streaming metrics to TimescaleDB, machine learning models can analyze data patterns and trigger alerts when anomalies occur, enhancing system reliability and proactive maintenance.
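
For the historical-analysis scenario above, reporting queries typically lean on TimescaleDB's time_bucket function. The sketch below assumes a measurement table named ifstats with interface and in_octets columns, all purely illustrative.

-- Hypothetical reporting query: daily average ingress traffic per interface
-- over the last 30 days (table and column names are illustrative).
SELECT time_bucket('1 day', time) AS day,
       interface,
       avg(in_octets) AS avg_in_octets
FROM ifstats
WHERE time > now() - INTERVAL '30 days'
GROUP BY day, interface
ORDER BY day;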

Feedback

Thank you for being part of our community! If you have any general feedback or found any bugs on these pages, we welcome and encourage your input. Please submit your feedback in the InfluxDB community Slack.


Related Integrations

HTTP and InfluxDB Integration

The HTTP plugin collects metrics from one or more HTTP(S) endpoints. It supports various authentication methods and configuration options for data formats.

View Integration

Kafka and InfluxDB Integration

This plugin reads messages from Kafka and allows the creation of metrics based on those messages. It supports various configurations including different Kafka settings and message processing options.

View Integration

Kinesis and InfluxDB Integration

The Kinesis plugin allows for reading metrics from AWS Kinesis streams. It supports multiple input data formats and offers checkpointing features with DynamoDB for reliable message processing.

View Integration