Cisco Model-Driven Telemetry and Databricks Integration

Powerful performance with an easy integration, powered by Telegraf, the open source data connector built by InfluxData.

Info: This is not the recommended configuration for real-time query at scale. For query and compression optimization, high-speed ingest, and high availability, you may want to consider Cisco MDT and InfluxDB.

5B+ Telegraf downloads · #1 time series database (source: DB Engines) · 1B+ downloads of InfluxDB · 2,800+ contributors

Powerful Performance, Limitless Scale

Collect, organize, and act on massive volumes of high-velocity data. Any data is more valuable when you think of it as time series data with InfluxDB, the #1 time series platform built to scale with Telegraf.

See Ways to Get Started

Input and output integration overview

The Cisco Model-Driven Telemetry (MDT) plugin facilitates the collection of telemetry data from Cisco networking platforms, utilizing gRPC and TCP transport mechanisms. This plugin is essential for users looking to implement advanced telemetry solutions for better insights and operational efficiency.

Use Telegraf’s HTTP output plugin to push metrics straight into a Databricks Lakehouse by calling the SQL Statement Execution API with a JSON-wrapped INSERT or volume PUT command.

Integration details

Cisco Model-Driven Telemetry

The Cisco Model-Driven Telemetry (MDT) plugin provides a robust means of consuming telemetry data from various Cisco platforms, including IOS XR, IOS XE, and NX-OS. It focuses on the efficient transport of telemetry data using either TCP or gRPC, offering flexibility based on the network environment and requirements. The gRPC transport is particularly advantageous as it supports TLS for enhanced security through encryption and authentication. The plugin is compatible with a range of software versions on Cisco devices, enabling organizations to leverage telemetry capabilities across their network operations. It is especially useful for network monitoring and analytics, as it enables real-time data collection directly from Cisco devices, enhancing visibility into network performance, resource utilization, and operational metrics.
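
The listener described below only receives what a device is configured to stream. As a point of reference, a minimal device-side sketch (assuming IOS XR CLI, a placeholder collector address of 192.0.2.10, and the default port 57000 used in the configuration further down) might look like the following; sensor paths, encodings, and sample intervals vary by platform and software version.

telemetry model-driven
 destination-group TELEGRAF
  address-family ipv4 192.0.2.10 port 57000
   encoding self-describing-gpb
   protocol grpc no-tls
 !
 sensor-group IF-STATS
  sensor-path Cisco-IOS-XR-infra-statsd-oper:infra-statistics/interfaces/interface/latest/generic-counters
 !
 subscription SUB-IF-STATS
  sensor-group-id IF-STATS sample-interval 30000
  destination-id TELEGRAF
 !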

Databricks

This configuration turns Telegraf into a lightweight ingestion agent for the Databricks Lakehouse. It leverages the Databricks SQL Statement Execution API 2.0, which accepts authenticated POST requests containing a JSON payload with a statement field. Each Telegraf flush dynamically renders a SQL INSERT (or, for file-based workflows, a PUT ... INTO /Volumes/... command) that lands the metrics into a Unity Catalog table or volume governed by Lakehouse security. Under the hood Databricks stores successful inserts as Delta Lake transactions, enabling ACID guarantees, time-travel, and scalable analytics. Operators can point the warehouse_id at any serverless or classic SQL warehouse, and all authentication is handled with a PAT or service-principal token—no agents or JDBC drivers required. Because Telegraf’s HTTP output supports custom headers, batching, TLS, and proxy settings, the same pattern scales from edge IoT gateways to container sidecars, consolidating infrastructure telemetry, application logs, or business KPIs directly into the Lakehouse for BI, ML, and Lakehouse Monitoring. Unity Catalog volumes provide a governed staging layer when file uploads and COPY INTO are preferred, and the approach aligns with Databricks’ recommended ingestion practices for partners and ISVs.
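
To make the mechanics concrete, here is a minimal sketch of the JSON body that such a flush ultimately POSTs to /api/2.0/sql/statements. The catalog, schema, table name, warehouse ID, and values are placeholders, and wait_timeout is an optional parameter of the Statement Execution API.

{
  "statement": "INSERT INTO main.telemetry.host_metrics VALUES (from_unixtime(1718000000000/1000), 42.5, 'edge-gw-01')",
  "warehouse_id": "1234567890abcdef",
  "wait_timeout": "30s"
}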

Configuration

Cisco Model-Driven Telemetry

[[inputs.cisco_telemetry_mdt]]
 ## Telemetry transport can be "tcp" or "grpc".  TLS is only supported when
 ## using the grpc transport.
 transport = "grpc"

 ## Address and port to host telemetry listener
 service_address = ":57000"

 ## gRPC maximum message size. The default is 4MB; increase it if devices send
 ## larger telemetry updates. This is stored as a uint32 and limited to 4294967295.
 max_msg_size = 4000000

 ## Enable TLS; grpc transport only.
 # tls_cert = "/etc/telegraf/cert.pem"
 # tls_key = "/etc/telegraf/key.pem"

 ## Enable TLS client authentication and define allowed CA certificates; grpc
 ##  transport only.
 # tls_allowed_cacerts = ["/etc/telegraf/clientca.pem"]

 ## Define (for certain nested telemetry measurements with embedded tags) which fields are tags
 # embedded_tags = ["Cisco-IOS-XR-qos-ma-oper:qos/interface-table/interface/input/service-policy-names/service-policy-instance/statistics/class-stats/class-name"]

 ## Include the delete field in every telemetry message.
 # include_delete_field = false

 ## Specify custom name for incoming MDT source field.
 # source_field_name = "mdt_source"

 ## Define aliases to map telemetry encoding paths to simple measurement names
 [inputs.cisco_telemetry_mdt.aliases]
   ifstats = "ietf-interfaces:interfaces-state/interface/statistics"
 ## Define property transformations; refer to the README and https://pubhub.devnetcloud.com/media/dme-docs-9-3-3/docs/appendix/ for model details.
 [inputs.cisco_telemetry_mdt.dmes]
#    Global Property Xformation.
#    prop1 = "uint64 to int"
#    prop2 = "uint64 to string"
#    prop3 = "string to uint64"
#    prop4 = "string to int64"
#    prop5 = "string to float64"
#    auto-prop-xfrom = "auto-float-xfrom" # Transform any string property containing a float number to type float64
#    Per-path property transformation. Name is the telemetry path configured under the sensor-group ("WORD Distinguished Name").
#    Per-path configuration is preferable because it avoids type collision issues between properties.
#    dnpath = '{"Name": "show ip route summary","prop": [{"Key": "routes","Value": "string"}, {"Key": "best-paths","Value": "string"}]}'
#    dnpath2 = '{"Name": "show processes cpu","prop": [{"Key": "kernel_percent","Value": "float"}, {"Key": "idle_percent","Value": "float"}, {"Key": "process","Value": "string"}, {"Key": "user_percent","Value": "float"}, {"Key": "onesec","Value": "float"}]}'
#    dnpath3 = '{"Name": "show processes memory physical","prop": [{"Key": "processname","Value": "string"}]}'

 ## Additional GRPC connection settings.
 [inputs.cisco_telemetry_mdt.grpc_enforcement_policy]
  ## GRPC permit keepalives without calls, set to true if your clients are
  ## sending pings without calls in-flight. This can sometimes happen on IOS-XE
  ## devices where the GRPC connection is left open but subscriptions have been
  ## removed, and adding subsequent subscriptions does not keep a stable session.
  # permit_keepalive_without_calls = false

  ## GRPC minimum timeout between successive pings, decreasing this value may
  ## help if this plugin is closing connections with ENHANCE_YOUR_CALM (too_many_pings).
  # keepalive_minimum_time = "5m"
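
Whichever output the decoded metrics are routed to, Telegraf's agent-level settings control collection and flush cadence. A minimal sketch with assumed values:

[agent]
  ## How often outputs flush accumulated metrics
  flush_interval = "10s"
  ## Maximum number of metrics sent to an output in a single write
  metric_batch_size = 500
  ## Maximum number of metrics buffered per output when the endpoint is unreachable
  metric_buffer_limit = 10000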

Databricks

[[outputs.http]]
  ## Databricks SQL Statement Execution API endpoint
  url = "https://${DATABRICKS_HOST}/api/2.0/sql/statements"

  ## Use POST to submit each Telegraf batch as a SQL request
  method = "POST"

  ## Personal-access token (PAT) for workspace or service principal
  headers = { Authorization = "Bearer ${DATABRICKS_TOKEN}" }

  ## Send JSON that wraps the metrics batch in a SQL INSERT (or PUT into a Volume)
  content_type = "application/json"

  ## Serialize metrics as JSON so they can be embedded in the SQL statement
  data_format = "json"
  json_timestamp_units = "1ms"

  ## Build the request body.  Telegraf replaces the template variables at runtime.
  ## Example inserts a row per metric into a Unity-Catalog table.
  body_template = """
  {
    \"statement\": \"INSERT INTO ${TARGET_TABLE} VALUES {{range .Metrics}}(from_unixtime({{.timestamp}}/1000), {{.fields.usage}}, '{{.tags.host}}'){{end}}\",
    \"warehouse_id\": \"${WAREHOUSE_ID}\"
  }
  """

  ## Optional: add batching limits or TLS settings
  # batch_size = 500
  # timeout     = "10s"
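
The INSERT template above assumes a target table whose columns line up with the rendered values. A hypothetical Unity Catalog table matching that three-column layout could be created as follows (catalog, schema, and table names are placeholders):

CREATE TABLE IF NOT EXISTS main.telemetry.host_metrics (
  event_time TIMESTAMP,
  usage DOUBLE,
  host STRING
) USING DELTA;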

Input and output integration examples

Cisco Model-Driven Telemetry

  1. Real-Time Network Monitoring: Utilize the Cisco MDT plugin to collect network performance metrics from Cisco routers and switches. By feeding telemetry data into a visualization tool, network operators can observe traffic trends, bandwidth usage, and error rates in real-time. This proactive monitoring allows teams to swiftly address issues before they affect network performance, resulting in a more reliable service.

  2. Automated Anomaly Detection: Integrate Cisco MDT with machine learning algorithms to create an automated anomaly detection system. By continuously analyzing telemetry data, the system can identify deviations from typical operational patterns, providing alerts for unusual conditions that may signify network problems or security threats, which can aid in maintaining operational integrity.

  3. Dynamic Configuration Management: Leveraging the telemetry data collected from Cisco devices, organizations can implement dynamic configuration management solutions that automatically adjust network settings based on current performance indicators. For instance, if the telemetry indicates high utilization on certain links, the system could dynamically route traffic to underutilized paths, optimizing resource usage.

  4. Enhanced Reporting and Analytics: Use the Cisco MDT plugin to feed detailed telemetry data into analytics platforms, enabling comprehensive reporting on network health and performance. Historical and real-time analysis can guide decision-making and strategic planning, helping organizations to allocate resources more effectively and understand their network’s operational landscape better.

Databricks

  1. Edge-to-Lakehouse Telemetry Pipe: Deploy Telegraf on factory PLCs to sample vibration metrics and post them every second to a serverless SQL warehouse. Delta tables power Power BI dashboards that alert engineers when thresholds drift.
  2. Blue-Green CI/CD Rollout Metrics: Attach a Telegraf sidecar to each Kubernetes canary pod; it inserts container stats into a Unity Catalog table tagged by deployment_id, letting Databricks SQL compare error-rate percentiles and auto-rollback underperforming versions.
  3. SaaS Usage Metering: Insert per-tenant API-call counters via the HTTP plugin; a nightly Lakehouse query aggregates usage into invoices, eliminating custom metering micro-services.
  4. Security Forensics Lake: Upload JSON batches of Suricata IDS events to a Unity Catalog volume using PUT commands, then run COPY INTO for near-real-time enrichment with Delta Live Tables, producing a searchable threat-intel lake that joins network logs with user session data.
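
For the volume-staged workflow in example 4, a sketch of the load step might look like the following, assuming JSON event files have already been uploaded (for instance via PUT ... INTO) to a Unity Catalog volume at a placeholder path and that the target table exists:

COPY INTO main.security.suricata_events
FROM '/Volumes/main/security/ids_staging/'
FILEFORMAT = JSON
FORMAT_OPTIONS ('inferSchema' = 'true')
COPY_OPTIONS ('mergeSchema' = 'true');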

Feedback

Thank you for being part of our community! If you have any general feedback or found any bugs on these pages, we welcome and encourage your input. Please submit your feedback in the InfluxDB community Slack.

Related Integrations

HTTP and InfluxDB Integration

The HTTP plugin collects metrics from one or more HTTP(S) endpoints. It supports various authentication methods and configuration options for data formats.

View Integration

Kafka and InfluxDB Integration

This plugin reads messages from Kafka and allows the creation of metrics based on those messages. It supports various configurations including different Kafka settings and message processing options.

View Integration

Kinesis and InfluxDB Integration

The Kinesis plugin allows for reading metrics from AWS Kinesis streams. It supports multiple input data formats and offers checkpointing features with DynamoDB for reliable message processing.

View Integration