gNMI and Clarify Integration

Powerful performance with an easy integration, powered by Telegraf, the open source data connector built by InfluxData.

Note: This is not the recommended configuration for real-time queries at scale. For query and compression optimization, high-speed ingest, and high availability, you may want to consider gNMI and InfluxDB.

5B+ Telegraf downloads
#1 Time series database (Source: DB-Engines)
1B+ downloads of InfluxDB
2,800+ contributors


Powerful Performance, Limitless Scale

Collect, organize, and act on massive volumes of high-velocity data. Any data is more valuable when you think of it as time series data with InfluxDB, the #1 time series platform built to scale with Telegraf.

See Ways to Get Started

Input and output integration overview

The gNMI (gRPC Network Management Interface) Input Plugin collects telemetry data from network devices using the gNMI Subscribe method. It supports TLS for secure authentication and data transmission.

The Clarify plugin allows users to publish Telegraf metrics directly to Clarify, enabling enhanced analysis and monitoring capabilities.

Integration details

gNMI

This input plugin is vendor-agnostic and can be used with any platform that supports the gNMI specification. It consumes telemetry data based on the gNMI Subscribe method, allowing for real-time monitoring of network devices.
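
For orientation before the full reference configuration below, a minimal sketch needs little more than a device address, credentials, and one subscription block. The address, username, password, and OpenConfig path here are placeholders taken from the reference configuration and should be replaced with values for your own devices:

[[inputs.gnmi]]
  ## Device to subscribe to (placeholder address and credentials)
  addresses = ["10.49.234.114:57777"]
  username = "cisco"
  password = "cisco"

  [[inputs.gnmi.subscription]]
    ## Emit OpenConfig interface counters as the "ifcounters" measurement,
    ## sampled every 10 seconds
    name = "ifcounters"
    origin = "openconfig-interfaces"
    path = "/interfaces/interface/state/counters"
    subscription_mode = "sample"
    sample_interval = "10s"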

Clarify

This plugin writes Telegraf metrics to Clarify, a platform for managing and analyzing time series data. By transforming metrics into Clarify signals, the output plugin integrates collected telemetry data into the Clarify ecosystem. Users must supply valid credentials, either through a credentials file or basic authentication, to configure the plugin. The configuration also provides options for fine-tuning how metrics are mapped to signals in Clarify, including the ability to specify unique identifiers using tags. Because Clarify supports only floating-point values, the plugin filters out unsupported value types during publishing. This connectivity supports monitoring, data analysis, and operational-insight use cases.
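
As an illustration of the signal-mapping options described above, the sketch below assumes incoming metrics carry a sensor tag and asks the plugin to include that tag when generating each signal's unique ID; the credentials path and tag name are placeholders:

[[outputs.clarify]]
  ## OAuth 2.0 credentials exported from a Clarify integration (placeholder path)
  credentials_file = "/path/to/clarify/credentials.json"

  ## Include the "sensor" tag when generating the unique signal ID,
  ## so metrics from different sensors map to separate Clarify signals
  id_tags = ["sensor"]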

Configuration

gNMI


[[inputs.gnmi]]
  ## Address and port of the gNMI GRPC server
  addresses = ["10.49.234.114:57777"]

  ## define credentials
  username = "cisco"
  password = "cisco"

  ## gNMI encoding requested (one of: "proto", "json", "json_ietf", "bytes")
  # encoding = "proto"

  ## redial in case of failures after
  # redial = "10s"

  ## gRPC Keepalive settings
  ## See https://pkg.go.dev/google.golang.org/grpc/keepalive
  ## The client will ping the server to see if the transport is still alive if it has
  ## not seen any activity for the given time.
  ## If not set, none of the keep-alive settings (including those below) will be applied.
  ## If set below 10 seconds, the gRPC library will use a minimum value of 10s instead.
  # keepalive_time = ""

  ## Timeout for seeing any activity after the keep-alive probe was
  ## sent. If no activity is seen the connection is closed.
  # keepalive_timeout = ""

  ## gRPC Maximum Message Size
  # max_msg_size = "4MB"

  ## Enable to get the canonical path as field-name
  # canonical_field_names = false

  ## Remove leading slashes and dots in field-name
  # trim_field_names = false

  ## Guess the path-tag if an update does not contain a prefix-path
  ## Supported values are
  ##   none         -- do not add a 'path' tag
  ##   common path  -- use the common path elements of all fields in an update
  ##   subscription -- use the subscription path
  # path_guessing_strategy = "none"

  ## Prefix tags from path keys with the path element
  # prefix_tag_key_with_path = false

  ## Optional client-side TLS to authenticate the device
  ## Set to true/false to enforce TLS being enabled/disabled. If not set,
  ## enable TLS only if any of the other options are specified.
  # tls_enable =
  ## Trusted root certificates for server
  # tls_ca = "/path/to/cafile"
  ## Used for TLS client certificate authentication
  # tls_cert = "/path/to/certfile"
  ## Used for TLS client certificate authentication
  # tls_key = "/path/to/keyfile"
  ## Password for the key file if it is encrypted
  # tls_key_pwd = ""
  ## Send the specified TLS server name via SNI
  # tls_server_name = "kubernetes.example.com"
  ## Minimal TLS version to accept by the client
  # tls_min_version = "TLS12"
  ## List of ciphers to accept, by default all secure ciphers will be accepted
  ## See https://pkg.go.dev/crypto/tls#pkg-constants for supported values.
  ## Use "all", "secure" and "insecure" to add all support ciphers, secure
  ## suites or insecure suites respectively.
  # tls_cipher_suites = ["secure"]
  ## Renegotiation method, "never", "once" or "freely"
  # tls_renegotiation_method = "never"
  ## Use TLS but skip chain & host verification
  # insecure_skip_verify = false

  ## gNMI subscription prefix (optional, can usually be left empty)
  ## See: https://github.com/openconfig/reference/blob/master/rpc/gnmi/gnmi-specification.md#222-paths
  # origin = ""
  # prefix = ""
  # target = ""

  ## Vendor specific options
  ## This defines what vendor specific options to load.
  ## * Juniper Header Extension (juniper_header): some sensors are directly managed by
  ##   Linecard, which adds the Juniper GNMI Header Extension. Enabling this
  ##   allows the decoding of the Extension header if present. Currently this knob
  ##   adds component, component_id & sub_component_id as additional tags
  # vendor_specific = []

  ## YANG model paths for decoding IETF JSON payloads
  ## Model files are loaded recursively from the given directories. Disabled if
  ## no models are specified.
  # yang_model_paths = []

  ## Define additional aliases to map encoding paths to measurement names
  # [inputs.gnmi.aliases]
  #   ifcounters = "openconfig:/interfaces/interface/state/counters"

  [[inputs.gnmi.subscription]]
    ## Name of the measurement that will be emitted
    name = "ifcounters"

    ## Origin and path of the subscription
    ## See: https://github.com/openconfig/reference/blob/master/rpc/gnmi/gnmi-specification.md#222-paths
    ##
    ## origin usually refers to a (YANG) data model implemented by the device
    ## and path to a specific substructure inside it that should be subscribed
    ## to (similar to an XPath). YANG models can be found e.g. here:
    ## https://github.com/YangModels/yang/tree/master/vendor/cisco/xr
    origin = "openconfig-interfaces"
    path = "/interfaces/interface/state/counters"

    ## Subscription mode ("target_defined", "sample", "on_change") and interval
    subscription_mode = "sample"
    sample_interval = "10s"

    ## Suppress redundant transmissions when measured values are unchanged
    # suppress_redundant = false

    ## If suppression is enabled, send updates at least every X seconds anyway
    # heartbeat_interval = "60s"

Clarify

[[outputs.clarify]]
  ## Credentials File (Oauth 2.0 from Clarify integration)
  credentials_file = "/path/to/clarify/credentials.json"

  ## Clarify username password (Basic Auth from Clarify integration)
  username = "i-am-bob"
  password = "secret-password"

  ## Timeout for Clarify operations
  # timeout = "20s"

  ## Optional tags to be included when generating the unique ID for a signal in Clarify
  # id_tags = []
  # clarify_id_tag = 'clarify_input_id'

Input and output integration examples

gNMI

  1. Monitoring Cisco Devices: Use the gNMI plugin to collect telemetry data from Cisco IOS XR, NX-OS, or IOS XE devices for performance monitoring.

  2. Real-time Network Insights: With the gNMI plugin, network administrators can gain insights into real-time metrics such as interface statistics and CPU usage.

  3. Secure Data Collection: Configure the gNMI plugin with TLS settings to ensure secure communication while collecting sensitive telemetry data from devices.

  4. Flexible Data Handling: Use the subscription options to customize which telemetry data you want to collect based on specific needs or requirements.

  5. Error Handling: The plugin includes troubleshooting options to handle common issues like missing metric names or TLS handshake failures.
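
For the secure data collection case above, a sketch of a TLS-enabled subscription might look like the following; the certificate and key paths are placeholders, and which files you need depends on how your devices are provisioned:

[[inputs.gnmi]]
  addresses = ["10.49.234.114:57777"]

  ## Enforce TLS, verify the device against a trusted CA, and present a
  ## client certificate and key for mutual authentication
  tls_enable = true
  tls_ca = "/path/to/cafile"
  tls_cert = "/path/to/certfile"
  tls_key = "/path/to/keyfile"

  [[inputs.gnmi.subscription]]
    name = "ifcounters"
    origin = "openconfig-interfaces"
    path = "/interfaces/interface/state/counters"
    subscription_mode = "sample"
    sample_interval = "10s"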

Clarify

  1. Automated Data Monitoring: By integrating the Clarify plugin with sensor data collection, organizations can automate the monitoring of environmental conditions, such as temperature and humidity. The plugin processes metrics in real-time, sending updates to Clarify where they can be analyzed for trends, alerts, and historical tracking. This use case makes it easier to maintain optimal conditions in data centers or production environments, reducing the risk of equipment failures.

  2. Performance Metrics Analysis: Companies can leverage this plugin to send application performance metrics to Clarify. By transmitting key indicators such as response times and error rates, developers and operations teams can utilize Clarify’s capabilities to visualize and analyze application performance over time. This insight can drive improvements in user experience and help identify areas in need of optimization.

  3. Sensor Data Aggregation: Utilizing the plugin to push data from multiple sensors to Clarify allows for a comprehensive view of physical environments. This aggregation is particularly beneficial in sectors such as agriculture, where metrics from various sensors can be correlated to decision-making about resource allocations, pest control, and crop management. The plugin ensures the data is accurately mapped and transformed for effective analysis.

  4. Real-Time Alerts and Notifications: Implement the Clarify plugin to trigger real-time alerts based on predefined thresholds within the metrics being sent. For instance, if temperature readings exceed certain levels, alerts can be generated and sent to operational staff. This proactive approach allows for immediate responses to potential issues, enhancing operational reliability and safety.
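
As a sketch of the environmental-monitoring case above, the configuration below pairs Telegraf's temp input with the Clarify output. It assumes the host exposes hardware temperature sensors and that the resulting metrics carry a sensor tag (as the temp plugin typically provides), so each physical sensor becomes its own Clarify signal:

[[inputs.temp]]
  ## Reads hardware temperature sensors on supported platforms;
  ## readings are tagged with the sensor name

[[outputs.clarify]]
  credentials_file = "/path/to/clarify/credentials.json"

  ## Keep one Clarify signal per physical sensor by including the
  ## "sensor" tag in the generated signal ID
  id_tags = ["sensor"]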

Feedback

Thank you for being part of our community! If you have any general feedback or found any bugs on these pages, we welcome and encourage your input. Please submit your feedback in the InfluxDB community Slack.


Related Integrations

HTTP and InfluxDB Integration

The HTTP plugin collects metrics from one or more HTTP(S) endpoints. It supports various authentication methods and configuration options for data formats.

View Integration

Kafka and InfluxDB Integration

This plugin reads messages from Kafka and allows the creation of metrics based on those messages. It supports various configurations including different Kafka settings and message processing options.

View Integration

Kinesis and InfluxDB Integration

The Kinesis plugin allows for reading metrics from AWS Kinesis streams. It supports multiple input data formats and offers checkpointing features with DynamoDB for reliable message processing.

View Integration