Hashicorp Nomad and Sensu Integration

Powerful performance with an easy integration, powered by Telegraf, the open source data connector built by InfluxData.

Info: This is not the recommended configuration for real-time query at scale. For query and compression optimization, high-speed ingest, and high availability, you may want to consider Nomad and InfluxDB.

5B+ Telegraf downloads

#1 time series database (Source: DB Engines)

1B+ downloads of InfluxDB

2,800+ contributors

Powerful Performance, Limitless Scale

Collect, organize, and act on massive volumes of high-velocity data with InfluxDB, the #1 time series platform built to scale with Telegraf. Any data is more valuable when you think of it as time series data.

See Ways to Get Started

Input and output integration overview

This plugin allows users to collect metrics from Hashicorp Nomad agents in distributed environments.

This plugin writes metrics events to Sensu via its HTTP events API, enabling seamless integration with the Sensu monitoring platform.

Integration details

Hashicorp Nomad

The Hashicorp Nomad input plugin gathers metrics from every Nomad agent in a cluster. By deploying Telegraf on each node, it connects to the local Nomad agent, typically available at http://127.0.0.1:4646. With this setup, users can systematically collect and monitor metrics on the performance and status of their Nomad environment, helping them keep the cluster healthy and efficient. The plugin provides visibility into the operational aspects of Nomad, which is essential for maintaining reliable cloud infrastructure.
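
To make that per-node deployment concrete, here is a minimal sketch of a Telegraf configuration for a single node. It assumes the Nomad agent listens on its default local address; the datacenter and node tag values are placeholders to adjust (or template) per node.

[global_tags]
  ## Placeholder tags that identify this node in every collected metric
  datacenter = "dc1"
  nomad_node = "client-01"

[[inputs.nomad]]
  ## Local Nomad agent on this node (default HTTP address)
  url = "http://127.0.0.1:4646"
  response_timeout = "5s"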

Sensu

This plugin writes metrics events to Sensu via its HTTP events API. Sensu is a monitoring system that enables users to collect, analyze, and manage metrics from various components in their infrastructure. The plugin facilitates the integration of Telegraf, a server agent for collecting and reporting metrics, with the Sensu monitoring platform. Users can configure settings such as backend and agent API URLs, API keys for authentication, and optional TLS settings. The plugin’s core functionality is centered around sending metric events, including check and entity specifications, to Sensu, allowing for comprehensive monitoring and alerting. The API reference provides extensive details about the events and metrics structure, ensuring users can efficiently leverage Sensu’s capabilities for observability and incident response.
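
As a quick illustration of the two delivery modes described above, the sketch below shows the minimal settings for each. The URLs mirror the defaults in the reference configuration further down, and ${SENSU_API_KEY} is assumed to hold a key generated with sensuctl.

## Option A (sketch): send events through a local Sensu agent; no API key needed.
[[outputs.sensu]]
  agent_api_url = "http://127.0.0.1:3031"

  [outputs.sensu.check]
    name = "telegraf"

## Option B (sketch): send events directly to the Sensu backend with an API key.
# [[outputs.sensu]]
#   backend_api_url = "http://127.0.0.1:8080"
#   api_key = "${SENSU_API_KEY}"
#
#   [outputs.sensu.check]
#     name = "telegraf"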

Configuration

Hashicorp Nomad

[[inputs.nomad]]
  ## URL for the Nomad agent
  # url = "http://127.0.0.1:4646"

  ## Set response_timeout (default 5 seconds)
  # response_timeout = "5s"

  ## Optional TLS Config
  # tls_ca = "/path/to/cafile"
  # tls_cert = "/path/to/certfile"
  # tls_key = "/path/to/keyfile"

Sensu

[[outputs.sensu]]
  ## BACKEND API URL is the Sensu Backend API root URL to send metrics to
  ## (protocol, host, and port only). The output plugin will automatically
  ## append the corresponding backend API path
  ## (/api/core/v2/namespaces/:entity_namespace/events/:entity_name/:check_name).
  ##
  ## Backend Events API reference:
  ## https://docs.sensu.io/sensu-go/latest/api/events/
  ##
  ## AGENT API URL is the Sensu Agent API root URL to send metrics to
  ## (protocol, host, and port only). The output plugin will automatically
  ## append the corresponding agent API path (/events).
  ##
  ## Agent API Events API reference:
  ## https://docs.sensu.io/sensu-go/latest/api/events/
  ##
  ## NOTE: if backend_api_url and agent_api_url and api_key are set, the output
  ## plugin will use backend_api_url. If backend_api_url and agent_api_url are
  ## not provided, the output plugin will default to use an agent_api_url of
  ## http://127.0.0.1:3031
  ##
  # backend_api_url = "http://127.0.0.1:8080"
  # agent_api_url = "http://127.0.0.1:3031"

  ## API KEY is the Sensu Backend API token
  ## Generate a new API token via:
  ##
  ## $ sensuctl cluster-role create telegraf --verb create --resource events,entities
  ## $ sensuctl cluster-role-binding create telegraf --cluster-role telegraf --group telegraf
  ## $ sensuctl user create telegraf --group telegraf --password REDACTED
  ## $ sensuctl api-key grant telegraf
  ##
  ## For more information on Sensu RBAC profiles & API tokens, please visit:
  ## - https://docs.sensu.io/sensu-go/latest/reference/rbac/
  ## - https://docs.sensu.io/sensu-go/latest/reference/apikeys/
  ##
  # api_key = "${SENSU_API_KEY}"

  ## Optional TLS Config
  # tls_ca = "/etc/telegraf/ca.pem"
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"
  ## Use TLS but skip chain & host verification
  # insecure_skip_verify = false

  ## Timeout for HTTP message
  # timeout = "5s"

  ## HTTP Content-Encoding for write request body, can be set to "gzip" to
  ## compress body or "identity" to apply no encoding.
  # content_encoding = "identity"

  ## NOTE: Due to the way TOML is parsed, tables must be at the END of the
  ## plugin definition, otherwise additional config options are read as part of
  ## the table

  ## Sensu Event details
  ##
  ## Below are the event details to be sent to Sensu.  The main portions of the
  ## event are the check, entity, and metrics specifications. For more information
  ## on Sensu events and its components, please visit:
  ## - Events - https://docs.sensu.io/sensu-go/latest/reference/events
  ## - Checks -  https://docs.sensu.io/sensu-go/latest/reference/checks
  ## - Entities - https://docs.sensu.io/sensu-go/latest/reference/entities
  ## - Metrics - https://docs.sensu.io/sensu-go/latest/reference/events#metrics
  ##
  ## Check specification
  ## The check name is the name to give the Sensu check associated with the event
  ## created. This maps to check.metadata.name in the event.
  [outputs.sensu.check]
    name = "telegraf"

  ## Entity specification
  ## Configure the entity name and namespace, if necessary. This will be part of
  ## the entity.metadata in the event.
  ##
  ## NOTE: if the output plugin is configured to send events to a
  ## backend_api_url and entity_name is not set, the value returned by
  ## os.Hostname() will be used; if the output plugin is configured to send
  ## events to an agent_api_url, entity_name and entity_namespace are not used.
  # [outputs.sensu.entity]
  #   name = "server-01"
  #   namespace = "default"

  ## Metrics specification
  ## Configure the tags for the metrics that are sent as part of the Sensu event
  # [outputs.sensu.tags]
  #   source = "telegraf"

  ## Configure the handler(s) for processing the provided metrics
  # [outputs.sensu.metrics]
  #   handlers = ["influxdb","elasticsearch"]
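
Putting the two halves together, a minimal end-to-end pipeline might look like the following sketch. It assumes Telegraf runs on a Nomad node with a Sensu agent listening on its default local port; the collection interval and handler name are illustrative.

[agent]
  ## Illustrative collection and flush cadence
  interval = "10s"
  flush_interval = "10s"

[[inputs.nomad]]
  url = "http://127.0.0.1:4646"

[[outputs.sensu]]
  agent_api_url = "http://127.0.0.1:3031"
  timeout = "5s"

  [outputs.sensu.check]
    name = "telegraf"

  ## Optional: route the event's metric points to a Sensu handler
  # [outputs.sensu.metrics]
  #   handlers = ["influxdb"]

A configuration like this can be sanity-checked with telegraf --config <file> --test, which collects and prints metrics once without sending anything to Sensu.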

Input and output integration examples

Hashicorp Nomad

  1. Cluster Health Monitoring: Use the Hashicorp Nomad plugin to aggregate metrics across all nodes in a Nomad deployment. By monitoring health metrics such as allocation status, job performance, and resource utilization, operations teams can gain insights into the overall health of their deployment, quickly identify and resolve issues, and optimize resource allocation based on real-time data.

  2. Performance Analytics for Job Execution: Leverage the metrics provided by Nomad to analyze job execution times and resource consumption. This use case enables developers to adjust job parameters effectively, optimize task performance, and illustrate trends over time, ultimately leading to increased efficiency and reduced costs in resource allocation.

  3. Alerting on Critical Conditions: Implement alerting mechanisms based on metrics scraped from Nomad agents. By setting thresholds for critical metrics like CPU usage or failed job allocations, teams can proactively respond to potential issues before they escalate, ensuring higher uptime and reliability for applications running on the Nomad platform. A metric-filtering sketch follows this list.

  4. Integration with Visualization Tools: Use the data collected by the Hashicorp Nomad plugin to feed into visualization tools for real-time dashboards. This setup allows teams to monitor cluster workloads, job states, and system performance at a glance, facilitating better decision-making and strategic planning based on visual insights into the Nomad environment.
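
As a sketch of the metric filtering that the alerting scenario above assumes, Telegraf's standard filtering options can narrow collection to the series you plan to alert on. The namepass pattern below is hypothetical; verify the measurement names the plugin actually emits in your environment before relying on it.

[[inputs.nomad]]
  url = "http://127.0.0.1:4646"
  ## Hypothetical filter: keep only Nomad-related measurements that feed
  ## CPU and allocation alerts; confirm real measurement names first.
  namepass = ["nomad*"]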

Sensu

  1. Real-Time Infrastructure Monitoring: Utilize the Sensu plugin to send performance metrics from various servers and services directly to Sensu. This real-time data flow enables teams to visualize infrastructure health, track resource usage, and receive immediate alerts for any anomalies detected. By centralizing monitoring through Sensu, organizations can create a holistic view of their systems and respond swiftly to issues.

  2. Automated Incident Response Workflows: Leverage the plugin to automatically trigger incident response workflows based on the metrics events sent to Sensu. For example, if CPU usage exceeds a defined threshold, the Sensu system can be configured to alert the operations team, which can then initiate automated remediation processes, reducing downtime and maintaining system reliability. This integration allows for proactive management of system resources. A handler-routing sketch follows this list.

  3. Dynamic Scaling of Resources: Use the Sensu plugin to feed metrics into an auto-scaling system that adjusts resources based on demand. By tracking metrics like request load and resource utilization, organizations can automatically scale their infrastructure up or down, ensuring optimal performance and cost efficiency without manual intervention.

  4. Centralized Logging and Monitoring: Combine the Sensu plugin with logging tools to send logs and performance metrics to a centralized monitoring system. This comprehensive approach allows teams to correlate logs with metric events, providing deeper insights into system behavior and performance, which aids in troubleshooting and performance optimization over time.
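
As a sketch of the handler wiring that the incident-response workflow above relies on, the tag and metrics tables on the output control how Sensu routes the resulting events. The "remediate" handler name is hypothetical and must exist in your Sensu configuration, and the namepass filter is illustrative.

[[outputs.sensu]]
  backend_api_url = "http://127.0.0.1:8080"
  api_key = "${SENSU_API_KEY}"
  ## Illustrative filter: only forward CPU-related measurements to Sensu
  namepass = ["cpu*"]

  [outputs.sensu.check]
    name = "telegraf"

  [outputs.sensu.tags]
    source = "telegraf"

  ## Hypothetical handler that triggers a remediation pipeline in Sensu
  [outputs.sensu.metrics]
    handlers = ["remediate"]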

Feedback

Thank you for being part of our community! If you have any general feedback or found any bugs on these pages, we welcome and encourage your input. Please submit your feedback in the InfluxDB community Slack.


Related Integrations

HTTP and InfluxDB Integration

The HTTP plugin collects metrics from one or more HTTP(S) endpoints. It supports various authentication methods and configuration options for data formats.

View Integration

Kafka and InfluxDB Integration

This plugin reads messages from Kafka and allows the creation of metrics based on those messages. It supports various configurations including different Kafka settings and message processing options.

View Integration

Kinesis and InfluxDB Integration

The Kinesis plugin allows for reading metrics from AWS Kinesis streams. It supports multiple input data formats and offers checkpointing features with DynamoDB for reliable message processing.

View Integration