Configuring the Alerting Plugin in InfluxDB 3

Monitoring starts with data, but action depends on timely alerts. When an alerting workflow relies on scheduled queries or external checks, engineers miss short windows where values shift and conditions form.

The alerting plugin closes that gap by evaluating alert rules inside InfluxDB 3 as new values arrive, enabling faster detection and more responsive monitoring.

In this tutorial, we’ll walk through configuring the alerting plugin for InfluxDB 3, defining alert logic that evaluates incoming time series data, and emitting alert events that other systems can process. We’ll start by enabling the plugin and creating a basic threshold rule, then move to a short-window evaluation that reacts to patterns rather than single points. You’ll generate test data, inspect outputs, and confirm that alert conditions evaluate in real time. By the end, you’ll have an alerting workflow that you can adapt to your metrics and environments.

How it works

The alerting plugin evaluates each new value as it enters InfluxDB 3. After you define a rule that specifies the condition you want to detect, the plugin compares each incoming point against that rule. When the condition is met, the plugin emits a structured alert event that downstream systems can act on to trigger notifications, automate responses, predict events, or feed a broader event-driven workflow.

Inside InfluxDB 3, the plugin subscribes to the ingestion path and receives points as they arrive. For windowed rules, it maintains a small in-memory buffer of recent values to compute statistics such as an average, minimum, or delta. This makes it possible to detect short-term patterns—like sudden increases or persistent shifts—that individual points may not reveal. Because evaluation happens inline with ingestion, alerts fire as soon as the condition forms.
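The in-memory window described above can be sketched in a few lines of Python. This is a simplified illustration of the idea, not the plugin’s actual implementation; the `WindowRule` class and its methods are hypothetical:

```python
from collections import deque

class WindowRule:
    """Toy sketch of inline windowed evaluation: keep the last N values
    in a small buffer and test a condition against a statistic over them."""

    def __init__(self, size, statistic, threshold):
        self.buffer = deque(maxlen=size)  # small in-memory window of recent points
        self.statistic = statistic
        self.threshold = threshold

    def evaluate(self, value):
        self.buffer.append(value)
        if len(self.buffer) < self.buffer.maxlen:
            return False  # not enough points yet to fill the window
        if self.statistic == "avg":
            stat = sum(self.buffer) / len(self.buffer)
        elif self.statistic == "delta":
            stat = self.buffer[-1] - self.buffer[0]
        else:
            raise ValueError(f"unsupported statistic: {self.statistic}")
        return stat > self.threshold  # condition check against the statistic

rule = WindowRule(size=5, statistic="avg", threshold=80)
for v in [75, 78, 82, 85, 88]:
    fired = rule.evaluate(v)
print(fired)  # True: the final point pushes the 5-point average to 81.6
```

Because each incoming point only appends to a bounded buffer and recomputes one statistic, evaluation stays cheap enough to run inline with ingestion.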

When a rule matches, the alert event can also be routed through other plugins for additional processing, such as filtering, enrichment, or forwarding to automation systems. Keeping alert logic close to the data pipeline reduces reliance on scheduled queries or external services and ensures that alerts remain responsive under load.

Getting started

Requirements

To follow this tutorial, you’ll need:

  • An InfluxDB 3 instance
  • Access to the plugin directory or plugin configuration path
  • A dataset to monitor or a way to generate test values
  • Familiarity with writing and querying time series data in InfluxDB 3
  • A terminal for running example commands

If you prefer to test the alert rules with synthetic values, you can use any script or CLI tool that writes data to InfluxDB 3 at a regular interval.
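For example, a small Python loop can post line-protocol points on an interval. This sketch assumes the v2-compatible write API on a local instance; the URL, port, and bucket name are placeholders, and your deployment may also require an auth token header:

```python
import random
import time
import urllib.request

# Placeholder endpoint; adjust host, port, bucket, and auth for your setup.
INFLUX_URL = "http://localhost:8181/api/v2/write?bucket=sensor_data"

def line(measurement, field, value):
    """Format a single line-protocol point, e.g. 'sensor_data temperature=78'."""
    return f"{measurement} {field}={value}"

def write_point(lp):
    """POST one line-protocol string to the write endpoint."""
    req = urllib.request.Request(INFLUX_URL, data=lp.encode(), method="POST")
    urllib.request.urlopen(req)

if __name__ == "__main__":
    while True:
        # Jittered temperature values so windowed rules see gradual changes
        write_point(line("sensor_data", "temperature", round(70 + random.uniform(-5, 15), 1)))
        time.sleep(1)
```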

Step 1: Enable the alerting plugin

Start by confirming that your InfluxDB 3 Core or InfluxDB 3 Enterprise installation has access to the plugin configuration directory. Each plugin loads through a configuration file that specifies how to initialize the plugin and how it should integrate with the data pipeline.

Create a new configuration file for the alerting plugin, or update an existing one, to include the block below:

[plugins.alerting]
  enabled = true
  config_path = "/etc/influxdb3/plugins/alerting/config.yaml"

This block instructs InfluxDB 3 to load the alerting plugin at startup and to read rule definitions from the specified configuration file. After saving these changes, restart your InfluxDB 3 instance so the plugin loads and registers itself with the plugin pipeline.

If you’re new to the plugin ecosystem, enabling the alerting plugin follows the same pattern as other plugins that extend the data pipeline, such as downsampling or forecast error evaluator plugins. Each plugin uses a consistent model for configuration, registration, and integration with the ingestion path.

Step 2: Create a basic threshold rule

With the plugin enabled, we’ll next define a basic alert rule. A threshold rule checks whether an incoming value crosses a limit you care about, such as CPU usage rising above a set percentage. Rules live in the configuration file referenced in the plugin block you created earlier.

Below is a simple example that watches a cpu_usage field and triggers an alert if the value goes above 90:

rules:
  - id: high_cpu
    description: "CPU usage above 90 percent"
    measurement: "system_metrics"
    field: "cpu_usage"
    condition: "value > 90"

In this rule:

  • measurement and field identify the series to monitor
  • condition defines the comparison that the alert checks
  • id helps you track or reference the rule in downstream systems

When the alerting plugin loads this file, it parses the condition expression and registers it as part of the rule evaluation engine. Each time a new data point arrives for the matching measurement and field, the plugin substitutes the point’s value into the expression and evaluates it. If the expression returns true, the rule matches and the plugin emits an alert event.
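Conceptually, the substitute-and-evaluate step looks like this. The sketch below uses Python’s `eval` purely for illustration; a real rule engine would parse the expression rather than evaluate arbitrary strings, and the function name here is hypothetical:

```python
def check_condition(condition, value):
    """Substitute the incoming point's value into the rule expression and
    evaluate it. eval() with an empty builtins namespace is illustration
    only -- never evaluate untrusted expressions this way in production."""
    return bool(eval(condition, {"__builtins__": {}}, {"value": value}))

print(check_condition("value > 90", 95))  # True: rule matches, alert fires
print(check_condition("value > 90", 42))  # False: no alert
```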

After adding the rule, save the file so the alerting plugin can load it at startup or during the next configuration reload.

Step 3: Create a short-window rule

Threshold rules work well for single values, but some conditions only appear when you look at how values change over a short period of time. Short-window rules examine a small group of recent points to detect patterns such as spikes, drops, or sustained increases. The alerting plugin keeps this window in memory so it can compute the needed statistics.

The example below monitors temperature readings and triggers an alert when the average value over the last five points rises above 80:

rules:
  - id: rising_temperature
    description: "Average temperature over last 5 points exceeds 80"
    measurement: "sensor_data"
    field: "temperature"
    window:
      size: 5
      statistic: "avg"
    condition: "value > 80"

In this rule:

  • window.size defines how many recent points to include
  • window.statistic describes what the plugin should compute (average, minimum, maximum, delta, etc.)
  • The computed statistic becomes the value used in the condition expression

As each point arrives, the plugin updates the window. Once enough points are available, it computes the statistic and evaluates the condition. If the condition matches, the plugin emits an alert event.

Short-window rules are useful for detecting behavior that individual points may not reveal.

Step 4: Route and view alert events

When a rule matches, the alerting plugin generates a structured alert event. Each event includes the rule ID, the triggering value, the timestamp, and any tags associated with the series. These events move through the plugin pipeline, where they can be logged, forwarded, or processed by other systems.

A simple configuration writes alert events to a local log file:

[plugins.alerting.outputs.log]
  enabled = true
  path = "/var/log/influxdb3/alerts.log"

Logging is the quickest way to verify that a rule fires correctly. Each alert is written in a structured format so you can check the rule ID and the value that triggered it.
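When skimming the log during testing, a short script can pull out just the rule IDs. This sketch assumes one JSON event per line; the exact log format depends on the plugin’s output, so treat the field names as assumptions:

```python
import json

def fired_rules(log_lines):
    """Extract rule IDs from a structured alert log, assuming one JSON
    object per line with an 'id' field (format is an assumption)."""
    ids = []
    for entry in log_lines:
        try:
            ids.append(json.loads(entry)["id"])
        except (json.JSONDecodeError, KeyError, TypeError):
            continue  # skip lines that aren't well-formed alert events
    return ids

sample = ['{"id": "high_cpu", "value": 95}', 'not json']
print(fired_rules(sample))  # ['high_cpu']
```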

After confirming rule behavior, you can route alerts to systems that process events or trigger automation.

If you want alerts to flow into an event-processing system, you can send them to a message queue such as Kafka:

[plugins.alerting.outputs.kafka]
  enabled = true
  brokers = ["localhost:9092"]
  topic = "alert_events"

To trigger notifications or automation tools through an HTTP endpoint, configure a webhook output:

[plugins.alerting.outputs.webhook]
  enabled = true
  url = "https://example.com/alerts"
  method = "POST"

You can also forward events to custom plugins or internal consumers when you need domain-specific logic. Because alert events include rule metadata and series tags, downstream systems can filter or enrich them before taking action.
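To confirm webhook delivery end to end, you can point the webhook URL at a throwaway local receiver. The sketch below assumes alert events arrive as JSON POST bodies with `id` and `value` fields, which is an assumption about the payload shape:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class AlertHandler(BaseHTTPRequestHandler):
    """Print each incoming alert event so you can confirm webhook delivery."""

    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        try:
            event = json.loads(body)
            # Field names are assumptions about the alert event schema
            print("alert:", event.get("id"), event.get("value"))
        except json.JSONDecodeError:
            print("non-JSON payload:", body)
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    # Point the webhook output's url at http://<host>:8000/ to test
    HTTPServer(("0.0.0.0", 8000), AlertHandler).serve_forever()
```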

Choosing an output format

Different outputs serve different purposes, so it helps to choose the one that matches your workflow:

  • Logs: Best for development and confirming rule behavior
  • Kafka: Suited for scalable, asynchronous event processing
  • Webhooks: Good for triggering automation or notification services
  • Custom plugins: Ideal for internal logic or specialized processing
  • Multiple outputs: Enable more than one if you need alerts to reach both dev and production targets

Write a few test points during development to confirm that alerts appear where you expect them.

Step 5: Validate rule behavior with test data

Before using an alert rule in production, validate that it behaves as expected by writing controlled test data into InfluxDB 3. This confirms that the rule targets the correct measurement, field, and condition.

To test a threshold rule, write a value that intentionally exceeds the limit:

influx write \
  -b system_metrics \
  system_metrics cpu_usage=95

If the rule is configured correctly, the plugin should emit an alert event in your chosen output.

For windowed rules, write a short sequence of values that move the computed statistic toward the condition. For example, if the rule evaluates the average of the last five temperature readings:

influx write -b sensor_data sensor_data temperature=75
influx write -b sensor_data sensor_data temperature=78
influx write -b sensor_data sensor_data temperature=82
influx write -b sensor_data sensor_data temperature=85
influx write -b sensor_data sensor_data temperature=88

After the final point, the window should meet the rule’s condition and fire an alert.
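You can sanity-check the expected statistic before writing the points:

```python
# The five test values written above
readings = [75, 78, 82, 85, 88]
avg = sum(readings) / len(readings)
print(avg)  # 81.6, which exceeds the rule's threshold of 80
assert avg > 80
```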

Verify that:

  • the alert appears in your output target
  • the rule ID matches the rule you tested
  • the triggering value is correct

Testing with synthetic data helps ensure rules fire when expected and reduces noise as you move into production.

Best practices for alert rules

A few tips to help keep alerting accurate and reduce noise:

  • Target the right series using tags and specific fields.
  • Test rule sensitivity with controlled data before production.
  • Mix threshold and windowed rules to catch both spikes and gradual changes.
  • Keep window sizes small so alerts respond quickly.
  • Choose outputs that match your workflow, such as logs for development or queues/webhooks for production.
  • Enable multiple outputs if you want alerts to flow to both development and production destinations.

Next steps

The alerting plugin adds real-time rule evaluation to InfluxDB 3, helping you detect important changes as data arrives. With threshold and windowed rules, flexible routing options, and a straightforward way to validate behavior, you can build an alerting workflow that supports timely responses to shifting system conditions.

Ready to get started?