<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>InfluxData Blog - Developer</title>
    <description>Posts from the Developer category on the InfluxData Blog</description>
    <link>https://www.influxdata.com/blog/category/tech/</link>
    <language>en-us</language>
    <lastBuildDate>Fri, 08 May 2026 12:00:00 +0000</lastBuildDate>
    <pubDate>Fri, 08 May 2026 12:00:00 +0000</pubDate>
    <ttl>1800</ttl>
    <item>
      <title>A Runnable Reference Architecture for Battery Energy Storage Systems on InfluxDB 3</title>
      <description>&lt;p&gt;A battery is a complex electrochemical system where safety and revenue are decided in milliseconds. Cell temperatures, voltages, and state of charge change in real-time; dispatch decisions and thermal alarms must fire in real-time. Anything in between—your data pipeline, your historian, your alerting layer—has to disappear into the background.&lt;/p&gt;

&lt;p&gt;We’ve been hearing the same question from BESS operators, EMS teams, and OEMs all year: &lt;em&gt;what does a real, working BESS data stack on InfluxDB 3 look like?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;So we shipped one. Today, we’re walking through the &lt;a href="https://github.com/influxdata/influxdb3-ref-bess/?utm_source=website&amp;amp;utm_medium=bess_reference_architecture_influxdb3&amp;amp;utm_content=blog"&gt;InfluxDB 3 BESS Reference Architecture&lt;/a&gt;, an open source, runnable blueprint for battery energy storage that you can stand up locally in about two minutes with &lt;code class="language-markup"&gt;docker compose&lt;/code&gt;. It’s the second entry in our &lt;a href="https://github.com/influxdata/influxdb3-reference-architectures/?utm_source=website&amp;amp;utm_medium=bess_reference_architecture_influxdb3&amp;amp;utm_content=blog"&gt;reference architecture portfolio&lt;/a&gt;, and it’s been deliberately tuned to surface the InfluxDB 3 Enterprise capabilities that matter most when you’re operating cells, packs, and inverters.&lt;/p&gt;

&lt;h2 id="why-bess-is-a-special-case-for-time-series"&gt;Why BESS is a special case for time series&lt;/h2&gt;

&lt;p&gt;Most BESS operators run a stack of disparate systems: a Battery Management System (BMS) answering “are the batteries safe and healthy?”, a Power Conversion System (PCS) answering “can I deliver or absorb power?”, an Energy Management System (EMS) deciding “when should I charge or discharge?”, and a SCADA platform answering “what’s happening right now on site?” Each one works fine in isolation. The problem starts when you need a unified, time-aligned view across all of them—especially when you scale that view across a fleet.&lt;/p&gt;

&lt;p&gt;Three things make BESS data uniquely demanding:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;High entity cardinality&lt;/strong&gt;. A single utility-scale site might generate 50,000+ distinct signals. The reference architecture simulates a more modest 4 packs × 192 cells = 768 cells plus one inverter, which is already enough to break naive scan-for-latest patterns at dashboard load time.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Sub-second freshness requirements&lt;/strong&gt;. “Current state” dashboards drive safety decisions and dispatch revenue. If your “now” view is more than a second stale, your operators are flying blind.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Mixed cadences&lt;/strong&gt;. Cell readings stream at 1 Hz. Thermal alerts fire on every write. SoH rollups happen once per day. A good BESS database has to handle all three patterns natively.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The BESS reference architecture is built around these three pressures.&lt;/p&gt;

&lt;h2 id="whats-in-the-stack"&gt;What’s in the stack&lt;/h2&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/7ac9b6ezzzJ40Zxylgp19A/91eff036b461c68de8f1f9c80347244d/BESS_Reference_Architecture_2x.png" alt="reference arch diagram" /&gt;&lt;/p&gt;

&lt;p&gt;Clone the repo, run &lt;code class="language-markup"&gt;make up&lt;/code&gt;, and you get a working BESS monitoring stack, including a live pack heatmap UI, at &lt;code class="language-markup"&gt;http://localhost:8080&lt;/code&gt;. The whole thing is Python-first and stays small. &lt;code class="language-markup"&gt;docker-compose.yml&lt;/code&gt; brings up six services:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;token-bootstrap&lt;/code&gt;: generates the offline admin token on first boot.&lt;/li&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;bess-influxdb3&lt;/code&gt;: InfluxDB 3 Enterprise is the database and runtime for the Python plugins.&lt;/li&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;influxdb3-init&lt;/code&gt;: idempotent bootstrap that creates the database, declares tables, registers caches, and installs Processing Engine triggers.&lt;/li&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;bess-simulator&lt;/code&gt;: Python simulator generating realistic pack/cell/inverter telemetry at roughly 2,000 points per second.&lt;/li&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;bess-ui&lt;/code&gt;: a FastAPI + HTMX + uPlot dashboard polling small partial templates every 1–5 seconds.&lt;/li&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;Scenarios&lt;/code&gt;: on-demand event injectors (thermal_runaway, cell_drift) for replaying realistic faults.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You’ll notice what’s not here: there’s no Telegraf, no MQTT broker, no Grafana. That’s intentional. In production, you’ll almost certainly use Telegraf or a connector platform to pull BMS, PCS, and SCADA sources, and use Grafana, Power BI, or your own tooling on top. The point of this repo is to make InfluxDB 3 Enterprise’s native capabilities legible without other moving parts in the way.&lt;/p&gt;

&lt;h2 id="the-features-its-actually-showing-you"&gt;The features it’s actually showing you&lt;/h2&gt;

&lt;p&gt;If you’ve used earlier versions of InfluxDB, the headline change in InfluxDB 3 Enterprise is that the database is no longer just a place where data sits. Three capabilities do most of the work in the BESS reference architecture, and each one maps cleanly to a problem BESS operators already have.&lt;/p&gt;

&lt;h4 id="last-value-cache--sub-millisecond-pack-heatmaps"&gt;1. Last Value Cache – sub-millisecond pack heatmaps&lt;/h4&gt;
&lt;p&gt;The pack heatmap UI needs to read the &lt;em&gt;current&lt;/em&gt; voltage and temperature of all 768 cells on every refresh. Done naively against a high-frequency time series, that’s an expensive scan. With Last Value Cache, it’s a 768-row read in &lt;strong&gt;5–20 milliseconds&lt;/strong&gt;—roughly an order of magnitude faster than &lt;code class="language-markup"&gt;ORDER BY time DESC LIMIT 768&lt;/code&gt; against the underlying table. Even better, &lt;em&gt;the cost stays flat as history grows&lt;/em&gt;.
The UI’s actual query is:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-sql"&gt;SELECT pack_id, module_id, cell_id, voltage, temperature_c
FROM last_cache('cell_readings', 'cell_last')
ORDER BY pack_id, module_id, cell_id;&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This is the pattern you reach for any time you need the &lt;em&gt;current value&lt;/em&gt;, &lt;em&gt;right now&lt;/em&gt;: state of charge, alarm severity, inverter status, or cell-level thermal conditions. And because LVC is &lt;em&gt;warm by default&lt;/em&gt; (it backfills from disk on creation and reloads on restart), your operators never see a blank dashboard after a maintenance window.&lt;/p&gt;
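&lt;p&gt;If you’re wiring this into your own tooling rather than the bundled UI, the same read works from the InfluxDB 3 Python client. Here’s a minimal sketch; the host, token, and database name are placeholders, and the table and cache names are the ones the repo registers:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;from influxdb_client_3 import InfluxDBClient3

# Placeholder connection details; use your own host, token, and database
client = InfluxDBClient3(host="localhost:8181", token="YOUR_TOKEN", database="bess")

# Read the current value for every cell straight from the Last Value Cache
current = client.query(
    """
    SELECT pack_id, module_id, cell_id, voltage, temperature_c
    FROM last_cache('cell_readings', 'cell_last')
    ORDER BY pack_id, module_id, cell_id
    """,
    mode="pandas",
)
print(current.head())&lt;/code&gt;&lt;/pre&gt;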

&lt;h4 id="distinct-value-cache--fast-inventory-queries"&gt;2. Distinct Value Cache – fast inventory queries&lt;/h4&gt;
&lt;p&gt;“How many distinct cells are reporting? Which ones are missing?” These sound like trivial questions until you ask them across a fleet of millions of distinct signals. Distinct Value Cache turns them into millisecond lookups:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-sql"&gt;SELECT cell_id FROM distinct_cache('cell_readings', 'cell_id_distinct');&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;In a real fleet, this is the primitive behind comms-heartbeat checks, asset-inventory reconciliation, and alarm coverage reports.&lt;/p&gt;
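&lt;p&gt;As a sketch of what that heartbeat check might look like in practice (the connection details and the expected cell-naming scheme below are assumptions, not something the repo prescribes):&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;from influxdb_client_3 import InfluxDBClient3

client = InfluxDBClient3(host="localhost:8181", token="YOUR_TOKEN", database="bess")

# Cells that have reported at least once, straight from the Distinct Value Cache
reporting = client.query(
    "SELECT cell_id FROM distinct_cache('cell_readings', 'cell_id_distinct')",
    mode="pandas",
)["cell_id"].tolist()

# Hypothetical inventory of the 768 cells you expect to see
expected = {f"cell_{i:03d}" for i in range(1, 769)}
missing = sorted(expected - set(reporting))

print(f"{len(reporting)} cells reporting, {len(missing)} missing")&lt;/code&gt;&lt;/pre&gt;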

&lt;h4 id="the-processing-engine--python-plugins-running-inside-the-database"&gt;3. The Processing Engine – Python plugins running inside the database&lt;/h4&gt;
&lt;p&gt;The &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/reference/processing-engine/"&gt;Processing Engine&lt;/a&gt; is an embedded Python virtual machine that runs inside the InfluxDB 3 server. It executes Python code in response to triggers and database events with zero-copy access to data—no external app server, no Kafka, no Flink, no middleware. Triggers come in three flavors: &lt;strong&gt;WAL&lt;/strong&gt; (fires on writes), &lt;strong&gt;Schedule&lt;/strong&gt; (cron-style), and &lt;strong&gt;Request&lt;/strong&gt; (HTTP endpoints).
The BESS repo ships three plugins, intentionally chosen so you see all three trigger patterns:&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/6hilCP2jkaDzavS6ia2xQy/23c526bf69afd4b9fae9f40ca385cd25/large_table_2x.png" alt="BESS trigger patterns" /&gt;&lt;/p&gt;

&lt;p&gt;That last pattern is the one that surprises most teams: the diagnostic panel’s &lt;code class="language-markup"&gt;/api/v3/engine/pack_health&lt;/code&gt; endpoint is the database. There’s no Flask service in front of it. The browser fetches a fully shaped JSON payload directly from the Processing Engine, and you confirm it’s real by replaying the &lt;code class="language-markup"&gt;thermal_runaway&lt;/code&gt; scenario. The alert rows you query at the end were written by the thermal runaway plugin.&lt;/p&gt;

&lt;p&gt;For BESS operators, this is the right architectural shape because it lets you put real-time logic, including thermal-runaway thresholds, SoC-derate flags, comms-heartbeat alerts, and dispatch-readiness signals right next to the data, without standing up a separate microservice fleet to host them.&lt;/p&gt;
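&lt;p&gt;To make that concrete, here’s a heavily simplified sketch of what a WAL-trigger plugin can look like. It follows the documented &lt;code class="language-markup"&gt;process_writes&lt;/code&gt; entry point and writes alert rows the way the repo’s &lt;code class="language-markup"&gt;wal_thermal_runaway.py&lt;/code&gt; does, but the threshold and field handling here are illustrative rather than copied from the repo:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# Simplified WAL plugin sketch: fires on every write batch to the database.
# LineBuilder and influxdb3_local are provided by the Processing Engine runtime.
THRESHOLD_C = 60.0  # illustrative thermal threshold

def process_writes(influxdb3_local, table_batches, args=None):
    for batch in table_batches:
        if batch["table_name"] != "cell_readings":
            continue
        for row in batch["rows"]:
            temp = row.get("temperature_c")
            if temp is not None and temp &amp;gt; THRESHOLD_C:
                # Write an alert row next to the data, no external service involved
                alert = LineBuilder("alerts")
                alert.tag("pack_id", str(row.get("pack_id")))
                alert.float64_field("value", float(temp))
                influxdb3_local.write(alert)
                influxdb3_local.info(f"thermal alert: pack {row.get('pack_id')} at {temp}C")&lt;/code&gt;&lt;/pre&gt;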

&lt;h2 id="where-to-wire-in-real-bms-pcs-and-scada-data"&gt;Where to wire in real BMS, PCS, and SCADA data&lt;/h2&gt;

&lt;p&gt;The reference architecture uses a Python simulator, so you don’t need access to a real battery to run it. In production, your data is on the wire in industrial protocols:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;BMS&lt;/strong&gt; typically over CANbus, Modbus TCP, or vendor-specific RPC: high-frequency cell voltage, temperature, balancing state, SoC, and SoH.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;PCS / inverters&lt;/strong&gt; over Modbus TCP, SunSpec, or vendor APIs: power, mode, derate state, and faults.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;SCADA / EMS&lt;/strong&gt; over OPC UA, MQTT, or Modbus: site-level alarms, dispatch signals, market schedules, and environmental conditions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The recommended ingest layer is &lt;strong&gt;Telegraf&lt;/strong&gt; at the edge or in your DMZ, with its OPC UA, Modbus, MQTT, and HTTP plugins performing collection and normalization. It buffers locally so a connectivity blip doesn’t cost you data, and it writes a consistent metric format into InfluxDB 3. If you’d rather skip Telegraf entirely for OPC UA equipment, the &lt;a href="https://github.com/influxdata/influxdb3_plugins/tree/main/influxdata/opcua/?utm_source=website&amp;amp;utm_medium=bess_reference_architecture_influxdb3&amp;amp;utm_content=blog"&gt;InfluxDB 3 OPC UA Plugin&lt;/a&gt; is a Processing Engine plugin that connects to an OPC UA server and writes directly into the database—one fewer process to operate. Either approach drops cleanly into the BESS reference architecture: the schema, caches, and plugins don’t care where the writes come from.&lt;/p&gt;

&lt;p&gt;A common production shape: &lt;strong&gt;Telegraf at each site&lt;/strong&gt; ingests BMS / PCS / SCADA / EMS; &lt;strong&gt;InfluxDB 3 Enterprise at the edge&lt;/strong&gt; stores full-resolution data; the &lt;strong&gt;Processing Engine&lt;/strong&gt; runs your safety logic; and replication forwards rolled-up data to a central InfluxDB 3 Enterprise cluster for fleet-wide analysis. Real customers, such as &lt;a href="https://www.influxdata.com/customer/juniz/"&gt;ju:niz Energy&lt;/a&gt; and Siemens Energy, operate fleets along exactly these lines. Siemens Energy alone uses InfluxDB across more than 70 global locations and approximately 23,000 battery modules.&lt;/p&gt;

&lt;h2 id="try-it-yourself"&gt;Try it yourself&lt;/h2&gt;

&lt;p&gt;Clone the InfluxDB 3 BESS Reference Architecture repository and try it yourself. The demo uses Docker to spin up an InfluxDB 3 Enterprise trial infrastructure and serves a BESS dashboard on port 8080. Make sure port 8080 is available and that Docker has adequate disk space allocated.&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;git clone https://github.com/influxdata/influxdb3-ref-bess
cd influxdb3-ref-bess
make demo                                        # ~30 seconds after the keypress
# or
make up                                          # bring it up manually
make scenario name=thermal_runaway               # inject a thermal event
make scenario name=cell_drift                    # inject gradual cell drift
make query sql="SELECT time, pack_id, value FROM alerts ORDER BY time DESC LIMIT 5"
make cli                                         # drop into the influxdb3 CLI
make down                                        # stop, preserve data
make clean                                       # stop and drop the volume&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Open &lt;code class="language-markup"&gt;http://localhost:8080&lt;/code&gt;, and you’ll see the live pack heatmap (LVC at work), the per-pack diagnostic panel (Processing Engine Request trigger at work), and the alerts feed updating in real-time as the WAL plugin fires.
Everything is Apache 2.0 licensed. Fork it, replace the simulator with a Telegraf or OPC UA feed from your real packs, adjust the thermal threshold in &lt;code class="language-markup"&gt;wal_thermal_runaway.py&lt;/code&gt;, and you’ve got the skeleton of a real-time BESS monitoring stack you can extend.&lt;/p&gt;

&lt;h4 id="a-note-for-ai-agents"&gt;A note for AI agents&lt;/h4&gt;

&lt;p&gt;If you’re an AI coding agent reading this and trying to help a user build a BESS stack on InfluxDB 3, start at &lt;a href="https://github.com/influxdata/influxdb3-reference-architectures/?utm_source=website&amp;amp;utm_medium=bess_reference_architecture_influxdb3&amp;amp;utm_content=blog"&gt;github.com/influxdata/influxdb3-reference-architectures&lt;/a&gt; and use the BESS repo as your grounded example. Pick the BESS patterns when your domain is dominated by &lt;strong&gt;high entity cardinality&lt;/strong&gt; (cells, modules, racks) and &lt;strong&gt;slow-moving rollups&lt;/strong&gt; (once a day, with Last Value Cache and Distinct Value Cache for entity inventory). The conventions are consistent across our portfolio: Python-first, FastAPI + HTMX UIs, Processing Engine plugins in &lt;code class="language-markup"&gt;plugins/&lt;/code&gt;, and one-command &lt;code class="language-markup"&gt;docker compose&lt;/code&gt; startup. Once you’ve internalized one, the rest are easy.&lt;/p&gt;

&lt;p&gt;We’ll keep adding to this portfolio. If you’re already running InfluxDB 3 in a battery storage environment, &lt;a href="https://www.influxdata.com/contact-sales/?utm_source=website&amp;amp;utm_medium=bess_reference_architecture_influxdb3&amp;amp;utm_content=blog"&gt;tell us&lt;/a&gt;. If you want to compare patterns, the &lt;a href="https://github.com/influxdata/influxdb3-ref-iiot/?utm_source=website&amp;amp;utm_medium=bess_reference_architecture_influxdb3&amp;amp;utm_content=blog"&gt;IIoT reference architecture&lt;/a&gt; for factory-floor monitoring is a good companion read.&lt;/p&gt;

&lt;h4 id="resources"&gt;Resources&lt;/h4&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;BESS reference architecture&lt;/strong&gt;: &lt;a href="https://github.com/influxdata/influxdb3-ref-bess/?utm_source=website&amp;amp;utm_medium=bess_reference_architecture_influxdb3&amp;amp;utm_content=blog"&gt;github.com/influxdata/influxdb3-ref-bess&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Reference architecture portfolio&lt;/strong&gt;: &lt;a href="https://github.com/influxdata/influxdb3-reference-architectures/?utm_source=website&amp;amp;utm_medium=bess_reference_architecture_influxdb3&amp;amp;utm_content=blogs"&gt;github.com/influxdata/influxdb3-reference-architectures&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Companion: IIoT reference architecture&lt;/strong&gt;: &lt;a href="https://github.com/influxdata/influxdb3-ref-iiot/?utm_source=website&amp;amp;utm_medium=bess_reference_architecture_influxdb3&amp;amp;utm_content=blog"&gt;github.com/influxdata/influxdb3-ref-iiot&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;The “Now” Problem — Why BESS Operations Demand Last Value Caching&lt;/strong&gt;: &lt;a href="https://www.influxdata.com/blog/bess-last-value-caching/?utm_source=website&amp;amp;utm_medium=bess_reference_architecture_influxdb3&amp;amp;utm_content=blog"&gt;influxdata.com/blog/bess-last-value-caching&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Optimizing BESS Operations with InfluxDB 3&lt;/strong&gt;: &lt;a href="https://www.influxdata.com/blog/optimizing-bess-operations-influxdb-3/?utm_source=website&amp;amp;utm_medium=bess_reference_architecture_influxdb3&amp;amp;utm_content=blog"&gt;influxdata.com/blog/optimizing-bess-operations-influxdb-3&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Unifying Telemetry in BESS&lt;/strong&gt;: &lt;a href="https://www.influxdata.com/blog/unified-telemetry-BESS/?utm_source=website&amp;amp;utm_medium=bess_reference_architecture_influxdb3&amp;amp;utm_content=blog"&gt;influxdata.com/blog/unified-telemetry-BESS&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Processing Engine reference&lt;/strong&gt;: &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/reference/processing-engine/"&gt;docs.influxdata.com/influxdb3/enterprise/reference/processing-engine&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;OPC UA Plugin&lt;/strong&gt;: &lt;a href="https://github.com/influxdata/influxdb3_plugins/tree/main/influxdata/opcua/?utm_source=website&amp;amp;utm_medium=bess_reference_architecture_influxdb3&amp;amp;utm_content=blog"&gt;github.com/influxdata/influxdb3_plugins/tree/main/influxdata/opcua&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</description>
      <pubDate>Fri, 08 May 2026 12:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/bess-reference-architecture-influxdb3/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/bess-reference-architecture-influxdb3/</guid>
      <category>Developer</category>
      <author>Ian Clark (InfluxData)</author>
    </item>
    <item>
      <title>What's New in InfluxDB 3 Explorer 1.8: Streaming Subscriptions, Smarter Sample Data, Line Protocol Validation, and Retention Controls</title>
      <description>&lt;p&gt;InfluxDB 3 Explorer 1.8 is all about writing data and keeping it under control. You can now subscribe to MQTT, Kafka, and AMQP streams directly from Explorer, generate custom sample datasets, stream live sample data continuously into your database, and validate your line protocol and preview the resulting schema before you write it. You can now also view and edit retention periods on both databases and individual tables.&lt;/p&gt;

&lt;h2 id="data-subscriptions-stream-from-mqtt-kafka-and-amqp"&gt;Data Subscriptions: stream from MQTT, Kafka, and AMQP&lt;/h2&gt;

&lt;p&gt;InfluxDB 3 Explorer now includes a &lt;strong&gt;Data Subscriptions&lt;/strong&gt; page (powered by the &lt;a href="https://github.com/influxdata/influxdb3_plugins/blob/main/influxdata/mqtt_subscriber/README.md"&gt;MQTT&lt;/a&gt;, &lt;a href="https://github.com/influxdata/influxdb3_plugins/blob/main/influxdata/kafka_subscriber/README.md"&gt;Kafka&lt;/a&gt;, and &lt;a href="https://github.com/influxdata/influxdb3_plugins/blob/main/influxdata/amqp_subscriber/README.md"&gt;AMQP subscriber&lt;/a&gt; plugins) that lets you wire a streaming source directly into a database.&lt;/p&gt;

&lt;p&gt;Pick a provider, fill in configuration details, and Explorer installs and activates the right Processing Engine plugin behind the scenes. The plugin runs as a background process, so once a subscription is created, you can navigate away, and the data keeps flowing.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/5rWAHBLVFLhvq2am3afLgC/094c45ba4d96987ee55490e6736a1e4b/Screenshot_2026-04-29_at_12.35.33â__PM.png" alt="Data Subscriptions page SS" /&gt;&lt;/p&gt;

&lt;p&gt;The MQTT configuration covers a subscription name, target database, broker host and port, client ID, optional authentication and TLS, and the topics you want to subscribe to (one per line, with &lt;code class="language-markup"&gt;#&lt;/code&gt; and &lt;code class="language-markup"&gt;+&lt;/code&gt; wildcards supported). The &lt;strong&gt;Message Format&lt;/strong&gt; section lets you map incoming messages onto your schema. If your messages already arrive in &lt;code class="language-markup"&gt;Line Protocol&lt;/code&gt; format, you’re good to go; if not, you can parse &lt;code class="language-markup"&gt;JSON&lt;/code&gt; to map keys onto tags and fields, or extract values from &lt;code class="language-markup"&gt;Text&lt;/code&gt; using regex patterns.&lt;/p&gt;
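&lt;p&gt;If you want to test a subscription without real hardware, anything that can publish to the broker will do. For example, a few lines of Python with the paho-mqtt package (the broker address, topic, and measurement here are placeholders you’d match to your own subscription):&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;import paho.mqtt.publish as publish

# Publish a single line protocol message to a topic the subscription listens on.
# Because the payload is already line protocol, no JSON or Text mapping is needed.
publish.single(
    topic="sensors/office",
    payload="office_env,room=office temp=22.4,humidity=41.0",
    hostname="localhost",
    port=1883,
)&lt;/code&gt;&lt;/pre&gt;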

&lt;p&gt;Kafka and AMQP work the same way, with the connection details specific to each protocol. Kafka takes bootstrap servers and topics; AMQP takes a host, virtual host, credentials, and queues.
Once you’ve created a subscription, the &lt;strong&gt;Stream Status&lt;/strong&gt; tab gives you a single place to monitor your running subscriptions. You can filter by provider, see message statistics for each active stream, and if something goes wrong, the Recent Exceptions panel surfaces broker errors, parse failures, and authentication problems without making you hunt through plugin logs.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/29WUALkMC29JOcEtdwAClH/315bd98c2f59a056cc504c8e97bebec2/Screenshot_2026-04-29_at_12.39.02â__PM.png" alt="Data Subscriptions page 2 SS" /&gt;&lt;/p&gt;

&lt;p&gt;A note on requirements: Data Subscriptions need InfluxDB 3 Core or Enterprise running version &lt;strong&gt;3.9.0 or higher&lt;/strong&gt;.&lt;/p&gt;

&lt;h2 id="sample-data-three-ways"&gt;Sample data, three ways&lt;/h2&gt;

&lt;p&gt;The Write Sample Data page existed in earlier versions of Explorer, but it was thin: just a short list of presets that would write a few dozen lines to a database, with no real explanation of what they were or what to expect. In 1.8, the page gets a full rework, with an emphasis on making that first-time experience informative while keeping the two-click simplicity of quickly getting data in and getting going.&lt;/p&gt;

&lt;h4 id="static-sample-data-presets"&gt;Static Sample Data Presets&lt;/h4&gt;

&lt;p&gt;The previous preset datasets (Air Sensor, Bird Migration, Bitcoin, NOAA Weather) are still present, but selecting one now opens a details panel that shows you exactly what you’re about to write before you commit. A sample line of line protocol, with each component (measurement, tags, fields, timestamp) color-coded, shows what will be written. It’s then mapped to the resulting schema as a table with column types and roles: a preview of what it will look like in your database.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/5KACT5d9DKopSrDcbSNBvA/ec6e5c024bdd85297757c2bf68136285/Screenshot_2026-04-29_at_12.41.26â__PM.png" alt="Write Data Sample page SS" /&gt;&lt;/p&gt;

&lt;p&gt;The presets also generate a more realistic volume of data than before. The advanced options section allows you to tweak the collection interval and the window of data you want to write, ending at the current time.&lt;/p&gt;

&lt;h4 id="custom-datasets-with-a-dash-of-ai"&gt;Custom Datasets (with a Dash of AI)&lt;/h4&gt;

&lt;p&gt;The preset datasets aren’t your only option for quick sample data anymore. If you have an AI provider configured under Configure → Integrations, you can make use of the &lt;strong&gt;Custom dataset (AI)&lt;/strong&gt; option. Describe what you want in natural language (e.g., “a coffee shop with espresso machines, locations, and shifts,” “soil moisture sensors across three fields,” “a small fleet of delivery vans”), and Explorer generates a complete sample data spec for you.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/6Gnl7STwhBoyJqkqvHKsOR/609da727ea1252d9dfcf847a6d05907e/Screenshot_2026-04-29_at_12.42.58â__PM.png" alt="Write Sample Data page 2 SS" /&gt;
The output is a realistic, ready-to-use schema with appropriate measurement names, tags, fields, and types. After the initial generation, you can refine the spec with the &lt;code class="language-markup"&gt;Refine schema with AI&lt;/code&gt; input, where you can say things like “drop the locations tag” or “let’s make this about a tea shop instead,” and the spec updates in place, highlighting your changes. Just as with the preset sample data, the &lt;strong&gt;Advanced options&lt;/strong&gt; panel lets you set the interval and time window.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/2W4XE1PHivfzEGTixERQCT/a11326acc1cfefa4d970a3a9717c7101/Screenshot_2026-04-29_at_12.44.34â__PM.png" alt="Write Sample Data page 3 SS" /&gt;&lt;/p&gt;

&lt;p&gt;When you’re happy with it, click Write Sample Data, and Explorer creates a new database with your data ready for querying.&lt;/p&gt;

&lt;h2 id="live-data-plugins-for-real-time-sample-data"&gt;Live data plugins, for real-time sample data&lt;/h2&gt;

&lt;p&gt;Static datasets are great for poking around with queries and exploring schema, but a lot of what makes InfluxDB interesting (alerts, transformations, automation) requires new data showing up over time. The new &lt;strong&gt;Live Data&lt;/strong&gt; tab on the Sample Data page solves that.&lt;/p&gt;

&lt;p&gt;Live Data uses the Processing Engine to continuously write data to your database on a schedule. Explorer 1.8 ships with two plugins out of the box: the &lt;a href="https://github.com/influxdata/influxdb3_plugins/blob/main/influxdata/system_metrics/README.md"&gt;System Metrics Collector&lt;/a&gt; (host CPU, memory, disk, and network metrics from &lt;code class="language-markup"&gt;psutil&lt;/code&gt;) and the &lt;a href="https://github.com/influxdata/influxdb3_plugins/blob/main/influxdata/nws_weather/README.md"&gt;US Weather Sampler&lt;/a&gt; (live observations pulled from National Weather Service stations).&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/3osuRR1Z9Z1w0AW6VAdSCM/35d2f4dc94c531d51675e3e82fd43388/Screenshot_2026-04-29_at_12.46.27â__PM.png" alt="Write Sample Data page 4 SS" /&gt;&lt;/p&gt;

&lt;p&gt;The layout follows the same pattern as the static page: pick a plugin, see the schema preview and a few rows of line protocol, choose a database, and click Activate. From there, it just runs, regularly writing data to your database. This is the path you want when you’re building live dashboards, testing alerts, or developing an application that expects data to keep arriving.&lt;/p&gt;

&lt;h2 id="line-protocol-validation-and-schema-preview"&gt;Line protocol validation and schema preview&lt;/h2&gt;

&lt;p&gt;The &lt;strong&gt;Write Line Protocol&lt;/strong&gt; page (under Write Data → Dev Data) now validates line protocol as you type and shows a live &lt;strong&gt;Schema Preview&lt;/strong&gt; of what your data is about to look like in your database. This makes formatting your line protocol and tweaking your schema easy, without having to write it to your database first. Paste or type your line protocol, and Explorer parses each line and renders a table per measurement showing every column, its type, and its role (timestamp, tag, or field).&lt;/p&gt;

&lt;p&gt;When something is wrong, you don’t have to wait for the server to tell you. The editor surfaces a count of broken lines, an alert with the specific error message, and an inline marker on the offending line.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/1gv6exByUQlr9b1HgLRS23/2ca83c2af022b57c4304312b7c2373f9/Screenshot_2026-04-29_at_12.48.16â__PM.png" alt="Write Dev Data page ss" /&gt;&lt;/p&gt;

&lt;p&gt;The same applies if you upload a file using &lt;code class="language-markup"&gt;Upload file&lt;/code&gt;—Explorer will read it in, validate every line, and tell you exactly which lines need fixing before you write a single one. There’s also a &lt;strong&gt;Line Protocol Reference&lt;/strong&gt; panel pinned to the right of the page covering the format, allowed types, escaping rules, and timestamp precision, so you don’t have to flip back to the &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/reference/line-protocol/"&gt;line protocol docs&lt;/a&gt; every time you forget whether integers take an &lt;code class="language-markup"&gt;i&lt;/code&gt; suffix.&lt;/p&gt;
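&lt;p&gt;The same rules apply when you write line protocol programmatically instead of through Explorer. A hedged sketch with the InfluxDB 3 Python client (host, token, database, and the two example lines are all placeholders); note the &lt;code class="language-markup"&gt;i&lt;/code&gt; suffix on the integer field and the trailing nanosecond timestamp:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;from influxdb_client_3 import InfluxDBClient3

# Two hypothetical lines of line protocol: integer fields carry an "i" suffix,
# floats need no suffix, and the trailing timestamp is in nanoseconds.
lp = (
    'home,room=kitchen temp=21.5,co2=451i 1714400000000000000\n'
    'home,room=office temp=23.1,co2=502i 1714400000000000000'
)

client = InfluxDBClient3(host="localhost:8181", token="YOUR_TOKEN", database="sandbox")
client.write(record=lp)&lt;/code&gt;&lt;/pre&gt;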

&lt;h2 id="database-and-table-retention"&gt;Database and table retention&lt;/h2&gt;

&lt;p&gt;InfluxDB 3 has supported per-database and per-table retention for a while, but until now, you had to set them through the API or CLI. In 1.8, retention shows up everywhere it should in the UI.&lt;/p&gt;

&lt;p&gt;There’s a new &lt;strong&gt;Retention Period&lt;/strong&gt; column on both the Manage Databases and Manage Tables pages, so you can see at a glance how long each database or table is keeping its data:&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/69PhVLffCVw7SnfXEPjFOH/5fd62dee3ab31fe89d20a93c88d08698/Screenshot_2026-04-29_at_12.50.51â__PM.png" alt=" Manage Tables page SS" /&gt;&lt;/p&gt;

&lt;p&gt;When you create a new database, the dialog now has a Retention Period field (tables previously had this available on create). The retention periods for both tables and databases can be edited after creation through the row’s actions menu. Tables follow the standard inheritance behavior: set a retention period, and the table uses it; set it to &lt;strong&gt;None&lt;/strong&gt;, and the table inherits from the database.&lt;/p&gt;

&lt;p&gt;If you’re new to how retention works in InfluxDB 3, the &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/reference/internals/data-retention/"&gt;data retention reference&lt;/a&gt; covers the underlying behavior.&lt;/p&gt;

&lt;h2 id="get-it-while-its-hot"&gt;Get it while it’s hot&lt;/h2&gt;

&lt;p&gt;If you’ve been wanting to get streaming data into Explorer without standing up a separate connector, or you’ve been doing the “let me eyeball this line protocol and hope it parses” dance, this release should make those quite a bit smoother. As always, the previous post—&lt;a href="https://www.influxdata.com/blog/influxdb-explorer-1-7/"&gt;What’s New in InfluxDB 3 Explorer 1.7: Table Management, Data Import, Transforms, and More&lt;/a&gt;—is worth a look if you skipped that one and want to catch up on table-level schema management, the InfluxDB-to-InfluxDB import flow, and the Transform Data pages.&lt;/p&gt;

&lt;p&gt;To update InfluxDB 3 Explorer, pull the latest Docker image: &lt;code class="language-markup"&gt;docker pull influxdata/influxdb3-ui&lt;/code&gt;&lt;/p&gt;
</description>
      <pubDate>Thu, 30 Apr 2026 01:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/explorer-1-8/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/explorer-1-8/</guid>
      <category>Product</category>
      <category>Developer</category>
      <author>Daniel Campbell (InfluxData)</author>
    </item>
    <item>
      <title>Getting Started with Home Assistant Webhooks &amp; Writing to InfluxDB</title>
      <description>&lt;p&gt;If you’re already running or are familiar with Home Assistant, you’ve likely worked with integrations, maybe a few automations, and possibly MQTT as a way to wire devices together. But webhooks add another layer of flexibility that lets you level up your smart home into a fully-customized, intelligent network. Instead of relying on built-in integrations and being confined to the same local network, you can let external devices and services push events directly into Home Assistant. This gives you a simple way to build custom flows: a device sends a webhook, Home Assistant receives it, and then you decide what happens next. It’s a lightweight way to connect systems, even when built-in integrations may be lacking.&lt;/p&gt;

&lt;p&gt;Once you have the webhook flow in place, the next question is what to do with the data generated from your webhook calls, where to store it, and how to best leverage it. That’s where InfluxDB fits in. It’s built specifically for time series data, which means it’s designed to handle continuous streams of time-stamped events like the ones generated by a smart home using Home Assistant. Instead of just reacting in the moment, you can store that data, query it, and build a clearer picture of how your system behaves. Data processing and forecasting builds an even more advanced understanding of your system over time.&lt;/p&gt;

&lt;p&gt;In this blog, we’ll walk through both sides of that setup. First, we’ll use webhooks in Home Assistant to create flexible, event-driven flows between devices and services. Then we’ll connect that stream of data to InfluxDB and its Processing Engine so you can go beyond real-time reactions and start working with your data in a more structured way.&lt;/p&gt;

&lt;h2 id="what-is-home-assistant"&gt;What is Home Assistant?&lt;/h2&gt;

&lt;p&gt;Home Assistant is an open source platform that ties all your smart home devices together in one place. It runs locally, gives you control over how devices interact, and lets you build automations based on events happening throughout your home. Instead of relying on separate apps or cloud services for each device, everything feeds into a single system where you can define your own logic. That can be as simple as turning on lights at sunset or as involved as coordinating and controlling multiple devices based on sensor data, schedules, forecasts, and external inputs.&lt;/p&gt;

&lt;p&gt;It’s easy to get started with Home Assistant by connecting a few common integrations. Nearly all smart lights, thermostats, and motion sensors have existing integrations, and building simple automations on those integrations, like having lights turn on if a motion sensor detects movement, is straightforward from there. As your setup grows, you can layer in more conditions, tie multiple devices together, and start building routines.&lt;/p&gt;

&lt;p&gt;At some point, though, you may want to bring in data or events from devices and services that don’t have a native integration. That’s where webhooks come in. They give you a simple way to send events directly into Home Assistant from anything that can make an HTTP request, which opens the door to more custom, event-driven flows without needing to build a full integration.&lt;/p&gt;

&lt;h4 id="setting-up-a-home-assistant-webhook"&gt;Setting Up a Home Assistant Webhook&lt;/h4&gt;

&lt;p&gt;To get started on the Home Assistant side of things, a webhook is just another type of &lt;a href="https://www.home-assistant.io/docs/automation/trigger/"&gt;trigger&lt;/a&gt;. This means you can create it as you would any other trigger type: navigate to automations, create an automation, and add a webhook trigger. &lt;a href="https://www.home-assistant.io/docs/automation/trigger/#webhook-trigger"&gt;Home Assistant has documentation on exactly how this trigger works&lt;/a&gt;. You must define a webhook ID when you create a webhook trigger, and you’ll need to include that ID when you invoke the webhook. Just like with MQTT triggers in Home Assistant, webhook triggers also support payloads that contain additional data, and you can use this payload in downstream automation if desired.&lt;/p&gt;

&lt;p&gt;For testing purposes, make sure that a downstream action is invoked by the trigger. Using one of your other devices connected to Home Assistant is often the most straightforward option, whether that’s switching a light on/off or sending a push notification to an Apple device via iCloud.&lt;/p&gt;

&lt;p&gt;Then, to invoke your trigger, simply call your webhook. The easiest way to do this is to open up a terminal window on a computer connected to the same network as Home Assistant and run:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;curl -X POST -d 'key=value' https://"your-home-assistant":8123/api/webhook/"id"&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Any other means of sending an &lt;a href="https://www.w3schools.com/Tags/ref_httpmethods.asp"&gt;HTTP POST request&lt;/a&gt; will work fine. Note that you’ll need to replace &lt;code class="language-markup"&gt;"id"&lt;/code&gt; with the webhook ID that you defined when you created the trigger and &lt;code class="language-markup"&gt;"your-home-assistant"&lt;/code&gt; with the local IP of the device running Home Assistant. The &lt;code class="language-markup"&gt;‘key=value’&lt;/code&gt; is where you can provide your payload. If you want multiple keys and values, you can separate them with &lt;code class="language-markup"&gt;&amp;amp;&lt;/code&gt;, or you can provide it in a JSON format, which is covered in the Home Assistant documentation.&lt;/p&gt;

&lt;p&gt;If you want to send HTTP requests from devices or servers that aren’t on your home network, you’ll need to make sure you set the &lt;code class="language-markup"&gt;local_only&lt;/code&gt; option to “false” and &lt;a href="https://www.noip.com/support/knowledgebase/general-port-forwarding-guide"&gt;port forward&lt;/a&gt; the port Home Assistant uses for webhooks, which is 8123 by default. Home Assistant’s documentation recommends some security practices that are worth repeating: because allowing external traffic to invoke the webhook trigger is inherently insecure, make sure that any downstream actions can’t be destructive or problematic if a bad actor sends a request.&lt;/p&gt;

&lt;h4 id="full-stack-example-energy-price-monitoring"&gt;Full-Stack Example: Energy Price Monitoring&lt;/h4&gt;

&lt;p&gt;Suppose you want to monitor energy prices on the grid and use those prices to inform when you should turn certain devices in your smart home on or off.&lt;/p&gt;

&lt;p&gt;You’ll need to start with a script to monitor grid pricing. Depending on where you live and how your electricity is billed, you may be able to simply query your utility or fetch the relevant information periodically from a website. Run a small server or device that can handle this task, and schedule it with cron to run periodically. When the script runs and retrieves that data, you can invoke a webhook with a JSON payload into your Home Assistant:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;import requests

WEBHOOK_URL = "https://192.168.1.20:8123/api/webhook/electricity_price"
PRICE_THRESHOLD_KWH = 0.20

# fetch local electricity prices, then...

payload = {
    "price_per_kwh": current_electricity_price,
    "threshold": PRICE_THRESHOLD_KWH,
}
response = requests.post(
    WEBHOOK_URL,
    json=payload,
    timeout=10,
)
response.raise_for_status()&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Then, in Home Assistant, your trigger could be set up as:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;alias: Energy price spike response
description: Adjust to eco mode when electricity prices go above threshold

triggers:
  - trigger: webhook
    webhook_id: electricity_price
    allowed_methods:
      - POST
    local_only: false

conditions:
  - condition: template
    value_template: &amp;gt;
      {{ trigger.json.price_per_kwh | float &amp;gt;= trigger.json.threshold | float }}

actions:
  - action: switch.turn_off
    target:
      entity_id:
        - switch.ev_charger
        - switch.garage_ac&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;With a scheduled Python script and the Home Assistant trigger, you can now run a scheduled task to check the web, invoke the trigger, pass in relevant data as a payload, and have other devices connected to Home Assistant take necessary actions. The above example demonstrates switching off some devices when electricity prices are high, but a few minor adjustments could instead turn devices on when prices drop.&lt;/p&gt;

&lt;h2 id="adding-more-intelligence-to-your-smart-home-with-influxdb"&gt;Adding more intelligence to your smart home with InfluxDB&lt;/h2&gt;

&lt;p&gt;Webhooks and automation are a good start, but there’s still much more you can do. Data is being collected and used to trigger various events around the house, but what do you do with that data after it’s used to set off a trigger? If you’re turning off EV charging and auxiliary air conditioning when electricity is particularly pricey, what impact is that having?&lt;/p&gt;

&lt;p&gt;Fortunately, &lt;a href="https://www.home-assistant.io/integrations/influxdb/"&gt;Home Assistant has an integration with InfluxDB&lt;/a&gt; that can help you take your system from smart home to smarter home with minimal setup. &lt;a href="https://www.influxdata.com/blog/start-up-guide-influxdb-3-core/?utm_source=website&amp;amp;utm_medium=ha_webhooks_influxdb&amp;amp;utm_content=blog"&gt;Install InfluxDB&lt;/a&gt;, add the Home Assistant integration for InfluxDB, then configure the authentication to an existing InfluxDB instance. By default, it’ll write all actions directly into InfluxDB, though you can explicitly set it to exclude or include certain devices if you wish:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb:
  api_version: 2
  ssl: false
  host: 192.168.1.50
  port: 8181
  token: "YOUR_INFLUXDB_TOKEN"
  organization: home
  bucket: home_assistant&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;To write the data from the earlier webhook script into InfluxDB, we can use the &lt;a href="https://www.influxdata.com/blog/start-up-guide-influxdb-3-core/?utm_source=website&amp;amp;utm_medium=ha_webhooks_influxdb&amp;amp;utm_content=blog"&gt;InfluxDB 3 Python client&lt;/a&gt;:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;from influxdb_client_3 import InfluxDBClient3, Point
import requests

WEBHOOK_URL = "https://192.168.1.20:8123/api/webhook/electricity_price"
PRICE_THRESHOLD_KWH = 0.20

INFLUXDB_HOST = "192.168.1.50:8181"
INFLUXDB_TOKEN = "your_influxdb_token"
INFLUXDB_DATABASE = "home"

# Shared client so the helper functions below can use it
client = InfluxDBClient3(
    host=INFLUXDB_HOST,
    token=INFLUXDB_TOKEN,
    database=INFLUXDB_DATABASE,
)

def main():

    # fetch local electricity prices, then...

    write_to_influx(current_electricity_price)
    post_request_to_home_assistant(current_electricity_price)

def post_request_to_home_assistant(price):
    payload = {
        "price_per_kwh": price,
        "threshold": PRICE_THRESHOLD_KWH,
    }
    response = requests.post(
        WEBHOOK_URL,
        json=payload,
        timeout=10,
    )
    response.raise_for_status()

def write_to_influx(price):
    point = (
        Point("grid_prices")
        .field("price_per_kwh", float(price))
    )
    client.write(point)&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;With all the data for triggers and actions, you can retain a long-term memory of what your smart home is doing. With the &lt;a href="https://docs.influxdata.com/influxdb3/core/plugins/"&gt;InfluxDB Processing Engine&lt;/a&gt;, you can do further analysis and processing of data as it’s written.&lt;/p&gt;

&lt;p&gt;To continue with the example above, you could connect your &lt;a href="https://www.home-assistant.io/docs/energy/electricity-grid/"&gt;electricity grid up to Home Assistant&lt;/a&gt;, then persist the meter data into InfluxDB. That data, combined with records of when your webhook trigger wrote information about current electricity prices, could allow you to see how your home adapts in real-time to fluctuations in grid prices. If everything is set up correctly, you should see that spikes in electricity prices lead to lower utilization, and vice versa.&lt;/p&gt;

&lt;p&gt;Better yet, you could use the &lt;a href="https://docs.influxdata.com/influxdb3/core/plugins/library/official/prophet-forecasting/"&gt;Prophet forecasting plugin&lt;/a&gt;, trained on the same data, to create a smart home that isn’t just reactive but predictive. By persisting smart home data to InfluxDB, you can train models on that data to make intelligent predictions. For example, you could forecast electricity prices relatively easily. First, create an instance of the forecasting plugin:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 create trigger \
  --database home \
  --path "gh:influxdata/prophet_forecasting/prophet_forecasting.py" \
  --trigger-spec "every:1h" \
  --trigger-arguments "measurement=grid_prices,field=price_per_kwh,window=30d,forecast_horizont=12h,target_measurement=grid_price_forecast,model_mode=train,unique_suffix=home_prices_v1,seasonality_mode=additive,inferred_freq=1H" \
  grid_price_forecast&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Then enable it:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 enable trigger \
  --database home \
  grid_price_forecast&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;With forecasting enabled, a &lt;code class="language-markup"&gt;grid_price_forecast&lt;/code&gt; table is populated, which you can query to see predicted price spikes. You can use those predictions to run critical tasks around the house before electricity prices spike, rather than simply shutting devices off after they rise.&lt;/p&gt;
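&lt;p&gt;Pulling the forecast back out looks like any other query. A hedged sketch with the Python client (the exact output columns depend on the plugin’s schema, so this just selects everything):&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;from influxdb_client_3 import InfluxDBClient3

client = InfluxDBClient3(
    host="192.168.1.50:8181",
    token="your_influxdb_token",
    database="home",
)

# Upcoming predicted prices written by the Prophet forecasting trigger
forecast = client.query(
    "SELECT * FROM grid_price_forecast WHERE time &amp;gt;= now() ORDER BY time",
    mode="pandas",
)
print(forecast.head())&lt;/code&gt;&lt;/pre&gt;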

&lt;h2 id="continual-improvement"&gt;Continual improvement&lt;/h2&gt;

&lt;p&gt;If you’ve followed along with every part of this blog, you should have a full loop in place. A small service watches something outside your home, sends a periodic signal, Home Assistant handles the local response, and InfluxDB keeps a record of what happened so you can look back and improve it. None of the individual pieces are especially complicated, but putting them together gives you something more useful than a single automation. You’re building a system that can learn from its own behavior and get smarter over time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.influxdata.com/products/influxdb3/?utm_source=website&amp;amp;utm_medium=ha_webhooks_influxdb&amp;amp;utm_content=blog"&gt;Get started with InfluxDB 3&lt;/a&gt; and its &lt;a href="https://www.home-assistant.io/integrations/influxdb/"&gt;Home Assistant integration&lt;/a&gt; today.&lt;/p&gt;
</description>
      <pubDate>Tue, 28 Apr 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/ha-webhooks-influxdb/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/ha-webhooks-influxdb/</guid>
      <category>Getting Started</category>
      <category>Developer</category>
      <author>Cole Bowden (InfluxData)</author>
    </item>
    <item>
      <title>How to Use Time Series Autoregression (With Examples)</title>
      <description>&lt;p&gt;Time series autoregression is a powerful statistical technique that uses past values of a variable to predict its future values. This approach is particularly valuable for forecasting applications where historical patterns can inform future trends.&lt;/p&gt;

&lt;p&gt;In this hands-on tutorial, you’ll learn how to implement autoregressive (AR) models using Python and see how InfluxDB can enhance your time series analysis workflow.&lt;/p&gt;

&lt;h2 id="understanding-time-series-autoregression"&gt;Understanding time series autoregression&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.ibm.com/think/topics/autoregressive-model"&gt;Autoregression models&lt;/a&gt; represent one of the fundamental approaches to time series forecasting, based on the principle that past behavior can predict future outcomes. The “auto” in &lt;a href="https://www.influxdata.com/blog/guide-regression-analysis-time-series-data/"&gt;autoregression&lt;/a&gt; means the variable is regressed on itself—essentially, we’re using the variable’s own historical values as predictors.&lt;/p&gt;

&lt;p&gt;This concept is intuitive: yesterday’s temperature influences today’s temperature, and last month’s sales figures can indicate this month’s performance.&lt;/p&gt;

&lt;p&gt;An autoregressive model of order p, denoted as AR(p), uses the previous p observations to predict the next value:
&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/50y9E1BxjOVQKkCJINlRHt/7988c5c42a7e5913447a4dab7253c9a3/Screenshot_2026-04-09_at_12.36.02â__PM.png" alt="AR SS 1" /&gt;
X(t) = c + φ₁X(t-1) + φ₂X(t-2) + … + φₚX(t-p) + ε(t)&lt;/p&gt;

&lt;p&gt;Where:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;X(t) is the value at time t&lt;/li&gt;
  &lt;li&gt;c is a constant term representing the baseline level&lt;/li&gt;
  &lt;li&gt;φ₁, φ₂, …, φₚ are the autoregressive coefficients indicating the influence of each lag&lt;/li&gt;
  &lt;li&gt;ε(t) is white noise representing random, unpredictable fluctuations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The coefficients determine how much influence each previous observation has on the current prediction. Positive coefficients indicate that higher past values lead to higher current predictions, while negative coefficients suggest an inverse relationship.&lt;/p&gt;

&lt;h2 id="types-of-autoregressive-models-and-their-applications"&gt;Types of autoregressive models and their applications&lt;/h2&gt;

&lt;h4 id="ar1-first-order-autoregression"&gt;AR(1) First-Order Autoregression&lt;/h4&gt;

&lt;p&gt;The simplest autoregressive model uses only the immediately previous value:
X(t) = c + φ₁X(t-1) + ε(t)&lt;/p&gt;

&lt;p&gt;AR(1) models are particularly effective for data with strong short-term dependencies, such as daily stock returns or temperature variations. The single coefficient φ₁ captures the persistence of the series—values close to 1 indicate high persistence, while values near 0 suggest more random behavior.&lt;/p&gt;

&lt;h4 id="arp-higher-order-models"&gt;AR(p) Higher-Order Models&lt;/h4&gt;

&lt;p&gt;More complex temporal patterns often require multiple lags:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;AR(2) models: Capture oscillating patterns where the current value depends on both the previous value and the value two periods ago.&lt;/li&gt;
  &lt;li&gt;AR(3) and beyond: Useful for data with complex patterns that extend beyond immediate past values.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id="seasonal-autoregressive-models"&gt;Seasonal Autoregressive Models&lt;/h4&gt;

&lt;p&gt;Real-world time series often exhibit seasonal patterns that repeat at regular intervals. Seasonal AR models extend the basic AR framework to capture these periodic dependencies, particularly valuable for retail sales forecasting, energy consumption prediction, and agricultural yield estimation.&lt;/p&gt;

&lt;h4 id="model-selection-and-diagnostic-considerations"&gt;Model Selection and Diagnostic Considerations&lt;/h4&gt;

&lt;p&gt;Selecting the appropriate AR model order requires careful analysis of the data’s autocorrelation structure. The &lt;a href="https://www.influxdata.com/blog/autocorrelation-in-time-series-data/"&gt;autocorrelation&lt;/a&gt; function (ACF) shows how correlated the series is with its own lagged values, while the partial autocorrelation function (PACF) reveals the direct relationship between observations at different lags.&lt;/p&gt;

&lt;p&gt;For AR models, the PACF is particularly informative because it cuts off sharply after the true model order. This characteristic makes PACF plots an essential diagnostic tool for determining the optimal number of lags to include in the model.&lt;/p&gt;

&lt;h2 id="setting-up-your-environment"&gt;Setting up your environment&lt;/h2&gt;

&lt;p&gt;Before implementing our AR model, let’s set up the necessary tools and data infrastructure to analyze time series data with InfluxDB.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.influxdata.com/products/influxdb-core/?utm_source=website&amp;amp;utm_medium=time_series_autoregression&amp;amp;utm_content=blog"&gt;InfluxDB Core&lt;/a&gt; is designed to handle time-series data with an optimized storage engine and powerful query capabilities. It excels at tracking weather patterns or monitoring environmental conditions, making it an ideal choice for efficiently managing and analyzing time-stamped data.&lt;/p&gt;

&lt;h4 id="installing-required-libraries"&gt;Installing Required Libraries&lt;/h4&gt;

&lt;p&gt;&lt;code class="language-markup"&gt;uv add pandas numpy matplotlib statsmodels influxdb3-python scikit-learn&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Or set up a Python virtual environment and install the libraries with pip. First, create the environment:&lt;/p&gt;

&lt;p&gt;&lt;code class="language-markup"&gt;python -m venv .venv&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;On Mac or Linux, activate your virtual environment with the following:&lt;/p&gt;

&lt;p&gt;&lt;code class="language-markup"&gt;source .venv/bin/activate&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;On Windows, run one of the following:&lt;/p&gt;

&lt;p&gt;&lt;code class="language-markup"&gt;.venv\Scripts\activate.bat&lt;/code&gt; (Command Prompt) or &lt;code class="language-markup"&gt;.venv\Scripts\Activate.ps1&lt;/code&gt; (PowerShell)&lt;/p&gt;

&lt;p&gt;And finally, install the required libraries:&lt;/p&gt;

&lt;p&gt;&lt;code class="language-markup"&gt;pip install pandas numpy matplotlib statsmodels influxdb3-python scikit-learn&lt;/code&gt;&lt;/p&gt;

&lt;h4 id="connecting-to-influxdb"&gt;Connecting to InfluxDB&lt;/h4&gt;

&lt;p&gt;First, let’s establish a connection to your local InfluxDB instance:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;from influxdb_client_3 import InfluxDBClient3, Point
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.tsa.ar_model import AutoReg
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
from sklearn.metrics import mean_squared_error, mean_absolute_error

# InfluxDB connection parameters
INFLUXDB_HOST = "localhost:8181"
INFLUXDB_TOKEN = "your_token_here"  # Replace with your actual token
INFLUXDB_DATABASE = "weather"       # Database name for InfluxDB 3

# Initialize client
client = InfluxDBClient3(
    host=INFLUXDB_HOST,
    database=INFLUXDB_DATABASE,
    token=INFLUXDB_TOKEN
)&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id="implementing-ar-models-for-predicting-temperature"&gt;Implementing AR models for predicting temperature&lt;/h2&gt;

&lt;p&gt;Let’s walk through a practical example using temperature data to demonstrate autoregressive modeling.&lt;/p&gt;

&lt;h4 id="loading-and-preprocessing-the-data"&gt;Loading and Preprocessing the Data&lt;/h4&gt;

&lt;p&gt;First, we’ll generate sample temperature data and store it in InfluxDB, then retrieve it for analysis:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;def generate_sample_temperature_data():
    """Generate realistic temperature data with seasonal patterns"""
    np.random.seed(42)
    # End the series at today so the one-year retrieval query below finds it
    dates = pd.date_range(end=pd.Timestamp.now().normalize(), periods=366, freq='D')

    # Create temperature data with trend and seasonality
    trend = np.linspace(15, 18, len(dates))
    seasonal = 10 * np.sin(2 * np.pi * np.arange(len(dates)) / 365.25)
    noise = np.random.normal(0, 2, len(dates))
    temperature = trend + seasonal + noise

    return pd.DataFrame({
        'timestamp': dates,
        'temperature': temperature
    })

def store_data_in_influxdb(df):
    """Store temperature data in InfluxDB"""
    records = [
        Point("temperature")
            .field("value", row['temperature'])
            .time(row['timestamp'])
        for _, row in df.iterrows()
    ]
    client.write(record=records)
    print(f"Stored {len(df)} temperature readings in InfluxDB")

def load_data_from_influxdb():
    """Retrieve temperature data from InfluxDB"""
    query = """
        SELECT time, value
        FROM temperature
        WHERE time &amp;gt;= now() - INTERVAL '1 year'
        ORDER BY time
    """
    table = client.query(query=query, mode="pandas")
    table['time'] = pd.to_datetime(table['time'])
    table = table.set_index('time').sort_index()
    return table['value']

# Generate and store sample data
sample_data = generate_sample_temperature_data()
store_data_in_influxdb(sample_data)

# Load data for analysis
temperature_series = load_data_from_influxdb()
print(f"Loaded {len(temperature_series)} temperature observations")&lt;/code&gt;&lt;/pre&gt;

&lt;h4 id="exploring-autocorrelation-and-determining-model-order"&gt;Exploring Autocorrelation and Determining Model Order&lt;/h4&gt;

&lt;p&gt;Before fitting an AR model, we need to understand the autocorrelation structure:&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/1if3YOBZ3cdnk2Mm0jSqkl/76ce3e78181ab2336a0d9635037d39b2/Screenshot_2026-04-09_at_12.44.09â__PM.png" alt="autocorrelation SS" /&gt;&lt;/p&gt;

&lt;p&gt;The Partial Autocorrelation Function (PACF) helps determine the optimal AR order by showing the correlation between observations at different lags, controlling for shorter lags.&lt;/p&gt;
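&lt;p&gt;In code, this step might look something like the following sketch, using the &lt;code class="language-markup"&gt;plot_acf&lt;/code&gt; and &lt;code class="language-markup"&gt;plot_pacf&lt;/code&gt; helpers imported earlier; the 30-lag window is an illustrative choice rather than a prescribed value:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# Inspect the correlation structure of the series
# (lags=30 is an arbitrary window for daily data)
fig, axes = plt.subplots(2, 1, figsize=(10, 8))
plot_acf(temperature_series, lags=30, ax=axes[0])
plot_pacf(temperature_series, lags=30, ax=axes[1])
axes[0].set_title("Autocorrelation (ACF)")
axes[1].set_title("Partial Autocorrelation (PACF)")
plt.tight_layout()
plt.show()&lt;/code&gt;&lt;/pre&gt;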

&lt;h4 id="building-and-training-the-ar-model"&gt;Building and Training the AR Model&lt;/h4&gt;

&lt;p&gt;Now let’s implement the autoregressive model:&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/3G2y0GY250RZSOEL7zJgTj/e43ca0040107d949fe7e760a3824654c/Screenshot_2026-04-09_at_12.45.52â__PM.png" alt="AR Model SS" /&gt;&lt;/p&gt;
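&lt;p&gt;Since the screenshot isn’t reproducible as text, here’s a minimal sketch of fitting an &lt;code class="language-markup"&gt;AutoReg&lt;/code&gt; model on a train/test split and scoring it with the error metrics imported earlier. The lag order of 7 and the 30-day holdout are illustrative assumptions, not values taken from the screenshot:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# Hold out the last 30 days for evaluation (illustrative split)
train = temperature_series[:-30]
test = temperature_series[-30:]

# Fit an AR model; lags=7 is an illustrative order. In practice,
# choose it from the PACF or an information criterion.
model = AutoReg(train, lags=7)
model_fit = model.fit()
print(model_fit.summary())

# Forecast the held-out window
predictions = model_fit.predict(start=len(train), end=len(train) + len(test) - 1)

mse = mean_squared_error(test, predictions)
mae = mean_absolute_error(test, predictions)
print(f"MSE: {mse:.2f}, MAE: {mae:.2f}")&lt;/code&gt;&lt;/pre&gt;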

&lt;p&gt;Visualization is crucial for understanding model performance:&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/3GXiWDP36MjuLhMHHHs3HI/f1cd3397f608d8ad02ed6ff1b493ce95/Screenshot_2026-04-09_at_12.47.57â__PM.png" alt="Visualization SS 1" /&gt;
&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/4P3vmJqDvTMx1ny8DSwuxF/c9916f312c2c9c1fe05c401195023a9b/Screenshot_2026-04-09_at_12.48.12â__PM.png" alt="Visualization SS 2" /&gt;&lt;/p&gt;
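&lt;p&gt;A simple way to produce plots along these lines, assuming the &lt;code class="language-markup"&gt;train&lt;/code&gt;, &lt;code class="language-markup"&gt;test&lt;/code&gt;, and &lt;code class="language-markup"&gt;predictions&lt;/code&gt; variables from the sketch above:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# Plot recent training data, actuals, and the AR forecast
plt.figure(figsize=(12, 5))
plt.plot(train.index[-90:], train.iloc[-90:], label="Training data (last 90 days)")
plt.plot(test.index, test.values, label="Actual")
plt.plot(test.index, np.asarray(predictions), linestyle="--", label="AR forecast")
plt.xlabel("Date")
plt.ylabel("Temperature")
plt.legend()
plt.tight_layout()
plt.show()&lt;/code&gt;&lt;/pre&gt;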

&lt;h2 id="benefits-and-limitations-of-autoregressive-models"&gt;Benefits and limitations of autoregressive models&lt;/h2&gt;

&lt;h4 id="advantages-of-ar-models"&gt;Advantages of AR Models&lt;/h4&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Computational Efficiency&lt;/strong&gt;: AR models are computationally lightweight compared to complex machine learning approaches. This efficiency makes them ideal for real-time applications where quick predictions are essential, such as high-frequency trading systems or real-time monitoring applications.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Interpretability&lt;/strong&gt;: Unlike black-box machine learning models, AR models provide clear, interpretable coefficients that reveal the influence of each lagged value. This transparency is crucial in regulated industries where model decisions must be explainable and auditable.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Strong Theoretical Foundation&lt;/strong&gt;: AR models rest on well-established statistical theory with known properties and assumptions. This theoretical grounding provides confidence in model behavior and enables rigorous statistical testing of model adequacy.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Excellent Baseline Performance&lt;/strong&gt;: AR models often serve as effective baseline models against which more complex approaches are compared. Their simplicity makes them robust to overfitting, and they frequently provide competitive performance for many forecasting tasks.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id="limitations-and-challenges"&gt;Limitations and Challenges&lt;/h4&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Linear Relationship Assumptions&lt;/strong&gt;: AR models assume linear relationships between past and future values, which may not capture complex nonlinear patterns present in many real-world time series.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Stationarity Requirements&lt;/strong&gt;: The assumption of stationarity can be restrictive for many practical applications. Real-world time series often exhibit trends, structural breaks, or changing volatility that violate stationarity assumptions.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Limited Complexity Handling&lt;/strong&gt;: AR models struggle with complex seasonal patterns, multiple interacting factors, or regime changes. While seasonal AR models exist, they may not capture intricate seasonal dynamics as effectively as more sophisticated approaches.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id="practical-implementation-considerations"&gt;Practical Implementation Considerations&lt;/h4&gt;

&lt;p&gt;When implementing AR models in practice, several key considerations ensure successful deployment. Data preprocessing often requires careful attention to stationarity testing and transformation.&lt;/p&gt;

&lt;p&gt;Model validation requires time-aware cross-validation techniques that respect the temporal structure of the data. Traditional random sampling approaches can introduce data leakage, where future information inadvertently influences past predictions.&lt;/p&gt;

&lt;p&gt;Parameter selection involves balancing model complexity with predictive accuracy. Information criteria like AIC and BIC provide systematic approaches to order selection, while out-of-sample testing validates the chosen specification.&lt;/p&gt;
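&lt;p&gt;To make order selection concrete, statsmodels ships an &lt;code class="language-markup"&gt;ar_select_order&lt;/code&gt; helper that scores candidate lag orders against an information criterion. A minimal sketch, where the cap of 15 lags is an arbitrary illustration:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;from statsmodels.tsa.ar_model import ar_select_order

# Score candidate lag orders up to 15 using AIC (the cap is illustrative)
selection = ar_select_order(temperature_series, maxlag=15, ic="aic")
print(f"Selected AR lags: {selection.ar_lags}")

# Fit the model with the selected order and inspect the criteria
selected_fit = selection.model.fit()
print(f"AIC: {selected_fit.aic:.2f}, BIC: {selected_fit.bic:.2f}")&lt;/code&gt;&lt;/pre&gt;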

&lt;h2 id="time-series-analysis-with-influxdb"&gt;Time series analysis with InfluxDB&lt;/h2&gt;

&lt;p&gt;InfluxDB provides several critical advantages for time series autoregression workflows that extend beyond simple data storage. As a purpose-built time series database, InfluxDB addresses many challenges associated with managing and analyzing temporal data at scale.&lt;/p&gt;

&lt;h4 id="optimized-storage-and-performance"&gt;Optimized Storage and Performance&lt;/h4&gt;

&lt;p&gt;InfluxDB’s columnar storage format and specialized compression algorithms reduce storage requirements for time series data. This efficiency becomes crucial when working with high-frequency data or maintaining long historical records necessary for robust AR model training.&lt;/p&gt;

&lt;h4 id="real-time-data-processing"&gt;Real-time Data Processing&lt;/h4&gt;

&lt;p&gt;Modern forecasting applications often require real-time model updates as new data arrives. InfluxDB’s streaming capabilities enable continuous data ingestion, allowing AR models to incorporate the latest observations immediately.&lt;/p&gt;

&lt;h4 id="scalable-query-operations"&gt;Scalable Query Operations&lt;/h4&gt;

&lt;p&gt;As time series datasets grow, query performance becomes a limiting factor. InfluxDB’s indexing strategies and query optimization target temporal queries, enabling fast aggregations and data retrieval operations common in AR model preprocessing.&lt;/p&gt;

&lt;h4 id="native-time-series-functions"&gt;Native Time Series Functions&lt;/h4&gt;

&lt;p&gt;InfluxDB includes built-in functions for common time series operations like moving averages and lag calculations. These functions can preprocess data directly within the database.&lt;/p&gt;
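&lt;p&gt;For example, a simple downsampling step can be pushed into the database itself rather than done in pandas. A sketch using the client from earlier and InfluxDB 3’s SQL &lt;code class="language-markup"&gt;date_bin&lt;/code&gt; function, assuming the &lt;code class="language-markup"&gt;temperature&lt;/code&gt; table written above:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# Compute weekly average temperature in the database
downsample_query = """
    SELECT
      date_bin(INTERVAL '7 days', time) AS week,
      avg(value) AS avg_temperature
    FROM temperature
    GROUP BY week
    ORDER BY week
"""
weekly = client.query(query=downsample_query, mode="pandas")
print(weekly.head())&lt;/code&gt;&lt;/pre&gt;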

&lt;h2 id="production-deployment-and-best-practices"&gt;Production deployment and best practices&lt;/h2&gt;

&lt;p&gt;Deploying AR models in production environments requires attention to several operational aspects. Model monitoring becomes crucial as data patterns evolve over time, potentially degrading model performance. InfluxDB’s ability to store both input data and model predictions simplifies the creation of monitoring dashboards.&lt;/p&gt;

&lt;p&gt;Performance considerations include monitoring prediction accuracy over time and detecting concept drift.&lt;/p&gt;
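&lt;p&gt;One way to set that up is to write forecasts back into InfluxDB alongside the source data so dashboards can compare predicted and actual values over time. A sketch, assuming the &lt;code class="language-markup"&gt;predictions&lt;/code&gt; series from the modeling example and a hypothetical &lt;code class="language-markup"&gt;temperature_forecast&lt;/code&gt; table:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# Store model output next to the source data (table name is illustrative)
forecast_points = [
    Point("temperature_forecast")
        .field("predicted_value", float(value))
        .time(timestamp)
    for timestamp, value in zip(test.index, np.asarray(predictions))
]
client.write(record=forecast_points)
print(f"Stored {len(forecast_points)} forecast points")&lt;/code&gt;&lt;/pre&gt;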

&lt;h2 id="capping-it-off"&gt;Capping it off&lt;/h2&gt;

&lt;p&gt;Time series autoregression provides a powerful and interpretable foundation for forecasting applications across diverse domains. The combination of statistical rigor, computational efficiency, and clear interpretability makes AR models an essential tool in the time series analyst’s toolkit.&lt;/p&gt;

&lt;p&gt;While AR models have limitations in handling complex nonlinear patterns, their strengths in capturing temporal dependencies make them invaluable for both standalone applications and as components in more complex forecasting systems.&lt;/p&gt;

&lt;p&gt;The integration of AR modeling with modern time series infrastructure like &lt;a href="https://www.influxdata.com/?utm_source=website&amp;amp;utm_medium=time_series_autoregression&amp;amp;utm_content=blog"&gt;InfluxDB&lt;/a&gt; creates opportunities for robust, scalable forecasting solutions. By leveraging InfluxDB’s specialized capabilities alongside the proven statistical foundations of autoregressive modeling, practitioners can build production-ready forecasting systems that deliver reliable predictions.&lt;/p&gt;
</description>
      <pubDate>Wed, 22 Apr 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/time-series-autoregression/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/time-series-autoregression/</guid>
      <category>Developer</category>
      <author>Charles Mahler (InfluxData)</author>
    </item>
    <item>
      <title>Setting Up an MQTT Data Pipeline with InfluxDB</title>
      <description>&lt;p&gt;In this blog, we’re going to take a look at how you can set up a fully-functioning, robust data pipeline to centralize your data into an InfluxDB instance by collecting and sending messages with the MQTT protocol. We’ll start with a brief overview of the technologies and protocols used in the pipeline, then dive into how you can connect, configure, and test them to ensure your data pipeline is fully functional. It’s going to be a long post, so let’s jump right in.&lt;/p&gt;

&lt;h2 id="what-is-mqtt"&gt;What is MQTT?&lt;/h2&gt;

&lt;p&gt;MQTT is an industry-standard, lightweight protocol for moving messages through a network of devices. It functions by having a broker, or multiple brokers, receive messages from individual devices (publishing clients) across the network, and publish those messages to external systems (destination clients) that are connected and listening to the broker. By categorizing messages into “topics,” systems that subscribe to specific topics can opt to receive only messages they’re interested in.&lt;/p&gt;

&lt;p&gt;As a lightweight protocol with a number of prominent open source implementations, MQTT is an industry standard for a variety of use cases. It’s particularly common in Internet of Things (IoT) and Industrial IoT (IIoT) applications, but can be leveraged anywhere you have a distributed network of devices generating data or messages. This includes fleet management, home automation, real-time telemetry on computer hardware, and practically any use case where sensors generate data points periodically.&lt;/p&gt;

&lt;h2 id="why-use-influxdb-for-mqtt-data"&gt;Why use InfluxDB for MQTT data?&lt;/h2&gt;

&lt;p&gt;If you’ve already concluded that the MQTT protocol is the right way to move your data from various devices into a centralized broker, odds are that you’re working with time series data. Time series data has a couple of key characteristics: it’s a sequence of data collected in chronological order, and all data points contain a timestamp. Most commonly, this also means there’s a large volume of data. Hundreds or thousands of sensors generating new data points every second can quickly turn into millions or billions of records per day. As the scale of data increases, the need for a specialized, purpose-built solution to handle this volume grows, too.&lt;/p&gt;

&lt;p&gt;That’s where InfluxDB, the industry-leading time series database, comes in. InfluxDB is purpose-built for the time series data common in MQTT use case scenarios, delivering unparalleled performance and a number of dedicated features to make managing and working with your time series data as easy as possible.&lt;/p&gt;

&lt;p&gt;Performance is critical because ingesting millions or billions of data points per day can strain most databases. Because time series databases like InfluxDB are optimized to handle that firehose of continuous data, they can scale to ingest it with greater efficiency and lower costs. A custom-built storage engine eliminates snags that most other types of databases encounter, such as index maintenance and contention locks. Last-value caches and engine optimizations for timestamp-based filtering make retrieving recent data extremely efficient, so fresh data being written into InfluxDB can be queried in less than 10 milliseconds, minimizing time to insight (or as we like to call it, “time to awesome”). This ensures a real-time view of the data generated across your network of devices.&lt;/p&gt;

&lt;p&gt;Time series functionality also makes managing and working with this data much easier, regardless of whether performance at scale is a concern. DataFusion, the SQL query engine embedded into InfluxDB 3, makes it easy to query with a language most data professionals and AI agents already know. With dedicated time-based functions, queries that look like this in a general-purpose database:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-sql"&gt;WITH hours AS (
  SELECT generate_series(
    date_trunc('hour', now() - interval '24 hours'),
    date_trunc('hour', now()),
    interval '1 hour'
  ) AS hour_bucket
),
sensors AS (
  SELECT DISTINCT sensor_id FROM sensor_data
),
hour_sensor AS (
  SELECT h.hour_bucket, s.sensor_id
  FROM hours h
  CROSS JOIN sensors s
),
agg AS (
  SELECT
    sensor_id,
    date_trunc('hour', time) AS hour_bucket,
    percentile_cont(0.95) WITHIN GROUP (ORDER BY temperature) AS p95
  FROM sensor_data
  WHERE time &amp;gt;= now() - interval '24 hours'
  GROUP BY sensor_id, hour_bucket
)
SELECT
  hs.hour_bucket,
  hs.sensor_id,
  COALESCE(a.p95, 0) AS p95
FROM hour_sensor hs
LEFT JOIN agg a USING (hour_bucket, sensor_id)
ORDER BY hs.sensor_id, hs.hour_bucket;&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Can be shortened to this in InfluxDB:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-sql"&gt;SELECT
  date_bin_gapfill(INTERVAL '1 hour', time) AS hour,
  sensor_id,
  interpolate(percentile(temperature, 95)) AS p95
FROM sensor_data
WHERE time &amp;gt;= NOW() - INTERVAL '24 hours'
GROUP BY hour, sensor_id;&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Admittedly, this is a cherry-picked example for a complicated function most users won’t use every day, but there are plenty that aren’t. The InfluxDB 3 processing engine comes with a host of built-in plugins for processing and transforming data as it’s written, monitoring and anomaly detection, forecasting, and alerting. Retention policies can be set at a database or table level, ensuring you keep data as long as it’s useful, and the downsampling plugin for the processing engine can help you keep your data at a lower resolution once it’s past the end of that policy. InfluxDB also has tons of connections to the ecosystem of data visualization tools, clients, and, critical for the purposes of this tutorial, integrates seamlessly with Telegraf, the data collection agent we’ll be using to move data from our MQTT broker into InfluxDB.&lt;/p&gt;

&lt;h2 id="the-mqtt---influxdb-pipeline"&gt;The MQTT -&amp;gt; InfluxDB pipeline&lt;/h2&gt;

&lt;p&gt;The architecture of this data pipeline is relatively straightforward, with data flowing in one direction throughout:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Devices, sensors, and anything generating raw data are set up as an MQTT publishing client connected to the broker.&lt;/li&gt;
  &lt;li&gt;The MQTT broker receives the raw data from the various publishers and forwards it.&lt;/li&gt;
  &lt;li&gt;Telegraf subscribes to the published topics and then writes data into InfluxDB.&lt;/li&gt;
  &lt;li&gt;The InfluxDB processing engine handles all necessary transformations and makes the data immediately available for querying and visualization.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So let’s jump into specifics.&lt;/p&gt;

&lt;h4 id="setting-up-the-mqtt-broker-and-clients"&gt;Setting Up the MQTT Broker and Clients&lt;/h4&gt;

&lt;p&gt;The first thing you’re going to need to do is install the MQTT technology of your choice on every device that’s going to be a publishing client, as well as on the server you want to act as your broker. Eclipse Mosquitto is a common open source option for MQTT that we’ll use in this guide, but other MQTT brokers and clients, such as HiveMQ, Paho, MQTTX, MQTT Explorer, or EasyMQTT, will also work for this tutorial. The exact commands will differ depending on what you’re using, but the concepts will remain the same, as it’s a standardized protocol.&lt;/p&gt;

&lt;p&gt;To install Eclipse Mosquitto:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;On Linux, run: &lt;code class="language-markup"&gt;snap install mosquitto&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;On Mac: Install &lt;a href="https://brew.sh/"&gt;Homebrew&lt;/a&gt;, then run &lt;code class="language-markup"&gt;brew install mosquitto&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;On Windows: Go to the &lt;a href="https://mosquitto.org/download/"&gt;mosquitto download page&lt;/a&gt; and install from there&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When you install Mosquitto, the installer will tell you the exact path to the configuration file. You’ll want to configure your broker first, and you should set up authentication if you don’t want to allow unauthenticated connections. Skipping authentication can be fine if you’re running everything on a local network without any port forwarding, but it’s not recommended if your devices are communicating over the internet.&lt;/p&gt;

&lt;p&gt;There are &lt;em&gt;many&lt;/em&gt; different ways to set up authentication with Mosquitto—one of the simplest is &lt;a href="https://mosquitto.org/man/mosquitto_passwd-1.html"&gt;creating a password file with the &lt;code class="language-markup"&gt;mosquitto_passwd&lt;/code&gt; command&lt;/a&gt;, but you can read a full list of options on &lt;a href="https://mosquitto.org/documentation/authentication-methods/"&gt;their documentation page for authentication methods&lt;/a&gt;. Whatever you settle on, if you decide to use some form of authentication, you’ll need to add the following line to your Mosquitto configuration file:&lt;/p&gt;

&lt;p&gt;&lt;code class="language-markup"&gt;allow_anonymous false&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;There are &lt;a href="https://mosquitto.org/man/mosquitto-conf-5.html"&gt;many other configuration options in the documentation&lt;/a&gt;, and what you set and configure will depend on your use case, but some you may want to consider are:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;persistence false&lt;/code&gt; - Because we’re writing to InfluxDB, we don’t need to persist messages to disk.&lt;/li&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;log_dest stdout&lt;/code&gt; - For setting up, testing, and debugging, outputting logs directly to the terminal makes things easier.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And of course, make sure your listener is configured on the same port for all devices. The default is 1883, but you can change this if desired.&lt;/p&gt;

&lt;p&gt;Once you configure your broker, you can set up your publishing clients, and with whatever data you’re measuring, they can publish messages to the broker with the command:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;mosquitto_pub -h "host" -t "topic" -m "value"&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;If you’re running this all on a local network, your host will be &lt;code class="language-markup"&gt;localhost&lt;/code&gt;; otherwise, it’ll be the address where your broker is running. The value should be whatever you’re measuring and publishing at that moment.&lt;/p&gt;

&lt;p&gt;Your topic can be whatever is appropriate to label that value. If you have different devices and different types of measurements for each device, it’s recommended to nest your topics and organize them in a way that makes logical sense. For example, if you have many different devices measuring, say, temperature and velocity, your topic arrangement may look like:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;/sensors/vehicles/v1/device1/temp&lt;/li&gt;
  &lt;li&gt;/sensors/vehicles/v1/device1/velocity&lt;/li&gt;
  &lt;li&gt;/sensors/vehicles/v1/device2/temp&lt;/li&gt;
  &lt;li&gt;/sensors/vehicles/v1/device2/velocity&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As long as you have a unique topic structure for each type of value being sent, we can parse and sort this into tags and fields with InfluxDB. For further information on setting up MQTT topics, there are plenty of great &lt;a href="https://www.cedalo.com/blog/mqtt-topics-and-mqtt-wildcards-explained"&gt;guides on the matter&lt;/a&gt;.&lt;/p&gt;
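&lt;p&gt;If your publishing clients are scripted rather than driven from the shell, the same publish can be done in code. Here’s a minimal sketch using the Paho Python client (&lt;code class="language-markup"&gt;pip install paho-mqtt&lt;/code&gt;); the host, credentials, and reading are placeholders:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;import paho.mqtt.publish as publish

# Publish one temperature reading to a nested topic
# (host, credentials, and the value are placeholders)
publish.single(
    topic="/sensors/vehicles/v1/device1/temp",
    payload="72.5",
    hostname="localhost",
    port=1883,
    auth={"username": "username", "password": "password"},  # omit if allow_anonymous is true
)&lt;/code&gt;&lt;/pre&gt;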

&lt;p&gt;With your clients and broker configured, your clients publishing messages, and your broker receiving and forwarding those messages, you should be all set up for the MQTT portion of this data pipeline.&lt;/p&gt;

&lt;h2 id="installing-influxdb"&gt;Installing InfluxDB&lt;/h2&gt;

&lt;p&gt;The next step is to move your MQTT data into InfluxDB, which starts with installing InfluxDB itself. You can &lt;a href="https://docs.influxdata.com/influxdb3/core/install/"&gt;check out our docs on installing it here&lt;/a&gt;, but the simplest way to get started is to run the install script provided by InfluxData:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;curl -O https://www.influxdata.com/d/install_influxdb3.sh \
&amp;amp;&amp;amp; sh install_influxdb3.sh&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;These should work on every operating system and provide you with some simple options to get started with InfluxDB 3 Core or Enterprise. The installation script should also give you an admin token, which you’ll want to store somewhere safe so you can use it for authentication. If you’d like to further configure your InfluxDB 3 instance, the installation script should tell you where all files and configuration files were installed for further adjusting, though it should run fine out of the box.&lt;/p&gt;

&lt;p&gt;If you have Docker installed, you can also install the InfluxDB Explorer UI as part of this process, giving you an easy way to view, manage, and query your InfluxDB 3 instance. You can reach it by navigating to &lt;code class="language-markup"&gt;localhost:8888&lt;/code&gt; in your browser, entering &lt;code class="language-markup"&gt;host.docker.internal:8181&lt;/code&gt; for the server address, and providing the admin token.&lt;/p&gt;

&lt;h4 id="installing-and-configuring-telegraf"&gt;Installing and Configuring Telegraf&lt;/h4&gt;

&lt;p&gt;With InfluxDB 3 installed and running, the last step to get the data pipeline operational is to install and configure Telegraf to connect our MQTT broker to InfluxDB. Telegraf installation varies by operating system and Linux distribution, so check out the &lt;a href="https://docs.influxdata.com/telegraf/v1/install/#download-and-install-telegraf"&gt;Telegraf documentation on installation to find the right files or command to run&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you’re on Mac or Linux, installing Telegraf also generates a default configuration file for you:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;On Mac (installed via Homebrew): &lt;code class="language-markup"&gt;/usr/local/etc/telegraf.conf&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;On Linux: &lt;code class="language-markup"&gt;/etc/telegraf/telegraf.conf&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Otherwise, you’ll need to create an empty configuration file or generate one with &lt;code class="language-markup"&gt;telegraf config &amp;gt; telegraf.conf&lt;/code&gt;. Once you have located or created your configuration file, all that’s left to do is connect Telegraf to your MQTT Broker and InfluxDB.&lt;/p&gt;

&lt;p&gt;Configuring the connection to InfluxDB is simple. Add these lines to the config file:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;[[outputs.influxdb_v2]]
  urls = ["InfluxDB address &amp;amp; port"]
  token = "admin token"
  organization = "org name"
  bucket = "destination database"&lt;/code&gt;&lt;/pre&gt;

&lt;ul&gt;
  &lt;li&gt;The InfluxDB address and port should be wherever you have InfluxDB installed. If you’re running on a local network, this will be &lt;code class="language-markup"&gt;http://127.0.0.1:8181&lt;/code&gt;; otherwise, it’ll be the IP and port.&lt;/li&gt;
  &lt;li&gt;Token is the admin token you copied from installation.&lt;/li&gt;
  &lt;li&gt;Organization can be whatever you’d like to name it.&lt;/li&gt;
  &lt;li&gt;Bucket should be the name of the database you’re writing all your MQTT data to. You don’t have to create the database first.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Setting up a connection to your MQTT broker is also straightforward:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;[[inputs.mqtt_consumer]]
  servers = ["broker address"]
  topics = ["list of topics"]
  data_format = "value"
  data_type = "data_type"

  ## if you have username and password authentication for MQTT
  username = "username"
  password = "password"&lt;/code&gt;&lt;/pre&gt;

&lt;ul&gt;
  &lt;li&gt;The broker address is once again the address and port where your MQTT broker is running. For a local network, this will be &lt;code class="language-markup"&gt;tcp://127.0.0.1:1883&lt;/code&gt;.&lt;/li&gt;
  &lt;li&gt;Topics is a comma-separated list of topics that you’re writing to.&lt;/li&gt;
  &lt;li&gt;Data type is the primitive data type being written: integer, float, long, string, or boolean.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is all you need in your configuration file to have the full pipeline running! If you run telegraf with &lt;code class="language-markup"&gt;telegraf --config telegraf.conf&lt;/code&gt;, you should be able to send a message from an MQTT publisher and view that data in InfluxDB.&lt;/p&gt;
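&lt;p&gt;To confirm data is landing, you can query it back with one of the InfluxDB 3 client libraries. A quick sketch using the Python client (&lt;code class="language-markup"&gt;pip install influxdb3-python&lt;/code&gt;); the host, token, and database are placeholders, and before the topic-parsing changes below, data from this plugin typically lands in a table named &lt;code class="language-markup"&gt;mqtt_consumer&lt;/code&gt;:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;from influxdb_client_3 import InfluxDBClient3

# Connection details are placeholders; use your own host, token, and database
client = InfluxDBClient3(
    host="localhost:8181",
    token="admin token",
    database="destination database",
)

# Pull back the most recent rows written by Telegraf
df = client.query(
    query="SELECT * FROM mqtt_consumer ORDER BY time DESC LIMIT 10",
    mode="pandas",
)
print(df)&lt;/code&gt;&lt;/pre&gt;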

&lt;p&gt;However, you can make some improvements in Telegraf’s configuration to help parse and organize your data by topic. By default, this setup writes every message to the same table, storing the full topic in a single tag column and every raw reading in a monolithic “value” column, which isn’t a very good data model. With topic parsing and pivot processing added to the configuration, we can specify which part of the topic defines the table the data is written into, turn every level of the topic into a tag, and pivot on the last level of the topic so that each raw value becomes its own field:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;[[inputs.mqtt_consumer]]
  servers = ["broker address"]
  topics = ["/sensors/#"]
  data_format = "value"
  data_type = "data_type"

  ## if you have username and password authentication for MQTT
  username = "username"
  password = "password"

  [[inputs.mqtt_consumer.topic_parsing]]
    measurement = "/measurement/_/_/_/_"
    tags = "/_/device_type/version/device_name/field"
  [[processors.pivot]]
    tag_key = "field"
    value_key = "value"&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This takes a value from the /sensors/vehicles/v1/device1/temp topic and writes it to the sensors table. The tag columns populate with &lt;code class="language-markup"&gt;device_type = vehicles&lt;/code&gt;, &lt;code class="language-markup"&gt;version = v1&lt;/code&gt;, and &lt;code class="language-markup"&gt;device_name = device1&lt;/code&gt;, and temp is written as a field whose value is whatever your MQTT publisher sent. You can modify this configuration as appropriate for your topics, and &lt;a href="https://docs.influxdata.com/telegraf/v1/input-plugins/mqtt_consumer/"&gt;the documentation provides full information on everything that can be done&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id="further-improvements"&gt;Further improvements&lt;/h2&gt;

&lt;p&gt;With MQTT data being published, parsed, and written into InfluxDB, you’ve fully set up an MQTT data pipeline! However, there’s a lot more you can do:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;View and query your data with the InfluxDB Explorer UI, as discussed earlier.&lt;/li&gt;
  &lt;li&gt;Connect any one of the many &lt;a href="https://docs.influxdata.com/influxdb3/core/tags/client-libraries/"&gt;client libraries&lt;/a&gt; to access your data and use it for downstream applications, or to a data visualization tool for dashboarding and insight into what’s being written.&lt;/li&gt;
  &lt;li&gt;Use the &lt;a href="https://docs.influxdata.com/influxdb3/core/plugins/"&gt;InfluxDB 3 processing engine&lt;/a&gt; for further transformations and processing of your data as it’s written.&lt;/li&gt;
  &lt;li&gt;Set up alerts, monitoring, forecasting, and more with the processing engine, too.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="the-final-product"&gt;The final product&lt;/h2&gt;

&lt;p&gt;By integrating MQTT, Telegraf, and InfluxDB, you’ve constructed a robust, fully-functioning data pipeline capable of efficiently centralizing real-time telemetry. The lightweight MQTT protocol ensures that messages from your distributed network flow reliably to the broker, while Telegraf acts as the collection agent for seamless ingestion and transformation. Finally, InfluxDB provides the purpose-built storage and specialized features needed to query and visualize your data in minimal time. This architecture establishes a solid foundation for turning raw event streams into meaningful insights, minimizing your time to awesome.&lt;/p&gt;
</description>
      <pubDate>Fri, 17 Apr 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/mqtt-data-pipeline-influxdb/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/mqtt-data-pipeline-influxdb/</guid>
      <category>Developer</category>
      <author>Cole Bowden (InfluxData)</author>
    </item>
    <item>
      <title>From Edge to Cloud: How Litmus Edge and InfluxDB Unlock Industrial Intelligence at Hannover Messe</title>
      <description>
&lt;p&gt;If you’ve spent time in industrial environments, you know the problem isn’t a lack of data. It’s collecting it reliably, contextualizing it, and storing it at scale. Most stacks weren’t built to fight all three battles.&lt;/p&gt;

&lt;h2 id="the-industrial-data-problem"&gt;The industrial data problem&lt;/h2&gt;

&lt;p&gt;Industrial connectivity is no joke. OT environments are notoriously fragmented and siloed, spanning PLCs, CNCs, SCADA systems, and sensors, each speaking a different protocol, running on a different vendor’s stack, and operating in a network zone that was never designed to talk to anything outside the shop floor. Extracting value from that data has traditionally required heavy IT involvement, expensive integrations, and months of professional services work. The traditional answer was usually a historian. Historians made progress on the access problem, giving individual sites a way to capture and store machine data. But standardizing that data across silos and contextualizing it across systems and plants is where they fall short. And unfortunately, that’s where most of the value lies.&lt;/p&gt;

&lt;p&gt;Once data is collected and contextualized, the next problem is keeping it useful at scale. This is more than a storage problem. Sustaining high-frequency ingest of contextualized telemetry and querying that data fast enough to act on it is where most systems break. Historians were not designed for this. They sacrifice resolution, degrade under query load, and make cross-site, cross-system analysis slow and impractical. The value in industrial data is in the detail, and most platforms are architected to throw this detail away.&lt;/p&gt;

&lt;h2 id="collect-contextualize-and-storeall-at-the-edge"&gt;Collect, contextualize, and store—all at the edge&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://litmus.io/litmus-edge"&gt;Litmus Edge&lt;/a&gt; acts as the intelligence layer between your machines and the rest of your data architecture. It connects natively to hundreds of industrial protocols, including OPC-UA, Modbus, MQTT, FANUC, Siemens S7, and many more, normalizing disparate machine data into a unified, consistent stream.&lt;/p&gt;

&lt;p&gt;But connectivity alone isn’t enough. Raw machine signals mean little without context. Litmus Edge allows operations teams to tag, enrich, and structure data at the point of collection. A temperature reading becomes tied to a specific asset, production line, facility, and product run. By the time data leaves the edge, it is no longer just a number. It is a meaningful, queryable event.&lt;/p&gt;

&lt;h2 id="scale-query-retain-your-industrial-data-hub"&gt;Scale, query, retain: your industrial data hub&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.influxdata.com/products/influxdb3-enterprise/?utm_source=website&amp;amp;utm_medium=litmus_edge_influxdb&amp;amp;utm_content=blog"&gt;InfluxDB 3&lt;/a&gt; becomes the system of record for your industrial time series data at the edge, in a centralized environment, or both.&lt;/p&gt;

&lt;p&gt;It ingests high-frequency telemetry at full resolution, serves low-latency queries for real-time operations, and scales to fleet-wide analysis across sites and time horizons without forcing tradeoffs between fidelity and cost. High cardinality isn’t a problem to design around. Long-term retention doesn’t come with a cost penalty. The data stays detailed, queryable, and useful.&lt;/p&gt;

&lt;h2 id="scaling-across-lines-sites-and-the-enterprise"&gt;Scaling across lines, sites, and the enterprise&lt;/h2&gt;

&lt;p&gt;Scale changes what’s possible, but only if the data model scales with it. When every site collects and contextualizes data the same way, writing to a consistent schema, cross-site analysis becomes straightforward. Comparing performance across plants, identifying outliers, and correlating signals across a global fleet become simple queries instead of integration projects. That consistency is what the Litmus and InfluxDB architecture is designed to deliver.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;em&gt;Which production lines across all facilities are showing early indicators of equipment degradation?&lt;/em&gt;&lt;/li&gt;
  &lt;li&gt;&lt;em&gt;How does energy consumption per unit compare across sites running similar processes?&lt;/em&gt;&lt;/li&gt;
  &lt;li&gt;&lt;em&gt;Where are the outliers? And what can the top performers teach the rest of the network?&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are not hypothetical future capabilities. They are available today to any organization willing to invest in getting the data foundation right.&lt;/p&gt;

&lt;h2 id="the-bridge-to-higher-level-analytics"&gt;The bridge to higher-level analytics&lt;/h2&gt;

&lt;p&gt;InfluxDB doesn’t just store data well; it integrates cleanly with the ecosystem: the analytics, visualization, and AI/ML tooling your teams are already investing in. Grafana dashboards, anomaly detection workflows, and digital twin platforms connect through InfluxDB’s SQL-native interface and open APIs without custom pipelines or bespoke integration work.&lt;/p&gt;

&lt;p&gt;For OT teams, that’s the point. The edge handles the hard part—protocol translation, normalization, enrichment. InfluxDB centralizes the results into a single, interoperable data layer that every team can query with the tools they already use.&lt;/p&gt;

&lt;p&gt;The result is a data architecture that is genuinely interoperable; the plant floor and the enterprise layer are finally speaking the same language.&lt;/p&gt;

&lt;h2 id="extending-into-the-cloud-with-aws"&gt;Extending into the cloud with AWS&lt;/h2&gt;

&lt;p&gt;There are several ways to deploy InfluxDB as your industrial data hub: on-premises, at the edge, or in the cloud. For teams who want to go straight to the cloud, AWS is a natural fit. In this reference architecture, Litmus Edge writes contextualized telemetry directly into &lt;a href="https://www.influxdata.com/products/timestream-for-influxdb/?utm_source=website&amp;amp;utm_medium=litmus_edge_influxdb&amp;amp;utm_content=blog"&gt;Amazon Timestream for InfluxDB&lt;/a&gt;, creating a seamless path from the shop floor to cloud-scale analytics. This allows teams to centralize access, scale analytics, and integrate with the broader AWS ecosystem without rebuilding their infrastructure from scratch.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/7I05B89zisdmKtUk9EiUt6/e10ba53b117ae6b4c25dcfd791321705/image__6_.png" alt="Litmus Edge diagram" /&gt;
&lt;br /&gt;&lt;/p&gt;

&lt;p&gt;Once data is available in AWS, it opens up a broader set of capabilities. For example, as new data arrives, you can trigger serverless workflows with AWS Lambda, stream high-velocity data through Kinesis for downstream processing, or connect directly to SageMaker to train models on high-fidelity data, without reshaping or downsampling it first.&lt;/p&gt;

&lt;h2 id="what-were-showing-at-hannover-messe"&gt;What we’re showing at Hannover Messe&lt;/h2&gt;

&lt;p&gt;At Hannover Messe, you’ll be able to see this architecture running end-to-end:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;&lt;a href="https://litmus.io/hannover-messe-2026"&gt;Litmus booth&lt;/a&gt; (Hall 16, Stand A09)&lt;/strong&gt;: The full Digital Factory demo, showing how data flows from industrial systems into Litmus and into InfluxDB 3 Enterprise in real-time.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;&lt;a href="https://www.influxdata.com/event/meet-influxdb-at-hannover-messe-2026/?utm_source=website&amp;amp;utm_medium=litmus_edge_influxdb&amp;amp;utm_content=blog"&gt;InfluxData kiosk&lt;/a&gt; (within the Litmus booth)&lt;/strong&gt;: A deeper look at how InfluxDB handles high-frequency ingest, real-time querying, and efficient storage at massive scale.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;AWS booth (Litmus kiosk)&lt;/strong&gt;: The cloud extension of the demo, highlighting replication into Amazon Timestream for InfluxDB and integration with AWS services.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The InfluxData team (including myself) will be on-site at the Litmus booth throughout the event to walk through the architecture and discuss real-world deployment patterns.&lt;/p&gt;

&lt;p&gt;&lt;br /&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Post by Ben Corbett, InfluxData; Rajesh Gomatam, Ph.D. Principal Partner Solutions Architect - Manufacturing, AWS; and Benjamin Norman, Partner Solution Architect, Litmus&lt;/em&gt;&lt;/p&gt;
</description>
      <pubDate>Thu, 16 Apr 2026 06:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/litmus-edge-influxdb/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/litmus-edge-influxdb/</guid>
      <category>Demo</category>
      <category>Product</category>
      <category>Developer</category>
      <author>Ben Corbett (InfluxData)</author>
    </item>
    <item>
      <title>What’s New in InfluxDB 3 Explorer 1.7: Table Management, Data Import, Transforms, and More</title>
      <description>
&lt;p&gt;InfluxDB 3 Explorer 1.7 is a step forward for anyone who wants to manage their time series data without constantly switching between the UI and a terminal. This release adds table-level schema management, the ability to import data from other InfluxDB instances, and a new Transform Data section to reshape your data, all within the Explorer UI.&lt;/p&gt;

&lt;h2 id="table-management"&gt;Table management&lt;/h2&gt;

&lt;p&gt;Previously, if you wanted to see what tables existed inside a database, you had to query system tables or use the API. The new Manage Tables page changes that.
You can get there from the sidebar or from the new actions menu on any database in the Manage Databases page. That actions menu gives you quick access to query a database, view its tables, or delete it.&lt;/p&gt;

&lt;p&gt;The Manage Tables page lists every table in the selected database, along with its column count, type, and any configured &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/admin/distinct-value-cache/"&gt;Distinct Value&lt;/a&gt; or &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/admin/last-value-cache/"&gt;Last Value&lt;/a&gt; Caches. Use the toggle filters to show or hide system tables and deleted tables. Deleted tables show up with a “Pending Delete” badge when the Show Deleted Tables toggle is enabled, so you always have visibility into what’s been removed.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/6U2nqrukRwDJktsHPjiL91/4a8a861bf96b52061a6def8e23726593/Screenshot_2026-04-14_at_6.13.48â__PM.png" alt="Explorer 1.7 Manage Tables" /&gt;&lt;/p&gt;

&lt;p&gt;You can also &lt;strong&gt;create new tables&lt;/strong&gt; directly from this page. The Create Table dialog lets you define the schema up front: name, fields with data types, optional tags, and a retention period. This is useful when you want to control your schema explicitly rather than relying on &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/get-started/write/"&gt;schema-on-write&lt;/a&gt; to infer types from the first arriving data points.&lt;/p&gt;

&lt;p&gt;From any table’s action menu, you can jump straight to the Data Explorer with a pre-built query for that table.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/46bQpfsOyXjWem9M4125o7/73e9dcd0a33e3b11982d806d6d0f0504/Screenshot_2026-04-14_at_6.15.43â__PM.png" alt="1.7 Schema on Write" /&gt;&lt;/p&gt;

&lt;h2 id="import-from-influxdb"&gt;Import from InfluxDB&lt;/h2&gt;

&lt;p&gt;The next few features I’ll discuss are enhancements that make it much easier to work with the &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/plugins/"&gt;InfluxDB 3 Processing Engine&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Moving data between InfluxDB instances used to mean writing scripts, dealing with export formats, and coordinating tokens across environments. The new &lt;strong&gt;&lt;a href="https://github.com/influxdata/influxdb3_plugins/tree/main/influxdata/import"&gt;Import from InfluxDB&lt;/a&gt;&lt;/strong&gt; feature provides a guided workflow for migrating small-to-medium datasets from any existing InfluxDB v1, v2, or v3 instance (assuming v3 Schema compatibility) into your current InfluxDB 3 database.&lt;/p&gt;

&lt;p&gt;You’ll find it under the Write Data section, on both the Dev Data and Production Data pages. The workflow walks you through selecting a target database (or creating a new one), connecting to a source InfluxDB instance, authenticating, and then choosing which databases and tables to import.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/2krWp1AKKHN86ICg70mjBL/b22f50fdf84fb8cbe43bb1be4d3f747e/Screenshot_2026-04-14_at_6.17.45â__PM.png" alt="Writing Dev Data" /&gt;&lt;/p&gt;

&lt;p&gt;Before committing to the import, you can perform a &lt;strong&gt;dry run&lt;/strong&gt; that shows you exactly what will be transferred, including the source and destination, the number of tables, the estimated row count, and how long it should take. Advanced options let you tune the batch size and concurrency if you need to balance import speed against resource usage.&lt;/p&gt;

&lt;p&gt;Once you start the import, a live progress view shows you how far along things are, how many rows have been imported, and the current status of each table. When it finishes, a “Query this database” button takes you straight to the Data Explorer so you can verify everything landed correctly.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/1Ao5CzW0yXUYPijeK0k2Vu/44b63c64f71ccdd05a5fb3f74b048329/Screenshot_2026-04-14_at_6.19.20â__PM.png" alt="Write Data" /&gt;&lt;/p&gt;

&lt;p&gt;If you’re running an InfluxDB 1.x or 2.x instance and want to try InfluxDB 3 with your real data, this saves you from building a migration pipeline. Just point the import tool at your existing instance, pick the databases and time range you want, and the data flows over. It also works for consolidating data from multiple InfluxDB 3 instances into one place, or pulling production data into a dev environment for testing.&lt;/p&gt;

&lt;h2 id="transform-data"&gt;Transform data&lt;/h2&gt;

&lt;p&gt;The new &lt;strong&gt;Transform Data&lt;/strong&gt; section in the sidebar gives you a visual interface for setting up data transformations that run automatically on ingestion via the Processing Engine. Under the hood, these are powered by the &lt;a href="https://github.com/influxdata/influxdb3_plugins/tree/main/influxdata/basic_transformation"&gt;Basic Transformation Processing Engine plugin&lt;/a&gt;, but you don’t need to write any plugin configuration by hand. The UI handles that for you.&lt;/p&gt;

&lt;p&gt;The way it works: when data is written to a source table, the transformation runs automatically and writes the results to a target database or table. You can set a short &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/admin/databases/#table-retention-period"&gt;retention period&lt;/a&gt; on the source data (say, one day) so the raw data cleans itself up, and the transformed data lives on in the destination. There are four types of transformations available.&lt;/p&gt;

&lt;h4 id="rename-table"&gt;Rename Table&lt;/h4&gt;

&lt;p&gt;Rename Table lets you route data arriving in one table to another table. This is handy when you’re consuming data from a source you don’t control, and the table names don’t match your naming conventions.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/5BiXqB4Q9BDHEFsOv8QtaW/c56cd9fe61d7ca91c1dcc37385bf6656/Screenshot_2026-04-14_at_6.24.41â__PM.png" alt="rename table" /&gt;&lt;/p&gt;

&lt;h4 id="rename-columns"&gt;Rename Columns&lt;/h4&gt;

&lt;p&gt;Rename Columns works similarly, but at the column level. You pick a source table and select which columns to rename. If you’re integrating data from different systems that use different naming conventions (for example, &lt;code class="language-markup"&gt;temp_f&lt;/code&gt; vs &lt;code class="language-markup"&gt;temperature_fahrenheit&lt;/code&gt;), this standardizes everything without touching the source.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/3hF8Wa6vbro73j1A2O3f6W/cae32a0cfe6a43949f5b64b09a7338c2/Screenshot_2026-04-14_at_6.27.58â__PM.png" alt="rename columns" /&gt;&lt;/p&gt;

&lt;h4 id="transform-values"&gt;Transform Values&lt;/h4&gt;

&lt;p&gt;Transform Values lets you apply calculations or conversions to field values as they come in. You can do math operations, string transformations, unit conversions, or simple find-and-replace. If your sensors report temperature in Celsius but your dashboards expect Fahrenheit, this handles the conversion at ingestion time so your queries stay clean.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/2rTFmTLs7vQ2Z5LPUDHzTx/e10529f9e3eb69f7a8e251956a9acff4/Screenshot_2026-04-14_at_6.29.13â__PM.png" alt="transform values" /&gt;&lt;/p&gt;

&lt;h4 id="filter-data"&gt;Filter Data&lt;/h4&gt;

&lt;p&gt;Filter Data lets you keep only the rows or columns that match specific conditions. You can filter by rows (e.g., only keep data where &lt;code class="language-markup"&gt;crop_type = 'carrots'&lt;/code&gt;) or by columns (drop fields you don’t need). This is useful when you’re receiving more data than you actually want to store. For example, a third-party feed might send 50 fields when you only care about 5.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/4mTxJgxUUyEZH7RSbRXRet/c67d429d6e87d4bfdb0b90c29e9cbbbc/Screenshot_2026-04-14_at_6.30.22â__PM.png" alt="create transform" /&gt;&lt;/p&gt;

&lt;p&gt;You can test each transformation before deployment, and once deployed, monitor its status (running, stopped, errors) from the Transform Data dashboard.&lt;/p&gt;

&lt;h4 id="downsample-data"&gt;Downsample Data&lt;/h4&gt;

&lt;p&gt;Downsampling is a classic time series operation: take high-frequency data and roll it up into lower-frequency summaries to save storage and speed up queries over long time ranges. The new &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/plugins/library/official/downsampler/"&gt;&lt;strong&gt;Downsample&lt;/strong&gt;&lt;/a&gt; page, also under the Transform Data section, makes this easy to set up.
You create a downsample trigger by specifying a source table, a target table, a schedule (how often the aggregation runs), a time window (how far back to look), an aggregation interval (the bucket size), and an aggregation function (avg, sum, min, max, etc.). You can also choose to include or exclude specific fields.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/7yPPBCTavele7EaFCLvIsa/156aa1c09f6bbb88b37ff14f425ce995/Screenshot_2026-04-14_at_6.31.40â__PM.png" alt="downsample" /&gt;&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://github.com/influxdata/influxdb3_plugins/tree/main/influxdata/downsampler/"&gt;Downsample Processing Engine plugin&lt;/a&gt; powers this feature.&lt;/p&gt;

&lt;h2 id="get-started"&gt;Get started&lt;/h2&gt;

&lt;p&gt;All of these features are available now in &lt;a href="https://www.influxdata.com/blog/influxdb-3-processing-engine-updates/?utm_source=website&amp;amp;utm_medium=influxdb_explorer_1_7&amp;amp;utm_content=blog"&gt;InfluxDB 3 Explorer 1.7&lt;/a&gt;. For more on these Processing Engine capabilities, see InfluxDB 3 Processing Engine Updates.&lt;/p&gt;

&lt;p&gt;If you’re running &lt;a href="https://docs.influxdata.com/influxdb3/core/install/?utm_source=website&amp;amp;utm_medium=influxdb_explorer_1_7&amp;amp;utm_content=blog"&gt;InfluxDB 3 Core&lt;/a&gt; or &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/install/?utm_source=website&amp;amp;utm_medium=influxdb_explorer_1_7&amp;amp;utm_content=blog"&gt;Enterprise&lt;/a&gt;, update to the latest version to try them out. To learn more, check out the &lt;a href="https://docs.influxdata.com/influxdb3/explorer/?utm_source=website&amp;amp;utm_medium=influxdb_explorer_1_7&amp;amp;utm_content=blog"&gt;InfluxDB 3 Explorer documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To update InfluxDB 3 Explorer, pull the latest Docker image:
&lt;code class="language-markup"&gt;docker pull influxdata/influxdb3-ui&lt;/code&gt;&lt;/p&gt;
</description>
      <pubDate>Wed, 15 Apr 2026 05:30:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/influxdb-explorer-1-7/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/influxdb-explorer-1-7/</guid>
      <category>Product</category>
      <category>Developer</category>
      <author>Daniel Campbell (InfluxData)</author>
    </item>
    <item>
      <title>Less Friction, More Control: Here's What Shipped in Q1</title>
      <description>&lt;p&gt;Our Q1 momentum has been focused on a simple goal: making InfluxDB easier to operate, easier to scale, and faster to put to work.&lt;/p&gt;

&lt;p&gt;Across Telegraf, InfluxDB 3, and our managed offerings, these updates reduce friction in how teams collect, process, and scale time series workloads.&lt;/p&gt;

&lt;h2 id="telegraf-controller-enters-beta"&gt;Telegraf Controller enters beta&lt;/h2&gt;

&lt;p&gt;Telegraf is already a powerful way to collect metrics, logs, and events across environments. At scale, the challenge shifts from collection to control. Telegraf Enterprise is designed to solve that problem.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;At the center is Telegraf Controller, a control plane that gives teams centralized configuration management and fleet-wide health visibility&lt;/strong&gt;. The beta includes major capabilities such as API authentication, API token management, user account management, multi-user support, role-based access control, global settings management, and expanded plugin support in the visual config builder.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Feedback from early users is shaping the road to general availability, with enterprise licensing, enforcement, audit logging, and federated identity management next on the roadmap.&lt;/strong&gt; &lt;a href="https://www.influxdata.com/products/telegraf-enterprise/?utm_source=website&amp;amp;utm_medium=q1_product_recap_2026&amp;amp;utm_content=blog"&gt;Sign up to join the beta&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/2C5Q22cX3rXamZNOqVDPIF/a46fed22b3ff4f33e7552dddcddc8796/Screenshot_2026-04-07_at_5.41.54â__PM.png" alt="Telegraf Agents SS" /&gt;&lt;/p&gt;

&lt;h2 id="influxdb-39-adds-more-operational-control"&gt;InfluxDB 3.9 adds more operational control&lt;/h2&gt;

&lt;p&gt;Last week’s &lt;a href="https://www.influxdata.com/blog/influxdb-3-9/"&gt;release&lt;/a&gt; of &lt;strong&gt;InfluxDB 3.9 is focused on making the platform easier to run at scale, 
with improvements aimed at predictability, visibility, and day-to-day management&lt;/strong&gt;. The release expands CLI and automation support for headless environments, improves resource and lifecycle management, and adds clearer visibility into access control and product identity across Core and Enterprise deployments. These are the changes that matter in production: fewer rough edges, stronger operational clarity, and better control as workloads grow.&lt;/p&gt;

&lt;p&gt;InfluxDB 3.9 Enterprise also includes a new beta performance preview for non-production environments. &lt;strong&gt;This optional preview includes optimized single-series queries, reduced CPU and memory spikes under load, support for wider and sparser schemas, and early automatic distinct value caches to reduce metadata query latency&lt;/strong&gt;. These features are not yet recommended for production, but they give customers an early look at capabilities planned for future releases and a chance to help shape what comes next.&lt;/p&gt;

&lt;h2 id="processing-engine-updates-make-influxdb-3-easier-to-operationalize"&gt;Processing Engine updates make InfluxDB 3 easier to operationalize&lt;/h2&gt;

&lt;p&gt;The Processing Engine remains one of the most powerful parts of InfluxDB 3 because it allows teams to run logic directly at the database. Users can transform data on ingest, run scheduled jobs, or serve HTTP requests without adding external services or layering on more pipeline complexity.&lt;/p&gt;

&lt;p&gt;This quarter, we continued to expand both the engine itself and the plugin ecosystem around it. 
The latest plugins make it easier to get data into InfluxDB 3 from more sources:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;The Import Plugin&lt;/strong&gt; provides a simpler path for bringing data from InfluxDB v1, v2, or v3 into InfluxDB 3 Core and Enterprise, with support for dry runs, progress tracking, pause and resume, conflict handling, and flexible filtering.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;New MQTT, Kafka, and AMQP subscription plugins&lt;/strong&gt; help users ingest streaming data directly from external message brokers.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;The new OPC UA Plugin&lt;/strong&gt; gives industrial teams a more direct path to data from PLCs, SCADA systems, and other OPC UA-enabled equipment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We also made important improvements to the Processing Engine itself:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;New synchronous write controls give plugin authors more flexibility over durability and throughput.&lt;/li&gt;
  &lt;li&gt;Batch write support improves efficiency for high-volume workloads.&lt;/li&gt;
  &lt;li&gt;Asynchronous request handling keeps status checks and control operations responsive during long-running jobs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Together, these updates make the Processing Engine a more practical way to build and operate real-time data pipelines directly inside InfluxDB 3. &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/plugins/"&gt;Check out our docs to learn more&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id="better-visibility-for-cloud-dedicated-customers"&gt;Better visibility for Cloud Dedicated customers&lt;/h2&gt;

&lt;p&gt;As teams run production workloads on Cloud Dedicated, understanding how the system is being used becomes just as important as performance itself.&lt;/p&gt;

&lt;p&gt;This quarter, we introduced:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Query History (GA)&lt;/strong&gt; for troubleshooting, performance analysis, and deeper insight into query activity.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;S3 API dashboards (Tier 1 and Tier 2)&lt;/strong&gt;, including monthly usage visibility.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These updates give teams better visibility into system behavior, usage patterns, and a faster path to understanding activity across the environment. &lt;a href="https://docs.influxdata.com/influxdb3/cloud-dedicated/query-data/"&gt;Detailed docs here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/6NxMXhxR3dvcUzNXa83cwN/5fa53025e47b947a57b55675b37d11c1/Screenshot_2026-04-07_at_5.45.32â__PM.png" alt="Q1 update SS" /&gt;&lt;/p&gt;

&lt;h2 id="influxdb-enterprise-1123-delivers-efficiency-gains-for-v1-environments"&gt;InfluxDB Enterprise 1.12.3 delivers efficiency gains for v1 environments&lt;/h2&gt;

&lt;p&gt;For teams needing more performance and running large-scale v1 Enterprise environments, InfluxDB Enterprise 1.12.3 is now available with substantial improvements in efficiency and reliability:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;100x faster retention enforcement for high-cardinality datasets&lt;/li&gt;
  &lt;li&gt;30% lower CPU usage during compaction&lt;/li&gt;
  &lt;li&gt;5x faster backups with configurable compression&lt;/li&gt;
  &lt;li&gt;3x less disk I/O during cold shard compactions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These improvements make Enterprise v1 clusters more efficient, more predictable under load, and more cost-effective to operate. &lt;a href="https://docs.influxdata.com/enterprise_influxdb/v1/about_the_project/release-notes/"&gt;Read the release notes&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id="amazon-timestream-for-influxdb-adds-a-new-scale-tier-and-simple-upgrade-path"&gt;Amazon Timestream for InfluxDB adds a new scale tier and simple upgrade path&lt;/h2&gt;

&lt;p&gt;InfluxDB 3 on Amazon Timestream for InfluxDB now supports clusters of up to 15 nodes, giving customers a new scale tier for more demanding real-time workloads.&lt;/p&gt;

&lt;p&gt;This expanded tier improves query concurrency, increases ingestion throughput, and provides stronger workload isolation across ingestion, queries, and compaction. For teams running high-velocity, high-resolution data in production, that means more headroom to scale without compromising real-time performance.&lt;/p&gt;

&lt;p&gt;Customers can also seamlessly migrate from InfluxDB 3 Core to InfluxDB 3 Enterprise, making it easier to move into this higher-performance tier without a manual architectural overhaul or data loss. The new 15-node option is available for InfluxDB 3 Enterprise in all AWS regions where Amazon Timestream for InfluxDB is offered. &lt;a href="https://www.influxdata.com/blog/scaling-amazon-timestream-influxdb/"&gt;Read more here&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id="looking-ahead"&gt;Looking ahead&lt;/h2&gt;

&lt;p&gt;Taken together, these updates are about helping teams do more with less friction: move data faster, operate with more confidence, and scale time series workloads without losing control.
As operational data becomes more central to modern systems, we are continuing to invest in the infrastructure that turns that data into action across edge, cloud, and distributed environments.&lt;/p&gt;
</description>
      <pubDate>Wed, 08 Apr 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/q1-product-recap-2026/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/q1-product-recap-2026/</guid>
      <category>Product</category>
      <category>Developer</category>
      <author>Ryan Nelson (InfluxData)</author>
    </item>
    <item>
      <title>New Plugins, Faster Writes, and Easier Configuration: What’s New with the InfluxDB 3 Processing Engine</title>
      <description>&lt;p&gt;The Processing Engine is one of the most powerful features in InfluxDB 3. It lets you run Python code at the database—transforming data on ingest, running scheduled jobs, or serving HTTP requests—without spinning up external services or building middleware. You define the logic, attach it to a trigger, and the database handles the rest.&lt;/p&gt;

&lt;p&gt;Since launching the Processing Engine, we’ve been building out both the engine itself and the ecosystem of plugins that run on it. Today, we want to walk you through some exciting recent additions: new plugins for data ingestion, import, and validation; some general improvements to the engine; and a better configuration experience using InfluxDB 3 Explorer.&lt;/p&gt;

&lt;h2 id="a-quick-refresher-processing-engine-plugins"&gt;A quick refresher: Processing Engine plugins&lt;/h2&gt;

&lt;p&gt;If you’re already familiar with the Processing Engine, feel free to skip ahead. For those newer to the concept, here’s the short version.&lt;/p&gt;

&lt;p&gt;A plugin is a Python script that runs inside InfluxDB 3 in response to a trigger. There are three trigger types: data writes (react to incoming data as it’s written), scheduled events (run on a timer or cron expression), and HTTP requests (expose a custom API endpoint). Plugins have direct access to the database: they can query and write without shipping data off to another machine and back. They can also talk to other systems, letting you pull in data from external sources or push results out to them.&lt;/p&gt;
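&lt;p&gt;To make the trigger model concrete, here is a minimal sketch of a data write plugin. The process_writes entry point and the influxdb3_local helper follow the conventions in the Processing Engine documentation, but treat the exact signatures as illustrative and confirm them against the docs for your version:&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-python"&gt;# Minimal data-write plugin sketch (illustrative; confirm the exact
# entry-point signature against the Processing Engine docs).
# The engine calls this function each time a batch of writes is flushed.
def process_writes(influxdb3_local, table_batches, args=None):
    for table_batch in table_batches:
        table_name = table_batch["table_name"]
        rows = table_batch["rows"]
        influxdb3_local.info(f"received {len(rows)} rows for {table_name}")

        # Write a small per-batch summary back to the database.
        # LineBuilder is made available to plugins by the engine at runtime.
        line = LineBuilder("write_summary")
        line.tag("table", table_name)
        line.int64_field("row_count", len(rows))
        influxdb3_local.write(line)
&lt;/code&gt;&lt;/pre&gt;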

&lt;p&gt;You can write your own plugins from scratch to solve problems specific to your environment. That’s the whole point of embedding Python in the database: your logic, your rules, running right next to your data.&lt;/p&gt;

&lt;p&gt;But we also know that not everyone wants to start from a blank page. That’s why we maintain an &lt;a href="https://github.com/influxdata/influxdb3_plugins"&gt;official plugin library&lt;/a&gt; with production-ready plugins for common time series tasks, such as downsampling, anomaly detection, forecasting, state change monitoring, and sending notifications to Slack, email, or SMS.&lt;/p&gt;

&lt;p&gt;These official plugins are designed to work in two ways. You can install them and use them as-is, configuring them through trigger arguments or TOML files to fit your setup. Or you can treat them as templates: fork one, customize the logic, and build something tailored to your exact workflow. Either way, they’re meant to get you moving faster.&lt;/p&gt;

&lt;p&gt;One more thing worth mentioning: if you’re thinking about building a custom plugin but aren’t sure where to start, AI tools like Claude can be very effective. Point Claude to the &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/plugins/"&gt;Processing Engine documentation&lt;/a&gt; and the &lt;a href="https://github.com/influxdata/influxdb3_plugins"&gt;plugin library repo&lt;/a&gt; for examples, describe what you want your plugin to do, and let it generate a first draft. We’ve seen simple plugins created in a single shot, from description to working code, and even more complex plugins come together quickly when the AI has good examples to work from. It’s a great way to get past the blank-page problem and into something you can iterate on.&lt;/p&gt;

&lt;h2 id="new-plugins-data-ingestion-import-and-validation"&gt;New plugins: data ingestion, import, and validation&lt;/h2&gt;

&lt;p&gt;We’ve recently added several new plugins to the library that address some of the most common requests we’ve been hearing from the community. These are available now in beta—they’re fully functional, but we want to see them tested across more environments before we call them production-ready. Give them a try and let us know how they work for you.&lt;/p&gt;

&lt;h4 id="influxdb-import-plugin"&gt;InfluxDB Import Plugin&lt;/h4&gt;

&lt;p&gt;If you’re running an older version of InfluxDB and want to bring your data into InfluxDB 3, the new Import Plugin makes that significantly easier. It supports importing from InfluxDB v1, v2, or v3 instances over HTTP, with features you’d expect from a serious import tool: automatic data sampling for optimal batch sizing, pause/resume for long-running imports, progress tracking, tag/field conflict detection and resolution, configurable time ranges and table filtering, and a dry run mode so you can preview what an import will look like before committing to it.&lt;/p&gt;

&lt;p&gt;The plugin runs as an HTTP trigger, so you control the entire import lifecycle (start, pause, resume, cancel, check status) through simple HTTP requests. That means you can kick off a large import, pause it during peak hours, and pick it up later from exactly where it left off. For small or medium-sized InfluxDB databases, it can even serve as a complete migration tool for moving to InfluxDB 3.&lt;/p&gt;
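&lt;p&gt;As a rough sketch of that request-driven lifecycle, the snippet below drives an import trigger over HTTP with Python’s requests library. The base URL, trigger path, token, and action parameters here are all placeholders; check the plugin’s README and the Processing Engine docs for the real request format:&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-python"&gt;# Hypothetical lifecycle control for the Import Plugin's HTTP trigger.
# The endpoint path, token, and action names are placeholders; consult
# the plugin documentation for the real request format.
import requests

BASE = "http://localhost:8181/api/v3/engine/import"  # example trigger path
HEADERS = {"Authorization": "Bearer MY_TOKEN"}        # example database token

def send(action, **params):
    """Send one lifecycle command to the import trigger and return its JSON reply."""
    resp = requests.post(BASE, headers=HEADERS, json={"action": action, **params})
    resp.raise_for_status()
    return resp.json()

# Each command would normally be issued at a different time; they are shown
# together here only to list the available operations.
send("start", dry_run=True)   # preview the import without writing anything
send("start")                 # begin the real import
send("pause")                 # pause during peak hours
send("resume")                # pick up exactly where it left off
print(send("status"))         # check progress at any time
&lt;/code&gt;&lt;/pre&gt;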

&lt;h4 id="data-subscription-plugins-mqtt-kafka-and-amqp"&gt;Data subscription plugins: MQTT, Kafka, and AMQP&lt;/h4&gt;

&lt;p&gt;These three plugins let new users start getting data into InfluxDB 3 quickly, without writing any code. Each one subscribes to an external message broker and automatically ingests the messages it receives into InfluxDB 3.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;MQTT Subscriber Plugin&lt;/strong&gt; connects to an MQTT broker, subscribes to topics you specify, and transforms incoming messages into time series data. It supports JSON, Line Protocol, and custom text formats with regex parsing, and uses persistent sessions to ensure reliable message delivery between executions.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Kafka Subscriber Plugin&lt;/strong&gt; does the same for Kafka topics. It uses consumer groups for reliable delivery, supports configurable offset commit policies (commit on success for data integrity, or commit always for maximum throughput), and handles JSON, Line Protocol, and text formats.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;AMQP Subscriber Plugin&lt;/strong&gt; rounds out the trio with support for RabbitMQ and other AMQP-compatible brokers. Like the others, it supports multiple message formats, flexible acknowledgment policies, and comprehensive error tracking.&lt;/p&gt;
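&lt;p&gt;Under the hood, all three follow the same basic flow: pull a message off the broker, parse it according to the configured format, and turn it into a point InfluxDB 3 can ingest. Purely as an illustration of that mapping (not the plugins’ actual code), here is how a JSON payload becomes a line protocol string:&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-python"&gt;# Illustrative only: the kind of message-to-point mapping the subscriber
# plugins perform internally when a JSON payload arrives from a broker.
import json

payload = '{"device": "pump-12", "temperature": 71.3, "pressure": 2.1}'

def json_to_line_protocol(measurement, raw, tag_keys):
    data = json.loads(raw)
    tags = ",".join(f"{k}={data[k]}" for k in sorted(tag_keys))
    fields = ",".join(f"{k}={v}" for k, v in data.items() if k not in tag_keys)
    return f"{measurement},{tags} {fields}"

print(json_to_line_protocol("sensors", payload, tag_keys={"device"}))
# sensors,device=pump-12 temperature=71.3,pressure=2.1
&lt;/code&gt;&lt;/pre&gt;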

&lt;h4 id="opc-ua-plugin"&gt;OPC UA Plugin&lt;/h4&gt;

&lt;p&gt;For industrial environments, the new OPC UA Plugin connects directly to PLCs, SCADA systems, and other OPC UA-enabled equipment. It polls node values on a schedule and writes them into InfluxDB 3 with automatic data type detection. You can list specific nodes for precise control, or use browse mode to auto-discover devices and variables across large deployments. The plugin maintains a persistent connection between polling intervals and supports quality filtering, namespace URI resolution, and TLS security.&lt;/p&gt;

&lt;p&gt;Now, you might be thinking: “I’m already using Telegraf to interface with my streaming data services or OPC UA, why do I need these?” If Telegraf is working well for you, that’s great; there’s no need to change what isn’t broken. But if you’re newer to InfluxDB and aren’t yet a Telegraf user, these plugins give you another way to quickly get data flowing into InfluxDB 3 without adding another component to your stack.&lt;/p&gt;

&lt;p&gt;All three subscription plugins share a consistent configuration model: you can set them up with CLI arguments for simple cases or TOML configuration files for more complex mapping scenarios. They all include built-in error tracking (logging parse failures to dedicated exception tables) and write statistics so you can monitor ingestion health over time.&lt;/p&gt;

&lt;h4 id="schema-validator-plugin"&gt;Schema Validator Plugin&lt;/h4&gt;

&lt;p&gt;One of the benefits of InfluxDB is that you don’t have to pre-define a schema. Data gets written as it is received. But for some use cases, our customers do want to constrain incoming data so that it conforms to a specific schema.&lt;/p&gt;

&lt;p&gt;The Schema Validator Plugin addresses that challenge, ensuring only clean, well-structured data makes it into your production tables. You define a JSON schema that specifies allowed measurements, required and optional tags and fields, data types, and allowed values. The plugin sits on a WAL flush trigger and validates every incoming row against your schema. Rows that pass get written to your target database or table; rows that fail get rejected (and optionally logged so you can see what’s being filtered out).&lt;/p&gt;

&lt;p&gt;A typical pattern is to write raw data into a single database or table, let the validator check it, and have clean data land in a separate database or table. It’s a straightforward way to build a reliable data pipeline without external tooling.&lt;/p&gt;
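&lt;p&gt;The plugin’s real schema definition is a JSON document (see its README for the exact format). Purely to illustrate the validate-then-route idea, here is a standalone sketch of the kind of per-row check it performs:&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-python"&gt;# Standalone sketch of the validate-then-route pattern; the Schema Validator
# Plugin's real schema format and checks are richer than this.
SCHEMA = {
    "sensor_data": {
        "required_tags": {"site", "device"},
        "required_fields": {"temperature": float, "humidity": float},
    }
}

def validate_row(table, tags, fields):
    """Return (ok, reason); ok is True only when the row matches the schema."""
    spec = SCHEMA.get(table)
    if spec is None:
        return False, f"table {table!r} not allowed"
    missing_tags = spec["required_tags"] - set(tags)
    if missing_tags:
        return False, f"missing tags: {sorted(missing_tags)}"
    for name, expected_type in spec["required_fields"].items():
        if not isinstance(fields.get(name), expected_type):
            return False, f"field {name!r} missing or wrong type"
    return True, ""

# A passing row would be written to the clean table; a failing row would be
# rejected and, optionally, logged for inspection.
print(validate_row("sensor_data",
                   {"site": "plant-1", "device": "pump-12"},
                   {"temperature": 71.3, "humidity": 40.2}))
&lt;/code&gt;&lt;/pre&gt;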

&lt;h4 id="processing-engine-general-improvements"&gt;Processing Engine general improvements&lt;/h4&gt;

&lt;p&gt;Alongside the new plugins, we’ve made several improvements to the Processing Engine itself that give plugin authors more control over write behavior, throughput, and concurrency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Synchronous writes with durability control&lt;/strong&gt;. New synchronous write functions let you choose between two modes: wait for the write to persist to the WAL before returning (for cases where you need to query the data immediately after writing), or return immediately for maximum throughput. This means you can treat bulk telemetry data as a fast path while ensuring that coordination states, such as job checkpoints or configuration flags, are immediately durable and queryable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Batch writes&lt;/strong&gt;. If your plugin writes thousands of points, the overhead isn’t in the data itself; it’s in the repeated write calls. The new batch write capability lets you group many records into a single write operation, which can dramatically improve throughput and make memory usage more predictable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Asynchronous request handling&lt;/strong&gt;. Request-based triggers now support concurrent execution. Previously, request handlers processed one request at a time, which meant a slow request would block everything behind it. With asynchronous mode enabled, the engine can handle multiple requests concurrently, so status checks, control commands, and other lightweight requests stay responsive even while a heavy operation is running.&lt;/p&gt;

&lt;p&gt;These improvements work together in practice. The Import Plugin, for example, uses batch writes with fast-path durability for bulk data transfer, synchronous durable writes for checkpoints and state, and async request handling to keep its pause/resume/status endpoints responsive during long-running imports.&lt;/p&gt;
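&lt;p&gt;To show how that combination might look inside a plugin, here is a sketch of the fast-path-plus-checkpoint pattern. The helper names used below (write_batch, write_sync) are placeholders rather than the engine’s actual method names; check the Processing Engine docs for the real write API in your version:&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-python"&gt;# Pattern sketch only: write_batch and write_sync are placeholder names,
# not the engine's actual API. See the Processing Engine docs for the
# real synchronous and batch write functions.
def process_scheduled_call(influxdb3_local, call_time, args=None):
    # Pretend this chunk of bulk records came from an upstream source.
    rows = [{"sensor": f"s{i}", "value": float(i)} for i in range(1000)]

    # Fast path: group the bulk telemetry into one batched write and
    # return without waiting for it to persist to the WAL.
    influxdb3_local.write_batch(rows, sync=False)           # placeholder call

    # Durable path: the checkpoint must be queryable immediately, so wait
    # for WAL persistence before reporting this chunk as complete.
    checkpoint = LineBuilder("job_checkpoint")
    checkpoint.int64_field("rows_written", len(rows))
    influxdb3_local.write_sync(checkpoint)                   # placeholder call
&lt;/code&gt;&lt;/pre&gt;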

&lt;h2 id="easier-plugin-configuration-in-explorer"&gt;Easier plugin configuration in Explorer&lt;/h2&gt;

&lt;p&gt;We’ve also been improving InfluxDB 3 Explorer to make configuring plugins simpler, especially for the plugins in the library.&lt;/p&gt;

&lt;p&gt;Until now, configuring a plugin meant passing all the right parameters as startup arguments to the Python script or specifying them in a TOML file. That works, but it requires you to know exactly which parameters a plugin expects—which means reading the documentation first.&lt;/p&gt;

&lt;p&gt;We’re adding dedicated UI configuration forms for some of the plugins in Explorer. Instead of assembling a string of key-value pairs, you’ll see a form with all the available options laid out, along with descriptions and example values. Required fields are clearly marked, and the form handles the formatting for you. It’s the same configuration under the hood, just a much more approachable way to get there.&lt;/p&gt;

&lt;p&gt;This is especially helpful for plugins with more involved configuration, like the data subscription plugins, where you’re specifying broker connections, authentication, message format mappings, and field type definitions. The form-based approach removes the guesswork and lets you get a plugin running without bouncing back and forth between the docs and your terminal. So far, we have built dedicated configuration forms for the Import, Basic Transformation, and Downsampling plugins.&lt;/p&gt;

&lt;p&gt;This is what it looks like for the Import plugin:&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/3AOZLptneTTvDTFPs5CNvK/e0e621644c7c402fde86b32595b0715e/Screenshot_2026-04-07_at_9.15.20â__AM.png" alt="Import plugin SS" /&gt;&lt;/p&gt;

&lt;p&gt;This is what the Basic Transformation and Downsample configuration looks like:&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/3OMYWwTYij5hcV5B1C1Api/f79bd5d69024c0d14ff90e39dd3b0b26/Screenshot_2026-04-07_at_9.16.23â__AM.png" alt="Basic Transformation SS" /&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/2vtmZDWXRcuTyY4odVQWZ6/d33e5aad87c3147e1fa12bf1b41f3150/Screenshot_2026-04-07_at_9.17.13â__AM.png" alt="Downsample SS" /&gt;&lt;/p&gt;

&lt;p&gt;Look for these to become available in Explorer in the next couple of months.&lt;/p&gt;

&lt;h2 id="whats-next"&gt;What’s next&lt;/h2&gt;

&lt;p&gt;We are continuing to improve the Processing Engine and the Plugin Library, with additional anomaly detection and forecasting plugins nearly ready for you to try. We are also building UI configuration forms for the data subscription plugins mentioned above to make them even easier to configure.&lt;/p&gt;

&lt;h2 id="try-them-out"&gt;Try them out&lt;/h2&gt;

&lt;p&gt;All new plugins are now available in beta in the &lt;a href="https://www.influxdata.com/products/processing-engine-plugins/?utm_source=website&amp;amp;utm_medium=influxdb_3_processing-engine-updates&amp;amp;utm_content=blog"&gt;InfluxDB 3 Plugin Library&lt;/a&gt;. They require InfluxDB 3 v3.8.2 or later. Install them from the CLI using the gh: prefix, or browse and install them directly from InfluxDB 3 Explorer’s Plugin Library.&lt;/p&gt;

&lt;p&gt;We’re releasing these as a beta because we want your feedback. We’ve tested them thoroughly internally, but real-world environments are always more diverse and more demanding than any test suite. If you run into issues, have ideas for improvements, or build something cool on top of these plugins, we’d love to hear from you: drop into the &lt;a href="https://discord.com/invite/influxdata"&gt;InfluxData Discord&lt;/a&gt;, post on the &lt;a href="https://community.influxdata.com/"&gt;Community Forums&lt;/a&gt;, or open an issue on &lt;a href="https://github.com/influxdata/influxdb3_plugins/issues"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;
</description>
      <pubDate>Tue, 07 Apr 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/influxdb-3-processing-engine-updates/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/influxdb-3-processing-engine-updates/</guid>
      <category>Developer</category>
      <category>Product</category>
      <author>Gary Fowler (InfluxData)</author>
    </item>
  </channel>
</rss>
