<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>InfluxData Blog - Product</title>
    <description>Posts from the Product category on the InfluxData Blog</description>
    <link>https://www.influxdata.com/blog/category/tech/influxdb/</link>
    <language>en-us</language>
    <lastBuildDate>Mon, 20 Apr 2026 00:00:00 +0000</lastBuildDate>
    <pubDate>Mon, 20 Apr 2026 00:00:00 +0000</pubDate>
    <ttl>1800</ttl>
    <item>
      <title>From Edge to Enterprise: How Litmus and InfluxDB Are Modernizing the Industrial Data Stack</title>
      <description>&lt;p&gt;Today at Hannover Messe, InfluxData is announcing a strategic partnership with Litmus to address one of the most persistent challenges in industrial data: &lt;strong&gt;getting reliable, contextualized telemetry from the shop floor into production systems&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Litmus bridges the gap between OT systems and modern IT infrastructure, while InfluxDB serves as the industrial data hub, giving organizations both real-time operational visibility and enterprise-scale historical analysis in a unified architecture.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/ZK8Y3Nel8ihgcMLPyAleL/171b1f00ed9918d40f48afdab4c87199/Screenshot_2026-04-17_at_2.00.54â__PM.png" alt="Influx + Litmus logo" /&gt;&lt;/p&gt;

&lt;p&gt;By integrating &lt;a href="https://litmus.io/litmus-edge"&gt;Litmus Edge&lt;/a&gt; with &lt;a href="https://www.influxdata.com/products/influxdb3-enterprise/?utm_source=website&amp;amp;utm_medium=litmus_and_influxdata_partnership&amp;amp;utm_content=blog"&gt;InfluxDB 3 Enterprise&lt;/a&gt;, teams can collect and contextualize data at the source, then write it into a time series engine built for high-resolution data. Litmus handles connectivity and data normalization at the edge. InfluxDB provides high-throughput ingestion, real-time querying, and cost-efficient long-term storage, deployable at the edge, in the enterprise layer, or both.&lt;/p&gt;

&lt;p&gt;The result is a system that captures every signal, retains its context, and makes it immediately usable.&lt;/p&gt;

&lt;h2 id="the-industrial-data-problem"&gt;The industrial data problem&lt;/h2&gt;

&lt;p&gt;Something has shifted in industrial sectors. Modernization is no longer a roadmap item, and it’s starting to hit real constraints. The pull: industrial AI initiatives, predictive maintenance, cross-site analytics, and digital twins all offer attractive value propositions. The push: legacy data historians are buckling under the demands of modern industrial operations, and the cost of extending them is becoming harder to justify.&lt;/p&gt;

&lt;p&gt;OT environments are notoriously fragmented. PLCs, CNCs, SCADA systems, and sensors operate across different protocols, vendors, and network boundaries. Getting that data into a usable, consistent format still requires heavy integration, time, and cost.&lt;/p&gt;

&lt;p&gt;Traditional historians made progress on the industrial data problem, but they weren’t built for what comes next. They struggle to preserve context across systems, degrade under high-frequency ingest and query load, and make cross-site analysis slow and expensive. This forces teams into trade-offs between fidelity, scale, and cost.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;That’s the core issue: the value of industrial data is in its resolution and context. Most systems weren’t designed to retain either at scale.&lt;/strong&gt;&lt;/p&gt;

&lt;h2 id="how-litmus-and-influxdb-work-together"&gt;How Litmus and InfluxDB work together&lt;/h2&gt;

&lt;p&gt;To move forward, teams need an architecture built for how industrial data actually behaves: high-frequency, distributed, and context-dependent. Litmus Edge and InfluxDB 3 Enterprise provide that foundation by collecting and structuring data at the edge, then making it available centrally without losing resolution or context.&lt;/p&gt;

&lt;p&gt;Here’s how that looks in practice:&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/5OMDcrZFgEbU1ZBcZ8Uy8G/870217aff5fd191fde503594b80db336/Screenshot_2026-04-17_at_2.03.15â__PM.png" alt="Litmus + IDB architecture" /&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;250+ prebuilt industrial connectors&lt;/strong&gt;. Out-of-the-box connectivity to industrial data sources, including legacy systems and proprietary protocols. No custom integration required.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Collect and contextualize at scale&lt;/strong&gt;. Normalize and contextualize telemetry from the source, with unlimited cardinality that preserves full context without compromising query performance.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Centralized data, not silos&lt;/strong&gt;. Bring telemetry from tools, teams, and sites into a single architecture, from single-site monitoring to cross-plant analytics, without a data consolidation project.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Buffered, store-and-forward data transfer&lt;/strong&gt;. Buffer and transmit data from remote sites with intermittent connectivity, with no loss or manual recovery.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Retain more, spend less&lt;/strong&gt;. Keep high-resolution data accessible long-term with object storage, without driving up storage costs as you scale.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/7fPG6jqxIE4VktLXwV8SbR/4520cfd13bd2e3f1b503de0ef732f5ea/Screenshot_2026-04-17_at_2.04.58â__PM.png" alt="Litmus quote 1" /&gt;&lt;/p&gt;

&lt;h2 id="the-edge-collect-contextualize-buffer"&gt;The edge: collect, contextualize, buffer&lt;/h2&gt;

&lt;p&gt;Litmus Edge acts as the intelligence layer between your machines and the rest of your data architecture. With 250+ native connectors spanning OPC-UA, Modbus, MQTT, FANUC, Siemens S7, and more, it connects directly to industrial sources (PLCs, CNCs, DCS, SCADA systems, sensors, and beyond) without custom integration.&lt;/p&gt;

&lt;p&gt;But connectivity alone isn’t enough. Raw signals without context aren’t useful. Litmus Edge tags, enriches, and structures data at the point of collection so a temperature reading is tied to an asset, production line, facility, and product run. By the time it leaves the edge, it’s already queryable.&lt;/p&gt;
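
&lt;p&gt;To make that concrete, here’s a minimal sketch of what a contextualized reading can look like once it lands in InfluxDB 3, written with the influxdb3-python client standing in for whatever component performs the write in practice. The host, token, database, and tag and field names are illustrative assumptions, not a required schema.&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-python"&gt;# Minimal sketch: one contextualized reading written to InfluxDB 3.
# Assumes the influxdb3-python client (pip install influxdb3-python);
# the host, token, database, tag names, and field names are illustrative.
from influxdb_client_3 import InfluxDBClient3, Point

client = InfluxDBClient3(
    host="http://localhost:8181",      # your InfluxDB 3 instance
    token="my-token",                  # a token with write access
    database="plant_telemetry",        # target database
)

# Context travels with the value as tags, so the reading stays queryable
# by asset, line, facility, and product run.
point = (
    Point("machine_temperature")
    .tag("asset", "press_07")
    .tag("production_line", "line_3")
    .tag("facility", "hannover_plant")
    .tag("product_run", "run_2026_04_17_a")
    .field("temperature_c", 72.4)
)

client.write(point)
&lt;/code&gt;&lt;/pre&gt;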

&lt;h2 id="the-industrial-data-hub-centralize-scale-retain"&gt;The industrial data hub: Centralize, scale, retain&lt;/h2&gt;

&lt;p&gt;InfluxDB 3 serves as the system of record for industrial time series data, whether deployed at the edge, centralized in the enterprise layer, or both.&lt;/p&gt;

&lt;p&gt;At the site level, InfluxDB runs locally alongside Litmus Edge, ingesting full-resolution telemetry and serving low-latency queries for real-time operations. It operates autonomously, so if connectivity to the central hub is interrupted, data is buffered locally and automatically forwarded when the connection is restored. There’s no data loss or manual intervention.&lt;/p&gt;

&lt;p&gt;At the enterprise level, a centralized InfluxDB cluster aggregates data from every site into a single query layer across assets, plants, and time horizons. This creates a consistent, high-resolution data layer that can be used across operations, analytics, and industrial AI.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/27iTqGpIQNfbNF1D1C9PUU/b6a34c5dc5099af641a34a9f803cf32f/Screenshot_2026-04-17_at_2.05.49â__PM.png" alt="Litmus quote 2" /&gt;&lt;/p&gt;

&lt;h2 id="the-bridge-to-higher-level-analytics"&gt;The bridge to higher-level analytics&lt;/h2&gt;

&lt;p&gt;With high-resolution, contextualized data available across systems, teams can move beyond basic monitoring. Predictive maintenance, anomaly detection, and cross-site analytics all depend on full-fidelity data. Industrial AI at the edge depends on low-latency access to it. Without that foundation, these systems don’t operate reliably. That’s what this architecture enables.&lt;/p&gt;

&lt;h2 id="get-started"&gt;Get started&lt;/h2&gt;

&lt;p&gt;Whether you’re starting a greenfield initiative or hitting the limits of your current industrial data infrastructure, we’d love to talk.&lt;/p&gt;

&lt;p&gt;Reach out to &lt;a href="https://www.influxdata.com/contact-sales/"&gt;connect to an expert&lt;/a&gt; or join the conversation in the &lt;a href="https://community.influxdata.com/"&gt;InfluxData Community Forums&lt;/a&gt; where our team and broader community are active.&lt;/p&gt;

&lt;p&gt;If you’re attending Hannover Messe, &lt;a href="https://www.influxdata.com/event/meet-influxdb-at-hannover-messe-2026/?utm_source=website&amp;amp;utm_medium=litmus_and_influxdata_partnership&amp;amp;utm_content=blog"&gt;come find me at the Litmus booth&lt;/a&gt; (Stand A09 in Hall 16) and see the architecture running end-to-end.&lt;/p&gt;
</description>
      <pubDate>Mon, 20 Apr 2026 00:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/litmus-and-influxdata-partnership/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/litmus-and-influxdata-partnership/</guid>
      <category>Company</category>
      <category>Product</category>
      <author>Ben Corbett (InfluxData)</author>
    </item>
    <item>
      <title>From Edge to Cloud: How Litmus Edge and InfluxDB Unlock Industrial Intelligence at Hannover Messe</title>
      <description>
&lt;p&gt;If you’ve spent time in industrial environments, you know the problem isn’t a lack of data. It’s collecting it reliably, contextualizing it, and storing it at scale. Most stacks weren’t built to fight all three battles.&lt;/p&gt;

&lt;h2 id="the-industrial-data-problem"&gt;The industrial data problem&lt;/h2&gt;

&lt;p&gt;Industrial connectivity is no joke. OT environments are notoriously fragmented and siloed, spanning PLCs, CNCs, SCADA systems, and sensors, each speaking a different protocol, running on a different vendor’s stack, and operating in a network zone that was never designed to talk to anything outside the shop floor. Extracting value from that data has traditionally required heavy IT involvement, expensive integrations, and months of professional services work. The usual answer was a historian. Historians made progress on the access problem, giving individual sites a way to capture and store machine data. But standardizing that data across silos and contextualizing it across systems and plants is where they fall short. And unfortunately, that’s where most of the value lies.&lt;/p&gt;

&lt;p&gt;Once data is collected and contextualized, the next problem is keeping it useful at scale. This is more than a storage problem. Sustaining high-frequency ingest of contextualized telemetry and querying that data fast enough to act on it is where most systems break. Historians were not designed for this. They sacrifice resolution, degrade under query load, and make cross-site, cross-system analysis slow and impractical. The value in industrial data is in the detail, and most platforms are architected to throw this detail away.&lt;/p&gt;

&lt;h2 id="collect-contextualize-and-storeall-at-the-edge"&gt;Collect, contextualize, and store—all at the edge&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://litmus.io/litmus-edge"&gt;Litmus Edge&lt;/a&gt; acts as the intelligence layer between your machines and the rest of your data architecture. It connects natively to hundreds of industrial protocols, including OPC-UA, Modbus, MQTT, FANUC, Siemens S7, and many more, normalizing disparate machine data into a unified, consistent stream.&lt;/p&gt;

&lt;p&gt;But connectivity alone isn’t enough. Raw machine signals mean little without context. Litmus Edge allows operations teams to tag, enrich, and structure data at the point of collection. A temperature reading becomes tied to a specific asset, production line, facility, and product run. By the time data leaves the edge, it is no longer just a number. It is a meaningful, queryable event.&lt;/p&gt;

&lt;h2 id="scale-query-retain-your-industrial-data-hub"&gt;Scale, query, retain: your industrial data hub&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.influxdata.com/products/influxdb3-enterprise/?utm_source=website&amp;amp;utm_medium=litmus_edge_influxdb&amp;amp;utm_content=blog"&gt;InfluxDB 3&lt;/a&gt; becomes the system of record for your industrial time series data at the edge, in a centralized environment, or both.&lt;/p&gt;

&lt;p&gt;It ingests high-frequency telemetry at full resolution, serves low-latency queries for real-time operations, and scales to fleet-wide analysis across sites and time horizons without forcing tradeoffs between fidelity and cost. High cardinality isn’t a problem to design around. Long-term retention doesn’t require a cost penalty. The data stays detailed, queryable, and useful.&lt;/p&gt;

&lt;h2 id="scaling-across-lines-sites-and-the-enterprise"&gt;Scaling across lines, sites, and the enterprise&lt;/h2&gt;

&lt;p&gt;Scale changes what’s possible, but only if the data model scales with it. When every site collects and contextualizes data the same way, writing to a consistent schema, cross-site analysis becomes straightforward. Comparing performance across plants, identifying outliers, and correlating signals across a global fleet become simple queries instead of integration projects. That consistency is what the Litmus and InfluxDB architecture is designed to deliver.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;em&gt;Which production lines across all facilities are showing early indicators of equipment degradation?&lt;/em&gt;&lt;/li&gt;
  &lt;li&gt;&lt;em&gt;How does energy consumption per unit compare across sites running similar processes?&lt;/em&gt;&lt;/li&gt;
  &lt;li&gt;&lt;em&gt;Where are the outliers? And what can the top performers teach the rest of the network?&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are not hypothetical future capabilities. They are available today to any organization willing to invest in getting the data foundation right.&lt;/p&gt;
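
&lt;p&gt;As a rough sketch of what “simple queries” means here: assuming every site writes energy and production counts to the same table with consistent site and process tags, the second question above reduces to a single SQL statement. The table and column names below are illustrative assumptions, not a required schema.&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-python"&gt;# Rough sketch of a cross-site comparison, assuming every site writes to
# the same table with consistent "site" and "process" tags. Table and
# column names are illustrative, not a required schema.
from influxdb_client_3 import InfluxDBClient3

client = InfluxDBClient3(
    host="http://localhost:8181",
    token="my-token",
    database="plant_telemetry",
)

sql = """
    SELECT
        site,
        avg(energy_kwh / units_produced) AS energy_per_unit
    FROM production_metrics
    WHERE process = 'stamping'
      AND time BETWEEN now() - INTERVAL '7 days' AND now()
    GROUP BY site
    ORDER BY energy_per_unit
"""

# query() returns an Arrow table; pandas makes it easy to eyeball.
results = client.query(sql)
print(results.to_pandas())
&lt;/code&gt;&lt;/pre&gt;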

&lt;h2 id="the-bridge-to-higher-level-analytics"&gt;The bridge to higher-level analytics&lt;/h2&gt;

&lt;p&gt;InfluxDB doesn’t just store data well; it integrates cleanly with the ecosystem: the analytics, visualization, and AI/ML tooling your teams are already investing in. Grafana dashboards, anomaly detection workflows, and digital twin platforms connect through InfluxDB’s SQL-native interface and open APIs without custom pipelines or bespoke integration work.&lt;/p&gt;

&lt;p&gt;For OT teams, that’s the point. The edge handles the hard part—protocol translation, normalization, enrichment. InfluxDB centralizes the results into a single, interoperable data layer that every team can query with the tools they already use.&lt;/p&gt;

&lt;p&gt;The result is a data architecture that is genuinely interoperable; the plant floor and the enterprise layer are finally speaking the same language.&lt;/p&gt;

&lt;h2 id="extending-into-the-cloud-with-aws"&gt;Extending into the cloud with AWS&lt;/h2&gt;

&lt;p&gt;There are several ways to deploy InfluxDB as your industrial data hub: on-premises, at the edge, or in the cloud. For teams who want to go straight to the cloud, AWS is a natural fit. In this reference architecture, Litmus Edge writes contextualized telemetry directly into &lt;a href="https://www.influxdata.com/products/timestream-for-influxdb/?utm_source=website&amp;amp;utm_medium=litmus_edge_influxdb&amp;amp;utm_content=blog"&gt;Amazon Timestream for InfluxDB&lt;/a&gt;, creating a seamless path from the shop floor to cloud-scale analytics. This allows teams to centralize access, scale analytics, and integrate with the broader AWS ecosystem without rebuilding their infrastructure from scratch.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/7I05B89zisdmKtUk9EiUt6/e10ba53b117ae6b4c25dcfd791321705/image__6_.png" alt="Litmus Edge diagram" /&gt;
&lt;br /&gt;&lt;/p&gt;

&lt;p&gt;Once data is available in AWS, it opens up a broader set of capabilities. For example, as new data arrives, you can trigger serverless workflows with AWS Lambda, stream high-velocity data through Kinesis for downstream processing, or connect directly to SageMaker to train models on high-fidelity data, without reshaping or downsampling it first.&lt;/p&gt;

&lt;h2 id="what-were-showing-at-hannover-messe"&gt;What we’re showing at Hannover Messe&lt;/h2&gt;

&lt;p&gt;At Hannover Messe, you’ll be able to see this architecture running end-to-end:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;&lt;a href="https://litmus.io/hannover-messe-2026"&gt;Litmus booth&lt;/a&gt; (Hall 16, Stand A09)&lt;/strong&gt;: The full Digital Factory demo, showing how data flows from industrial systems into Litmus and into InfluxDB 3 Enterprise in real-time.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;&lt;a href="https://www.influxdata.com/event/meet-influxdb-at-hannover-messe-2026/?utm_source=website&amp;amp;utm_medium=litmus_edge_influxdb&amp;amp;utm_content=blog"&gt;InfluxData kiosk&lt;/a&gt; (within the Litmus booth)&lt;/strong&gt;: A deeper look at how InfluxDB handles high-frequency ingest, real-time querying, and efficient storage at massive scale.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;AWS booth (Litmus kiosk)&lt;/strong&gt;: The cloud extension of the demo, highlighting replication into Amazon Timestream for InfluxDB and integration with AWS services.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The InfluxData team (including myself) will be on-site at the Litmus booth throughout the event to walk through the architecture and discuss real-world deployment patterns.&lt;/p&gt;

&lt;p&gt;&lt;br /&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Post by Ben Corbett, InfluxData; Rajesh Gomatam, Ph.D. Principal Partner Solutions Architect - Manufacturing, AWS; and Benjamin Norman, Partner Solution Architect, Litmus&lt;/em&gt;&lt;/p&gt;
</description>
      <pubDate>Thu, 16 Apr 2026 06:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/litmus-edge-influxdb/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/litmus-edge-influxdb/</guid>
      <category>Demo</category>
      <category>Product</category>
      <category>Developer</category>
      <author>Ben Corbett (InfluxData)</author>
    </item>
    <item>
      <title>What’s New in InfluxDB 3 Explorer 1.7: Table Management, Data Import, Transforms, and More</title>
      <description>
&lt;p&gt;InfluxDB 3 Explorer 1.7 is a step forward for anyone who wants to manage their time series data without constantly switching between the UI and a terminal. This release adds table-level schema management, the ability to import data from other InfluxDB instances, and a new Transform Data section to reshape your data, all within the Explorer UI.&lt;/p&gt;

&lt;h2 id="table-management"&gt;Table management&lt;/h2&gt;

&lt;p&gt;Previously, if you wanted to see what tables existed inside a database, you had to query system tables or use the API. The new Manage Tables page changes that.
You can get there from the sidebar or from the new actions menu on any database in the Manage Databases page. That actions menu gives you quick access to query a database, view its tables, or delete it.&lt;/p&gt;

&lt;p&gt;The Manage Tables page lists every table in the selected database, along with its column count, type, and any configured &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/admin/distinct-value-cache/"&gt;Distinct Value&lt;/a&gt; or &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/admin/last-value-cache/"&gt;Last Value&lt;/a&gt; Caches. Use the toggle filters to show or hide system tables and deleted tables. Deleted tables show up with a “Pending Delete” badge when the Show Deleted Tables toggle is enabled, so you always have visibility into what’s been removed.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/6U2nqrukRwDJktsHPjiL91/4a8a861bf96b52061a6def8e23726593/Screenshot_2026-04-14_at_6.13.48â__PM.png" alt="Explorer 1.7 Manage Tables" /&gt;&lt;/p&gt;

&lt;p&gt;You can also &lt;strong&gt;create new tables&lt;/strong&gt; directly from this page. The Create Table dialog lets you define the schema up front: name, fields with data types, optional tags, and a retention period. This is useful when you want to control your schema explicitly rather than relying on &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/get-started/write/"&gt;schema-on-write&lt;/a&gt; to infer types from the first arriving data points.&lt;/p&gt;

&lt;p&gt;From any table’s action menu, you can jump straight to the Data Explorer with a pre-built query for that table.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/46bQpfsOyXjWem9M4125o7/73e9dcd0a33e3b11982d806d6d0f0504/Screenshot_2026-04-14_at_6.15.43â__PM.png" alt="1.7 Schema on Write" /&gt;&lt;/p&gt;

&lt;h2 id="import-from-influxdb"&gt;Import from InfluxDB&lt;/h2&gt;

&lt;p&gt;The next few features I’ll discuss are enhancements that make it much easier to work with the &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/plugins/"&gt;InfluxDB 3 Processing Engine&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Moving data between InfluxDB instances used to mean writing scripts, dealing with export formats, and coordinating tokens across environments. The new &lt;strong&gt;&lt;a href="https://github.com/influxdata/influxdb3_plugins/tree/main/influxdata/import"&gt;Import from InfluxDB&lt;/a&gt;&lt;/strong&gt; feature provides a guided workflow for migrating small-to-medium datasets from any existing InfluxDB v1, v2, or v3 instance (assuming v3 Schema compatibility) into your current InfluxDB 3 database.&lt;/p&gt;

&lt;p&gt;You’ll find it under the Write Data section, on both the Dev Data and Production Data pages. The workflow walks you through selecting a target database (or creating a new one), connecting to a source InfluxDB instance, authenticating, and then choosing which databases and tables to import.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/2krWp1AKKHN86ICg70mjBL/b22f50fdf84fb8cbe43bb1be4d3f747e/Screenshot_2026-04-14_at_6.17.45â__PM.png" alt="Writing Dev Data" /&gt;&lt;/p&gt;

&lt;p&gt;Before committing to the import, perform a &lt;strong&gt;dry run&lt;/strong&gt; that shows you exactly what will be transferred, including the source and destination, the number of tables, the estimated row count, and how long it should take. Advanced options let you tune the batch size and concurrency if you need to balance import speed against resource usage.&lt;/p&gt;

&lt;p&gt;Once you start the import, a live progress view shows you how far along things are, how many rows have been imported, and the current status of each table. When it finishes, a “Query this database” button takes you straight to the Data Explorer so you can verify everything landed correctly.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/1Ao5CzW0yXUYPijeK0k2Vu/44b63c64f71ccdd05a5fb3f74b048329/Screenshot_2026-04-14_at_6.19.20â__PM.png" alt="Write Data" /&gt;&lt;/p&gt;

&lt;p&gt;If you’re running an InfluxDB 1.x or 2.x instance and want to try InfluxDB 3 with your real data, this saves you from building a migration pipeline. Just point the import tool at your existing instance, pick the databases and time range you want, and the data flows over. It also works for consolidating data from multiple InfluxDB 3 instances into one place, or pulling production data into a dev environment for testing.&lt;/p&gt;

&lt;h2 id="transform-data"&gt;Transform data&lt;/h2&gt;

&lt;p&gt;The new &lt;strong&gt;Transform Data&lt;/strong&gt; section in the sidebar gives you a visual interface for setting up data transformations that run automatically on ingestion via the Processing Engine. Under the hood, these are powered by the &lt;a href="https://github.com/influxdata/influxdb3_plugins/tree/main/influxdata/basic_transformation"&gt;Basic Transformation Processing Engine plugin&lt;/a&gt;, but you don’t need to write any plugin configuration by hand. The UI handles that for you.&lt;/p&gt;

&lt;p&gt;The way it works: when data is written to a source table, the transformation runs automatically and writes the results to a target database or table. You can set a short &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/admin/databases/#table-retention-period"&gt;retention period&lt;/a&gt; on the source data (say, one day) so the raw data cleans itself up, and the transformed data lives on in the destination. There are four types of transformations available.&lt;/p&gt;

&lt;h4 id="rename-table"&gt;Rename Table&lt;/h4&gt;

&lt;p&gt;Rename Table lets you route data arriving in one table to another table. This is handy when you’re consuming data from a source you don’t control, and the table names don’t match your naming conventions.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/5BiXqB4Q9BDHEFsOv8QtaW/c56cd9fe61d7ca91c1dcc37385bf6656/Screenshot_2026-04-14_at_6.24.41â__PM.png" alt="rename table" /&gt;&lt;/p&gt;

&lt;h4 id="rename-columns"&gt;Rename Columns&lt;/h4&gt;

&lt;p&gt;Rename Columns works similarly, but at the column level. You pick a source table and select which columns to rename. If you’re integrating data from different systems that use different naming conventions (for example, &lt;code class="language-markup"&gt;temp_f&lt;/code&gt; vs &lt;code class="language-markup"&gt;temperature_fahrenheit&lt;/code&gt;), this standardizes everything without touching the source.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/3hF8Wa6vbro73j1A2O3f6W/cae32a0cfe6a43949f5b64b09a7338c2/Screenshot_2026-04-14_at_6.27.58â__PM.png" alt="rename columns" /&gt;&lt;/p&gt;

&lt;h4 id="transform-values"&gt;Transform Values&lt;/h4&gt;

&lt;p&gt;Transform Values lets you apply calculations or conversions to field values as they come in. You can do math operations, string transformations, unit conversions, or simple find-and-replace. If your sensors report temperature in Celsius but your dashboards expect Fahrenheit, this handles the conversion at ingestion time so your queries stay clean.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/2rTFmTLs7vQ2Z5LPUDHzTx/e10529f9e3eb69f7a8e251956a9acff4/Screenshot_2026-04-14_at_6.29.13â__PM.png" alt="transform values" /&gt;&lt;/p&gt;

&lt;h4 id="filter-data"&gt;Filter Data&lt;/h4&gt;

&lt;p&gt;Filter Data lets you keep only the rows or columns that match specific conditions. You can filter by rows (e.g., only keep data where &lt;code class="language-markup"&gt;crop_type = 'carrots'&lt;/code&gt;) or by columns (drop fields you don’t need). This is useful when you’re receiving more data than you actually want to store. For example, a third-party feed might send 50 fields when you only care about 5.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/4mTxJgxUUyEZH7RSbRXRet/c67d429d6e87d4bfdb0b90c29e9cbbbc/Screenshot_2026-04-14_at_6.30.22â__PM.png" alt="create transform" /&gt;&lt;/p&gt;

&lt;p&gt;You can test each transformation before deployment, and once deployed, monitor its status (running, stopped, errors) from the Transform Data dashboard.&lt;/p&gt;

&lt;h4 id="downsample-data"&gt;Downsample Data&lt;/h4&gt;

&lt;p&gt;Downsampling is a classic time series operation: take high-frequency data and roll it up into lower-frequency summaries to save storage and speed up queries over long time ranges. The new &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/plugins/library/official/downsampler/"&gt;&lt;strong&gt;Downsample&lt;/strong&gt;&lt;/a&gt; page, also under the Transform Data section, makes this easy to set up.
You create a downsample trigger by specifying a source table, a target table, a schedule (how often the aggregation runs), a time window (how far back to look), an aggregation interval (the bucket size), and an aggregation function (avg, sum, min, max, etc.). You can also choose to include or exclude specific fields.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/7yPPBCTavele7EaFCLvIsa/156aa1c09f6bbb88b37ff14f425ce995/Screenshot_2026-04-14_at_6.31.40â__PM.png" alt="downsample" /&gt;&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://github.com/influxdata/influxdb3_plugins/tree/main/influxdata/downsampler/"&gt;Downsample Processing Engine plugin&lt;/a&gt; powers this feature.&lt;/p&gt;
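
&lt;p&gt;Conceptually, each run of a downsample trigger computes an aggregation like the one below and writes the results to the target table for you. This sketch only shows the equivalent query; the table, tag, and field names are illustrative assumptions.&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-python"&gt;# Conceptual sketch of what one downsampling run computes: 10-minute
# averages over the last hour of a high-frequency table. The trigger
# handles the scheduling and writes the results to the target table;
# the table, tag, and field names here are illustrative.
from influxdb_client_3 import InfluxDBClient3

client = InfluxDBClient3(
    host="http://localhost:8181",
    token="my-token",
    database="sensors",
)

sql = """
    SELECT
        date_bin(INTERVAL '10 minutes', time) AS time,
        sensor_id,
        avg(temperature) AS temperature_avg
    FROM raw_readings
    WHERE time BETWEEN now() - INTERVAL '1 hour' AND now()
    GROUP BY date_bin(INTERVAL '10 minutes', time), sensor_id
"""

print(client.query(sql).to_pandas())
&lt;/code&gt;&lt;/pre&gt;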

&lt;h2 id="get-started"&gt;Get started&lt;/h2&gt;

&lt;p&gt;All of these features are available now in InfluxDB 3 Explorer 1.7. For more on these Processing Engine capabilities, see &lt;a href="https://www.influxdata.com/blog/influxdb-3-processing-engine-updates/?utm_source=website&amp;amp;utm_medium=influxdb_explorer_1_7&amp;amp;utm_content=blog"&gt;InfluxDB 3 Processing Engine Updates&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you’re running &lt;a href="https://docs.influxdata.com/influxdb3/core/install/?utm_source=website&amp;amp;utm_medium=influxdb_explorer_1_7&amp;amp;utm_content=blog"&gt;InfluxDB 3 Core&lt;/a&gt; or &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/install/?utm_source=website&amp;amp;utm_medium=influxdb_explorer_1_7&amp;amp;utm_content=blog"&gt;Enterprise&lt;/a&gt;, update to the latest version to try them out. To learn more, check out the &lt;a href="https://docs.influxdata.com/influxdb3/explorer/?utm_source=website&amp;amp;utm_medium=influxdb_explorer_1_7&amp;amp;utm_content=blog"&gt;InfluxDB 3 Explorer documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To update InfluxDB 3 Explorer, pull the latest Docker image:
&lt;code class="language-markup"&gt;docker pull influxdata/influxdb3-ui&lt;/code&gt;&lt;/p&gt;
</description>
      <pubDate>Wed, 15 Apr 2026 05:30:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/influxdb-explorer-1-7/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/influxdb-explorer-1-7/</guid>
      <category>Product</category>
      <category>Developer</category>
      <author>Daniel Campbell (InfluxData)</author>
    </item>
    <item>
      <title>Less Friction, More Control: Here's What Shipped in Q1</title>
      <description>&lt;p&gt;Our Q1 work has been focused on a simple goal: making InfluxDB easier to operate, easier to scale, and faster to put to work.&lt;/p&gt;

&lt;p&gt;Across Telegraf, InfluxDB 3, and our managed offerings, these updates reduce friction in how teams collect, process, and scale time series workloads.&lt;/p&gt;

&lt;h2 id="telegraf-controller-enters-beta"&gt;Telegraf Controller enters beta&lt;/h2&gt;

&lt;p&gt;Telegraf is already a powerful way to collect metrics, logs, and events across environments. At scale, the challenge shifts from collection to control. Telegraf Enterprise is designed to solve that problem.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;At the center is Telegraf Controller, a control plane that gives teams centralized configuration management and fleet-wide health visibility&lt;/strong&gt;. The beta includes major capabilities such as API authentication, API token management, user account management, multi-user support, role-based access control, global settings management, and expanded plugin support in the visual config builder.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Feedback from early users is shaping the road to general availability, with enterprise licensing, enforcement, audit logging, and federated identity management next on the roadmap.&lt;/strong&gt; &lt;a href="https://www.influxdata.com/products/telegraf-enterprise/?utm_source=website&amp;amp;utm_medium=q1_product_recap_2026&amp;amp;utm_content=blog"&gt;Sign up to join the beta&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/2C5Q22cX3rXamZNOqVDPIF/a46fed22b3ff4f33e7552dddcddc8796/Screenshot_2026-04-07_at_5.41.54â__PM.png" alt="Telegraf Agents SS" /&gt;&lt;/p&gt;

&lt;h2 id="influxdb-39-adds-more-operational-control"&gt;InfluxDB 3.9 adds more operational control&lt;/h2&gt;

&lt;p&gt;Last week’s &lt;a href="https://www.influxdata.com/blog/influxdb-3-9/"&gt;release&lt;/a&gt; of &lt;strong&gt;InfluxDB 3.9 is focused on making the platform easier to run at scale, 
with improvements aimed at predictability, visibility, and day-to-day management&lt;/strong&gt;. The release expands CLI and automation support for headless environments, improves resource and lifecycle management, and adds clearer visibility into access control and product identity across Core and Enterprise deployments. These are the changes that matter in production: fewer rough edges, stronger operational clarity, and better control as workloads grow.&lt;/p&gt;

&lt;p&gt;InfluxDB 3.9 Enterprise also includes a new beta performance preview for non-production environments. &lt;strong&gt;This optional preview includes optimized single-series queries, reduced CPU and memory spikes under load, support for wider and sparser schemas, and early automatic distinct value caches to reduce metadata query latency&lt;/strong&gt;. These features are not yet recommended for production, but they give customers an early look at capabilities planned for future releases and a chance to help shape what comes next.&lt;/p&gt;

&lt;h2 id="processing-engine-updates-make-influxdb-3-easier-to-operationalize"&gt;Processing Engine updates make InfluxDB 3 easier to operationalize&lt;/h2&gt;

&lt;p&gt;The Processing Engine remains one of the most powerful parts of InfluxDB 3 because it allows teams to run logic directly at the database. Users can transform data on ingest, run scheduled jobs, or serve HTTP requests without adding external services or layering on more pipeline complexity.&lt;/p&gt;

&lt;p&gt;This quarter, we continued to expand both the engine itself and the plugin ecosystem around it. 
The latest plugins make it easier to get data into InfluxDB 3 from more sources:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;The Import Plugin&lt;/strong&gt; provides a simpler path for bringing data from InfluxDB v1, v2, or v3 into InfluxDB 3 Core and Enterprise, with support for dry runs, progress tracking, pause and resume, conflict handling, and flexible filtering.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;New MQTT, Kafka, and AMQP subscription plugins&lt;/strong&gt; help users ingest streaming data directly from external message brokers.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;The new OPC UA Plugin&lt;/strong&gt; gives industrial teams a more direct path to data from PLCs, SCADA systems, and other OPC UA-enabled equipment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We also made important improvements to the Processing Engine itself:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;New synchronous write controls give plugin authors more flexibility over durability and throughput.&lt;/li&gt;
  &lt;li&gt;Batch write support improves efficiency for high-volume workloads.&lt;/li&gt;
  &lt;li&gt;Asynchronous request handling keeps status checks and control operations responsive during long-running jobs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Together, these updates make the Processing Engine a more practical way to build and operate real-time data pipelines directly inside InfluxDB 3. &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/plugins/"&gt;Check out our docs to learn more&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id="better-visibility-for-cloud-dedicated-customers"&gt;Better visibility for Cloud Dedicated customers&lt;/h2&gt;

&lt;p&gt;As teams run production workloads on Cloud Dedicated, understanding how the system is being used becomes just as important as performance itself.&lt;/p&gt;

&lt;p&gt;This quarter, we introduced:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Query History (GA)&lt;/strong&gt; for troubleshooting, performance analysis, and deeper insight into query activity.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;S3 API dashboards (Tier 1 and Tier 2)&lt;/strong&gt;, including monthly usage visibility.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These updates give teams better visibility into system behavior and usage patterns, along with a faster path to understanding activity across the environment. &lt;a href="https://docs.influxdata.com/influxdb3/cloud-dedicated/query-data/"&gt;Detailed docs here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/6NxMXhxR3dvcUzNXa83cwN/5fa53025e47b947a57b55675b37d11c1/Screenshot_2026-04-07_at_5.45.32â__PM.png" alt="Q1 update SS" /&gt;&lt;/p&gt;

&lt;h2 id="influxdb-enterprise-1123-delivers-efficiency-gains-for-v1-environments"&gt;InfluxDB Enterprise 1.12.3 delivers efficiency gains for v1 environments&lt;/h2&gt;

&lt;p&gt;For teams needing more performance and running large-scale v1 Enterprise environments, InfluxDB Enterprise 1.12.3 is now available with substantial improvements in efficiency and reliability:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;100x faster retention enforcement for high-cardinality datasets&lt;/li&gt;
  &lt;li&gt;30% lower CPU usage during compaction&lt;/li&gt;
  &lt;li&gt;5x faster backups with configurable compression&lt;/li&gt;
  &lt;li&gt;3x less disk I/O during cold shard compactions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These improvements make Enterprise v1 clusters more efficient, more predictable under load, and more cost-effective to operate. &lt;a href="https://docs.influxdata.com/enterprise_influxdb/v1/about_the_project/release-notes/"&gt;Read the release notes&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id="amazon-timestream-for-influxdb-adds-a-new-scale-tier-and-simple-upgrade-path"&gt;Amazon Timestream for InfluxDB adds a new scale tier and simple upgrade path&lt;/h2&gt;

&lt;p&gt;InfluxDB 3 on Amazon Timestream for InfluxDB now supports clusters of up to 15 nodes, giving customers a new scale tier for more demanding real-time workloads.&lt;/p&gt;

&lt;p&gt;This expanded tier improves query concurrency, increases ingestion throughput, and provides stronger workload isolation across ingestion, queries, and compaction. For teams running high-velocity, high-resolution data in production, that means more headroom to scale without compromising real-time performance.&lt;/p&gt;

&lt;p&gt;Customers can also seamlessly migrate from InfluxDB 3 Core to InfluxDB 3 Enterprise, making it easier to move into this higher-performance tier without a manual architectural overhaul or data loss. The new 15-node option is available for InfluxDB 3 Enterprise in all AWS regions where Amazon Timestream for InfluxDB is offered. &lt;a href="https://www.influxdata.com/blog/scaling-amazon-timestream-influxdb/"&gt;Read more here&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id="looking-ahead"&gt;Looking ahead&lt;/h2&gt;

&lt;p&gt;Taken together, these updates are about helping teams do more with less friction: move data faster, operate with more confidence, and scale time series workloads without losing control.
As operational data becomes more central to modern systems, we are continuing to invest in the infrastructure that turns that data into action across edge, cloud, and distributed environments.&lt;/p&gt;
</description>
      <pubDate>Wed, 08 Apr 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/q1-product-recap-2026/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/q1-product-recap-2026/</guid>
      <category>Product</category>
      <category>Developer</category>
      <author>Ryan Nelson (InfluxData)</author>
    </item>
    <item>
      <title>New Plugins, Faster Writes, and Easier Configuration: What’s New with the InfluxDB 3 Processing Engine</title>
      <description>&lt;p&gt;The Processing Engine is one of the most powerful features in InfluxDB 3. It lets you run Python code at the database—transforming data on ingest, running scheduled jobs, or serving HTTP requests—without spinning up external services or building middleware. You define the logic, attach it to a trigger, and the database handles the rest.&lt;/p&gt;

&lt;p&gt;Since launching the Processing Engine, we’ve been building out both the engine itself and the ecosystem of plugins that run on it. Today, we want to walk you through some exciting recent additions: new plugins for data ingestion, import, and validation; some general improvements to the engine; and a better configuration experience using InfluxDB 3 Explorer.&lt;/p&gt;

&lt;h2 id="a-quick-refresher-processing-engine-plugins"&gt;A quick refresher: Processing Engine plugins&lt;/h2&gt;

&lt;p&gt;If you’re already familiar with the Processing Engine, feel free to skip ahead. For those newer to the concept, here’s the short version.&lt;/p&gt;

&lt;p&gt;A plugin is a Python script that runs inside InfluxDB 3 in response to a trigger. There are three trigger types: data writes (react to incoming data as it’s written), scheduled events (run on a timer or cron expression), and HTTP requests (expose a custom API endpoint). Plugins have direct access to the database: they can query and write without shipping data to and from another machine. Plugins can also talk to other systems, letting you pull in data from outside sources.&lt;/p&gt;

&lt;p&gt;You can write your own plugins from scratch to solve problems specific to your environment. That’s the whole point of embedding Python in the database: your logic, your rules, running right next to your data.&lt;/p&gt;
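
&lt;p&gt;To make the shape of a plugin concrete, here’s a minimal sketch of a data-write plugin. It follows the trigger signature described in the Processing Engine documentation, but treat the details (the batch field names, the LineBuilder helper the engine injects) as things to verify against the current docs rather than a definitive implementation.&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-python"&gt;# Minimal sketch of a data-write plugin. The engine calls process_writes()
# for each batch of incoming data and injects the influxdb3_local API and
# the LineBuilder helper. Verify the exact argument and batch field names
# against the current Processing Engine docs; the "ingest_summary" table
# written below is illustrative.

def process_writes(influxdb3_local, table_batches, args=None):
    for table_batch in table_batches:
        table_name = table_batch["table_name"]
        row_count = len(table_batch["rows"])

        # Log what arrived via the engine's logging API.
        influxdb3_local.info(f"received {row_count} rows for table {table_name}")

        # Write a small summary point back to the database.
        line = LineBuilder("ingest_summary")
        line.tag("source_table", table_name)
        line.int64_field("row_count", row_count)
        influxdb3_local.write(line)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Attach a script like this to a data-write trigger and the engine runs it automatically as new data arrives; the same pattern extends to the scheduled and HTTP trigger types.&lt;/p&gt;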

&lt;p&gt;But we also know that not everyone wants to start from a blank page. That’s why we maintain an &lt;a href="https://github.com/influxdata/influxdb3_plugins"&gt;official plugin library&lt;/a&gt; with production-ready plugins for common time series tasks, such as downsampling, anomaly detection, forecasting, state change monitoring, and sending notifications to Slack, email, or SMS.&lt;/p&gt;

&lt;p&gt;These official plugins are designed to work in two ways. You can install them and use them as-is, configuring them through trigger arguments or TOML files to fit your setup. Or you can treat them as templates: fork one, customize the logic, and build something tailored to your exact workflow. Either way, they’re meant to get you moving faster.&lt;/p&gt;

&lt;p&gt;One more thing worth mentioning: if you’re thinking about building a custom plugin but aren’t sure where to start, AI tools like Claude can be very effective. Point Claude to the &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/plugins/"&gt;Processing Engine documentation&lt;/a&gt; and the &lt;a href="https://github.com/influxdata/influxdb3_plugins"&gt;plugin library repo&lt;/a&gt; for examples, describe what you want your plugin to do, and let it generate a first draft. We’ve seen simple plugins created in a single shot, from description to working code, and even more complex plugins come together quickly when the AI has good examples to work from. It’s a great way to get past the blank-page problem and into something you can iterate on.&lt;/p&gt;

&lt;h2 id="new-plugins-data-ingestion-import-and-validation"&gt;New plugins: data ingestion, import, and validation&lt;/h2&gt;

&lt;p&gt;We’ve recently added several new plugins to the library that address some of the most common requests we’ve been hearing from the community. These are available now in beta—they’re fully functional, but we want to see them tested across more environments before we call them production-ready. Give them a try and let us know how they work for you.&lt;/p&gt;

&lt;h4 id="influxdb-import-plugin"&gt;InfluxDB Import Plugin&lt;/h4&gt;

&lt;p&gt;If you’re running an older version of InfluxDB and want to bring your data into InfluxDB 3, the new Import Plugin makes that significantly easier. It supports importing from InfluxDB v1, v2, or v3 instances over HTTP, with features you’d expect from a serious import tool: automatic data sampling for optimal batch sizing, pause/resume for long-running imports, progress tracking, tag/field conflict detection and resolution, configurable time ranges and table filtering, and a dry run mode so you can preview what an import will look like before committing to it.&lt;/p&gt;

&lt;p&gt;The plugin runs as an HTTP trigger, so you control the entire import lifecycle (start, pause, resume, cancel, check status) through simple HTTP requests. That means you can kick off a large import, pause it during peak hours, and pick it up later from exactly where it left off.
For small or medium-sized InfluxDB databases, some might even use this as a migration tool to move to InfluxDB 3.&lt;/p&gt;
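
&lt;p&gt;As a rough illustration of that lifecycle control, driving an HTTP trigger is just a matter of sending requests. The endpoint path and JSON payloads below are placeholders, not the plugin’s actual API; the Import Plugin README defines the real request paths and body format.&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-python"&gt;# Illustrative only: controlling an import over HTTP with the requests
# library. The trigger path and the JSON bodies are placeholders; use the
# paths and payloads documented in the Import Plugin README.
import requests

base_url = "http://localhost:8181/api/v3/engine/import"  # placeholder trigger path
headers = {"Authorization": "Bearer my-token"}            # an InfluxDB 3 token

# Kick off an import (body fields are hypothetical).
requests.post(base_url, headers=headers, json={
    "action": "start",
    "source_url": "http://old-influxdb:8086",
    "source_token": "source-token",
    "target_database": "migrated_data",
})

# Later: pause during peak hours, then resume and check progress.
requests.post(base_url, headers=headers, json={"action": "pause"})
requests.post(base_url, headers=headers, json={"action": "resume"})
status = requests.post(base_url, headers=headers, json={"action": "status"})
print(status.json())
&lt;/code&gt;&lt;/pre&gt;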

&lt;h4 id="data-subscription-plugins-mqtt-kafka-and-amqp"&gt;Data subscription plugins: MQTT, Kafka, and AMQP&lt;/h4&gt;

&lt;p&gt;These three plugins let new users start getting data into InfluxDB 3 quickly, without writing any code: subscribe to an external message broker, and the incoming messages are ingested automatically.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;MQTT Subscriber Plugin&lt;/strong&gt; connects to an MQTT broker, subscribes to topics you specify, and transforms incoming messages into time series data. It supports JSON, Line Protocol, and custom text formats with regex parsing, and uses persistent sessions to ensure reliable message delivery between executions.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Kafka Subscriber Plugin&lt;/strong&gt; does the same for Kafka topics. It uses consumer groups for reliable delivery, supports configurable offset commit policies (commit on success for data integrity, or commit always for maximum throughput), and handles JSON, Line Protocol, and text formats.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;AMQP Subscriber Plugin&lt;/strong&gt; rounds out the trio with support for RabbitMQ and other AMQP-compatible brokers. Like the others, it supports multiple message formats, flexible acknowledgment policies, and comprehensive error tracking.&lt;/p&gt;

&lt;h4 id="opc-ua-plugin"&gt;OPC UA Plugin&lt;/h4&gt;

&lt;p&gt;For industrial environments, the new OPC UA Plugin connects directly to PLCs, SCADA systems, and other OPC UA-enabled equipment. It polls node values on a schedule and writes them into InfluxDB 3 with automatic data type detection. You can list specific nodes for precise control, or use browse mode to auto-discover devices and variables across large deployments. The plugin maintains a persistent connection between polling intervals and supports quality filtering, namespace URI resolution, and TLS security.&lt;/p&gt;

&lt;p&gt;Now, you might be thinking: “I’m already using Telegraf to interface with my streaming data services or OPC UA, why do I need these?” If Telegraf is working well for you, that’s great; there’s no need to change what isn’t broken. But if you’re newer to InfluxDB and aren’t yet a Telegraf user, these plugins give you another way to quickly get data flowing into InfluxDB 3 without adding another component to your stack.&lt;/p&gt;

&lt;p&gt;These ingestion plugins share a consistent configuration model: you can set them up with CLI arguments for simple cases or TOML configuration files for more complex mapping scenarios. They all include built-in error tracking (logging parse failures to dedicated exception tables) and write statistics so you can monitor ingestion health over time.&lt;/p&gt;

&lt;h4 id="schema-validator-plugin"&gt;Schema Validator Plugin&lt;/h4&gt;

&lt;p&gt;One of the benefits of InfluxDB is that you don’t have to pre-define a schema. Data gets written as it is received. But for some use cases, customers do want to constrain incoming data to conform to a specific schema.&lt;/p&gt;

&lt;p&gt;The Schema Validator Plugin addresses that challenge, ensuring only clean, well-structured data makes it into your production tables. You define a JSON schema that specifies allowed measurements, required and optional tags and fields, data types, and allowed values. The plugin sits on a WAL flush trigger and validates every incoming row against your schema. Rows that pass get written to your target database or table; rows that fail get rejected (and optionally logged so you can see what’s being filtered out).&lt;/p&gt;

&lt;p&gt;A typical pattern is to write raw data into a single database or table, let the validator check it, and have clean data land in a separate database or table. It’s a straightforward way to build a reliable data pipeline without external tooling.&lt;/p&gt;

&lt;h4 id="processing-engine-general-improvements"&gt;Processing Engine general improvements&lt;/h4&gt;

&lt;p&gt;Alongside the new plugins, we’ve made several improvements to the Processing Engine itself that give plugin authors more control over write behavior, throughput, and concurrency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Synchronous writes with durability control&lt;/strong&gt;. New synchronous write functions let you choose between two modes: wait for the write to persist to the WAL before returning (for cases where you need to query the data immediately after writing), or return immediately for maximum throughput. This means you can treat bulk telemetry data as a fast path while ensuring that coordination states, such as job checkpoints or configuration flags, are immediately durable and queryable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Batch writes&lt;/strong&gt;. If your plugin writes thousands of points, the overhead isn’t in the data itself; it’s in the repeated write calls. The new batch write capability lets you group many records into a single write operation, which can dramatically improve throughput and make memory usage more predictable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Asynchronous request handling&lt;/strong&gt;. Request-based triggers now support concurrent execution. Previously, request handlers processed one request at a time, which meant a slow request would block everything behind it. With asynchronous mode enabled, the engine can handle multiple requests concurrently, so status checks, control commands, and other lightweight requests stay responsive even while a heavy operation is running.&lt;/p&gt;

&lt;p&gt;These improvements work together in practice. The Import Plugin, for example, uses batch writes with fast-path durability for bulk data transfer, synchronous durable writes for checkpoints and state, and async request handling to keep its pause/resume/status endpoints responsive during long-running imports.&lt;/p&gt;

&lt;h2 id="easier-plugin-configuration-in-explorer"&gt;Easier plugin configuration in Explorer&lt;/h2&gt;

&lt;p&gt;We’ve also been improving InfluxDB 3 Explorer to make configuring plugins simpler, especially for the plugins in the library.&lt;/p&gt;

&lt;p&gt;Until now, configuring a plugin meant passing all the right parameters as startup arguments to the Python script or specifying them in a TOML file. That works, but it requires you to know exactly which parameters a plugin expects—which means reading the documentation first.&lt;/p&gt;

&lt;p&gt;We’re adding dedicated UI configuration forms for some of the plugins in Explorer. Instead of assembling a string of key-value pairs, you’ll see a form with all the available options laid out, along with descriptions and example values. Required fields are clearly marked, and the form handles the formatting for you. It’s the same configuration under the hood, just a much more approachable way to get there.&lt;/p&gt;

&lt;p&gt;This is especially helpful for plugins with more involved configuration, like the data subscription plugins, where you’re specifying broker connections, authentication, message format mappings, and field type definitions. The form-based approach removes the guesswork and lets you get a plugin running without bouncing back and forth between the docs and your terminal.
So far, we have built dedicated configuration forms for the Import, Basic Transformation, and Downsampling plugins.&lt;/p&gt;

&lt;p&gt;This is what it looks like for the Import plugin:&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/3AOZLptneTTvDTFPs5CNvK/e0e621644c7c402fde86b32595b0715e/Screenshot_2026-04-07_at_9.15.20â__AM.png" alt="Import plugin SS" /&gt;&lt;/p&gt;

&lt;p&gt;This is what the Basic Transformation and Downsample configuration looks like:&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/3OMYWwTYij5hcV5B1C1Api/f79bd5d69024c0d14ff90e39dd3b0b26/Screenshot_2026-04-07_at_9.16.23â__AM.png" alt="Basic Transformation SS" /&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/2vtmZDWXRcuTyY4odVQWZ6/d33e5aad87c3147e1fa12bf1b41f3150/Screenshot_2026-04-07_at_9.17.13â__AM.png" alt="Downsample SS" /&gt;&lt;/p&gt;

&lt;p&gt;Look for these to become available in Explorer in the next couple of months.&lt;/p&gt;

&lt;h2 id="whats-next"&gt;What’s next&lt;/h2&gt;

&lt;p&gt;We are continuing to improve the Processing Engine and the Plugin Library, with additional anomaly detection and forecasting plugins about ready for you to try. And we are building UI configuration for the data subscription plugins mentioned above to make them even easier to configure.&lt;/p&gt;

&lt;h2 id="try-them-out"&gt;Try them out&lt;/h2&gt;

&lt;p&gt;All new plugins are now available in beta in the &lt;a href="https://www.influxdata.com/products/processing-engine-plugins/?utm_source=website&amp;amp;utm_medium=influxdb_3_processing-engine-updates&amp;amp;utm_content=blog"&gt;InfluxDB 3 Plugin Library&lt;/a&gt;. They require InfluxDB 3 v3.8.2 or later. Install them from the CLI using the gh: prefix, or browse and install them directly from InfluxDB 3 Explorer’s Plugin Library.&lt;/p&gt;

&lt;p&gt;We’re releasing these as a beta because we want your feedback. We’ve tested them thoroughly internally, but real-world environments are always more diverse and more demanding than any test suite. If you run into issues, have ideas for improvements, or build something cool on top of these plugins, we’d love to hear from you: drop into the &lt;a href="https://discord.com/invite/influxdata"&gt;InfluxData Discord&lt;/a&gt;, post on the &lt;a href="https://community.influxdata.com/"&gt;Community Forums&lt;/a&gt;, or open an issue on &lt;a href="https://github.com/influxdata/influxdb3_plugins/issues"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;
</description>
      <pubDate>Tue, 07 Apr 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/influxdb-3-processing-engine-updates/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/influxdb-3-processing-engine-updates/</guid>
      <category>Developer</category>
      <category>Product</category>
      <author>Gary Fowler (InfluxData)</author>
    </item>
    <item>
      <title>What’s New in InfluxDB 3.9: More Operational Control and a New Performance Preview</title>
      <description>&lt;p&gt;We’ve spent the last few months listening to how teams are running InfluxDB 3 in the wild. The feedback was clear: as you scale, you need less “guesswork” and more control. Today’s release of InfluxDB 3.9 is our answer to that.&lt;/p&gt;

&lt;p&gt;As more teams move InfluxDB 3 into production, our focus has shifted toward the operational experience: how you manage the database at scale, how you ensure it remains secure, and how you provide a seamless experience for users. This release is packed with quality-of-life improvements and includes a beta preview of key features planned for upcoming releases.&lt;/p&gt;

&lt;p&gt;Whether you’re using the open source &lt;a href="https://www.influxdata.com/products/influxdb/?utm_source=website&amp;amp;utm_medium=influxdb_3_9&amp;amp;utm_content=blog"&gt;InfluxDB 3 Core&lt;/a&gt; for recent data and local workloads or scaling with &lt;a href="https://www.influxdata.com/products/influxdb-3-enterprise/?utm_source=website&amp;amp;utm_medium=influxdb_3_9&amp;amp;utm_content=blog"&gt;InfluxDB 3 Enterprise&lt;/a&gt; for the full clustering and security suite, these 3.9 updates are designed to make your stack more predictable.&lt;/p&gt;

&lt;h2 id="operational-maturity-and-system-transparency"&gt;Operational maturity and system transparency&lt;/h2&gt;

&lt;p&gt;In 3.9, we’ve focused on making the database more predictable and transparent for operators. We have organized these refinements into three key areas:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Advanced CLI &amp;amp; Automation&lt;/strong&gt;: We’ve expanded the CLI to better support complex, headless environments. This includes new flags for non-interactive automation and data validation, alongside support for unique host overrides to target specific node types in a cluster. We’ve also improved how Parquet query outputs are piped, making it easier to integrate InfluxDB into automated data pipelines.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;System Reliability &amp;amp; Resource Management&lt;/strong&gt;: We’ve refined how the database handles resources and large-scale schemas. To better support complex data, we’ve increased the default string field limit to 1MB. We’ve also hardened the database lifecycle; administrative controls are now more rigorous, and we’ve ensured that background resources, such as triggers, are cleanly decommissioned whenever a database is removed.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Visibility &amp;amp; Under-the-Hood Infrastructure&lt;/strong&gt;: We’ve upgraded our core infrastructure to improve both security and operational clarity. This includes upgrading DataFusion and the bundled Python for more efficient query execution and plugin security. Additionally, the system now provides better visibility into access control and product identity, updating metrics, headers, and metadata access to clearly distinguish between Core and Enterprise builds across your stack.&lt;/li&gt;
&lt;/ul&gt;
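
&lt;p&gt;For a sense of the CLI-driven workflow described above, here’s a minimal sketch that exports a query result as Parquet with the &lt;code class="language-markup"&gt;influxdb3&lt;/code&gt; CLI. The database name and query are placeholders, and flag names can vary by release, so check &lt;code class="language-markup"&gt;influxdb3 query --help&lt;/code&gt; for your version:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;# Illustrative only: export a query result as Parquet for downstream tooling.
# "example_db" and the query are placeholders; verify flags with `influxdb3 query --help`.
influxdb3 query \
  --database example_db \
  --format parquet \
  --output ./recent_points.parquet \
  "SELECT * FROM cpu LIMIT 1000"&lt;/code&gt;&lt;/pre&gt;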

&lt;p&gt;Collectively, these refinements remove the subtle points of friction that can accumulate as a system scales in production. By hardening resource management and streamlining automation, we’re ensuring that InfluxDB 3 remains a predictable, “set-it-and-forget-it” core for your infrastructure.&lt;/p&gt;

&lt;h2 id="now-in-beta-a-new-performance-preview"&gt;Now in beta: A new performance preview&lt;/h2&gt;

&lt;p&gt;Behind the scenes, we’ve been working on performance updates to InfluxDB 3. These improvements support large-scale time series workloads without sacrificing predictability or operational simplicity. This work lays the foundation for what’s coming in 3.10 and 3.11, specifically focusing on smoothing behavior under load and expanding the range of schemas InfluxDB 3 can handle.&lt;/p&gt;

&lt;p&gt;Because performance in time series is highly dependent on specific workloads and cardinality, we are introducing these updates as a beta in InfluxDB 3 Enterprise. The beta is intended for testing in staging or development environments only. It allows you to explore and provide feedback on:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Optimized single-series queries&lt;/strong&gt;: Targeting reduced latency when fetching single-series data over long time windows.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Resource smoothing&lt;/strong&gt;: Testing reduced CPU and memory spikes during heavy compaction or ingestion bursts.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Wide-and-sparse table support&lt;/strong&gt;: For handling schemas ranging from extreme column counts to ultra-sparse data tables (or any combination).&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Automatic distinct value caches&lt;/strong&gt;: Early-stage, auto-creation of caches designed to reduce friction and eliminate metadata query latency.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These updates are available as an optional, flag-gated preview in InfluxDB 3.9 Enterprise. &lt;strong&gt;They are not recommended for production workloads&lt;/strong&gt;. We encourage Enterprise users to test these capabilities against their specific use cases to help us refine the features for GA. InfluxDB 3 Core will also support many of these new features in the coming releases.&lt;/p&gt;

&lt;p&gt;For instructions on how to enable these preview flags and to view the full technical requirements, visit our &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/?utm_source=website&amp;amp;utm_medium=influxdb_3_9&amp;amp;utm_content=blog"&gt;official Enterprise documentation&lt;/a&gt;.&lt;/p&gt;

&lt;h5 id="get-started-and-share-your-feedback"&gt;Get started and share your feedback:&lt;/h5&gt;
&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Download InfluxDB 3.9&lt;/strong&gt;: Available now via our &lt;a href="https://www.influxdata.com/downloads/?utm_source=website&amp;amp;utm_medium=influxdb_3_9&amp;amp;utm_content=blog"&gt;downloads page&lt;/a&gt; or latest Docker images.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Join the beta&lt;/strong&gt;: If you are an InfluxDB 3 Enterprise Trial user, reach out to me in our &lt;a href="https://discord.com/invite/9zaNCW2PRT"&gt;Discord&lt;/a&gt; or &lt;a href="https://influxcommunity.slack.com/join/shared_invite/zt-3hevuqtxs-3d1sSfGbbQgMw2Fj66rZsA#/shared-invite/email"&gt;Community Slack&lt;/a&gt; to learn how to enable these beta features.&lt;/li&gt;
&lt;/ul&gt;
</description>
      <pubDate>Thu, 02 Apr 2026 12:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/influxdb-3-9/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/influxdb-3-9/</guid>
      <category>Product</category>
      <category>Developer</category>
      <category>news</category>
      <author>Peter Barnett (InfluxData)</author>
    </item>
    <item>
      <title>Telegraf Enterprise Beta is Now Available: Centralized Control for Telegraf at Scale</title>
      <description>&lt;p&gt;Telegraf is incredibly good at what it does: collecting metrics, logs, and events from just about anywhere and sending them wherever you need. But once Telegraf becomes part of your production telemetry pipeline, spread across environments, teams, regions, and edge locations, the hard part isn’t installing agents; it’s operating them.&lt;/p&gt;

&lt;p&gt;Configs drift. “Temporary” overrides linger. Rolling out changes across hundreds (or thousands) of agents becomes a careful, manual process. And when something breaks, the first question is rarely about the data—it’s about the fleet: which configuration is running where, and is every agent healthy?&lt;/p&gt;

&lt;p&gt;That’s the problem Telegraf Enterprise is built to solve.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Today, we’re opening the Telegraf Enterprise beta to the broader Telegraf community so you can help us validate the product where it matters most: at scale.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/8J9tj2g9cNGnqtL94tMOn/adf53d91e1e98a76f8c9461186b1cccf/Screenshot_2026-03-25_at_10.59.07â__AM.png" alt="Telegraf Enterprise SS 1" /&gt;&lt;/p&gt;

&lt;p&gt;&lt;br /&gt;&lt;/p&gt;

&lt;h2 id="what-is-telegraf-enterprise"&gt;What is Telegraf Enterprise?&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Telegraf Enterprise&lt;/strong&gt; is a commercial offering for organizations that run Telegraf at scale and need centralized management, governance, and support. It brings together two key components:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Telegraf Controller&lt;/strong&gt;: A control plane (UI + API) that centralizes Telegraf configuration management and fleet health visibility.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Telegraf Enterprise Support&lt;/strong&gt;: Official support for Telegraf Controller and official Telegraf plugins, designed for teams that need dependable response times and expert guidance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It’s built for real-world, large-scale agent deployments, where Telegraf isn’t a tool you occasionally touch, but a platform you rely on.&lt;/p&gt;

&lt;h2 id="meet-telegraf-controller-your-telegraf-control-plane"&gt;Meet Telegraf Controller: your Telegraf control plane&lt;/h2&gt;

&lt;p&gt;At the heart of Telegraf Enterprise is &lt;strong&gt;Telegraf Controller&lt;/strong&gt;, which centralizes two things teams struggle with most at scale:&lt;/p&gt;

&lt;h4 id="configuration-management-that-doesnt-collapse-under-growth"&gt;Configuration Management That Doesn’t Collapse Under Growth&lt;/h4&gt;

&lt;p&gt;Telegraf Controller helps you create and manage configurations to support consistency across environments while still allowing necessary variation. Core capabilities include:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Centralized configuration creation and editing&lt;/li&gt;
  &lt;li&gt;Templates and parameterization to reuse configs safely (see the sketch after this list)&lt;/li&gt;
  &lt;li&gt;Label-based organization (so fleets don’t devolve into a long list of “agent-123”)&lt;/li&gt;
  &lt;li&gt;Bulk operations for fleet-wide changes&lt;/li&gt;
  &lt;li&gt;Environment variable and parameter management&lt;/li&gt;
  &lt;li&gt;Plugin metadata visibility to simplify config authoring and maintenance&lt;/li&gt;
&lt;/ul&gt;
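
&lt;p&gt;For a sense of what parameterization looks like in practice, here’s a minimal sketch that relies on Telegraf’s built-in environment-variable substitution. It assumes your shared &lt;code class="language-markup"&gt;telegraf.conf&lt;/code&gt; references variables such as &lt;code class="language-markup"&gt;${INFLUX_URL}&lt;/code&gt; and &lt;code class="language-markup"&gt;${INFLUX_TOKEN}&lt;/code&gt;; the file path and values below are placeholders:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;# Assumes /etc/telegraf/telegraf.conf contains parameterized settings such as:
#   urls  = ["${INFLUX_URL}"]
#   token = "${INFLUX_TOKEN}"
# Telegraf substitutes environment variables when it loads the config.
export INFLUX_URL="http://localhost:8086"
export INFLUX_TOKEN="YOUR_API_TOKEN"

# Validate the rendered configuration locally before rolling it out fleet-wide
telegraf --config /etc/telegraf/telegraf.conf --test&lt;/code&gt;&lt;/pre&gt;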

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/63My9Gr4T1fkbk4tXziKRL/535ae3a8d927ddfe52e47d3596cd8b79/Screenshot_2026-03-25_at_11.00.14â__AM.png" alt="Telegraf Enterprise SS 2" /&gt;
&lt;br /&gt;&lt;/p&gt;

&lt;h4 id="fleet-wide-health-visibility"&gt;Fleet-Wide Health Visibility&lt;/h4&gt;

&lt;p&gt;Telegraf Controller gives you a single view into the overall status of your agent deployments, so you can understand:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Which agents are reporting as expected&lt;/li&gt;
  &lt;li&gt;Where health issues are clustering&lt;/li&gt;
  &lt;li&gt;What changed recently, and what might be correlated&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In other words, you don’t just manage Telegraf. You &lt;strong&gt;operate&lt;/strong&gt; it.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/6LcWrqwByO7CtGvf8cDT3C/b2d04ee37b9b14bffec9e77693a716af/Screenshot_2026-03-25_at_11.01.30â__AM.png" alt="Telegraf Enterprise SS 3" /&gt;
&lt;br /&gt;&lt;/p&gt;

&lt;h2 id="designed-to-fit-your-telemetry-stack"&gt;Designed to fit your telemetry stack&lt;/h2&gt;

&lt;p&gt;Telegraf Enterprise is designed to work with the way teams actually deploy Telegraf.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;It does not require InfluxDB&lt;/strong&gt;. You can use the Telegraf Controller regardless of where your telemetry data is going.&lt;/li&gt;
  &lt;li&gt;Configuration delivery follows a &lt;strong&gt;pull-based model&lt;/strong&gt;, where agents fetch configuration over HTTP (see the sketch after this list). This keeps change management predictable and compatible with locked-down environments.&lt;/li&gt;
  &lt;li&gt;It’s built to support &lt;strong&gt;hundreds to thousands of agents&lt;/strong&gt;, with production-grade storage options and a modern UI + API architecture for automation.&lt;/li&gt;
&lt;/ul&gt;
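
&lt;p&gt;Telegraf can already load its configuration from an HTTP(S) URL, which is the mechanism a pull-based model builds on. As a minimal sketch (the Controller URL below is purely illustrative, not the actual beta endpoint):&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;# Point an agent at a centrally served config; the URL is a placeholder.
telegraf --config "https://controller.example.com/configs/edge-site-42"&lt;/code&gt;&lt;/pre&gt;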

&lt;h2 id="why-were-running-this-beta"&gt;Why we’re running this beta&lt;/h2&gt;

&lt;p&gt;This beta is open to any Telegraf user who wants to test-drive Telegraf Controller.&lt;/p&gt;

&lt;p&gt;The focus of the beta is simple:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;&lt;strong&gt;Test Telegraf Controller at scale&lt;/strong&gt;: We want to validate how well Telegraf Controller holds up when you connect real fleets—hundreds or thousands of agents—with real operational behaviors.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Gather feedback from the community:&lt;/strong&gt; We’re intentionally inviting community input early, while we’re still shaping the GA experience. What workflows are missing? What’s confusing? What would make this tool indispensable in your environment?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;At this stage, your feedback directly influences what Telegraf Enterprise becomes.&lt;/p&gt;

&lt;h2 id="enterprise-support-that-matches-production-expectations"&gt;Enterprise support that matches production expectations&lt;/h2&gt;

&lt;p&gt;Operating telemetry pipelines is a production responsibility, and when Telegraf is part of that pipeline, you need support that understands the stakes.&lt;/p&gt;

&lt;p&gt;Telegraf Enterprise includes support designed for teams that need:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Clear expectations for response and escalation&lt;/li&gt;
  &lt;li&gt;Coverage for Telegraf Controller and official Telegraf plugins&lt;/li&gt;
  &lt;li&gt;Help diagnosing issues and reducing operational risk as fleets grow&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is especially valuable when Telegraf is deployed across multiple teams, environments, or customer sites, where operational consistency matters as much as collection capability.&lt;/p&gt;

&lt;h2 id="who-is-telegraf-enterprise-for"&gt;Who is Telegraf Enterprise for?&lt;/h2&gt;

&lt;p&gt;Telegraf Enterprise is built for organizations that manage Telegraf fleets at a meaningful scale, including:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Platform engineering and SRE teams&lt;/li&gt;
  &lt;li&gt;DevOps organizations operating across multi-cloud / hybrid / edge&lt;/li&gt;
  &lt;li&gt;Managed service providers delivering telemetry as a service&lt;/li&gt;
  &lt;li&gt;Compliance-sensitive teams that need standardized configurations and governance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you’re running a small number of agents and are comfortable managing configs manually, you may not need Telegraf Enterprise today. But if Telegraf is everywhere—and your team is responsible for keeping it reliable—centralized control quickly becomes less “nice to have” and more “how did we operate without this?”&lt;/p&gt;

&lt;h2 id="packaging-free-and-enterprise-options"&gt;Packaging: free and enterprise options&lt;/h2&gt;

&lt;h4 id="telegraf-controller"&gt;Telegraf Controller&lt;/h4&gt;

&lt;p&gt;A free tier is available for teams that want centralized configuration management and visibility with pre-defined limits.&lt;/p&gt;

&lt;h4 id="telegraf-enterprise"&gt;Telegraf Enterprise&lt;/h4&gt;

&lt;p&gt;For teams operating Telegraf as critical infrastructure, &lt;strong&gt;Telegraf Enterprise&lt;/strong&gt; includes the Telegraf Controller packaged with enterprise support.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The key difference&lt;/strong&gt;: Telegraf Enterprise is built for scale and operational reliability, with support and capabilities aligned to production fleet management.&lt;/p&gt;

&lt;h2 id="getting-started-with-telegraf-controller"&gt;Getting started with Telegraf Controller&lt;/h2&gt;

&lt;p&gt;Telegraf Enterprise is designed for teams operating Telegraf as a core part of production telemetry pipelines. If Telegraf is already how you collect metrics, logs, and events across your infrastructure, Telegraf Controller is the missing piece that helps you operate that collection layer like a platform—not a pile of configs.&lt;/p&gt;

&lt;p&gt;To join the beta, &lt;a href="https://influxdata.com/products/telegraf-enterprise"&gt;opt in here&lt;/a&gt;. Please share your feedback in-app with the feedback button or in our Slack channel, #telegraf-enterprise-beta.&lt;/p&gt;

&lt;p&gt;Join the beta, push it hard, and tell us about your use case and what would make your workflow easier!&lt;/p&gt;
</description>
      <pubDate>Thu, 26 Mar 2026 07:30:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/telegraf-enterprise-beta/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/telegraf-enterprise-beta/</guid>
      <category>Product</category>
      <category>Developer</category>
      <author>Scott Anderson (InfluxData)</author>
    </item>
    <item>
      <title>A New Scale Tier for Time Series on Amazon Timestream for InfluxDB</title>
      <description>
&lt;p&gt;When we first announced the &lt;a href="https://www.influxdata.com/blog/influxdb3-on-amazon-timestream/?utm_source=website&amp;amp;utm_medium=scaling_amazon_timestream_influxdb&amp;amp;utm_content=blog"&gt;availability&lt;/a&gt; of &lt;a href="https://www.influxdata.com/products/influxdb/?utm_source=website&amp;amp;utm_medium=scaling_amazon_timestream_influxdb&amp;amp;utm_content=blog"&gt;InfluxDB 3 Core&lt;/a&gt; and &lt;a href="https://www.influxdata.com/products/influxdb-3-enterprise/?utm_source=website&amp;amp;utm_medium=scaling_amazon_timestream_influxdb&amp;amp;utm_content=blog"&gt;Enterprise&lt;/a&gt; on Amazon Timestream for InfluxDB last year, we set a new standard for managed time series on AWS. We gave developers a simple way to harness high performance at scale while removing the burden of infrastructure management.&lt;/p&gt;

&lt;p&gt;But as our customers have taught us, “at scale” is a moving target. Across Industrial IoT, physical AI, and real-time observability, data is growing in both volume and resolution. When you move from minute-by-minute polling to sub-millisecond, high-fidelity telemetry, the pressure on the underlying database compounds. To stay ahead of that curve, developers need a platform that scales as fast as their workloads.&lt;/p&gt;

&lt;p&gt;Today, we’re delivering that by expanding InfluxDB 3 on Amazon Timestream for InfluxDB to &lt;a href="https://aws.amazon.com/timestream/"&gt;support clusters of up to 15 nodes&lt;/a&gt;. We’re also introducing a seamless migration path from InfluxDB 3 Core to InfluxDB 3 Enterprise, allowing teams to unlock this massive performance tier without friction, the risk of a manual architectural overhaul, or data loss.&lt;/p&gt;

&lt;h2 id="scaling-for-the-mission-critical"&gt;Scaling for the mission-critical&lt;/h2&gt;

&lt;p&gt;At InfluxData, we’re seeing time series expand from infrastructure monitoring to the foundation for autonomous systems. In high-stakes environments like power grid management or autonomous vehicle navigation, increased latency is a significant operational risk rather than just a performance metric.&lt;/p&gt;

&lt;p&gt;Previously, AWS Timestream’s support of InfluxDB 3 was focused on smaller, highly efficient configurations. By expanding to 15 nodes, we are providing major upgrades across three important areas:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Query concurrency&lt;/strong&gt;: More nodes mean more hands on deck to process complex, concurrent queries. Large teams can now run heavy analytical workloads without impacting real-time dashboards or critical alerts.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Massive throughput&lt;/strong&gt;: With a larger cluster, you can ingest millions of data points per second across hundreds of millions of unique series, maintaining real-time query performance.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Workload isolation and optimization&lt;/strong&gt;: These expanded clusters enable true functional isolation between ingestion, queries, and compaction. This allows granular performance tuning optimized for your most demanding workloads.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="architected-for-enterprise-demand"&gt;Architected for enterprise demand&lt;/h2&gt;

&lt;p&gt;This new 15-node option is available for InfluxDB 3 Enterprise and is designed for organizations that require high availability, enhanced security, and the power to maintain high ingestion and real-time query performance across high-resolution, high-velocity datasets. InfluxDB 3 Core will continue to operate in single-node deployments.&lt;/p&gt;

&lt;p&gt;By leveraging AWS infrastructure, you can spin up these expanded clusters in minutes directly from the AWS Console. With our new seamless migration capabilities, you can transition your existing Core workloads to Enterprise clusters with a single click. This ensures that as your data grows (from a few local sensors to a global fleet of devices), your database never becomes the bottleneck, and your team never has to worry about the downtime typically associated with a migration. These larger clusters are available today in all AWS regions where Amazon Timestream for InfluxDB is available, ensuring you can deploy and optimize mission-critical time series infrastructure wherever your data lives.&lt;/p&gt;

&lt;h2 id="the-foundation-for-physical-ai"&gt;The foundation for physical AI&lt;/h2&gt;

&lt;p&gt;Our partnership with AWS is about meeting developers where they build. By integrating with services like AWS Lambda, SageMaker, and Kinesis, we’ve simplified the path from high-volume streams into Physical AI. This is the frontier where intelligence moves from the digital realm into the physical world.&lt;/p&gt;

&lt;p&gt;Time series is the heartbeat of this transition, fueling a two-part lifecycle:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Training&lt;/strong&gt;: Using massive volumes of historical data to establish baselines and “normal” patterns.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Inference&lt;/strong&gt;: Streaming real-time data against those models to trigger automated, deterministic actions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What makes our partnership with AWS unique is that we support both sides of this loop. With up to 15 nodes at your disposal, InfluxDB 3 has the headroom to act as a distributed inference engine, running predictive maintenance and anomaly detection against your data. This eliminates the latency tax of moving massive datasets between layers, ensuring that whether you are managing a robotic fleet or a smart grid, your autonomous systems can perceive and react with real-time precision.&lt;/p&gt;

&lt;h2 id="whats-next"&gt;What’s next?&lt;/h2&gt;

&lt;p&gt;The future of time series is about speed, precision, and scale. With today’s announcement, we’re handing you the keys to all three. By removing the barriers between single-node efficiency and enterprise-grade performance, we’re making it easier than ever to evolve your architecture as fast as your data grows.&lt;/p&gt;

&lt;p&gt;We’re excited to see what the community builds with this new level of power. If you’re ready to scale your real-time workloads, head over to the &lt;a href="https://signin.aws.amazon.com/signin?redirect_uri=https%3A%2F%2Fus-east-1.console.aws.amazon.com%2Ftimestream%2Fhome%3Fca-oauth-flow-id%3D3617%26hashArgs%3D%2523welcome%26isauthcode%3Dtrue%26oauthStart%3D1768948312939%26region%3Dus-east-1%26state%3DhashArgsFromTB_us-east-1_89587d800d106091&amp;amp;client_id=arn%3Aaws%3Asignin%3A%3A%3Aconsole%2Fpyramid&amp;amp;forceMobileApp=0&amp;amp;code_challenge=0mEuy-XrhJW82iYjevEt3OqO4t46aGARztfwPAhfPX4&amp;amp;code_challenge_method=SHA-256"&gt;AWS Console&lt;/a&gt; and start building.&lt;/p&gt;
</description>
      <pubDate>Mon, 16 Mar 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/scaling-amazon-timestream-influxdb/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/scaling-amazon-timestream-influxdb/</guid>
      <category>Product</category>
      <category>Developer</category>
      <author>Pat Walsh (InfluxData)</author>
    </item>
    <item>
      <title>How to Integrate Grafana with Home Assistant in 2026</title>
      <description>&lt;p&gt;This post covers how to get started with Home Assistant and Grafana, including setting up &lt;a href="https://www.influxdata.com/products/influxdb-overview/?utm_source=website&amp;amp;utm_medium=integrate_grafana_ha_influxdb-3&amp;amp;utm_content=blog"&gt;InfluxDB&lt;/a&gt; and &lt;a href="https://www.influxdata.com/grafana/?utm_source=website&amp;amp;utm_medium=integrate_grafana_ha_influxdb-3&amp;amp;utm_content=blog"&gt;Grafana&lt;/a&gt; with Docker, configuring InfluxDB to receive data from Home Assistant, and creating a Grafana dashboard to visualize your data. It provides a comprehensive guide for real-time monitoring and analysis of Home Assistant data.&lt;/p&gt;

&lt;p&gt;In this tutorial, you’ll learn how to install and configure both InfluxDB and Grafana, and create a Grafana dashboard to visualize data from Home Assistant.&lt;/p&gt;

&lt;p&gt;Want to know more about InfluxDB and Grafana before you get started?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.influxdata.com/resources/infrastructure-monitoring-basics-with-telegraf-grafana-influxdb/"&gt;Watch the webinar&lt;/a&gt;.&lt;u&gt;&lt;/u&gt;&lt;/p&gt;

&lt;h2 id="definitions-grafana-home-assistant-influxdb"&gt;Definitions: Grafana, Home Assistant, InfluxDB&lt;/h2&gt;

&lt;h4 id="home-assistant"&gt;Home Assistant&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Home Assistant&lt;/strong&gt; is an open source home automation platform that enables control and monitoring of various devices using a web interface or mobile app. With support for diverse devices and protocols, it’s highly customizable, allowing users to integrate new devices and protocols through custom components.&lt;/p&gt;

&lt;h4 id="grafana"&gt;Grafana&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Grafana&lt;/strong&gt; is a versatile open source analytics and monitoring platform that offers powerful data visualization capabilities. It supports various data sources, including popular databases, such as InfluxDB, Elasticsearch, Prometheus, MySQL, PostgreSQL, and more.&lt;/p&gt;

&lt;p&gt;With its extensibility through plugins, Grafana allows users to easily incorporate new data sources for enhanced analytics and monitoring.&lt;/p&gt;

&lt;h4 id="influxdb"&gt;InfluxDB&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;InfluxDB&lt;/strong&gt; is an open source time series database designed for storing and querying time series data. It supports a range of field data types, including integers, floats, strings, and booleans. InfluxDB 3 also includes a plugin-based Processing Engine for custom processing of data as it arrives.&lt;/p&gt;

&lt;p&gt;Home Assistant integrates with Grafana, an analytics and monitoring platform, to visualize and analyze data. This data can be collected and stored in InfluxDB, an open source time series database, which allows for efficient querying of timestamped data.&lt;/p&gt;

&lt;p&gt;Together, these tools enable users to control and monitor their smart home devices through Home Assistant while visualizing and analyzing the data in real-time using Grafana and InfluxDB.&lt;/p&gt;

&lt;h2 id="prerequisites-and-versions"&gt;Prerequisites and versions&lt;/h2&gt;

&lt;p&gt;To follow this tutorial, you will need the following:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href="https://www.home-assistant.io/installation"&gt;Home Assistant&lt;/a&gt; 2025.1+&lt;/li&gt;
  &lt;li&gt;Docker 24.0+&lt;/li&gt;
  &lt;li&gt;InfluxDB 3.x&lt;/li&gt;
  &lt;li&gt;Grafana 12&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id="step-1-install-influxdb-3-core"&gt;Step 1: Install InfluxDB 3 Core&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Option A: Install via Docker&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First, pull the InfluxDB 3 image using this command:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;docker pull influxdb:3-core&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Once the download is complete, run the InfluxDB container with the following command:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;docker run -d\
	--name influxdb \
	–p 8086:8086 \
-v influxdb3_data:/var/lib/influxdb2 \
	-v influxdb3_config:/etc/influxdb2 \
-e DOCKER_INFLUXDB_INIT_MODE=setup \
     -e DOCKER_INFLUXDB_INIT_USERNAME=admin \
     -e DOCKER_INFLUXDB_INIT_PASSWORD=your_password \
     -e DOCKER_INFLUXDB_INIT_ORG=home_org \
     -e DOCKER_INFLUXDB_INIT_BUCKET=home_assistant \
     influxdb:3&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This command starts InfluxDB on port 8086, configures an initial user, organization, and bucket, and stores data in Docker volumes so it survives container restarts.&lt;/p&gt;
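
&lt;p&gt;Before moving on, it’s worth confirming the container is up and the HTTP API is reachable on the port you mapped. A quick sanity check, assuming the port mapping from the command above:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;# Confirm the container is running
docker ps --filter name=influxdb

# The health endpoint should report a passing status
curl http://localhost:8086/health&lt;/code&gt;&lt;/pre&gt;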

&lt;p&gt;&lt;strong&gt;Option B: Install Directly&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First, you’ll need to install InfluxDB on your machine. You can find instructions for your specific operating system on the &lt;a href="https://www.influxdata.com/?utm_source=website&amp;amp;utm_medium=integrate_grafana_ha_influxdb-3&amp;amp;utm_content=blog"&gt;InfluxDB downloads page&lt;/a&gt;.&lt;/p&gt;

&lt;h4 id="step-2-configure-influxdb-and-generate-an-api-token"&gt;Step 2: Configure InfluxDB and Generate an API Token&lt;/h4&gt;

&lt;p&gt;After installing InfluxDB, you’ll need to configure it to accept data from Home Assistant. You can do this by completing the initial setup and generating an API token for Home Assistant to use.&lt;/p&gt;

&lt;p&gt;Open your browser and navigate to http://localhost:8086.&lt;/p&gt;

&lt;p&gt;If you used the environment variables in Step 1, you can log in with the username and password you specified. Otherwise, complete the initial setup process.&lt;/p&gt;

&lt;p&gt;Next, generate an API token:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;In the InfluxDB UI, click &lt;strong&gt;Load Data&lt;/strong&gt; in the left sidebar&lt;/li&gt;
  &lt;li&gt;Click &lt;strong&gt;API Tokens&lt;/strong&gt;&lt;/li&gt;
  &lt;li&gt;Click &lt;strong&gt;Generate API Token&lt;/strong&gt; and select &lt;strong&gt;Custom API Token&lt;/strong&gt;&lt;/li&gt;
  &lt;li&gt;Give it a description like “Home Assistant Token”&lt;/li&gt;
  &lt;li&gt;Grant &lt;strong&gt;Write&lt;/strong&gt; permission to the &lt;code class="language-markup"&gt;home_assistant&lt;/code&gt; bucket&lt;/li&gt;
  &lt;li&gt;Click &lt;strong&gt;Generate&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Lastly, copy and save the token, as you’ll need it for Home Assistant and Grafana configuration.&lt;/p&gt;
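
&lt;p&gt;If you want to confirm the token works before wiring up Home Assistant, you can write a throwaway test point with the v2 write API. This is just a sanity check; the measurement name below is arbitrary:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;# Write a single test point to the home_assistant bucket using your new token
curl -X POST "http://localhost:8086/api/v2/write?org=home_org&amp;amp;bucket=home_assistant&amp;amp;precision=s" \
  -H "Authorization: Token YOUR_API_TOKEN" \
  --data-binary "token_test,source=tutorial value=1"&lt;/code&gt;&lt;/pre&gt;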

&lt;h4 id="step-3-configure-the-influxdb-integration-in-home-assistant"&gt;Step 3: Configure the InfluxDB integration in Home Assistant&lt;/h4&gt;

&lt;p&gt;Now, you’ll need to install the InfluxDB integration in Home Assistant. You can do this by navigating to the Home Assistant web interface, selecting &lt;strong&gt;Settings&lt;/strong&gt; from the sidebar, and then selecting &lt;strong&gt;Integrations&lt;/strong&gt;. From there, you can search for &lt;strong&gt;InfluxDB&lt;/strong&gt; and follow the prompts to configure the integration.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/6ad093ddc249421e98f1400fe87df2bf/d1a15fd8ea44bf9efc84f393fd4ddc52/unnamed.png" alt="" /&gt;&lt;/p&gt;

&lt;p&gt;Next, open your Home Assistant &lt;code class="language-markup"&gt;configuration.yaml&lt;/code&gt; file and add the following &lt;code class="language-markup"&gt;influxdb&lt;/code&gt; configuration block:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-yaml"&gt; influxdb:
     api_version: 2
     ssl: false
     host: YOUR_INFLUXDB_IP
     port: 8086
     token: YOUR_API_TOKEN
     organization: home_org
     bucket: home_assistant
     tags:
       source: HomeAssistant
     tags_attributes:
       - friendly_name
     default_measurement: state
     exclude:
       entity_globs:
         - sensor.date*
         - sensor.time*
     include:
       domains:
         - sensor
         - binary_sensor
         - climate
         - light
         - switch&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Then, restart Home Assistant to apply the changes.&lt;/p&gt;

&lt;p&gt;For additional configuration options, see the &lt;a href="https://www.home-assistant.io/integrations/influxdb"&gt;Home Assistant InfluxDB integration documentation&lt;/a&gt;.&lt;/p&gt;
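
&lt;p&gt;How you restart depends on your install type. If you’re on Home Assistant OS or a Supervised install, a minimal sketch with the &lt;code class="language-markup"&gt;ha&lt;/code&gt; CLI looks like this (container users can restart from the Home Assistant UI instead):&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;# Validate configuration.yaml first, then restart Home Assistant Core
ha core check
ha core restart&lt;/code&gt;&lt;/pre&gt;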

&lt;h5 id="verify-data-is-flowing-to-influxdb"&gt;Verify Data is Flowing to InfluxDB&lt;/h5&gt;

&lt;p&gt;After restarting Home Assistant:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Return to the InfluxDB UI&lt;/li&gt;
  &lt;li&gt;Click the &lt;strong&gt;Data Explorer&lt;/strong&gt; in the left sidebar&lt;/li&gt;
  &lt;li&gt;Select your &lt;code class="language-markup"&gt;home_assistant&lt;/code&gt; bucket&lt;/li&gt;
  &lt;li&gt;Within a few minutes, you should see measurements appearing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you don’t see data, check the logs at &lt;strong&gt;Settings&lt;/strong&gt; -&amp;gt; &lt;strong&gt;System&lt;/strong&gt; -&amp;gt; &lt;strong&gt;Logs&lt;/strong&gt; for connection errors.&lt;/p&gt;
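
&lt;p&gt;If the UI logs don’t make the problem obvious, the container and Home Assistant log files are the next place to look. Two quick checks (the container name comes from Step 1; the log path assumes a default Home Assistant OS install):&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;# Recent InfluxDB container logs, e.g. auth or bucket errors
docker logs influxdb --tail 50

# Search the Home Assistant log for InfluxDB-related errors
grep -i influxdb /config/home-assistant.log | tail -n 20&lt;/code&gt;&lt;/pre&gt;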

&lt;h4 id="step-4-install-grafana-using-docker"&gt;Step 4: Install Grafana Using Docker&lt;/h4&gt;

&lt;p&gt;If you prefer to run Grafana in Docker rather than install it locally, follow these steps.&lt;/p&gt;

&lt;h5 id="install-docker"&gt;Install Docker&lt;/h5&gt;

&lt;p&gt;If you haven’t already, you’ll need to install Docker on your machine. You can find instructions for your specific OS on the &lt;a href="https://www.docker.com/"&gt;Docker website&lt;/a&gt;.&lt;/p&gt;

&lt;h5 id="pull-the-grafana-docker-image"&gt;Pull the Grafana Docker Image&lt;/h5&gt;

&lt;p&gt;Once Docker is installed, you can pull the Grafana Docker image by running the following command in your terminal:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;docker pull grafana/grafana&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This will download the latest version of the &lt;a href="https://hub.docker.com/r/grafana/grafana"&gt;Grafana Docker&lt;/a&gt; image to your machine.&lt;/p&gt;

&lt;h5 id="run-the-grafana-docker-container"&gt;Run the Grafana Docker container&lt;/h5&gt;

&lt;p&gt;To start a new Grafana Docker container, run the following command in your terminal:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;docker run -d -p 3000:3000 --name=grafana grafana/grafana&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This will start a new container named &lt;strong&gt;grafana&lt;/strong&gt; and map port 3000 on the container to port 3000 on your local machine.&lt;/p&gt;
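
&lt;p&gt;One optional tweak: the command above doesn’t persist Grafana’s state, so dashboards and data sources disappear if the container is removed. Adding a named volume for Grafana’s data directory avoids that:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;# Same as above, but with a named volume so dashboards survive container recreation
docker run -d -p 3000:3000 --name=grafana \
  -v grafana_data:/var/lib/grafana \
  grafana/grafana&lt;/code&gt;&lt;/pre&gt;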

&lt;h5 id="access-the-grafana-web-interface"&gt;Access the Grafana web interface&lt;/h5&gt;

&lt;p&gt;Once the container is running, you can access the Grafana web interface by navigating to http://localhost:3000 in your web browser.&lt;/p&gt;

&lt;h5 id="create-a-grafana-account-and-connect-data-sources"&gt;Create a Grafana account and connect data sources&lt;/h5&gt;

&lt;p&gt;When you first access the Grafana web interface, you’ll be prompted to log in and set a new password for the admin account. After that, you can start configuring Grafana by adding a data source and creating dashboards, as described in the following steps.&lt;/p&gt;

&lt;h5 id="configure-grafana"&gt;Configure Grafana&lt;/h5&gt;

&lt;p&gt;After installing Grafana, you’ll need to configure it to connect to the InfluxDB database. You can do this by following these steps:&lt;/p&gt;

&lt;p&gt;Open the Grafana web interface by navigating to http://localhost:3000 in your web browser. Log in to Grafana using the default username &lt;strong&gt;admin&lt;/strong&gt; and password &lt;strong&gt;admin&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Click &lt;strong&gt;Connections&lt;/strong&gt; in the sidebar and select &lt;strong&gt;Data sources&lt;/strong&gt;.&lt;/li&gt;
  &lt;li&gt;Click on the &lt;strong&gt;Add data source&lt;/strong&gt; button and select &lt;strong&gt;InfluxDB&lt;/strong&gt;. Enter the following information in the &lt;strong&gt;InfluxDB details&lt;/strong&gt; section:
    &lt;ul&gt;
      &lt;li&gt;URL: &lt;a href="http://localhost:8086"&gt;http://localhost:8086&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;Query Language: SQL or InfluxQL&lt;/li&gt;
      &lt;li&gt;Database: home_assistant (the bucket you created in Step 1)&lt;/li&gt;
      &lt;li&gt;User: admin&lt;/li&gt;
      &lt;li&gt;Password: YOUR_API_TOKEN (the API token from Step 2)&lt;/li&gt;
      &lt;li&gt;HTTP method: GET&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;Click on the &lt;strong&gt;Save &amp;amp; Test&lt;/strong&gt; button to save the data source and test the connection to InfluxDB.&lt;/li&gt;
&lt;/ul&gt;

&lt;h5 id="create-a-grafana-dashboard"&gt;Create a Grafana dashboard&lt;/h5&gt;

&lt;p&gt;After configuring the InfluxDB data source in Grafana, you can create a new dashboard to visualize the Home Assistant data. To do this, follow these steps:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Click &lt;strong&gt;Dashboards&lt;/strong&gt; in the sidebar, then select &lt;strong&gt;New&lt;/strong&gt; and &lt;strong&gt;New dashboard&lt;/strong&gt;.&lt;/li&gt;
  &lt;li&gt;Click &lt;strong&gt;Add visualization&lt;/strong&gt; and select the InfluxDB data source that you configured.&lt;/li&gt;
  &lt;li&gt;Choose the &lt;strong&gt;Time series&lt;/strong&gt; visualization for the panel.&lt;/li&gt;
  &lt;li&gt;In the &lt;strong&gt;Query&lt;/strong&gt; tab, enter your InfluxDB query in the &lt;strong&gt;Query editor&lt;/strong&gt;. For example, you might enter a query like this to display the temperature from a temperature sensor:&lt;/li&gt;
&lt;/ul&gt;

&lt;pre class=""&gt;&lt;code class="language-sql"&gt;SELECT mean("value") FROM "temperature" WHERE ("entity_id" = 'sensor.temperature') AND $timeFilter GROUP BY time($__interval)&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This query selects the average temperature from the temperature measurement for the sensor.temperature entity in Home Assistant and groups the results by time interval.&lt;/p&gt;

&lt;p&gt;Click on the &lt;strong&gt;Apply&lt;/strong&gt; button to save the query and display the results.&lt;/p&gt;

&lt;p&gt;Customize your panel as desired by selecting different visualization options, adding legends and annotations, etc.&lt;/p&gt;

&lt;p&gt;Click on the &lt;strong&gt;Save&lt;/strong&gt; icon in the toolbar to save the panel to your dashboard.&lt;/p&gt;

&lt;p&gt;You can repeat these steps to add more panels to your dashboard, using different InfluxDB queries to display different data. Check &lt;a href="https://docs.influxdata.com/influxdb3/core/visualize-data/grafana/"&gt;InfluxDB’s Grafana documentation&lt;/a&gt; for more details.&lt;/p&gt;

&lt;h4 id="step-8-view-your-dashboard"&gt;Step 8: View Your Dashboard&lt;/h4&gt;

&lt;p&gt;After creating your Grafana dashboard, you can view it by navigating to http://localhost:3000/dashboards in your web browser and selecting the dashboard from the list. You should now see a visual representation of your Home Assistant data in Grafana!&lt;/p&gt;

&lt;h2 id="troubleshooting-and-faqs"&gt;Troubleshooting and FAQs&lt;/h2&gt;

&lt;h4 id="no-data-appears-in-influxdb-after-configuring-home-assistant"&gt;No data appears in InfluxDB after configuring Home Assistant&lt;/h4&gt;

&lt;p&gt;Check your Home Assistant logs (&lt;code class="language-markup"&gt;Settings &amp;gt; System &amp;gt; Logs&lt;/code&gt;). If the connection fails, you will see errors there. Ensure the &lt;code class="language-markup"&gt;token&lt;/code&gt; in &lt;code class="language-markup"&gt;configuration.yaml&lt;/code&gt; is correct and wrapped in quotes if it contains special characters.&lt;/p&gt;

&lt;h4 id="can-i-write-to-multiple-influxdb-databases-or-buckets-from-home-assistant"&gt;Can I write to multiple InfluxDB databases or buckets from Home Assistant?&lt;/h4&gt;

&lt;p&gt;No. Home Assistant’s InfluxDB integration only supports writing to a single database or bucket. All your sensor data must be sent to a single location. If you need to separate data streams, you’ll need to use Home Assistant’s filtering options in &lt;code class="language-markup"&gt;configuration.yaml&lt;/code&gt; to exclude certain entities, or run multiple Home Assistant instances.&lt;/p&gt;

&lt;h4 id="my-grafana-cant-connect-to-influxdb-even-though-both-are-running-whats-wrong"&gt;My Grafana can’t connect to InfluxDB even though both are running. What’s wrong?&lt;/h4&gt;

&lt;p&gt;This is often a Docker networking issue. If both services are running in Docker containers, they may not be able to reach each other via &lt;code class="language-markup"&gt;localhost&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Try using your host machine’s actual IP address (like &lt;code class="language-markup"&gt;192.168.1.100:8086&lt;/code&gt;) instead of &lt;code class="language-markup"&gt;localhost:8086&lt;/code&gt; in Grafana’s data source configuration. Also, verify that the port number matches; InfluxDB’s HTTP API uses port 8086 by default.&lt;/p&gt;
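
&lt;p&gt;Another option is to put both containers on the same user-defined Docker network so Grafana can reach InfluxDB by container name. The network name below is arbitrary:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;# Create a shared network and attach both containers to it
docker network create monitoring
docker network connect monitoring influxdb
docker network connect monitoring grafana

# In the Grafana data source settings, use the container name as the host:
#   http://influxdb:8086&lt;/code&gt;&lt;/pre&gt;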

&lt;p&gt;If you’re using Home Assistant add-ons, make sure you’re using the correct internal container names (like &lt;code class="language-markup"&gt;a0d7b954-influxdb:8086&lt;/code&gt; instead of &lt;code class="language-markup"&gt;a0d7b954_influxdb:8086&lt;/code&gt;, note the hyphen vs underscore).&lt;/p&gt;

&lt;h4 id="can-i-use-influxdb-cloud-instead-of-running-my-own-instance"&gt;Can I use InfluxDB Cloud instead of running my own instance?&lt;/h4&gt;

&lt;p&gt;Yes. InfluxDB Cloud offers a fully-managed InfluxDB 3 service, and the setup is very similar. You’ll get a URL, organization, bucket, and API token from the InfluxDB Cloud console, then configure Home Assistant and Grafana to point at those cloud endpoints instead of a local instance.&lt;/p&gt;

&lt;p&gt;This eliminates the need to manage Docker containers, backups, and updates yourself. However, you’ll need to consider data egress costs if you have high query volumes, and your Home Assistant data will be stored in the cloud rather than locally.&lt;/p&gt;

&lt;h4 id="can-i-migrate-from-influxdb-2x-to-3x"&gt;Can I migrate from InfluxDB 2.x to 3.x?&lt;/h4&gt;

&lt;p&gt;Yes, InfluxDB 3 maintains backward compatibility with the v2 write API. Your Home Assistant configuration will continue to work. For data migration, refer to the &lt;a href="https://docs.influxdata.com/influxdb3/core/"&gt;official InfluxDB migration documentation&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id="next-steps-for-home-assistant-and-grafana"&gt;Next steps for Home Assistant and Grafana&lt;/h2&gt;

&lt;p&gt;Getting started with Home Assistant and Grafana requires installing InfluxDB and Grafana on your machine, configuring InfluxDB to accept data from Home Assistant, installing the InfluxDB integration in Home Assistant, configuring Grafana to connect to the InfluxDB database, and creating a Grafana dashboard to visualize your data. With these tools and techniques, you can easily monitor and analyze your Home Assistant data in real-time.&lt;/p&gt;
</description>
      <pubDate>Thu, 08 Jan 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/how-integrate-gafana-home-assistant/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/how-integrate-gafana-home-assistant/</guid>
      <category>Developer</category>
      <category>Product</category>
      <author>Community (InfluxData)</author>
    </item>
  </channel>
</rss>
