<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>InfluxData Blog - Suyash Joshi</title>
    <description>Posts by Suyash Joshi on the InfluxData Blog</description>
    <link>https://www.influxdata.com/blog/author/suyash-joshi/</link>
    <language>en-us</language>
    <lastBuildDate>Tue, 10 Mar 2026 08:00:00 +0000</lastBuildDate>
    <pubDate>Tue, 10 Mar 2026 08:00:00 +0000</pubDate>
    <ttl>1800</ttl>
    <item>
      <title>When Your Plant Talks Back: Conversational AI with InfluxDB 3</title>
      <description>&lt;p&gt;No one wants to stare at a plant and guess if it needs water. It’s much easier if the plant can say, “I’m thirsty.” A few years ago, we built &lt;a href="https://www.influxdata.com/blog/prototyping-iot-with-influxdb-cloud-2-0/?utm_source=website&amp;amp;utm_medium=plant_buddy_influxdb_3&amp;amp;utm_content=blog"&gt;Plant Buddy using InfluxDB Cloud 2.0&lt;/a&gt;. The linked article is still a great guide for cloud-first IoT prototyping as it shows how quickly you can connect devices, store time series data, and build dashboards in the cloud with the previous version of InfluxDB.&lt;/p&gt;

&lt;p&gt;But this time, the goal was different. Instead of sending soil moisture data to the cloud, the entire system runs locally on the latest InfluxDB 3 Core, much like a modern industrial edge setup, with an LLM providing a natural conversational interface.&lt;/p&gt;

&lt;h2 id="the-architecture-the-factory-at-home"&gt;The architecture: the “factory” at home&lt;/h2&gt;

&lt;p&gt;In real factories, raw PLC data is captured at the edge, often using MQTT and a local database. That same architecture now powers Plant Buddy v3 with the following setup.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Edge Device (ESP32 / Arduino)&lt;/strong&gt;: Works like a small PLC. It reads soil moisture and publishes the plant’s state to the network without knowing anything about the database.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Soil Moisture Sensor (Analog)&lt;/strong&gt;: Outputs an analog signal based on soil moisture. The microcontroller converts it to digital using its built-in ADC.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Message Bus (Mosquitto MQTT)&lt;/strong&gt;: Handles publish/subscribe communication. The Arduino publishes data to the broker (running locally), and Telegraf subscribes to forward data to the database.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Database (InfluxDB 3 Core)&lt;/strong&gt;: Runs locally in Docker as a high-performance time series database storing all sensor readings.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;User Interface (Claude + MCP)&lt;/strong&gt;: Enables natural language queries. Instead of writing SQL, questions about plant health can be asked conversationally.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/1ZSbIHFEYUbPMC1AdqrrST/ea99e0486c676472a7f68eec9b8b7d7e/Screenshot_2026-02-19_at_9.59.35â__AM.png" alt="Plant Buddy architecture" /&gt;&lt;/p&gt;

&lt;h4 id="collecting--sending-data-from-the-edge"&gt;1. Collecting &amp;amp; Sending Data from the Edge&lt;/h4&gt;

&lt;p&gt;To make this scalable, I treat the sensor data like an industrial payload. It’s not just a number; it’s a structured JSON object containing the ID, raw metrics, and a pre-calculated status flag.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Arduino Payload&lt;/strong&gt;&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-xml"&gt;{ 
"id": "pothos_01",    // Device identifier (like a PLC tag) 
"raw": 715,  		// Raw ADC value (0-1023) 
"pct": 19,  		// Calculated moisture percentage 
"stat": "DRY_ALERT"   // Pre-computed status 
}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Why compute status at the edge?&lt;/strong&gt; In factories, PLCs make local decisions (e.g., stop motor, trigger alarm). Here, the Arduino determines “DRY_ALERT” so the database can trigger alerts without recalculating thresholds.&lt;/p&gt;
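&lt;p&gt;As a rough sketch of that edge logic (the calibration values and the 20% threshold here are hypothetical; tune them for your sensor), the status computation boils down to a few lines of arithmetic, shown here as a shell sketch you can run on a laptop:&lt;/p&gt;

```shell
#!/bin/sh
# Hypothetical calibration values -- tune for your sensor
RAW_WET=300    # ADC reading in saturated soil
RAW_DRY=800    # ADC reading in bone-dry soil

raw=715        # example reading from the payload above

# Map the raw ADC value (0-1023) to a 0-100 moisture percentage
pct=$(( (RAW_DRY - raw) * 100 / (RAW_DRY - RAW_WET) ))
if [ "$pct" -lt 0 ]; then pct=0; fi
if [ "$pct" -gt 100 ]; then pct=100; fi

# Pre-compute the status flag on the device, like a PLC would
if [ "$pct" -lt 20 ]; then stat="DRY_ALERT"; else stat="OK"; fi

printf '{"id":"pothos_01","raw":%d,"pct":%d,"stat":"%s"}\n' "$raw" "$pct" "$stat"
```

&lt;p&gt;The same arithmetic runs comfortably on the Arduino itself; computing the flag at the edge keeps the decision local even if the broker or database is briefly unreachable.&lt;/p&gt;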

&lt;h4 id="the-ingest-pipeline"&gt;2. The Ingest Pipeline&lt;/h4&gt;

&lt;p&gt;Below are two approaches to send plant data to InfluxDB. In this project, I went with MQTT and Telegraf, which are more standard for an industrial setup.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/5McEkD3dooB2Ii4nfJQ6D1/2d370c54ba97a41a460a66ec05c07af1/Screenshot_2026-02-19_at_10.02.34â__AM.png" alt="Plant Buddy Ingest Pipeline" /&gt;&lt;/p&gt;

&lt;p&gt;Telegraf acts as the gateway, subscribing to the MQTT broker and translating the JSON into InfluxDB Line Protocol. This configuration is identical to what you’d see in a manufacturing plant monitoring vibration sensors.&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-toml"&gt;# telegraf.conf - Complete Working Example
[agent]
  interval = "10s"
  flush_interval = "10s"

[[inputs.mqtt_consumer]]
  servers = ["tcp://127.0.0.1:1883"]
  topics = ["home/livingroom/plant/moisture"]
  data_format = "json"

  # Tags become indexed dimensions (fast filtering)
  tag_keys = ["id", "stat"]

  # Numeric JSON keys ("raw", "pct") are parsed as fields automatically

[[outputs.influxdb_v2]]
  urls = ["http://127.0.0.1:8181"]
  token = "$INFLUX_TOKEN"
  organization = "my-org"
  bucket = "plant_data"&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: If Telegraf runs in Docker, use &lt;code class="language-markup"&gt;host.docker.internal:8181&lt;/code&gt; to reach the database.&lt;/p&gt;
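&lt;p&gt;For reference, once Telegraf applies this config, the JSON payload from earlier lands in InfluxDB as a single line protocol point, roughly like the following (the measurement name defaults to the input plugin name unless you override it, and the timestamp is illustrative):&lt;/p&gt;

```
mqtt_consumer,id=pothos_01,stat=DRY_ALERT raw=715,pct=19 1710057600000000000
```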

&lt;h4 id="time-series-database-influxdb-3-docker-container"&gt;3. Time Series Database: InfluxDB 3 (Docker Container)&lt;/h4&gt;

&lt;p&gt;InfluxDB 3 Core runs locally in Docker as the time series database. It stores soil moisture readings and enables real-time analytics, all without depending on external cloud connectivity.&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;# Create persistent storage 
mkdir -p ~/influxdb3-data

# Run InfluxDB 3 Core with proper configuration
docker run --rm -p 8181:8181 \
  -v $PWD/data:/var/lib/influxdb3/data \
  -v $PWD/plugins:/var/lib/influxdb3/plugins \
  influxdb:3-core influxdb3 serve \
    --node-id=my-node-0 \
    --object-store=file \
    --data-dir=/var/lib/influxdb3/data \
    --plugin-dir=/var/lib/influxdb3/plugins&lt;/code&gt;&lt;/pre&gt;

&lt;h4 id="the-ai-interface-influxdb-mcp--claude"&gt;4. The “AI” Interface: InfluxDB MCP &amp;amp; Claude&lt;/h4&gt;

&lt;p&gt;Instead of writing SQL queries or building dashboards, the system connects an LLM to InfluxDB through the Model Context Protocol (MCP). I’ve written another blog post on how to connect InfluxDB 3 to MCP, which you can find &lt;a href="https://www.influxdata.com/blog/influxdb-3-mcp-server-claude/"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Now the question isn’t:
&lt;strong&gt;“What’s the SQL query for average soil moisture over the last 24 hours?”&lt;/strong&gt;
&lt;br /&gt;&lt;/p&gt;

&lt;p&gt;It becomes:
&lt;strong&gt;“Has the plant been dry today?”&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The LLM generates the correct SQL under the hood. If needed, the generated query can be inspected. This makes time series analytics accessible through conversation.&lt;/p&gt;
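&lt;p&gt;For example, a question like “Has the plant been dry today?” might be translated into SQL along these lines (the measurement and column names follow this project’s schema; the exact query the LLM generates can vary):&lt;/p&gt;

```sql
SELECT count(*) AS dry_readings
FROM mqtt_consumer
WHERE stat = 'DRY_ALERT'
  AND time >= now() - INTERVAL '1 day';
```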

&lt;p&gt;&lt;code class="language-markup"&gt;claude_desktop_config.json&lt;/code&gt;&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-sql"&gt;{
  "mcpServers": {
    "influxdb": {
      "command": "docker",
      "args": [
        "run",
        "--rm",
        "--interactive",
        "--add-host=host.docker.internal:host-gateway",
        "--env",
        "INFLUX_DB_PRODUCT_TYPE",
        "--env",
        "INFLUX_DB_INSTANCE_URL",
        "--env",
        "INFLUX_DB_TOKEN",
        "influxdata/influxdb3-mcp-server"
      ],
      "env": {
        "INFLUX_DB_PRODUCT_TYPE": "core",
        "INFLUX_DB_INSTANCE_URL": "http://host.docker.internal:8181",
        "INFLUX_DB_TOKEN": "YOUR_RESOURCE_TOKEN"
      }
    }
  }
}&lt;/code&gt;&lt;/pre&gt;

&lt;h4 id="the-result"&gt;The Result:&lt;/h4&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/5ic88rDutPS2omn2Z6tD1k/908b17ccb43b429d80c7dfa134de9dd2/Screenshot_2026-02-19_at_10.08.18â__AM.png" alt="Plant Buddy result" /&gt;&lt;/p&gt;

&lt;h2 id="whats-next"&gt;What’s next&lt;/h2&gt;

&lt;p&gt;In the next post, we will upgrade this Plant Buddy project to do more than passively monitor. New features will include:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;A water pump, motor, and small display&lt;/strong&gt;.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Automatic watering&lt;/strong&gt; when the plant enters &lt;code class="language-markup"&gt;DRY_ALERT&lt;/code&gt;.&lt;/li&gt;
  &lt;li&gt;An extended system with &lt;strong&gt;light and temperature sensors&lt;/strong&gt; to determine how placement of the potted plant affects its health, especially during trips when no one is home.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Try building one yourself with &lt;a href="https://www.influxdata.com/downloads/?utm_source=website&amp;amp;utm_medium=plant_buddy_influxdb_3&amp;amp;utm_content=blog"&gt;InfluxDB 3&lt;/a&gt;! We would love to hear your questions and comments in our &lt;a href="https://community.influxdata.com"&gt;community forum&lt;/a&gt;, &lt;a href="https://join.slack.com/t/influxcommunity/shared_invite/zt-3hevuqtxs-3d1sSfGbbQgMw2Fj66rZsA"&gt;Slack&lt;/a&gt;, or Discord.&lt;/p&gt;
</description>
      <pubDate>Tue, 10 Mar 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/plant-buddy-influxdb-3/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/plant-buddy-influxdb-3/</guid>
      <category>Developer</category>
      <author>Suyash Joshi (InfluxData)</author>
    </item>
    <item>
      <title>The "Now" Problem: Why BESS Operations Demand Last Value Caching</title>
      <description>&lt;p&gt;Battery Energy Storage Systems (BESS) represent one of the most unforgiving environments for real-time data. Unlike a passive asset, a battery is a complex electrochemical system where safety and revenue are determined by split-second decisions. In this context, “average” latency can become a serious problem. Performance depends entirely on one key question:&lt;/p&gt;

&lt;h2 id="what-is-happening-right-now"&gt;“What is happening right now?”&lt;/h2&gt;

&lt;p&gt;For grid operators, Energy Management Systems (EMS), and trading desks, this is the most critical question. To answer it, operations teams rely on dashboards that answer:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Safety &amp;amp; Health&lt;/strong&gt;: What is the current State of Health (SoH) of my BESS operations? Is the site healthy, or are there active thermal alarms?&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Bottlenecks&lt;/strong&gt;: What is limiting performance right now? (Is it a Power Conversion System [PCS] derate, a specific rack, or a container-level issue?)&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Revenue&lt;/strong&gt;: What is the precise State of Charge (SoC) available for immediate dispatch?&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="the-challenge-the-latest-value-bottleneck"&gt;The challenge: the “latest value” bottleneck&lt;/h2&gt;

&lt;p&gt;“Current state” dashboards create a punishing workload for standard time series databases. A single utility-scale site might generate 50,000+ distinct signals (high cardinality) from cells, inverters, and meters. To display a “Live View,” the database must repeatedly scan recent data on disk to find the most recent timestamp for every single one of those signals.&lt;/p&gt;

&lt;p&gt;At the site level, this is difficult. &lt;strong&gt;At fleet scale with more assets, more concurrent users, and millions of streams, this “scan-for-latest” pattern becomes a crippling bottleneck.&lt;/strong&gt;&lt;/p&gt;

&lt;h2 id="the-solution-last-value-cache"&gt;The solution: Last Value Cache&lt;/h2&gt;

&lt;p&gt;InfluxDB 3 solves this architectural conflict with its built-in &lt;strong&gt;Last Value Cache (LVC)&lt;/strong&gt;. Instead of scanning historical data to compute the current state, LVC automatically caches the most recent values (or the last N values) in memory for your critical signals. This keeps “current state” queries fast (typically &amp;lt;10ms) and consistent, regardless of write throughput or fleet size, bridging the gap between historical analysis and real-time situational awareness.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/3P8QsCW6bSfmliLYxMmNVP/5b074db94e9b2f58b57a9f18c65922cb/Image-2026-02-23_16_33_24.png" alt="BESS LVC solution" /&gt;&lt;/p&gt;

&lt;h2 id="how-to-use-influxdbs-last-value-cache-lvc-in-memory-for-bess-operations"&gt;How to use InfluxDB’s Last Value Cache (LVC) in memory for BESS operations&lt;/h2&gt;

&lt;h4 id="define-your-hot-signals"&gt;1. Define Your “Hot” Signals&lt;/h4&gt;

&lt;p&gt;Don’t cache everything. Pick the specific high-leverage fields that power your “Current State” dashboards and safety alerts, for example:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Safety&lt;/strong&gt;: Cell Temperature (&lt;code class="language-markup"&gt;temp_c&lt;/code&gt;), Voltage (&lt;code class="language-markup"&gt;volts&lt;/code&gt;), Alarm Severity (&lt;code class="language-markup"&gt;alarm_level&lt;/code&gt;)&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Performance&lt;/strong&gt;: State of Charge (&lt;code class="language-markup"&gt;soc&lt;/code&gt;), State of Health (&lt;code class="language-markup"&gt;soh&lt;/code&gt;), Inverter Mode (&lt;code class="language-markup"&gt;inv_state&lt;/code&gt;)&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Ops&lt;/strong&gt;: Comms Heartbeat (&lt;code class="language-markup"&gt;last_seen&lt;/code&gt;), Charge/Discharge Limits (&lt;code class="language-markup"&gt;p_limit_kw&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id="design-your-keys"&gt;2. Design Your Keys&lt;/h4&gt;

&lt;p&gt;Choose the columns that define how operators slice the system. These will become your cache keys.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Best Practice&lt;/strong&gt;: Match your dashboard filters. If your dashboard filters by &lt;code class="language-markup"&gt;site_id → container_id → rack_id&lt;/code&gt;, those are your keys.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cardinality Note&lt;/strong&gt;: Keep keys efficient. While InfluxDB 3 handles high cardinality exceptionally well, unnecessary keys (like a unique &lt;code class="language-markup"&gt;transaction_id&lt;/code&gt; per second) waste memory. Stick to asset identifiers.&lt;/p&gt;

&lt;h4 id="shape-the-cache-behavior"&gt;3. Shape the Cache Behavior&lt;/h4&gt;

&lt;p&gt;Configure the cache to match your visualization needs:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;count&lt;/code&gt;:
    &lt;ul&gt;
      &lt;li&gt;Set to &lt;strong&gt;1&lt;/strong&gt; for Gauges, Status Lights, and “Single Value” tiles.&lt;/li&gt;
      &lt;li&gt;Set to &lt;strong&gt;3–10&lt;/strong&gt; for “Sparklines” (mini-charts) where operators need to see the immediate trend (e.g., “Is voltage diving or stable?”).&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;ttl&lt;/code&gt; (&lt;strong&gt;time-to-live&lt;/strong&gt;): Define when data becomes “stale.” If a sensor stops reporting, how long should the dashboard show the last value before switching to “Offline/Unknown”? (e.g., &lt;code class="language-markup"&gt;30s&lt;/code&gt; for safety, &lt;code class="language-markup"&gt;1h&lt;/code&gt; for capacity).&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id="create-the-cache"&gt;4. Create the Cache&lt;/h4&gt;

&lt;p&gt;Create the Last Value Cache using the Explorer UI, the HTTP API, or the CLI.&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 create last_cache \
  --database bess_db \
  --table bess_telemetry \
  --token AUTH_TOKEN \
  --key-columns site_id,rack_id \
  --value-columns soc,temp_max,alarm_state \
  --count 5 \
  --ttl 30s \
  bess_ops_lvc&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Key arguments:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Database name: bess_db&lt;/li&gt;
  &lt;li&gt;Table name: bess_telemetry&lt;/li&gt;
  &lt;li&gt;Cache name: bess_ops_lvc&lt;/li&gt;
  &lt;li&gt;Key columns: site_id, rack_id (tag columns used as cache keys)&lt;/li&gt;
  &lt;li&gt;Value columns: soc, temp_max, alarm_state (field values to cache)&lt;/li&gt;
  &lt;li&gt;Count: 5 (the number of values to cache per unique key column combination, range 1-10)&lt;/li&gt;
  &lt;li&gt;TTL: 30s (time duration until data becomes stale)&lt;/li&gt;
  &lt;li&gt;Token: InfluxDB 3 authentication token&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id="the-warm-cache-advantage"&gt;5. The “Warm Cache” Advantage&lt;/h4&gt;

&lt;p&gt;Unlike a standard cache that starts empty, LVC in InfluxDB 3 is “warm” by default.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;On creation&lt;/strong&gt;: It instantly backfills from existing data on disk.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;On restart&lt;/strong&gt;: It automatically reloads the state.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why it matters&lt;/strong&gt;: Ops teams never see “blank” dashboards after a maintenance window. The system is ready the moment it comes back online.&lt;/p&gt;

&lt;h4 id="querying-the-cache"&gt;6. Querying the Cache&lt;/h4&gt;

&lt;p&gt;Use standard SQL with the &lt;code class="language-markup"&gt;last_cache()&lt;/code&gt; function, which replaces a complex analytical query with a simple lookup.&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 create last_cache \
  --database bess_db \
  --table bess_telemetry \
  --token AUTH_TOKEN \
  --key-columns site_id,rack_id \
  --value-columns soc,temp_max,alarm_state \
  --count 5 \
  --ttl 30s \
  bess_ops_lvc&lt;/code&gt;&lt;/pre&gt;

&lt;h4 id="architecture-built-for-scale-using-influxdb-3-enterprise"&gt;7. Architecture: Built for Scale Using InfluxDB 3 Enterprise&lt;/h4&gt;

&lt;p&gt;Last Value Cache can help separate heavy “writing” from “reading” workloads:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Dedicated Ingest Nodes&lt;/strong&gt;: Handle the massive flood of 1Hz sensor data.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Dedicated Query Nodes&lt;/strong&gt;: Host the LVC in memory to serve dashboards instantly.&lt;/li&gt;
&lt;/ul&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 create last_cache \
  --database bess_db \
  --table bess_telemetry \
  --token AUTH_TOKEN \
  --node-spec "nodes:query-01,query-02" \
  --key-columns site_id,rack_id \
  --value-columns soc,temp_max,alarm_state \
  --count 5 \
  --ttl 30s \
  bess_ops_lvc&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;The benefit&lt;/strong&gt;: Heavy write loads (e.g., a fleet-wide firmware update logging millions of events) will never slow down the control room’s live view.&lt;/p&gt;

&lt;h4 id="the-value-of-lvc"&gt;The value of LVC&lt;/h4&gt;

&lt;p&gt;In BESS operations, latency isn’t just a delay; it’s a risk. InfluxDB 3’s Last Value Cache eliminates that risk by serving the “current state” of your entire fleet instantly from memory, removing the need for complex external caching.&lt;/p&gt;

&lt;p&gt;When you’re ready to start building, &lt;a href="https://www.influxdata.com/products/influxdb3-enterprise/?utm_source=website&amp;amp;utm_medium=bess_last_value_caching&amp;amp;utm_content=blog"&gt;download InfluxDB 3 Enterprise&lt;/a&gt;, or &lt;a href="https://www.influxdata.com/contact-sales-enterprise/?utm_source=website&amp;amp;utm_medium=bess_last_value_caching&amp;amp;utm_content=blog"&gt;contact us&lt;/a&gt; to talk about running a proof of concept.&lt;/p&gt;
</description>
      <pubDate>Thu, 26 Feb 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/bess-last-value-caching/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/bess-last-value-caching/</guid>
      <category>Developer</category>
      <author>Suyash Joshi (InfluxData)</author>
    </item>
    <item>
      <title>From Legacy Data Historians to a Modern, Open Industrial Data Stack</title>
      <description>&lt;p&gt;We recently sat down with founder and principal consultant at recultiv8, &lt;a href="https://za.linkedin.com/in/coenraadpretorius"&gt;Coenraad Pretorius&lt;/a&gt;, who drew on his years of data engineering experience in the manufacturing and energy sectors to share key industrial IoT insights. In this article, I list the top takeaways; you can also watch the full webinar recording &lt;a href="https://www.influxdata.com/resources/modernizing-industrial-data-stacks-energy-optimization-with-recultiv8-influxdb"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id="the-challenge-with-traditional-data-historians"&gt;The challenge with traditional data historians&lt;/h2&gt;

&lt;p&gt;Industrial systems generate large volumes of time series data from machines, sensors, and control systems. Historically, this data has been managed using proprietary data historian platforms.&lt;/p&gt;

&lt;p&gt;These systems often lead to the following challenges:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Complexity&lt;/strong&gt;: Traditional stacks involve many tightly coupled components: SCADA systems, OPC servers, historians, data extraction tools, and analytics layers. Each layer requires specialized skills, making systems difficult to debug, extend, or modernize.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;High cost&lt;/strong&gt;: Per-tag licensing, annual maintenance fees, and specialized training significantly increase the total cost of ownership, particularly as systems scale.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Slow time to insight&lt;/strong&gt;: Extracting and analyzing data often takes days or weeks, delaying decisions and limiting optimization opportunities.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;The analytics gap&lt;/strong&gt;: Traditional historians prioritize &lt;strong&gt;data storage&lt;/strong&gt;, not &lt;strong&gt;data analysis&lt;/strong&gt;. Common pain points include proprietary query languages, reliance on Excel exports, overloaded BI integrations, and additional licensing for advanced features. As a result, time to action is measured in days or weeks rather than hours, which is an unacceptable delay for modern industrial operations.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id="data-historian-technical-architecture"&gt;Data Historian Technical Architecture&lt;/h4&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/6Atyss4Y4ewXA83dyvy5tP/210bc838fdb9f5c416b8c1af0603f021/Screenshot_2026-02-11_at_9.41.38â__PM.png" alt="Data Historian Traditional Architecture" /&gt;&lt;/p&gt;

&lt;h2 id="a-modern-open-architecture-edge--cloud"&gt;A modern, open architecture: edge + cloud&lt;/h2&gt;

&lt;p&gt;To address these limitations, Coenraad presented a modern architecture built around InfluxDB 3, open source tooling, and cloud analytics. The core idea is a &lt;strong&gt;clear separation of responsibilities&lt;/strong&gt; that leads to improved performance, security, cost efficiency, and scalability while keeping systems simpler and easier to operate.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Edge systems&lt;/strong&gt; handle real-time ingestion, short-term storage, and operational dashboards close to the data source.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Cloud systems&lt;/strong&gt; handle long-term storage, historical analysis, and advanced analytics without impacting operational performance.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id="modern-iiot-technical-architecture"&gt;Modern IIoT Technical Architecture&lt;/h4&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/3LQnfjtGdArcpRRAPgZnel/a49068e5d20e85b5f92e3f927231f93b/Screenshot_2026-02-11_at_9.56.11â__PM.png" alt="Modern Stack Overview" /&gt;&lt;/p&gt;

&lt;h2 id="example-from-coenraads-case-study"&gt;Example from Coenraad’s case study&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Typical deployment setup&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Four OPC UA servers&lt;/li&gt;
  &lt;li&gt;10k+ tags&lt;/li&gt;
  &lt;li&gt;Windows-based servers&lt;/li&gt;
  &lt;li&gt;Telegraf running as Windows service&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Configuration approach&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Split config files (agent, inputs, outputs)&lt;/li&gt;
  &lt;li&gt;Custom Starlark processor for schema management&lt;/li&gt;
  &lt;li&gt;Environment variables for cloud credentials&lt;/li&gt;
&lt;/ul&gt;
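&lt;p&gt;As an illustration of that Starlark step (the tag names here are hypothetical, not Coenraad’s actual config), a schema-normalizing processor in Telegraf might look like:&lt;/p&gt;

```toml
[[processors.starlark]]
  source = '''
def apply(metric):
    # Normalize tag names coming from different OPC UA servers
    if "Site" in metric.tags:
        metric.tags["site"] = metric.tags["Site"]
        metric.tags.pop("Site")
    return metric
'''
```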

&lt;p&gt;&lt;strong&gt;Result&lt;/strong&gt;: Rapid implementation of the modern data stack using open source solutions resulted in a one-off saving of $70k, plus recurring annual savings.&lt;/p&gt;

&lt;h2 id="why-this-approach-works"&gt;Why this approach works&lt;/h2&gt;

&lt;p&gt;This modern stack delivers several practical benefits:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Simpler systems&lt;/strong&gt; built with tools most developers already know, like SQL and Python.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Faster dashboards&lt;/strong&gt; that move from multi-second load times to near-instant response, as detailed in this &lt;a href="https://h3xagn.com/blazingly-fast-dashboards-with-influxdb"&gt;blog post&lt;/a&gt;.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Lower costs&lt;/strong&gt; from replacing proprietary licensing with open source and consumption-based services.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Flexible data pipelines&lt;/strong&gt; that use Telegraf to ingest data from industrial protocols such as OPC UA, MQTT, and Modbus into InfluxDB 3 Core, with optional streaming to the cloud.&lt;/li&gt;
&lt;/ul&gt;
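&lt;p&gt;As a sketch of such a pipeline, a minimal Telegraf OPC UA input might look like this (the endpoint, node identifiers, and security settings are placeholders to adapt to your environment):&lt;/p&gt;

```toml
[[inputs.opcua]]
  endpoint = "opc.tcp://192.168.1.10:4840"
  security_policy = "None"
  security_mode = "None"

  [[inputs.opcua.nodes]]
    name = "motor_temperature"
    namespace = "2"
    identifier_type = "s"
    identifier = "Line1.Motor.Temp"
```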

&lt;h2 id="recap"&gt;Recap&lt;/h2&gt;

&lt;p&gt;The difference is fairly cut and dry: traditional data historians often limit agility and slow down insights, while modern industrial data stacks focus on speed, openness, and maintainability by separating edge operations from cloud analytics and using familiar, developer-friendly tools. For industrial and IIoT teams, modernizing the data pipeline is now foundational. To learn more, read the Teréga &lt;a href="https://www.influxdata.com/blog/terega-replaced-legacy-data-historian-with-influxdb-aws-io-base/"&gt;case study&lt;/a&gt; and connect with our community in the InfluxDB forums.&lt;/p&gt;
</description>
      <pubDate>Thu, 12 Feb 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/modern-industrial-stack-influxdb/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/modern-industrial-stack-influxdb/</guid>
      <category>Developer</category>
      <author>Suyash Joshi (InfluxData)</author>
    </item>
    <item>
      <title>Building with the InfluxDB 3 MCP Server &amp; Claude</title>
      <description>&lt;p&gt;InfluxDB 3 &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/admin/mcp-server/"&gt;Model Context Protocol (MCP) server&lt;/a&gt; lets you manage and query InfluxDB 3 (Core, Enterprise, Dedicated, Serverless, Clustered) using natural language through popular LLM tools like Claude Desktop, ChatGPT Desktop, and other MCP-compatible agents.&lt;/p&gt;

&lt;p&gt;The setup is straightforward. In this article, we will focus on &lt;strong&gt;setting up InfluxDB 3 Enterprise&lt;/strong&gt; using Docker with &lt;strong&gt;Claude Desktop&lt;/strong&gt;.&lt;/p&gt;

&lt;h2 id="prerequisites"&gt;Prerequisites&lt;/h2&gt;

&lt;p&gt;Install InfluxDB 3 Enterprise using Docker (if you’re a new user, try out our &lt;a href="https://www.influxdata.com/lp/influxdb-database/?utm_source=website&amp;amp;utm_medium=influxdb_3_mcp_server_claude&amp;amp;utm_content=blog"&gt;free trial&lt;/a&gt;) on your machine by running the installer script:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;curl -O https://www.influxdata.com/d/install_influxdb3.sh &amp;amp;&amp;amp; sh install_influxdb3.sh enterprise&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;br /&gt;&lt;/p&gt;

&lt;p&gt;InfluxDB 3 Explorer UI will also make it easier to manage InfluxDB operations, so it’s recommended that you &lt;a href="https://docs.influxdata.com/influxdb3/explorer/install/#installation-methods"&gt;install&lt;/a&gt; it (using Docker) during initial setup or afterwards.&lt;/p&gt;

&lt;h2 id="create-an-influxdb-3-token-for-mcp-server"&gt;1. Create an InfluxDB 3 token for MCP server&lt;/h2&gt;

&lt;p&gt;The easiest way to create a scoped token is within &lt;strong&gt;InfluxDB 3 Explorer&lt;/strong&gt; UI.&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Open Explorer at http://localhost:8888.&lt;/li&gt;
  &lt;li&gt;Go to &lt;strong&gt;Manage Tokens&lt;/strong&gt;.&lt;/li&gt;
  &lt;li&gt;Create a &lt;strong&gt;Database (resource) token&lt;/strong&gt; with &lt;strong&gt;read&lt;/strong&gt; (and optional write) &lt;strong&gt;permissions&lt;/strong&gt; for the databases you want your LLM to access.&lt;/li&gt;
  &lt;li&gt;Copy the token string and store it securely; the MCP server will use it as &lt;code class="language-markup"&gt;INFLUX_DB_TOKEN&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Alternatively, you can run the following command inside a Docker container to create the token.&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;docker exec -it YOUR_CONTAINER_ID influxdb3 create token \
  --permission "db:DATABASE1,DATABASE2:read,write" \
  --name "Read-write on DATABASE1, DATABASE2" \
  --token YOUR_ADMIN_TOKEN \
  --expiry 1y&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;br /&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security Tip&lt;/strong&gt;: Use resource tokens with the minimum required permissions and an expiration date, rather than providing a full admin token to the LLM MCP.&lt;/p&gt;

&lt;h2 id="configure-the-claude-desktop-mcp-server-docker-for-influxdb-3-enterprise"&gt;2. Configure the Claude Desktop MCP server (Docker) for InfluxDB 3 Enterprise&lt;/h2&gt;

&lt;p&gt;The InfluxDB 3 MCP server runs as a separate service and can be started using either &lt;a href="https://nodejs.org/en/download/current"&gt;Node.js&lt;/a&gt; or Docker. We will use Docker, as it’s already running InfluxDB 3 and Explorer UI.&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Open Claude Desktop.&lt;/li&gt;
  &lt;li&gt;Navigate to Settings → Developers → Edit Config.&lt;/li&gt;
  &lt;li&gt;Open the Claude Desktop configuration file, add the following to the existing file, save, and restart Claude Desktop.&lt;/li&gt;
&lt;/ol&gt;

&lt;pre class="line-numbers"&gt;&lt;code class="language-json"&gt;{
  "mcpServers": {
    "influxdb": {
      "command": "docker",
      "args": [
        "run",
        "--rm",
        "--interactive",
        "--add-host=host.docker.internal:host-gateway",
        "--env",
        "INFLUX_DB_PRODUCT_TYPE",
        "--env",
        "INFLUX_DB_INSTANCE_URL",
        "--env",
        "INFLUX_DB_TOKEN",
        "influxdata/influxdb3-mcp-server"
      ],
      "env": {
        "INFLUX_DB_PRODUCT_TYPE": "enterprise",
        "INFLUX_DB_INSTANCE_URL": "http://host.docker.internal:8181",
        "INFLUX_DB_TOKEN": "YOUR_RESOURCE_TOKEN"
      }
    }
  }
}&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;br /&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/W8sa9IGq6VeCO5bsc8FTl/976bc915980cadadc4b9411e6c9d2522/Claude_desktop_1.jpg" alt="Claude desktop 1" /&gt;&lt;/p&gt;

&lt;h2 id="use-claude-with-influxdb-via-mcp"&gt;3. Use Claude with InfluxDB via MCP&lt;/h2&gt;

&lt;p&gt;Once restarted, verify that Claude can access the InfluxDB 3 MCP server by chatting with it.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/3copMcWux1iw9eHUigdSao/7de96f342fc2143eabb9cc76e74efd85/Claude_desktop_2.jpg" alt="Claude desktop 2" /&gt;&lt;/p&gt;

&lt;p&gt;Finally, you can interact with the database however you’d like, such as performing operations, getting analytics, etc., using natural language. Try the following prompts:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;“List all the databases and permissions you have access to.”&lt;/li&gt;
  &lt;li&gt;“Show me the schema for the &lt;code class="language-markup"&gt;sensor_data&lt;/code&gt; table.”&lt;/li&gt;
  &lt;li&gt;“Analyze bitcoin sample data price in the last 30 days.” You can also see the actual SQL query that gets executed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/5tIqhG4xebUYR6CL1bZnVx/08e82f4f590945ea5c77fecce7234ee7/Claude_desktop_3.jpg" alt="Claude desktop 3" /&gt;&lt;/p&gt;

&lt;h2 id="connecting-other-llms"&gt;Connecting other LLMs&lt;/h2&gt;

&lt;p&gt;In this article, we used Claude Desktop, but the InfluxDB 3 MCP server itself is generic: any LLM agent that supports the Model Context Protocol, including ChatGPT Desktop, can connect to it. In a follow-up article, we’ll cover how to run the MCP server and an LLM locally using other tools. We’d love to hear your comments and questions on our community &lt;a href="https://community.influxdata.com"&gt;website&lt;/a&gt;, &lt;a href="https://www.influxdata.com/slack"&gt;Slack&lt;/a&gt;, or &lt;a href="https://discord.gg/YFFJvkfb"&gt;Discord&lt;/a&gt;.&lt;/p&gt;
</description>
      <pubDate>Fri, 30 Jan 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/influxdb-3-mcp-server-claude/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/influxdb-3-mcp-server-claude/</guid>
      <category>Developer</category>
      <author>Suyash Joshi (InfluxData)</author>
    </item>
    <item>
      <title>Optimizing BESS Operations: Real-Time Monitoring &amp; Predictive Maintenance with InfluxDB 3</title>
<description>&lt;p&gt;For IT and OT engineers managing Battery Energy Storage Systems (BESS) and other distributed energy resources (DER), the challenge isn’t just dealing with energy. It’s a data problem: managing the massive stream of real-time telemetry these systems generate. For example, a BESS site produces a constant stream of time series data from BMS, PCS, SCADA, EMS, and more, and operating it means ingesting, correlating, and acting on that data in real time. The challenge also changes with scope. At a single site, telemetry drives asset health and safe operation, from cell temperatures to inverter vibration. At fleet scale, the same data supports coordinated operations and incident response across sites. When retained at full resolution, it also enables historical analysis for degradation tracking, predictive maintenance, and long-term optimization.&lt;/p&gt;

&lt;p&gt;Data flows from Operational Technology (OT) signals to Information Technology (IT) systems. Most BESS operators already run a slew of disparate systems:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;BMS&lt;/strong&gt; answers: Are the batteries safe and healthy?&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;PCS&lt;/strong&gt; answers: Can I deliver or absorb power?&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;EMS&lt;/strong&gt; answers: When should I charge or discharge?&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;SCADA&lt;/strong&gt; answers: What’s happening right now on site?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Individually, these systems work well. The problem starts when you need a &lt;strong&gt;unified, time-aligned view&lt;/strong&gt; across all of them, especially across multiple sites. &lt;strong&gt;InfluxDB sits at the center as a shared time series platform&lt;/strong&gt;, consolidating telemetry from all sources and serving it to operations, analytics, and automation workflows.&lt;/p&gt;

&lt;h2 id="one-popular-architecture-tig-telegraf--influxdb--grafana"&gt;One popular architecture: TIG (Telegraf → InfluxDB → Grafana)&lt;/h2&gt;

&lt;p&gt;A typical pattern for BESS telemetry is the &lt;strong&gt;TIG stack&lt;/strong&gt;, because it cleanly separates &lt;strong&gt;collection&lt;/strong&gt;, &lt;strong&gt;storage/query&lt;/strong&gt;, and &lt;strong&gt;visualization&lt;/strong&gt; and scales from a single site to a fleet.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/2SPhzZMaZkYFeJN3J0uzCj/408377cef1eb966ce2b5b8c8e3bd4457/BESS_Architecture_4_with_shadows.png" alt="BESS graphic" /&gt;&lt;/p&gt;

&lt;h4 id="telegraf-collection--normalization"&gt;Telegraf (Collection + Normalization)&lt;/h4&gt;

&lt;p&gt;Telegraf acts as a lightweight collection agent at the edge or in your DMZ, with plugins for common OT and IoT protocols (Modbus, OPC-UA, MQTT, SNMP, HTTP). Use it when you want:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Fast onboarding of new signals without writing custom collectors&lt;/li&gt;
  &lt;li&gt;Store-and-forward style buffering patterns at the edge (architecture-dependent)&lt;/li&gt;
  &lt;li&gt;A consistent metric format before data hits your central platform&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id="influxdb-3-the-time-series-database"&gt;InfluxDB 3 (The Time Series Database)&lt;/h4&gt;

&lt;p&gt;InfluxDB is where BESS telemetry becomes operationally usable, offering:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;High-speed ingestion&lt;/strong&gt; so you don’t drop high-frequency telemetry during bursts (faults, transients, dispatch changes).&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;High-cardinality modeling&lt;/strong&gt; so you can tag by &lt;code class="language-markup"&gt;site/rack/module/cell/inverter&lt;/code&gt;.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;SQL support&lt;/strong&gt; so IT/data teams can query using familiar tools and patterns (and integrate with BI/analytics stacks).&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Low-latency “hot path” reads&lt;/strong&gt; using &lt;strong&gt;Last Value Cache&lt;/strong&gt; and &lt;strong&gt;Distinct Value Cache&lt;/strong&gt; for dashboards that need current state now (SoC, alarms, inverter status, thermal conditions).&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Scalable&lt;/strong&gt;: Deploy a single-node InfluxDB 3 Core instance or a multi-node Enterprise cluster, depending on your needs.&lt;/li&gt;
&lt;/ul&gt;
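&lt;p&gt;To make the SQL point concrete: once telemetry is tagged consistently, a per-rack rollup is a single query. The database, table, and column names below are illustrative assumptions, not a fixed schema:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;# Average cell temperature per rack at one site over the last 15 minutes
# (hypothetical schema: a "bms" table tagged by site and rack)
influxdb3 query \
  --database bess \
  --token YOUR_TOKEN \
  "SELECT rack, avg(cell_temp) AS avg_temp
   FROM bms
   WHERE site = 'site-01'
     AND time &amp;gt;= now() - INTERVAL '15 minutes'
   GROUP BY rack
   ORDER BY avg_temp DESC"&lt;/code&gt;&lt;/pre&gt;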

&lt;h4 id="grafana-or-power-biapache-superset-etc"&gt;Grafana (or Power BI/Apache, SuperSet, etc.)&lt;/h4&gt;
&lt;p&gt;Grafana turns fast queries into multi-panel dashboards commonly used for:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Live SoC/power/dispatch tracking&lt;/li&gt;
  &lt;li&gt;Temperature gradients and thermal risk monitoring&lt;/li&gt;
  &lt;li&gt;Voltage spreads, imbalance indicators, and fault timelines&lt;/li&gt;
  &lt;li&gt;Per-site and fleet rollups with consistent tags&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="real-time-data-processing-anomaly-detection-for-predictive-maintenance"&gt;Real-time data processing: anomaly detection for predictive maintenance&lt;/h2&gt;

&lt;p&gt;Traditionally, predictive maintenance required a complex pipeline: extracting data to a separate Python application server, running analysis, and writing results back. This adds latency, maintenance overhead, and security risks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;InfluxDB 3 Core &amp;amp; Enterprise brings the data processing to where the data lives.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Using the Processing Engine and ready-made plugins, you can perform stream processing within the database infrastructure.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Real-Time Detection&lt;/strong&gt;: As shown in the “Anomaly Detector” toggle in our demo, the system can identify thresholds (e.g., Temp &amp;gt; 80°C or Vibration Drift) in real-time as data arrives.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Simplified Architecture&lt;/strong&gt;: You eliminate the need for an external Python application server or complex stream-processing clusters (such as Kafka or Flink) to detect spikes.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Operational Plugins&lt;/strong&gt;: Beyond &lt;a href="https://github.com/influxdata/influxdb3_plugins/tree/main/influxdata/mad_check/?utm_source=website&amp;amp;utm_medium=optimizing_bess_operations_influxdb_3&amp;amp;utm_content=blog"&gt;anomaly detection&lt;/a&gt;, plugins handle tasks like &lt;a href="https://github.com/influxdata/influxdb3_plugins/tree/main/influxdata/downsampler/?utm_source=website&amp;amp;utm_medium=optimizing_bess_operations_influxdb_3&amp;amp;utm_content=blog"&gt;downsampling&lt;/a&gt; (converting 10 ms raw data into one-minute averages for long-term storage) and &lt;a href="https://github.com/influxdata/influxdb3_plugins/tree/main/influxdata/notifier/?utm_source=website&amp;amp;utm_medium=optimizing_bess_operations_influxdb_3&amp;amp;utm_content=blog"&gt;alerting&lt;/a&gt; without leaving the platform.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="overview"&gt;Overview&lt;/h2&gt;

&lt;p&gt;BESS operations depend on fast, reliable, and time-aligned telemetry. InfluxDB 3 provides a single platform to support real-time monitoring, anomaly detection, and forecasting at fleet scale without adding unnecessary complexity to your data pipeline.&lt;/p&gt;

&lt;p&gt;If you’re building or operating real-time BESS data systems, you may find our customer case study on &lt;a href="https://www.influxdata.com/customer/juniz/?utm_source=website&amp;amp;utm_medium=optimizing_bess_operations_influxdb_3&amp;amp;utm_content=blog"&gt;ju:niz energy&lt;/a&gt; helpful. As always, we’d love to hear your questions/comments or see what you have built on &lt;a href="https://www.influxdata.com/slack/?utm_source=website&amp;amp;utm_medium=optimizing_bess_operations_influxdb_3&amp;amp;utm_content=blog"&gt;Slack&lt;/a&gt;, &lt;a href="https://discord.com/invite/vZe2w2Ds8B"&gt;Discord&lt;/a&gt;, and our &lt;a href="https://community.influxdata.com/?utm_source=website&amp;amp;utm_medium=optimizing_bess_operations_influxdb_3&amp;amp;utm_content=blog"&gt;Community Forum&lt;/a&gt;.&lt;/p&gt;
</description>
      <pubDate>Tue, 13 Jan 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/optimizing-bess-operations-influxdb-3/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/optimizing-bess-operations-influxdb-3/</guid>
      <category>Developer</category>
      <author>Suyash Joshi (InfluxData)</author>
    </item>
    <item>
      <title>Performing Real-Time Anomaly Detection with InfluxDB 3: An In-Depth Guide</title>
<description>&lt;p&gt;If you’re working with sensors, machines, or embedded systems, your primary goal is simple: no unplanned downtime and smooth operations. This means detecting errors and taking action as soon as possible, ideally preventing them through predictive maintenance before they become critical issues.&lt;/p&gt;

&lt;p&gt;This is where anomaly detection becomes essential. In this blog, we will take a deep dive into anomaly detection using two ready-to-use Python plugins for real-world IoT use cases. We will be leveraging the Python Processing Engine within InfluxDB 3 Core or Enterprise. This means you can detect outliers, level shifts, and unusual patterns without ever leaving your database, simplifying your streaming data process and pipeline right where the data lives.&lt;/p&gt;

&lt;h2 id="understanding-the-anomaly-detection-landscape"&gt;&lt;strong&gt;Understanding the anomaly detection landscape&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;Before diving into the plugins, let’s look at where these approaches fit in the broader anomaly detection ecosystem.&lt;/p&gt;

&lt;h4 id="the-three-approaches-to-anomaly-detection"&gt;The Three Approaches to Anomaly Detection&lt;/h4&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/5Oglj7S1NLOklsGqcuOFZT/c571dd8e62668b4f0248b7f2f702f652/Internal_image.png" alt="three approaches to anomaly deetection" /&gt;&lt;/p&gt;

&lt;p&gt;For most industrial IoT, infrastructure monitoring, and operational scenarios, proven statistical and classical ML methods are not just “good enough”; they’re often the better choice. They’re reliable, battle-tested in production for decades, and explainable (engineers understand why an alert fired), and they can be deployed and start alerting right away, with no model training required.&lt;/p&gt;

&lt;h2 id="how-to-use-mad-and-adtk-plugins-in-influxdb-3"&gt;How to use MAD and ADTK plugins in InfluxDB 3&lt;/h2&gt;

&lt;h4 id="start-influxdb-3-with-the-processing-engine-enabled"&gt;&lt;strong&gt;1. Start InfluxDB 3 with the Processing Engine Enabled&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;Choose a directory for plugins and start the server with it:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 serve \
  --plugin-dir ~/.influxdb3/plugins \
  # other flags...&lt;/code&gt;&lt;/pre&gt;

&lt;h4 id="install-python-dependencies-into-the-processing-engine"&gt;&lt;strong&gt;2. Install Python Dependencies into the Processing Engine&lt;/strong&gt;&lt;/h4&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;# For notifications (optional)
influxdb3 install package httpx twilio

# For ADTK plugin
influxdb3 install package adtk
influxdb3 install package pandas

# For MAD plugin
influxdb3 install package requests&lt;/code&gt;&lt;/pre&gt;

&lt;h4 id="add-plugins-to-the-plugin-directory"&gt;&lt;strong&gt;3. Add Plugins to the Plugin Directory&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;Clone the &lt;code class="language-markup"&gt;influxdb3_plugins&lt;/code&gt; &lt;a href="https://github.com/influxdata/influxdb3_plugins/tree/main"&gt;GitHub repository&lt;/a&gt; and copy the plugin files you need into your &lt;code class="language-markup"&gt;PLUGIN_DIR&lt;/code&gt;, or reference them directly from GitHub.&lt;/p&gt;

&lt;h4 id="access-and-configure-plugins-from-the-plugin-library"&gt;&lt;strong&gt;4. Access and Configure Plugins From the Plugin Library&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;Use InfluxDB Explorer, or follow the steps below in your local console.
&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/c4492604269e44c5ba134d9c70970608/127f5c294716b7cf1ebba96453834775/unnamed.png" alt="" /&gt;&lt;/p&gt;

&lt;h2 id="mad-plugin-for-real-time-spike-detection"&gt;MAD plugin for real-time spike detection&lt;/h2&gt;

&lt;p&gt;The MAD-Based Anomaly Detection &lt;a href="https://github.com/influxdata/influxdb3_plugins/tree/main/influxdata/mad_check"&gt;Plugin&lt;/a&gt; provides real-time anomaly detection for time series data in InfluxDB 3 using Median Absolute Deviation (MAD), which is a pure statistical approach.&lt;/p&gt;

&lt;h5 id="example-use-case-get-instant-alerts-when-temperature-suddenly-spikes-above-the-normal-range"&gt;&lt;strong&gt;Example use case&lt;/strong&gt;: Get instant alerts when temperature suddenly spikes above the normal range.&lt;/h5&gt;

&lt;h5 id="step-1-set-up-notification-handler-separate-plugin-for-alerting-purposehttpsdocsinfluxdatacominfluxdb3enterprisepluginslibraryofficialnotifier"&gt;&lt;strong&gt;Step 1: Set up notification handler (&lt;a href="https://docs.influxdata.com/influxdb3/enterprise/plugins/library/official/notifier/"&gt;separate plugin for alerting purpose&lt;/a&gt;)&lt;/strong&gt;&lt;/h5&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 create trigger \

 --database sensors \

 --plugin-filename gh:influxdata/notifier/notifier_plugin.py \

 --trigger-spec "request:notify" \

 notification_trigger \

 --token YOUR_TOKEN

influxdb3 enable trigger --database sensors notification_trigger --token YOUR_TOKEN&lt;/code&gt;&lt;/pre&gt;

&lt;h5 id="step-2-create-mad-detector-runs-on-every-write"&gt;&lt;strong&gt;Step 2: Create MAD detector (runs on every write)&lt;/strong&gt;&lt;/h5&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 create trigger \

  --database sensors \

  --plugin-filename gh:influxdata/mad_check/mad_check_plugin.py \

  --trigger-spec "all_tables" \

  --trigger-arguments \

    'measurement=environment,\

     mad_thresholds=temperature:2.5:20:5,\

\     senders=slack,\

     slack_webhook_url=https://hooks.slack.com/services/YOUR/WEBHOOK/URL,\

     influxdb3_auth_token=YOUR_TOKEN' \

  temperature_spike_detector \

  --token YOUR_TOKEN

influxdb3 enable trigger --database sensors temperature_spike_detector --token YOUR_TOKEN&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;What this does&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;temperature:2.5:20:5&lt;/code&gt; = monitor temperature field, flag if 2.5× MAD away from median, use 20-point window, alert after 5 consecutive anomalies&lt;/li&gt;
  &lt;li&gt;Triggers on every write for instant detection&lt;/li&gt;
&lt;/ul&gt;
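&lt;p&gt;Under the hood, MAD is simple enough to sketch in a few lines of pure Python. The following is an illustrative re-implementation of the idea, not the plugin’s actual code:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;from statistics import median

def mad_outlier(window, value, k=2.5):
    """Return True if value is more than k MADs from the window's median.

    Illustrative sketch of the MAD idea, not the mad_check plugin's code.
    """
    med = median(window)
    mad = median(abs(x - med) for x in window)
    if mad == 0:  # perfectly flat window: any deviation is an outlier
        return value != med
    return abs(value - med) &amp;gt; k * mad

# A stable 20-point baseline around 22°C, then a spike to 48°C
baseline = [22.0, 22.1, 22.3, 22.0, 22.2] * 4
print(mad_outlier(baseline, 22.2))  # in-range reading: False
print(mad_outlier(baseline, 48.0))  # spike: True&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The plugin additionally tracks consecutive anomalies (the trailing &lt;code class="language-markup"&gt;5&lt;/code&gt; in the threshold spec) so a single noisy reading doesn’t page anyone.&lt;/p&gt;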

&lt;h5 id="step-3-write-test-data"&gt;&lt;strong&gt;Step 3: Write test data&lt;/strong&gt;&lt;/h5&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;# Establish normal baseline (22°C)

for i in {1..25}; do

  influxdb3 write --database sensors --token YOUR_TOKEN \

    "environment,room=factory temperature=22.$((RANDOM % 5))"

done

# Simulate equipment failure (sudden spike to 45°C+)

for temp in 46 47 48 49 50; do

  influxdb3 write --database sensors --token YOUR_TOKEN \

    "environment,room=factory temperature=${temp}.0"

done&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Expected result&lt;/strong&gt;: You should see an “anomaly detected” alert in InfluxDB 3 Logs and also in the Slack channel after the fifth spike: “MAD count alert: Field temperature in environment outlier for 5 consecutive points”.&lt;/p&gt;

&lt;h2 id="adtk-plugin-for-detecting-sustained-instability"&gt;ADTK plugin for detecting sustained instability&lt;/h2&gt;

&lt;p&gt;This &lt;a href="https://github.com/influxdata/influxdb3_plugins/tree/main/influxdata/stateless_adtk_detector"&gt;plugin&lt;/a&gt; is built on top of the &lt;a href="https://github.com/odnura/adtk"&gt;Anomaly Detection Toolkit&lt;/a&gt; (ADTK), a popular Python library for time series anomaly detection.&lt;/p&gt;

&lt;h5 id="example-use-case-detect-when-temperature-becomes-erraticunstable-eg-sensor-malfunction-causing-wild-swings"&gt;&lt;strong&gt;Example use case&lt;/strong&gt;: Detect when temperature becomes erratic/unstable (e.g., sensor malfunction causing wild swings).&lt;/h5&gt;

&lt;h5 id="step-1-create-adtk-detector-scheduled-every-30-seconds"&gt;&lt;strong&gt;Step 1: Create ADTK detector (scheduled every 30 seconds)&lt;/strong&gt;&lt;/h5&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 create trigger \

  --database sensors \

  --plugin-filename gh:influxdata/stateless_adtk_detector/adtk_anomaly_detection_plugin.py \

  --trigger-spec "every:30s" \

  --trigger-arguments \

    "measurement=environment,\

     field=temperature,\

     detectors=VolatilityShiftAD,\

     detector_params=eyJWb2xhdGlsaXR5U2hpZnRBRCI6IHsid2luZG93IjogMTV9fQo=,\

     window=600s,\

     senders=slack,\

     slack_webhook_url=https://hooks.slack.com/services/YOUR/WEBHOOK/URL,\

     influxdb3_auth_token=YOUR_TOKEN" \

  temperature_stability_detector \

  --token YOUR_TOKEN

influxdb3 enable trigger --database sensors temperature_stability_detector --token YOUR_TOKEN&lt;/code&gt;&lt;/pre&gt;
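&lt;p&gt;The &lt;code class="language-markup"&gt;detector_params&lt;/code&gt; value above is simply base64-encoded JSON. You can generate or inspect it with a few lines of Python, which is handy when you want different detector settings:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;import base64
import json

# Detector settings as plain JSON, base64-encoded for --trigger-arguments
params = {"VolatilityShiftAD": {"window": 15}}
encoded = base64.b64encode((json.dumps(params) + "\n").encode()).decode()
print(encoded)  # eyJWb2xhdGlsaXR5U2hpZnRBRCI6IHsid2luZG93IjogMTV9fQo=

# Decoding works the other way around, e.g. to inspect an existing value
decoded = json.loads(base64.b64decode(encoded))
print(decoded)  # {'VolatilityShiftAD': {'window': 15}}&lt;/code&gt;&lt;/pre&gt;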

&lt;p&gt;&lt;strong&gt;What this does&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Checks every 30 seconds (scheduled trigger)&lt;/li&gt;
  &lt;li&gt;Analyzes the last 10 minutes of data (&lt;code class="language-markup"&gt;window=600s&lt;/code&gt;)&lt;/li&gt;
  &lt;li&gt;Detects when variance shifts (stable → erratic)&lt;/li&gt;
&lt;/ul&gt;
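&lt;p&gt;Conceptually, a volatility shift detector compares the spread of the signal before and after each point and flags a jump in variance. The following is a simplified pure-Python illustration of that idea, not ADTK’s implementation:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;from statistics import pstdev

def volatility_shift(series, window=5, ratio=3.0):
    """Indices where the rolling standard deviation jumps by more than ratio.

    Simplified illustration of a volatility-shift test, not ADTK's code.
    """
    hits = []
    for i in range(window, len(series) - window + 1):
        before = pstdev(series[i - window:i])
        after = pstdev(series[i:i + window])
        # The 0.1 floor keeps a near-flat baseline from flagging everything
        if after &amp;gt; ratio * max(before, 0.1):
            hits.append(i)
    return hits

stable = [22.0, 22.4, 21.8, 22.1, 22.3, 22.0, 21.9, 22.2, 22.1, 22.0]
erratic = [5.2, 35.8, 8.5, 40.3, 12.1, 38.7, 7.9, 42.4, 6.6, 44.3]
print(volatility_shift(stable + erratic))  # flags indices around the shift at 10
print(volatility_shift(stable))            # []&lt;/code&gt;&lt;/pre&gt;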

&lt;h5 id="step-2-write-test-data"&gt;&lt;strong&gt;Step 2: Write Test Data&lt;/strong&gt;&lt;/h5&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;# Stable readings (±2°C variation)

for i in {1..15}; do

  influxdb3 write --database sensors --token YOUR_TOKEN \

    "environment,room=factory temperature=$((20 + RANDOM % 5)).5"

done

# Erratic readings (wild swings indicating sensor malfunction)

for temp in 5.2 35.8 8.5 40.3 12.1 38.7 7.9 42.4 6.6 44.3; do

  influxdb3 write --database sensors --token YOUR_TOKEN \

    "environment,room=factory temperature=${temp}"

done&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Expected result&lt;/strong&gt;: Within 30 seconds, you should get a Slack alert → “Volatility shift detected in environment.temperature.”&lt;/p&gt;

&lt;h5 id="why-use-these-plugins-together"&gt;&lt;strong&gt;Why use these plugins together?&lt;/strong&gt;&lt;/h5&gt;

&lt;ul&gt;
  &lt;li&gt;MAD catches acute problems, such as sudden spikes that signal an immediate hazard.&lt;/li&gt;
  &lt;li&gt;ADTK catches chronic problems, such as sensor degradation over time.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="next-steps"&gt;&lt;strong&gt;Next steps&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;Customize these ready-to-use anomaly detection plugins for your use case:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href="https://github.com/influxdata/influxdb3_plugins/tree/main/influxdata/mad_check"&gt;MAD Plugin Documentation&lt;/a&gt; &lt;u&gt;(&lt;/u&gt;adjust MAD multiplier (&lt;code class="language-markup"&gt;k&lt;/code&gt;) for sensitivity)&lt;u&gt;&lt;/u&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://github.com/influxdata/influxdb3_plugins/tree/main/influxdata/stateless_adtk_detector"&gt;ADTK Plugin Documentation&lt;/a&gt; (change ADTK window size for different time horizons)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You now have a production-ready anomaly detection system running in your database with no separate infrastructure. We invite you to clone or fork the plugins in the GitHub repository, publish your own plugins for others to use, and share your questions and projects with our community on &lt;a href="https://influxcommunity.slack.com/join/shared_invite/zt-3hevuqtxs-3d1sSfGbbQgMw2Fj66rZsA#/shared-invite/email"&gt;Slack&lt;/a&gt; and &lt;a href="https://discord.com/invite/vZe2w2Ds8B"&gt;Discord&lt;/a&gt;.&lt;/p&gt;
</description>
      <pubDate>Fri, 02 Jan 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/real-time-anomaly-detection-influxdb-3/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/real-time-anomaly-detection-influxdb-3/</guid>
      <category>Developer</category>
      <author>Suyash Joshi (InfluxData)</author>
    </item>
    <item>
      <title>Master the Basics of InfluxDB 3 with These Two Free Courses</title>
      <description>&lt;p&gt;If you’ve been wanting to build your skills with InfluxDB 3, InfluxDB University now has &lt;strong&gt;two new free courses&lt;/strong&gt; that make learning simple, clear, and hands-on. Each course includes short videos and interactive quizzes to help you learn at your own pace.&lt;/p&gt;

&lt;p&gt;Both courses go beyond the basics and walk you step-by-step through real examples so you can start using InfluxDB 3 Core &amp;amp; Enterprise along with the Processing Engine with confidence.&lt;/p&gt;

&lt;h2 id="course-influxdb-3-core--enterprise-essentialshttpsuniversityinfluxdatacomlearncourses30influxdb-3-core-enterprise-essentialsutmsourcewebsiteutmmediuminfluxdb3basicsfreecourseutmcontentbloguu"&gt;Course: &lt;a href="https://university.influxdata.com/learn/courses/30/influxdb-3-core-enterprise-essentials/?utm_source=website&amp;amp;utm_medium=influxdb_3_basics_free_course&amp;amp;utm_content=blog"&gt;InfluxDB 3 Core &amp;amp; Enterprise Essentials&lt;/a&gt;&lt;u&gt;&lt;/u&gt;&lt;/h2&gt;
&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/1iLX54udLpJZ12bYYBHGro/2b3a20cc72db184552be0c7bea4a9892/Frame_1707481216.png" alt="Core &amp;amp; Enterprise Essentials" /&gt;&lt;/p&gt;

&lt;p&gt;This fundamentals course helps you understand the architecture and features of both InfluxDB 3 Core and Enterprise. You’ll learn:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;The structure and key differences between the two editions&lt;/li&gt;
  &lt;li&gt;How to install and use InfluxDB 3 with the CLI and UI Explorer&lt;/li&gt;
  &lt;li&gt;How the InfluxDB 3 data model works&lt;/li&gt;
  &lt;li&gt;How to write and query time series data using SQL&lt;/li&gt;
  &lt;li&gt;How caching improves performance&lt;/li&gt;
  &lt;li&gt;An introduction to the Processing Engine&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="course-processing-engine-essentialshttpsuniversityinfluxdatacomlearncourses33processing-engine-essentialsutmsourcewebsiteutmmediuminfluxdb3basicsfreecourseutmcontentbloguu"&gt;Course: &lt;a href="https://university.influxdata.com/learn/courses/33/processing-engine-essentials/?utm_source=website&amp;amp;utm_medium=influxdb_3_basics_free_course&amp;amp;utm_content=blog"&gt;Processing Engine Essentials&lt;/a&gt;&lt;u&gt;&lt;/u&gt;&lt;/h2&gt;
&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/5Ka9Zwdug17qhvAdzXa5gu/ca255711604d0321979fabff2ff5b5fb/Frame_1707481217.png" alt="Processing Engine Essentials" /&gt;&lt;/p&gt;

&lt;p&gt;The Processing Engine is one of the most powerful features of InfluxDB 3, allowing you to run Python directly in the database. This course shows you how to use it effectively, including:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;The basics of how the Processing Engine works&lt;/li&gt;
  &lt;li&gt;Different trigger types and when to use them&lt;/li&gt;
  &lt;li&gt;Step-by-step guides for WAL Flush, Schedule and HTTP triggers&lt;/li&gt;
  &lt;li&gt;How to create a custom Python plugin&lt;/li&gt;
  &lt;li&gt;How to use community and InfluxData built plugins with the UI Explorer&lt;/li&gt;
  &lt;li&gt;How the Processing Engine APIs fit into real workflows&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="bonus-earn-shareable-badges"&gt;Bonus: earn shareable badges&lt;/h2&gt;

&lt;p&gt;When you finish each course, you’ll receive a verified badge you can add to your LinkedIn profile or portfolio. It’s a simple way to show that you understand the latest version of InfluxDB and its key features.&lt;/p&gt;

&lt;h2 id="start-learning-for-free-today"&gt;Start learning for free today&lt;/h2&gt;

&lt;p&gt;InfluxDB University is completely free; just &lt;a href="https://university.influxdata.com/?utm_source=website&amp;amp;utm_medium=influxdb_3_basics_free_course&amp;amp;utm_content=blog"&gt;sign up&lt;/a&gt; and begin. These new courses are the easiest way to get hands-on with InfluxDB 3 and its Processing Engine, with clear explanations and guided examples.&lt;/p&gt;

&lt;p&gt;If you have any questions or feedback, we’re always happy to help. You can reach the team at &lt;strong&gt;university@influxdata.com&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Happy learning!&lt;/p&gt;
</description>
      <pubDate>Fri, 05 Dec 2025 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/influxdb-3-basics-free-course/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/influxdb-3-basics-free-course/</guid>
      <category>Training</category>
      <category>Developer</category>
      <author>Suyash Joshi (InfluxData)</author>
    </item>
    <item>
      <title>Smart Home Monitoring with InfluxDB 3, Google Nest, and Grafana</title>
<description>&lt;p&gt;Your smart home devices generate vast amounts of scattered data. This tutorial shows you how to centralize it into a unified platform using InfluxDB 3 and Grafana. You’ll not only track your home’s vital signs but also learn professional software development concepts, such as time series database design and building resilient data pipelines, that apply to many monitoring and analytics systems.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/2YG4BNLTRiZjQeblQVXAqI/aa6b7537ce038892a5ca88ca76802419/Screenshot_2025-11-06_at_12.01.10â__PM.png" alt="Raspberry Pi" /&gt;&lt;/p&gt;

&lt;p&gt;Before we begin, ensure you have:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Basic familiarity with Python and API concepts&lt;/li&gt;
  &lt;li&gt;Administrative access to your router (for bandwidth monitoring)&lt;/li&gt;
  &lt;li&gt;At least one smart device (Nest thermostat, smart meter, etc.)&lt;/li&gt;
  &lt;li&gt;A computer or Raspberry Pi to run &lt;a href="https://www.influxdata.com/products/influxdb-3-enterprise/"&gt;InfluxDB 3&lt;/a&gt;, &lt;a href="https://grafana.com/oss/"&gt;Grafana&lt;/a&gt;, and Python programs&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="understanding-what-youre-working-with"&gt;&lt;strong&gt;Understanding what you’re working with&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;Time series data differs fundamentally from traditional relational data. Instead of focusing on relationships between entities, we’re capturing how values change over time. Each data point consists of:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Timestamp&lt;/strong&gt;: When the measurement was taken&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Measurement&lt;/strong&gt;: What we’re measuring (temperature, power, bandwidth)&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Tags&lt;/strong&gt;: Metadata that helps us categorize data (device_id, location, type)&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Fields&lt;/strong&gt;: The actual values measured&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This structure makes InfluxDB handy for IoT data because it’s optimized for write-heavy workloads with time-based queries. We define this in a syntax called “&lt;a href="https://docs.influxdata.com/influxdb3/core/reference/line-protocol/"&gt;line protocol&lt;/a&gt;,” and it looks like this:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;weather,location=london,season=summer temperature=30 1465839830100400200&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Understanding this syntax:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;“weather”&lt;/strong&gt; is the name of your database table, also known as a measurement.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;“location=london,season=summer”&lt;/strong&gt; is the tag set: comma-separated key-value pairs that provide metadata.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;“temperature=30”&lt;/strong&gt; is the field set, which holds the actual measured values.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;“1465839830100400200”&lt;/strong&gt; is optional; it’s the timestamp 2016-06-13T17:43:50.1004002Z (RFC3339) expressed in Unix nanoseconds. If you don’t provide a timestamp, InfluxDB uses your server’s local nanosecond timestamp in UTC.&lt;/p&gt;
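&lt;p&gt;Line protocol is easy to assemble programmatically. Here is a small hypothetical helper (not part of any InfluxDB client library) that builds a line from a measurement, tags, and fields:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;def to_line_protocol(measurement, tags, fields, timestamp=None):
    """Assemble one line of InfluxDB line protocol.

    Hypothetical helper for illustration; real client libraries handle
    escaping and type suffixes more thoroughly.
    """
    tag_part = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_part = ",".join(f"{k}={v}" for k, v in fields.items())
    line = f"{measurement},{tag_part} {field_part}"
    if timestamp is not None:
        line = f"{line} {timestamp}"
    return line

line = to_line_protocol(
    "weather",
    {"location": "london", "season": "summer"},
    {"temperature": 30},
    1465839830100400200,
)
print(line)  # weather,location=london,season=summer temperature=30 1465839830100400200&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;In practice, prefer the official client libraries for production writes, since they also escape special characters and add type suffixes.&lt;/p&gt;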

&lt;h2 id="setting-up-influxdb-3"&gt;&lt;strong&gt;Setting up InfluxDB 3&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;We’ll use InfluxDB 3 Enterprise’s free at-home license, which only requires your email address. It supports up to 2 CPU cores and is for personal use only. Check your inbox and click the verification link to activate the at-home license.&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;# Pull image from Docker for InfluxDB 3 Enterprise
docker pull influxdb:3-enterprise

# Run InfluxDB 3 Enterprise with proper configuration
docker run -d \
  --name influxdb3-enterprise \
  -p 8181:8181 \
  -v $PWD/data:/var/lib/influxdb3/data \
  -v $PWD/plugins:/var/lib/influxdb3/plugins \
  -e INFLUXDB3_ENTERPRISE_LICENSE_EMAIL=you@example.com \
  influxdb:3-enterprise \
    influxdb3 serve \
      --node-id=node0 \
      --cluster-id=cluster0 \
      --object-store=file \
      --data-dir=/var/lib/influxdb3/data \
      --host=0.0.0.0 \
      --port=8181&lt;/code&gt;&lt;/pre&gt;
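&lt;p&gt;Once the container is running, a quick sanity check confirms the server is reachable and lets you create the admin token you’ll need later (commands assume the port mapping above):&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;# Returns HTTP 200 when the server is up and healthy
curl -i http://localhost:8181/health

# Create an admin token for subsequent CLI and API calls
docker exec influxdb3-enterprise influxdb3 create token --admin&lt;/code&gt;&lt;/pre&gt;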

&lt;h2 id="building-a-robust-data-collector"&gt;&lt;strong&gt;Building a robust data collector&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;We’ll focus on creating a single comprehensive data collector that covers all the patterns you’ll need for any IoT integration. This example uses a Nest thermostat, but the principles apply to any smart device or API. We’ll build a collector that polls the &lt;a href="https://developers.google.com/nest/device-access/api/thermostat"&gt;Google Nest API&lt;/a&gt; and writes to InfluxDB 3 Enterprise using the v3 Python client.&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Create a database and optionally set a retention period.

  &lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 create database home-data&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;

  &lt;li&gt;Set up Nest API access. To collect data from your Nest thermostat, you need API access. To do this:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
  &lt;li&gt;Go to &lt;a href="https://console.cloud.google.com/welcome/new?pli=1"&gt;Google Cloud Console&lt;/a&gt;.&lt;/li&gt;
  &lt;li&gt;Create a new project.&lt;/li&gt;
  &lt;li&gt;Save your Project ID.&lt;/li&gt;
  &lt;li&gt;Visit &lt;a href="https://console.nest.google.com/device-access/tos"&gt;Device Access Console&lt;/a&gt; and follow the console instructions.&lt;/li&gt;
  &lt;li&gt;Create a project, link it to your Google Cloud project, and download OAuth credentials.&lt;/li&gt;
  &lt;li&gt;Create a Python program “get_nest_token.py” as follows:&lt;/li&gt;
&lt;/ul&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# get_nest_token.py
import requests
import webbrowser
from urllib.parse import urlencode

CLIENT_ID = "your-client-id-here"
CLIENT_SECRET = "your-client-secret-here"

# Generate authorization URL (build the params dict first so the
# f-string stays on one line and works on older Python versions too)
auth_params = {
    'client_id': CLIENT_ID,
    'redirect_uri': 'http://localhost',
    'response_type': 'code',
    'scope': 'https://www.googleapis.com/auth/sdm.service',
    'access_type': 'offline'
}
auth_url = f"https://accounts.google.com/o/oauth2/v2/auth?{urlencode(auth_params)}"

print(f"Visit: {auth_url}")
webbrowser.open(auth_url)

# Get authorization code from redirect URL
auth_code = input("Enter the code from the redirect URL: ")

# Exchange for tokens
token_response = requests.post('https://oauth2.googleapis.com/token', data={
    'client_id': CLIENT_ID,
    'client_secret': CLIENT_SECRET,
    'code': auth_code,
    'grant_type': 'authorization_code',
    'redirect_uri': 'http://localhost'
})

tokens = token_response.json()

print(f"Access Token: {tokens['access_token']}")
print(f"Refresh Token: {tokens['refresh_token']}")&lt;/code&gt;&lt;/pre&gt;
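
&lt;p&gt;Google access tokens expire after about an hour, so a long-running collector eventually needs the refresh token printed above. As a minimal sketch of the standard OAuth2 refresh-token grant (the &lt;code class="language-markup"&gt;build_refresh_payload&lt;/code&gt; helper is our own naming, not part of any Google SDK):&lt;/p&gt;

```python
# refresh_nest_token.py -- sketch: trade the saved refresh token for a
# fresh access token using Google's standard OAuth2 token endpoint.
import requests

TOKEN_URL = "https://oauth2.googleapis.com/token"

def build_refresh_payload(client_id, client_secret, refresh_token):
    # Standard OAuth2 refresh-token grant parameters
    return {
        "client_id": client_id,
        "client_secret": client_secret,
        "refresh_token": refresh_token,
        "grant_type": "refresh_token",
    }

def refresh_access_token(client_id, client_secret, refresh_token):
    resp = requests.post(
        TOKEN_URL,
        data=build_refresh_payload(client_id, client_secret, refresh_token),
        timeout=30,
    )
    resp.raise_for_status()
    # Refresh tokens stay valid; only the access token is replaced
    return resp.json()["access_token"]
```

&lt;p&gt;Calling this on startup (or whenever the Nest API returns a 401) keeps the collector running without re-doing the browser flow.&lt;/p&gt;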

&lt;h2 id="building-the-data-collector"&gt;&lt;strong&gt;Building the data collector&lt;/strong&gt;&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Create a .env file and save it locally.

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;NEST_ACCESS_TOKEN=your_access_token_here
GOOGLE_CLOUD_PROJECT_ID=your_project_id
INFLUXDB_HOST=http://localhost:8181
INFLUXDB_TOKEN=your_influxdb_token
INFLUXDB_DATABASE=home-data&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;

&lt;li&gt;Install the Python dependencies.

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;pip install influxdb3-python requests python-dotenv&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;

&lt;li&gt;Create a new Python program, “nest_collector.py,” that acts as the data collector and writes to InfluxDB 3.

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# nest_collector.py
import os
import time
import logging
from datetime import datetime, timezone
from functools import wraps
import requests
from influxdb_client_3 import InfluxDBClient3
from dotenv import load_dotenv

load_dotenv()

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s - %(levelname)s - %(message)s"
)

def retry_on_failure(max_retries=3, delay=5):
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries):
                try:
                    return func(*args, **kwargs)
                except Exception as e:
                    if attempt == max_retries - 1:
                        logging.error(f"{func.__name__} failed: {e}")
                        raise
                    logging.warning(f"Retry {attempt + 1}: {e}")
                    time.sleep(delay)
        return wrapper
    return decorator

class NestCollector:
    def __init__(self):
        self.access_token = os.getenv("NEST_ACCESS_TOKEN")
        self.project_id = os.getenv("GOOGLE_CLOUD_PROJECT_ID")

        if not self.access_token or not self.project_id:
            raise ValueError("Missing NEST_ACCESS_TOKEN or GOOGLE_CLOUD_PROJECT_ID in .env file")

        # Initialize InfluxDB 3 client
        self.client = InfluxDBClient3(
            host=os.getenv("INFLUXDB_HOST", "http://localhost:8181"),
            token=os.getenv("INFLUXDB_TOKEN"),
            database=os.getenv("INFLUXDB_DATABASE", "home-data"),
        )

        # Test connection
        try:
            list(self.client.query("SELECT 1", language="sql"))
            logging.info("InfluxDB connection successful")
        except Exception as e:
            logging.error(f"InfluxDB connection failed: {e}")
            raise

    @retry_on_failure(max_retries=3, delay=5)
    def get_thermostat_data(self):
        """Fetch data from Nest API"""
        url = f"https://smartdevicemanagement.googleapis.com/v1/enterprises/{self.project_id}/devices"
        headers = {
            "Authorization": f"Bearer {self.access_token}",
            "Content-Type": "application/json"
        }

        response = requests.get(url, headers=headers, timeout=30)
        response.raise_for_status()

        devices = response.json().get("devices", [])
        data_points = []        

        for device in devices:
            if "THERMOSTAT" not in device.get("type", ""):
                continue

            traits = device.get("traits", {})
            device_id = device.get("name", "").split("/")[-1]

            # Extract measurements
            temp_trait = traits.get("sdm.devices.traits.Temperature", {})
            humidity_trait = traits.get("sdm.devices.traits.Humidity", {})
            hvac_trait = traits.get("sdm.devices.traits.ThermostatHvac", {})
            setpoint_trait = traits.get("sdm.devices.traits.ThermostatTemperatureSetpoint", {})
            info_trait = traits.get("sdm.devices.traits.Info", {})

            try:
                temp_celsius = float(temp_trait.get("ambientTemperatureCelsius", 0))
                humidity = float(humidity_trait.get("ambientHumidityPercent", 0))
            except (TypeError, ValueError):
                continue

            # Build data point for InfluxDB
            point = {
                "measurement": "nest_thermostat",
                "tags": {
                    "device_id": device_id,
                    "room": info_trait.get("customName", "main"),
                    "device_type": "thermostat"
                },
                "fields": {
                    "temperature_celsius": temp_celsius,
                    "temperature_fahrenheit": temp_celsius * 9/5 + 32,
                    "humidity_percent": humidity,
                    "hvac_status": hvac_trait.get("status", "OFF"),
                    "hvac_mode": hvac_trait.get("mode", "UNKNOWN")
                },
                "time": int(datetime.now(timezone.utc).timestamp())
            }

            # Add setpoint temperatures if available
            if "heatCelsius" in setpoint_trait:
                heat_c = float(setpoint_trait["heatCelsius"])
                point["fields"]["heat_setpoint_celsius"] = heat_c
                point["fields"]["heat_setpoint_fahrenheit"] = heat_c * 9/5 + 32

            if "coolCelsius" in setpoint_trait:
                cool_c = float(setpoint_trait["coolCelsius"])
                point["fields"]["cool_setpoint_celsius"] = cool_c
                point["fields"]["cool_setpoint_fahrenheit"] = cool_c * 9/5 + 32

            data_points.append(point)
            logging.info(f"Collected {device_id}: {temp_celsius:.1f}°C, {humidity:.0f}%")           
        return data_points

    def write_to_influx(self, points):
        """Write data to InfluxDB"""
        if not points:
            logging.warning("No data to write")
            return

        success_count = 0
        for point in points:

            try:
                self.client.write(record=point, write_precision="s")
                success_count += 1
            except Exception as e:
                logging.error(f"Write failed: {e}")

        logging.info(f"Wrote {success_count}/{len(points)} points")

    def run_cycle(self):
        """Run one collection cycle"""
        try:
            data = self.get_thermostat_data()
            self.write_to_influx(data)
        except Exception as e:
            logging.error(f"Cycle failed: {e}")

if __name__ == "__main__":
    collector = NestCollector()

    try:
        while True:
            collector.run_cycle()
            time.sleep(300)  # Run every 5 minutes
    except KeyboardInterrupt:
        logging.info("Stopped by user")&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
  &lt;/ol&gt;

&lt;h2 id="installing-and-configuring-grafana"&gt;&lt;strong&gt;Installing and configuring Grafana&lt;/strong&gt;&lt;/h2&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;# Install Grafana using Docker
docker run -d \
  --name grafana \
  -p 3000:3000 \
  -v grafana-storage:/var/lib/grafana \
  -e "GF_SECURITY_ADMIN_PASSWORD=your-secure-password" \
  grafana/grafana:latest&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id="essential-dashboard-configuration"&gt;&lt;strong&gt;Essential dashboard configuration&lt;/strong&gt;&lt;/h2&gt;

&lt;ol&gt;
  &lt;li&gt;Make sure Grafana is up and running locally on port 3000.
    &lt;ul&gt;
      &lt;li&gt;Log into Grafana using your username/password at localhost:3000&lt;/li&gt;
      &lt;li&gt;Navigate to &lt;strong&gt;Connections&lt;/strong&gt; —&amp;gt; search for ‘InfluxDB’ —&amp;gt; ‘Add new data source’&lt;/li&gt;
      &lt;li&gt;Name: InfluxDB3 Enterprise Home&lt;/li&gt;
      &lt;li&gt;Language: SQL&lt;/li&gt;
      &lt;li&gt;Database: home-data&lt;/li&gt;
      &lt;li&gt;URL: &lt;a href="http://influxdb3-enterprise:8181/"&gt;http://influxdb3-enterprise:8181&lt;/a&gt; for connecting to InfluxDB 3 Enterprise&lt;/li&gt;
      &lt;li&gt;Token: Paste the string value of the &lt;code class="language-markup"&gt;INFLUXDB_TOKEN&lt;/code&gt; environment variable from your .env file and toggle Insecure Connection to “ON”&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;Create dashboards with two panels using the following SQL queries to monitor the data:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Current Temperature Panel&lt;/strong&gt;&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-sql"&gt;SELECT 

  temperature_fahrenheit,

  device_id

FROM nest_thermostat 

WHERE time &amp;gt;= now() - interval '5 minutes'

ORDER BY time DESC 

LIMIT 1&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;24-Hour Trend Panel&lt;/strong&gt;&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-sql"&gt;SELECT 

  date_trunc('minute', time) as time,

  AVG(temperature_fahrenheit) as avg_temp

FROM nest_thermostat 

WHERE time &amp;gt;= now() - interval '24 hours'

GROUP BY date_trunc('minute', time)

ORDER BY time&lt;/code&gt;&lt;/pre&gt;

&lt;h4 id="optional-health-monitoring-script"&gt;(Optional) Health Monitoring Script&lt;/h4&gt;

&lt;p&gt;Keep your systems healthy with simple checks by creating the script “health_check.py” as follows:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# health_check.py
import requests
from datetime import datetime

def check_health():
    services = {
        'InfluxDB': 'http://localhost:8181/health',
        'Grafana': 'http://localhost:3000/api/health'
    }

    print(f"\n=== Health Check - {datetime.now().strftime('%H:%M:%S')} ===")

    all_healthy = True
    for service, url in services.items():
        try:
            response = requests.get(url, timeout=5)
            healthy = response.status_code == 200
            status = "✅" if healthy else "❌"
            print(f"{service}: {status}")
            all_healthy = all_healthy and healthy
        except Exception:
            print(f"{service}: ❌ Connection failed")
            all_healthy = False

    print(f"Overall: {'✅ HEALTHY' if all_healthy else '❌ ISSUES'}\n")

if __name__ == "__main__":
    check_health()&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id="bringing-it-all-together"&gt;&lt;strong&gt;Bringing it all together&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;What you’ve built here goes far beyond just monitoring your thermostat; you’ve implemented the foundational patterns that power modern observability systems at scale. The retry logic you wrote to handle flaky IoT APIs uses the same resilience pattern that keeps services like Netflix running when dependencies fail. At the same time, the time series data modeling and visualization pipeline you created mirrors the monitoring infrastructure major tech companies use to track millions of metrics per second.&lt;/p&gt;
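
&lt;p&gt;If you want to push the resilience idea one step further, a circuit breaker stops calling an API that keeps failing and only retries after a cool-down period. The sketch below is our own minimal illustration, not part of the collector above:&lt;/p&gt;

```python
import time

class CircuitBreaker:
    """After max_failures consecutive errors, 'open' the breaker and
    skip calls for reset_after seconds instead of hammering the API."""

    def __init__(self, max_failures=3, reset_after=60.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                # Still cooling down: fail fast instead of calling the API
                raise RuntimeError("circuit open: call skipped")
            # Cool-down elapsed: close the breaker and allow a trial call
            self.opened_at = None
            self.failures = 0
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        return result
```

&lt;p&gt;For example, &lt;code class="language-markup"&gt;breaker.call(collector.get_thermostat_data)&lt;/code&gt; would skip the Nest API for a minute after three consecutive failures.&lt;/p&gt;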

&lt;p&gt;Most importantly, you now understand how to think about data as a stream of events over time rather than static records in tables, which is a mental shift that will serve you well whether you’re building application monitoring dashboards, analyzing business metrics, or working with any system that generates continuous data streams.&lt;/p&gt;
</description>
      <pubDate>Tue, 25 Nov 2025 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/smart-home-monitoring-google-nest-grafana-influxdb-3/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/smart-home-monitoring-google-nest-grafana-influxdb-3/</guid>
      <category>Developer</category>
      <author>Suyash Joshi (InfluxData)</author>
    </item>
    <item>
      <title>How to Visualize Time Series Data with InfluxDB 3 &amp; Apache Superset</title>
      <description>&lt;h2 id="introduction"&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;Learn how to visualize time series data from InfluxDB 3 Core using the popular open source tool Apache Superset. This tutorial walks you through setting up both systems with Docker, writing sample IoT data, and creating your first visualization. For more information about Apache Superset, this &lt;a href="https://www.influxdata.com/blog/introduction-apache-superset/?utm_source=website&amp;amp;utm_medium=visualize_data_apache_superset_influxdb_3&amp;amp;utm_content=blog"&gt;article&lt;/a&gt; may be helpful.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What you’ll build:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href="https://www.influxdata.com/downloads/?utm_source=website&amp;amp;utm_medium=visualize_data_apache_superset_influxdb_3&amp;amp;utm_content=blog"&gt;InfluxDB 3&lt;/a&gt; (Core / Enterprise) instance with sample home sensor data&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://superset.apache.org/docs/using-superset/creating-your-first-dashboard"&gt;Apache Superset dashboard&lt;/a&gt; connected to InfluxDB 3&lt;/li&gt;
  &lt;li&gt;A simple temperature visualization powered by SQL&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/7bb4871a85a9475d884daf62f17e8c94/d1e709e241af59e18667ee405f859d3f/unnamed.png" alt="" /&gt;
&lt;strong&gt;Prerequisites:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Docker Desktop&lt;/strong&gt; running on your system (&lt;a href="https://www.docker.com/products/docker-desktop"&gt;download&lt;/a&gt;)&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Terminal/Command line&lt;/strong&gt; access&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Basic understanding&lt;/strong&gt; of SQL&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="part-1-setting-up-influxdb-3"&gt;Part 1: Setting up InfluxDB 3&lt;/h2&gt;

&lt;h4 id="step-1-install-influxdb-3-core-optionally-use-influxdb-3-enterprise"&gt;Step 1: Install InfluxDB 3 Core (optionally use InfluxDB 3 Enterprise)&lt;/h4&gt;

&lt;p&gt;Download and run the installation script:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;curl -O https://www.influxdata.com/d/install_influxdb3.sh
sh install_influxdb3.sh&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;When prompted, &lt;strong&gt;select the Docker installation option&lt;/strong&gt;. The script will handle pulling the InfluxDB 3 Docker image and setting up the CLI.&lt;/p&gt;

&lt;h4 id="step-2-verify-installation"&gt;Step 2: Verify Installation&lt;/h4&gt;

&lt;p&gt;Check that the InfluxDB 3 CLI is installed. The command should print the installed InfluxDB 3 version:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 --version&lt;/code&gt;&lt;/pre&gt;

&lt;h4 id="step-3-start-influxdb-3-server"&gt;Step 3: Start InfluxDB 3 Server&lt;/h4&gt;

&lt;p&gt;Run the following two commands to create a local directory for storing data (optionally, it can point to a remote object store) and then start the InfluxDB 3 database.&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;# Create a local directory for data
mkdir -p ~/influxdb3-data

# Start InfluxDB 3 Core with local file system storage
docker run -d \
  --name influxdb3 \
  -p 8181:8181 \
  --volume ~/influxdb3-data:/var/lib/influxdb3 \
  influxdb:3-core influxdb3 serve \
  --node-id my_node \
  --object-store file \
  --data-dir /var/lib/influxdb3&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;What this does:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;-d&lt;/code&gt; runs in detached mode (background)&lt;/li&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;-p 8181:8181&lt;/code&gt; exposes the default InfluxDB 3 port&lt;/li&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;--volume&lt;/code&gt; mounts local storage for data persistence&lt;/li&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;--object-store file&lt;/code&gt; uses local file system (can also use S3, GCS, or Azure Blob)&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id="step-4-create-an-authentication-token"&gt;Step 4: Create an Authentication Token&lt;/h4&gt;

&lt;p&gt;Generate an admin token for database operations by executing the following docker command in the influxdb3 container:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;docker exec -it influxdb3 influxdb3 create token --admin&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Save the token somewhere safe!&lt;/strong&gt;&lt;/p&gt;

&lt;h4 id="step-5-create-a-database"&gt;Step 5: Create a Database&lt;/h4&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;**docker exec -it influxdb3 influxdb3 create database home_sensors --token "PASTE_YOUR_TOKEN_STRING"&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id="part-2-load-sample-data-using-cli"&gt;Part 2: Load sample data using CLI&lt;/h2&gt;

&lt;h4 id="write-home-sensor-data-optionally-stream-or-write-your-own-data"&gt;Write Home Sensor Data (optionally stream or write your own data)&lt;/h4&gt;

&lt;p&gt;We’ll load sample line protocol data consisting of temperature, humidity, and CO readings from two rooms:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;docker exec -it influxdb3 influxdb3 write \
  --database home_sensors \
  --token "PASTE_YOUR_TOKEN_STRING" \
  'home,room=Living\ Room temp=21.1,hum=35.9,co=0i 1741593600
home,room=Kitchen temp=21.0,hum=35.9,co=0i 1741593600
home,room=Living\ Room temp=21.4,hum=35.9,co=0i 1741597200
home,room=Kitchen temp=23.0,hum=36.2,co=0i 1741597200
home,room=Living\ Room temp=21.8,hum=36.0,co=0i 1741600800
home,room=Kitchen temp=22.7,hum=36.1,co=0i 1741600800
home,room=Living\ Room temp=22.2,hum=36.0,co=0i 1741604400
home,room=Kitchen temp=22.4,hum=36.0,co=0i 1741604400
home,room=Living\ Room temp=22.2,hum=35.9,co=0i 1741608000
home,room=Kitchen temp=22.5,hum=36.0,co=0i 1741608000
home,room=Living\ Room temp=22.4,hum=36.0,co=0i 1741611600
home,room=Kitchen temp=22.8,hum=36.5,co=1i 1741611600'&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Understanding the data format (Line Protocol):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;home&lt;/code&gt; - measurement name&lt;/li&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;room=Living\ Room&lt;/code&gt; - tag (indexed, for filtering)&lt;/li&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;temp=21.1,hum=35.9,co=0i&lt;/code&gt; - fields (actual data values)&lt;/li&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;1741593600&lt;/code&gt; - timestamp (Unix epoch in seconds)&lt;/li&gt;
&lt;/ul&gt;
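
&lt;p&gt;To make the format concrete, here is a minimal Python sketch that assembles one line of line protocol from a measurement, tags, fields, and a timestamp. The &lt;code class="language-markup"&gt;to_line_protocol&lt;/code&gt; helper is purely illustrative; the official client libraries do this for you and handle more edge cases (strings, booleans, full escaping):&lt;/p&gt;

```python
def escape(value):
    # Tag values must escape commas, equals signs, and spaces
    return str(value).replace(",", "\\,").replace("=", "\\=").replace(" ", "\\ ")

def format_field(value):
    # Integer fields carry an 'i' suffix; floats are written as-is
    # (string and boolean fields are omitted in this sketch)
    if isinstance(value, int):
        return f"{value}i"
    return repr(float(value))

def to_line_protocol(measurement, tags, fields, timestamp):
    tag_str = ",".join(f"{key}={escape(val)}" for key, val in tags.items())
    field_str = ",".join(f"{key}={format_field(val)}" for key, val in fields.items())
    return f"{measurement},{tag_str} {field_str} {timestamp}"

line = to_line_protocol("home", {"room": "Living Room"},
                        {"temp": 21.1, "hum": 35.9, "co": 0}, 1741593600)
print(line)  # home,room=Living\ Room temp=21.1,hum=35.9,co=0i 1741593600
```

&lt;p&gt;The output matches the first line of the sample data above, which is exactly what the CLI sends to the database.&lt;/p&gt;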

&lt;p&gt;&lt;strong&gt;Other ways to write data:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href="https://www.influxdata.com/time-series-platform/telegraf/?utm_source=website&amp;amp;utm_medium=visualize_data_apache_superset_influxdb_3&amp;amp;utm_content=blog"&gt;Telegraf&lt;/a&gt; - You can run the popular open source tool Telegraf in another Docker container to collect system metrics in real-time automatically&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://docs.influxdata.com/influxdb3/core/reference/client-libraries/v3"&gt;InfluxDB Client SDKs v3&lt;/a&gt; - Python, Go, JavaScript, Java, C#, Node, etc.&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://docs.influxdata.com/influxdb3/core/api/v3"&gt;HTTP APIs&lt;/a&gt; - Direct write endpoint for custom integrations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For this tutorial, we’re using static data to keep it simple.&lt;/p&gt;

&lt;h4 id="step-6-verify-data-with-cli-query"&gt;Step 6: Verify Data with CLI Query&lt;/h4&gt;

&lt;p&gt;Query the data using InfluxDB 3 CLI and SQL to confirm it loaded correctly:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;docker exec -it influxdb3 influxdb3 query \
  --database home_sensors \
  "SELECT * FROM home ORDER BY time DESC LIMIT 10" \
  --token "PASTE_YOUR_TOKEN_STRING"&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;You should see the 10 most recent readings.&lt;/p&gt;

&lt;p&gt;Try another query—average temperature by room:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;docker exec -it influxdb3 influxdb3 query \
  --database home_sensors \
  "SELECT room, AVG(temp) as avg_temp FROM home GROUP BY room" \
  --token "PASTE_YOUR_TOKEN_STRING"&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id="part-3-setting-up-apache-superset"&gt;Part 3: Setting up Apache Superset&lt;/h2&gt;

&lt;h4 id="step-1-clone-superset-and-add-flight-sql-support"&gt;Step 1: Clone Superset and Add Flight SQL Support&lt;/h4&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;# Clone the repository
git clone https://github.com/apache/superset.git
cd superset&lt;/code&gt;&lt;/pre&gt;

&lt;h4 id="step-2-add-apache-flightsql-support"&gt;Step 2: Add Apache FlightSQL Support&lt;/h4&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;# Add flightsql-dbapi to the requirements file
echo "flightsql-dbapi" &amp;gt;&amp;gt; docker/requirements-local.txt
# Verify it was added
cat docker/requirements-local.txt&lt;/code&gt;&lt;/pre&gt;

&lt;h4 id="step-3-start-superset"&gt;Step 3: Start Superset&lt;/h4&gt;

&lt;ul&gt;
  &lt;li&gt;Download/copy docker-compose-non-dev.yml. That takes care of setting up Superset locally.&lt;/li&gt;
  &lt;li&gt;Start Superset in Docker containers using Docker Compose.&lt;/li&gt;
&lt;/ul&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;# Pull images
docker-compose -f docker-compose-non-dev.yml pull

# Start superset services
docker-compose -f docker-compose-non-dev.yml up -d --no-deps superset&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Wait 2-3 minutes&lt;/strong&gt; for all services to start. Check status:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;docker ps | grep superset&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Make sure containers show a “healthy” status.&lt;/p&gt;

&lt;h4 id="step-4-access-superset-ui"&gt;Step 4: Access Superset UI&lt;/h4&gt;

&lt;p&gt;Open your browser to:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;http://localhost:8088&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Login credentials:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;Username: admin&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;Password: admin&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Optionally: Change the admin password after first login via Settings → List Users.&lt;/p&gt;

&lt;h2 id="part-4-connect-influxdb-to-superset"&gt;Part 4: Connect InfluxDB to Superset&lt;/h2&gt;

&lt;h4 id="step-1-add-database-connection"&gt;Step 1: Add Database Connection&lt;/h4&gt;

&lt;ul&gt;
  &lt;li&gt;Click Settings (top right) → Database Connections&lt;/li&gt;
  &lt;li&gt;Click + Database button&lt;/li&gt;
  &lt;li&gt;Select Other from the dropdown&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id="step-2-configure-connection"&gt;Step 2: Configure Connection&lt;/h4&gt;

&lt;p&gt;Display Name:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;InfluxDB3&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;SQLAlchemy URI:&lt;/strong&gt;&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;datafusion+flightsql://localhost:8181?database=home_sensors&amp;amp;token=YOUR_TOKEN_HERE&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Replace &lt;code class="language-markup"&gt;YOUR_TOKEN_HERE&lt;/code&gt; with your actual token from earlier.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; Use &lt;code class="language-markup"&gt;datafusion+flightsql://&lt;/code&gt; (not just &lt;code class="language-markup"&gt;flightsql://&lt;/code&gt;).&lt;/p&gt;
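
&lt;p&gt;If you script your Superset setup, the URI can be assembled programmatically, which also URL-encodes the token safely. This small sketch uses our own illustrative helper name, &lt;code class="language-markup"&gt;build_flightsql_uri&lt;/code&gt;:&lt;/p&gt;

```python
from urllib.parse import urlencode

def build_flightsql_uri(host, port, database, token):
    # The datafusion+flightsql prefix selects the SQLAlchemy dialect
    # registered by the flightsql-dbapi package installed earlier
    query = urlencode({"database": database, "token": token})
    return f"datafusion+flightsql://{host}:{port}?{query}"

print(build_flightsql_uri("localhost", 8181, "home_sensors", "YOUR_TOKEN_HERE"))
```

&lt;p&gt;The result has the same shape as the URI shown above, ready to paste into the Superset connection form.&lt;/p&gt;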

&lt;h4 id="step-3-test-connection"&gt;Step 3: Test Connection&lt;/h4&gt;

&lt;p&gt;Click &lt;strong&gt;Test Connection&lt;/strong&gt;. You should see a “success” message.
&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/3s3mHnrNMyR04I700W0zUc/e73aafdd61679e6bd5a884a7be3b84aa/Screenshot_2025-11-04_at_1.51.28â__PM.png" alt="Primary Credentials" /&gt;
Click &lt;strong&gt;Connect&lt;/strong&gt; to save.&lt;/p&gt;

&lt;h2 id="part-5-query-and-visualize-data"&gt;Part 5: Query and visualize data&lt;/h2&gt;

&lt;p&gt;Open SQL Lab, write a SQL query such as &lt;code class="language-markup"&gt;SELECT * FROM home&lt;/code&gt;, and execute it to see the data:
&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/a8ac6bee42164246a785d4d4f1c3b062/7048dcc801d8d204a94b6d93a2f6a6b2/unnamed.png" alt="" /&gt;&lt;/p&gt;

&lt;p&gt;Lastly, don’t forget to save your dashboard for easier future reference.&lt;/p&gt;

&lt;h2 id="troubleshooting"&gt;&lt;strong&gt;Troubleshooting&lt;/strong&gt;&lt;/h2&gt;

&lt;h4 id="problem-could-not-load-database-driver"&gt;Problem: “Could not load database driver”&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Error:&lt;/strong&gt; &lt;code class="language-markup"&gt;sqlalchemy.exc.NoSuchModuleError: Can't load plugin: sqlalchemy.dialects:flightsql&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt; The &lt;code class="language-markup"&gt;flightsql-dbapi&lt;/code&gt; package wasn’t installed before starting Superset.&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;# Stop containers
docker-compose down

# Ensure requirements file exists
echo "flightsql-dbapi" &amp;gt; ./docker/requirements-local.txt

# Restart

docker-compose -f docker-compose-non-dev.yml up -d&lt;/code&gt;&lt;/pre&gt;

&lt;h4 id="problem-connection-timeout"&gt;Problem: Connection Timeout&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Causes:&lt;/strong&gt; Wrong host/port, InfluxDB not running, or firewall blocking&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solutions:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Verify InfluxDB is running: &lt;code class="language-markup"&gt;docker ps | grep influxdb&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;Check port (default: 8181 for Core)&lt;/li&gt;
  &lt;li&gt;If Superset runs in Docker, &lt;code class="language-markup"&gt;localhost&lt;/code&gt; inside its container refers to the Superset container itself; use &lt;code class="language-markup"&gt;host.docker.internal&lt;/code&gt; (Docker Desktop) or put both containers on a shared Docker network and connect by container name&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id="problem-wrong-protocol"&gt;Problem: Wrong Protocol&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Error:&lt;/strong&gt; Connection fails with &lt;code class="language-markup"&gt;flightsql://&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt; Use &lt;code class="language-markup"&gt;datafusion+flightsql://&lt;/code&gt; (not just &lt;code class="language-markup"&gt;flightsql://&lt;/code&gt;)&lt;/p&gt;

&lt;h2 id="next-steps"&gt;&lt;strong&gt;Next steps&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;You now have a complete stack for collecting, storing, and visualizing time series data! You can customize your dashboard, add your own (real-time) data, and connect to InfluxDB 3 Enterprise (use the same steps as above, just make sure the Docker image is the enterprise version). For more help and inspiration, check out the InfluxDB community &lt;a href="https://t.co/Pc5k7frBEW"&gt;forum&lt;/a&gt;, &lt;a href="https://www.influxdata.com/slack/?utm_source=website&amp;amp;utm_medium=visualize_data_apache_superset_influxdb_3&amp;amp;utm_content=blog"&gt;Slack&lt;/a&gt;, &lt;a href="https://www.reddit.com/r/influxdb"&gt;Reddit&lt;/a&gt;, and &lt;a href="https://t.co/Pc5k7frBEW"&gt;Discord&lt;/a&gt;. Happy visualizing!&lt;/p&gt;
</description>
      <pubDate>Thu, 06 Nov 2025 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/visualize-data-apache-superset-influxdb-3/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/visualize-data-apache-superset-influxdb-3/</guid>
      <category>Developer</category>
      <author>Suyash Joshi (InfluxData)</author>
    </item>
  </channel>
</rss>
