<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>InfluxData Blog - Anais Dotis-Georgiou</title>
    <description>Posts by Anais Dotis-Georgiou on the InfluxData Blog</description>
    <link>https://www.influxdata.com/blog/author/anais/</link>
    <language>en-us</language>
    <lastBuildDate>Fri, 24 Oct 2025 08:00:00 +0000</lastBuildDate>
    <pubDate>Fri, 24 Oct 2025 08:00:00 +0000</pubDate>
    <ttl>1800</ttl>
    <item>
      <title>How to Use the Power BI Desktop InfluxDB 3 ODBC Connector</title>
      <description>&lt;p&gt;The challenge of storing, processing, and alerting on your time series data is only part of the battle when it comes to deriving value from time-stamped data. While InfluxDB 3 addresses those hurdles with the database and Python processing engine, data analytics teams still need to be able to visualize their data and build dashboards to complete the time series story.&lt;/p&gt;

&lt;p&gt;A Power BI connector for InfluxDB is a long-desired feature, and I’m happy to present you with today’s tutorial, in which we’ll learn how to use the Power BI InfluxDB 3 ODBC (Open Database Connectivity) connector to bring your time series data stored in InfluxDB 3 into Power BI. This blog post assumes you have the following requirements; if not, please download and install them:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;A Docker instance of InfluxDB 3 Core or Enterprise running, or a free trial of InfluxDB 3 Cloud&lt;/li&gt;
  &lt;li&gt;Windows (For Mac users, consider using a VM like &lt;a href="https://www.parallels.com/"&gt;Parallels&lt;/a&gt;)&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://www.microsoft.com/en-us/power-platform/products/power-bi/getting-started-with-power-bi"&gt;Power BI Desktop&lt;/a&gt; (a free trial is also available)&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://docs.influxdata.com/influxdb3/enterprise/visualize-data/powerbi/"&gt;Arrow Flight SQL ODBC Driver&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://docs.influxdata.com/influxdb3/enterprise/visualize-data/powerbi/"&gt;The Power BI Desktop InfluxDB 3 connector&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Please note that this tutorial is for Power BI Desktop only.&lt;/p&gt;

&lt;h2 id="setup-walk-through"&gt;Setup walk through&lt;/h2&gt;

&lt;p&gt;In case you’re entirely new to InfluxDB 3, Docker, and Power BI, this section is for you as we’ll walk through the installation process for all the requirements above. Otherwise, feel free to skip this section and move on to the next.&lt;/p&gt;

&lt;p&gt;I’m a Mac user, so the first thing I did was install &lt;a href="https://www.parallels.com/"&gt;Parallels&lt;/a&gt; and activate its free trial. The docs and driver are easy to follow, so I won’t go into detail here; I’m assuming that most people reading this post are Windows users anyway.&lt;/p&gt;

&lt;h4 id="install-power-bi-desktop-and-the-arrow-flight-sql-odbc-driver"&gt;Install Power BI Desktop and the Arrow Flight SQL ODBC Driver&lt;/h4&gt;

&lt;p&gt;First, I &lt;a href="https://www.microsoft.com/en-us/power-platform/products/power-bi/downloads"&gt;downloaded Power BI Desktop&lt;/a&gt;. Activating your free trial is easy; just follow the prompts after installation.
&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/9d075c597fdb4aafacc10478a9cfaa8d/3d12e321aa03b89545b6fc8d10720cf1/unnamed.png" alt="" /&gt;
A screenshot of Power BI Desktop’s Free Trial Home page.&lt;/p&gt;

&lt;p&gt;To install the Arrow Flight SQL ODBC Driver, run the following commands in PowerShell, replacing &lt;code class="language-markup"&gt;YOUR_USER&lt;/code&gt; with your Windows username in each path:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-powershell"&gt;# 1) Download
Invoke-WebRequest -Uri "https://download.dremio.com/arrow-flight-sql-odbc-driver/arrow-flight-sql-odbc-LATEST-win64.msi" `
  -OutFile "C:\Users\YOUR_USER\Downloads\arrow-flight-sql-odbc-win64.msi"

# 2) Unblock (mark as trusted)
Unblock-File "C:\Users\YOUR_USER\Downloads\arrow-flight-sql-odbc-win64.msi"

# 3) Install (GUI)
Start-Process msiexec.exe -Wait -ArgumentList '/i "C:\Users\YOUR_USER\Downloads\arrow-flight-sql-odbc-win64.msi"'&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;From there, follow the installation instructions with default settings as per the driver instructions.&lt;/p&gt;

&lt;h3 id="setting-up-influxdb"&gt;Setting Up InfluxDB&lt;/h3&gt;

&lt;p&gt;Next, I set up InfluxDB 3 Core. I decided to use InfluxDB 3 Core as it is InfluxData’s OSS version of InfluxDB. However, you can also use InfluxDB 3 Enterprise and register for a free trial during setup. I used &lt;a href="https://www.docker.com/"&gt;Docker&lt;/a&gt; to run this instance; follow that link to install Docker if you don’t already have it. I also used VSCode and the &lt;a href="https://code.visualstudio.com/docs/devcontainers/containers"&gt;Dev Containers extension&lt;/a&gt; to connect to the running Docker container to run shell commands directly within the container. However, for simplicity, I’ll include &lt;a href="https://docs.docker.com/reference/cli/docker/container/exec/"&gt;Docker container exec&lt;/a&gt; commands here instead. 
&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/fa73c7e918f54730bb20fa13c32c2535/9569bdd4b975e809e3a98c77a68eab6f/unnamed.png" alt="" /&gt;
A screenshot of running commands directly within the container, after connecting with the Dev Containers extension in VS Code.&lt;/p&gt;

&lt;p&gt;You can find the complete install docs for InfluxDB 3 Core &lt;a href="https://docs.influxdata.com/influxdb3/core/install/?utm_source=website&amp;amp;utm_medium=power_bi_influxdb_3l&amp;amp;utm_content=blog"&gt;here&lt;/a&gt;. First, we need to pull the image with:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;docker pull influxdb:3-core&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Next, we can serve InfluxDB 3 Core with:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;docker run --rm --name influxdb3-core -p 8181:8181 \
  -v $PWD/data:/var/lib/influxdb3/data \
  -v $PWD/plugins:/var/lib/influxdb3/plugins \

  influxdb:3-core influxdb3 serve \
    --node-id=my-node-0 \
    --object-store=file \
    --data-dir=/var/lib/influxdb3/data \
    --plugin-dir=/var/lib/influxdb3/plugins&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This command runs an InfluxDB 3 Core container and removes it when stopped (&lt;code class="language-markup"&gt;--rm&lt;/code&gt;). It maps port &lt;code class="language-markup"&gt;8181&lt;/code&gt; from the container to your local machine so you can access the database at localhost:8181. Two local directories are mounted into the container: one for persistent data (&lt;code class="language-markup"&gt;$PWD/data&lt;/code&gt;) and one for plugins (&lt;code class="language-markup"&gt;$PWD/plugins&lt;/code&gt;). Finally, it starts the InfluxDB 3 server with a specific node ID, using the filesystem as the object store (see all the &lt;a href="https://docs.influxdata.com/influxdb3/core/reference/config-options/?utm_source=website&amp;amp;utm_medium=power_bi_influxdb_3l&amp;amp;utm_content=blog"&gt;options here&lt;/a&gt;), and points it to the mounted data and plugin directories. While we won’t be using the Python Processing Engine and plugins as a part of this tutorial, I like to serve all my instances of InfluxDB this way so it is available for future use.&lt;/p&gt;

&lt;p&gt;Now, we need to create a token for InfluxDB 3 Core and write some data to it. Let’s create that admin token with:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;# With Docker — in a new terminal:
docker exec -it influxdb3-core influxdb3 create token --admin&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Copy the token from the output; we’ll use it in the next command. To write data into InfluxDB, we first need to &lt;a href="https://docs.influxdata.com/influxdb3/core/reference/cli/influxdb3/create/database/?utm_source=website&amp;amp;utm_medium=power_bi_influxdb_3l&amp;amp;utm_content=blog"&gt;create a database&lt;/a&gt; to write data to:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;docker exec -it influxdb3-core influxdb3 create database --token AUTH_TOKEN DATABASE_NAME&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Replace &lt;code class="language-markup"&gt;AUTH_TOKEN&lt;/code&gt; with the token you just created, and &lt;code class="language-markup"&gt;DATABASE_NAME&lt;/code&gt; with a name of your choice—I used &lt;code class="language-markup"&gt;mydatabase&lt;/code&gt;, for example. Congratulations! You’ve successfully set up InfluxDB 3 Core.&lt;/p&gt;

&lt;h2 id="writing-data-to-influxdb-3-core"&gt;Writing data to InfluxDB 3 Core&lt;/h2&gt;

&lt;p&gt;Now we’re ready to write some data to InfluxDB 3 Core so we can query and visualize it in Power BI Desktop. But before we do, I want to showcase a useful feature in the docs:&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/9244db4416ee4fba90e33189d11082d5/42868bb32948780036edb0e874991399/unnamed.png" alt="" /&gt;&lt;/p&gt;

&lt;p&gt;Editing code snippets in the InfluxDB Docs to more easily write data to InfluxDB 3.&lt;/p&gt;

&lt;p&gt;Notice the red text: there you can edit any variables you need directly within the docs, like tokens and database names, to more easily copy code snippets.&lt;/p&gt;

&lt;p&gt;Right, let’s get back to actually writing some sample data with the following:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 write \
  --database DATABASE_NAME \
  --token AUTH_TOKEN \
  --precision s \
'home,room=Living\ Room temp=21.1,hum=35.9,co=0i 1641024000
home,room=Kitchen temp=21.0,hum=35.9,co=0i 1641024000
home,room=Living\ Room temp=21.4,hum=35.9,co=0i 1641027600
home,room=Kitchen temp=23.0,hum=36.2,co=0i 1641027600
home,room=Living\ Room temp=21.8,hum=36.0,co=0i 1641031200
home,room=Kitchen temp=22.7,hum=36.1,co=0i 1641031200
home,room=Living\ Room temp=22.2,hum=36.0,co=0i 1641034800
home,room=Kitchen temp=22.4,hum=36.0,co=0i 1641034800
home,room=Living\ Room temp=22.2,hum=35.9,co=0i 1641038400
home,room=Kitchen temp=22.5,hum=36.0,co=0i 1641038400
home,room=Living\ Room temp=22.4,hum=36.0,co=0i 1641042000
home,room=Kitchen temp=22.8,hum=36.5,co=1i 1641042000
home,room=Living\ Room temp=22.3,hum=36.1,co=0i 1641045600
home,room=Kitchen temp=22.8,hum=36.3,co=1i 1641045600
home,room=Living\ Room temp=22.3,hum=36.1,co=1i 1641049200
home,room=Kitchen temp=22.7,hum=36.2,co=3i 1641049200
home,room=Living\ Room temp=22.4,hum=36.0,co=4i 1641052800
home,room=Kitchen temp=22.4,hum=36.0,co=7i 1641052800
home,room=Living\ Room temp=22.6,hum=35.9,co=5i 1641056400
home,room=Kitchen temp=22.7,hum=36.0,co=9i 1641056400
home,room=Living\ Room temp=22.8,hum=36.2,co=9i 1641060000
home,room=Kitchen temp=23.3,hum=36.9,co=18i 1641060000
home,room=Living\ Room temp=22.5,hum=36.3,co=14i 1641063600
home,room=Kitchen temp=23.1,hum=36.6,co=22i 1641063600
home,room=Living\ Room temp=22.2,hum=36.4,co=17i 1641067200
home,room=Kitchen temp=22.7,hum=36.5,co=26i 1641067200'&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The sample dataset consists of sensor data recording the temperature, humidity, and carbon monoxide levels of various rooms in a house. Alternatively, you can write a file of line protocol data with the &lt;a href="https://docs.influxdata.com/influxdb3/core/write-data/influxdb3-cli/?t=file&amp;amp;utm_source=website&amp;amp;utm_medium=power_bi_influxdb_3l&amp;amp;utm_content=blog"&gt;file flag&lt;/a&gt;, use a &lt;a href="https://docs.influxdata.com/influxdb3/core/write-data/client-libraries/?utm_source=website&amp;amp;utm_medium=power_bi_influxdb_3l&amp;amp;utm_content=blog"&gt;client library&lt;/a&gt;, or use &lt;a href="https://docs.influxdata.com/influxdb3/core/write-data/use-telegraf/?utm_source=website&amp;amp;utm_medium=power_bi_influxdb_3l&amp;amp;utm_content=blog"&gt;Telegraf&lt;/a&gt;.&lt;/p&gt;
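&lt;p&gt;If the line protocol above looks unfamiliar, each entry follows the pattern measurement,tags fields timestamp. Purely as an illustration (this sketch is not part of the tutorial), here is how one of those rows is assembled in Python, and what the seconds-precision timestamps decode to:&lt;/p&gt;

```python
from datetime import datetime, timezone

def escape_tag(value):
    """Tag values containing spaces must be escaped with a backslash."""
    return value.replace(" ", "\\ ")

def to_line_protocol(measurement, tags, fields, ts_seconds):
    """Build one line protocol entry: measurement,tag_set field_set timestamp."""
    tag_set = ",".join(f"{k}={escape_tag(v)}" for k, v in tags.items())
    # Integer fields carry an 'i' suffix; floats are written as-is.
    field_set = ",".join(
        f"{k}={v}i" if isinstance(v, int) else f"{k}={v}"
        for k, v in fields.items()
    )
    return f"{measurement},{tag_set} {field_set} {ts_seconds}"

line = to_line_protocol(
    "home", {"room": "Living Room"}, {"temp": 21.1, "hum": 35.9, "co": 0}, 1641024000
)
print(line)  # home,room=Living\ Room temp=21.1,hum=35.9,co=0i 1641024000

# With --precision s, the trailing value is a Unix timestamp in seconds:
print(datetime.fromtimestamp(1641024000, tz=timezone.utc))  # 2022-01-01 08:00:00+00:00
```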

&lt;h2 id="enabling-the-power-bi-desktop-influxdb-3-connector"&gt;Enabling the Power BI Desktop InfluxDB 3 Connector&lt;/h2&gt;

&lt;p&gt;We’re ready to set up the gateway to query our InfluxDB 3 instance, connect to our InfluxDB 3 Core instance, query our data, and create visualizations in Power BI Desktop.&lt;/p&gt;

&lt;h4 id="connector-extensibility"&gt;Connector Extensibility&lt;/h4&gt;

&lt;p&gt;For this tutorial, we’re connecting InfluxDB 3 to Power BI through a custom connector. To enable custom connectors, I followed the Power BI Desktop custom &lt;a href="https://learn.microsoft.com/en-us/power-bi/connect-data/desktop-connector-extensibility"&gt;Connector Extensibility documentation&lt;/a&gt;. Make sure you have downloaded the .pqx file from the Power BI Desktop InfluxDB 3 Connector download page, then copy or move it to &lt;code class="language-markup"&gt;[Documents]\Power BI Desktop\Custom Connectors&lt;/code&gt; (create those folders if they don’t exist) with:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;# 1. Create the Power BI custom connectors folder if it doesn't exist
mkdir -p ~/Documents/Power\ BI\ Desktop/Custom\ Connectors

# 2. Move your connector into that folder
mv InfluxDB.pqx ~/Documents/Power\ BI\ Desktop/Custom\ Connectors/&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now, return to Power BI Desktop. From the home page, select &lt;strong&gt;Options and Settings&lt;/strong&gt; in the bottom left, then &lt;strong&gt;Options.&lt;/strong&gt;
&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/bc92866eb6de4d8baef7d995197ae6db/e3f2efa5672c91245393e2408489035c/unnamed.png" alt="" /&gt;
Screenshot of the &lt;strong&gt;options and settings&lt;/strong&gt; page in Power BI Desktop, the first step in enabling the InfluxDB Connector.
&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/b590d907de80496e8e439601b870a534/b99eaf6db38194bf3a03be8e4e474204/unnamed.png" alt="" /&gt;
On the &lt;strong&gt;Options&lt;/strong&gt; page in Power BI Desktop, click on “&lt;strong&gt;(Not Recommended) Allow any extension to load without validation or warning&lt;/strong&gt;.”&lt;/p&gt;

&lt;p&gt;Navigate to &lt;strong&gt;Security&lt;/strong&gt; and select &lt;strong&gt;(Not Recommended) Allow any extension to load without validation or warning&lt;/strong&gt; under &lt;strong&gt;Data Extensions&lt;/strong&gt;. As per the Connector Extensibility docs, “The default Power BI Desktop data extension security setting is ‘(Recommended) Only allow Microsoft certified and other trusted third-party extensions to load.’ With this setting, if uncertified custom connectors are on your system, the Uncertified Connectors dialog box appears at startup and lists the connectors that can’t load.”&lt;/p&gt;

&lt;p&gt;After you’ve changed these settings, restart Power BI Desktop so that the connector becomes discoverable.&lt;/p&gt;

&lt;h4 id="connect-to-influxdb-3"&gt;Connect to InfluxDB 3&lt;/h4&gt;

&lt;p&gt;Return to the Power BI Desktop home page. Select &lt;strong&gt;Get Data&lt;/strong&gt; from the &lt;strong&gt;Home&lt;/strong&gt; ribbon in Power BI Desktop. Search and select &lt;strong&gt;InfluxDB 3&lt;/strong&gt;.
&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/3e9af04999684747b6428d6d996776f0/b15eb2c7eee01efcd7b2ce49d1bc503e/unnamed.png" alt="" /&gt;
A screenshot of Power BI Desktop, searching for the InfluxDB 3 Custom connector.&lt;/p&gt;

&lt;p&gt;We’re ready to use the connector to get data from our InfluxDB 3 Core instance. First, we need to specify the server URL. Since this is running in a Docker container on our machine, we can use http://localhost:8181 for the &lt;strong&gt;Server&lt;/strong&gt; field. However, since I was running Windows in Parallels, I specified the IP directly with http://10.211.55.2. Make sure to include your database name and the port (&lt;code class="language-markup"&gt;8181&lt;/code&gt;). Finally, we’ll select the &lt;strong&gt;Native Query&lt;/strong&gt; option and include the query for the data that we want to pull into Power BI.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Important Note:&lt;/strong&gt; Because InfluxDB can handle high-throughput, highly dimensional data, make sure to limit the size of the query request for each data source collection so that Power BI can handle the data volume. Limit the size by using a &lt;code class="language-markup"&gt;LIMIT&lt;/code&gt; clause or specifying time ranges, and optionally by filtering on columns (or tags).&lt;/p&gt;
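&lt;p&gt;For example, a native query against the home table from this tutorial that follows that advice selects only the columns it needs, bounds the time range, and caps the row count. A small Python sketch (the table and column names come from the sample data; the helper itself is illustrative, not part of the connector):&lt;/p&gt;

```python
def bounded_query(table, start, stop, columns=("time", "room", "temp"), limit=1000):
    """Compose a native SQL query that keeps results small: explicit columns,
    a bounded time range, and a LIMIT clause."""
    return (
        f"SELECT {', '.join(columns)} FROM {table} "
        f"WHERE time BETWEEN '{start}' AND '{stop}' "
        f"LIMIT {limit}"
    )

print(bounded_query("home", "2022-01-01T00:00:00Z", "2022-01-02T00:00:00Z"))
```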

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/1a7ea7ddc5b2463883015fc02d35823a/ad78da11991957438a54dc513f5f4620/unnamed.png" alt="" /&gt;
Querying InfluxDB 3 Core with a Native Query In Power BI Desktop.&lt;/p&gt;

&lt;p&gt;Select &lt;strong&gt;DirectQuery&lt;/strong&gt; as your query type and make sure to apply &lt;strong&gt;Native Query&lt;/strong&gt;. 
&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/d3c58bded58e4d16ac14096022e87399/230a03189196d7df3572a3f1a39bfff4/unnamed.png" alt="" /&gt;
Select &lt;strong&gt;Direct Query&lt;/strong&gt; as your &lt;strong&gt;Data Connectivity mode&lt;/strong&gt; when configuring the Power BI Desktop InfluxDB3 connector.&lt;/p&gt;

&lt;p&gt;Finally, make sure to add your token on the next page in order to complete the connection to your InfluxDB 3 instance:
&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/03d5da8cd4f1422bbf05d3451f7394a9/7bc26974b73151595bdbfe09847e5294/unnamed.png" alt="" /&gt;
The final step in connecting to your InfluxDB 3 instance with a custom connector in Power BI Desktop.&lt;/p&gt;

&lt;p&gt;We are then offered a preview of the data we want to load into Power BI Desktop, confirming a successful connection. Yay! We did it! Click &lt;strong&gt;Load&lt;/strong&gt; to load the data. 
&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/3dd8c0d5849e410fa2903050c3978ad2/7a56f317e75cd2cb0086393f9a4925b8/unnamed.png" alt="" /&gt;
Preview of the time series data from InfluxDB 3 to be loaded into Power BI Desktop.&lt;/p&gt;

&lt;h2 id="visualizing-data-from-influxdb-3-in-power-bi-desktop"&gt;Visualizing data from InfluxDB 3 in Power BI Desktop&lt;/h2&gt;

&lt;p&gt;Congratulations! We’re now able to see our data in Power BI, apply filters, and create an assortment of visualizations. In the screenshot below, we have a table view of all our data. We can apply filters to that data through the &lt;strong&gt;Filters&lt;/strong&gt; panel. Specify the columns you want to include in your table in the right-most &lt;strong&gt;Query&lt;/strong&gt; pane; in the screenshot below, I selected the &lt;strong&gt;sum of co&lt;/strong&gt;, &lt;strong&gt;room&lt;/strong&gt;, and &lt;strong&gt;time&lt;/strong&gt;. Fields are summed by default. If you want raw values instead, select the field in the Fields pane or the Visualizations pane, go to the Modeling tab, and change the Default Summarization to “Don’t Summarize.” 
&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/1aaf30cb851b47849dc691c0c514f533/65e5ec148ffc92c76922b5dd302f877f/unnamed.png" alt="" /&gt;
Viewing summarized time series data (default) in Power BI Desktop from InfluxDB 3.&lt;/p&gt;

&lt;p&gt;Click on different visualization types in the middle &lt;strong&gt;Visualizations&lt;/strong&gt; control panel, e.g., a line graph:&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/3095b1a2a3b446de921301e4a439eef7/6a5a960a315cc7b88aeef9e7383a9972/unnamed.png" alt="" /&gt;&lt;/p&gt;

&lt;p&gt;A line graph with data from InfluxDB 3 in Power BI Desktop.&lt;/p&gt;

&lt;h2 id="final-thoughts"&gt;Final thoughts&lt;/h2&gt;

&lt;p&gt;I hope this tutorial has helped you load your time series data from InfluxDB 3 into Power BI Desktop. This integration is a long-awaited feature request, and I’m delighted that we’ve been able to deliver it to our users. For those who are new to InfluxDB, I want to take this opportunity to share other visualization tools that InfluxDB 3 offers, in case you’re still exploring visualization and dashboarding options:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href="https://docs.influxdata.com/influxdb3/enterprise/visualize-data/grafana/?utm_source=website&amp;amp;utm_medium=power_bi_influxdb_3l&amp;amp;utm_content=blog"&gt;Grafana&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://docs.influxdata.com/influxdb3/cloud-dedicated/process-data/visualize/superset/?utm_source=website&amp;amp;utm_medium=power_bi_influxdb_3l&amp;amp;utm_content=blog"&gt;Superset&lt;/a&gt; (for InfluxDB Cloud)&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://docs.influxdata.com/influxdb3/cloud-dedicated/process-data/visualize/tableau/?utm_source=website&amp;amp;utm_medium=power_bi_influxdb_3l&amp;amp;utm_content=blog"&gt;Tableau&lt;/a&gt; (for InfluxDB Cloud)&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://www.influxdata.com/blog/building-streamlit-applications-influxdb-3/?utm_source=website&amp;amp;utm_medium=power_bi_influxdb_3l&amp;amp;utm_content=blog"&gt;Streamlit&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As always, if you need help, please contact us on our &lt;a href="https://community.influxdata.com/?utm_source=website&amp;amp;utm_medium=power_bi_influxdb_3l&amp;amp;utm_content=blog"&gt;community site&lt;/a&gt; or &lt;a href="https://influxdata.com/slack/?utm_source=website&amp;amp;utm_medium=power_bi_influxdb_3l&amp;amp;utm_content=blog"&gt;Slack channel&lt;/a&gt;. If you are also working on a visualization project with InfluxDB, I’d love to hear from you! Please share your feedback on those channels as well.&lt;/p&gt;
</description>
      <pubDate>Fri, 24 Oct 2025 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/power-bi-influxdb-3/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/power-bi-influxdb-3/</guid>
      <category>Developer</category>
      <author>Anais Dotis-Georgiou (InfluxData)</author>
    </item>
    <item>
      <title>Edge Data Replication: Contributions and Status Updates for InfluxDB 3</title>
      <description>&lt;p&gt;If you’ve ever stood up multiple edge InfluxDB instances in remote locations and wished you could consolidate their data into a centralized instance for analysis, you’re not alone. That’s exactly why we designed Edge Data Replication (EDR) in InfluxDB v2. Now, with InfluxDB 3 Core and 3 Enterprise, we’re seeing new ways to handle replication using the brand-new Python Processing Engine.&lt;/p&gt;

&lt;p&gt;In this post, I’ll walk you through:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;What EDR looked like in InfluxDB v2 and why it matters for TSDBs&lt;/li&gt;
  &lt;li&gt;The new Python plugin engine for InfluxDB 3 Core and 3 Enterprise&lt;/li&gt;
  &lt;li&gt;A community-built EDR plugin ready for immediate use&lt;/li&gt;
  &lt;li&gt;Setup instructions and requirements&lt;/li&gt;
  &lt;li&gt;A sneak peek at official support and how to find both official and community plugins&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id="edr-in-influxdb-v2"&gt;EDR in InfluxDB v2&lt;/h3&gt;

&lt;p&gt;In InfluxDB v2, Edge Data Replication gave you a built-in way to forward writes from an edge instance (like InfluxDB OSS) to a remote InfluxDB Cloud bucket. It handled durability, retries, compression, filtering, and optional aggregation. This made it easy to collect data from the edge and then stream it back to a centralized system for backup, analysis, or visualization.&lt;/p&gt;

&lt;p&gt;In InfluxDB 3 Core and Enterprise, the architecture has changed. There’s no built-in EDR. Instead, we have a Python-based Processing Engine, which is &lt;strong&gt;even more flexible&lt;/strong&gt;.&lt;/p&gt;

&lt;h2 id="introducing-the-python-processing-engine-for-influxdb-3"&gt;Introducing the Python Processing Engine for InfluxDB 3&lt;/h2&gt;

&lt;p&gt;InfluxDB 3 Core and Enterprise now support a Python-based plugin engine. It lets you run Python code inside the database, triggered by either:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;a WAL flush (new data arriving)&lt;/li&gt;
  &lt;li&gt;a schedule (like a cron job)&lt;/li&gt;
  &lt;li&gt;or a manual request (an HTTP API call you define)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This means you can process data as it’s written, transform or downsample it, send alerts—or, yes, replicate it elsewhere. To activate it, you pass &lt;code class="language-markup"&gt;--plugin-dir&lt;/code&gt; when starting the server and drop your .py plugin files in that directory. We’ll walk through how to use a data replicator plugin later in this post.&lt;/p&gt;
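&lt;p&gt;As a sketch of the WAL-flush case: a plugin file exposes an entry point that the engine calls with the newly written rows. The process_writes name and the table_batches shape below follow the processing engine docs at the time of writing, but treat them as assumptions and check the current plugin documentation; the hand-built batch lets the sketch run without a server.&lt;/p&gt;

```python
# Sketch of a WAL-flush plugin body. The entry-point name and the batch shape
# are assumptions based on the processing engine docs; verify before relying on them.
def process_writes(influxdb3_local, table_batches, args=None):
    summaries = []
    for batch in table_batches:
        # Each batch carries a table name and the rows just flushed from the WAL.
        summaries.append(f"{batch['table_name']}: {len(batch['rows'])} row(s)")
    return summaries

# Exercise it with a hand-built batch (no database involved):
fake_batches = [{"table_name": "home", "rows": [{"room": "Kitchen", "temp": 21.0}]}]
print(process_writes(None, fake_batches))  # ['home: 1 row(s)']
```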

&lt;h2 id="a-devrel-contributed-data-replicator-plugin"&gt;A DevRel-contributed data replicator plugin&lt;/h2&gt;

&lt;p&gt;The official EDR plugin for InfluxDB 3 is expected to be released soon—it’s currently undergoing final testing. In the meantime, I wanted to share Developer Advocate &lt;a href="https://github.com/suyashcjoshi"&gt;@suyashcjoshi&lt;/a&gt;’s early version of a data replicator plugin, which is available today and informed the design of the official plugin.&lt;/p&gt;

&lt;p&gt;You can find his &lt;a href="https://github.com/influxdata/influxdb3_plugins/tree/main/suyashcjoshi/data-replicator"&gt;InfluxDB 3 Custom Data Replication Plugin&lt;/a&gt; in our &lt;a href="https://github.com/influxdata/influxdb3_plugins"&gt;public plugin repository&lt;/a&gt;. It works with both InfluxDB 3 Core and 3 Enterprise and includes:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Replication to InfluxDB Cloud (Serverless or Dedicated) or InfluxDB 3 Enterprise&lt;/li&gt;
  &lt;li&gt;Table filtering&lt;/li&gt;
  &lt;li&gt;Optional downsampling via &lt;code class="language-markup"&gt;aggregate_interval&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;A compressed queue (&lt;code class="language-markup"&gt;edr_queue.jsonl.gz&lt;/code&gt;) for durability&lt;/li&gt;
&lt;/ul&gt;
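&lt;p&gt;The compressed queue is what gives the plugin durability: records that haven’t been delivered yet are appended to a gzip-compressed JSON Lines file and drained on retry. A minimal sketch of that pattern (the real plugin’s file handling may differ; the temp-dir path here is just for the sketch):&lt;/p&gt;

```python
import gzip
import json
import os
import tempfile

queue_path = os.path.join(tempfile.mkdtemp(), "edr_queue.jsonl.gz")

def enqueue(path, record):
    # gzip members can be concatenated, so opening in append-text mode is valid.
    with gzip.open(path, "at", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def drain(path):
    # The gzip module reads concatenated members transparently.
    with gzip.open(path, "rt", encoding="utf-8") as f:
        return [json.loads(line) for line in f]

enqueue(queue_path, {"table": "cpu", "fields": {"usage": 42.0}, "time": 1641024000})
enqueue(queue_path, {"table": "mem", "fields": {"used": 1.5}, "time": 1641024000})
print([r["table"] for r in drain(queue_path)])  # ['cpu', 'mem']
```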

&lt;p&gt;Like all plugins, it works with any data source, whether you’re writing via &lt;a href="https://www.influxdata.com/time-series-platform/telegraf/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=edge_data_replication_status_influxdb_3&amp;amp;utm_content=blog"&gt;Telegraf&lt;/a&gt;, a script, or your own app.&lt;/p&gt;

&lt;h2 id="requirements"&gt;Requirements&lt;/h2&gt;

&lt;p&gt;Here’s what you’ll need to run this plugin:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;InfluxDB 3 Core or 3 Enterprise&lt;/li&gt;
  &lt;li&gt;A free trial of InfluxDB Cloud Serverless&lt;/li&gt;
  &lt;li&gt;InfluxDB 3 Python client: &lt;code class="language-markup"&gt;pip install influxdb3-python&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;Python 3.7+&lt;/li&gt;
  &lt;li&gt;Optional: Telegraf for simulating system metrics&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="how-to-use-the-plugin"&gt;How to use the plugin&lt;/h2&gt;

&lt;p&gt;Here’s a quick-start guide for testing this plugin locally and replicating the data to InfluxDB Cloud Serverless.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Start InfluxDB with the Plugin Directory.

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;mkdir -p ~/.plugins
cp data-replicator.py ~/.plugins/

influxdb3 serve \

  --node-id host01 \
  --object-store file \
  --data-dir ~/.influxdb3 \
  --plugin-dir ~/.plugins&lt;/code&gt;&lt;/pre&gt;
  &lt;/li&gt;
&lt;br /&gt;

&lt;li&gt; Download and install the InfluxDB 3 Python Client.

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 install package influxdb3-python&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;br /&gt;
&lt;li&gt; (Optional) Run Telegraf. You can elect to run the Telegraf config that’s provided in the data replicator directory to quickly write system stats from your machine to your local InfluxDB 3 instance. You simply have to provide an authorization token. Run the following commands to generate a token and run the Telegraf config.
&lt;br /&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb 3 create token 
telegraf --config telegraf.conf&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt; 
  &lt;br /&gt;

&lt;li&gt; Create and enable the Replication Trigger. Here’s how you replicate all tables to a cloud instance:

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 create trigger \
  -d mydb \
  --plugin-filename data-replicator.py \
  --trigger-spec "all_tables" \
  --trigger-arguments "host=YOUR_HOST,token=YOUR_TOKEN,database=TARGET_DB" \
  data_replicator_trigger&lt;/code&gt;&lt;/pre&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 enable trigger --database mydb data_replicator_trigger&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt; &lt;/ol&gt;

&lt;p&gt;If you want to replicate only some tables or downsample the data to one-minute average values, you can add the following trigger arguments to the “create trigger” command above.&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;--trigger-arguments "tables=cpu,aggregate_interval=1m"&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id="whats-next-official-edr-and-downsampling-plugin-status"&gt;What’s next: official EDR and Downsampling Plugin status&lt;/h2&gt;

&lt;p&gt;We’re working on an officially-supported EDR Plugin that will:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Data Replication&lt;/strong&gt;: Replicate data from a local InfluxDB 3 instance to a remote one.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Filtering&lt;/strong&gt;: Specify which tables to replicate and which fields to exclude.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Renaming&lt;/strong&gt;: Rename tables and fields during replication.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Downsampling&lt;/strong&gt;: When enabled, downsample all data within the specified time window for scheduled triggers or for each individual run for data write triggers.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Queue Management&lt;/strong&gt;: Use a compressed JSONL queue file for reliable delivery.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Retry Logic&lt;/strong&gt;: Handle errors and rate limits with retry mechanisms.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We’re also developing a robust Downsampling Plugin that you can use alongside the EDR plugin that offers the following features:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Flexible Downsampling&lt;/strong&gt;: Aggregate data over specified time intervals (e.g., seconds, minutes, hours, days, weeks, months, quarters, or years) using functions like avg, sum, min, max, or derivative.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Field and Tag Filtering&lt;/strong&gt;: Select specific fields for aggregation and filter by tag values.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Scheduler and HTTP Support&lt;/strong&gt;: Run periodically via InfluxDB triggers or on-demand via HTTP requests.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Retry Logic&lt;/strong&gt;: Configurable retries for robust write operations.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Batch Processing&lt;/strong&gt;: Process large datasets in configurable time batches for HTTP requests.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Backfill Support&lt;/strong&gt;: Downsample historical data within a specified time window.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Task ID Tracking&lt;/strong&gt;: Each execution generates a unique task_id included in logs and error messages for traceability in late-arriving data.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Metadata Columns&lt;/strong&gt;: Each downsampled record includes three additional columns to verify accurate downsampling and easily re-downsample data in the face of late-arriving data.&lt;/li&gt;
&lt;/ul&gt;
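&lt;p&gt;To make “aggregate data over specified time intervals” concrete, here is what a one-minute average boils down to, sketched in plain Python (the plugin itself will expose this through configuration rather than code):&lt;/p&gt;

```python
from collections import defaultdict

def downsample_avg(points, interval_seconds=60):
    """Group (unix_seconds, value) pairs into fixed intervals and average each."""
    buckets = defaultdict(list)
    for ts, value in points:
        # Align each timestamp to the start of its interval.
        buckets[ts - ts % interval_seconds].append(value)
    return {start: sum(vals) / len(vals) for start, vals in buckets.items()}

points = [(1641024000, 21.0), (1641024030, 22.0), (1641024060, 23.0)]
print(downsample_avg(points))  # {1641024000: 21.5, 1641024060: 23.0}
```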

&lt;h2 id="final-thoughts"&gt;Final thoughts&lt;/h2&gt;

&lt;p&gt;Edge Data Replication isn’t just back; it’s more flexible than ever thanks to the Python Processing Engine. With InfluxDB 3 and the new Processing Engine, you can write plugins that transform, downsample, and replicate data in real time, right from the edge. You’ll also be able to take the official plugins and easily adapt them to meet your own needs. I hope this post encourages you to contribute a plugin of your own. Check out our Getting Started Guides for &lt;a href="https://docs.influxdata.com/influxdb3/core/get-started/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=edge_data_replication_status_influxdb_3&amp;amp;utm_content=blog"&gt;Core&lt;/a&gt; and &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/get-started/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=edge_data_replication_status_influxdb_3&amp;amp;utm_content=blog"&gt;Enterprise&lt;/a&gt;, and share your feedback with our development team in the #influxdb3_core channel on &lt;a href="https://discord.com/invite/vZe2w2Ds8B"&gt;Discord&lt;/a&gt; or &lt;a href="https://influxdata.com/slack/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=edge_data_replication_status_influxdb_3&amp;amp;utm_content=blog"&gt;Slack&lt;/a&gt;, or on our &lt;a href="https://community.influxdata.com/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=edge_data_replication_status_influxdb_3&amp;amp;utm_content=blog"&gt;Community Forums&lt;/a&gt;.&lt;/p&gt;
</description>
      <pubDate>Tue, 03 Jun 2025 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/edge-data-replication-status-influxdb-3/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/edge-data-replication-status-influxdb-3/</guid>
      <category>Developer</category>
      <author>Anais Dotis-Georgiou (InfluxData)</author>
    </item>
    <item>
      <title>Forecasting with InfluxDB 3 and HuggingFace</title>
      <description>&lt;p&gt;Machine learning models must do more than make accurate predictions; they also need to adapt as the world around them changes. In real-world systems, data distributions shift due to seasonality, equipment wear, user behavior changes, or other external forces. If your models can’t keep up, the result is poor predictions. This can lead to outages, inefficiencies, or missed opportunities. That’s why forecasting systems need to be monitored and resilient, not just accurate. In this post, we’ll walk through a full-stack ML demo that shows exactly how to do that. We combine ML model monitoring with tooling to forecast time series data, detect when models start to drift, and automatically retrain them in response.&lt;/p&gt;

&lt;p&gt;For example, in an industrial IoT setting, a predictive maintenance model might learn to detect motor failure based on temperature and vibration data. But over time, the equipment wears down, operators change procedures, or sensors begin to degrade, causing the model’s assumptions about “normal” to drift. If the model isn’t updated, it may miss early failure warnings or raise false alarms, leading to costly downtime or unnecessary maintenance. In finance, a trading model might rely on patterns that no longer hold true after market volatility shifts, leading to poor decisions if it isn’t retrained. These are the kinds of scenarios where detecting and responding to drift in real time becomes critical.&lt;/p&gt;

&lt;p&gt;The demo uses a PyTorch-based LSTM model for forecasting, InfluxDB 3 for storing time series data and model metrics, and Hugging Face Hub for cloud-based model storage and versioning. It’s all wrapped in a Streamlit app so you can interactively explore the pipeline, from synthetic data generation to drift-aware retraining. InfluxDB, a purpose-built time series database (TSDB), is particularly well suited for this kind of pipeline. Time series workloads are naturally indexed by time and benefit from TSDB features like fast temporal queries, downsampling, retention policies, and streaming ingestion. By storing everything from raw inputs to forecasts and retraining events in InfluxDB, you get efficient storage and retrieval and a transparent audit trail of your model’s behavior over time.&lt;/p&gt;

&lt;p&gt;The pipeline follows a modular ML lifecycle: generating sine wave-based time series data, writing that data into InfluxDB 3 Core (open source), and training an initial model. The model is then saved to Hugging Face. You can then simulate concept drift by injecting noise or offsets, and the system will automatically detect degraded performance using MSRE and MSE thresholds. When drift is detected, the pipeline allows you to trigger a retraining process, uploads the new model to Hugging Face, and logs the event for traceability. Every stage, from data generation to initial training, forecasts, drift detection, and retraining events, is captured in InfluxDB 3 Core to ensure full visibility of the model’s health over time.&lt;/p&gt;

&lt;p&gt;For this demo, we’re using InfluxDB 3 Core, the open source version for recent data. It’s ideal for development, prototyping, and recent time series workloads. However, if you plan to scale this pipeline and include high availability, scalability, enhanced security, and long-term storage, then InfluxDB 3 Enterprise is a better fit. Luckily, switching between products is easy. InfluxDB 3 Enterprise is a superset of InfluxDB 3 Core, and migration happens in-place (for this project, it’s as simple as pulling the Enterprise docker image instead of the Core image).&lt;/p&gt;

&lt;p&gt;This project was built with Replit; you can find the corresponding project repo &lt;a href="https://github.com/InfluxCommunity/influxdb3_huggingface_forecasting_monitoring"&gt;here&lt;/a&gt;. 
&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/0495deeaad6149dca6b73ddfe91dab20/4b341b284802669f34b497d4afc06b3d/unnamed.png" alt="" /&gt;
A screenshot from the Streamlit application showing the difference between the original time series and the drifted data.&lt;/p&gt;

&lt;h2 id="requirements"&gt;Requirements&lt;/h2&gt;

&lt;p&gt;To run this project locally, you’ll need the following:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Python 3.11+&lt;/strong&gt;: For model training, inference, and app execution&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://www.influxdata.com/downloads/"&gt;InfluxDB&lt;/a&gt;: To store your time series data&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://docs.influxdata.com/influxdb3/core/install/#docker-image"&gt;Docker&lt;/a&gt;: Used to spin up a local instance of InfluxDB 3 Core&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://huggingface.co/"&gt;Hugging Face Account&lt;/a&gt;: To upload and download models from your repository&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://pytorch.org/"&gt;PyTorch&lt;/a&gt;: Required libraries for LSTM model implementation and persistence&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://streamlit.io/"&gt;Streamlit&lt;/a&gt;: To power the interactive UI that orchestrates the pipeline&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="quick-start"&gt;Quick start&lt;/h2&gt;

&lt;h5 id="clone-the-repository-and-set-up-the-environment"&gt;1. Clone the repository and set up the environment.&lt;/h5&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;git clone https://github.com/yourusername/lstm-forecasting-drift-detection.git
cd lstm-forecasting-drift-detection
python -m venv venv
source venv/bin/activate  # On Windows, use: venv\Scripts\activate
pip install -r requirements.txt&lt;/code&gt;&lt;/pre&gt;

&lt;h5 id="start-influxdb-3-core-with-docker"&gt;2. Start InfluxDB 3 Core with Docker.&lt;/h5&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: We are mounting our plugin directory here to prepare for this project’s next evolution, which utilizes the InfluxDB 3 Python Processing Engine.&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;docker run -it --rm --name influxdb3core \
  -v ~/influxdb3/data:/var/lib/influxdb3 \
  -v ~/influxdb3_huggingface_forecasting_monitoring/plugins:/plugins \
  -p 8181:8181 \
  quay.io/influxdb/influxdb3-core:latest serve \
  --node-id my_host \
  --object-store file \
  --data-dir /var/lib/influxdb3 \
  --plugin-dir /plugins&lt;/code&gt;&lt;/pre&gt;

&lt;h5 id="create-the-database-and-auth-token-for-influxdb-3-core"&gt;3. Create the database and auth token for InfluxDB 3 Core.&lt;/h5&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;docker exec -it influxdb3core influxdb3 database create --name timeseries
docker exec -it influxdb3core influxdb3 auth create --name my-token&lt;/code&gt;&lt;/pre&gt;

&lt;h5 id="set-up-environment-variables-create-a-env-file-in-the-project-root"&gt;4. Set up environment variables. Create a .env file in the project root.&lt;/h5&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;env
INFLUXDB_HOST=http://localhost:8181
INFLUXDB_TOKEN=your_influxdb_token_here
INFLUXDB_DATABASE=timeseries
HF_TOKEN=your_huggingface_token_here
HF_REPO_ID=your_username/your_repo_name&lt;/code&gt;&lt;/pre&gt;

&lt;h5 id="run-the-app"&gt;5. &lt;strong&gt;Run the app&lt;/strong&gt;.&lt;/h5&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;streamlit run [app.py](http://app.py)&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The Streamlit dashboard will be live at http://localhost:5000.
&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/0378dbf7c34d423fa656a0bb5133f1fa/a404ebb804852f10bc691e51190304a5/unnamed.png" alt="" /&gt;
The Streamlit app: the first step runs you through configuring your InfluxDB instance. This will automatically be populated with environment variables, but you can choose to configure it through the UI instead.&lt;/p&gt;

&lt;h2 id="app-walkthrough"&gt;App walkthrough&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Tab 0 InfluxDB Configuration&lt;/strong&gt;: First, we need to connect our application to an InfluxDB instance to store our time series data and model metrics. The InfluxDB Configuration tab makes this straightforward. The app will also automatically populate your &lt;strong&gt;Connection Settings&lt;/strong&gt; with env variables if you provide them. 
&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/7d7c96dffc814a6f998863beb422f6d4/158fcb6c2fb1c6059d3203444c7e4ca6/unnamed.png" alt="" /&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tab 1 Data Generation:&lt;/strong&gt; Now that we’re connected, let’s generate some synthetic time series data to work with. This tab allows us to create a clean sine wave with customizable parameters.&lt;/p&gt;

&lt;p&gt;Using the sidebar sliders, you can adjust:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Number of data points: Controls how many data points to generate&lt;/li&gt;
  &lt;li&gt;Noise level: Adds random variation to the sine wave&lt;/li&gt;
  &lt;li&gt;Frequency factor: Changes how quickly the wave oscillates&lt;/li&gt;
  &lt;li&gt;Amplitude: Sets the height of the wave&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Click &lt;strong&gt;Generate New Data&lt;/strong&gt;, and the application will:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Create a synthetic time series based on your parameters&lt;/li&gt;
  &lt;li&gt;Split it into training and testing sets&lt;/li&gt;
  &lt;li&gt;Write it to InfluxDB (if connected)&lt;/li&gt;
  &lt;li&gt;Display a visualization of the generated data&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/16457458cef54dc4a8bc47d230a83a10/80581597fa6c97c7c07e7be0b2ec5bb9/unnamed.png" alt="" /&gt;&lt;/p&gt;
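&lt;p&gt;Conceptually, the generation step is just a noisy sine wave plus a chronological train/test split. Here&#8217;s a minimal pure-Python sketch (the project itself uses NumPy and Pandas; the function and parameter names here are illustrative, not the app&#8217;s exact code):&lt;/p&gt;

```python
import math
import random

def generate_sine_data(n_points=200, noise_level=0.3, freq_factor=1.0,
                       amplitude=10.0, train_frac=0.8, seed=42):
    """Create a noisy sine wave and split it into train/test sets."""
    rng = random.Random(seed)  # fixed seed so runs are reproducible
    values = [
        amplitude * math.sin(freq_factor * 2 * math.pi * i / 24)  # one cycle per 24 points
        + rng.gauss(0, noise_level)                               # additive noise
        for i in range(n_points)
    ]
    split = int(n_points * train_frac)  # chronological split, no shuffling
    return values[:split], values[split:]

train, test = generate_sine_data()
print(len(train), len(test))  # 160 40
```

&lt;p&gt;Note that the split is chronological rather than shuffled: shuffling a time series would leak future information into the training set.&lt;/p&gt;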

&lt;p&gt;&lt;strong&gt;Tab 2 Initial Model Training:&lt;/strong&gt; Click &lt;strong&gt;Train LSTM Model&lt;/strong&gt; to begin training. The application will:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Prepare sequences from your time series data&lt;/li&gt;
  &lt;li&gt;Scale the data for better convergence&lt;/li&gt;
  &lt;li&gt;Create and train an LSTM model with your chosen parameters&lt;/li&gt;
  &lt;li&gt;Display training progress and final metrics&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The charts show:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Training and validation loss curves (to check for overfitting)&lt;/li&gt;
  &lt;li&gt;Model predictions on the test data&lt;/li&gt;
  &lt;li&gt;A detailed view of the last portion of predictions vs. actuals&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The model accuracy metrics give you immediate feedback on how well your model learned the patterns in your data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pro Tip&lt;/strong&gt;: If your training loss decreases but validation loss increases, you might be overfitting. Try increasing the dropout rate or reducing the number of LSTM units.
&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/25f26f4e1dce4ba7a9d1cf6005e3565a/21a8588757119828188faea718e47e5d/unnamed.png" alt="" /&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/044dd0ddbbb24d2ea153b09e91e83147/58942010128ce8f7a1afb02cc0ab0d44/unnamed.png" alt="" /&gt;&lt;/p&gt;
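&lt;p&gt;The sequence preparation and scaling steps above can be sketched in a few lines. This is a hypothetical pure-Python version (the app uses NumPy arrays and PyTorch tensors; &lt;code class="language-markup"&gt;window&lt;/code&gt; is an illustrative parameter name):&lt;/p&gt;

```python
def prepare_sequences(series, window=12):
    """Min-max scale a series, then slice it into (input window, next value) pairs."""
    lo, hi = min(series), max(series)
    scaled = [(v - lo) / (hi - lo) for v in series]  # scale to [0, 1] for stable training
    X, y = [], []
    for i in range(len(scaled) - window):
        X.append(scaled[i:i + window])   # inputs: `window` consecutive points
        y.append(scaled[i + window])     # target: the point right after the window
    return X, y

X, y = prepare_sequences([float(i % 24) for i in range(100)], window=12)
print(len(X), len(X[0]))  # 88 12
```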

&lt;p&gt;&lt;strong&gt;Tab 3 Drift Injection:&lt;/strong&gt; Now comes the interesting part! In real-world scenarios, data patterns often change over time, causing models to become less accurate. This phenomenon is known as data drift. This tab lets you simulate drift to see how it affects model performance.&lt;/p&gt;

&lt;p&gt;From the left-hand panel, you can configure:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Drift type: Choose between “offset” (a vertical shift) or “noise” (increased randomness)&lt;/li&gt;
  &lt;li&gt;Drift start point: When the drift should begin (as a percentage of the dataset)&lt;/li&gt;
  &lt;li&gt;Drift magnitude: How severe the drift should be&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Click &lt;strong&gt;Inject Drift&lt;/strong&gt; to apply these changes to your data. The application will:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Take your original data and add drift at the specified point&lt;/li&gt;
  &lt;li&gt;Store the drifted data in InfluxDB 3 Core&lt;/li&gt;
  &lt;li&gt;Show a comparison between the original and drifted data&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The visualization clearly highlights where the drift begins and how it affects the data pattern. The red portion represents the drifted section. Behind the scenes, the drifted data is stored in the drifted_data measurement in InfluxDB 3 Core, preserving both the original and modified versions.
&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/8836a8ed68fb41c6a488074d754a5de2/90db7a1e584c0a91e4dcac3c9a3cc625/unnamed.png" alt="" /&gt;&lt;/p&gt;
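&lt;p&gt;Both drift types are simple transformations of the tail of the series. A minimal sketch of what &lt;strong&gt;Inject Drift&lt;/strong&gt; does conceptually (function and parameter names are illustrative, not the project&#8217;s exact code):&lt;/p&gt;

```python
import random

def inject_drift(series, drift_type="offset", start_frac=0.5, magnitude=5.0, seed=0):
    """Return a copy of `series` with drift applied from `start_frac` onward."""
    rng = random.Random(seed)
    start = int(len(series) * start_frac)
    drifted = list(series[:start])  # points before the drift are untouched
    for v in series[start:]:
        if drift_type == "offset":
            drifted.append(v + magnitude)                 # vertical shift
        else:  # "noise"
            drifted.append(v + rng.gauss(0, magnitude))   # increased randomness
    return drifted

data = [0.0] * 10
print(inject_drift(data, "offset", 0.5, 2.0))  # last 5 points shifted by +2
```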

&lt;p&gt;&lt;strong&gt;Tab 4 Drift Detection:&lt;/strong&gt; Once drift is present in our data, we need a systematic way to detect it. This tab demonstrates using error metrics to identify when model performance deteriorates.&lt;/p&gt;

&lt;p&gt;You can configure:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Drift metric: Choose between MSRE (Mean Squared Relative Error) or MSE (Mean Squared Error)&lt;/li&gt;
  &lt;li&gt;Drift threshold: The error value above which drift is considered detected&lt;/li&gt;
  &lt;li&gt;Window size: Number of points to include in each sliding window for error calculation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Click &lt;strong&gt;Detect Drift&lt;/strong&gt; to analyze model performance on the drifted data. The application will:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Run the original model on drifted data&lt;/li&gt;
  &lt;li&gt;Calculate error metrics using sliding windows&lt;/li&gt;
  &lt;li&gt;Determine when/if drift is detected&lt;/li&gt;
  &lt;li&gt;Visualize error metrics over time&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The chart shows error metrics with a horizontal line representing your threshold. When the metrics cross this line, the system flags drift detection. You’ll also see the exact window where drift was first detected.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pro Tip&lt;/strong&gt;: MSRE is often more sensitive to relative changes in pattern, while MSE is better for detecting absolute magnitude changes. Choose based on what’s more important for your use case.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/b01ff640ad304a50b0aa58a2a1832356/0febe856cb6bb607d94015e9fa836769/unnamed.png" alt="" /&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/8145bb35d9744b799a8bc67c82ae144c/acb34ae3cc2f6c658a981329e7a1ea8a/unnamed.png" alt="" /&gt;&lt;/p&gt;
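&lt;p&gt;The sliding-window check itself is straightforward. Here&#8217;s a hedged sketch of threshold-based detection with MSE and MSRE (the names are illustrative; the project&#8217;s actual implementation lives in drift_detection.py):&lt;/p&gt;

```python
def detect_drift(actual, predicted, threshold=1.0, window=10, metric="mse"):
    """Return the start index of the first window whose error exceeds the threshold, or None."""
    eps = 1e-8  # guard against division by zero in the relative metric
    for start in range(0, len(actual) - window + 1):
        a = actual[start:start + window]
        p = predicted[start:start + window]
        if metric == "msre":
            # relative error: sensitive to changes in pattern shape
            err = sum(((x - y) / (abs(x) + eps)) ** 2 for x, y in zip(a, p)) / window
        else:  # "mse"
            # absolute error: sensitive to magnitude changes
            err = sum((x - y) ** 2 for x, y in zip(a, p)) / window
        if err > threshold:
            return start  # drift first detected in this window
    return None

actual = [1.0] * 20 + [5.0] * 20   # pattern shifts halfway through
predicted = [1.0] * 40             # model keeps predicting the old pattern
print(detect_drift(actual, predicted, threshold=1.0, window=10))  # 11
```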

&lt;p&gt;&lt;strong&gt;Tab 5 Model Retraining:&lt;/strong&gt; Once drift is detected, the appropriate response is usually to retrain the model on more recent data that includes the new patterns. This tab demonstrates how to retrain and compare model performance.&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;Retrain Model with Drifted Data&lt;/strong&gt; to:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Create a new training dataset that includes drifted data&lt;/li&gt;
  &lt;li&gt;Train a new model with the same architecture but on updated data&lt;/li&gt;
  &lt;li&gt;Compare predictions from both the original and retrained models&lt;/li&gt;
  &lt;li&gt;Calculate improvement metrics&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The visualization shows three lines:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Actual values (ground truth)&lt;/li&gt;
  &lt;li&gt;Predictions from the original model&lt;/li&gt;
  &lt;li&gt;Predictions from the retrained model&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can clearly see how the retrained model adapts to the new pattern while the original model continues to follow the old pattern. The metrics quantify this improvement in terms of reduced error.
&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/dc15f067b7854e608babb0580c40bf62/73cfe0ee2cea8c797c7c53454b072b3a/unnamed.png" alt="" /&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/8013b259c8a1428fa538ba3f7a023a08/5f294231b3de46d83ddd9097971d9faa/unnamed.png" alt="" /&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/7718f1098ce544b2aac279e413e49b38/b203cbdbe5f57e0911297da4928e0ba1/unnamed.png" alt="" /&gt;&lt;/p&gt;
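&lt;p&gt;The improvement metric in the comparison is simply the relative reduction in error between the original and retrained models. A minimal sketch (illustrative, not the project&#8217;s exact code):&lt;/p&gt;

```python
def mse(actual, predicted):
    """Mean squared error between two equal-length sequences."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def improvement_pct(actual, old_preds, new_preds):
    """Percent reduction in MSE from the original model to the retrained one."""
    old_err, new_err = mse(actual, old_preds), mse(actual, new_preds)
    return 100.0 * (old_err - new_err) / old_err

actual = [5.0] * 10
print(improvement_pct(actual, [1.0] * 10, [4.0] * 10))  # 93.75
```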

&lt;p&gt;&lt;strong&gt;Tab 6 Model Persistence&lt;/strong&gt;: Our final tab handles saving and loading models to/from Hugging Face, enabling model versioning and sharing capabilities.&lt;/p&gt;

&lt;p&gt;You’ll see fields for:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Hugging Face Repository ID: Where to store your models (format: “username/repo-name”)&lt;/li&gt;
  &lt;li&gt;Model Name: Identifier for this specific model&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Click &lt;strong&gt;Save Model to Hugging Face&lt;/strong&gt; to:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Serialize the current model (either original or retrained)&lt;/li&gt;
  &lt;li&gt;Upload it to your Hugging Face repository&lt;/li&gt;
  &lt;li&gt;Record metadata about the saved model&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/cf3518ca006e4da68e4dd16b1d88eca3/f2b458ddf78e747f0413da6f4fa6061d/unnamed.png" alt="" /&gt;&lt;/p&gt;

&lt;h2 id="code-overview"&gt;Code overview&lt;/h2&gt;

&lt;p&gt;The project is organized into modular utility scripts that mirror the structure of the ML pipeline.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href="https://github.com/InfluxCommunity/influxdb3_huggingface_forecasting_monitoring/blob/main/utils/data_generator.py"&gt;data_generator.py&lt;/a&gt; handles synthetic data creation and drift injection, allowing you to simulate real-world scenarios with evolving patterns.&lt;/li&gt;
  &lt;li&gt;&lt;a href="http://model.py"&gt;model.py&lt;/a&gt; defines the LSTM architecture, training loop, forecasting logic, and model persistence methods.&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://github.com/InfluxCommunity/influxdb3_huggingface_forecasting_monitoring/blob/main/utils/drift_detection.py"&gt;drift_detection.py&lt;/a&gt; implements statistical drift detection using MSRE and MSE metrics with support for sliding windows.&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://github.com/InfluxCommunity/influxdb3_huggingface_forecasting_monitoring/blob/main/utils/huggingface_utils.py"&gt;huggingface_utils.py&lt;/a&gt; manages saving and loading models from the Hugging Face Hub, enabling easy model versioning and sharing.&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://github.com/InfluxCommunity/influxdb3_huggingface_forecasting_monitoring/blob/main/utils/influxdb_utils.py"&gt;influxdb_utils.py&lt;/a&gt; handles all interactions with InfluxDB 3 Core, from writing training metrics to querying time series data.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Together, these scripts form a complete forecasting monitoring loop—each one loosely coupled, testable, and easy to extend.&lt;/p&gt;

&lt;h2 id="limitations-and-python-processing-engine"&gt;Limitations and Python Processing Engine&lt;/h2&gt;

&lt;p&gt;While this demo provides a full walkthrough of forecasting, drift detection, and retraining, it’s currently designed as a static application: each step runs once when manually triggered via the Streamlit UI. In real-world production systems, forecasting and monitoring need to happen continuously and autonomously. The natural next step for this project is to integrate with the &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/get-started/#python-plugins-and-the-processing-engine"&gt;InfluxDB 3 Python Processing Engine&lt;/a&gt;, which lets you run these scripts on a user-defined schedule or trigger them on request. This would enable automated forecasting, real-time drift monitoring, and hands-free retraining, all driven directly by new data arriving in InfluxDB. For example, you could generate new data every hour with a &lt;a href="https://docs.influxdata.com/influxdb3/core/plugins/#create-a-trigger-for-scheduled-events"&gt;schedule trigger&lt;/a&gt;, monitor for drift in real time with a &lt;a href="https://docs.influxdata.com/influxdb3/core/plugins/#create-a-trigger-for-data-writes"&gt;WAL-flush trigger&lt;/a&gt;, kick off a model retrain when needed with an &lt;a href="https://docs.influxdata.com/influxdb3/core/plugins/#create-a-trigger-for-http-requests"&gt;HTTP request trigger&lt;/a&gt;, and push updated models to Hugging Face, all without user intervention. Moving from manual to scheduled execution is what would transform this project from an educational tool into a robust, observable model-monitoring pipeline.&lt;/p&gt;

&lt;h2 id="looking-ahead-from-static-scripts-to-plugins"&gt;Looking ahead: from static scripts to plugins&lt;/h2&gt;

&lt;p&gt;For example, you could easily create a Data Generator Schedule Trigger with the following code:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;"""
Simple Sine Wave Generator for InfluxDB 3 Processing Engine

This plugin generates a sine wave with noise and writes it to InfluxDB.
"""

import numpy as np
from datetime import datetime, timedelta
def process_scheduled_call(influxdb3_local, call_time, args=None):
    """
    Generate a sine wave with noise and write it to InfluxDB.

    Parameters:
    -----------

    influxdb3_local : InfluxDB3Local
        Interface for interacting with InfluxDB
    call_time : str
        The time the scheduled call was made
    args : dict, optional
        Arguments for data generation:
        - measurement: Name of the measurement/table (default: 'synthetic_data')
        - periods: Number of data points to generate (default: 168)
        - amplitude: Sine wave amplitude (default: 10.0)
        - noise: Noise level (default: 0.5)
    """

    # Parse arguments with defaults
    args = args or {}
    measurement = args.get("measurement", "synthetic_data")
    periods = int(args.get("periods", "168"))
    amplitude = float(args.get("amplitude", "10.0"))
    noise = float(args.get("noise", "0.5"))

    influxdb3_local.info(f"Generating {periods} data points for measurement '{measurement}'")

    # Get current time and calculate start time (hourly data)
    end_time = datetime.now()
    start_time = end_time - timedelta(hours=periods)

    # Generate sine wave with noise
    for i in range(periods):
        # Calculate timestamp for this point
        timestamp = start_time + timedelta(hours=i)
        unix_nano = int(timestamp.timestamp() * 1e9)

        # Generate sine value (full cycle every 24 points)
        x = (i / 24) * 2 * np.pi
        value = amplitude * np.sin(x)

        # Add noise
        if noise &amp;gt; 0:
            value += np.random.normal(0, noise)

        # Create and write point
        line = LineBuilder(measurement)
        line.tag("source", "generator")
        line.float64_field("value", value)
        line.time_ns(unix_nano)

        influxdb3_local.write(line)

    influxdb3_local.info(f"Successfully generated and wrote {periods} data points")

    # Log summary statistics before returning
    stats = {
        "measurement": measurement,
        "points_generated": periods,
        "start_time": start_time.isoformat(),
        "end_time": end_time.isoformat(),
        "amplitude": amplitude,
        "noise_level": noise
    }
    influxdb3_local.info(f"Statistics: {stats}")
    return stats&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This plugin is designed to run on a schedule within the InfluxDB 3 Python Processing Engine. Each time it’s triggered, it generates a configurable number of sine wave data points with added noise, timestamps them at hourly intervals, and writes them to a specified measurement in InfluxDB. The core logic lives in the &lt;code class="language-markup"&gt;process_scheduled_call()&lt;/code&gt; function, which is the required entry point for scheduled plugins. It uses the &lt;code class="language-markup"&gt;influxdb3_local&lt;/code&gt; interface provided by the Processing Engine to log messages and write data. Data points are formatted using the &lt;code class="language-markup"&gt;LineBuilder&lt;/code&gt; class to construct line protocol records with tags, fields, and nanosecond-precision timestamps. At the end, it logs a summary of what was written, making this plugin a useful tool for simulating time series data streams in automated workflows.&lt;/p&gt;

&lt;p&gt;Then you would create the trigger with:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 create trigger \
  --trigger-spec "every:1m" \
  --plugin-filename "plugins/data_generator.py" \
  --database timeseries \
  data_generator_trigger
```
And enable it with:
```bash
influxdb3 enable trigger  --database timeseries data_generator_trigger&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;It’s also worth mentioning that you don’t have to use the &lt;code class="language-markup"&gt;LineBuilder&lt;/code&gt; class to write data in an InfluxDB 3 Core or Enterprise Python Processing Plugin. You can use the &lt;a href="https://github.com/InfluxCommunity/influxdb3-python/tree/70f42ddc736913b1e38500e512191dadd929662a"&gt;InfluxDB 3 Python Client Library&lt;/a&gt; instead, especially in scheduled or on-request plugins. This approach can be more convenient when you’re writing larger datasets, such as full Pandas DataFrames, all at once. For example, when model drift is detected and a retraining step is triggered, it often makes sense to write the resulting metrics or retrained model outputs in batch rather than looping through row-by-row with &lt;code class="language-markup"&gt;LineBuilder&lt;/code&gt;. This should also make converting our existing scripts to an InfluxDB 3 Python Processing Engine pipeline easier since many already operate on DataFrames.&lt;/p&gt;
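&lt;p&gt;As a rough sketch of that batch-write pattern (assuming the influxdb3-python client and pandas are installed; the measurement name &lt;code class="language-markup"&gt;model_metrics&lt;/code&gt; and the row layout are illustrative assumptions, not the project&#8217;s exact schema):&lt;/p&gt;

```python
def metrics_to_rows(task_id, metrics):
    """Flatten a dict of model metrics into row dicts for a DataFrame batch write."""
    return [{"task_id": task_id, "metric": name, "value": float(v)}
            for name, v in metrics.items()]

def write_metrics_batch(host, token, database, rows):
    """Batch-write all rows at once instead of looping with LineBuilder.

    Sketch only: requires `pandas` and `influxdb3-python`; not executed here.
    """
    import pandas as pd
    from influxdb_client_3 import InfluxDBClient3

    df = pd.DataFrame(rows)
    client = InfluxDBClient3(host=host, token=token, database=database)
    client.write(df, data_frame_measurement_name="model_metrics",
                 data_frame_tag_columns=["task_id"])
    client.close()

rows = metrics_to_rows("retrain-001", {"mse": 0.42, "msre": 0.07})
print(rows)
```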

&lt;h2 id="conclusion"&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;This project helps to start bridging the gap between machine learning and model monitoring. Combining InfluxDB 3 for time series storage and Hugging Face for model versioning shows how you can monitor and adapt models as data evolves over time. While the current app runs in a controlled, step-by-step environment, the architecture lays the groundwork for a fully automated system. With just a few additions, like scheduled plugins powered by the InfluxDB 3 Processing Engine, you can turn this demo into a more robust workflow for real-time forecasting. Whether you’re monitoring sensor data, energy usage, or financial trends, this approach gives you the tools to stay ahead of model drift and keep your predictions reliable.&lt;/p&gt;

&lt;p&gt;I encourage you to look at the &lt;a href="https://github.com/influxdata/influxdb3_plugins/blob/main/examples/schedule/system_metrics/system_metrics.py"&gt;InfluxData/influxdb3_plugins&lt;/a&gt; as we add examples and plugins. Also, please contribute your own! To learn more about building your plugin, check out these resources:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href="https://www.influxdata.com/blog/new-python-processing-engine-influxdb3/"&gt;Transform Data with the New Python Processing Engine in InfluxDB 3&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://docs.influxdata.com/influxdb3/enterprise/get-started/#python-plugins-and-the-processing-engine"&gt;Get started: Python Plugins and the Processing Engine&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://docs.influxdata.com/influxdb3/enterprise/plugins/"&gt;Processing engine and Python plugins&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Additionally, check out these resources for more information and examples of other alert plugins:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href="https://www.influxdata.com/blog/preventing-alert-storms-influxdb3/"&gt;Preventing Alert Storms with InfluxDB 3’s Processing Engine Cache&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://www.influxdata.com/blog/setting-up-sms-whatsapp-alerts-influxdb3/"&gt;How to Set Up Real-Time SMS/WhatsApp Alerts with InfluxDB 3 Processing Engine&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://www.influxdata.com/blog/core-enterprise-alerting-influxdb3/"&gt;Alerting with InfluxDB 3 Core and Enterprise&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I invite you to contribute any plugin that you create. Check out our &lt;a href="https://docs.influxdata.com/influxdb3/core/get-started/"&gt;Getting Started Guide for Core&lt;/a&gt; and &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/get-started/#trigger-types"&gt;Enterprise&lt;/a&gt;, and share your feedback with our development team on &lt;a href="https://discord.com/invite/vZe2w2Ds8B"&gt;Discord&lt;/a&gt; in the #influxdb3_core channel, &lt;a href="https://influxdata.com/slack"&gt;Slack&lt;/a&gt; in the #influxdb3_core channel, or our &lt;a href="https://community.influxdata.com/"&gt;Community Forums&lt;/a&gt;.&lt;/p&gt;
</description>
      <pubDate>Tue, 20 May 2025 07:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/forecasting-hugging-face-influxdb-3/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/forecasting-hugging-face-influxdb-3/</guid>
      <category>Developer</category>
      <author>Anais Dotis-Georgiou (InfluxData)</author>
    </item>
    <item>
      <title>Build a Time Series Forecasting Pipeline in InfluxDB 3 Without Writing Code</title>
      <description>&lt;p&gt;Curious how time series forecasting fits into your InfluxDB 3 workflows? Let’s build a complete forecasting pipeline together using InfluxDB 3 Core’s Python Processing Engine and Facebook’s Prophet library.&lt;/p&gt;

&lt;p&gt;InfluxDB 3 Core’s Python Processing Engine dramatically lowers the barrier to entry—not just for experienced developers but for anyone with a basic understanding of time series data and Python. It turns what used to be a complex, multi-day task into something you can prototype in a few hours, making advanced forecasting and data processing far more accessible and accelerating the path from idea to insight.&lt;/p&gt;

&lt;p&gt;One of the most exciting aspects of this project is how quickly it came together using a large language model (LLM). Simply given the InfluxDB 3 Core Python Processing Engine documentation and the FB Prophet quick start guide, the LLM generated working plugin code, wired up the full pipeline, and even suggested improvements.&lt;/p&gt;

&lt;p&gt;Now, let’s build a pipeline that predicts daily pageviews for the Wikipedia article on Peyton Manning over an entire year, starting with historical data and ending with an interactive Plotly visualization. To build alongside this project, download &lt;a href="https://www.influxdata.com/downloads/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=no_code_time_series_forecasting_pipeline_influxdb3&amp;amp;utm_content=blog"&gt;InfluxDB 3 Core or Enterprise&lt;/a&gt;. This project will cover:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Details on what I provided my LLM (ChatGPT-4o) with to execute this&lt;/li&gt;
  &lt;li&gt;The requirements and setup needed to run this end-to-end forecasting pipeline example&lt;/li&gt;
  &lt;li&gt;Creating necessary InfluxDB 3 Core and Enterprise resources&lt;/li&gt;
  &lt;li&gt;Writing data, making a forecast, and visualizing the results&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The forecasting pipeline includes three purpose-built plugins: one to load historical data, another to generate daily forecasts, and one to visualize results on demand:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;load_peyton&lt;/strong&gt; (&lt;a href="https://github.com/Anaisdg/influxdb3_plugins/blob/add-fbprophet-plugins/influxdata/Anaisdg/fbprophet/load_peyton_data.py"&gt;load_peyton_data.py&lt;/a&gt;): An HTTP-triggered plugin that loads sample Wikipedia pageview data from a CSV and writes it to the peyton_views table.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;peyton_forecast&lt;/strong&gt; (&lt;a href="https://github.com/Anaisdg/influxdb3_plugins/blob/add-fbprophet-plugins/influxdata/Anaisdg/fbprophet/forecast_peyton.py"&gt;forecast_peyton.py&lt;/a&gt;): A scheduled plugin (e.g., every:1d) that queries the peyton_views table, fits a Prophet model and writes a full 365-day forecast to the prophet_forecast table.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;forecast_plot&lt;/strong&gt; (&lt;a href="https://github.com/Anaisdg/influxdb3_plugins/blob/add-fbprophet-plugins/influxdata/Anaisdg/fbprophet/plot_forecast_http.py"&gt;plot_forecast_http.py&lt;/a&gt;): An HTTP-triggered plugin that queries both peyton_views and prophet_forecast, merges them, and returns an interactive Plotly graph as HTML.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/80efcc6832824b378e764915f41e5969/6c7f352f1937f889dc7150bdd768d87a/unnamed.png" alt="" /&gt;
A visualization of the historical page view data (blue) and forecasted page view data (green) that you can generate by following this tutorial. Accessible by visiting http://localhost:8181/api/v3/engine/plot_forecast.&lt;/p&gt;

&lt;h2 id="from-coding-to-ai-validation"&gt;From coding to AI validation&lt;/h2&gt;

&lt;p&gt;I generated the code for this project by providing the following resources to ChatGPT-4o: &lt;a href="https://docs.influxdata.com/influxdb3/core/plugins/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=no_code_time_series_forecasting_pipeline_influxdb3&amp;amp;utm_content=blog"&gt;The Processing engine and Python plugins documentation&lt;/a&gt; and the &lt;a href="https://facebook.github.io/prophet/docs/quick_start.html"&gt;Prophet Quickstart example&lt;/a&gt;. From there, I gave it a handful of natural-language prompts to build the components I needed iteratively. Here are the core prompts I used:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;“Can you write a plugin for InfluxDB 3 that uses Facebook Prophet to forecast time series data?”&lt;/strong&gt; This got the conversation started and helped ChatGPT understand the target use case and library.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;“Create a plugin that loads historical data from a public CSV (like the Peyton Manning Wikipedia views) and writes it to InfluxDB.”&lt;/strong&gt; This became the HTTP-triggered loader plugin that populates the peyton_views table.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;“Now, write a scheduled plugin that queries that table, fits a Prophet model, and writes the forecast to another table. Make sure the logic writes all forecast rows.”&lt;/strong&gt; This became the daily forecasting plugin, which outputs yhat, yhat_lower, and yhat_upper to prophet_forecast.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;“Can you build a third plugin that reads both the historical and forecasted data, plots it using Plotly, and returns the result over HTTP as an HTML page?”&lt;/strong&gt; This plugin allows me to visualize the entire pipeline in my browser without setting up an external dashboard.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By layering the prompts above, I built a fully functioning, interactive forecasting pipeline inside InfluxDB without writing any code manually. What began as traditional coding quickly became a process of prompt engineering and AI validation: I described intent, reviewed generated code, and iterated until everything worked end-to-end.&lt;/p&gt;

&lt;p&gt;That said, using the &lt;a href="https://docs.influxdata.com/influxdb3/core/reference/cli/influxdb3/test/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=no_code_time_series_forecasting_pipeline_influxdb3&amp;amp;utm_content=blog"&gt;test command&lt;/a&gt; for the Processing Engine was incredibly helpful—not just for this project, but for building and validating any plugin. The &lt;a href="https://docs.influxdata.com/influxdb3/core/reference/cli/influxdb3/test/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=no_code_time_series_forecasting_pipeline_influxdb3&amp;amp;utm_content=blog"&gt;test command&lt;/a&gt; quickly triggers a plugin with cURL or verifies results with SQL queries. It made the feedback loop tight and efficient, especially when paired with AI-generated code.&lt;/p&gt;
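&lt;p&gt;Concretely, once the triggers below are set up, that feedback loop is just a cURL call followed by a SQL check. The &lt;code class="language-markup"&gt;influxdb3 query&lt;/code&gt; invocation here is a sketch; exact flags may differ by version:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;# Trigger the HTTP loader plugin
curl http://localhost:8181/api/v3/engine/load_peyton

# Verify the write with a SQL query (flags shown are illustrative)
influxdb3 query --database prophet "SELECT COUNT(*) FROM peyton_views"&lt;/code&gt;&lt;/pre&gt;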

&lt;h2 id="requirements-and-setup"&gt;Requirements and setup&lt;/h2&gt;

&lt;p&gt;You can either run this example locally or within a Docker container. Follow the &lt;a href="https://docs.influxdata.com/influxdb3/core/install/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=no_code_time_series_forecasting_pipeline_influxdb3&amp;amp;utm_content=blog"&gt;InfluxDB 3 Core&lt;/a&gt; or &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/install/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=no_code_time_series_forecasting_pipeline_influxdb3&amp;amp;utm_content=blog"&gt;Enterprise installation guides&lt;/a&gt;, as this post applies to both products. I recommend using Docker, so this post will assume you’re running InfluxDB 3 in a containerized environment for ease of setup, isolation, and cleanup. Make sure Docker is installed on your system and you’ve pulled the latest InfluxDB 3 image for your chosen edition. I’ll use &lt;a href="https://www.influxdata.com/downloads/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=no_code_time_series_forecasting_pipeline_influxdb3&amp;amp;utm_content=blog"&gt;InfluxDB 3 Core&lt;/a&gt; as it’s the OSS version. &lt;a href="https://www.influxdata.com/downloads/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=no_code_time_series_forecasting_pipeline_influxdb3&amp;amp;utm_content=blog"&gt;Enterprise&lt;/a&gt; is also freely available for local development and testing if you select at-home use.&lt;/p&gt;

&lt;p&gt;Once you pull this &lt;a href="https://github.com/Anaisdg/influxdb3_plugins/tree/add-fbprophet-plugins"&gt;repo&lt;/a&gt;, save the plugin files in your configured plugin directory (e.g., /path/to/plugins/). Then run the following command:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;docker run -it --rm --name test_influx -v ~/influxdb3/data:/var/lib/influxdb3   -v /path/to/plugins/:/plugins -p 8181:8181 quay.io/influxdb/influxdb3-core:latest serve --node-id my_host --object-store file --data-dir /var/lib/influxdb3 --plugin-dir /plugin&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This command runs a temporary InfluxDB 3 Core container named &lt;code class="language-markup"&gt;test_influx&lt;/code&gt; using the latest image. It mounts your local data directory to persist database files and mounts the plugin directory containing the forecasting plugins. It also exposes port 8181 so you can access the database locally, and it starts the server using the &lt;a href="https://docs.influxdata.com/influxdb3/core/reference/cli/influxdb3/serve/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=no_code_time_series_forecasting_pipeline_influxdb3&amp;amp;utm_content=blog"&gt;serve command&lt;/a&gt; with file-based object storage (you could also use an S3 bucket as the object store), a custom node ID, and the mounted plugin directory.&lt;/p&gt;

&lt;h2 id="writing-data-making-a-forecast-and-visualizing-our-data"&gt;Writing data, making a forecast, and visualizing our data&lt;/h2&gt;

&lt;p&gt;Before we create and enable any triggers that use the plugins mentioned above, we need to install all dependencies. This project depends on Plotly and Prophet. Install them using:&lt;/p&gt;
&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 install package plotly 
influxdb3 install package prophet&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Next, create a database to write page view data to:&lt;/p&gt;
&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 create database prophet&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now, we can create our triggers. First, we’ll create Plugin 1 and load data via HTTP:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 create trigger \
  --trigger-spec "request:load_peyton" \
  --plugin-filename "load_peyton_data.py" \
  --database prophet \
  load_peyton&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The &lt;strong&gt;load_peyton&lt;/strong&gt; plugin:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Is an HTTP-triggered plugin&lt;/li&gt;
  &lt;li&gt;Downloads a public CSV of daily Wikipedia views&lt;/li&gt;
  &lt;li&gt;Writes rows to the peyton_views table in InfluxDB 3 Core&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We can then execute the following cURL command to trigger the execution of the &lt;strong&gt;load_peyton&lt;/strong&gt; plugin:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;curl http://localhost:8181/api/v3/engine/load_peyton&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;You should see the following output, which confirms a successful write of the data:&lt;/p&gt;
&lt;pre class=""&gt;&lt;code class="language-bash"&gt;{"status": "success", "rows_written": 2905}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Next, we can create forecasts on schedule:&lt;/p&gt;
&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 create trigger \
  --trigger-spec "every:1m" \
  --plugin-filename "forecast_peyton.py" \
  --database prophet \
  peyton_forecast&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The &lt;strong&gt;peyton_forecast&lt;/strong&gt; plugin:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Is a scheduled plugin (runs daily or on your schedule)&lt;/li&gt;
  &lt;li&gt;Reads data from peyton_views table&lt;/li&gt;
  &lt;li&gt;Fits a Prophet model&lt;/li&gt;
  &lt;li&gt;Forecasts 365 days into the future&lt;/li&gt;
  &lt;li&gt;Writes summary forecast results to prophet_forecast table&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I’ve set the peyton_forecast plugin to run every minute to get a forecast quickly. However, since we’re looking at daily data, you’d more likely run this type of pipeline on a daily interval. After you’ve successfully generated and written a forecast, you should see the following output:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;processing engine: Running Prophet forecast on 'peyton_views'INFO - Chain [1] start processing
INFO - Chain [1] done processing&lt;/code&gt;&lt;/pre&gt;
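&lt;p&gt;If you want to confirm the forecast rows landed before moving on, a quick SQL check against the forecast table works (the query subcommand flags shown are illustrative and may differ by version):&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 query --database prophet "SELECT * FROM prophet_forecast LIMIT 5"&lt;/code&gt;&lt;/pre&gt;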

&lt;p&gt;To prevent the plugin from running indefinitely and rerunning the same forecasting workload, disable it with:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;inflxudb3 disable trigger --database prophet peyton_forecast&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Finally, we can visualize our data by enabling the final plugin with:&lt;/p&gt;
&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 create trigger \
  --trigger-spec "request:plot_forecast" \
  --plugin-filename "plot_forecast_http.py" \
  --database prophet \
  forecast_plot&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The &lt;strong&gt;forecast_plot&lt;/strong&gt; plugin:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Is an HTTP-triggered plugin&lt;/li&gt;
  &lt;li&gt;Reads data from both peyton_views and prophet_forecast&lt;/li&gt;
  &lt;li&gt;Creates an interactive Plotly chart combining historical data and forecasts&lt;/li&gt;
  &lt;li&gt;Returns the chart as HTML for browser viewing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Visit &lt;a href="http://localhost:8181/api/v3/engine/plot_forecast"&gt;http://localhost:8181/api/v3/engine/plot_forecast&lt;/a&gt; to view the historical data and forecast shared in the first section of this tutorial.&lt;/p&gt;

&lt;h2 id="final-thoughts-and-considerations"&gt;Final thoughts and considerations&lt;/h2&gt;

&lt;p&gt;This collection of plugins for &lt;a href="https://www.influxdata.com/downloads/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=no_code_time_series_forecasting_pipeline_influxdb3&amp;amp;utm_content=blog"&gt;InfluxDB 3 Core and Enterprise&lt;/a&gt; provides a basic example of how to build an end-to-end data collection, forecasting, and visualization pipeline using the InfluxDB 3 Python Processing Engine. While this project focuses on a simple FB Prophet-based forecast, it can easily be extended into a more robust and production-ready system. For example, you could load a pre-trained forecasting model from Hugging Face for faster inference, monitor forecast accuracy over time to detect model drift, and schedule automated retraining when performance degrades.&lt;/p&gt;

&lt;p&gt;Pairing this pipeline with InfluxDB 3’s Processing Engine alerting capabilities allows you to proactively respond to anomalies or drift events by sending notifications or triggering remediation workflows. With these building blocks, you can create intelligent, self-healing time series pipelines tailored to your use case.&lt;/p&gt;

&lt;p&gt;I encourage you to look at the &lt;a href="https://github.com/influxdata/influxdb3_plugins/blob/main/examples/schedule/system_metrics/system_metrics.py"&gt;InfluxData/influxdb3_plugins&lt;/a&gt; repository as we add examples and plugins. Also, please contribute your own! To learn more about building your plugin, check out these resources:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href="https://www.influxdata.com/blog/new-python-processing-engine-influxdb3/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=no_code_time_series_forecasting_pipeline_influxdb3&amp;amp;utm_content=blog"&gt;Transform Data with the New Python Processing Engine in InfluxDB 3&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://docs.influxdata.com/influxdb3/enterprise/get-started?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=no_code_time_series_forecasting_pipeline_influxdb3&amp;amp;utm_content=blog/#python-plugins-and-the-processing-engine"&gt;Get started: Python Plugins and the Processing Engine&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://docs.influxdata.com/influxdb3/enterprise/plugins/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=no_code_time_series_forecasting_pipeline_influxdb3&amp;amp;utm_content=blog"&gt;Processing engine and Python plugins&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Additionally, check out these resources for more information and examples of other alert plugins:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href="https://www.influxdata.com/blog/preventing-alert-storms-influxdb3/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=no_code_time_series_forecasting_pipeline_influxdb3&amp;amp;utm_content=blog"&gt;Preventing Alert Storms with InfluxDB 3’s Processing Engine Cache&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://www.influxdata.com/blog/setting-up-sms-whatsapp-alerts-influxdb3/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=no_code_time_series_forecasting_pipeline_influxdb3&amp;amp;utm_content=blog"&gt;How to Set Up Real-Time SMS/WhatsApp Alerts with InfluxDB 3 Processing Engine&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://www.influxdata.com/blog/core-enterprise-alerting-influxdb3/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=no_code_time_series_forecasting_pipeline_influxdb3&amp;amp;utm_content=blog"&gt;Alerting with InfluxDB 3 Core and Enterprise&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I invite you to contribute any plugin that you create. Check out our &lt;a href="https://docs.influxdata.com/influxdb3/core/get-started/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=no_code_time_series_forecasting_pipeline_influxdb3&amp;amp;utm_content=blog"&gt;Getting Started Guide for Core&lt;/a&gt; and &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/get-started?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=no_code_time_series_forecasting_pipeline_influxdb3&amp;amp;utm_content=blog/#trigger-types"&gt;Enterprise&lt;/a&gt;, and share your feedback with our development team on &lt;a href="https://discord.com/invite/vZe2w2Ds8B"&gt;Discord&lt;/a&gt; in the #influxdb3_core channel, &lt;a href="https://influxdata.com/slack/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=no_code_time_series_forecasting_pipeline_influxdb3&amp;amp;utm_content=blog"&gt;Slack&lt;/a&gt; in the #influxdb3_core channel, or our &lt;a href="https://community.influxdata.com/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=no_code_time_series_forecasting_pipeline_influxdb3&amp;amp;utm_content=blog"&gt;Community Forums&lt;/a&gt;.&lt;/p&gt;
</description>
      <pubDate>Thu, 17 Apr 2025 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/no-code-time-series-forecasting-pipeline-influxdb3/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/no-code-time-series-forecasting-pipeline-influxdb3/</guid>
      <category>Developer</category>
      <author>Anais Dotis-Georgiou (InfluxData)</author>
    </item>
    <item>
      <title>Deadman Alerts with the Python Processing Engine </title>
      <description>&lt;p&gt;Sometimes silence isn’t golden; it’s a red flag. Whether you’re monitoring IoT sensors, system logs, or application metrics, missing data can be just as critical as abnormal data. Without visibility into these gaps, you risk overlooking potential failures, security threats, or operational inefficiencies.  In time series workflows, detecting silence is often the first sign of trouble—whether it’s a network issue, device failure, sensor failure, or stalled process.&lt;/p&gt;

&lt;p&gt;In a &lt;a href="https://www.influxdata.com/blog/influxdb3-processing-engine-python-plugin/"&gt;previous blog post&lt;/a&gt;, we learned how to use the Python Processing Engine with InfluxDB 3 Core and Enterprise to build threshold alerts and send notifications to Slack, Discord, or an HTTP endpoint using a WAL (write-ahead log) trigger. In this post, we’ll learn how to build a deadman check and alert by leveraging a schedule trigger, known as a deadman trigger. Deadman triggers are a powerful alerting strategy, notifying you immediately when expected data stops arriving.&lt;/p&gt;

&lt;p&gt;The deadman check plugin can be found &lt;a href="https://github.com/Anaisdg/influxdb3_plugins/tree/add-deadman-check"&gt;here&lt;/a&gt;. The plugin monitors a target table for recent writes and sends a Slack alert if no new data is received within a configurable time threshold.&lt;/p&gt;

&lt;p&gt;This blog post will cover:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Requirements and Setup for InfluxDB 3 Core and Enterprise (this post works with both Core and Enterprise)&lt;/li&gt;
  &lt;li&gt;Getting a Slack Webhook URL&lt;/li&gt;
  &lt;li&gt;Creating InfluxDB 3 Core and Enterprise resources&lt;/li&gt;
  &lt;li&gt;Testing the plugin and sending a deadman alert&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="requirements-and-setup"&gt;Requirements and setup&lt;/h2&gt;

&lt;p&gt;Download &lt;a href="https://www.influxdata.com/downloads/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=python_processing_engine_deadman_alerts&amp;amp;utm_content=blog"&gt;InfluxDB 3 Core or Enterprise&lt;/a&gt; and follow the appropriate &lt;a href="https://docs.influxdata.com/influxdb3/core/install/"&gt;Core&lt;/a&gt; or &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/install/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=python_processing_engine_deadman_alerts&amp;amp;utm_content=blog"&gt;Enterprise&lt;/a&gt; installation guides to work alongside this tutorial. You can either run this example locally or within a Docker container, but I recommend using Docker for ease of setup, isolation, and cleanup. Accordingly, this post assumes you’re running InfluxDB 3 in a Docker containerized environment.&lt;/p&gt;

&lt;p&gt;Before you can run this in Docker, make sure Docker is installed on your system and pull the latest InfluxDB 3 image for your chosen edition (Core or Enterprise). I’m going to use InfluxDB 3 Core, as it’s the OSS version. You can use Enterprise for no cost by specifying “at-home use.”&lt;/p&gt;

&lt;p&gt;Once you pull this &lt;a href="https://github.com/Anaisdg/influxdb3_plugins/tree/add-deadman-check"&gt;repo&lt;/a&gt;, save the file as deadman_alert.py in your configured plugin directory (e.g., /path/to/plugins/). Then run the following command:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;docker run -it --rm --name test_influx -v
~/influxdb3/data:/var/lib/influxdb3   -v /path/to/plugins/:/plugins -p 
8181:8181 quay.io/influxdb/influxdb3-core:latest serve --node-id 
my_host --object-store file --data-dir /var/lib/influxdb3 --plugin-dir 
/plugin&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This command runs a temporary InfluxDB 3 Core container named &lt;code class="language-markup"&gt;test_influx&lt;/code&gt; using the latest image. It mounts your local data directory to persist database files and mounts the plugin directory containing the deadman check plugin. It also exposes port 8181 so you can access the database locally, and it starts the server using the &lt;a href="https://docs.influxdata.com/influxdb3/core/reference/cli/influxdb3/serve/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=python_processing_engine_deadman_alerts&amp;amp;utm_content=blog"&gt;serve command&lt;/a&gt; with file-based object storage (you could also use an S3 bucket as the object store), a custom node ID, and the mounted plugin directory.&lt;/p&gt;

&lt;p&gt;Follow this &lt;a href="https://api.slack.com/messaging/webhooks"&gt;documentation&lt;/a&gt; on how to create a Slack webhook URL. You’ll need to include the webhook as an argument during the trigger creation process. Alternatively, you can use a public webhook that offers users an opportunity to test out InfluxDB-related notifications. It’s pinned to the #notifications-testing channel in the InfluxDB Slack.&lt;/p&gt;

&lt;h2 id="creating-influxdb-3-resources-and-generating-a-deadman-alert"&gt;Creating InfluxDB 3 resources and generating a deadman alert&lt;/h2&gt;

&lt;p&gt;Create a database to monitor for a heartbeat:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 create database my_database&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now, write some data to it:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 write --database my_database "sensor_data temp=20"&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Next, create and enable the trigger:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 create trigger \’
  --trigger-spec "every:10s" \
--plugin-filename "deadman_alert.py" \
  --trigger-arguments table=sensor_data,threshold_minutes=1,slack_webhook=https://hooks.slack.com/services/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 \
  --database my_database \
  sensor_deadman&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The deadman check plugin runs every 10 seconds. It checks for data written to the sensor_data table in the my_database database within the last minute. If data was written, you’ll see the following output in the InfluxDB logs:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;INFO influxdb3_py_api::system_py: processing engine: Data exists in 'sensor_data' in the last 1 minutes.&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;If no data was written in the last minute, you’ll receive the following notification in Slack:&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/9edfbc40f6c749279d7266d3e0bd6eb7/f912f5bee47b43d297afb8dea9e3c0f0/unnamed.png" alt="" /&gt;&lt;/p&gt;

&lt;p&gt;The trigger will continue firing until you disable it with:&lt;/p&gt;
&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 disable trigger --database my_database sensor_deadman
Trigger sensor_deadman disabled successfully&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id="final-thoughts-and-considerations"&gt;Final thoughts and considerations&lt;/h2&gt;

&lt;p&gt;This deadman check and alert plugin for InfluxDB 3 Core and Enterprise provides a powerful and flexible way to monitor data pipeline durability in real time. I hope this tutorial helps you start alerting with Python Plugins and enabling triggers in InfluxDB 3 Core and Enterprise with Docker. I encourage you to look at the &lt;a href="https://github.com/influxdata/influxdb3_plugins/blob/main/examples/schedule/system_metrics/system_metrics.py"&gt;InfluxData/influxdb3_plugins&lt;/a&gt; repository as we add examples and plugins there. Also, please contribute your own! To learn more about building your own plugin, check out these resources:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href="https://www.influxdata.com/blog/new-python-processing-engine-influxdb3/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=python_processing_engine_deadman_alerts&amp;amp;utm_content=blog"&gt;Transform Data with the New Python Processing Engine in InfluxDB 3&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://docs.influxdata.com/influxdb3/enterprise/get-started/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=python_processing_engine_deadman_alerts&amp;amp;utm_content=blog#python-plugins-and-the-processing-engine"&gt;Get started: Python Plugins and the Processing Engine&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://docs.influxdata.com/influxdb3/enterprise/plugins/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=python_processing_engine_deadman_alerts&amp;amp;utm_content=blog"&gt;Processing engine and Python plugins&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Additionally, check out these resources for more information and examples of other alert plugins:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href="https://www.influxdata.com/blog/preventing-alert-storms-influxdb3/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=python_processing_engine_deadman_alerts&amp;amp;utm_content=blog"&gt;Preventing Alert Storms with InfluxDB 3’s Processing Engine Cache&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://www.influxdata.com/blog/setting-up-sms-whatsapp-alerts-influxdb3/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=python_processing_engine_deadman_alerts&amp;amp;utm_content=blog"&gt;How to Set Up Real-Time SMS/WhatsApp Alerts with InfluxDB 3 Processing Engine&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://www.influxdata.com/blog/core-enterprise-alerting-influxdb3/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=python_processing_engine_deadman_alerts&amp;amp;utm_content=blog"&gt;Alerting with InfluxDB 3 Core and Enterprise&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I invite you to contribute any plugin that you create there. Check out our &lt;a href="https://docs.influxdata.com/influxdb3/core/get-started/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=python_processing_engine_deadman_alerts&amp;amp;utm_content=blog"&gt;Getting Started Guide for Core&lt;/a&gt; and &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/get-started/#trigger-types"&gt;Enterprise&lt;/a&gt;, and share your feedback with our development team on &lt;a href="https://discord.com/invite/vZe2w2Ds8B"&gt;Discord&lt;/a&gt; in the #influxdb3_core channel, &lt;a href="https://influxdata.com/slack/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=python_processing_engine_deadman_alerts&amp;amp;utm_content=blog"&gt;Slack&lt;/a&gt; in the #influxdb3_core channel, or our &lt;a href="https://community.influxdata.com/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=python_processing_engine_deadman_alerts&amp;amp;utm_content=blog"&gt;Community Forums&lt;/a&gt;.&lt;/p&gt;
</description>
      <pubDate>Wed, 09 Apr 2025 07:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/python-processing-engine-deadman-alerts/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/python-processing-engine-deadman-alerts/</guid>
      <category>Developer</category>
      <author>Anais Dotis-Georgiou (InfluxData)</author>
    </item>
    <item>
      <title>Alerting with InfluxDB 3 Core and Enterprise</title>
      <description>&lt;p&gt;Monitoring is only as good as the alerts that surface critical issues before they spiral out of control. With InfluxDB 3 Core and Enterprise, you can extend alerting capabilities beyond built-in solutions by leveraging custom Python processing plugins. Whether you need real-time notifications when thresholds are exceeded or advanced anomaly detection tailored to your infrastructure, developing custom alerting logic ensures you get the right alerts at the right time. In this post, we’ll dive into how to build and integrate alerting plugins within InfluxDB 3. Specifically, we’ll learn how to incorporate a custom alerting plugin that notifies you via Slack, Discord, or other HTTP endpoints when key metrics exceed thresholds. This post will walk through how to set up and configure the alert plugin, covering its features, setup, and real-world use cases.&lt;/p&gt;

&lt;h2 id="why-custom-alerts-matter-in-influxdb-3"&gt;Why custom alerts matter in InfluxDB 3&lt;/h2&gt;

&lt;p&gt;InfluxDB 3 provides powerful time-series storage and query capabilities, but detecting anomalies in real time requires an efficient alerting system. The alert plugin enables dynamic notifications when a sensor value, CPU usage, or any monitored field crosses a threshold.&lt;/p&gt;

&lt;p&gt;Key features include:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Multi-platform notifications – Supports Slack, Discord, and generic HTTP endpoints&lt;/li&gt;
  &lt;li&gt;Configurable thresholds – Set custom conditions for triggering alerts&lt;/li&gt;
  &lt;li&gt;Flexible messages – Customize alert content with dynamic variables&lt;/li&gt;
  &lt;li&gt;Retries with backoff – Ensure delivery with exponential retry mechanisms&lt;/li&gt;
  &lt;li&gt;Environment variable support – Securely configure webhook URLs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While this plugin focuses on sending alerts for straightforward threshold-based conditions, you can extend its functionality by building a custom processing plugin that performs sophisticated anomaly detection or forecasting. For example, you could develop a plugin that analyzes trends, detects anomalies, and writes flagged events to a separate database. Then, by setting the threshold to &lt;code class="language-markup"&gt;0&lt;/code&gt; and pointing this alert plugin at the anomaly database, you can trigger alerts whenever new anomalies are detected. This modular approach allows you to integrate advanced anomaly detection with real-time notifications, ensuring a more intelligent and proactive monitoring system.&lt;/p&gt;
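&lt;p&gt;As a sketch of that chaining pattern, you could point the same alert plugin at a hypothetical anomalies table with a threshold of &lt;code class="language-markup"&gt;0&lt;/code&gt;, so any flagged row triggers a notification. The table name, field name, and score values below are illustrative, not part of the shipped plugin:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;# Hypothetical: an upstream plugin writes flagged events to the 'anomalies' table;
# with threshold=0, any positive anomaly score fires the alert.
influxdb3 test wal_plugin \
  --database alerts_history \
  --lp="anomalies,sensor=TempSensor1 score=1 123456789" \
  --input-arguments "name=anomaly_alert,endpoint_type=slack,threshold=0,field_name=score,notification_text=Anomaly detected with score \${score},webhook_url=$SLACK_WEBHOOK_URL" \
  alert.py&lt;/code&gt;&lt;/pre&gt;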

&lt;h2 id="getting-started-prerequisites"&gt;Getting started: prerequisites&lt;/h2&gt;

&lt;p&gt;This blog post assumes that you have installed and are using &lt;a href="https://docs.influxdata.com/influxdb3/core/install/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=core_enterprise_alerting_influxdb3&amp;amp;utm_content=blog"&gt;InfluxDB 3 Core&lt;/a&gt; or &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/install/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=core_enterprise_alerting_influxdb3&amp;amp;utm_content=blog"&gt;InfluxDB 3 Enterprise&lt;/a&gt;. Additionally, before setting up alerts, install the required package for HTTP requests:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 install package httpx&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This tutorial also assumes that you have created two databases with:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 create database my_databaseinflxudb3 create database alerts_history&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id="setting-up-notifications"&gt;Setting up notifications&lt;/h2&gt;

&lt;p&gt;To send alerts to Slack, set up a &lt;a href="https://api.slack.com/messaging/webhooks"&gt;webhook URL&lt;/a&gt;. For testing, use the public webhook (not for production):&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;export SLACK_WEBHOOK_URL= "https://hooks.slack.com/services/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"&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This posts alerts to the #notifications-testing channel in the InfluxData Slack community. For production, use your own Slack webhook:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;export SLACK_WEBHOOK_URL="[your slack webhook]"&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Similarly, to leverage the Discord integration, set up a &lt;a href="https://support.discord.com/hc/en-us/articles/228383668-Intro-to-Webhooks"&gt;Discord webhook&lt;/a&gt;:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;export DISCORD_WEBHOOK_URL="[your discord webhook]"&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id="testing-the-influxdataanaisdgalertsalertpy-plugin"&gt;Testing the influxdata/Anaisdg/Alerts/alert.py plugin&lt;/h2&gt;

&lt;p&gt;Make sure to place the &lt;code class="language-markup"&gt;alert.py&lt;/code&gt; script in &lt;code class="language-markup"&gt;~/influxdb3/plugins&lt;/code&gt; before running the command below.&lt;/p&gt;

&lt;p&gt;Once configured, test alerts using sample data. For example, to send a Slack alert when the temperature exceeds 20°C, run:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 test wal_plugin \
  --database my_database \
  --lp="sensor_data,sensor=TempSensor1,location=living_room temperature=25 123456789" \
  --input-arguments "name=temp_alert,endpoint_type=slack,threshold=20,field_name=temperature,notification_text=Alert: Temperature \${temperature} exceeds threshold,webhook_url=$SLACK_WEBHOOK_URL" \
  alert.py&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Expected behavior&lt;/strong&gt;: The temperature field exceeds 20, so a notification is sent to Slack with the message:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;"Temperature 25 exceeds threshold"&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The &lt;code class="language-markup"&gt;input-arguments&lt;/code&gt; flag defines all of the configuration options for your plugin. Here’s how you can fine-tune this specific plugin:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Parameter&lt;/th&gt;
      &lt;th&gt;Required?&lt;/th&gt;
      &lt;th&gt;Description&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;name&lt;/td&gt;
      &lt;td&gt;Yes&lt;/td&gt;
      &lt;td&gt;Unique identifier for the alert instance&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;endpoint_type&lt;/td&gt;
      &lt;td&gt;Yes&lt;/td&gt;
      &lt;td&gt;Choose slack, discord, or http for alerts&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;field_name&lt;/td&gt;
      &lt;td&gt;Yes&lt;/td&gt;
      &lt;td&gt;Field to monitor (e.g., temperature, cpu_usage)&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;threshold&lt;/td&gt;
      &lt;td&gt;Yes&lt;/td&gt;
      &lt;td&gt;Numeric value that triggers an alert&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;notification_text&lt;/td&gt;
      &lt;td&gt;Optional&lt;/td&gt;
      &lt;td&gt;Custom alert message (supports variables)&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;webhook_url&lt;/td&gt;
      &lt;td&gt;Optional&lt;/td&gt;
      &lt;td&gt;Override environment variable webhook&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;headers&lt;/td&gt;
      &lt;td&gt;Optional&lt;/td&gt;
      &lt;td&gt;Base64-encoded headers for HTTP endpoints&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;alerts_db&lt;/td&gt;
      &lt;td&gt;Optional&lt;/td&gt;
      &lt;td&gt;Database to store alert history for tracking and analysis&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;
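&lt;p&gt;To see how the comma-separated key=value arguments and the &lt;code class="language-markup"&gt;${field}&lt;/code&gt; placeholder fit together, here is a hypothetical sketch of the parsing and templating a plugin like this might perform (not the plugin&#8217;s actual source):&lt;/p&gt;

```python
def parse_args(arg_string):
    """Split 'k1=v1,k2=v2,...' into a dict.
    Values may contain '=' but not ','."""
    return dict(pair.split("=", 1) for pair in arg_string.split(","))

def render_alert(args, row):
    """Return the alert message if the monitored field breaches the
    threshold, or None when no alert should fire."""
    field = args["field_name"]
    value = row.get(field)
    if value is None or value <= float(args["threshold"]):
        return None
    template = args.get("notification_text", f"{field} exceeds threshold")
    # Substitute the ${field} placeholder with the actual value.
    return template.replace("${" + field + "}", str(value))
```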

&lt;h2 id="sending-and-tracking-alerts"&gt;Sending and tracking alerts&lt;/h2&gt;

&lt;p&gt;Now that we’ve tested our plugin, we’re ready to set up and enable an alert trigger. For this example, we’ll send alert notifications to Discord and use the alerts_db option:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 create trigger \
  --database my_database \
  --plugin-filename alert.py \
  --trigger-spec "table:sensor_data" \
  --trigger-arguments "name=temp_alert_trigger,endpoint_type=discord,threshold=20,field_name=temperature,notification_text=Value is \${temperature},webhook_url=$DISCORD_WEBHOOK_URL,alerts_db=alerts_history" \
  temperature_alert
influxdb3 enable trigger --database my_database temperature_alert&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now, we can write test data and check the alerts_history database to confirm that our notification was sent and that we&#8217;re writing alert metrics successfully:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 write --database my_database "sensor_data,sensor=TempSensor1,location=living_room temperature=25"
influxdb3 query --database alerts_history "select * from sensor_data"&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The output of your query should look like this:&lt;/p&gt;

&lt;div class="table-container has-text-centered is-v-centered"&gt;
&lt;table class="table is-bordered"&gt;
&lt;thead&gt;
  &lt;tr&gt;
    &lt;th&gt;time&lt;/th&gt;
    &lt;th&gt;alert_message&lt;/th&gt;
    &lt;th&gt;sensor&lt;/th&gt;
    &lt;th&gt;location&lt;/th&gt;
    &lt;th&gt;plugin_name&lt;/th&gt;
    &lt;th&gt;processed_at&lt;/th&gt;
    &lt;th&gt;temperature&lt;/th&gt;
  &lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
  &lt;tr class="has-text-left"&gt;
    &lt;td&gt;2025-02-25T00:25:40Z&lt;/td&gt;
    &lt;td&gt;Value is 25.0&lt;/td&gt;
    &lt;td&gt;TempSensor1&lt;/td&gt;
    &lt;td&gt;living_room&lt;/td&gt;
    &lt;td&gt;temp_alert&lt;/td&gt;
    &lt;td&gt;2025-02-25T00:25:45Z&lt;/td&gt;
    &lt;td&gt;25.0&lt;/td&gt;
  &lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;/div&gt;

&lt;h2 id="final-thoughts"&gt;Final thoughts&lt;/h2&gt;

&lt;p&gt;The influxdata/Anaisdg/alert plugin for InfluxDB 3 provides a powerful and flexible way to monitor key metrics and trigger notifications in real time. Whether you need Slack, Discord, or custom HTTP alerts, this plugin helps ensure that critical issues never go unnoticed. I hope this tutorial helps you start alerting with Python Plugins and enabling triggers in InfluxDB 3 Core and Enterprise with Docker. I encourage you to look at the &lt;a href="https://github.com/influxdata/influxdb3_plugins/blob/main/examples/schedule/system_metrics/system_metrics.py"&gt;InfluxData/influxdb3_plugins&lt;/a&gt; repository as we add examples and plugins there. Also, please contribute your own! To learn more about building your own plugin, check out these resources:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href="https://www.influxdata.com/blog/new-python-processing-engine-influxdb3/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=core_enterprise_alerting_influxdb3&amp;amp;utm_content=blog"&gt;Transform Data with the New Python Processing Engine in InfluxDB 3&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://docs.influxdata.com/influxdb3/enterprise/get-started/#python-plugins-and-the-processing-engine/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=core_enterprise_alerting_influxdb3&amp;amp;utm_content=blog"&gt;Get started: Python Plugins and the Processing Engine&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://docs.influxdata.com/influxdb3/enterprise/plugins/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=core_enterprise_alerting_influxdb3&amp;amp;utm_content=blog"&gt;Processing engine and Python plugins&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I invite you to contribute any plugin that you create there. Check out our &lt;a href="https://docs.influxdata.com/influxdb3/core/get-started/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=core_enterprise_alerting_influxdb3&amp;amp;utm_content=blog"&gt;Getting Started Guide for Core&lt;/a&gt; and &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/get-started/#trigger-types/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=core_enterprise_alerting_influxdb3&amp;amp;utm_content=blog"&gt;Enterprise&lt;/a&gt;, and share your feedback with our development team on &lt;a href="https://discord.com/invite/vZe2w2Ds8B"&gt;Discord&lt;/a&gt; in the #influxdb3_core channel, &lt;a href="https://influxdata.com/slack/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=core_enterprise_alerting_influxdb3&amp;amp;utm_content=blog"&gt;Slack&lt;/a&gt; in the #influxdb3_core channel, or our &lt;a href="https://community.influxdata.com/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=core_enterprise_alerting_influxdb3&amp;amp;utm_content=blog"&gt;Community Forums&lt;/a&gt;.&lt;/p&gt;
</description>
      <pubDate>Tue, 11 Mar 2025 07:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/core-enterprise-alerting-influxdb3/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/core-enterprise-alerting-influxdb3/</guid>
      <category>Developer</category>
      <author>Anais Dotis-Georgiou (InfluxData)</author>
    </item>
    <item>
      <title>Building Your First Python Plugin for the InfluxDB 3 Processing Engine</title>
      <description>&lt;p&gt;**Note: This blog runs the InfluxDB 3 Core CLI not Enterprise.&lt;/p&gt;

&lt;p&gt;One of the most compelling features of InfluxDB 3 is its built-in Python Processing Engine, a versatile component that adds powerful, real-time processing capabilities to InfluxDB 3 Core. For those familiar with Kapacitor in InfluxDB 1.x or Flux Tasks in 2.x, the Processing Engine represents a more streamlined, integrated, and scalable approach to acting on data. With the ability to run Python code directly within the database, you no longer need external servers or complex data pipelines to process incoming information.&lt;/p&gt;

&lt;p&gt;The Processing Engine can trigger actions as data arrives, on-demand, or on a schedule, making it ideal for real-time transformation, normalization, alerting, downsampling, and edge data replication. In this blog, we’ll build a Python plugin that standardizes IoT data from diverse sources. Standardization is critical because IoT devices often produce data in inconsistent formats—different units, structures, or naming conventions—complicating analysis and decision-making. By normalizing this data at the point of ingestion, you simplify downstream queries, ensure consistency across datasets, and improve the reliability of your analytics.&lt;/p&gt;

&lt;h2 id="requirements"&gt;Requirements&lt;/h2&gt;

&lt;p&gt;To follow this tutorial, you’ll need:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Docker installed on your machine.&lt;/li&gt;
  &lt;li&gt;A code editor like Visual Studio Code (VS Code) or another integrated development environment (IDE) of your choice.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Using Docker for this tutorial ensures you can easily spin up a compatible environment without complex setup steps, allowing you to focus on building and testing your Python plugin. We’ll walk through the process of creating the plugin from scratch with Docker, but you can also run this tutorial locally without the Docker commands.&lt;/p&gt;

&lt;p&gt;This tutorial also assumes you have some familiarity with Docker fundamentals, such as running containers, managing images, and using Docker Compose. Additionally, it helps to have a basic understanding of InfluxDB, including the line protocol ingest format and the InfluxDB 3 Core CLI. If you’re new to any of these concepts, we recommend reviewing the InfluxDB documentation or Docker’s getting-started guide before proceeding:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href="https://www.influxdata.com/blog/cli-operations-influxdb3-core-enterprise/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=influxdb3_processing_engine_python_plugin&amp;amp;utm_content=blog"&gt;CLI Operations for InfluxDB 3 Core and Enterprise&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://docs.influxdata.com/influxdb3/core/reference/cli/influxdb3/#Copyright/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=influxdb3_processing_engine_python_plugin&amp;amp;utm_content=blog"&gt;influxdb3 CLI&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="process-overview"&gt;Process overview&lt;/h2&gt;

&lt;p&gt;Inconsistencies in units, field names, and measurement structures are common when dealing with IoT data from various devices. Different sensors might report temperature in Fahrenheit or Celsius, pressure in pascals or kilopascals, and use inconsistent naming conventions like humidity_percent vs. humidity. This variability makes querying, analyzing, and correlating data unnecessarily complex, leading to errors in reporting, delayed insights, and increased maintenance overhead.&lt;/p&gt;

&lt;p&gt;Standardizing units and field names during data ingestion ensures a consistent and reliable dataset for downstream analysis. Beyond simplifying analytics, consistent naming conventions are essential for maintaining compliance with industry standards, regulatory requirements, and internal data governance policies. With standardized, well-structured data, teams can more confidently generate reports, audit historical records, and integrate with other systems that rely on clear, predictable data formats.&lt;/p&gt;

&lt;p&gt;Our plugin will solve this by standardizing units and names as the data is ingested, ensuring a consistent and reliable dataset for downstream analysis.&lt;/p&gt;
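&lt;p&gt;As a concrete illustration of what standardization can look like, the sketch below applies a few of the conversions mentioned above: Fahrenheit to Celsius, pascals to kilopascals, and renaming &lt;code class="language-markup"&gt;humidity_percent&lt;/code&gt; to &lt;code class="language-markup"&gt;humidity&lt;/code&gt;. The field names and rules here are examples, not the plugin we build below:&lt;/p&gt;

```python
def normalize_reading(row):
    """Return a copy of a sensor row with canonical field names and units."""
    out = dict(row)
    # Fahrenheit -> Celsius under a canonical key.
    if "temperature_f" in out:
        out["temperature_c"] = round((out.pop("temperature_f") - 32) * 5 / 9, 2)
    # Rename humidity_percent -> humidity.
    if "humidity_percent" in out:
        out["humidity"] = out.pop("humidity_percent")
    # Pascals -> kilopascals.
    if "pressure_pa" in out:
        out["pressure_kpa"] = out.pop("pressure_pa") / 1000
    return out
```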

&lt;p&gt;Here’s a high-level look at the process:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Create a Plugin Directory:&lt;/strong&gt; Set up a plugin directory in your local InfluxDB 3 environment and give it read/write permissions.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Start InfluxDB 3 with Docker:&lt;/strong&gt; Pull the InfluxDB 3 Core Docker image and launch a container, mounting the plugins directory.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Write the Python Processing Script:&lt;/strong&gt; Create a Python script to convert measurements into standardized units and naming conventions.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Create Source and Destination Databases:&lt;/strong&gt; Use the CLI to create databases for raw and standardized data.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Test and Enable the Plugin:&lt;/strong&gt; Test the script with sample data, then enable the plugin.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Write Data and Verify the Transformation&lt;/strong&gt;: Ingest sample IoT data with inconsistent formats and query the destination database to confirm successful standardization.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="building-your-first-plugin"&gt;Building your first plugin&lt;/h2&gt;

&lt;p&gt;As mentioned in the overview, start by creating a plugin directory at:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;mkdir -p ~/influxdb3/plugins&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Ensure the directory has the necessary read and write permissions:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;chmod 755 ~/influxdb3/plugins&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Next, pull the latest InfluxDB 3 Enterprise Docker image:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;docker pull quay.io/influxdb/influxdb3-enterprise:latest&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Refer to the official documentation for details on running the Enterprise or Core editions, depending on your setup. Now, we’re ready to start the InfluxDB 3 container with the following command:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;docker run -it \
  -v ~/influxdb3/data:/var/lib/influxdb3 \
  -v ~/influxdb3/plugins:/plugins \
  -p 8181:8181 \
  --user root \
  quay.io/influxdb/influxdb3-enterprise:latest serve \
  --node-id my_host \
  --object-store file \
  --data-dir /var/lib/influxdb3 \
  --plugin-dir /plugins&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Let’s break down this command:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;-v ~/influxdb3/data:/var/lib/influxdb3&lt;/code&gt;: Mounts the local data directory as the database’s persistent storage location.&lt;/li&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;-v ~/influxdb3/plugins:/plugins&lt;/code&gt;: Mounts the plugins directory where our Python plugin will live, making it accessible to the Processing Engine.&lt;/li&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;-p 8181:8181&lt;/code&gt;: Maps port 8181 from the container to the host, allowing access to the InfluxDB 3 API.&lt;/li&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;--user root&lt;/code&gt;: Ensures the container runs with root privileges, which are required for the Processing Engine to access the plugins.&lt;/li&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;serve&lt;/code&gt;: Starts the InfluxDB 3 server.&lt;/li&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;--node-id my_host&lt;/code&gt;: Assigns a unique node ID, which can be customized based on your environment.&lt;/li&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;--object-store file&lt;/code&gt;: Configures the database to use the local filesystem for object storage.&lt;/li&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;--data-dir /var/lib/influxdb3&lt;/code&gt;: Points to the directory where InfluxDB will persist its data.&lt;/li&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;--plugin-dir /plugins&lt;/code&gt;: Instructs InfluxDB to load any available plugins from the mounted plugins directory.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: If you’re running InfluxDB 3 Core or Enterprise locally, you’ll want to serve your InfluxDB 3 instance and set the plugin directory with the following command (learn more about the serve command options with the &lt;a href="https://www.influxdata.com/blog/cli-operations-influxdb3-core-enterprise/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=influxdb3_processing_engine_python_plugin&amp;amp;utm_content=blog"&gt;CLI Operations for InfluxDB 3 Core and Enterprise&lt;/a&gt; or the &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=influxdb3_processing_engine_python_plugin&amp;amp;utm_content=blog"&gt;docs&lt;/a&gt;):&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 serve --object-store file --data-dir ~/.influxdb3/data --node-id my_host --plugin-dir ~/influxdb3/plugins&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;With this setup, the Processing Engine will have access to your Python plugins and be ready to apply transformations to incoming data.&lt;/p&gt;

&lt;p&gt;Now, we’re ready to write the Python script to handle our data standardization. We’ll name our script &lt;code class="language-markup"&gt;hello-world.py&lt;/code&gt;. It will contain a function called &lt;code class="language-markup"&gt;process_writes&lt;/code&gt; that performs the core transformation logic. This function processes incoming batches of sensor data, standardizing field names, tags, and units to ensure consistency across datasets. It iterates through each table batch, logs key information, and skips tables that match a predefined exclusion rule. For each row, it converts sensor names and locations to lowercase and replaces spaces with underscores to maintain uniform naming conventions. It also enriches the data by adding a timestamp field indicating when the record was processed. Finally, the function writes the transformed data to a new InfluxDB 3 database called &lt;code class="language-markup"&gt;unified_sensor_data&lt;/code&gt;, ensuring all sensor records share a consistent structure for easier querying, analysis, and compliance with data standards.&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;import datetime

def process_writes(influxdb3_local, table_batches, args=None):
    # Log the provided arguments
    if args:
        for key, value in args.items():
            influxdb3_local.info(f"{key}: {value}")

    # Process each table batch
    for table_batch in table_batches:
        table_name = table_batch["table_name"]
        influxdb3_local.info(f"Processing table: {table_name}")

        # Skip processing a specific table if needed
        if table_name == "exclude_table":
            continue

        # Analyze each row
        for row in table_batch["rows"]:
            influxdb3_local.info(f"Row: {row}")

            # Standardize sensor names (lowercase, no spaces)
            sensor_name = row.get("sensor", "unknown").lower().replace(" ", "_")
            influxdb3_local.info(f"Standardized sensor name: {sensor_name}")

            # Standardize location and other tags by replacing spaces with underscores
            location = row.get("location", "unknown").lower().replace(" ", "_")

            # Add enriched field (e.g., timestamp)
            line = LineBuilder(table_name)
            line.tag("sensor", sensor_name)
            line.tag("location", location)
            line.float64_field("temperature_c", row.get("temperature", 0))
            line.string_field("processed_at", datetime.datetime.utcnow().isoformat())

            # Write the enriched data to a different database
            influxdb3_local.write_to_db("unified_sensor_data", line)

    influxdb3_local.info("Processing completed")&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Create the source and destination databases with the &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/reference/cli/influxdb3/create/database/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=influxdb3_processing_engine_python_plugin&amp;amp;utm_content=blog"&gt;influxdb3 create database&lt;/a&gt; command:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;docker exec {container id} influxdb3 create database my_databasedocker exec {container id} influxdb3 create database unified_sensor_data&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id="testing-your-plugin"&gt;Testing your plugin&lt;/h2&gt;

&lt;p&gt;You can test your plugin on a target database with the &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/reference/cli/influxdb3/test/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=influxdb3_processing_engine_python_plugin&amp;amp;utm_content=blog"&gt;influxdb3 test command&lt;/a&gt;:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;docker exec {container id} influxdb3 test wal_plugin \
-d my_database \
--lp="sensor_data,location=living\\ room temperature=22.5 123456789" \
/plugins/hello-world.py&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The command outputs a JSON object with logging information, database writes, and errors. It shows us that the trigger is parsing data from the &lt;code class="language-markup"&gt;sensor_data&lt;/code&gt; measurement, standardizing the data, and writing the transformed line protocol to the &lt;code class="language-markup"&gt;unified_sensor_data&lt;/code&gt; database without errors:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;{
  "log_lines": [
    "INFO: Processing table: sensor_data",
    "INFO: Row: {'location': 'living room', 'temperature': 22.5, 'time': 123456789}",
    "INFO: Processing completed"
  ],
  "database_writes": {
    "my_database": [],
    "unified_sensor_data": [
      "sensor_data,sensor=unknown,location=living\\ room temperature_c=22.5,processed_at=\"2025-02-13T21:33:44.117195\""
    ]
  },
  "errors": []
}&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id="create-and-enable-your-trigger"&gt;Create and enable your trigger&lt;/h2&gt;

&lt;p&gt;Now that the test passes successfully, let’s create a trigger and enable it to run our Python plugin. The following command runs the &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/reference/cli/influxdb3/create/trigger/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=influxdb3_processing_engine_python_plugin&amp;amp;utm_content=blog"&gt;influxdb3 create trigger&lt;/a&gt; command. The &lt;code class="language-markup"&gt;-d&lt;/code&gt; option specifies the database where the trigger will be applied. The &lt;code class="language-markup"&gt;--plugin-filename="/plugins/hello-world.py"&lt;/code&gt; option points to the plugin script that will be executed when the trigger is activated. The &lt;code class="language-markup"&gt;--trigger-spec="all_tables"&lt;/code&gt; option indicates that the trigger should apply to all tables within the specified database. Finally, &lt;code class="language-markup"&gt;hello_world_trigger&lt;/code&gt; is the name assigned to the trigger.&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;docker exec {container id} influxdb3 create trigger \
-d my_database \
--plugin-filename="/plugins/hello-world.py" \
--trigger-spec="all_tables"  \
hello_world_trigger&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now we can enable it with the &lt;code class="language-markup"&gt;influxdb3 enable trigger&lt;/code&gt; command:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;docker exec {container id} influxdb3 enable trigger \
--database my_database  \
hello_world_trigger&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id="verify-your-trigger-and-plugin-are-working"&gt;Verify your trigger and plugin are working&lt;/h2&gt;

&lt;p&gt;To verify that our trigger is enabled correctly and that our plugin is working as expected, we can write a line to the &lt;code class="language-markup"&gt;my_database&lt;/code&gt; source database and query the &lt;code class="language-markup"&gt;unified_sensor_data database&lt;/code&gt; to verify that the standardization is working as expected. Use the &lt;a href="https://docs.influxdata.com/influxdb3/core/reference/cli/influxdb3/write/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=influxdb3_processing_engine_python_plugin&amp;amp;utm_content=blog"&gt;influxdb3 write&lt;/a&gt; command to accomplish the former:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;docker exec {container id} influxdb3 write \
--database my_database \
"sensor_data,sensor=TempSensor1,location=living\\ room temperature=22.5 123456789"&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Finally, we verify that our data has been transformed to our standard with the &lt;a href="https://docs.influxdata.com/influxdb3/core/reference/cli/influxdb3/query/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=influxdb3_processing_engine_python_plugin&amp;amp;utm_content=blog"&gt;influxdb3 query&lt;/a&gt; command:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;docker exec {container id} influxdb3 query \
--database unified_sensor_data \
"SELECT * FROM sensor_data"&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/f9146cb7d3b94fc68d5f4c20c7d0cced/aa7df498a611fb73dba965499a3da05b/unnamed.png" alt="" /&gt;&lt;/p&gt;

&lt;p&gt;The output confirms that the spaces in our &lt;code class="language-markup"&gt;location&lt;/code&gt; tag value were correctly replaced with underscores, a &lt;code class="language-markup"&gt;processed_at&lt;/code&gt; field was added, our &lt;code class="language-markup"&gt;sensor&lt;/code&gt; tag values were converted to lowercase, and our &lt;code class="language-markup"&gt;temperature&lt;/code&gt; field key now contains the temperature unit.&lt;/p&gt;

&lt;h2 id="final-thoughts"&gt;Final thoughts&lt;/h2&gt;

&lt;p&gt;I hope this tutorial helps you start creating Python Plugins and enabling triggers in InfluxDB 3 Core and Enterprise with Docker. I encourage you to look at the &lt;a href="https://github.com/influxdata/influxdb3_plugins/blob/main/examples/schedule/system_metrics/system_metrics.py/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=influxdb3_processing_engine_python_plugin&amp;amp;utm_content=blog"&gt;InfluxData/influxdb3_plugins&lt;/a&gt; as we start to add examples and plugins there. I also invite you to contribute any plugin that you create there! Get started by downloading &lt;a href="https://www.influxdata.com/downloads/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=influxdb3_processing_engine_python_plugin&amp;amp;utm_content=blog"&gt;Core or Enterprise&lt;/a&gt;. Share your feedback with our development team on &lt;a href="https://discord.com/invite/vZe2w2Ds8B"&gt;Discord&lt;/a&gt; in the #influxdb3_core channel, &lt;a href="https://influxdata.com/slack/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=influxdb3_processing_engine_python_plugin&amp;amp;utm_content=blog"&gt;Slack&lt;/a&gt; in the #influxdb3_core channel, or our &lt;a href="https://community.influxdata.com/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=influxdb3_processing_engine_python_plugin&amp;amp;utm_content=blog"&gt;Community Forums&lt;/a&gt;.&lt;/p&gt;
</description>
      <pubDate>Tue, 04 Mar 2025 07:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/influxdb3-processing-engine-python-plugin/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/influxdb3-processing-engine-python-plugin/</guid>
      <category>Developer</category>
      <author>Anais Dotis-Georgiou (InfluxData)</author>
    </item>
    <item>
      <title>CLI Operations for InfluxDB 3 Core and Enterprise</title>
      <description>&lt;p&gt;Note: This blog was written for InfluxDB 3 Core and Enterprise Alphas. Please check the &lt;a href="https://docs.influxdata.com/influxdb3/core/" title="documentation"&gt;documentation&lt;/a&gt; as there might be slight changes in its current version, Beta.&lt;/p&gt;

&lt;p&gt;This blog covers the nitty-gritty of essential command-line tools and workflows to effectively manage and interact with your InfluxDB 3 Core and Enterprise instances. Whether you’re starting or stopping the server with configurations like memory, file, or object store, this guide will walk you through the process. We’ll also look at creating and writing data into databases using authentication tokens, exploring direct line protocol input versus file-based approaches for tasks like testing. You’ll learn how to query data efficiently using SQL and InfluxQL, set up performance-boosting features like the last value cache and meta-cache, and tidy up by deleting databases and tables when necessary. Let’s get started with mastering the InfluxDB 3 Core and Enterprise CLI!&lt;/p&gt;

&lt;h2 id="requirements"&gt;Requirements&lt;/h2&gt;

&lt;p&gt;This blog post assumes you meet the following requirements: &lt;a href="https://docs.influxdata.com/influxdb3/core/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=cli_operations_influxdb3_core_enterprise&amp;amp;utm_content=blog"&gt;InfluxDB 3 Core&lt;/a&gt; or &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=cli_operations_influxdb3_core_enterprise&amp;amp;utm_content=blog"&gt;InfluxDB Enterprise&lt;/a&gt;. If you’re just getting started, I recommend beginning with InfluxDB 3 Core. As the open-source version of InfluxDB 3 Enterprise, it’s a solid foundation for learning and an excellent choice as an edge data collector. Alternatively, you can use the free trial of Enterprise or upgrade from Core to Enterprise—the CLI commands in Core are identical to those in Enterprise. To upgrade from Core to Enterprise, users need to download and install InfluxDB 3 Enterprise in free trial mode or with a valid license in the same environment where they installed Core, ensuring it points to the same object store.
&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/984938c8a77a4fc5a95dd26627d5a93e/546366909658364f223dfd2379cb7505/unnamed.png" alt="" /&gt;&lt;/p&gt;

&lt;p&gt;Image of the CLI after you install it with a single cURL command. There are options to install as a simple download or with Docker.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Naming Conventions Note:&lt;/strong&gt; If you’re a previous InfluxDB user, some changes in nomenclature might be confusing; please note that the following are synonymous:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Bucket ↔ Database&lt;/li&gt;
  &lt;li&gt;Measurement ↔ Table&lt;/li&gt;
  &lt;li&gt;Fields, Tags ↔ Columns&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="start-by-getting-code-classlanguage-markup-helpcode"&gt;Start by getting &lt;code class="language-markup"&gt;–-help&lt;/code&gt;&lt;/h2&gt;

&lt;p&gt;Once you have InfluxDB 3 Core or Enterprise installed, the &lt;code class="language-markup"&gt;influxdb3&lt;/code&gt; command-line interface (CLI) becomes your go-to tool for interacting with the database. This versatile CLI allows you to manage your InfluxDB instance, create resources like databases and tokens, and perform essential operations such as querying and writing data. Running &lt;code class="language-markup"&gt;influxdb3 --help&lt;/code&gt; provides a complete list of commands and options, with examples to get you started. Whether you’re setting up a server with &lt;code class="language-markup"&gt;influxdb3 serve&lt;/code&gt;, writing data with &lt;code class="language-markup"&gt;influxdb3 write&lt;/code&gt;, or running queries using &lt;code class="language-markup"&gt;influxdb3 query&lt;/code&gt;, this tool makes it simple to work with your time series data.&lt;/p&gt;

&lt;p&gt;Here’s a detailed explanation of each command listed in the &lt;code class="language-markup"&gt;influxdb3 --help&lt;/code&gt; output to help you understand their purpose and usage:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;serve&lt;/code&gt;: Starts the InfluxDB 3 Core or Enterprise server, the central process that makes the database operational.&lt;/li&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;query&lt;/code&gt;: Executes queries against a running InfluxDB 3 Core or Enterprise server, allowing you to retrieve and analyze stored data.&lt;/li&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;write&lt;/code&gt;: Writes time series data to an InfluxDB 3 Core or Enterprise server. This command allows you to add data manually or programmatically from other tools.&lt;/li&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;create&lt;/code&gt;: Helps you create new resources in your InfluxDB instance, such as databases or tokens.&lt;/li&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;help&lt;/code&gt;: Displays help information for the &lt;code class="language-markup"&gt;influxdb3&lt;/code&gt; CLI or a specific command. For example, &lt;code class="language-markup"&gt;influxdb3 query –-help&lt;/code&gt; provides detailed usage instructions for the query command.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="starting-the-server-with-different-configurations"&gt;Starting the server with different configurations&lt;/h2&gt;

&lt;p&gt;When you first install InfluxDB 3, the CLI walks you through server configuration options. After that initial setup, use the &lt;code class="language-markup"&gt;serve&lt;/code&gt; command to start the instance:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-json"&gt;influxdb3 serve --object-store file --data-dir ~/.influxdb3 --node-id node0&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;InfluxDB 3 runs on port 8181 by default, but you can add the &lt;code class="language-markup"&gt;--http-bind&lt;/code&gt; option to specify a different bind address and port. You must also specify the &lt;code class="language-markup"&gt;--node-id&lt;/code&gt;, an identifier that determines the server’s storage path within the configured storage location; it must be unique across any nodes sharing the same object store configuration, such as the same bucket. Parquet files serve as the durable, persisted data format for InfluxDB 3 Core and Enterprise, making object storage the preferred solution for long-term data retention. This approach significantly lowers storage costs while maintaining excellent performance. The &lt;code class="language-markup"&gt;--object-store&lt;/code&gt; option allows users to specify where to write those Parquet files: to memory, the local file system, Amazon S3, Azure Blob Storage, or Google Cloud Storage. The supported options are:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;memory&lt;/code&gt; (default): Effectively no object persistence.&lt;/li&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;memorythrottled&lt;/code&gt;: Like &lt;code class="language-markup"&gt;memory&lt;/code&gt; but with latency and throughput that somewhat resembles a cloud object store. Useful for testing and benchmarking.&lt;/li&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;file&lt;/code&gt;: Stores objects in the local filesystem. Must also set &lt;code class="language-markup"&gt;--data-dir&lt;/code&gt;.&lt;/li&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;s3&lt;/code&gt;: Amazon S3. Must also set &lt;code class="language-markup"&gt;--bucket&lt;/code&gt;, &lt;code class="language-markup"&gt;--aws-access-key-id&lt;/code&gt;, &lt;code class="language-markup"&gt;--aws-secret-access-key&lt;/code&gt;, and possibly &lt;code class="language-markup"&gt;--aws-default-region&lt;/code&gt;.&lt;/li&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;google&lt;/code&gt;: Google Cloud Storage. Must also set &lt;code class="language-markup"&gt;--bucket&lt;/code&gt; and &lt;code class="language-markup"&gt;--google-service-account&lt;/code&gt;.&lt;/li&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;azure&lt;/code&gt;: Microsoft Azure blob storage. Must also set &lt;code class="language-markup"&gt;--bucket&lt;/code&gt;, &lt;code class="language-markup"&gt;--azure-storage-account&lt;/code&gt;, and &lt;code class="language-markup"&gt;--azure-storage-access-key&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For example, I could use the serve command to start the instance using memory as the “object store” with:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-json"&gt;influxdb3 serve --object-store memory --node-id node0&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;In-memory storage doesn’t provide a permanent object store, and data clears on restart, but it is the fastest way to get running with InfluxDB 3.&lt;/p&gt;

&lt;h2 id="create-a-database-and-write-to-it-with-a-plain-token"&gt;Create a database and write to it with a plain token&lt;/h2&gt;

&lt;p&gt;Now we’re ready to create a database and write data with one command:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-json"&gt;influxdb3 write --database [your database name] --file [path to your line protocol data]`&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;For example, to write line protocol data from a file, we might use:&lt;/p&gt;
&lt;pre class=""&gt;&lt;code class="language-json"&gt;influxdb3 write --database airSensors --file ~/Desktop/airsensors.lp&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Where &lt;code class="language-markup"&gt;airsensors.lp&lt;/code&gt; is a file that contains &lt;a href="https://docs.influxdata.com/influxdb/cloud-serverless/reference/syntax/line-protocol/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=cli_operations_influxdb3_core_enterprise&amp;amp;utm_content=blog"&gt;line protocol&lt;/a&gt; data, the ingest format for InfluxDB.  You can find a selection of line protocol real-time datasets &lt;a href="https://github.com/influxdata/influxdb2-sample-data/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=cli_operations_influxdb3_core_enterprise&amp;amp;utm_content=blog"&gt;here&lt;/a&gt;. For example, you could download some &lt;a href="https://github.com/influxdata/influxdb2-sample-data/blob/master/air-sensor-data/air-sensor-data.lp/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=cli_operations_influxdb3_core_enterprise&amp;amp;utm_content=blog"&gt;Air Sensor data&lt;/a&gt; which looks like:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-json"&gt;airSensors,sensor_id=TLM0100 temperature=71.24021491535241,humidity=35.0752743309533,co=0.5098629816173851 1732669098000000000
airSensors,sensor_id=TLM0101 temperature=71.84309523593232,humidity=34.934199682459,co=0.5034259382294339 1732669098000000000
airSensors,sensor_id=TLM0102 temperature=71.95391915782443,humidity=34.92433120092046,co=0.5175197455105179 1732669098000000000&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This dataset contains temperature, carbon monoxide, and humidity data from eight different sensors.&lt;/p&gt;
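
&lt;p&gt;To make the format concrete, here is a minimal, illustrative Python parser for a single line of line protocol. This is a simplified sketch for readability only: it does not handle escaped commas or spaces, quoted string fields, or integer type suffixes, all of which the real format supports.&lt;/p&gt;

```python
def parse_line(line):
    """Parse one simplified line of InfluxDB line protocol:
    measurement[,tag=value...] field=value[,field=value...] [timestamp]
    (No support for escaping, quoted strings, or type suffixes.)"""
    head, fields_part, *rest = line.strip().split(" ")
    measurement, *tag_pairs = head.split(",")
    tags = dict(pair.split("=", 1) for pair in tag_pairs)
    fields = {key: float(value) for key, value in
              (pair.split("=", 1) for pair in fields_part.split(","))}
    # The optional trailing timestamp is in nanoseconds by default.
    return {"measurement": measurement, "tags": tags, "fields": fields,
            "timestamp": int(rest[0]) if rest else None}

point = parse_line(
    "airSensors,sensor_id=TLM0100 "
    "temperature=71.24021491535241,humidity=35.0752743309533,"
    "co=0.5098629816173851 1732669098000000000"
)
print(point["measurement"], point["tags"], point["timestamp"])
```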

&lt;p&gt;Or, we could write those few lines of data directly, instead of pointing to a file, with:&lt;/p&gt;
&lt;pre class=""&gt;&lt;code class="language-json"&gt;influxdb3 write --database [your database name] \
'airSensors,sensor_id=TLM0100 temperature=71.24021491535241,humidity=35.0752743309533,co=0.5098629816173851 1732669098000000000
airSensors,sensor_id=TLM0101 temperature=71.84309523593232,humidity=34.934199682459,co=0.5034259382294339 1732669098000000000
airSensors,sensor_id=TLM0102 temperature=71.95391915782443,humidity=34.92433120092046,co=0.5175197455105179 1732669098000000000'&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;After writing the data with the &lt;code class="language-markup"&gt;influxdb3 write&lt;/code&gt; command, you should see the following confirmation:&lt;/p&gt;

&lt;p&gt;&lt;code class="language-markup"&gt;success&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;In this example, and by default, we are serving InfluxDB 3 Core with the plain &lt;code class="language-markup"&gt;Token&lt;/code&gt;. In the next section, we’ll learn how to write data to InfluxDB 3 Core or Enterprise with the &lt;code class="language-markup"&gt;Hashed Token&lt;/code&gt; and the difference between the two.&lt;/p&gt;

&lt;p&gt;You create a database on write with the &lt;code class="language-markup"&gt;influxdb3 write&lt;/code&gt; command. However, you can also elect to create a database with the &lt;code class="language-markup"&gt;influxdb3 create database [your database name]&lt;/code&gt; command.&lt;/p&gt;

&lt;h2 id="writing-into-the-database-with-authentication-tokens"&gt;Writing into the database with authentication tokens&lt;/h2&gt;

&lt;p&gt;You’ll need to create a token if you want to use the HTTP API or SDKs to access your data. To create a token with the InfluxDB CLI, you can use the following command:
&lt;code class="language-markup"&gt;influxdb3 create token&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You should see the following output:&lt;/p&gt;
&lt;pre class=""&gt;&lt;code class="language-json"&gt;Token: apiv3_xxx
Hashed Token: zzz
Start the server with `influxdb3 serve --bearer-token zzz`
HTTP requests require the following header: "Authorization: Bearer apiv3_xxx"
This will grant you access to every HTTP endpoint or deny it otherwise.&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now, you can elect to serve influxdb3 with the bearer-token:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-json"&gt;influxdb3 serve --object-store memory --bearer-token zzz --node-id node0&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;You can also elect to serve the instance and store objects in the local filesystem:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-json"&gt;influxdb3 serve --object-store file --data-dir ~/.influxdb3 --bearer-token zzz --node-id node0&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The hashed token is a cryptographic representation of the plain token. By passing the hashed token to the server, you avoid exposing the plain token in the command line, logs, or configuration files. So when a client sends a plain bearer token in an HTTP request, the server hashes the received token and compares the hashed result to the hashed token you provided at startup. This ensures that the server can validate the plain token securely without needing to store or process it directly. It’s best practice to serve InfluxDB 3 Core and Enterprise with the &lt;code class="language-markup"&gt;Hashed Token&lt;/code&gt;.&lt;/p&gt;
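
&lt;p&gt;The hash-then-compare flow can be sketched in a few lines of Python. This is only an illustration of the principle; the specific hash function shown here (SHA-256) is an assumption for demonstration, not necessarily what InfluxDB uses internally.&lt;/p&gt;

```python
import hashlib
import hmac

def hash_token(plain_token):
    # The server stores and compares only this digest, never the plain token.
    return hashlib.sha256(plain_token.encode()).hexdigest()

def validate(request_token, stored_hashed_token):
    # Hash the token from the incoming request, then compare digests in
    # constant time (hmac.compare_digest) to avoid timing side channels.
    return hmac.compare_digest(hash_token(request_token), stored_hashed_token)

hashed = hash_token("apiv3_secret")      # analogous to the value passed at startup
print(validate("apiv3_secret", hashed))  # True
print(validate("apiv3_wrong", hashed))   # False
```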

&lt;p&gt;Now that we’re serving with the &lt;code class="language-markup"&gt;Hashed Token&lt;/code&gt;, we can use the same CLI commands above to write data to the database. Alternatively, if we wanted to write with cURL, we could do the following:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-json"&gt;curl \
"http://127.0.0.1:8181/api/v2/write?bucket=[your database name]&amp;amp;precision=s" \
--header "Authorization: Bearer apiv3_xxx" \
--data-binary 'home,room=kitchen temp=72 1732669098'&lt;/code&gt;&lt;/pre&gt;
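
&lt;p&gt;The same write can be issued from Python using only the standard library. This sketch just builds the request object (the host, database name, and token values are placeholders); note that the header carries the plain &lt;code class="language-markup"&gt;apiv3_&lt;/code&gt; token, which the server validates against its stored hash.&lt;/p&gt;

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_write_request(host, database, token, line_protocol, precision="s"):
    # v2-compatible write endpoint; the plain token goes in the header.
    query = urlencode({"bucket": database, "precision": precision})
    return Request(
        f"{host}/api/v2/write?{query}",
        data=line_protocol.encode(),
        headers={"Authorization": f"Bearer {token}"},
        method="POST",
    )

req = build_write_request("http://127.0.0.1:8181", "home", "apiv3_xxx",
                          "home,room=kitchen temp=72 1732669098")
print(req.full_url)
# To actually send it against a running server: urllib.request.urlopen(req)
```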

&lt;h2 id="querying-data-with-influxql-and-sql"&gt;Querying data with InfluxQL and SQL&lt;/h2&gt;

&lt;p&gt;Now we’re ready to &lt;a href="https://docs.influxdata.com/influxdb3/core/query-data/sql/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=cli_operations_influxdb3_core_enterprise&amp;amp;utm_content=blog"&gt;query our InfluxDB 3 Core instance with SQL&lt;/a&gt;:&lt;/p&gt;
&lt;pre class=""&gt;&lt;code class="language-json"&gt;influxdb3 query --database=[your database name] "SELECT * FROM airSensors LIMIT 10"&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;You can also elect to set the language to &lt;a href="https://docs.influxdata.com/influxdb3/core/query-data/influxql/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=cli_operations_influxdb3_core_enterprise&amp;amp;utm_content=blog"&gt;InfluxQL&lt;/a&gt; instead of SQL if you’re an existing InfluxDB user and more familiar with that language. Simply specify the language with: &lt;code class="language-markup"&gt;--language=influxql&lt;/code&gt;:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-json"&gt;influxdb3 query --database=[your database name] --language=influxql "SHOW MEASUREMENTS"&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/ca2e1d9d5b434b8ba5f05ce5e723c82c/28b69d3e5ceff42dda2ed7cc44eca275/unnamed.png" alt="" /&gt;
Output from querying InfluxQL with the InfluxDB 3 CLI.&lt;/p&gt;

&lt;h2 id="setting-up-last-value-cache-and-distinct-value-cache"&gt;Setting up last value cache and distinct value cache&lt;/h2&gt;

&lt;p&gt;InfluxDB 3 Core supports a &lt;a href="https://docs.influxdata.com/influxdb3/core/reference/cli/influxdb3/create/last_cache/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=cli_operations_influxdb3_core_enterprise&amp;amp;utm_content=blog"&gt;last-n values cache&lt;/a&gt; (LVC), which stores the last N values in a series or column hierarchy in memory, and a &lt;a href="https://docs.influxdata.com/influxdb3/core/reference/cli/influxdb3/create/distinct_cache/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=cli_operations_influxdb3_core_enterprise&amp;amp;utm_content=blog"&gt;distinct value cache&lt;/a&gt;, which retains unique values for a single column or a hierarchy of columns in RAM. The last value cache allows InfluxDB 3 to answer last-n values queries in under 10 milliseconds, while the distinct value cache is great for fast metadata lookups.&lt;/p&gt;

&lt;p&gt;In future blog posts, we’ll dive into benchmarking the performance of these caches, but for this blog, let’s focus on creating and using them. The process for creating both caches is almost identical, so let’s demonstrate the process by creating a last-value cache with:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-json"&gt;influxdb3 create last_cache --database [your database name] --table [your database table] [CACHE_NAME]&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The &lt;code class="language-markup"&gt;[CACHE_NAME]&lt;/code&gt; is optional, and the command automatically generates a name if not provided. So, for example, if we wanted to create a last cache for the airSensors table/measurement.&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-json"&gt;influxdb3 create last_cache --database [your database name] --table airSensors airSensorsLVC&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;You should see the following output:&lt;/p&gt;
&lt;pre class=""&gt;&lt;code class="language-json"&gt;new cache created: {
  "table": "airSensors",
  "name": "airSensorsLVC",
  "key_columns": [
    0
  ],
  "value_columns": "all_non_key_columns",
  "count": 1,
  "ttl": 14400
}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now we can query the last values from the cache with:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-json"&gt;influxdb3 query --database=[your database name] "SELECT * FROM last_cache(airSensors, airSensorsLVC) LIMIT 10"&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The last-value cache is helpful because it enables fast, efficient access to the most recent data for specific combinations of key column values. This helps users to ensure they have up-to-date data for alerts or decision-making. It’s also extremely useful when working with an event-based time-series application where you aren’t writing subsets of data regularly. In this instance, you want to avoid full scans of historical data for queries focused on the latest values.&lt;/p&gt;

&lt;h3 id="creating-a-last-value-cache-for-foo"&gt;Creating a Last Value Cache for foo&lt;/h3&gt;

&lt;p&gt;To better understand that output, we need to dive into the additional &lt;code class="language-markup"&gt;create last_cache&lt;/code&gt; options that we didn’t utilize during our LVC creation. The &lt;code class="language-markup"&gt;create last_cache&lt;/code&gt; command has the following options, as per &lt;a href="https://docs.influxdata.com/influxdb3/core/get-started/#last-values-cache/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=cli_operations_influxdb3_core_enterprise&amp;amp;utm_content=blog"&gt;the documentation&lt;/a&gt;:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;--key-columns&lt;/code&gt;:  A comma-separated list of columns to use as keys in the cache. For example: &lt;code class="language-markup"&gt;foo,bar,baz&lt;/code&gt;. This provides the top level of the cache.&lt;/li&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;--value-columns&lt;/code&gt;: A comma-separated list of columns to store as values in the cache. For example: &lt;code class="language-markup"&gt;foo,bar,baz&lt;/code&gt;. At the leaf (or terminal) nodes of the hierarchy, a buffer is maintained to store the values. The buffer size is determined by the &lt;code class="language-markup"&gt;--count&lt;/code&gt; parameter.&lt;/li&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;--count&lt;/code&gt;: The number of entries per unique key column combination to store in the cache. The maximum number here can be 10.&lt;/li&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;--ttl&lt;/code&gt;: The cache entries’ time-to-live (TTL) in Humantime form–for example: &lt;code class="language-markup"&gt;10s, 1min 30sec, 3 hours. If any entry in the buffer has been there for longer than the TTL, it will be removed, regardless of the buffer size and how many entries it contains.&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When data is written, the values are added to the buffer corresponding to the matching combination of key column values. For example, imagine we create a cache with:
&lt;code class="language-markup"&gt;influxdb3 create last_cache --database [your database name] --table [your database table]  --key-columns t1,t2 --value-columns f1 --count 5&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Consider the following line protocol data, where 1 denotes data written at the first timestamp:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-json"&gt;foo,t1=A,t2=A f1=1 1 
foo,t1=A,t2=B f1=2 1 
foo,t1=B,t2=C f1=3 1 
foo,t1=B,t2=D f1=4 1&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Data would be added to the buffer in the cache in the following way: 
&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/ef91dfa205cd4f0fa5c8f1d749f96d10/774a5be2cc76bb6e60a30d0dff5d70b5/unnamed.png" alt="" /&gt;
A hierarchical cache structure for organizing data with Last Value Cache based on key columns and storing values in buffers for InfluxDB 3.&lt;/p&gt;

&lt;p&gt;Now imagine we write another line of data: &lt;code class="language-markup"&gt;foo,t1=A,t2=A f1=2 2&lt;/code&gt;. Now we see it added to the following buffer:&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/b1c189709a934a1594b811f3592506b8/2d81ed790b172b33c04c3d3849caefb0/unnamed.png" alt="" /&gt;&lt;/p&gt;

&lt;p&gt;Values are only buffered if their time is newer than the most recent value of their respective buffer.&lt;/p&gt;
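
&lt;p&gt;The buffering rules above can be modeled in a few lines of Python. This toy model (not InfluxDB’s implementation) mirrors the example cache: key columns t1 and t2, a buffer of up to five entries per key combination, and rejection of points that aren’t newer than the buffer’s latest entry; TTL eviction is omitted for brevity.&lt;/p&gt;

```python
from collections import deque

class ToyLastValueCache:
    """Toy model of the last-n values cache: one bounded buffer per
    unique key-column combination."""
    def __init__(self, key_columns, count):
        self.key_columns = key_columns
        self.count = count
        self.buffers = {}  # key tuple -> deque of (timestamp, fields)

    def write(self, tags, fields, timestamp):
        key = tuple(tags[k] for k in self.key_columns)
        buf = self.buffers.setdefault(key, deque(maxlen=self.count))
        # Values are only buffered if newer than the buffer's latest entry.
        if buf and timestamp <= buf[-1][0]:
            return False
        buf.append((timestamp, fields))
        return True

# Mirror the example: --key-columns t1,t2 --value-columns f1 --count 5
lvc = ToyLastValueCache(key_columns=["t1", "t2"], count=5)
for t1, t2, f1 in [("A", "A", 1), ("A", "B", 2), ("B", "C", 3), ("B", "D", 4)]:
    lvc.write({"t1": t1, "t2": t2}, {"f1": f1}, timestamp=1)

# The later point for t1=A,t2=A lands in that combination's buffer.
lvc.write({"t1": "A", "t2": "A"}, {"f1": 2}, timestamp=2)
print(list(lvc.buffers[("A", "A")]))  # [(1, {'f1': 1}), (2, {'f1': 2})]
```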

&lt;p&gt;The distinct value cache operates similarly to the last value cache with the following option differences, as per the documentation:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;--columns&lt;/code&gt; (required): A comma-separated list of columns to cache distinct values. For example: &lt;code class="language-markup"&gt;col1,col2,col3&lt;/code&gt;.&lt;/li&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;--max-cardinality&lt;/code&gt;: Maximum number of distinct value combinations to hold in the cache.&lt;/li&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;--max-age&lt;/code&gt;: Maximum age of an entry in the cache entered as a human-readable duration. For example: &lt;code class="language-markup"&gt;30d, 24h&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As mentioned above, the distinct value cache is optimal for fast metadata lookups. It helps guide queries toward the correct subset of data by traversing the cache structure and looking for the leaf nodes that match the keys in the query. For example, in an IoT scenario, this could help developers quickly check whether a particular sensor (perhaps one of thousands of identical sensors across multiple factories) is reporting data and operating as expected.&lt;/p&gt;
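
&lt;p&gt;A similarly reduced Python model shows how a distinct value cache answers the “is this sensor reporting?” question without scanning the underlying data. The column names and cardinality limit here are illustrative only, and TTL-based eviction is omitted.&lt;/p&gt;

```python
class ToyDistinctCache:
    """Toy model of the distinct value cache: unique combinations of the
    configured columns, capped at max_cardinality."""
    def __init__(self, columns, max_cardinality):
        self.columns = columns
        self.max_cardinality = max_cardinality
        self.values = set()

    def write(self, row):
        combo = tuple(row[c] for c in self.columns)
        if combo in self.values or len(self.values) < self.max_cardinality:
            self.values.add(combo)

    def contains(self, **cols):
        # Fast metadata lookup: no scan of the stored points needed.
        return tuple(cols[c] for c in self.columns) in self.values

# Hypothetical columns for an IoT fleet across factories.
dvc = ToyDistinctCache(columns=["factory", "sensor_id"], max_cardinality=1000)
for i in range(3):
    dvc.write({"factory": "F1", "sensor_id": f"TLM010{i}", "temp": 70.0 + i})

print(dvc.contains(factory="F1", sensor_id="TLM0101"))  # True
print(dvc.contains(factory="F2", sensor_id="TLM0101"))  # False
```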

&lt;h2 id="deletes"&gt;Deletes&lt;/h2&gt;

&lt;p&gt;We can also use the &lt;a href="https://docs.influxdata.com/influxdb3/core/reference/cli/influxdb3/delete/"&gt;influxdb3 delete&lt;/a&gt; command to delete databases, distinct value caches, last value caches, the file index for a database, plugins, tables in a database, and triggers. We’ll learn more about plugins and triggers in a future blog post. For now, let’s delete the database we created with:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;influxdb3 delete database [your database name] &lt;/code&gt;&lt;/p&gt;

&lt;h2 id="stopping-the-server"&gt;Stopping the server&lt;/h2&gt;

&lt;p&gt;Finally, if you want to stop the running influxdb3 server, you can kill the process. First, find its PID with:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-json"&gt;pgrep influxdb3&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;And then using:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-json"&gt;kill [PID]&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id="final-thoughts"&gt;Final thoughts&lt;/h2&gt;

&lt;p&gt;I hope this blog post helps you get started with InfluxDB 3 &lt;a href="https://www.influxdata.com/products/influxdb/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=cli_operations_influxdb3_core_enterprise&amp;amp;utm_content=blog"&gt;Core&lt;/a&gt; or &lt;a href="https://www.influxdata.com/products/influxdb3-enterprise/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=cli_operations_influxdb3_core_enterprise&amp;amp;utm_content=blog"&gt;Enterprise&lt;/a&gt;. In an upcoming blog post, we’ll learn how to create Python Plugins for the Processing Engine for InfluxDB 3 Enterprise, a feature that is also controlled through the CLI. As always, get started with &lt;a href="https://cloud2.influxdata.com/signup/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=cli_operations_influxdb3_core_enterprise&amp;amp;utm_content=blog"&gt;InfluxDB 3 Cloud here&lt;/a&gt; and Core and Enterprise here.  If you need help, please contact us on our &lt;a href="https://community.influxdata.com/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=cli_operations_influxdb3_core_enterprise&amp;amp;utm_content=blog"&gt;community site&lt;/a&gt; or &lt;a href="https://influxdata.com/slack/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=cli_operations_influxdb3_core_enterprise&amp;amp;utm_content=blog"&gt;Slack channel&lt;/a&gt;. If you are also working on a data processing project with InfluxDB, I’d love to hear from you!&lt;/p&gt;
</description>
      <pubDate>Tue, 04 Feb 2025 07:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/cli-operations-influxdb3-core-enterprise/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/cli-operations-influxdb3-core-enterprise/</guid>
      <category>Developer</category>
      <author>Anais Dotis-Georgiou (InfluxData)</author>
    </item>
    <item>
      <title>Get Started with the TIG Stack and InfluxDB Core</title>
      <description>&lt;p&gt;Note: This blog was written for InfluxDB 3 Core and Enterprise Alphas. Please check the &lt;a href="https://docs.influxdata.com/influxdb3/core/" title="documentation"&gt;documentation&lt;/a&gt; as there might be slight changes in its current version, Beta.&lt;/p&gt;

&lt;p&gt;Time series data is everywhere—from IoT sensors and server metrics to financial transactions and user behavior. To collect, store, and analyze this data efficiently, you need tools purpose-built for the job. That’s where the TIG Stack comes in: Telegraf for data collection, InfluxDB for storage and analytics, and Grafana for visualization. Together, these tools offer a powerful solution for real-time analytics, observability, and monitoring.&lt;/p&gt;

&lt;p&gt;At the heart of the TIG Stack is InfluxDB, a database platform specifically designed to collect, process, transform, and store time series data. InfluxDB is widely used for applications requiring real-time monitoring, lightning-fast queries, and interactive dashboards—whether you’re tracking IoT sensor data, monitoring server performance, analyzing financial markets, or ensuring network reliability.&lt;/p&gt;

&lt;p&gt;The latest version, &lt;a href="https://www.influxdata.com/blog/influxdb3-open-source-public-alpha/"&gt;InfluxDB Core&lt;/a&gt;, introduces significant improvements in performance, usability, and cost-efficiency. These enhancements make it easier than ever to handle high-volume time series data and deliver real-time insights.&lt;/p&gt;

&lt;p&gt;In this blog post, you’ll learn how to get started with the TIG Stack. We’ll guide you through:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Writing data to InfluxDB 3 Core with Telegraf&lt;/li&gt;
  &lt;li&gt;Visualizing your data with Grafana&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="requirements"&gt;Requirements&lt;/h2&gt;

&lt;p&gt;You’ll need to meet the following requirements to run this tutorial yourself. Please install the following software (or use Grafana Cloud):&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href="https://docs.influxdata.com/telegraf/v1/install/#download-and-install-telegraf"&gt;Telegraf&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://docs.influxdata.com/influxdb3/core/get-started/"&gt;InfluxDB Core&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://grafana.com/oss/grafana/"&gt;Grafana&lt;/a&gt;: for example, if you’re on MacOS using homebrew, you can use &lt;code class="language-markup"&gt;brew install grafana&lt;/code&gt; and &lt;code class="language-markup"&gt;brew services start grafana&lt;/code&gt;. Grafana will now be available on &lt;a href="http://localhost:3000/"&gt;http://localhost:3000/&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="get-influxdb-up-and-running"&gt;Get InfluxDB up and running&lt;/h2&gt;

&lt;p&gt;After installing InfluxDB 3 Core, run the following command to start a server:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 serve --host-id=local01 --object-store file --data-dir ~/.influxdb3&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Next generate a token with:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 create token&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;You should see the following output:&lt;/p&gt;
&lt;pre class=""&gt;&lt;code class="language-bash"&gt;Token: apiv3_xxx
Hashed Token: zzz
Start the server with `influxdb3 serve --bearer-token zzz`
HTTP requests require the following header: "Authorization: Bearer apiv3_xxx"
This will grant you access to every HTTP endpoint or deny it otherwise.&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now, we can use the plain &lt;code class="language-markup"&gt;Token&lt;/code&gt; with Telegraf. The &lt;code class="language-markup"&gt;Hashed Token&lt;/code&gt; is a cryptographic representation of the plain token. By passing the hashed token to the server, you avoid exposing the plain token in the command line, logs, or configuration files. So when a client sends a plain bearer token in an HTTP request, the server hashes the received token and compares the hashed result to the hashed token you provided at startup. This ensures that the server can validate the plain token securely without storing or processing it directly.&lt;/p&gt;

&lt;h2 id="writing-data-to-influxdb-v3-oss-with-telegraf"&gt;Writing data to InfluxDB v3 OSS with Telegraf&lt;/h2&gt;

&lt;p&gt;First, you’ll need to install Telegraf on your machine. Review the &lt;a href="https://docs.influxdata.com/telegraf/v1/install/"&gt;requirements, download, and installation guide&lt;/a&gt; to get started. Once Telegraf is installed on your machine, we can create a configuration to write data to our InfluxDB instance. However, before diving into the configuration, let’s briefly discuss what Telegraf is and why its configuration is essential.&lt;/p&gt;

&lt;p&gt;Telegraf is a lightweight, open-source server agent for collecting, processing, and sending metrics and events from various sources to a data store like InfluxDB. It works by using plugins that define what data to gather (inputs), how to process it (processors), and where to send it (outputs).&lt;/p&gt;

&lt;p&gt;The Telegraf configuration file plays a central role in this process. It specifies which plugins to use, sets parameters like authentication credentials, and defines the data flow. Without a proper configuration file, Telegraf wouldn’t know:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Where to collect the data from (e.g., a database, MQTT, system metrics)&lt;/li&gt;
  &lt;li&gt;How to process or transform the data (optional)&lt;/li&gt;
  &lt;li&gt;Where to send the data (in this case, to InfluxDB v3 OSS).&lt;/li&gt;
&lt;/ul&gt;
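
&lt;p&gt;Conceptually, that input, processor, and output flow can be sketched in Python. This toy pipeline is not Telegraf code: it fakes a single CPU sample, enriches it with a tag, and serializes it to the line protocol that the real output plugin sends to InfluxDB; the tag and field names are illustrative.&lt;/p&gt;

```python
import time

def cpu_input():
    # Toy "input plugin": pretend to sample one CPU metric.
    return {"measurement": "cpu", "tags": {"host": "MacBook-Pro-4.local"},
            "fields": {"usage_active": 12.5}, "time": int(time.time())}

def add_tag(metric, key, value):
    # Toy "processor plugin": enrich the metric in flight.
    metric["tags"][key] = value
    return metric

def to_line_protocol(metric):
    # Toy "output plugin" serialization step: the line protocol string
    # that would be POSTed to InfluxDB's write endpoint.
    tags = ",".join(f"{k}={v}" for k, v in sorted(metric["tags"].items()))
    fields = ",".join(f"{k}={v}" for k, v in metric["fields"].items())
    return f'{metric["measurement"]},{tags} {fields} {metric["time"]}'

metric = add_tag(cpu_input(), "env", "dev")
line = to_line_protocol(metric)
print(line)
```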

&lt;p&gt;With Telegraf installed, you can use the following configuration to write data to your InfluxDB instance. This example sets up Telegraf to send system CPU metrics to InfluxDB v3 using the &lt;a href="https://github.com/influxdata/telegraf/blob/master/plugins/inputs/cpu/README.md"&gt;CPU Input Plugin&lt;/a&gt; and the &lt;a href="https://github.com/influxdata/telegraf/blob/master/plugins/outputs/influxdb_v2/README.md"&gt;InfluxDB v2 Output Plugin&lt;/a&gt; (the HTTP API endpoints are the same for v2 as they are for v3). Here’s what our configuration would look like:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;# Global configuration
[agent]
  interval = "10s"           # Collection interval
  flush_interval = "10s"     # Data flush interval
# Input Plugin: CPU Metrics
[[inputs.cpu]]
  percpu = true              # Collect per-CPU metrics
  totalcpu = true            # Collect total CPU metrics
  collect_cpu_time = false   # Do not collect CPU time metrics
  report_active = true       # Report active CPU percentage
# Output Plugin: InfluxDB v2
[[outputs.influxdb_v2]]
  urls = ["http://127.0.0.1:8181"]
  token = "your plain Token apiv3_xxx"
  organization = ""
  bucket = "cpu"&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: You don’t need to provide an Organization ID with InfluxDB Core.&lt;/p&gt;

&lt;p&gt;Now, we can run Telegraf with the following:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;telegraf --config pwd/telegraf.conf --debug&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The output helps us verify that we are successfully writing data with the following Telegraf logs:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;2025-01-09T23:34:02Z I! Loading config: ./telegraf.conf
2025-01-09T23:34:02Z I! Starting Telegraf 1.26.2
2025-01-09T23:34:02Z I! Available plugins: 235 inputs, 9 aggregators, 27 processors, 22 parsers, 57 outputs, 2 secret-stores
2025-01-09T23:34:02Z I! Loaded inputs: cpu
2025-01-09T23:34:02Z I! Loaded aggregators: 
2025-01-09T23:34:02Z I! Loaded processors: 
2025-01-09T23:34:02Z I! Loaded secretstores: 
2025-01-09T23:34:02Z I! Loaded outputs: influxdb_v2
2025-01-09T23:34:02Z I! Tags enabled: host=MacBook-Pro-4.local
2025-01-09T23:34:02Z I! [agent] Config: Interval:10s, Quiet:false, Hostname:"MacBook-Pro-4.local", Flush Interval:10s
2025-01-09T23:34:02Z D! [agent] Initializing plugins
2025-01-09T23:34:02Z D! [agent] Connecting outputs
2025-01-09T23:34:02Z D! [agent] Attempting connection to [outputs.influxdb_v2]
2025-01-09T23:34:02Z D! [agent] Successfully connected to outputs.influxdb_v2
2025-01-09T23:34:02Z D! [agent] Starting service inputs
2025-01-09T23:34:12Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
2025-01-09T23:34:23Z D! [outputs.influxdb_v2] Wrote batch of 13 metrics in 792.507791ms&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id="verifying-a-successful-write-with-the-influxdb-cli"&gt;Verifying a successful write with the InfluxDB CLI&lt;/h2&gt;

&lt;p&gt;Optionally, you can verify that CPU metrics are being written to InfluxDB v3 OSS with the following CLI command:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 query --database=cpu "SELECT * FROM cpu LIMIT 10"&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id="configuring-the-grafana-source"&gt;Configuring the Grafana source&lt;/h2&gt;

&lt;p&gt;To configure InfluxDB Core as a new source in Grafana, visit &lt;a href="http://localhost:3000/"&gt;http://localhost:3000/&lt;/a&gt; and navigate to &lt;strong&gt;Connections&lt;/strong&gt; &amp;gt; &lt;strong&gt;Data sources&lt;/strong&gt; &amp;gt; &lt;strong&gt;Add new data source&lt;/strong&gt;, then search for and select &lt;strong&gt;InfluxDB&lt;/strong&gt;. Select SQL as the query language. Then provide the following credentials in the configuration:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;URL: &lt;a href="http://localhost:8181/"&gt;http://localhost:8181&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Database: cpu&lt;/li&gt;
  &lt;li&gt;Insecure Connection: toggle on&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Hit &lt;strong&gt;Save &amp;amp; Test&lt;/strong&gt; to verify that you can connect to InfluxDB Core.
&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/ca13b6df9ebf4277bb46a100d5cc9b06/20981a20271c14a71abb85c922edd0cc/unnamed.png" alt="" /&gt;
Now, you can build visualizations and add them to your dashboards as you usually would in Grafana by navigating to &lt;strong&gt;Dashboards&lt;/strong&gt; &amp;gt; &lt;strong&gt;+Create Dashboard&lt;/strong&gt; &amp;gt; &lt;strong&gt;+Add Visualization&lt;/strong&gt; &amp;gt; &lt;strong&gt;Select Datasource&lt;/strong&gt; &amp;gt; &lt;strong&gt;influxdb&lt;/strong&gt;, and use the Builder to generate a SQL query for you. For example, the Builder generated the following query and visualization:&lt;/p&gt;
&lt;pre class=""&gt;&lt;code class="language-sql"&gt;SELECT "cpu", "usage_user", "time" FROM "cpu" WHERE "time" &amp;gt;= $__timeFrom AND "time" &amp;lt;= $__timeTo AND "cpu" = 'cpu0'&lt;/code&gt;&lt;/pre&gt;

&lt;figure&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/9481213f0d264990b3f92b2f4918ad7c/aa611e2fd1778169c6a38651ff54d72a/unnamed.png" /&gt;&lt;/figure&gt;

&lt;h2 id="final-thoughts"&gt;Final thoughts&lt;/h2&gt;

&lt;p&gt;I hope this blog post helps you get started using InfluxDB Core, Telegraf, and Grafana. If you have any feedback about InfluxDB Core, we’d love to hear it in the &lt;a href="https://discord.com/invite/vZe2w2Ds8B"&gt;#influxdb3_core channel on Discord&lt;/a&gt;. Your experience and opinions are important to us during the Alpha release of InfluxDB Core. Get started with &lt;a href="https://cloud2.influxdata.com/signup"&gt;InfluxDB v3 Cloud here&lt;/a&gt;. If you need help, please reach out on our &lt;a href="https://community.influxdata.com/"&gt;community site&lt;/a&gt; or &lt;a href="https://influxdata.com/slack"&gt;Slack channel&lt;/a&gt;. If you are working on a data processing project with InfluxDB, I’d love to hear from you!&lt;/p&gt;
</description>
      <pubDate>Fri, 24 Jan 2025 07:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/tig-stack-guide-influxdb-core/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/tig-stack-guide-influxdb-core/</guid>
      <category>Developer</category>
      <author>Anais Dotis-Georgiou (InfluxData)</author>
    </item>
  </channel>
</rss>
