<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>InfluxData Blog - Scott Anderson</title>
    <description>Posts by Scott Anderson on the InfluxData Blog</description>
    <link>https://www.influxdata.com/blog/author/scott-anderson/</link>
    <language>en-us</language>
    <lastBuildDate>Mon, 27 Oct 2025 08:00:00 +0000</lastBuildDate>
    <pubDate>Mon, 27 Oct 2025 08:00:00 +0000</pubDate>
    <ttl>1800</ttl>
    <item>
      <title>Query Distinct Tag Values in Under 30ms with the InfluxDB 3 Distinct Value Cache</title>
      <description>&lt;p&gt;The Distinct Value Cache (DVC) available with &lt;a href="https://www.influxdata.com/products/influxdb/?utm_source=website&amp;amp;utm_medium=query_distinct_tag_values_influxdb&amp;amp;utm_content=blog"&gt;InfluxDB 3 Core&lt;/a&gt; and &lt;a href="https://www.influxdata.com/products/influxdb-3-enterprise/?utm_source=website&amp;amp;utm_medium=query_distinct_tag_values_influxdb&amp;amp;utm_content=blog"&gt;InfluxDB 3 Enterprise&lt;/a&gt; lets you cache distinct values of specific columns and query those values in under 30ms.&lt;/p&gt;

&lt;p&gt;The DVC is an in-memory cache that stores distinct values of one or more columns in a table. It is typically used to cache distinct tag values, but you can also cache distinct field values. When you create a DVC, you specify what columns’ distinct values to cache, the maximum number of distinct value combinations to cache, and how long to keep distinct values in the cache (TTL).&lt;/p&gt;

&lt;h2 id="why-use-a-distinct-value-cache"&gt;Why use a Distinct Value Cache?&lt;/h2&gt;

&lt;p&gt;The DVC provides a simple and performant way to query distinct column values in under 30ms. This type of query is commonly used to populate selectable options in web applications. For example, you can &lt;a href="https://grafana.com/docs/grafana/latest/dashboards/variables/add-template-variables/"&gt;create a Grafana template variable&lt;/a&gt; that lets you select from a list of distinct tag values and modify a dashboard’s queries to display data specific to the selected values.&lt;/p&gt;

&lt;p&gt;To return a list of distinct tag values without using the DVC, you would use a query similar to:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-sql"&gt;SELECT DISTINCT
  region
FROM
  system_monitor
WHERE
  time "&amp;gt;"= now() - INTERVAL '7 days'
  AND time ""= now()&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This query requires time bounds to prevent the query engine from having to read all rows in the queried table. Without time bounds or when the queried time range is too large, the query could be very “heavy,” potentially requiring a resource-hungry full table scan. However, the downside of the time bounds is that the query won’t return distinct values outside of the queried time range.&lt;/p&gt;

&lt;p&gt;To query distinct values from the DVC, the query would look similar to:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-sql"&gt;SELECT region FROM distinct_cache('system_monitor', 'region_cache')&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This query doesn’t need to perform a full table scan to derive a list of distinct column values; it just queries the list of distinct values from the in-memory cache. Distinct value expiration is handled by the cache, ensuring the query returns all unexpired distinct values, regardless of time range. And it returns results in under 30ms.&lt;/p&gt;

&lt;h2 id="set-up-a-distinct-value-cache"&gt;Set up a Distinct Value Cache&lt;/h2&gt;

&lt;p&gt;Each DVC is associated with a table, and a table can have multiple DVCs. You can add a DVC to an existing table, but for this example, we’ll create a new table to store the &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/reference/sample-data/#european-union-wind-data"&gt;European Union (EU) wind sample dataset&lt;/a&gt;.&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Use the &lt;code class="language-markup"&gt;influxdb3 create table&lt;/code&gt; command to create a new &lt;code&gt;wind_data&lt;/code&gt; table. Because we know the schema of the sample data, we can pre-create the table with the necessary tag and field columns:&lt;/li&gt;
&lt;/ol&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 create table \
  --token INFLUXDB_TOKEN \
  --database EXAMPLE_DB \
  --tags country,county,city \
  --fields wind_direction:int64,wind_speed:float64 \
  wind_data&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Replace the following:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;INFLUXDB_TOKEN&lt;/code&gt;: Your InfluxDB &lt;em&gt;admin&lt;/em&gt; token.&lt;/li&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;EXAMPLE_DB&lt;/code&gt;: The name of the target database.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Note:&lt;/em&gt;&lt;/strong&gt; You can also &lt;a href="https://docs.influxdata.com/influxdb3/explorer/manage-caches/distinct-value-caches/"&gt;use InfluxDB 3 Explorer to create and manage DVCs&lt;/a&gt;.&lt;/p&gt;

&lt;ol start="2"&gt;
  &lt;li&gt;Use the &lt;code class="language-markup"&gt;influxdb3 create distinct_cache&lt;/code&gt; command to create a new DVC associated with the &lt;code class="language-markup"&gt;wind_data&lt;/code&gt; table. You can provide the following:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Table (&lt;code class="language-markup"&gt;--table&lt;/code&gt;):&lt;/strong&gt; &lt;em&gt;(Required)&lt;/em&gt; The name of the table to associate the DVC with.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Cache name:&lt;/strong&gt; A unique name for the cache. If you don’t provide one, InfluxDB automatically generates a cache name for you.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Columns (&lt;code class="language-markup"&gt;--columns&lt;/code&gt;):&lt;/strong&gt; Specify which columns to include in the cache. The cache stores only the distinct values from each specified column. Columns that benefit from a DVC are typically tags, but you can cache any &lt;em&gt;string&lt;/em&gt;-typed column, including fields.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Tip:&lt;/em&gt;&lt;/strong&gt; The DVC structures columns in the order you provide them when creating the cache. Column order determines how rows in the cache are sorted and can affect query performance. If column values are hierarchical—meaning one column is a subset of another column—list the columns highest in hierarchical order first.&lt;/p&gt;

&lt;p&gt;In this example, we’ll create a DVC named &lt;code class="language-markup"&gt;wind_locations&lt;/code&gt; associated with the &lt;code class="language-markup"&gt;wind_data&lt;/code&gt; table. We’ll cache distinct values from the &lt;code class="language-markup"&gt;country&lt;/code&gt;, &lt;code class="language-markup"&gt;county&lt;/code&gt;, and &lt;code class="language-markup"&gt;city&lt;/code&gt; columns:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 create distinct_cache \
  --token INFLUXDB_TOKEN \
  --database EXAMPLE_DB \
  --table wind_data \
  --columns country,county,city \
  wind_locations&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Replace the following:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;INFLUXDB_TOKEN&lt;/code&gt;: Your InfluxDB &lt;em&gt;admin&lt;/em&gt; token.&lt;/li&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;EXAMPLE_DB&lt;/code&gt;: The name of the target database.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol start="3"&gt;
  &lt;li&gt;Write data to the table associated with the cache. For this example, &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/reference/sample-data/#write-the-eu-wind-sample-data-to-influxdb"&gt;write the EU wind sample data&lt;/a&gt; &lt;em&gt;(the link provides other write methods and commands)&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 write \
  --token INFLUXDB_TOKEN \
  --database EXAMPLE_DB \
  "$(curl --request GET https://docs.influxdata.com/downloads/eu-wind-data.lp)"&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Replace the following:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;INFLUXDB_TOKEN&lt;/code&gt;: Your InfluxDB token with write access to the target database.&lt;/li&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;EXAMPLE_DB&lt;/code&gt;: The name of the target database.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id="optimize-cache-size"&gt;Optimize Cache Size&lt;/h4&gt;

&lt;p&gt;DVCs provide options to help optimize the size and memory footprint of the cache. Use the following options to ensure your cache doesn’t grow too large:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;--max-cardinality&lt;/code&gt;: Specify the maximum number of unique column value combinations in your cache. For example, using the hierarchical schema of the EU wind sample data, each unique combination of &lt;code class="language-markup"&gt;country&lt;/code&gt;, &lt;code class="language-markup"&gt;county&lt;/code&gt;, and &lt;code class="language-markup"&gt;city&lt;/code&gt; counts against the cardinality limit. The default maximum cardinality is &lt;code class="language-markup"&gt;100000&lt;/code&gt; (one-hundred thousand).&lt;/li&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;--max-age&lt;/code&gt;: Specify the maximum age or time to live (TTL) for values in the cache. The age of each value is reset each time that a distinct value is written to InfluxDB. The default maximum age is &lt;code class="language-markup"&gt;24 hours&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
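
&lt;p&gt;For example, to create the same cache as above with a tighter cardinality limit and a shorter TTL, pass both options when creating the cache. The limit and duration values below are illustrative; check &lt;code class="language-markup"&gt;influxdb3 create distinct_cache --help&lt;/code&gt; for the accepted duration formats in your version:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 create distinct_cache \
  --token INFLUXDB_TOKEN \
  --database EXAMPLE_DB \
  --table wind_data \
  --columns country,county,city \
  --max-cardinality 10000 \
  --max-age 12h \
  wind_locations&lt;/code&gt;&lt;/pre&gt;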

&lt;h2 id="query-data-in-the-distinct-value-cache"&gt;Query data in the Distinct Value Cache&lt;/h2&gt;

&lt;p&gt;Use the &lt;code class="language-markup"&gt;distinct_cache()&lt;/code&gt; function in the &lt;code class="language-markup"&gt;FROM&lt;/code&gt; clause of a SQL &lt;code class="language-markup"&gt;SELECT&lt;/code&gt; statement to query data from the DVC. &lt;code class="language-markup"&gt;distinct_cache()&lt;/code&gt; supports the following arguments:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;table_name:&lt;/strong&gt; &lt;em&gt;(Required)&lt;/em&gt; The name of the table the DVC is associated with, formatted as a string literal.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;cache_name:&lt;/strong&gt; The name of the DVC to query from, formatted as a string literal &lt;em&gt;(only required if there is more than one DVC associated with the table)&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;distinct_cache(table_name, cache_name)&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;To query the DVC for the written sample data, execute the following query:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-sql"&gt;SELECT * FROM distinct_cache('wind_data', 'wind_locations')&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This is just a normal SQL query, so you can include other SQL clauses to modify query results. For example, if you only want cities in Spain, you can use the following query:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-sql"&gt;SELECT
  city
FROM
  distinct_cache('wind_data', 'wind_locations')
WHERE
  country = 'Spain'&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Note:&lt;/em&gt;&lt;/strong&gt; InfluxQL does not support the &lt;code class="language-markup"&gt;distinct_cache()&lt;/code&gt; function. You can only query data in a DVC using SQL.&lt;/p&gt;

&lt;h2 id="the-dvc-in-practice"&gt;The DVC in practice&lt;/h2&gt;

&lt;p&gt;The target use case for the DVC is building performant user experiences that display distinct column values, such as a UI that lists all the unique values of a tag. The DVC lets you query and return that list quickly.&lt;/p&gt;

&lt;h4 id="grafana-dashboard-variables"&gt;Grafana Dashboard Variables&lt;/h4&gt;

&lt;p&gt;One common use case that greatly benefits from using the DVC is &lt;a href="https://grafana.com/docs/grafana/latest/dashboards/variables/add-template-variables/#add-a-query-variable"&gt;creating query-based variables&lt;/a&gt; for a Grafana dashboard. Let’s say you want to create a dashboard for the EU wind sample data that lets users select what country, county, and cities to include in the visualizations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Note:&lt;/em&gt;&lt;/strong&gt; Grafana 12.2.0 fixed a bug that prevented DVC queries from successfully returning results. If using an earlier version of Grafana, cast each column in your &lt;code class="language-markup"&gt;SELECT&lt;/code&gt; clause to a &lt;code class="language-markup"&gt;STRING&lt;/code&gt; type. For example: &lt;code class="language-markup"&gt;SELECT country::STRING ...&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Return a list of distinct countries&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create a &lt;code class="language-markup"&gt;country&lt;/code&gt; variable and use the following query to return the variable values:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-sql"&gt;SELECT
  country
FROM
  distinct_cache('wind_data', 'wind_locations')&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Return a list of distinct counties in the selected countries&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create a &lt;code class="language-markup"&gt;county&lt;/code&gt; variable and use the following query to return counties from the selected countries:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-sql"&gt;SELECT
  county
FROM
  distinct_cache('wind_data', 'wind_locations')
WHERE
  country IN (${country:singlequote})&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Return a list of cities in the selected countries and counties&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create a &lt;code class="language-markup"&gt;city&lt;/code&gt; variable and return the list of cities from the selected countries and counties:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-sql"&gt;SELECT
  city
FROM
  distinct_cache('wind_data', 'wind_locations')
WHERE
  country IN (${country:singlequote})
  AND county IN (${county:singlequote})&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Use the variable selections in your Grafana dashboard queries&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In each of your dashboard queries, use the variables to filter on &lt;code class="language-markup"&gt;country&lt;/code&gt;, &lt;code class="language-markup"&gt;county&lt;/code&gt;, and &lt;code class="language-markup"&gt;city&lt;/code&gt;. The example below uses multi-select variables that return a list of values. It uses Grafana’s variable interpolation to structure the list of selected variable values as a SQL array. The variables also include an &lt;code class="language-markup"&gt;All&lt;/code&gt; option, and the query changes the behavior of the &lt;code class="language-markup"&gt;WHERE&lt;/code&gt; conditions if &lt;code class="language-markup"&gt;All&lt;/code&gt; is selected.&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-sql"&gt;SELECT
  time,
  city,
  wind_direction
FROM
  wind_data
WHERE
  time "&amp;gt;"= $__timeFrom
  AND time "&amp;gt;"= $__timeTo
  AND country IN (${country:singlequote})
  AND CASE
        WHEN 'All' IN (${county:singlequote}) THEN TRUE
        ELSE county IN (${county:singlequote})
      END
  AND CASE
        WHEN 'All' IN (${city:singlequote}) THEN TRUE
        ELSE city IN (${city:singlequote})
      END
ORDER BY
  time,
  city&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;To see an example Grafana dashboard that visualizes the EU wind sample data and uses the InfluxDB DVC to populate dashboard variables, download and install the &lt;a href="https://gist.github.com/sanderson/b92bca03a23a58ec25f1c544457d82fd"&gt;EU Wind Sample Data Grafana Dashboard&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id="what-to-know-about-the-distinct-value-cache"&gt;What to know about the Distinct Value Cache&lt;/h2&gt;

&lt;p&gt;The InfluxDB 3 Distinct Value Cache is an incredibly powerful tool, but there are important things to know when using it.&lt;/p&gt;

&lt;h4 id="high-cardinality-distinct-value-combinations"&gt;High-Cardinality Distinct Value Combinations&lt;/h4&gt;

&lt;p&gt;DVCs are stored in memory; the larger the cache, the more memory it requires. It’s critical to balance the size of your DVCs with the amount of memory it takes to store them. “Cardinality” refers to the number of unique column value combinations in your cached data; the higher the cardinality, the larger your cache. As a best practice, only cache distinct values from columns that are important to your query workload. Caching distinct tag or field values unnecessarily results in higher cardinality and memory usage without any benefit.&lt;/p&gt;

&lt;h4 id="distinct-value-caches-are-flushed-when-the-server-stops"&gt;Distinct Value Caches are flushed when the server stops&lt;/h4&gt;

&lt;p&gt;Because the DVC is an in-memory cache, any time the server stops, the cache is flushed. After a server restart, InfluxDB 3 Enterprise queries previously written data and repopulates the cache. InfluxDB 3 Core, however, does not; it only adds values to the DVC as new data is written.&lt;/p&gt;

&lt;h2 id="share-your-feedback"&gt;Share your feedback&lt;/h2&gt;

&lt;p&gt;The InfluxDB 3 Distinct Value Cache is a powerful feature that lets you get the best performance on queries that need to return distinct column values. It’s another tool in your time series toolbelt that helps make sure your workload is as performant as possible.&lt;/p&gt;

&lt;p&gt;Try the DVC and let us know what you think! Check out our &lt;a href="https://docs.influxdata.com/influxdb3/core/get-started/?utm_source=website&amp;amp;utm_medium=query_distinct_tag_values_influxdb&amp;amp;utm_content=blog"&gt;Getting Started Guide for Core&lt;/a&gt; and &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/get-started/?utm_source=website&amp;amp;utm_medium=query_distinct_tag_values_influxdb&amp;amp;utm_content=blog"&gt;Enterprise&lt;/a&gt;, and share your feedback with our development team on &lt;a href="https://influxdata.com/slack/"&gt;Slack&lt;/a&gt; in the #influxdb3_core channel, or on our &lt;a href="https://community.influxdata.com/?utm_source=website&amp;amp;utm_medium=query_distinct_tag_values_influxdb&amp;amp;utm_content=blog"&gt;Community Site&lt;/a&gt;.&lt;/p&gt;
</description>
      <pubDate>Mon, 27 Oct 2025 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/query-distinct-tag-values-influxdb</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/query-distinct-tag-values-influxdb</guid>
      <category>Developer</category>
      <category>Product</category>
      <author>Scott Anderson (InfluxData)</author>
    </item>
    <item>
      <title>Telegraf’s New Labels Feature Unlocks Smarter Plugin Control</title>
      <description>&lt;p&gt;Telegraf 1.36 includes a powerful new feature that lets you add labels to plugins and enable or disable plugins dynamically at startup using labels. Telegraf &lt;strong&gt;labels and selectors&lt;/strong&gt; offer new flexibility in how you configure and run your Telegraf agents—especially when you have many Telegraf instances sharing a set of configurations or want fine-grained control over which plugins run under specific conditions or in certain environments.&lt;/p&gt;

&lt;p&gt;Let’s dive in.&lt;/p&gt;

&lt;h2 id="what-are-telegraf-labels-and-selectors"&gt;What are Telegraf labels and selectors?&lt;/h2&gt;

&lt;p&gt;Labels and selectors let you attach metadata (labels) to plugin configurations and then choose which plugins are enabled using &lt;em&gt;selectors&lt;/em&gt; (filter expressions).&lt;/p&gt;

&lt;p&gt;This gives you a lot of expressive control over what Telegraf plugins to run. Want to run certain inputs only in staging environments? Label them &lt;code class="language-markup"&gt;env = "staging"&lt;/code&gt; and use a selector to enable them in staging and disable non-staging plugins. Want to temporarily disable a subset? Use selectors instead of entirely removing config blocks.&lt;/p&gt;

&lt;h2 id="add-labels-in-your-telegraf-configuration"&gt;Add labels in your Telegraf configuration&lt;/h2&gt;

&lt;p&gt;To add labels to a plugin, include a &lt;code class="language-markup"&gt;labels&lt;/code&gt; subtable in your plugin configuration:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-toml"&gt;[[inputs.cpu]]
  percpu = true
  totalcpu = true

  [inputs.cpu.labels]
    env = "prod"
    region = "us-west"

[[inputs.mem]]
  [inputs.mem.labels]
    env = "prod"
    role = "monitoring"

[[outputs.influxdb_v2]]
  urls = ["http://localhost:8181"]
  token = "${INFLUX_TOKEN}"
  organization = ""
  bucket = "DATABASE_NAME"

  [outputs.influxdb_v2.labels]
    env = "prod"
    region = "us-west"&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;You can add labels to &lt;strong&gt;all plugin types&lt;/strong&gt; (inputs, processors, aggregators, outputs).&lt;/p&gt;

&lt;h4 id="label-syntax--examples"&gt;Label Syntax &amp;amp; Examples&lt;/h4&gt;

&lt;p&gt;Each label is a key-value pair separated by an equals sign (&lt;code class="language-markup"&gt;=&lt;/code&gt;). Keys and values support alphanumeric characters &lt;code class="language-markup"&gt;[A-Za-z0-9]&lt;/code&gt;, dots (&lt;code class="language-markup"&gt;.&lt;/code&gt;), dashes (&lt;code class="language-markup"&gt;-&lt;/code&gt;), and underscores (&lt;code class="language-markup"&gt;_&lt;/code&gt;). The value must be formatted as a string literal.&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;env = "prod"
region = "us-west-1"&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id="use-selectors-to-control-plugin-execution"&gt;Use selectors to control plugin execution&lt;/h2&gt;

&lt;p&gt;Once labels are defined, use selectors to filter which plugins to run based on their labels. Use the &lt;code class="language-markup"&gt;--select&lt;/code&gt; flag when starting the Telegraf agent to apply one or more selectors. At startup, Telegraf parses the selector and eliminates any plugin instances that don’t match.&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;telegraf --select 'env=prod'&lt;/code&gt;&lt;/pre&gt;

&lt;h4 id="selector-syntax--examples"&gt;Selector Syntax &amp;amp; Examples&lt;/h4&gt;

&lt;p&gt;Selectors are predicate expressions that evaluate plugin labels and resolve to true or false. Telegraf selectors support the following operators:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Comparison operators&lt;/strong&gt;:
    &lt;ul&gt;
      &lt;li&gt;&lt;code class="language-markup"&gt;=&lt;/code&gt; : Equal to&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Logical operators&lt;/strong&gt;:
    &lt;ul&gt;
      &lt;li&gt;&lt;code class="language-markup"&gt;;&lt;/code&gt;: (AND) Returns true if both operands are true, otherwise returns false&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Wildcard operator&lt;/strong&gt;:
    &lt;ul&gt;
      &lt;li&gt;&lt;code class="language-markup"&gt;*&lt;/code&gt; : Placeholder for unknown or variable characters&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To combine predicate expressions using “and” logic, separate each predicate with a semi-colon (&lt;code class="language-markup"&gt;;&lt;/code&gt;) in a single selector. To combine predicate expressions using “or” logic, pass each predicate as a separate selector.&lt;/p&gt;

&lt;p&gt;Some examples:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Run plugins with an &lt;code class="language-markup"&gt;env&lt;/code&gt; label that is &lt;code class="language-markup"&gt;prod&lt;/code&gt;:&lt;/li&gt;
&lt;/ul&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;telegraf --select 'env=prod'&lt;/code&gt;&lt;/pre&gt;

&lt;ul&gt;
  &lt;li&gt;Run plugins with an &lt;code class="language-markup"&gt;env&lt;/code&gt; label that is &lt;code class="language-markup"&gt;prod&lt;/code&gt; &lt;em&gt;AND&lt;/em&gt; a &lt;code class="language-markup"&gt;region&lt;/code&gt; label that is &lt;code class="language-markup"&gt;us-west-1&lt;/code&gt;:&lt;/li&gt;
&lt;/ul&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;telegraf --select 'env=prod;region=us-west-1'&lt;/code&gt;&lt;/pre&gt;

&lt;ul&gt;
  &lt;li&gt;Run plugins with an &lt;code class="language-markup"&gt;env&lt;/code&gt; label that is &lt;code class="language-markup"&gt;prod&lt;/code&gt; &lt;em&gt;OR&lt;/em&gt; a &lt;code class="language-markup"&gt;always_run&lt;/code&gt; label that is &lt;code class="language-markup"&gt;true&lt;/code&gt;:&lt;/li&gt;
&lt;/ul&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;telegraf \
  --select 'env=prod' \
  --select 'always_run=true'&lt;/code&gt;&lt;/pre&gt;

&lt;ul&gt;
  &lt;li&gt;Run plugins with a &lt;code class="language-markup"&gt;region&lt;/code&gt; label that begins with &lt;code class="language-markup"&gt;us-&lt;/code&gt;:&lt;/li&gt;
&lt;/ul&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;telegraf --select 'region=us-*'&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id="how-this-helps-in-deployment-scenarios"&gt;How this helps in deployment scenarios&lt;/h2&gt;

&lt;p&gt;Here are some interesting use cases that benefit from labels and selectors:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Environment-based filtering&lt;/strong&gt;: Maintain a single canonical &lt;code class="language-markup"&gt;telegraf.conf&lt;/code&gt;, label plugins for &lt;code class="language-markup"&gt;env = "dev" / "staging" / "prod"&lt;/code&gt;, and choose which run per deployment.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Multi-tenant setups / multi-instance agents&lt;/strong&gt;: If you run multiple Telegraf agents with overlapping configs, you can better isolate which plugin instances run per agent.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Gradual rollout / feature gating&lt;/strong&gt;: Enable a new input or output only for certain subsets of agents via labels and selectors.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Switching degree of metric detail:&lt;/strong&gt; Define the same plugin twice, one instance reporting more detailed metrics and the other less, each with corresponding labels. Use a selector to choose which instance runs and control the level of metric detail reported.&lt;/li&gt;
&lt;/ul&gt;
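
&lt;p&gt;As a sketch of the metric-detail scenario above (the label name and values are illustrative), you could define the same input twice with different settings and labels, then select one at startup:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-toml"&gt;# Detailed instance: per-CPU metrics
[[inputs.cpu]]
  percpu = true
  totalcpu = true

  [inputs.cpu.labels]
    detail = "high"

# Summary instance: totals only
[[inputs.cpu]]
  percpu = false
  totalcpu = true

  [inputs.cpu.labels]
    detail = "low"&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Then start the agent with &lt;code class="language-markup"&gt;telegraf --select 'detail=high'&lt;/code&gt; or &lt;code class="language-markup"&gt;telegraf --select 'detail=low'&lt;/code&gt; to control which instance runs.&lt;/p&gt;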

&lt;p&gt;Because the filtering occurs before plugin initialization, you reduce resource usage (no unnecessary plugin startup) and simplify operations.&lt;/p&gt;

&lt;h2 id="try-it-yourself"&gt;Try it yourself&lt;/h2&gt;

&lt;ul&gt;
  &lt;li&gt;Upgrade to Telegraf 1.36 or later.&lt;/li&gt;
  &lt;li&gt;Add labels to plugin configurations.&lt;/li&gt;
  &lt;li&gt;Run Telegraf with &lt;code class="language-markup"&gt;--select&lt;/code&gt; to filter instances.&lt;/li&gt;
  &lt;li&gt;Provide feedback and report any edge cases that may surface.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We’d love to hear from you—try it out, tell us your use cases, and let us know how well this feature helps your deployments.&lt;/p&gt;
</description>
      <pubDate>Wed, 22 Oct 2025 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/telegraf-enhanced-plugin-control</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/telegraf-enhanced-plugin-control</guid>
      <category>Developer</category>
      <category>Product</category>
      <author>Scott Anderson (InfluxData)</author>
    </item>
    <item>
      <title>Query the Latest Values in Under 10ms with the InfluxDB 3 Last Value Cache</title>
      <description>&lt;p&gt;As part of the &lt;a href="https://www.influxdata.com/products/influxdb/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=influxdb3_last_value_cache&amp;amp;utm_content=blog"&gt;InfluxDB 3 Core&lt;/a&gt; and &lt;a href="https://www.influxdata.com/products/influxdb-3-enterprise/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=influxdb3_last_value_cache&amp;amp;utm_content=blog"&gt;InfluxDB 3 Enterprise&lt;/a&gt; &lt;a href="https://www.influxdata.com/blog/influxdb3-open-source-public-alpha/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=influxdb3_last_value_cache&amp;amp;utm_content=blog"&gt;public alpha&lt;/a&gt;, the Last Value Cache (LVC) is available for testing. The LVC lets you cache the most recent values for specific fields in a table, improving the performance of queries that return the most recent value of a field for specific time series or the last N values of a field, typical of many monitoring workloads. With the LVC, these types of queries return in under 10ms.&lt;/p&gt;

&lt;p&gt;The LVC is an in-memory cache that stores the last N number of values for specific fields of time series in a table. When you create an LVC, you can specify what fields to cache, what tags to include in the cache (which determines how many unique time series you store the last values of), and the number of values to cache for each unique tag set.&lt;/p&gt;

&lt;p&gt;For example, let’s use a dataset with the following schema (similar to the &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/reference/sample-data/#home-sensor-data"&gt;home sensor sample dataset&lt;/a&gt;):&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;home (table)
    &lt;ul&gt;
      &lt;li&gt;tags:
        &lt;ul&gt;
          &lt;li&gt;room
            &lt;ul&gt;
              &lt;li&gt;kitchen&lt;/li&gt;
              &lt;li&gt;living room&lt;/li&gt;
            &lt;/ul&gt;
          &lt;/li&gt;
          &lt;li&gt;wall
            &lt;ul&gt;
              &lt;li&gt;north&lt;/li&gt;
              &lt;li&gt;east&lt;/li&gt;
              &lt;li&gt;south&lt;/li&gt;
            &lt;/ul&gt;
          &lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
      &lt;li&gt;fields:
        &lt;ul&gt;
          &lt;li&gt;co (integer)&lt;/li&gt;
          &lt;li&gt;temp (float)&lt;/li&gt;
          &lt;li&gt;hum (float)&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you were to cache the last value for each field per room and wall, the LVC would look similar to this:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;+-------------+-------+----+------+------+---------------------+
| room        | wall  | co | hum  | temp | time                |
+-------------+-------+----+------+------+---------------------+
| Kitchen     | east  | 26 | 36.5 | 22.7 | 2025-02-10T20:00:00 |
| Living Room | north | 17 | 36.4 | 22.2 | 2025-02-10T20:00:00 |
| Living Room | south | 16 | 36.3 | 22.1 | 2025-02-10T20:00:00 |
+-------------+-------+----+------+------+---------------------+&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;If you were to cache the last &lt;em&gt;four&lt;/em&gt; values of each field per room and wall, the LVC would look similar to:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;+-------------+-------+----+------+------+---------------------+
| room        | wall  | co | hum  | temp | time                |
+-------------+-------+----+------+------+---------------------+
| Kitchen     | east  | 26 | 36.5 | 22.7 | 2025-02-10T20:00:00 |
| Kitchen     | east  | 9  | 36.0 | 22.7 | 2025-02-10T17:00:00 |
| Kitchen     | east  | 3  | 36.2 | 22.7 | 2025-02-10T15:00:00 |
| Kitchen     | east  | 0  | 36.1 | 22.7 | 2025-02-10T10:00:00 |
| Living Room | north | 17 | 36.4 | 22.2 | 2025-02-10T20:00:00 |
| Living Room | north | 5  | 35.9 | 22.6 | 2025-02-10T17:00:00 |
| Living Room | north | 1  | 36.1 | 22.3 | 2025-02-10T15:00:00 |
| Living Room | north | 0  | 36.0 | 21.8 | 2025-02-10T10:00:00 |
| Living Room | south | 16 | 36.3 | 22.1 | 2025-02-10T20:00:00 |
| Living Room | south | 4  | 35.8 | 22.5 | 2025-02-10T17:00:00 |
| Living Room | south | 0  | 36.0 | 22.3 | 2025-02-10T15:00:00 |
| Living Room | south | 0  | 35.9 | 21.8 | 2025-02-10T10:00:00 |
+-------------+-------+----+------+------+---------------------+&lt;/code&gt;&lt;/pre&gt;
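
&lt;p&gt;A cache like the four-value example above could be created with the &lt;code class="language-markup"&gt;influxdb3 create last_cache&lt;/code&gt; command. The following is a sketch; verify the flag names against &lt;code class="language-markup"&gt;influxdb3 create last_cache --help&lt;/code&gt; for your version:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 create last_cache \
  --token INFLUXDB_TOKEN \
  --database EXAMPLE_DB \
  --table home \
  --key-columns room,wall \
  --value-columns co,temp,hum \
  --count 4 \
  homeSensorCache&lt;/code&gt;&lt;/pre&gt;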

&lt;h2 id="why-use-a-last-value-cache"&gt;Why use a Last Value Cache?&lt;/h2&gt;

&lt;p&gt;In short, the LVC provides last-value query responses in under 10ms, simplifying a common query type. Let’s say that you are building a monitoring dashboard and only need to know the last reported values for specific fields. Without using the LVC, you’d have to run a query similar to:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-sql"&gt;SELECT
  room,
  wall,
  selector_last(co, time)['value'] as co,
  selector_last(temp, time)['value'] as temp,
  selector_last(hum, time)['value'] as hum,
  selector_last(hum, time)['time'] AS time
FROM
  home
GROUP BY
  room,
  wall
WHERE
  time &amp;gt;= now() - INTERVAL '1 day'
  AND time &amp;lt;= now()&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;While this query will give you the last reported values, there are some things to note:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;The time value in each row is specific to only the &lt;code class="language-markup"&gt;hum&lt;/code&gt; field. If field values are reported independently with sporadic timestamps, the time value may be inaccurate for the other fields.&lt;/li&gt;
  &lt;li&gt;The query includes a time range, which prevents the query engine from having to read all the data in the table to generate results. Without the time bounds, this query could be very “heavy,” depending on the amount of data in the table. It also means that if the last reported value falls outside the queried time range, it is not included in the results.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To query this same data from the LVC, the query would look similar to:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-sql"&gt;SELECT * FROM last_cache('home', 'homeSensorCache')&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Time bounds aren’t necessary. All the returned timestamps are specific to the last reported value of each field. Results return in under 10ms, and the query is just simpler.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The LVC also unlocks some other functionalities, but we’ll discuss that in a future post.&lt;/em&gt;&lt;/p&gt;

&lt;h2 id="set-up-a-last-value-cache"&gt;Set up a Last Value Cache&lt;/h2&gt;

&lt;p&gt;An LVC is associated with a table, which can have multiple LVCs. You can add an LVC to an existing table, but for this example, we’ll create a new table to store the &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/reference/sample-data/#home-sensor-data"&gt;home sensor sample dataset&lt;/a&gt;.&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Use the &lt;code class="language-markup"&gt;influxdb3 create table&lt;/code&gt; command to create a new &lt;code class="language-markup"&gt;home&lt;/code&gt; table. Because we know the schema of the sample data, we can pre-create the table with the necessary tag and field columns :&lt;/li&gt;
&lt;/ol&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 create table \
  --tags room \
  --fields co:int64 temp:float64 hum:float64 \
  --database example_db \
  home&lt;/code&gt;&lt;/pre&gt;

&lt;ol&gt;
  &lt;li&gt;Use the &lt;code class="language-markup"&gt;influxdb3 create last_cache&lt;/code&gt; command to create a new LVC associated with the &lt;code&gt;home&lt;/code&gt; table. You can provide the following:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Table (&lt;code class="language-markup"&gt;--table&lt;/code&gt;):&lt;/strong&gt; &lt;em&gt;(Required)&lt;/em&gt; The name of the table to associate the LVC with.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Cache name:&lt;/strong&gt; A unique name for the cache. If you don’t provide one, InfluxDB automatically generates a cache name for you.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Key columns (&lt;code class="language-markup"&gt;--key-columns&lt;/code&gt;):&lt;/strong&gt; Specify which columns to include in the primary key of the cache. Rows in the LVC are uniquely identified by their timestamp and key columns, so include all the columns you need to identify each row. These are typically tags, but you can use any columns with the following types:
    &lt;ul&gt;
      &lt;li&gt;String&lt;/li&gt;
      &lt;li&gt;Integer&lt;/li&gt;
      &lt;li&gt;Unsigned integer&lt;/li&gt;
      &lt;li&gt;Boolean&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Value columns (&lt;code class="language-markup"&gt;--value-columns&lt;/code&gt;):&lt;/strong&gt; Specify which columns to cache as value columns. These are typically fields but can also be tags. By default, &lt;code class="language-markup"&gt;time&lt;/code&gt; and columns other than those specified as &lt;code class="language-markup"&gt;--key-columns&lt;/code&gt; are cached as value columns.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Count (&lt;code class="language-markup"&gt;--count&lt;/code&gt;):&lt;/strong&gt; The number of values to cache per unique key column combination. The default count is 1.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this example, we’ll create an LVC named &lt;code class="language-markup"&gt;homeLastCache&lt;/code&gt; associated with the &lt;code class="language-markup"&gt;home&lt;/code&gt; table. We’ll use the &lt;code class="language-markup"&gt;room&lt;/code&gt; tag as a key column, all the fields as value columns, and only cache the latest value for each field per room:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 create last_cache \
  --database example_db \
  --table home \
  --key-columns room \
  homeLastCache&lt;/code&gt;&lt;/pre&gt;

&lt;ol&gt;
  &lt;li&gt;Write data to the table associated with the cache. For this example, &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/reference/sample-data/#write-home-sensor-data-to-influxdb"&gt;write the home sensor sample data&lt;/a&gt; &lt;em&gt;(Commands and data are provided in the link, and you can adjust the timestamps of the data)&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Values are cached on write. When you create a cache, it will not cache previously written points, only newly written points.&lt;/p&gt;

&lt;h2 id="query-data-in-the-last-value-cache"&gt;Query data in the Last Value Cache&lt;/h2&gt;

&lt;p&gt;Use the &lt;code class="language-markup"&gt;last_cache()&lt;/code&gt; function in the &lt;code class="language-markup"&gt;FROM&lt;/code&gt; clause of an SQL &lt;code class="language-markup"&gt;SELECT&lt;/code&gt; statement to query data from the LVC. &lt;code class="language-markup"&gt;last_cache()&lt;/code&gt; supports the following arguments:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;table_name:&lt;/strong&gt; (required) The name of the table the LVC is associated with, formatted as a string literal.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;cache_name:&lt;/strong&gt; The name of the LVC to query from, formatted as a string literal &lt;em&gt;(only required if there is more than one LVC associated with the table)&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;pre class=""&gt;&lt;code class="language-sql"&gt;last_cache(table_name, cache_name)&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;To query the LVC for the written sample data, execute the following query:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-sql"&gt;SELECT * FROM last_cache('home', 'homeCache')&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This is just a simple SQL query, so you can include other SQL clauses to modify query results. For example, if you only want the last temperature value for the Kitchen, you can use the following query:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-sql"&gt;SELECT
  room,
  temp`
FROM
  last_cache('home', 'homeCache')
WHERE
  room = 'Kitchen'&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; InfluxQL does not support the &lt;code class="language-markup"&gt;last_cache()&lt;/code&gt; function, so you can only access the data in the LVC using SQL queries.&lt;/p&gt;
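&lt;p&gt;From application code, you can run the same SQL through any SQL-capable InfluxDB 3 client. The sketch below uses the influxdb3-python client (&lt;code class="language-markup"&gt;influxdb_client_3&lt;/code&gt;); the host, token, and database values are placeholders to replace with your own, and the exact client API may differ by version:&lt;/p&gt;

```python
# Hypothetical sketch: querying the LVC from Python with the
# influxdb3-python client. Connection values are placeholders.
QUERY = "SELECT * FROM last_cache('home', 'homeLastCache')"

def fetch_last_values(host: str, token: str, database: str):
    """Run the LVC query over SQL and return the results as a PyArrow table."""
    from influxdb_client_3 import InfluxDBClient3  # third-party client

    client = InfluxDBClient3(host=host, token=token, database=database)
    try:
        # The LVC is SQL-only, so the query language must be "sql".
        return client.query(QUERY, language="sql")
    finally:
        client.close()

# Example usage (placeholder credentials):
# table = fetch_last_values("http://localhost:8181", "MY_TOKEN", "example_db")
# print(table.to_pandas())
```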

&lt;h2 id="what-to-know-about-the-last-value-cache"&gt;What to know about the Last Value Cache&lt;/h2&gt;

&lt;p&gt;The InfluxDB 3 Last Value Cache is an incredibly powerful tool, but there are important things to know when using it. LVCs are stored in memory; the larger the cache, the more memory it requires. It’s essential to balance the size of your LVCs against the memory required to store them. Things to consider:&lt;/p&gt;

&lt;h4 id="high-cardinality-key-columns"&gt;High Cardinality Key Columns&lt;/h4&gt;

&lt;p&gt;“Cardinality” refers to the number of unique key column combinations in your cached data. While the InfluxDB 3 storage engine is not limited by cardinality, it does affect the LVC. Higher cardinality increases memory requirements for storing the LVC and can affect LVC query performance. We recommend the following:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Only use tags important to your query workload as key columns in the LVC. Caching tags or fields as key columns unnecessarily results in higher cardinality without any benefit.&lt;/li&gt;
  &lt;li&gt;Avoid including high-cardinality key columns in your LVC.&lt;/li&gt;
  &lt;li&gt;Don’t include &lt;em&gt;multiple&lt;/em&gt; high-cardinality key columns in your LVC.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For a general idea of total key column cardinality in an LVC, you can use the following equation:&lt;/p&gt;

&lt;p&gt;&lt;code class="language-markup"&gt;num_uniq_col_val_N [× num_uniq_col_val_N …] = key_column_cardinality&lt;/code&gt;&lt;/p&gt;

&lt;h4 id="value-count"&gt;Value Count&lt;/h4&gt;

&lt;p&gt;By increasing the number of values to store in the LVC, you increase the number of rows stored in the cache and the amount of memory required to store them. Be judicious with the number of values to store. This count is per unique key column combination. If you include two tags as key columns, one with three unique values and the other with 10, you could have up to 30 unique key column combinations. If you want to keep the last 10 values, you could have up to 300 rows in the cache. In reality, this isn’t a huge cache, but it illustrates how key column cardinality and the number of values you want to cache can explode your cache size.&lt;/p&gt;

&lt;p&gt;To get an idea of the number of rows required to cache the specified number of values, use the following equation:&lt;/p&gt;

&lt;p&gt;&lt;code class="language-markup"&gt;key_column_cardinality × count = number_of_rows&lt;/code&gt;&lt;/p&gt;

&lt;h4 id="last-value-caches-are-flushed-when-the-server-stops"&gt;Last Value Caches Are Flushed When the Server Stops&lt;/h4&gt;

&lt;p&gt;Because the LVC is an in-memory cache, the cache is flushed any time the server stops. After a server restart, InfluxDB only writes new values to the LVC when you write data, so there may be a period of time when some values are unavailable in the LVC.&lt;/p&gt;

&lt;h4 id="defining-value-columns"&gt;Defining Value Columns&lt;/h4&gt;

&lt;p&gt;When creating an LVC, if you include the &lt;code class="language-markup"&gt;--value-columns&lt;/code&gt; option to specify which fields to cache as value columns, any new fields added in the future will &lt;em&gt;not&lt;/em&gt; be added to the cache. However, if you omit the &lt;code class="language-markup"&gt;--value-columns&lt;/code&gt; option, all columns other than those specified as &lt;code class="language-markup"&gt;--key-columns&lt;/code&gt; are cached as value columns, including columns that are added later.&lt;/p&gt;

&lt;h2 id="share-your-feedback"&gt;Share your feedback&lt;/h2&gt;

&lt;p&gt;The InfluxDB 3 Last Value Cache is a powerful tool that lets you get the best performance on queries that need to return the latest reported values. It’s another tool in your time series toolbelt that helps make sure your workload is as performant as possible.&lt;/p&gt;

&lt;p&gt;Try the LVC, and let us know what you think! Check out our Getting Started Guide for Core and Enterprise, and share your feedback with our development team on &lt;a href="https://discord.com/invite/vZe2w2Ds8B"&gt;Discord&lt;/a&gt; in the #influxdb3_core channel, on &lt;a href="https://influxdata.com/slack"&gt;Slack&lt;/a&gt; in the #influxdb3_core channel, or on our &lt;a href="https://community.influxdata.com/"&gt;Community Site&lt;/a&gt;.&lt;/p&gt;
</description>
      <pubDate>Mon, 10 Feb 2025 07:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/-influxdb3-last-value-cache</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/-influxdb3-last-value-cache</guid>
      <category>Developer</category>
      <author>Scott Anderson (InfluxData)</author>
    </item>
    <item>
      <title>Product Update: Monitor Your InfluxDB Cloud Dedicated Cluster</title>
      <description>&lt;p&gt;&lt;a href="https://www.influxdata.com/products/influxdb-cloud/dedicated/"&gt;InfluxDB Cloud Dedicated&lt;/a&gt; provides fully-managed InfluxDB v3 clusters that power enterprise-grade workloads on a scalable infrastructure dedicated to your workload and your workload alone. As a fully-managed service, InfluxData takes the infrastructure hassle off your plate by monitoring and scaling your cluster when necessary.&lt;/p&gt;

&lt;p&gt;Until recently, cluster health-related metrics were only available to internal InfluxData support staff. To provide you access to those same metrics, InfluxDB Cloud Dedicated now gives you access to an operational dashboard that visualizes data related to the performance and health of your dedicated cluster.&lt;/p&gt;

&lt;h2 id="whats-in-the-operational-dashboard"&gt;What’s in the operational dashboard?&lt;/h2&gt;

&lt;p&gt;The operational dashboard displays the same cluster health-related metrics that InfluxData support staff use to make adjustments to your cluster. This information can help you identify unintended workload changes, potential bottlenecks, and optimization opportunities, and it gives you insight into how each component in your cluster is performing.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;For information about the specific metrics available in the dashboard, see &lt;a href="https://docs.influxdata.com/influxdb/cloud-dedicated/admin/monitor-your-cluster/#dashboard-sections-and-cells"&gt;the documentation&lt;/a&gt;.&lt;/em&gt;
&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/529ffbf868584020bbcec1d60a3f3d88/c51874d3a7246bc2085809dc1b2d3d6e/unnamed.png" alt="" /&gt;
&lt;br /&gt;&lt;/p&gt;

&lt;h2 id="view-your-operational-dashboard"&gt;View your operational dashboard&lt;/h2&gt;

&lt;p&gt;The dashboard used to monitor your InfluxDB Cloud Dedicated cluster is a Grafana dashboard managed by InfluxData. The operational dashboard is not enabled by default. To view your cluster’s operational dashboard:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;&lt;a href="https://support.influxdata.com/s/"&gt;Contact InfluxData support&lt;/a&gt; to enable the operational dashboard on your dedicated cluster.&lt;/li&gt;
  &lt;li&gt;Copy your InfluxDB Cloud Dedicated cluster URL, paste it into a browser, and add &lt;code&gt;/observability&lt;/code&gt; as the URL path. For example: &lt;code&gt;https://cluster-id.a.influxdata.com/observability&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;Use the credentials provided by InfluxData support to log into the Grafana dashboard.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Once logged in, you can view the operational dashboard specific to your InfluxDB Cloud Dedicated cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://support.influxdata.com/s/"&gt;Request access to your InfluxDB Cloud Dedicated operational dashboard&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The InfluxDB Cloud Dedicated operational dashboard and the metrics displayed are subject to change. If there is information that you feel would benefit your ability to monitor your dedicated cluster, let us know.&lt;/em&gt;&lt;/p&gt;
</description>
      <pubDate>Mon, 24 Jun 2024 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/monitor-influxdb-cloud-dedicated-cluster</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/monitor-influxdb-cloud-dedicated-cluster</guid>
      <category>Product</category>
      <author>Scott Anderson (InfluxData)</author>
    </item>
    <item>
      <title>Product Update: SSO for InfluxDB Cloud Dedicated</title>
      <description>&lt;p&gt;&lt;a href="https://www.influxdata.com/products/influxdb-cloud/dedicated/"&gt;InfluxDB Cloud Dedicated&lt;/a&gt; is a fully-managed InfluxDB offering that lets you run enterprise-grade workloads on cloud infrastructure dedicated to your workload and your workload alone. A common request from those running enterprise-grade workloads on InfluxDB is the ability to use single sign-on (“SSO”) to authorize access to InfluxDB. SSO is now available as a paid option  for InfluxDB Cloud Dedicated clusters.&lt;/p&gt;

&lt;h2 id="what-is-sso"&gt;What is SSO?&lt;/h2&gt;

&lt;p&gt;SSO is a delegated authentication system that allows team members to access multiple applications using a single set of credentials managed by a corporate Identity Provider (“IdP”). When a team member logs into an application using SSO, their corporate IdP validates their credentials and confirms the authentication back to the application. In this way, SSO simplifies access for team members while also reducing administrative overhead. SSO has the following additional benefits:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Reduces username/password fatigue&lt;/li&gt;
  &lt;li&gt;Decreases risk of weak passwords and password reuse&lt;/li&gt;
  &lt;li&gt;Reduces friction accessing multiple systems&lt;/li&gt;
  &lt;li&gt;Reduces login issues and support requests&lt;/li&gt;
  &lt;li&gt;Simplifies administration and security enforcement&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;SSO is now offered on InfluxDB Cloud Dedicated so that you can enjoy these benefits with InfluxDB v3; your administrators can grant/revoke access to your cluster the same way that they would administer access to any of your other systems.&lt;/p&gt;

&lt;h2 id="sso-with-influxdb-cloud-dedicated"&gt;SSO with InfluxDB Cloud Dedicated&lt;/h2&gt;

&lt;p&gt;When using SSO with InfluxDB Cloud Dedicated, you connect your identity provider to the InfluxData-managed &lt;a href="https://auth0.com/"&gt;Auth0&lt;/a&gt; service. When a user attempts to authorize using your InfluxDB Cloud Dedicated cluster, the following occurs:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;InfluxDB sends an authentication request to the InfluxData-managed Auth0 service.&lt;/li&gt;
  &lt;li&gt;Auth0 sends the provided credentials to your identity provider.&lt;/li&gt;
  &lt;li&gt;Your identity provider grants or denies authorization based on the provided credentials and returns the appropriate response to Auth0.&lt;/li&gt;
  &lt;li&gt;Auth0 returns the authorization response to InfluxDB Cloud Dedicated which grants or denies access to the user.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/d51aff405e3e41b9a30b611527bf3171/b62db88fa0b47febb89543e3dc3eb849/unnamed.png" alt="" /&gt;Your identity provider manages access to your cluster. Once you grant a user access through your identity provider, they have administrative access to your InfluxDB Cloud Dedicated cluster.&lt;/p&gt;

&lt;h2 id="set-up-sso-for-your-cluster"&gt;Set up SSO for your cluster&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; &lt;a href="/contact-sales/"&gt; Contact InfluxData sales&lt;/a&gt; to begin the process of enabling SSO on your dedicated cluster. They will gather the information necessary to start your SSO implementation.&lt;/li&gt;

&lt;li&gt; If you haven’t already, &lt;b&gt;set up your identity provider&lt;/b&gt;. For information about setting up your identity provider, refer to your identity provider’s documentation.

&lt;p class="pt-4"&gt;&lt;b&gt;Note:&lt;/b&gt; To use SSO with InfluxDB Cloud Dedicated, you must use an &lt;a href="https://auth0.com/docs/authenticate/identity-providers" target="_blank"&gt;identity provider supported by Auth0&lt;/a&gt;.&lt;/p&gt;
  &lt;/li&gt;

&lt;li&gt; &lt;b&gt;Create a new application or client&lt;/b&gt; in your identity provider to use with Auth0 and your InfluxDB Cloud Dedicated cluster. Refer to your identity provider’s documentation for more information.&lt;/li&gt;

&lt;li&gt; &lt;b&gt;Provide the necessary connection credentials to InfluxData support&lt;/b&gt;. What credentials are needed depends on your identity provider and your protocol. For example:

&lt;div class="table-container is-v-centered my-5"&gt;
  &lt;table class="table is-bordered"&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Protocol&lt;/th&gt;
&lt;th&gt;Required credentials&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;OIDC&lt;/td&gt;
&lt;td&gt;Client secret&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SAML&lt;/td&gt;
&lt;td&gt;Identity provider certificate&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;/div&gt;

InfluxData support will provide more information about the specific credentials required.&lt;/li&gt;

&lt;li&gt; Add the InfluxData Auth0 connection URL as a valid callback URL to your identity provider application. This is also sometimes referred to as a “post-back” URL.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;code&gt;https://auth.influxdata.com/login/callback&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;With the callback URL in place, you can test the integration by attempting to authorize with your InfluxDB Cloud Dedicated cluster. The quickest way to authorize is to use any of the &lt;a href="https://docs.influxdata.com/influxdb/cloud-dedicated/reference/cli/influxctl/"&gt;influxctl&lt;/a&gt; commands.&lt;/p&gt;

&lt;p&gt;Once working, you can manage all access to your InfluxDB Cloud Dedicated cluster through your identity provider.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;For more information about SSO with InfluxDB Cloud Dedicated, see the &lt;a href="https://docs.influxdata.com/influxdb/cloud-dedicated/admin/sso/"&gt;InfluxDB Cloud Dedicated SSO documentation&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Learn more about &lt;a href="https://www.influxdata.com/products/influxdb-cloud/dedicated/"&gt;InfluxDB Cloud Dedicated&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;style&gt;
ol li {
  padding-bottom: 10px;
}
&lt;/style&gt;

</description>
      <pubDate>Mon, 10 Jun 2024 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/sso-influxdb-cloud-dedicated</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/sso-influxdb-cloud-dedicated</guid>
      <category>Product</category>
      <author>Scott Anderson (InfluxData)</author>
    </item>
    <item>
      <title>InfluxDB and Elementary School Science Fairs</title>
      <description>&lt;p&gt;I’m a proud father of an incredibly smart, talented, kind, and just all-around great 9 year-old daughter. Since she first learned to talk, she has been a “questioner,” curious about the world around her and how everything connects (you know, the kid who asks questions throughout an entire movie… yeah, that’s her). As she’s grown older, her love of questions and need for answers has led her to gravitate toward math and science, both of which she excels at.&lt;/p&gt;

&lt;p&gt;Every year her elementary school does a science fair and, as usual, she was all in. After some brainstorming and research, she wanted to know, “does playing video games affect your blood pressure?” Her hypothesis was that, due to the excitement and/or stress of playing, your blood pressure increases while playing. To test her hypothesis, she did the following:&lt;/p&gt;
&lt;ol&gt;
 	&lt;li&gt;Gathered test subjects (luckily we had a family dinner that provided a pool of willing participants that varied in age).&lt;/li&gt;
 	&lt;li&gt;Measured and recorded each test subject's blood pressure before playing.&lt;/li&gt;
 	&lt;li&gt;Had each test subject play Mario Kart 8 for 7 minutes (approximately two races).&lt;/li&gt;
 	&lt;li&gt;Measured and recorded each test subject's blood pressure after playing.&lt;/li&gt;
 	&lt;li&gt;Recorded the age and sex of each test subject.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img class="aligncenter wp-image-267556 size-full img-padding" src="/images/legacy-uploads/does-playing-video-games-affect-your-blood-pressure.jpg" alt="does-playing-video-games-affect-your-blood-pressure" width="980" height="735" /&gt;&lt;/p&gt;

&lt;p&gt;I should interject and say that my wife and I support our daughter in whatever she sets out to do, but the work is hers to do. We’re just there to help. My daughter came up with her hypothesis, designed the experiment, made a list of supplies she needed, and then conducted the experiment. She’s pretty awesome.&lt;/p&gt;

&lt;p&gt;Once she collected the data, she knew what questions she wanted to answer, but wasn’t sure of the best way to find the answers. Enter data-nerd-dad (me):&lt;/p&gt;

&lt;p&gt;“I think I can help with this.”&lt;/p&gt;

&lt;p&gt;I get to work with InfluxDB every day, so I knew this was the type of data that InfluxDB would be really good at processing. So here’s what we did:&lt;/p&gt;
&lt;h3&gt;Convert results to line protocol&lt;/h3&gt;
&lt;p&gt;We first converted my daughter’s handwritten results into line protocol. We stored the data in a &lt;code class="language-markup"&gt;bloodpressure&lt;/code&gt; measurement, included &lt;code class="language-markup"&gt;name&lt;/code&gt;, &lt;code class="language-markup"&gt;age&lt;/code&gt;, and &lt;code class="language-markup"&gt;sex&lt;/code&gt; tags, and included fields for systolic (&lt;code class="language-markup"&gt;sys&lt;/code&gt;) and diastolic (&lt;code class="language-markup"&gt;dia&lt;/code&gt;) pressures.&lt;/p&gt;

&lt;p&gt;Each test had two lines of line protocol; one for the initial blood pressure reading and one for the final blood pressure reading. The timestamps were arbitrary, but we wanted all the tests to align, so we used the same initial and final timestamps for all tests. Here’s what a result from a single test looked like in line protocol:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-markup"&gt;bloodpressure,name=Bob,age=32,sex=m sys=132,dia=80 1651003231
bloodpressure,name=Bob,age=32,sex=m sys=120,dia=70 1651003221&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Write queries to answer questions&lt;/h3&gt;
&lt;p&gt;As with any data analysis, you first need questions to answer to know how to query the data appropriately. Here are the questions my daughter wanted to answer:&lt;/p&gt;
&lt;ul&gt;
 	&lt;li&gt;What was the average change in blood pressure overall?&lt;/li&gt;
 	&lt;li&gt;Does sex matter?&lt;/li&gt;
 	&lt;li&gt;Does age matter?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;So I began to build some queries and, while I was at it, a dashboard to visualize her results.&lt;/p&gt;

&lt;p&gt;I built cells that displayed line graphs to show the change between the average starting and ending blood pressures as well as single stat cells to display the average numeric change of systolic and diastolic pressures. I did the same for sex segments and age segments. It took me about 20 minutes to build out the first version of the dashboard.&lt;/p&gt;

&lt;p&gt;At this point, my daughter looked at the dashboard and said, “the change in numbers are hard to understand. Can we do a percentage change?”&lt;/p&gt;

&lt;p&gt;Me, swelling with pride: “Absolutely.”&lt;/p&gt;

&lt;p&gt;About 10 minutes later, all the single stat cells were updated to report the percentage change rather than just the mathematical difference. This is what our final dashboard looked like:&lt;/p&gt;

&lt;p&gt;&lt;img class="aligncenter wp-image-267557 size-full img-padding" src="/images/legacy-uploads/Blood-pressure-and-Mario-Kart.jpg" alt="Blood-pressure-and-Mario-Kart" width="980" height="935" /&gt;&lt;/p&gt;

&lt;p&gt;She asked if we could just use the dashboard on her project board, so we put the dashboard in &lt;a href="https://docs.influxdata.com/influxdb/v2.2/visualize-data/dashboards/control-dashboard/#presentation-mode"&gt;presentation mode&lt;/a&gt;, toggled &lt;a href="https://docs.influxdata.com/influxdb/v2.2/visualize-data/dashboards/control-dashboard/#toggle-dark-mode-and-light-mode"&gt;light mode&lt;/a&gt; (to save printer ink), printed the dashboard in tiles, taped them all together, and then glued them to her project board. Here was the result:&lt;/p&gt;

&lt;p&gt;&lt;img class="aligncenter wp-image-267558 size-full img-padding" src="/images/legacy-uploads/results-that-my-daughter-observed.jpg" alt="results-that-my-daughter-observed" width="980" height="735" /&gt;&lt;/p&gt;
&lt;h3&gt;Her findings&lt;/h3&gt;
&lt;p&gt;While the sample size of the experiment wasn’t large enough to draw any concrete conclusions, there were some interesting results that my daughter observed:&lt;/p&gt;
&lt;ul&gt;
 	&lt;li&gt;Overall, blood pressure &lt;strong&gt;dropped&lt;/strong&gt; by approximately 2% on average.&lt;/li&gt;
 	&lt;li&gt;The drop in blood pressure was higher in males than it was in females. Females saw almost no change in systolic pressure, but a slight increase in diastolic pressure.&lt;/li&gt;
 	&lt;li&gt;Test subjects under the age of 18 saw almost no change in blood pressure while subjects 19-30 years old saw the most significant change in blood pressure.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img class="aligncenter wp-image-267559 size-full img-padding" src="/images/legacy-uploads/the-dashboard-in-presentation-mode.jpg" alt="the-dashboard-in-presentation-mode" width="980" height="735" /&gt;&lt;/p&gt;
&lt;h3&gt;The result&lt;/h3&gt;
&lt;p&gt;It was a fun little experiment and my daughter got a taste of using InfluxDB to better understand the data she collected. AND she won her age group!&lt;/p&gt;

&lt;p&gt;Blue ribbon or not, I’m incredibly proud of her. My daughter’s need to constantly ask questions and her love of learning are going to take her far in life. I’m excited to see where she goes and happy that I was able to introduce her to the powerful toolset I get to work with every day.&lt;/p&gt;
</description>
      <pubDate>Mon, 16 May 2022 07:00:49 -0700</pubDate>
      <link>https://www.influxdata.com/blog/influxdb-elementary-school-science-fairs</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/influxdb-elementary-school-science-fairs</guid>
      <category>Product</category>
      <category>Use Cases</category>
      <category>Developer</category>
      <author>Scott Anderson (InfluxData)</author>
    </item>
    <item>
      <title>InfluxDB 2.0 Documentation is Now Open Source</title>
      <description>&lt;p&gt;&lt;em&gt;Special thanks to Kelly Seivert, Nora Mullen, and Will Pierce&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Greetings, InfluxData community!&lt;/p&gt;

&lt;p&gt;In January, we released InfluxDB 2.0 alpha with draft documentation. With each incremental alpha release, we’ve iteratively updated the docs. Most recently, we published content for InfluxDB Cloud 2.0. Since the initial 2.0 alpha release, the 2.0 documentation source code has been kept in a private GitHub repository. Behind the scenes, we’ve been hard at work: curating content to address common InfluxDB use cases, standardizing structure and style, and increasing the depth of content.&lt;/p&gt;

&lt;p&gt;Today we’re open-sourcing the &lt;a href="https://github.com/influxdata/docs-v2"&gt;InfluxDB 2.0 documentation&lt;/a&gt;!&lt;/p&gt;
&lt;h2&gt;Optimize your time to awesome&lt;/h2&gt;
&lt;p&gt;To deliver useful, informative InfluxDB 2.0 documentation, we’re crafting content using a task-based approach. We’re organizing content by common tasks to perform, rather than by feature. We’ve separated conceptual and reference information from procedures.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Our goal:&lt;/strong&gt; less time reading, more time building cool stuff with InfluxDB.&lt;/p&gt;
&lt;h2&gt;Our commitment to you&lt;/h2&gt;
&lt;p&gt;We believe when project maintainers and users collaborate, the end product is always better. Your viewpoint, insight, and use case may be one we haven’t considered or addressed. Share your goals with us. We’re committed to:&lt;/p&gt;
&lt;ul&gt;
 	&lt;li&gt;Our community&lt;/li&gt;
 	&lt;li&gt;Open source&lt;/li&gt;
 	&lt;li&gt;Better solutions&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Our docs are “ever-evolving” – growing and changing to address your needs. Your contributions make a difference. Our hope in open sourcing the InfluxDB 2.0 documentation is to open the doors for community contributions and feedback. We want and need your help to create documentation that addresses the needs of our community and inspires new and interesting ways to use InfluxDB.&lt;/p&gt;
&lt;h2&gt;Ways to contribute&lt;/h2&gt;
&lt;p&gt;Find unclear or inaccurate information…a typo or a broken link? Missing content? Let us know! Submit an issue or a pull request on the &lt;a href="https://github.com/influxdata/docs-v2"&gt;InfluxDB 2.0 documentation repository&lt;/a&gt;. We welcome all contributions!&lt;/p&gt;
&lt;ul style="padding-top: 15px;"&gt;
 	&lt;li&gt;&lt;a href="https://github.com/influxdata/docs-v2/#readme"&gt;Download and run the InfluxDB 2.0 documentation locally&lt;/a&gt;&lt;/li&gt;
 	&lt;li&gt;&lt;a href="https://github.com/influxdata/docs-v2/blob/master/CONTRIBUTING.md"&gt;View the InfluxDB 2.0 documentation contribution guidelines&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</description>
      <pubDate>Mon, 14 Oct 2019 09:00:00 -0700</pubDate>
      <link>https://www.influxdata.com/blog/influxdb-2-0-documentation-is-now-open-source</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/influxdb-2-0-documentation-is-now-open-source</guid>
      <category>Product</category>
      <category>Use Cases</category>
      <category>Developer</category>
      <author>Scott Anderson (InfluxData)</author>
    </item>
  </channel>
</rss>
