What’s New in InfluxDB 3 Explorer 1.7: Table Management, Data Import, Transforms, and More
By Daniel Campbell
Apr 15, 2026
InfluxDB 3 Explorer 1.7 is a step forward for anyone who wants to manage their time series data without constantly switching between the UI and a terminal. This release adds table-level schema management, the ability to import data from other InfluxDB instances, and a new Transform Data section to reshape your data, all within the Explorer UI.
Table management
Previously, if you wanted to see what tables existed inside a database, you had to query system tables or use the API. The new Manage Tables page changes that. You can get there from the sidebar or from the new actions menu on any database in the Manage Databases page. That actions menu gives you quick access to query a database, view its tables, or delete it.
The Manage Tables page lists every table in the selected database, along with its column count, type, and any configured Distinct Value or Last Value Caches. Use the toggle filters to show or hide system tables and deleted tables. Deleted tables show up with a “Pending Delete” badge when the Show Deleted Tables toggle is enabled, so you always have visibility into what’s been removed.

You can also create new tables directly from this page. The Create Table dialog lets you define the schema up front: name, fields with data types, optional tags, and a retention period. This is useful when you want to control your schema explicitly rather than relying on schema-on-write to infer types from the first arriving data points.
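To see why explicit schemas can matter, here is a simplified, hypothetical sketch of the kind of type inference schema-on-write performs on line-protocol field values. This is an illustration of the general rules (bare numbers become floats, an `i` suffix marks integers, quoted values are strings), not the actual parser:

```python
def infer_field_type(value: str) -> str:
    """Infer the type schema-on-write would assign to a raw
    line-protocol field value (simplified illustration)."""
    # Boolean literals accepted by line protocol
    if value in ("t", "T", "true", "True", "f", "F", "false", "False"):
        return "boolean"
    # Double-quoted values are strings
    if value.startswith('"') and value.endswith('"'):
        return "string"
    # An "i" suffix marks a signed integer, "u" an unsigned one
    if value.endswith("i"):
        return "integer"
    if value.endswith("u"):
        return "unsigned"
    # Bare numbers default to float
    try:
        float(value)
        return "float"
    except ValueError:
        return "string"

# The first arriving point fixes the column types for the table:
print(infer_field_type("21.5"))   # float
print(infer_field_type("21i"))    # integer
print(infer_field_type('"ok"'))   # string
```

If the first point to land writes `21.5`, the column becomes a float for good, which is exactly the ambiguity the Create Table dialog lets you avoid.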
From any table’s action menu, you can jump straight to the Data Explorer with a pre-built query for that table.

Import from InfluxDB
The next few features I’ll discuss are enhancements that make it much easier to work with the InfluxDB 3 Processing Engine.
Moving data between InfluxDB instances used to mean writing scripts, dealing with export formats, and coordinating tokens across environments. The new Import from InfluxDB feature provides a guided workflow for migrating small-to-medium datasets from any existing InfluxDB v1, v2, or v3 instance (assuming v3 Schema compatibility) into your current InfluxDB 3 database.
You’ll find it under the Write Data section, on both the Dev Data and Production Data pages. The workflow walks you through selecting a target database (or creating a new one), connecting to a source InfluxDB instance, authenticating, and then choosing which databases and tables to import.

Before committing to the import, you can perform a dry run that shows exactly what will be transferred: the source and destination, the number of tables, the estimated row count, and how long it should take. Advanced options let you tune the batch size and concurrency if you need to balance import speed against resource usage.
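To make the batch-size trade-off concrete, here is a minimal, hypothetical sketch of how an importer might chunk rows before writing them. This is not Explorer's actual implementation; it just shows the mechanic you are tuning: larger batches mean fewer round trips but more memory per request.

```python
from itertools import islice
from typing import Iterable, Iterator, List


def batched(rows: Iterable[dict], batch_size: int) -> Iterator[List[dict]]:
    """Yield successive fixed-size batches of rows; the final
    batch may be smaller than batch_size."""
    it = iter(rows)
    while batch := list(islice(it, batch_size)):
        yield batch


# Ten source rows imported in batches of four -> 4, 4, 2
rows = [{"time": i, "temp": 20 + i} for i in range(10)]
print([len(b) for b in batched(rows, batch_size=4)])  # [4, 4, 2]
```

Concurrency is the same idea on another axis: how many of these batches are in flight at once.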
Once you start the import, a live progress view shows you how far along things are, how many rows have been imported, and the current status of each table. When it finishes, a “Query this database” button takes you straight to the Data Explorer so you can verify everything landed correctly.

If you’re running an InfluxDB 1.x or 2.x instance and want to try InfluxDB 3 with your real data, this saves you from building a migration pipeline. Just point the import tool at your existing instance, pick the databases and time range you want, and the data flows over. It also works for consolidating data from multiple InfluxDB 3 instances into one place, or pulling production data into a dev environment for testing.
Transform data
The new Transform Data section in the sidebar gives you a visual interface for setting up data transformations that run automatically on ingestion via the Processing Engine. Under the hood, these are powered by the Basic Transformation Processing Engine plugin, but you don’t need to write any plugin configuration by hand. The UI handles that for you.
The way it works: when data is written to a source table, the transformation runs automatically and writes the results to a target database or table. You can set a short retention period on the source data (say, one day) so the raw data cleans itself up, and the transformed data lives on in the destination. There are four types of transformations available.
Rename Table
Rename Table lets you route data arriving in one table to another table. This is handy when you’re consuming data from a source you don’t control, and the table names don’t match your naming conventions.
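Conceptually, the routing is just a lookup from incoming table name to desired table name. The sketch below is a hypothetical illustration of that idea (the names are invented), not the plugin's actual code:

```python
# Hypothetical routing map: incoming table name -> table you want.
TABLE_RENAMES = {"sensors_raw_v2": "environment"}


def route_table(incoming_name: str) -> str:
    """Return the table a point should land in, falling back to
    the original name when no rename is configured."""
    return TABLE_RENAMES.get(incoming_name, incoming_name)


print(route_table("sensors_raw_v2"))  # environment
print(route_table("cpu"))             # cpu (unmapped, passes through)
```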

Rename Columns
Rename Columns works similarly, but at the column level. You pick a source table and select which columns to rename. If you’re integrating data from different systems that use different naming conventions (for example, temp_f vs temperature_fahrenheit), this standardizes everything without touching the source.
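The column-level version is the same lookup applied per field. A hypothetical sketch of the effect, using the `temp_f` example from above (invented helper, not the plugin itself):

```python
# Hypothetical mapping from source column names to your conventions.
COLUMN_RENAMES = {"temp_f": "temperature_fahrenheit", "hum": "humidity"}


def rename_columns(row: dict) -> dict:
    """Return a copy of the row with mapped columns renamed;
    unmapped columns pass through unchanged."""
    return {COLUMN_RENAMES.get(col, col): val for col, val in row.items()}


print(rename_columns({"temp_f": 71.2, "hum": 40, "room": "kitchen"}))
# {'temperature_fahrenheit': 71.2, 'humidity': 40, 'room': 'kitchen'}
```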

Transform Values
Transform Values lets you apply calculations or conversions to field values as they come in. You can do math operations, string transformations, unit conversions, or simple find-and-replace. If your sensors report temperature in Celsius but your dashboards expect Fahrenheit, this handles the conversion at ingestion time so your queries stay clean.
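The Celsius-to-Fahrenheit case is the canonical example. A minimal, hypothetical sketch of what such a value transform does to each incoming row (illustration only, not the plugin's implementation):

```python
def celsius_to_fahrenheit(row: dict, field: str = "temp") -> dict:
    """Convert one field from Celsius to Fahrenheit at ingest time,
    leaving all other columns untouched."""
    out = dict(row)  # don't mutate the incoming row
    out[field] = round(out[field] * 9 / 5 + 32, 2)
    return out


print(celsius_to_fahrenheit({"temp": 20.0, "room": "office"}))
# {'temp': 68.0, 'room': 'office'}
```

Because the conversion happens once at write time, every downstream query and dashboard reads Fahrenheit directly instead of repeating the arithmetic.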

Filter Data
Filter Data lets you keep only the rows or columns that match specific conditions. You can filter by rows (e.g., only keep data where crop_type = 'carrots') or by columns (drop fields you don’t need). This is useful when you’re receiving more data than you actually want to store. For example, a third-party feed might send 50 fields when you only care about 5.
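Both filter directions can be sketched in a few lines. The function below is a hypothetical illustration using the `crop_type = 'carrots'` example: it drops non-matching rows entirely and keeps only a whitelist of columns from the rest (the field names are invented):

```python
from typing import Optional

# Hypothetical whitelist: the only columns worth storing.
KEEP_FIELDS = {"time", "crop_type", "soil_moisture"}


def filter_point(row: dict) -> Optional[dict]:
    """Row filter: keep only carrot rows. Column filter: keep only
    whitelisted fields. Returns None to drop the row entirely."""
    if row.get("crop_type") != "carrots":
        return None
    return {k: v for k, v in row.items() if k in KEEP_FIELDS}


print(filter_point({"time": 1, "crop_type": "carrots",
                    "soil_moisture": 0.31, "debug_flag": 7}))
# {'time': 1, 'crop_type': 'carrots', 'soil_moisture': 0.31}
print(filter_point({"time": 2, "crop_type": "beets"}))  # None
```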

You can test each transformation before deployment, and once deployed, monitor its status (running, stopped, errors) from the Transform Data dashboard.
Downsample Data
Downsampling is a classic time series operation: take high-frequency data and roll it up into lower-frequency summaries to save storage and speed up queries over long time ranges. The new Downsample page, also under the Transform Data section, makes this easy to set up. You create a downsample trigger by specifying a source table, a target table, a schedule (how often the aggregation runs), a time window (how far back to look), an aggregation interval (the bucket size), and an aggregation function (avg, sum, min, max, etc.). You can also choose to include or exclude specific fields.
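The core of any downsample job is bucketing points by a fixed interval and aggregating each bucket. Here is a minimal, hypothetical sketch of that operation with an `avg` aggregation function, assuming timestamps in seconds. It illustrates the concept only; the actual plugin handles schemas, scheduling, and time windows:

```python
from collections import defaultdict
from statistics import mean


def downsample(points, interval_s):
    """Roll up (timestamp_s, value) points into fixed-width time
    buckets, keeping the average value per bucket."""
    buckets = defaultdict(list)
    for ts, value in points:
        # Align each timestamp down to the start of its bucket
        buckets[ts - ts % interval_s].append(value)
    return {start: round(mean(vals), 2)
            for start, vals in sorted(buckets.items())}


# Four raw points rolled up into one-minute averages
points = [(0, 1.0), (10, 2.0), (70, 4.0), (80, 6.0)]
print(downsample(points, interval_s=60))  # {0: 1.5, 60: 5.0}
```

In the UI, the aggregation interval corresponds to `interval_s` here, while the schedule and time window control when the job runs and how far back it reads.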

The Downsample Processing Engine plugin powers this feature.
Get started
All of these features are available now in InfluxDB 3 Explorer 1.7. For more on these Processing Engine capabilities, see InfluxDB 3 Processing Engine Updates.
If you’re running InfluxDB 3 Core or Enterprise, update to the latest version to try them out. To learn more, check out the InfluxDB 3 Explorer documentation.
To update InfluxDB 3 Explorer, pull the latest Docker image:
docker pull influxdata/influxdb3-ui