Ingest, Explore, Validate: A Quickstart with InfluxDB 3 Enterprise and Explorer UI
By Jameelah Mercer / Developer
Jul 22, 2025
Great observability doesn’t just collect metrics—it tells you exactly what’s broken, why it’s broken, and what to do about it.
InfluxDB 3 Enterprise delivers this through real-time ingestion, fast queries, and scalable storage. InfluxDB 3 Explorer provides the intuitive interface your team needs for database management, data ingestion, querying, and visualization without the usual complexity.
This tutorial shows you how to deploy Enterprise and Explorer as a unified platform that transforms raw metrics into actionable insights your organization needs.
What you’re building
By the end of this tutorial, you’ll have:
- InfluxDB 3 Enterprise: A high-performance time series database running locally
- Explorer UI: A preconfigured web interface for your teams
- Sample Data Pipeline: System metrics demonstrating real-world ingestion patterns
- Production Readiness: Clear scaling considerations and next steps
This setup mirrors what you’d deploy in staging or production environments, scaled for local development.
Prerequisites
To follow this tutorial, you’ll need:
Required:
- Docker installed on your local machine
- A dedicated working directory for your InfluxDB 3 project
Create a working directory on your local machine by running the following in your command line:
mkdir influxdb-enterprise-demo && cd influxdb-enterprise-demo
Deploy InfluxDB 3 Enterprise
The fastest way to get started with InfluxDB 3 Enterprise locally is by running it in a Docker container using a file-based object store. This ensures your data and configuration persist across restarts, which is ideal for local development, plugin testing, and working with the Explorer UI.
Tip: For production deployments, you can configure cloud object stores (such as S3, Azure Blob Storage, or Google Cloud Storage) for enhanced durability and scalability. Learn more about production storage configuration here.
Download and Install
Run the following command to download and execute the installation script:
curl -O https://www.influxdata.com/d/install_influxdb3.sh \
&& sh install_influxdb3.sh enterprise
You’ll see a prompt like:
Select Installation Type
1) Docker Image
2) Simple Download
Choose:
Select 1 to install using Docker. The download and setup typically complete in under 30 seconds.
Create Persistent Directories
If you plan to work with the Processing Engine plugins or want your data to persist between restarts, create the following directories in your working directory:
mkdir -p data plugins
- data/ stores the InfluxDB 3 catalog, WAL, and Parquet data:
  - Catalog: metadata about your databases, tables, and schema
  - WAL (Write-Ahead Log): where incoming data is written for recovery before being persisted to permanent Parquet files
  - Parquet data: your actual time series data, stored in a compressed, columnar format for fast queries
- plugins/ is optional and used for custom Python plugins
Tip: These directories ensure your data and configuration survive container restarts. In production deployments, you’d typically mount these to persistent volumes or use cloud object storage for the Parquet files.
Start the InfluxDB 3 Enterprise Container
Start the container manually using this Docker command:
docker run -it \
-p 8181:8181 \
-v $PWD/data:/data \
-v $PWD/plugins:/plugins \
influxdb3-enterprise influxdb3 serve \
--node-id node0 \
--cluster-id cluster0 \
--object-store file \
--data-dir /data \
--plugin-dir /plugins
This command mounts your local directories for persistent storage and exposes InfluxDB's API on port 8181.
Configure License
On the first run, the container prompts you to choose a license type:
To get started, please select a license type:
1) FREE TRIAL
2) COMMERCIAL
3) HOME USE
Choose:
For this tutorial, select Option 1 (FREE TRIAL). The free trial provides access to all InfluxDB 3 Enterprise capabilities for 30 days, making it ideal for proof-of-concept deployments, team evaluation, and production planning.
Provide your email address when prompted. InfluxDB will send a verification link. Once verified, the server will complete initialization and begin serving requests.
Important: When InfluxDB starts, it will display your admin token in the console output. Copy this token immediately and store it securely. You will need it for Explorer configuration.
You’ll see a confirmation log like:
valid license found, happy data crunching
startup time: XXXXXms address=0.0.0.0:8181
Your InfluxDB 3 Enterprise instance is now running locally at: http://localhost:8181
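Before moving on, you can confirm the server is reachable from outside the container. The sketch below is a minimal check using only the Python standard library; the port comes from this setup, and the /health endpoint is assumed to behave as a plain HTTP 200 probe:

```python
import urllib.error
import urllib.request

def check_health(base_url: str = "http://localhost:8181") -> bool:
    """Return True if the local InfluxDB 3 instance answers its health endpoint."""
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=3) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused or timed out: the server is not up yet
        return False

print(check_health())  # True once the container is running and licensed
```

A plain `curl http://localhost:8181/health` accomplishes the same thing if you prefer the command line.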
Set up Explorer UI
InfluxDB 3 Explorer provides a browser-based interface for database management, data ingestion, and SQL queries. You’ll deploy it in admin mode and preconfigure it to automatically connect to your InfluxDB instance for seamless user access.
Create Required Directories
To persist Explorer session data and optionally preconfigure connections, create the following directories in your project root:
mkdir -m 700 db
mkdir -m 755 config
mkdir -m 755 ssl
Configure Connection (Optional)
To streamline the user experience, you can preconfigure Explorer to connect to your InfluxDB 3 Enterprise instance automatically. This provides your users with preset connection details, reducing manual setup steps.
Create a config file in your project root to set these defaults for all users:
File: config/config.json
{
"DEFAULT_INFLUX_SERVER": "http://host.docker.internal:8181",
"DEFAULT_INFLUX_DATABASE": "",
"DEFAULT_API_TOKEN": "your_admin_token_from_step_1",
"DEFAULT_SERVER_NAME": "Local Enterprise"
}
Replace your_admin_token_from_step_1 with the actual admin token from step 1.
Launch Explorer Container
Run the following command to launch the Explorer UI in admin mode, enabling full access to create tokens, databases, and more:
docker run --detach \
--name influxdb3-explorer \
--publish 8888:80 \
--publish 8889:8888 \
--volume $(pwd)/config:/app-root/config:ro \
--volume $(pwd)/db:/db:rw \
--volume $(pwd)/ssl:/etc/nginx/ssl:ro \
influxdata/influxdb3-ui:1.0.0 \
--mode=admin
Check that it’s running with:
docker ps
Now, we are ready to launch Explorer's graphical interface. Visit the UI at: http://localhost:8888
If you preconfigured config.json correctly, you'll see "Local Enterprise" preloaded as a connected server. If not, click "Add server" and enter the connection info manually:
- Server Name: Local Enterprise
- URL: http://host.docker.internal:8181
- Token: Paste your admin token from step 1
Security Best Practice: Store the admin token securely. Anyone with this token has full control over your InfluxDB instance.
Create your first database
With Explorer running and connected to your InfluxDB 3 Enterprise instance, create a database to store your organization’s time series data.
Database Setup
- Open the Explorer UI at http://localhost:8888
- In the left-hand sidebar, click Manage Databases
- Click + Create New in the top right
- Enter a database name (e.g., observability)
- (Optional) Set a retention period:
  - Enter a number (e.g., 30) and select a unit (e.g., days)
  - Leave blank for indefinite retention
- Click Create Database
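If you'd rather script this step, databases can also be created over the HTTP API. A hedged sketch, assuming the /api/v3/configure/database endpoint and using YOUR_ADMIN_TOKEN as a placeholder for the token from step 1:

```python
import json
import urllib.request

INFLUX_URL = "http://localhost:8181"
TOKEN = "YOUR_ADMIN_TOKEN"  # placeholder: paste the admin token from step 1

def build_create_db_request(db_name: str) -> urllib.request.Request:
    """Build (but do not send) a POST request that creates a database."""
    return urllib.request.Request(
        f"{INFLUX_URL}/api/v3/configure/database",
        data=json.dumps({"db": db_name}).encode(),
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# With the server running, send it with:
#   urllib.request.urlopen(build_create_db_request("observability"))
```

Either path ends in the same place; the UI flow above is the one this tutorial follows.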
Ingest Sample Data
Now, let’s populate your database with sample data. We’ll use a CSV containing system metrics with tags, fields, and Unix nanosecond timestamps.
Prepare Sample CSV
Create a file named system_metrics.csv in your project directory:
time,cpu,host,usage_user,usage_system
1752274800000000000,cpu0,hostA,19.8,7.2
1752274860000000000,cpu0,hostA,20.1,7.3
1752274920000000000,cpu0,hostA,18.9,6.9
1752274980000000000,cpu0,hostA,21.2,7.5
1752275040000000000,cpu0,hostA,22.4,7.1
1752275100000000000,cpu0,hostA,19.7,6.8
1752275160000000000,cpu0,hostA,20.3,7.0
1752275220000000000,cpu0,hostA,18.5,6.6
1752275280000000000,cpu0,hostA,21.0,7.4
1752275340000000000,cpu0,hostA,20.7,7.2
Important: Timestamps must be in Unix nanoseconds (UTC). The cpu and host columns will be treated as tags; usage_user and usage_system as fields.
Upload via Explorer
- Open the Explorer UI at http://localhost:8888
- In the left-hand sidebar, click Data Ingest
- Select Import CSV or JSON data
- Choose your database from the dropdown (e.g., observability)
- Paste your CSV data directly into the text area or drag in your system_metrics.csv
- Click Parse Data to proceed to field mapping
- Configure the data mapping:
  - Set Measurement Name to "system_metrics" (or your preferred name)
  - Confirm time is selected as the Timestamp Column
  - In the Field Mappings section, ensure cpu and host are marked as Tags, and usage_user and usage_system are marked as Fields
- Click Convert to Line Protocol to complete the upload
Verify with SQL
Navigate to the Data Explorer in the sidebar and run:
SELECT * FROM system_metrics LIMIT 10
You should see the records you uploaded.
Troubleshooting: If no data appears, confirm you selected the correct database, verify timestamps are formatted as Unix nanoseconds, and ensure there are no trailing characters or line breaks in the CSV.
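The same verification can be scripted against the SQL query endpoint instead of the Data Explorer. A hedged sketch, assuming the /api/v3/query_sql endpoint and the YOUR_ADMIN_TOKEN placeholder:

```python
import json
import urllib.request

INFLUX_URL = "http://localhost:8181"
TOKEN = "YOUR_ADMIN_TOKEN"  # placeholder: paste the admin token from step 1

def build_sql_request(db: str, sql: str) -> urllib.request.Request:
    """Build (but do not send) a POST request for the SQL query endpoint."""
    body = json.dumps({"db": db, "q": sql, "format": "json"}).encode()
    return urllib.request.Request(
        f"{INFLUX_URL}/api/v3/query_sql",
        data=body,
        headers={"Authorization": f"Bearer {TOKEN}",
                 "Content-Type": "application/json"},
        method="POST",
    )

# With the server running:
#   rows = json.load(urllib.request.urlopen(
#       build_sql_request("observability", "SELECT * FROM system_metrics LIMIT 10")))
```

A scripted check like this is handy once you automate ingestion and want the verification step in CI rather than a browser.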
Production considerations
Uploading a CSV is ideal for quick validation. However, in production environments, data is ingested continuously and requires different approaches for scale and reliability.
For production workloads, consider these ingestion methods:
- Telegraf: A lightweight agent for collecting system, container, and service metrics
- HTTP Write API: Send raw records directly using the REST endpoint
- InfluxDB 3 client libraries: Write from your application code in Python, Go, Java, or other supported languages
These pipelines provide durable and scalable ingestion tailored for observability workloads. Learn more in the InfluxDB 3 Write Data docs.
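As a small taste of the HTTP Write API option, here is a hedged sketch that posts line protocol directly; it assumes the /api/v3/write_lp endpoint and the YOUR_ADMIN_TOKEN placeholder, and in production you would add batching, retries, and error handling on top:

```python
import urllib.parse
import urllib.request

INFLUX_URL = "http://localhost:8181"
TOKEN = "YOUR_ADMIN_TOKEN"  # placeholder: paste the admin token from step 1

def build_write_request(db: str, line_protocol: str) -> urllib.request.Request:
    """Build (but do not send) a line-protocol write request."""
    qs = urllib.parse.urlencode({"db": db})
    return urllib.request.Request(
        f"{INFLUX_URL}/api/v3/write_lp?{qs}",
        data=line_protocol.encode(),
        headers={"Authorization": f"Bearer {TOKEN}",
                 "Content-Type": "text/plain"},
        method="POST",
    )

point = "system_metrics,cpu=cpu0,host=hostA usage_user=19.8 1752274800000000000"
# With the server running:
#   urllib.request.urlopen(build_write_request("observability", point))
```

The official client libraries wrap this same endpoint with batching and retry logic, which is why they're the better choice for application code.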
What’s next?
You now have a complete InfluxDB 3 Enterprise and Explorer setup ready for your organization. For production deployment, focus on configuring cloud object storage (S3, Azure Blob, GCS), setting up TLS/SSL certificates, and implementing proper backup and monitoring procedures.
For your users, create fine-grained database tokens for different teams, integrate with your existing authentication system, and set up Grafana dashboards for advanced visualization. To scale your data pipeline, deploy Telegraf agents across your infrastructure for automated collection, integrate with your existing observability stack, and implement automated retention policies to manage storage costs.
Share Your Feedback!
We hope this guide helps you successfully deploy InfluxDB 3 Enterprise and Explorer to collect actionable insights. As you begin using this tool, we’d love to hear about your experience. Share your thoughts and feedback with our development team on our Community Forums, Slack, or our Community Site.