Scale your time series workloads in the cloud
Focus on what matters. Leave your infrastructure to us.
Faster queries, including for high-cardinality data, compared to InfluxDB Open Source
Tools and resources to help you innovate faster
With a powerful set of data collection tools, client libraries, and APIs, you can get data from everywhere.
Telegraf
InfluxDB Cloud uses Telegraf to collect time series data from databases, applications, systems, and IoT sensors and to send it to InfluxDB. Telegraf is a plugin-driven server agent with over 300 plugins. It is written in Go, compiles into a single binary with no external dependencies, and has a minimal memory footprint.
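As a sketch of what this looks like in practice, here is a minimal Telegraf configuration that collects CPU metrics and writes them to an InfluxDB Cloud bucket. The URL, token, organization, and bucket values are placeholders; substitute your own.

```toml
# Minimal illustrative Telegraf config: collect aggregate CPU metrics
# every 10 seconds and send them to InfluxDB Cloud (v2 output plugin).
[agent]
  interval = "10s"

[[inputs.cpu]]
  percpu = false
  totalcpu = true

[[outputs.influxdb_v2]]
  urls = ["https://cloud2.influxdata.com"]   # placeholder endpoint
  token = "$INFLUX_TOKEN"                    # read from the environment
  organization = "example-org"               # placeholder
  bucket = "example-bucket"                  # placeholder
```

Because Telegraf is plugin-driven, swapping the input for any of its other 300+ plugins changes only the `[[inputs.*]]` section; the output stays the same.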
API
Programmatic access to InfluxDB Cloud is available through a robust set of APIs. These APIs are shared between InfluxDB Open Source and InfluxDB Cloud, so you can write code once and run it locally against the open source database or against the elastic database as a service.
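A small sketch of that write-once idea: the v2 write API accepts line protocol over plain HTTP, so the same code can target a local open source instance or a Cloud endpoint just by changing the URL. The helper below and the hostname, org, bucket, and token are illustrative placeholders, not a specific client library.

```python
import urllib.request

def to_line_protocol(measurement, tags, fields, ts_ns):
    """Format one point as InfluxDB line protocol:
    measurement,tag_key=tag_val field_key=field_val timestamp"""
    tag_str = ",".join(f"{k}={v}" for k, v in tags.items())
    field_str = ",".join(f"{k}={v}" for k, v in fields.items())
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

line = to_line_protocol(
    "cpu", {"host": "server01"}, {"usage_idle": 92.5}, 1700000000000000000
)

# The same POST works against OSS or Cloud; only the base URL differs.
req = urllib.request.Request(
    "https://cloud2.influxdata.com/api/v2/write"
    "?org=example-org&bucket=example-bucket&precision=ns",
    data=line.encode(),
    headers={"Authorization": "Token $INFLUX_TOKEN"},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment with real credentials
print(line)
```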
InfluxDB Cloud features
Single datastore for all time series data
- Collect, analyze, and store metric, event, and tracing data to open new use cases
- Simplify data pipelines and remove unnecessary tooling
Low latency queries
- Work with leading-edge data. Keep live and recently queried data in a “hot” storage tier built on Apache Arrow, an in-memory columnar format optimized for speed
- Ingest high-volume, high-cardinality data without impacting performance
- Continuously ingest, transform, and analyze hundreds of millions of time series data points per second
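The performance claims above rest on columnar layout. This plain-Python sketch (not the actual Arrow API) shows the idea: each field is stored contiguously, so an aggregate over one field scans a single column instead of touching every row.

```python
# Row-oriented: each record carries every field.
rows = [
    {"time": 1, "host": "a", "usage": 10.0},
    {"time": 2, "host": "a", "usage": 30.0},
    {"time": 3, "host": "b", "usage": 20.0},
]

# Column-oriented: one contiguous list per field.
columns = {key: [r[key] for r in rows] for key in rows[0]}

# A query like mean(usage) now reads only the "usage" column.
mean_usage = sum(columns["usage"]) / len(columns["usage"])
print(mean_usage)
```

In a real engine the columns are typed, contiguous memory buffers, which also makes them amenable to vectorized (SIMD) execution; the dictionary of lists here is just the shape of the idea.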
Open and interoperable with data ecosystems
- Interoperate with the many open source ecosystems that use Apache Parquet, an open data interchange format
- Use data science tools directly on Parquet files to power machine learning and other higher-order analytical tasks
- Connect Google Data Studio and other BI tools or data warehouses using open source ODBC and JDBC drivers based on Flight SQL
Superior data compression
- High-compression storage using the Apache Parquet file format
- Persist data to cloud object storage, saving more data in less space while also reducing costs
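Parquet's actual encodings are more elaborate, but this sketch illustrates why sorted time series columns compress so well: regularly spaced timestamps collapse to a run of identical deltas, which run-length encoding then stores in a handful of values.

```python
def delta_encode(values):
    """Store the first value plus successive differences."""
    return [values[0]] + [b - a for a, b in zip(values, values[1:])]

def run_length_encode(values):
    """Collapse runs of identical values into [value, count] pairs."""
    out = []
    for v in values:
        if out and out[-1][0] == v:
            out[-1][1] += 1
        else:
            out.append([v, 1])
    return out

# 1,000 timestamps at a regular 10-second interval.
timestamps = list(range(1_700_000_000, 1_700_010_000, 10))

deltas = delta_encode(timestamps)      # one base value + 999 tens
rle = run_length_encode(deltas)        # [[1700000000, 1], [10, 999]]
print(len(timestamps), "->", len(rle), "encoded entries")
```

A thousand raw integers shrink to two encoded entries here; columnar formats such as Parquet apply this family of tricks (delta, run-length, dictionary encoding) per column, which is why time series data compresses far better there than in row-oriented storage.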