InfluxDB vs. Graphite for Time Series Data & Metrics Benchmark

This blog post was updated on September 10, 2020, with the latest benchmark results for InfluxDB 1.8.0 and Graphite 1.1.7, and is refreshed regularly with new benchmark figures.

At InfluxData, one of the questions we regularly get asked by developers and architects alike is: “How does InfluxDB compare to Graphite for time series workloads?” This question comes up for a couple of reasons. First, if they’re starting a brand new project and doing the due diligence of evaluating a few solutions head-to-head, it can be helpful in building their comparison grid. Second, they might already be running Graphite in an existing application, but would now like to expand their metrics collection and suspect there might be a better-suited solution than Graphite for this task.

We set out to compare the performance and features of InfluxDB and Graphite for time series workloads, specifically looking at the rates of data ingestion, on-disk data compression, and query performance. InfluxDB outperformed Graphite on all three measures: it delivered 14x greater write throughput and used 7x less disk space when compared against Graphite’s time-series-optimized configuration, and it returned the tested queries 10x faster than Graphite served them from its cache.

We felt that this data would prove valuable to engineers evaluating the suitability of both of these technologies for their use cases; specifically, time series use cases involving custom monitoring and metrics collection, real-time analytics, Internet of Things (IoT) and sensor data, plus container or virtualization infrastructure metrics. The benchmarking exercise did not look at the suitability of InfluxDB for workloads other than those that are time-series-based. InfluxDB is not designed to satisfy full-text search or log management use cases, so those are out of scope here; for them, we recommend a dedicated full-text search or log management solution instead.

To read the complete details of the benchmarks and methodology, download the “Benchmarking InfluxDB vs. Graphite for Time Series Data & Metrics Management” technical paper.

Our overriding goal was to create a consistent, up-to-date comparison that reflects the latest developments in both InfluxDB and Graphite, with coverage of other databases and time series solutions to follow. We will periodically re-run these benchmarks and update our detailed technical paper with our findings. All of the code for these benchmarks is available on GitHub. Feel free to open issues or pull requests on that repository if you have any questions, comments, or suggestions.

Now, let’s take a look at the results…

Versions tested

InfluxDB v1.8.0

InfluxDB is an open source time series database written in Go. At its core is a custom-built storage engine called the Time-Structured Merge (TSM) Tree, which is optimized for time series data. Queried via InfluxQL, a custom SQL-like query language, InfluxDB provides out-of-the-box support for mathematical and statistical functions across time ranges and is well suited to custom monitoring and metrics collection, real-time analytics, plus IoT and sensor data workloads.
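
To make this concrete, here is a minimal sketch of writing a point and running an InfluxQL aggregation from Python, assuming a local InfluxDB 1.8 instance and the influxdb-python client library; the "cpu" measurement and its tag and field names are hypothetical stand-ins for the DevOps dataset described below.

```python
# Minimal sketch, assuming a local InfluxDB 1.8 instance and the
# influxdb-python client; measurement/tag/field names are hypothetical.
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="benchmark")
client.create_database("benchmark")

# Write a single point (the benchmark itself batches thousands per request).
client.write_points([{
    "measurement": "cpu",
    "tags": {"host": "server-001", "region": "us-west"},
    "fields": {"usage_user": 12.5, "usage_system": 3.1},
}])

# InfluxQL supports statistical functions grouped over time ranges.
result = client.query(
    "SELECT MEAN(usage_user) FROM cpu "
    "WHERE time > now() - 1h GROUP BY time(10m), host"
)
for point in result.get_points():
    print(point)
```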

Graphite v1.1.7

Graphite is an open source time series database and graph-rendering engine for numeric data, written in Python. It consists of the Carbon daemon, which listens for time series data and stores it on disk in the Whisper database format, and the Graphite web app, built on the Django framework, which renders graphs on demand.
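
As a point of comparison, here is a minimal sketch of feeding Carbon one data point over its plaintext protocol, in which each metric is a single "path value timestamp" line, assuming a Carbon daemon listening on its default TCP port 2003; the metric path itself is a hypothetical example.

```python
# Minimal sketch of Graphite's plaintext protocol; assumes a Carbon
# daemon on its default TCP port 2003. The metric path is hypothetical.
import socket
import time
from typing import Optional

CARBON_HOST, CARBON_PORT = "localhost", 2003

def send_metric(path: str, value: float, timestamp: Optional[int] = None) -> None:
    """Send a single 'path value timestamp' line to the Carbon daemon."""
    ts = timestamp if timestamp is not None else int(time.time())
    line = f"{path} {value} {ts}\n"
    with socket.create_connection((CARBON_HOST, CARBON_PORT)) as sock:
        sock.sendall(line.encode("ascii"))

# Hypothetical metric path mirroring the per-server CPU values in the dataset.
send_metric("servers.server-001.cpu.usage_user", 12.5)
```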

In building a representative benchmark suite, we identified the most commonly evaluated characteristics for working with time series data. We looked at performance across three vectors:

  • Data ingest performance – measured in values per second
  • On-disk storage requirements – measured in bytes
  • Mean query response time – measured in milliseconds

About the dataset

For this benchmark, we focused on a dataset that models a common DevOps monitoring and metrics use case, in which a fleet of servers periodically reports system and application metrics at a regular time interval. We sampled 100 values across 9 subsystems (CPU, memory, disk, disk I/O, kernel, network, Redis, PostgreSQL, and Nginx) every 10 seconds. For the key comparisons, we looked at a dataset representing 100 servers over a 24-hour period, which corresponds to a relatively modest deployment.

  • Number of Servers: 100
  • Values measured per Server: 100
  • Measurement Interval: 10s
  • Dataset duration: 24h
  • Total values in dataset: ~87M per day (see the quick check after this list)
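
As a quick sanity check on that last figure, the total follows directly from the parameters above; this is arithmetic, not a measurement.

```python
# All figures below come from the benchmark configuration listed above.
servers = 100
values_per_server = 100          # values sampled across the 9 subsystems
interval_seconds = 10
duration_seconds = 24 * 60 * 60  # 24-hour dataset

intervals = duration_seconds // interval_seconds        # 8,640 intervals
total_values = servers * values_per_server * intervals  # 86,400,000
print(f"{total_values:,} values per day (~87M)")
```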

This is only a subset of the entire benchmark suite, but it’s a representative example. If you’re interested in additional detail, you can read more about the testing methodology on GitHub.

Write performance

InfluxDB outperformed Graphite by 14x when it came to data ingestion.

[Figure: InfluxDB vs. Graphite write throughput]
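
The harness used for the published numbers is the benchmark code on GitHub mentioned above; purely as an illustration of what "values per second" means here, the sketch below batches line protocol against InfluxDB's 1.x HTTP /write endpoint and reports an ingest rate. The batch size, the synthetic cpu series, and the pre-existing "benchmark" database are assumptions of this sketch, not the benchmark configuration.

```python
# Rough illustration of measuring ingest rate in values/second; this is
# NOT the benchmark harness. Assumes the "benchmark" database exists.
import time
import requests

WRITE_URL = "http://localhost:8086/write?db=benchmark&precision=s"
BATCH_SIZE = 5000  # points per HTTP request; one field value per point here

def make_batch(base_ts: int) -> str:
    """Build BATCH_SIZE line-protocol points for a synthetic cpu series."""
    return "\n".join(
        f"cpu,host=server-{i % 100:03d} usage_user={(i % 100) / 2} {base_ts + i}"
        for i in range(BATCH_SIZE)
    )

start = time.time()
written = 0
for _ in range(20):  # 20 batches = 100,000 values
    resp = requests.post(WRITE_URL, data=make_batch(int(time.time())))
    resp.raise_for_status()
    written += BATCH_SIZE

elapsed = time.time() - start
print(f"{written / elapsed:,.0f} values/second")
```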

On-disk compression

InfluxDB outperformed Graphite on disk, delivering 7x better data compression.

[Figure: InfluxDB vs. Graphite on-disk storage]
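
To reproduce the storage comparison on your own hardware, it is enough to total the size of each engine's data directory after loading the same dataset. The sketch below walks two commonly used default paths (a packaged InfluxDB 1.x install and a source install of Graphite); both paths are assumptions and may differ in your environment.

```python
# Minimal sketch: sum the on-disk footprint of each engine's data directory.
# The paths are assumed defaults; adjust them for your installation.
import os

def dir_size_bytes(root: str) -> int:
    """Total size of all files under root, in bytes."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            total += os.path.getsize(os.path.join(dirpath, name))
    return total

paths = {
    "InfluxDB (TSM)": "/var/lib/influxdb/data",             # assumed default
    "Graphite (Whisper)": "/opt/graphite/storage/whisper",  # assumed default
}
for label, path in paths.items():
    print(f"{label}: {dir_size_bytes(path) / 1024**2:,.1f} MiB")
```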

Query performance

InfluxDB delivered 10x faster query response times, even when compared against queries served from Graphite's cache.

[Figure: InfluxDB vs. Graphite query performance]
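
For context on what was measured, both systems answer queries over HTTP: InfluxDB via InfluxQL on its /query endpoint, and Graphite via its render API. The sketch below issues a roughly equivalent aggregate-over-time query to each; the ports, database name, and series names are assumptions for illustration, not the benchmark's query set.

```python
# Minimal sketch of comparable aggregate-over-time queries against both
# systems; endpoints/ports are typical defaults and may differ per install.
import requests

# InfluxDB 1.x: InfluxQL via the /query endpoint.
influx = requests.get(
    "http://localhost:8086/query",
    params={
        "db": "benchmark",
        "q": "SELECT MAX(usage_user) FROM cpu "
             "WHERE time > now() - 1h GROUP BY time(1m), host",
    },
)

# Graphite: the render API with a summarize() target, returned as JSON.
graphite = requests.get(
    "http://localhost:8080/render",
    params={
        "target": "summarize(servers.*.cpu.usage_user, '1min', 'max')",
        "from": "-1h",
        "format": "json",
    },
)

print(influx.json())
print(graphite.json())
```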

Summary

Ultimately, many of you were probably not surprised that a purpose-built time series database designed to handle metrics would significantly outperform Graphite's Whisper-based storage for these types of workloads. The difference is especially pronounced when workloads need to scale, as is common in real-time analytics and sensor data systems; that is where a purpose-built time series database like InfluxDB makes all the difference.

In conclusion, we highly encourage developers and architects to run these benchmarks for themselves to independently verify the results on their hardware and data sets of choice. However, for those looking for a valid starting point on which technology gives better time series data ingestion, compression, and query performance out of the box, InfluxDB is the clear winner across all these dimensions, especially as data sets grow larger and the system runs over a longer period of time.

What's next?