InfluxData Blog

Assessing Write Performance of InfluxDB’s Clusters w/ AWS


While conducting our various benchmark tests of InfluxDB, we decided to also explore how clusters of InfluxDB scale with our closed-source InfluxEnterprise product, primarily through the lens of write performance.

This data should prove valuable to developers and architects evaluating the suitability of InfluxEnterprise for their use case, in addition to helping establish some rough guidelines for what those users should expect in terms of write performance in a real-world environment.

To read the complete details of the benchmarks and methodology, download the “Assessing Write Performance of InfluxDB’s Clusters w/ AWS” technical paper or watch the recorded video titled: “How cluster creation and differences impact performance.”

Our goal with this benchmarking test was to create a consistent, up-to-date comparison that reflects the latest developments in InfluxDB and InfluxEnterprise. Periodically, we’ll re-run these benchmarks and update this document with our findings. All of the code for these benchmarks is available on GitHub. Feel free to open issues or pull requests on that repository if you have any questions, comments, or suggestions.

Now, let’s take a look at the results…

Versions Tested

InfluxEnterprise: v1.1.0

InfluxDB is an open-source time-series database written in Go. At its core is a custom-built storage engine called the Time-Structured Merge (TSM) Tree, which is optimized for time series data. Controlled by a custom SQL-like query language named InfluxQL, InfluxDB provides out-of-the-box support for mathematical and statistical functions across time ranges and is perfect for custom monitoring and metrics collection, real-time analytics, plus IoT and sensor data workloads.
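To make the "mathematical and statistical functions across time ranges" concrete, here is a small Python sketch that builds an InfluxQL query using the language's MEAN() function and GROUP BY time() clause. The measurement name ("cpu") and field name ("usage_user") are hypothetical placeholders, not values taken from this benchmark.

```python
# Sketch: constructing an InfluxQL query that averages a field over
# fixed time windows. MEAN() and GROUP BY time() are standard InfluxQL;
# the measurement/field names below are placeholders for illustration.
def mean_over_window(measurement, field, window="5m", since="1h"):
    """Return an InfluxQL query averaging `field` in `window` buckets."""
    return (
        f'SELECT MEAN("{field}") FROM "{measurement}" '
        f"WHERE time > now() - {since} GROUP BY time({window})"
    )

query = mean_over_window("cpu", "usage_user")
print(query)
```

A query like this is what makes InfluxDB convenient for monitoring dashboards: the downsampling happens in the database rather than in application code.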

About the Benchmarks

In building this benchmark suite, we identified a few parameters that are most relevant to scaling write performance. As we’ll describe in additional detail below, we looked at performance across three vectors:

  • Number of Data Nodes
  • Replication Factor
  • Batch Size

The trends for these are relatively straightforward. We expect throughput to increase with more data nodes and a larger batch size, and to decrease with a higher replication factor (since a higher replication factor means each point must be written multiple times across the cluster).
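Batch size is the one vector a client controls directly, so here is a minimal, hedged sketch of client-side batching: points in InfluxDB line protocol are grouped into fixed-size payloads before being sent. Larger batches amortize per-request overhead, which is why throughput is expected to rise with batch size. The point format and host values below are illustrative placeholders, not the benchmark's actual generator.

```python
# Minimal sketch of client-side batching (one of the three vectors the
# benchmark varies). Each yielded payload is a newline-joined block of
# line-protocol points, ready for a single write request.
def batches(points, batch_size):
    """Yield newline-joined line-protocol payloads of up to batch_size points."""
    buf = []
    for p in points:
        buf.append(p)
        if len(buf) == batch_size:
            yield "\n".join(buf)
            buf = []
    if buf:
        yield "\n".join(buf)  # flush the final, possibly partial, batch

points = [f"cpu,host=server{i} usage=0.5 {1480000000 + i}" for i in range(10)]
payloads = list(batches(points, 4))
print(len(payloads))  # 10 points in batches of 4 -> 3 payloads (4 + 4 + 2)
```

In the actual benchmark, payloads like these would be POSTed to the cluster's write endpoint; the batching logic itself is what the batch-size vector exercises.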

    About the Data Set

For this benchmark, we focused on a dataset that models a common DevOps monitoring and metrics use case, in which a fleet of servers periodically reports system and application metrics at a regular interval. We sampled 100 values across 9 subsystems (CPU, memory, disk, disk I/O, kernel, network, Redis, PostgreSQL, and Nginx) every 10 seconds. For the key comparisons, we looked at a dataset representing 10,000 servers over a 24-hour period, which corresponds to a decent-sized production deployment. We also provide some color on how these comparisons scale with a larger dataset, both in duration and in number of servers.

    • Number of Servers: 1,000
    • Values measured per Server: 100
    • Measurement Interval: 10s
    • Dataset […]
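The dataset shape described above can be sketched as a generator: each server emits one point per subsystem at every measurement interval. This is a hedged illustration only; the subsystem names are taken from the list above, but the field names and values are placeholders, and the benchmark's real data generator lives in the GitHub repository mentioned earlier.

```python
# Hedged sketch of the dataset shape: one line-protocol point per
# subsystem per server per 10-second interval. Values are random
# stand-ins; the real generator produces realistic metric values.
import random

SUBSYSTEMS = ["cpu", "mem", "disk", "diskio", "kernel",
              "net", "redis", "postgresql", "nginx"]
INTERVAL_S = 10  # measurement interval from the spec above

def points_for_server(host, start_ts, n_intervals):
    """Yield one line-protocol point per subsystem per interval."""
    for step in range(n_intervals):
        ts = start_ts + step * INTERVAL_S
        for sub in SUBSYSTEMS:
            value = random.random()  # stand-in for a real metric value
            yield f"{sub},host={host} value={value:.3f} {ts}"

sample = list(points_for_server("server0", 1480000000, 2))
print(len(sample))  # 2 intervals * 9 subsystems = 18 points
```

Multiplying this shape out across the full fleet and a 24-hour window is what produces the dataset sizes the benchmark ingests.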
      Read Full Post

Now in Beta: Chronograf, a complete open source monitoring solution running on the TICK Stack


Today we’re announcing the latest release of Chronograf, the user interface of the TICK Stack, and moving the project to beta status. Over the past month, we have been iterating quickly on features and addressing issues based on user feedback. Key highlights include:

    • OAuth authentication via GitHub
• Application templates for Elasticsearch, Varnish, and 22 other applications
    • Responsive design for the host view page
• A number of other smaller bug fixes; refer to the changelog for more details

As part of the beta release, we recorded a video that walks through the key capabilities of Chronograf in less than five minutes.

    Read Full Post

    InfluxDB Week in Review – Dec 5, 2016


    InfluxDB Week in Review – Nov 28, 2016


    InfluxDB Week in Review – Nov 21, 2016


    InfluxDB Markedly Outperforms OpenTSDB in Time-Series Data & Metrics Benchmark


This is an update in a series of detailed benchmarking tests comparing InfluxDB with other databases for time-series data and metrics workloads. Previously, we completed benchmarking tests comparing InfluxDB with Elasticsearch, Cassandra, and MongoDB.

At InfluxData, one of the most common questions we’ve heard from developers and architects over the last few months is, “How does InfluxDB compare to OpenTSDB for time-series workloads?” This question might be prompted for a few reasons. First, if they’re starting a brand new project and doing the due diligence of evaluating a few solutions head-to-head, it can be helpful in creating their comparison grid. Second, they might already be using OpenTSDB for ingesting logs in an existing monitoring setup, but would now like to see how they can integrate metrics collection into their system, and believe there might be a better solution than OpenTSDB for this task.

    Read Full Post