Flood IO, based in Melbourne, Australia, brings a new approach to the traditional load testing space. Built for testers, Flood IO lets you run distributed load tests for millions of concurrent users across the globe using your choice of JMeter, Gatling, Selenium, or Capybara. Customers can use Flood IO's On Demand infrastructure or Host Your Own nodes on AWS. Unlike traditional load testing services, Flood is designed to maximize testing success: it lets you run as many users and tests as you need, for as long as you like, while it takes care of the servers and collects the results.
Flood IO offers an on-demand load testing service built on open source tools such as JMeter, Gatling, and Selenium, with no test launch delays, no limits on the number of tests and users, and real-time visibility into test performance. The company wanted to enable customers to quickly distribute a test plan across hundreds of servers in multiple AWS regions. Quick feedback loops were also critical for successful testing, so Flood IO wanted to show test results as soon as they arrived, allowing testers to stop a test and make changes as necessary. Limits on performance, visibility, or scalability would jeopardize testing success and limit company growth. Flood IO's on-demand testing service uses InfluxDB to provide insights into customers' performance tests. In addition, Kapacitor is used to automatically spin up test environments for customers and provide a real-time view of the tests they run.
InfluxDB serves as the time series database for Flood IO's metrics and events, which are collected and analyzed by Kapacitor on InfluxDB Cloud. Flood IO's distributed grid of semi-autonomous, loosely coupled nodes on AWS infrastructure runs the different testing tools and collates results in near real time across multiple geographic regions, including the US, EU, and Asia Pacific.
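To make the architecture concrete: test metrics like the ones described above are stored in InfluxDB as timestamped points, typically serialized in InfluxDB's line protocol. The sketch below is an illustration only; the measurement, tag, and field names are hypothetical and not Flood IO's actual schema.

```python
from typing import Dict, Union

def to_line_protocol(measurement: str,
                     tags: Dict[str, str],
                     fields: Dict[str, Union[int, float]],
                     timestamp_ns: int) -> str:
    """Serialize one point in InfluxDB line protocol:
    measurement,tag_key=tag_val field_key=field_val timestamp"""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    # Integer fields take an 'i' suffix in line protocol; floats are bare.
    field_str = ",".join(
        f"{k}={v}i" if isinstance(v, int) else f"{k}={v}"
        for k, v in sorted(fields.items())
    )
    return f"{measurement},{tag_str} {field_str} {timestamp_ns}"

# Hypothetical load-test sample: mean response time and concurrent users,
# tagged by AWS region and testing tool.
point = to_line_protocol(
    "response_time",
    {"region": "us-east-1", "tool": "jmeter"},
    {"mean_ms": 182.5, "active_users": 4000},
    1465839830100400200,
)
print(point)
# → response_time,region=us-east-1,tool=jmeter active_users=4000i,mean_ms=182.5 1465839830100400200
```

Points like this can then be written to InfluxDB in batches over its HTTP write API, and downstream consumers such as Kapacitor subscribe to the stream to drive dashboards and alerts.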
“If we hadn’t adopted InfluxDB, we wouldn’t have been able to scale to the capacity or requirements of customers we have today. Running 900 nodes across 30-node clusters, Elasticsearch would have been extremely painful. We probably would have lost business.”