Batch Processing vs. Stream Processing: What's the Difference?

If you’ve read DevRel Katy Farmer’s stellar post, Kapacitor and Continuous Queries: How To Decide Which Tool You Need, then you know that when our community talks, we listen. So, in alignment with that view and in honor of our very own Kapacitor Koala, let’s tackle another common community issue that has come to our attention: when should we use batch processing versus stream processing in our Kapacitor tasks?

[Image: InfluxData's Kapacitor Koala mascot. Caption: "Our famous Kapacitor Koala"]

Now, if you have only a vague idea of what Kapacitor is, I recommend doing a little light reading on it here and here just to get you up to speed. Kapacitor, the final component of our TICK Stack, offers several capabilities, such as data transformation, downsampling, and alerting. Kapacitor uses its own DSL, called TICKscript, which lets you define tasks that are then executed against your data; essentially, it processes your data for you.
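To give you a feel for what a task looks like, here’s a minimal TICKscript sketch of a simple data transformation; the database, measurement, and field names are hypothetical, so treat it as an illustration rather than a ready-made task.

```
// Minimal TICKscript sketch (hypothetical names): convert a Celsius field
// to Fahrenheit as points arrive and write the result back to InfluxDB.
stream
    |from()
        .measurement('weather')
    |eval(lambda: "temp_c" * 9.0 / 5.0 + 32.0)
        .as('temp_f')
    |influxDBOut()
        .database('weather_db')
        .retentionPolicy('autogen')
        .measurement('weather_f')
```

This particular sketch happens to be written as a stream task; whether stream or batch is the right choice is exactly the question the rest of this post tackles.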

Here’s where it gets tricky though: how do you choose whether to process your data as a batch task or streaming task?

Batch Tasks

Let’s discuss batch tasks first. A batch is a collection of data points that have been grouped together within a specific time interval; another term often used for this is a window of data. When running a batch task, Kapacitor queries InfluxDB periodically, so it doesn’t have to buffer much of your data in RAM. There are several cases where batch processing is the way to go:

  • Performing aggregate functions such as finding the mean, maximum, or minimum of a set interval of data.
  • Cases where alerting doesn't need to run on every single data point (since state changes will probably not happen that often). You don't want to be inundated with alerts!
  • Downsampling your data, which takes a large collection of data points and retains only the most significant ones, so you can still see overall trends in the data (see the sketch after this list).
  • Cases where a little extra latency won't severely impact your operation.
  • Cases with a super-high-throughput InfluxDB instance, since Kapacitor cannot process data as quickly as it can be written to InfluxDB (this comes up more often with InfluxDB Enterprise clusters).
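To make the batch case concrete, here’s a minimal TICKscript sketch of a downsampling task; the database, retention policy, measurement, and field names are hypothetical, so adjust them to your own schema.

```
// Batch sketch (hypothetical names): every 10 minutes, query the last
// 10 minutes of raw CPU data, compute the mean, and write the downsampled
// result to a separate retention policy.
batch
    |query('''
        SELECT "usage_idle"
        FROM "telegraf"."autogen"."cpu"
    ''')
        .period(10m)
        .every(10m)
    |mean('usage_idle')
        .as('mean_usage_idle')
    |influxDBOut()
        .database('telegraf')
        .retentionPolicy('downsampled')
        .measurement('cpu_mean')
```

Because .period() and .every() are both 10m here, each point is queried only once, and the data lives in Kapacitor just long enough to compute the aggregate, which is why batch tasks are so light on memory.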

Stream Tasks

On the other hand, we have stream tasks. Stream tasks create subscriptions to InfluxDB so that every data point written to InfluxDB is also written to Kapacitor. Note, though, that stream tasks can use a high percentage of available memory, so memory availability is a key factor to take into consideration. Here’s where stream processing is most ideal:

  • If you want to transform each individual data point in real time (technically, this could also be done with a batch task, but there’s added latency to consider).
  • Cases where the lowest possible latency is paramount to the operation. If alerts need to be triggered immediately, for example, running a stream task will ensure the least possible delay (see the sketch after this list).
  • Cases in which InfluxDB is handling high volume query load and you may want to alleviate some of the query pressure from InfluxDB.
  • Stream tasks understand time by the data’s timestamps, so there are no race conditions about whether a given point makes it into a window. With batch tasks, on the other hand, it is possible for a data point to arrive late and be left out of its relevant window.
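Here’s a minimal TICKscript sketch of the low-latency alerting case; the measurement, field, threshold, and log path are hypothetical.

```
// Stream sketch (hypothetical names): alert as soon as any single point
// reports idle CPU below 10%, with no query interval to wait for.
stream
    |from()
        .measurement('cpu')
    |alert()
        .crit(lambda: "usage_idle" < 10.0)
        .message('CPU idle is {{ index .Fields "usage_idle" }}%')
        .log('/tmp/cpu_alert.log')
```

Because the task subscribes to writes, the alert can fire the moment the offending point arrives rather than waiting for the next scheduled query.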

Another advantage of stream tasks is ease of use: you can define the task using only Kapacitor’s TICKscript, without having to delve into writing queries for InfluxDB. If you are comfortable writing both, however, it’s probably in your best interest to go with batch processing most of the time, since it uses far less memory. An additional factor to consider is that Kapacitor is not limited to use with InfluxDB. For example, if you want to send data straight from Telegraf over to Kapacitor, that will have to be done as a stream task.

Key Takeaways

  • Batch tasks query InfluxDB periodically, use limited memory, but can place additional query load on InfluxDB.
  • Batch tasks are best used for performing aggregate functions on your data, downsampling, and processing large temporal windows of data.
  • Stream tasks subscribe to writes from InfluxDB, placing additional write load on Kapacitor, but can reduce query load on InfluxDB.
  • Stream tasks are best used for cases where low latency is integral to the operation.

When our community talks, we listen.

We’d love to hear how your batch and stream tasks are going! Send us your comments, questions, issues, and blog ideas on our community site and feel free to reach out to us on Twitter:

@InfluxDB @mschae16