The Immutability of Time Series Data

This article was originally published in The New Stack and is reposted here with permission.

We’re constantly moving through time. The time it took you to read this sentence is now forever in the past, unchangeable. This leads to something unique about data with a time dimension: It can only go in one direction. Time series data is different from other data for many reasons. It often comes in large volumes that need to be handled carefully to produce insights in near real time. This blog post focuses on the unchangeable, immutable nature of time series data.

The past is the past

In our world, time is immutable: once a moment is in the past, it can’t be changed, much like an immutable object in programming. In a perfect world, data reflects that; you can’t change time series data any more than you can rewind the clock. Data should reflect reality, but sometimes bad data points get written into a database. Those points don’t reflect reality, so in that case it makes sense to delete them.

Deleting and correcting historical data needs to be handled with care:

  1. You need a database that can delete points without shifting other points around.
  2. You might also need to edit the historical record by adding late-arriving points.

There’s a balance to strike so you’re not constantly rewriting the past in such a way that your data loses meaning, but you can still make necessary changes that enhance the context presented by time. When deciding if it makes sense to make an edit, consider whether the edit brings the data closer to reflecting reality or further from it.

Time keeps moving

The other thing about time is that it never stops; the present is always moving forward. Because of that, time series data updates continuously. When you think of a database, you might picture a place where you write data and later read it back without changing it very often. A time series database, by contrast, is constantly being written to and updated because time keeps moving.

You can’t collect data with the infinite precision of reality, but you can choose the level of precision that makes sense for your application. For example, averages are one of the most common and useful calculations. If you’re working with data that isn’t a time series, you might average the number of people per square mile in a state. With time series data, you might average the number of people entering a building every hour. The difference is that at each moment, the start and end of the last hour change. Here’s some example code for this sort of calculation:

// Query hourly foot traffic from the sample bucket for January 2022
// and compute the mean number of people per one-hour window.
from(bucket: "sample")
    |> range(start: 2022-01-01, stop: 2022-01-31)
    |> filter(fn: (r) => r["_measurement"] == "foot_traffic")
    // Average the "number_of_people" column over one-hour windows,
    // skipping windows that contain no data.
    |> aggregateWindow(column: "number_of_people", every: 1h, fn: mean, createEmpty: false)
    |> yield(name: "running mean")

When you take a moving average, you calculate a new average at specified intervals so you can see how your calculation changes over time, resulting in a new time series. You need to consider your data set to know what sort of interval makes sense. If you choose too broad an interval, you lose information and context, but if you choose one that is too precise, you’ll have windows without any data points, and your results will drop to zero in a way that doesn’t make sense and isn’t helpful.
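
For instance, here is a minimal sketch of a moving average in Flux, reusing the hypothetical "sample" bucket and "foot_traffic" measurement from the query above and assuming the counts are stored as a "number_of_people" field in the standard "_value" column. It recomputes the mean every hour over the trailing six hours:

from(bucket: "sample")
    |> range(start: 2022-01-01, stop: 2022-01-31)
    |> filter(fn: (r) => r["_measurement"] == "foot_traffic")
    |> filter(fn: (r) => r["_field"] == "number_of_people")
    // Recalculate the mean every hour over the trailing six hours,
    // producing a new, smoother time series of overlapping averages.
    |> timedMovingAverage(every: 1h, period: 6h)
    |> yield(name: "6h moving mean")

Widening the period smooths the series further at the cost of detail; narrowing it preserves detail but risks windows with little or no data.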

The context of time

No matter how close to real time your data architecture is, there will always be some lag between when your data is collected and when it lands in a database, ready to be queried. If you have automated queries or processing set up, this lag can skew your results. For example, if you calculate the mean of a metric from the last five minutes, the data that arrived at the database in the last five minutes might not include the full set of measurements that were taken at the edge in the last five minutes. InfluxDB lets you handle this with task offsets.

You can schedule tasks to run calculations like this while including some extra buffer time to allow all data to arrive in the database first. This is important to preserve the full context of when each point was collected. Telegraf, InfluxData’s open source data-collection agent, also allows for offsets.
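
As a rough sketch, an InfluxDB task can declare an offset so that each hourly run starts a few minutes after the hour it covers has closed, giving late-arriving points time to land first (the task name, durations and destination bucket here are hypothetical):

// Run every hour, but delay each run by five minutes so data
// collected near the end of the hour has time to arrive.
option task = {name: "hourly_foot_traffic_mean", every: 1h, offset: 5m}

from(bucket: "sample")
    |> range(start: -task.every)
    |> filter(fn: (r) => r["_measurement"] == "foot_traffic")
    |> aggregateWindow(column: "number_of_people", every: 1h, fn: mean, createEmpty: false)
    |> to(bucket: "sample-downsampled")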

There are many reasons to downsample data. Sometimes you don’t have enough storage space for the full raw data set. Sometimes an averaged signal cuts through the noise and gives you more valuable information. When you take an average, some information is lost and some new information is added. Averages aren’t the only way of downsampling either. Sometimes instead of maintaining the shape of the data, it might make more sense to count the number of times a metric goes above a set threshold.
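
As an illustration of that last idea (again assuming the hypothetical "foot_traffic" measurement with a "number_of_people" field), you could count how many readings exceed a chosen threshold in each hour instead of averaging them:

from(bucket: "sample")
    |> range(start: 2022-01-01, stop: 2022-01-31)
    |> filter(fn: (r) => r["_measurement"] == "foot_traffic")
    |> filter(fn: (r) => r["_field"] == "number_of_people")
    // Keep only readings above an arbitrary threshold of 100 people,
    // then count how many such readings fall in each hour.
    |> filter(fn: (r) => r["_value"] > 100)
    |> aggregateWindow(every: 1h, fn: count, createEmpty: false)

The result is still a time series, just one that answers a different question than the raw counts do.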

Whatever downsampling method you use, everything you do should be intentional so you don’t lose data that you later realize is important. If you aren’t handling all your timestamps properly, downsampling can skew your time series. InfluxDB is built to handle downsampling using a variety of tools and processes, and it creates multiple backup copies of your data in InfluxDB Cloud, so you don’t accidentally delete a point you need. To keep as much context as possible, InfluxDB also supports nanosecond precision. Here’s some example code for downsampling in Flux:

// The same hourly average as the earlier query, but written back to a
// separate, downsampled bucket instead of being returned to the client.
from(bucket: "sample")
    |> range(start: 2022-01-01, stop: 2022-01-31)
    |> filter(fn: (r) => r["_measurement"] == "foot_traffic")
    |> aggregateWindow(column: "number_of_people", every: 1h, fn: mean, createEmpty: false)
    |> to(bucket: "sample-downsampled")

Of course, time isn’t the only important context to consider, and time series data isn’t the only important kind of data. Details like customer information, location or the version of a machine being used aren’t time series data, but they are important to record. Fortunately, InfluxDB allows you to join these other types of data with time series data to produce deeper insights into your systems and processes.
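
As a minimal sketch of one way to do that, the Flux "sql" package can pull metadata out of a relational database and join it to a time series on a shared column; every name here, the connection string and the assumption that both sides carry a matching "building_id" string are purely illustrative:

import "sql"

// Hypothetical relational table of building metadata.
buildings = sql.from(
    driverName: "postgres",
    dataSourceName: "postgresql://user:password@localhost:5432/example",
    query: "SELECT building_id, building_name FROM buildings",
)

// Foot-traffic series, assumed to carry a matching building_id tag.
traffic = from(bucket: "sample")
    |> range(start: 2022-01-01, stop: 2022-01-31)
    |> filter(fn: (r) => r["_measurement"] == "foot_traffic")

// Attach each building's name to its time series points.
join(tables: {traffic: traffic, meta: buildings}, on: ["building_id"])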

Time is one of the fundamental building blocks of our reality, and understanding its nature helps you better understand the world and get more useful information out of your data.