Simplifying InfluxDB: Shards and Retention Policies

I recently did a webinar on an Introduction to InfluxDB and Telegraf, and in preparing for it, I came to the woeful realization that there are still a number of concepts about InfluxDB that remain quite mysterious to me. Now, if you’re anything like me, databases and data storage don’t come naturally. (If they do, it never hurts to have a bit of review.) I thought I had a pretty thorough understanding of InfluxDB as a time series data store, but now I see that there’s a lot more to it than meets the eye. As it turns out, the inner workings of InfluxDB are pretty mysterious to some of our community as well - hence this blog post. In this guide (of sorts), we’ll try to make sense of some of the more enigmatic concepts around InfluxDB - specifically retention policies, shard groups, and shards. We’ll look at what they are and how they relate to one another.

It's about time we paid attention to TSDBs.

Before We Jump In

If you’re new to the Time Series Database world, or if this is the first time you’re reading about InfluxDB, you may want to do a little light reading in the InfluxDB documentation to gain some contextual knowledge before diving in.

Retention Policies

Let’s tackle retention policies first. Time series data by nature piles up quickly, and it’s often helpful to discard old data once it’s no longer useful. Retention policies offer a simple and effective way to do this: a retention policy amounts to an expiration date on your data. Once the data has “expired,” it is automatically dropped from the database, an action commonly referred to as retention policy enforcement. When it comes time to drop that data, however, InfluxDB doesn’t just drop one data point at a time; it drops an entire shard group.

Retention policies drop entire groups of data, not just a single data point.
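
To make that concrete, here’s a minimal sketch of creating a retention policy in InfluxQL (the database and policy names are hypothetical):

CREATE RETENTION POLICY "thirty_days" ON "telemetry" DURATION 30d REPLICATION 1 DEFAULT

With a policy like this, any shard group holding data older than 30 days becomes eligible to be dropped the next time retention policy enforcement runs.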

Shard Groups

A shard group is a container for shards, which in turn contain the actual time series data (more on that in a minute). Every shard group has a corresponding retention policy, and all shards within a single shard group adhere to the same retention policy. Additionally, every shard group has a shard group duration, which dictates the window of time each shard group spans (the time interval). The shard group duration can be specified when configuring the retention policy; if nothing is specified, InfluxDB picks a default based on the retention policy’s duration (7 days for the default infinite retention policy).
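
If the default doesn’t suit your data, the shard group duration can be set explicitly. A small, hedged example in InfluxQL (hypothetical names again):

CREATE RETENTION POLICY "two_weeks" ON "telemetry" DURATION 2w REPLICATION 1 SHARD DURATION 1d

Here each shard group spans one day, so a two-week retention policy will hold roughly fourteen shard groups at any given time.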

Shards

When we think about a typical Time Series Database, the sheer volume of time series data being stored and queried merits a different approach to how that data is organized. This is where shards come in. Shards are natural containers for time series data: sharding the data within InfluxDB is a highly scalable way to boost throughput and overall performance, especially considering that the data in a Time Series Database will in all likelihood grow over time.

Shards contain temporal blocks of data and are mapped to an underlying storage engine database. The InfluxDB storage engine is called TSM (Time-Structured Merge Tree) and is remarkably similar to an LSM tree. TSM files contain the encoded, compressed time series data and are organized within shards.

Each shard belongs to exactly one shard group, and its time interval falls within that shard group’s time interval. It’s quite possible to have a single shard per shard group, as we see in the open-source version of InfluxDB, or multiple shards per shard group, as often occurs in a multi-node cluster.
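
You can actually see this hierarchy on disk. On a default single-node Linux install, each shard is a numbered directory of TSM files nested under its database and retention policy (the path and names below are illustrative):

/var/lib/influxdb/data/<database>/<retention_policy>/<shard_id>/
/var/lib/influxdb/data/telemetry/autogen/2/000000004-000000002.tsm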

Looping Back to RPs

Looping back to retention policies for a moment, let’s take a closer look at how things fit together. When you create a database in InfluxDB, a default retention policy called autogen is automatically created for that database. If you choose not to modify it, its duration is infinite, and its shard group duration defaults to 7 days - meaning your data will be stored in one-week time windows. An infinite retention policy doesn’t mean your data sits in one endless bucket; it’s still organized into those 7-day shard groups. It just means no shard group ever expires, so retention policy enforcement is effectively disabled. On the other hand, the minimum duration you can set for a retention policy is one hour.
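
You can check all of this from the influx CLI. On a freshly created database (called telemetry here purely for illustration), the output looks roughly like this - note the 0s duration, which means infinite, and the 168h (7-day) shard group duration:

> SHOW RETENTION POLICIES ON "telemetry"
name    duration shardGroupDuration replicaN default
----    -------- ------------------ -------- -------
autogen 0s       168h0m0s           1        true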

Another way to think about it is that a retention policy is like a bucket that shard groups live in. Once the retention policy’s expiration kicks in, any shard group whose time interval falls entirely outside the retention window gets thrown out. So even as time passes, you’ll still have the same amount of data available to you - it just shifts forward in time. For example, if I set my retention policy to one year, I’ll always have a year’s worth of data available to me (once I hit that first year mark).

As you can see, shard groups (and by association, shards) are closely related to retention policies; if a retention policy has data, it will have at least one shard group. Every data point - a measurement with any number of field values and tags, associated with a particular point in time - must be written to a database and a retention policy. It’s important to remember here that a database can have more than one retention policy, and that retention policy names are unique per database.
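
When writing data from the influx CLI, for example, you can target a specific retention policy with INSERT INTO; if you omit it, the point lands in the database’s default policy. A quick sketch with made-up names:

> USE telemetry
> INSERT INTO thirty_days cpu,host=server01 usage_idle=93.5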

OSS vs. Enterprise

Things get a little hairy when we start looking at shards, shard groups, and retention policies in InfluxDB Enterprise as compared to the open-source version of InfluxDB. If we’re using the open-source version, we’ve only got a single-node instance of InfluxDB, which means we don’t need to worry about replicating our data - that feature isn’t available. So the shard group ends up having only one shard within it, effectively making the two the same thing (another way to think about it is that the shard group layer becomes redundant). This is because you don’t need to spread the data evenly across multiple nodes - you’ve only got one node! When the retention policy kicks in, you drop the whole shard group.
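
You can confirm this on a single-node instance with SHOW SHARDS, where every shard group maps to exactly one shard. Trimmed, hypothetical output:

> SHOW SHARDS
name: telemetry
id database  retention_policy shard_group start_time           end_time             expiry_time          owners
-- --------  ---------------- ----------- ----------           --------             -----------          ------
1  telemetry autogen          1           2023-01-02T00:00:00Z 2023-01-09T00:00:00Z 2023-01-09T00:00:00Z
2  telemetry autogen          2           2023-01-09T00:00:00Z 2023-01-16T00:00:00Z 2023-01-16T00:00:00Z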

With InfluxDB Enterprise, on the other hand, you can have multiple node instances of InfluxDB. If you want to know more about this clustering capability, I recommend reading this blog, which covers the basics. Having more than one node in a cluster is the reason shard groups exist. We needed a way to spread the data evenly across multiple nodes, while still belonging to the appropriate database, retention policy, and time interval. In Enterprise, a shard group can have (and usually does have) a set of shards within it that all share the same time span. Each shard in the shard group would contain a different subset of time series.

We also see the replication factor come into play with the Enterprise version of InfluxDB. The replication factor is the number of copies of the data you want to keep, and you can specify it in the database’s retention policy. Two copies of the same data never end up on the same node: replicas of a shard live within the same shard group but on separate nodes. That way, if one node goes down, you still have a copy of the data on another node.
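
In Enterprise, the replication factor is just another clause on the retention policy. For example, on a hypothetical two-data-node cluster:

CREATE RETENTION POLICY "one_year" ON "telemetry" DURATION 52w REPLICATION 2 SHARD DURATION 1w

With REPLICATION 2, each shard in a shard group is written to two different data nodes, so losing one node doesn’t lose any data.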

Seeing It in Action

To help this sink in, let’s consider all of this with a few examples:

For the open-source version, remember we’ve only got a single-node instance, so a shard group would have only one shard within it, like so:

Data Points
---------------
series_a t0
series_a t4

series_b t2
series_b t6

series_c t3
series_c t8

series_d t7
series_d t9

Shard Group Z (t0 - t10)
-------------
Shard 1 (series_a, series_b, series_c, series_d)

From the simplified example above, you see we have a shard group (Z) with a time span from t0 to t10 and four series (a, b, c, and d). Because we don’t have to worry about distribution here (spreading the data evenly across various nodes), all the series are contained within one shard (Shard 1).

For the Enterprise version, we can have more than one node, so things get a little more complicated. If we had a two-node cluster, for example, with a replication factor of 1:

Data Points
---------------
series_a t0
series_a t4

series_b t2
series_b t6

series_c t3
series_c t8

series_d t7
series_d t9

Shard Group Z (t0 - t10)
-------------
Shard 1 (series_a, series_c) (Node A)
Shard 2 (series_b, series_d) (Node B)

You can see we still have shard group Z with a time span from t0 to t10, but this shard group contains two shards. Because the replication factor is only 1 (i.e. only one copy of the data), distribution takes priority, so half the data is stored on Node A and the other half on Node B. This spreads the data evenly across the two nodes and lessens the possibility of performance issues. However, if we increase the replication factor to 2, replication takes precedence over distribution and the outcome looks quite similar to the open-source example. See below:

Data Points
---------------
series_a t0
series_a t4

series_b t2
series_b t6

series_c t3
series_c t8

series_d t7
series_d t9

Shard Group Z (t0 - t10)
-------------
Shard 1 (series_a, series_b, series_c, series_d) (Node A, Node B)

Now we’re back to one shard within one shard group (Z), but it exists on both nodes, due to the replication factor. Let’s add a retention policy to the mix now.

Let’s say we’ve got our database all set up with a retention policy of 1 day (24 hours) and our shard group duration set to the recommended 1-hour time interval. If this is the OSS version of InfluxDB, each shard group will contain one shard, and that shard will house all series for its 1-hour time span, similar to what we saw in our first example:

Shard Group Z (t0 - t60)
-------------
Shard 1 (series_a, series_b, series_c, series_d)

Of course, for every hour in the day, a new shard group will be created spanning 60 minutes, and the number of shard groups will continue to increase until we hit the 25th hour (after one full day has passed). When the retention policy is enforced, the oldest shard group will have passed its expiration point, so that entire shard group gets dropped. This continues on the hour, every hour, so at any given time we’ll have roughly one day’s worth of data available.
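
The setup for this example can be sketched in a single statement when creating the database (hypothetical names once more):

CREATE DATABASE "telemetry" WITH DURATION 1d REPLICATION 1 SHARD DURATION 1h NAME "one_day"

Running SHOW SHARD GROUPS periodically would then show a new one-hour group appearing each hour and the oldest group disappearing once it falls outside the 24-hour retention window.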

Making Sense of It All

To summarize:

  • An InfluxDB instance can have 1 or more databases.
  • Each of those databases can have 1 or more retention policies.
  • You can specify the retention interval, shard group duration, and replication factor in your retention policy.
  • Each retention policy can have 1 or more shard groups (as long as there's data).
  • Each shard group can have 1 or more shards (always 1 shard for the OSS version).
  • Shards contain the actual data.

I hope this post has helped to clear things up a little, but if you’re still feeling confused (trust me, I know the feeling well), please reach out to us on Twitter @influxDB and @mschae16 and we can try to answer all your questions. Or check out our awesome community site where everyone comes together to help each other out with debugging and making sense of the magical and oft-times mysterious InfluxData platform.