TL;DR InfluxDB Tech Tips - Interesting InfluxDB Timestamp Issues and the INTO Clause

In this weekly post we recap the most interesting InfluxDB timestamp and TICK-stack related issues, workarounds, how-tos and Q&A from GitHub, IRC and the InfluxDB Google Group that you might have missed in the last week or so. This week: negative timestamps, functions with an INTO clause, and COUNT() with fill().

Negative timestamps

Q: I’m entering historical data into InfluxDB. Is it possible to write data with timestamps that occur before the Unix epoch (1970-01-01T00:00:00Z)?

A: Yes! InfluxDB supports negative timestamps. The minimum acceptable timestamp is -9223372036854775806, or 1677-09-21T00:12:43.145224194Z. Here’s an example of a negative timestamp in line protocol; in human-readable form the timestamp is 1969-12-07T18:47:12Z:

> INSERT holidata snow=3 -2092368000000000
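
Assuming the write above succeeds and the CLI is set to display RFC3339 timestamps (the precision rfc3339 command below), querying the measurement back should return the point at its pre-epoch timestamp, something like:

> precision rfc3339
> SELECT * FROM "holidata"
name: holidata
time                  snow
----                  ----
1969-12-07T18:47:12Z  3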

Functions with an INTO clause

Q: When I use DISTINCT() without an INTO clause everything works as expected. When I use DISTINCT() with an INTO clause I end up missing data points in the destination measurement. Is this the expected behavior? What am I missing? My data:

> SELECT * FROM "influxdays"
name: influxdays
time                  value
----                  -----
2016-12-01T19:06:50Z  fa
2016-12-04T19:12:19Z  fa
2016-12-14T19:06:50Z  fa la
2016-12-17T19:06:50Z  fa la la

DISTINCT() without INTO:

> SELECT DISTINCT("value") FROM "influxdays"
name: influxdays
time                  distinct
----                  --------
1970-01-01T00:00:00Z  fa
1970-01-01T00:00:00Z  fa la         <---- There are three distinct values! 
1970-01-01T00:00:00Z  fa la la

DISTINCT() with INTO:

> SELECT DISTINCT("value") INTO "influxdays_copy" FROM "influxdays"
name: result
time                  written
----                  -------
1970-01-01T00:00:00Z  3             <---- It says it wrote three points!

The results in the destination measurement:

> SELECT * FROM "influxdays_copy"
name: influxdays_copy
time                  distinct
----                  --------
1970-01-01T00:00:00Z  fa la la      <---- I only see one point!

A: What you’re seeing is the intended behavior, but I can see that it’s not necessarily what you’d expect.

Functions like DISTINCT() (also see TOP() and BOTTOM()) often return several points with the same timestamp. InfluxDB assumes that points with the same series and timestamp are duplicates, so when you include an INTO clause in the query, InfluxDB simply overwrites what it thinks are duplicate points in the destination measurement. The most recent write wins, which is why only fa la la, the last of the three distinct values, ends up in influxdays_copy.
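
You can reproduce the same overwrite behavior with plain writes. Here's a minimal sketch using a hypothetical dupes measurement: two points written to the same series with the same timestamp collapse into a single point, and the most recent write wins.

> INSERT dupes value="fa" 1480000000000000000
> INSERT dupes value="fa la" 1480000000000000000
> SELECT * FROM "dupes"
name: dupes
time                  value
----                  -----
2016-11-24T15:06:40Z  fa la         <---- the second write replaced the first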

COUNT() with fill()

Q: Most InfluxQL functions report null values for time intervals with no data, but COUNT() reports 0 for time intervals with no data. So how does fill() work with COUNT()?

A: Using fill(<fill_option>) with COUNT() replaces any 0 values with the given fill_option:

Without fill():

> SELECT COUNT("partridge") FROM "gifts" WHERE time >= '2016-12-07T00:00:00Z' AND time <= '2016-12-07T02:00:00Z' GROUP BY time(1h)
name: gifts
time                  count
----                  -----
2016-12-07T00:00:00Z  1
2016-12-07T01:00:00Z  2
2016-12-07T02:00:00Z  0

With fill():

> SELECT COUNT("partridge") FROM "gifts" WHERE time >= '2016-12-07T00:00:00Z' AND time <= '2016-12-07T02:00:00Z' GROUP BY time(1h) fill(30)
name: gifts
time                  count
----                  -----
2016-12-07T00:00:00Z  1
2016-12-07T01:00:00Z  2
2016-12-07T02:00:00Z  30        <--- now the zero is 30
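
If you'd rather drop the empty interval entirely instead of relabeling it, fill(none) should omit intervals with no data from the results. A sketch of the expected output, assuming the same gifts data as above:

> SELECT COUNT("partridge") FROM "gifts" WHERE time >= '2016-12-07T00:00:00Z' AND time <= '2016-12-07T02:00:00Z' GROUP BY time(1h) fill(none)
name: gifts
time                  count
----                  -----
2016-12-07T00:00:00Z  1
2016-12-07T01:00:00Z  2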

For more InfluxDB tips, see our Frequently Asked Questions page and feel free to post your questions in the InfluxDB users group!

What's next

  • Downloads for the TICK-stack are live on our "downloads" page
  • Deploy on the Cloud: Get started with a FREE trial of InfluxDB Cloud featuring fully-managed clusters, Kapacitor and Grafana.
  • Deploy on Your Servers: Want to run InfluxDB clusters on your servers? Try a FREE 14-day trial of InfluxDB Enterprise featuring an intuitive UI for deploying, monitoring and rebalancing clusters, plus managing backups and restores. 
  • Tell Your Story: Over 100 companies have shared their story on how InfluxDB is helping them succeed. Submit your testimonial and get a limited edition hoodie as a thank you.