In this weekly post we recap the most interesting InfluxDB field keys and TICK-stack related issues, workarounds, how-tos and Q&A from GitHub, IRC and the InfluxDB Google Group that you might have missed in the last week or so.
A Case of Missing Data
Q: My writes to InfluxDB are succeeding but my queries don’t return any results. I can see the database I created and the measurements that I’ve written to, but the actual data points seem to be in hiding. Do you have any advice?
A: Without knowing a bit more about your data setup and queries it’ll be hard to identify exactly what’s happening on your end. Here are a couple things that might explain why your queries aren’t returning anything:
The first and most common explanation involves retention policies (RPs). InfluxDB automatically queries data in a database's DEFAULT RP. If your data are stored in an RP other than the DEFAULT RP, InfluxDB won't return any results unless you specify the alternative RP in your query.
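For example, you can fully qualify the measurement to query a non-DEFAULT RP. A sketch, where the database name (mydb), RP name (one_year), and measurement name (h2o_feet) are hypothetical:

```sql
-- Without the qualification, InfluxDB queries mydb's DEFAULT RP.
-- The "database"."retention_policy"."measurement" form targets a specific RP.
SELECT * FROM "mydb"."one_year"."h2o_feet"
```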
Another possible explanation has to do with your query's time range. By default, most SELECT queries cover the time range between 1677-09-21 00:12:43.145224194 and 2262-04-11 23:47:16.854775806 UTC. SELECT queries that also include a GROUP BY time() clause, however, cover the time range between 1677-09-21 00:12:43.145224194 UTC and now(). If any of your data occur after now(), a GROUP BY time() query will not cover those data points. Your query will need to provide an alternative upper bound for the time range if it includes a GROUP BY time() clause and any of your data occur after now().
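A sketch of a GROUP BY time() query with an explicit upper bound that extends past now(); the measurement name, field key, and specific time values here are hypothetical:

```sql
-- Extend the upper time bound so the GROUP BY time() query
-- covers points with timestamps that occur after now()
SELECT MEAN("value")
FROM "sensor_data"
WHERE time >= '2017-01-01T00:00:00Z' AND time <= now() + 30d
GROUP BY time(15m)
```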
The final and more obscure explanation involves schemas with identical field keys and tag keys. If a field key matches a tag key and your query only specifies the key, InfluxDB will assume that you are querying the field key. In some cases, this can make it seem as though your data are missing. You'll need to use the :: syntax to differentiate between the field key and the tag key.
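A sketch of the :: syntax, assuming a measurement (devices) where the key status exists as both a field key and a tag key; those names are hypothetical:

```sql
-- "status"::field selects the field; "status"::tag selects the tag.
-- Plain "status" alone would be treated as the field key.
SELECT "status"::field, "status"::tag FROM "devices"
```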
A Study in *
Q: I'm including GROUP BY * in my query. Could you help me understand the output from my query? How does InfluxDB determine what to GROUP BY?
> SELECT MEAN("rache") FROM "scarlet" GROUP BY *

name: scarlet
tags: level=1, location=
time                  mean
----                  ----
1970-01-01T00:00:00Z  13

name: scarlet
tags: level=3, location=2
time                  mean
----                  ----
1970-01-01T00:00:00Z  2

name: scarlet
tags: level=4, location=5
time                  mean
----                  ----
1970-01-01T00:00:00Z  14
A: GROUP BY * tells InfluxDB to group query results by every tag in the measurement. Here, the scarlet measurement contains three unique tag sets:

- level = 1 and location = '' (an empty tag value)
- level = 3 and location = 2
- level = 4 and location = 5

InfluxDB groups the data points in the scarlet measurement by those unique tag set combinations and calculates the average rache for each of those groups.
The Sign of the FOR
Q: I'm running a Continuous Query (CQ) and I've noticed that it misses some of my data points. My main problem seems to be that my data are arriving late, so data for a specific time interval arrive only after the CQ has already run for that interval. Is there a way to get around this? Here's my CQ:
CREATE CONTINUOUS QUERY in_1888 ON treasure
BEGIN
  SELECT MAX("darts") INTO "items" FROM "pearls" GROUP BY time(15m)
END
A: Yes! There is a way to get around that behavior. You can use the advanced CQ syntax to configure the CQ’s time range. Your current CQ runs every 15 minutes and queries data that fall within the past 15 minutes. The CQ below still runs every 15 minutes, but it queries data that fall within the past 30 minutes:
CREATE CONTINUOUS QUERY in_1888 ON treasure
RESAMPLE FOR 30m
BEGIN
  SELECT MAX("darts") INTO "items" FROM "pearls" GROUP BY time(15m)
END
Check out the CQ documentation for examples of the advanced syntax and for additional information.
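The advanced syntax also lets you change how often the CQ runs with the EVERY clause. A sketch combining both intervals (the CQ name in_1888_alt is hypothetical; the other names come from the example above):

```sql
-- Run the CQ every 30 minutes, and have each run query
-- data from the past 60 minutes to catch late-arriving points
CREATE CONTINUOUS QUERY in_1888_alt ON treasure
RESAMPLE EVERY 30m FOR 60m
BEGIN
  SELECT MAX("darts") INTO "items" FROM "pearls" GROUP BY time(15m)
END
```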
- Downloads for the TICK-stack are live on our “downloads” page
- Deploy on the Cloud: Get started with a FREE trial of InfluxDB Cloud featuring fully-managed clusters, Kapacitor and Grafana.
- Deploy on Your Servers: Want to run InfluxDB clusters on your servers? Try a FREE 14-day trial of InfluxDB Enterprise featuring an intuitive UI for deploying, monitoring and rebalancing clusters, plus managing backups and restores.
- Tell Your Story: Over 100 companies have shared their story on how InfluxDB is helping them succeed. Submit your testimonial and get a limited edition hoodie as a thank you.