Critical maintenance releases for InfluxDB OSS and InfluxDB Enterprise are available now.
If you are running the 1.7.3 release, it is imperative that you read and review the following:
We were recently made aware of a critical defect in the InfluxDB 1.7.3 release. We took swift action to make the necessary corrections and we encourage you to upgrade to the 1.7.4 release as quickly as possible.
The issue specifically affects shards larger than 16GB, which are at high risk of data loss once they go through a full compaction. This typically occurs as shards go cold, meaning no new data is being written into the database for the time range covered by the shard. Our engineering team is performing a post-mortem to determine how this defect was introduced, and a subsequent blog post will highlight what we discover.
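One rough way to gauge your exposure is to check on-disk shard sizes. The sketch below assumes the default OSS data directory layout (database/retention-policy/shard-id); the INFLUX_DATA variable is ours, not an InfluxDB setting, so adjust it for your installation:

```shell
# List any shards larger than 16GB (du -s reports 1K blocks by default).
# INFLUX_DATA is a placeholder for your data directory.
INFLUX_DATA=${INFLUX_DATA:-/var/lib/influxdb/data}
du -s "$INFLUX_DATA"/*/*/* 2>/dev/null \
  | awk '$1 > 16*1024*1024 {print $2}'
```

Any paths printed are shards in the size range described above and worth prioritizing when you plan the upgrade.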
Note: This defect is not present in any other InfluxDB 1.7 release.
This maintenance release of InfluxDB 1.7.4 includes the following fixes:
- Remove copy-on-write when caching bitmaps in TSI
- Use Systemd for Amazon Linux 2
- Revert “Limit force-full and cold compaction size.”
- Fix TagValueSeriesIDCache to use string fields
- Ensure that cached series id sets are Go heap backed
- Allow TSI bitset cache to be configured
InfluxDB Enterprise 1.7.4 does not include any additional fixes.
What if you cannot upgrade immediately?
- You can prevent the full compaction from running by modifying the configuration of InfluxDB. Set compact-full-write-cold-duration = "336h0m0s" in the InfluxDB configuration file (typically /etc/influxdb/influxdb.conf) on all data nodes, then restart the influxd process. This will extend the full compaction cycle to 14 days and is only a temporary workaround.
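For reference, this setting lives under the [data] section of the configuration file; a minimal fragment (assuming the default file layout) looks like:

```toml
# /etc/influxdb/influxdb.conf (apply on all data nodes)
[data]
  # Delay full compactions of cold shards for 14 days.
  # Temporary workaround only; revert after upgrading to 1.7.4.
  compact-full-write-cold-duration = "336h0m0s"
```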
Can I downgrade?
- Yes, downgrading is also an option. However, it effectively results in the same amount of work (or more) as applying the new release.
For community members, InfluxDB 1.7.4 can be downloaded here.
For our InfluxDB Enterprise customers, log in to the InfluxDB Enterprise portal and download the binaries from there.