InfluxDB’s Strengths and Use Cases Applied in Data Science


This article was written by a Sr. Data Scientist at Infosys.

Infosys is a global IT leader, headquartered in India, with over 200,000 employees and a focus on digital transformation, AI/ML, and analytics. Our organization faces several data challenges: proactive anomaly detection, triaging incidents as data volume grows, and maintaining high availability and SLAs for near-100% uptime. Given the industry landscape and our current use cases, a time series database solution was vitally important.

Why we chose InfluxDB

Developers increasingly use time series databases like InfluxDB, instead of MySQL-style relational databases, to store time series data, because time series databases are built for continual monitoring of that data. InfluxDB is scalable, supports IoT solutions, and provides analytics around memory, traffic, and CPU to support capacity planning.

We chose InfluxDB to handle our data challenges because of its wide recognition in the industry and its quick adoption time. I first learned about InfluxDB through the virtual summit events hosted by InfluxData, the company behind InfluxDB. These events helped us connect our challenges to InfluxDB's capabilities and introduced us to its strong user community. Unlike its competitors, InfluxDB allows for data portability, data log access, and anonymization of data to meet compliance requirements.

InfluxDB 2.2 is recognized as an industry leader in several technology categories, from Database as a Service (DBaaS) to open source to time series databases and observability. Its power comes from its ability to scale elastically and from sophisticated operations such as the pipe-forward operator, which passes one function's output to the next function as input. InfluxDB offers more granularity than some alternative databases. This is especially important for metrics like CPU, RAM, I/O, and disk space utilization, which help our organization with capacity planning and systems engineering. InfluxDB also handles greater average throughput, uses less disk space, and is designed for virtualized cloud infrastructure.
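The pipe-forward idea, where each step receives the previous step's output, can be sketched in plain Python. This is only an analogy for Flux's |> operator, not Flux itself; the functions and sample values are made up for illustration:

```python
from functools import reduce

def pipe(data, *funcs):
    """Pass `data` through each function in turn, like Flux's |> operator."""
    return reduce(lambda acc, f: f(acc), funcs, data)

# Each step takes the previous step's output as input.
readings = [61.0, 58.5, 72.3, 65.1]
result = pipe(
    readings,
    lambda rs: [r for r in rs if r < 70.0],  # filter out spikes
    lambda rs: sum(rs) / len(rs),            # aggregate to a mean
    lambda m: round(m, 1),                   # shape the final result
)
```

The appeal of the operator is the same in both languages: a query reads top to bottom as a sequence of transformations rather than as nested function calls.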

The wealth of InfluxDB's resources around APIs and metrics made it seamless for us to begin working with InfluxDB. The APIs in particular are a great resource for development in CI/CD pipelines, and can be combined with data in Splunk via sharded queries to support high availability and throughput. InfluxDB also has group functions that aggregate along series and create new group keys based on designated properties. It's important to understand both InfluxDB's line protocol, the text format for writing points, and Flux's data model, which consists of tables, streams of tables, columns, rows, and group keys. Flux data structures usually differ from the source data format, which is often columnar tables. Flux operates on streams of tables, taking a stream of tables as input and operating on each table individually.
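Line protocol, the format in which points are written to InfluxDB, is worth seeing concretely: each line is a measurement, a comma-separated tag set, a field set, and a nanosecond timestamp. The sketch below builds one such line in Python; it is a simplification that skips the escaping rules for spaces, commas, and string field values:

```python
def to_line_protocol(measurement, tags, fields, timestamp_ns):
    """Render one point as InfluxDB line protocol:
    <measurement>,<tag_set> <field_set> <timestamp>
    (Simplified: real line protocol also escapes special characters.)"""
    tag_set = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_set = ",".join(f"{k}={v}" for k, v in fields.items())
    return f"{measurement},{tag_set} {field_set} {timestamp_ns}"

line = to_line_protocol(
    "cpu", {"host": "server01"}, {"usage_user": 12.5}, 1672531200000000000
)
# e.g. 'cpu,host=server01 usage_user=12.5 1672531200000000000'
```

Once written this way, the data is what Flux later presents back to you as streams of columnar tables.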

With this data model comes Flux, which can handle network telemetry data at almost any interval and speed, depending on the volume of data selected. Users can retrieve a specified amount of data from the source, filter it based on time or column values, process it, and shape it into results. Users can also do this within the UI, which lets them customize queries and dashboards directly and helps with data reduction and scalability. A database cluster can also be created easily and configured for the desired throughput. After a user aggregates metrics, the next focus is processing the data with InfluxDB tasks. The task engine offers many ways to analyze, modify, and act on streaming data. For example, to export data in JSON format for consolidation into a .zip file, a task can send each record to a URL endpoint on a REST API using the functions json.encode() and http.post(), either in a Flux script or in the user interface.
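The export pattern (encode each record as JSON, then POST it to a REST endpoint) can be sketched outside of Flux as well. The Python below mirrors the json.encode() + http.post() combination; the endpoint URL is a placeholder, and the `post` callable stands in for a real HTTP POST so the logic can be shown without a network:

```python
import json

def export_records(records, post):
    """Encode each record as JSON and hand it to `post`, mirroring the
    json.encode() + http.post() pattern in an InfluxDB task.
    `post` stands in for an HTTP POST to a REST endpoint."""
    responses = []
    for record in records:
        payload = json.dumps(record)
        # "https://example.com/api/records" is a hypothetical endpoint.
        responses.append(post("https://example.com/api/records", payload))
    return responses

# Stub "endpoint" that just records what it receives.
received = []
def fake_post(url, body):
    received.append(body)
    return 204

codes = export_records([{"host": "server01", "cpu": 12.5}], fake_post)
```

In a real task, the stub would be an actual HTTP call, and the task engine would run the script on a schedule.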

Some of the most common uses of InfluxDB at Infosys include monitoring time series data, forecasting, creating alerts, and API development. There is also a focus on monitoring infrastructure networks, and on tying data reconciliation issues to billing, as with a typical enterprise monitoring tool.

InfluxDB also helps with data checks, either using a threshold check (assigning a status to a value based on whether it is above, below, inside, or outside defined levels) or a deadman check (assigning a status when a series or group doesn't report within a specified amount of time). Queries can then be configured based on bucket value, measurement field, and tag sets. Checks help with monitoring data and let users focus on the source of the data using plugins.
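The two check types reduce to simple predicates. The Python sketch below shows the logic only; the specific levels and timeout are example values, not defaults from InfluxDB:

```python
def threshold_check(value, crit=90.0, warn=75.0):
    """Threshold check: assign a status based on defined levels.
    The crit/warn levels here are illustrative, not InfluxDB defaults."""
    if value >= crit:
        return "crit"
    if value >= warn:
        return "warn"
    return "ok"

def deadman_check(last_seen_s, now_s, timeout_s=300):
    """Deadman check: flag a series that hasn't reported
    within `timeout_s` seconds (example timeout)."""
    return "dead" if now_s - last_seen_s > timeout_s else "ok"
```

In InfluxDB, the same predicates are configured against a bucket, measurement, field, and tag set, and the resulting statuses feed notification rules.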

Telegraf is a plugin-driven agent that supports four categories of plugins: input, aggregator, processor, and output. Infosys uses plugins across projects, notably the Docker Engine and Syslog input plugins.
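A minimal Telegraf configuration wiring those two input plugins to an InfluxDB 2.x output looks like the sketch below. The endpoint addresses, token, organization, and bucket are placeholders to adapt to your environment:

```toml
# Example Telegraf configuration (all values are placeholders).

[[inputs.docker]]
  # Collect container metrics from the local Docker Engine.
  endpoint = "unix:///var/run/docker.sock"

[[inputs.syslog]]
  # Listen for syslog messages over TCP.
  server = "tcp://:6514"

[[outputs.influxdb_v2]]
  urls = ["http://localhost:8086"]
  token = "$INFLUX_TOKEN"
  organization = "example-org"
  bucket = "telegraf"
```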

InfluxDB for real-time monitoring

At Infosys, InfluxDB justifies its value by helping with proactive monitoring of our organization, automation around auto-scaling, remediation of data errors with a focus on traceability and data lineage, and alignment on KPIs so we can work holistically across multiple services in the system. Before we adopted InfluxDB, these capabilities were scattered across the organization: they required multiple tools, were time-intensive, lacked a clear sense of accomplishment, and were not always well communicated to other departments.

InfluxDB's task options help with the development of metrics. InfluxData's data collection agent, Telegraf, works alongside the Flux language to assist with retention and variable manipulation, and it collects and reports metrics. It has a vast library of input plugins and a "plug and play" architecture. Telegraf reliably sends data to InfluxDB and can be deployed in three quick steps: install the latest version, configure the API token, and start Telegraf from the command line. InfluxDB's HTTP API is used both for storing data and for accepting input over the same protocol, which is especially helpful for getting started quickly.

InfluxDB includes several visualization options within the UI. Users can create dashboards that display data in graphs, heat maps, histograms, tables, and more. Some especially helpful visualization options include formatting line colors, shading areas below lines, selecting data to display with a cursor, and setting thresholds to change the color of a single statistic.

The UI screenshot below shows a query developed in Flux. It operates on a specified time range, the past hour in this case, filters by the CPU measurement, and calculates the mean. This block of code yields the mean CPU utilization for the past hour.

[Screenshot: query in Flux]
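The same logic, independent of the screenshot, can be sketched in plain Python: keep only the samples whose timestamps fall in the past hour, then average them. The sample data here is invented for illustration:

```python
def mean_last_hour(samples, now_s):
    """Mean of readings from the past hour, mirroring a Flux pipeline of
    range(start: -1h) |> filter(...) |> mean().
    `samples` is a list of (timestamp_seconds, value) pairs."""
    window = [v for t, v in samples if now_s - 3600 <= t <= now_s]
    return sum(window) / len(window)

# (seconds, cpu %) -- only the last two fall within the past hour.
samples = [(100, 99.0), (3800, 40.0), (4000, 60.0)]
avg = mean_last_hour(samples, now_s=4000)
```

In the UI, the equivalent query is assembled visually by picking the bucket, measurement, time range, and aggregate function.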

One area where I plan to continue developing with InfluxDB is DevOps within Kubernetes. We currently manage the GUI with Linux-based commands. A key goal for InfluxDB in our environment is that it not be just another tool, but one that helps us address security events, application errors, and overlapping applications, now that we have a larger team working collaboratively.