Metrics, Logs and Traces: More Similar Than They Appear?

This article was originally published in The New Stack and is reposted here with permission.

Metrics, logs and traces require different approaches for storage and querying, making it a challenge to use a single solution for all three. But InfluxDB is working to consolidate them into one.

Time series data has unique characteristics that distinguish it from other types of data. But even within the scope of time series data, there are different types of data that require different workloads. Metrics, traces and logs are all different, making it a challenge to design a single solution that can handle all three data types.

While the data structure for all three types is generally the same, the query patterns for each workload differ. Systems designed to store time series data don’t all handle those different query patterns the same way. We see this challenge reflected in the time series marketplace, where there are three classes of software for metrics, traces and logs.

Solutions like InfluxDB, Grafana, Prometheus and others can collect, store, analyze and visualize metric data. Jaeger is available for end-to-end distributed tracing, and recent updates to InfluxDB make it a viable option as well. For logs, a common solution is the so-called ELK stack, which consists of Elasticsearch, Logstash and Kibana, but a solution like Loki can handle logs too.

Logs are the most challenging type of time series data to work with, so let’s dive into why that’s the case.

Data model

As mentioned above, all time series data can use the same data model. InfluxDB’s line protocol provides a helpful example.

measurement,tag1=value,tag2=value field1=value,field2=value timestamp

Here, the measurement functions like the name of a table, and the timestamp is exactly that. The tags and fields are key-value pairs, where tags function as metadata and fields represent the data you want to collect, store, analyze and/or visualize.
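
As a concrete example (with hypothetical tag and field names), a CPU measurement with two tags and two fields might look like this in line protocol:

cpu,host=host1,region=us-east usage_user=23.4,usage_system=5.1 1675710426464848718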

A series is the unique combination of measurement and tag values, so the more tags you have, the more unique series you have. We call this cardinality, and when cardinality gets too high, it can affect performance. This issue is typical for all approaches based on a log-structured merge-tree (LSM), which is a common solution for metrics systems.
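
To make cardinality concrete, here is a minimal Python sketch (hypothetical tag values, not InfluxDB client code) that counts the series produced by two modest tags:

from itertools import product
# Hypothetical tags for a "cpu" measurement: 3 hosts x 2 regions.
hosts = ["host1", "host2", "host3"]
regions = ["us-east", "us-west"]
# A series is the unique combination of measurement and tag values.
series = {("cpu", host, region) for host, region in product(hosts, regions)}
print(len(series))  # 6 unique series; each additional tag multiplies this count

Add a tag whose values are unbounded, such as a request ID, and that multiplication quickly produces millions of series.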

Parsing log data

Let’s say that we’re using log data to debug an application. The raw output from a log can look like this:

"level=debug msg=\"Not resuming any session\" log.target=rustls::client::hs log.module_path=rustls::client::hs log.file=/usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/rustls-0.20.8/src/client/hs.rs log.line=127 target=\"log\" time=1675710426464848718\n"

If we look closely, we can start to parse this data into key-value pairs that we can then use in our data model.

level=debug
log.target=rustls::client::hs
log.module_path=rustls::client::hs
log.file=/usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/rustls-0.20.8/src/client/hs.rs
log.line=127
target="log"
msg="Not resuming any session"
time=1675710426464848718
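
Parsing like this is easy to automate. The sketch below is a simplified Python example (not InfluxDB tooling, and it assumes well-formed logfmt-style input) that turns a raw line into key-value pairs:

import shlex
# A shortened version of the raw log line above.
raw = ('level=debug msg="Not resuming any session" '
       'log.target=rustls::client::hs log.line=127 '
       'target="log" time=1675710426464848718')
# shlex.split keeps the quoted msg value together; each token is then key=value.
pairs = dict(token.split("=", 1) for token in shlex.split(raw))
print(pairs["msg"])   # Not resuming any session
print(pairs["time"])  # 1675710426464848718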

At this point, we can start to make some decisions about which keys should be tags (metadata) and which are fields. Querying against tags enables developers to slice and dice data along almost any dimension. But the more tags that exist, the more resources it takes to run each query, which ultimately affects performance.

If we look at another log file, the problem starts to become clearer. Here we already parsed the log file as key-value pairs.

level=debug
msg="Processing request"
request="Request { method: GET, uri: http://10.144.148.50:8080/metrics, version: HTTP/2.0, headers: {\"user-agent\": \"Prometheus/2.38.0\", \"accept\": \"application/openmetrics-text;version=1.0.0,application/openmetrics-text;version=0.0.1;q=0.75,text/plain;version=0.0.4;q=0.5,*/*;q=0.1\", \"accept-encoding\": \"gzip\", \"x-prometheus-scrape-timeout-seconds\": \"10\", \"x-forwarded-proto\": \"http\", \"x-request-id\": \"001052e8-d898-4b4b-9e21-0b0a4918970a\"}, body: Body(Streaming) }"
target="ioxd_common::http"
location="ioxd_common/src/http/mod.rs:121"
time=1675710425927921595

If we compare the two log files, we see four tags in common: level, msg, target and time. There are also several tags unique to one log or the other: log.target, log.module_path, log.file, log.line, request and location. As a result, each log becomes its own series because it contains a unique tag combination. When we consider how many individual log lines an application produces while we debug it, it’s easy to see how complicated querying that data becomes.
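
A small Python sketch (using the tag keys listed above) makes the overlap, and the lack of it, explicit:

# Tag keys parsed from the two example log lines above.
log_a = {"level", "msg", "target", "time",
         "log.target", "log.module_path", "log.file", "log.line"}
log_b = {"level", "msg", "target", "time", "request", "location"}
print(sorted(log_a & log_b))  # shared keys: level, msg, target, time
print(sorted(log_a ^ log_b))  # keys unique to one log; these push every log into its own series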

One contributing factor to this situation is a lack of naming consistency for attributes within an application. When debugging an application, each process may need different information, so developers naturally create attributes for that specific process, and each attribute becomes another key. Expanding this practice across an entire application results in thousands of different keys because every process is doing something different.

To complicate matters even more, an attribute like “error” could have a tag key of e, error, err, err_code or any other descriptive permutation a developer can come up with. Sure, it’s possible in theory to clean and standardize an attribute like error, but that also creates a lot of work and requires you to know every permutation to ensure nothing slips through the cracks.
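
If you do attempt that standardization, the cleanup code ends up looking something like this hypothetical Python sketch, and it only catches the aliases you already know about:

# Hypothetical alias set; any spelling a developer invents that we miss slips through.
ERROR_KEY_ALIASES = {"e", "error", "err", "err_code"}
def normalize_keys(pairs: dict) -> dict:
    """Rename any known error-key alias to a single canonical 'error' key."""
    return {("error" if key in ERROR_KEY_ALIASES else key): value
            for key, value in pairs.items()}
print(normalize_keys({"err": "timeout", "level": "warn"}))
# {'error': 'timeout', 'level': 'warn'}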

In short, logs not only generate a lot of data, but also generate a lot of unique data. That means the database likely needs to store the data differently than other types of time series data, and query patterns must account for the shape of log data.

Comparing logs and traces

Traces can create cardinality issues too. However, there are some key differences when we think about tracing.

The total number of keys from a trace tends to be more consistent. This is because there are a finite number of instrumented points within an application, and the output of those points contains fewer programmer-defined elements, so it is more structured. In other words, the tag keys are more likely to be the same, such as spanID, but their values, like trace and span IDs, are unbounded.

Unbounded tag values also contribute to higher cardinality. However, the key difference between traces and logs is that traces are more likely to have one unbounded element (tag values), while logs have two unbounded elements (tag keys and tag values). Metrics, by contrast, tend to have both bounded tag keys and tag values. Each of these combinations requires different approaches for storage and querying, which is why it’s such a challenge to use a single solution for all three data types.
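
As an illustration (a hypothetical span generator, not real tracing instrumentation), the tag keys below stay fixed across the whole application while the trace and span ID values never repeat:

import uuid
def new_span(service: str, operation: str) -> dict:
    return {
        "trace_id": uuid.uuid4().hex,      # unbounded value: every trace is new
        "span_id": uuid.uuid4().hex[:16],  # unbounded value: every span is new
        "service": service,                # bounded, like a metric tag
        "operation": operation,            # bounded, like a metric tag
    }
print(new_span("checkout", "charge_card"))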

Fortunately, there is hope on the horizon. InfluxDB has long handled metrics well, but with the release of its new database core, InfluxDB IOx, it can now manage high cardinality tracing data, in addition to metric and raw event data, in a single solution. Efforts to consolidate the three classes of time series applications into one continue as the team behind InfluxDB sets its sights on logs.