Moving from Relational to Time Series Databases
By Heather Downing / Developer
Jun 10, 2025
I’ve been building apps with SQL Server for years. Everything worked well until I started dealing with sensor data, stock trade volume, and IoT telemetry. As the volume of time-stamped records grew into the millions, I saw relational databases struggling with workloads they weren’t designed for.
That’s when I explored time series databases. The performance improvements were significant, but what surprised me was the mental shift required.
Relational databases trained me to think: “What objects do I need, and how are they related?”
Time series databases made me ask: “What measurements am I taking and when?”
This fundamental change in thinking transforms how you approach certain data problems. But when does it make sense to switch?
When relational databases start to struggle
The breaking point usually isn’t query speed—it’s when your database starts experiencing lock contention because you’re hammering it with high-frequency updates while trying to read data at the same time. You’ll know you’ve hit it when:
- Dashboards freeze during data ingestion spikes
- Concurrent reads and writes start blocking each other
- Your “last 24 hours” queries take 30+ seconds
- You’re spending more time optimizing indexes than building features
This is where time series databases shine. They’re built for constant writes with occasional reads, not the balanced read/write patterns that relational databases expect. They define the schema on write, which means you don’t have to define the table columns up front.
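Here's a minimal sketch of what schema-on-write looks like in practice, using the InfluxDB C# client that appears later in this post (the URL, token, bucket, measurement, and field names are placeholders I've made up for illustration; client construction follows recent versions of the InfluxDB.Client package):

using InfluxDB.Client;
using InfluxDB.Client.Api.Domain;
using InfluxDB.Client.Writes;

// No CREATE TABLE, no ALTER TABLE: the first write defines the shape,
// and a brand-new field can simply show up on a later write.
using var client = new InfluxDBClient("http://localhost:8086", "my-token");

var point = PointData
    .Measurement("sensor_readings")
    .Tag("device_id", "dev-42")
    .Field("temperature", 21.7)
    .Field("humidity", 0.43) // never declared up front; stored on first write
    .Timestamp(DateTime.UtcNow, WritePrecision.Ms);

await client.GetWriteApiAsync().WritePointAsync(point, "telemetry", "my-org");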
Here’s the thing about time series data: individual rows are meaningless on their own. One temperature reading or GPS coordinate by itself tells you nothing, unless you’re dealing with real-time snapshots where that single moment actually matters (like the latest stock price or current system status). But for most time series use cases, the sheer volume of those rows starts to tell a story you never explicitly architected. That same property makes it a natural format for machine learning to study.
The mental and data model shift
Working with time series data means letting go of relational context. At first, this feels uncomfortable. You lose the immediate understanding of what each piece of data “belongs to” in a business sense.
But something interesting happens as you adjust: patterns start emerging that you never noticed before. Time becomes your primary organizing principle, and you begin seeing trends, cycles, and anomalies that were invisible when the data was scattered across normalized tables.
Patterns emerge from the data itself rather than from the relationships you designed.
Data Model Transformation
This isn’t just a mental model shift—it’s a fundamental data model transformation. Let me show you what I mean:
Relational data model (SQL):
-- Flights table
flight_id | airline  | departure_time      | arrival_time        | origin | destination
----------|----------|---------------------|---------------------|--------|------------
AA1234    | American | 2024-01-15 08:00:00 | 2024-01-15 11:30:00 | JFK    | LAX

-- Flight Metrics table (with foreign key to Flights)
id | flight_id | metric_type | value   | timestamp
---|-----------|-------------|---------|--------------------
1  | AA1234    | altitude    | 28500.0 | 2024-01-15 08:00:00
2  | AA1234    | speed       | 540.0   | 2024-01-15 08:00:00
3  | AA1234    | heading     | 270.0   | 2024-01-15 08:00:00
4  | AA1234    | altitude    | 29200.0 | 2024-01-15 09:00:00
5  | AA1234    | speed       | 560.0   | 2024-01-15 09:00:00
Time series data model (measurement, tags, and fields):
-- Single measurement with tags and multiple fields
measurement: flight_metrics
tags: flight_id=AA1234, airline=American, origin=JFK, destination=LAX

timestamp           | altitude | speed | heading
--------------------|----------|-------|--------
2024-01-15 08:00:00 | 28500.0  | 540.0 | 270.0
2024-01-15 09:00:00 | 29200.0  | 560.0 | 268.0
2024-01-15 10:00:00 | 29800.0  | 555.0 | 269.0
See the difference? In the relational world, we’re building entities with attributes and relationships. In the time series world, we’re capturing measurements at specific moments. This shift in data structure changes how you interact with your data.
The queries make the contrast even sharper. The relational version has to pivot with CASE statements to fight the data model; the time series version states exactly what you mean: give me the average altitude per minute.
Relational SQL:
-- Fighting to get time-based data out of a relational structure
SELECT
    DATE_TRUNC('minute', timestamp) AS minute,
    AVG(CASE WHEN metric_type = 'altitude' THEN value END) AS avg_altitude
FROM flight_metrics
WHERE flight_id = 'AA1234'
GROUP BY DATE_TRUNC('minute', timestamp);
Results:
minute              | avg_altitude
--------------------|-------------
2024-01-15 08:00:00 | 28500.0
2024-01-15 09:00:00 | 29200.0
2024-01-15 10:00:00 | 29800.0
Time series SQL:
-- Direct expression of what you actually want (InfluxDB 3 SQL)
SELECT
    date_bin(INTERVAL '1 minute', time) AS minute,
    AVG(altitude) AS avg_altitude
FROM flight_metrics
WHERE flight_id = 'AA1234'
GROUP BY minute;
Results:
minute              | avg_altitude
--------------------|-------------
2024-01-15 08:00:00 | 28500.0
2024-01-15 09:00:00 | 29200.0
2024-01-15 10:00:00 | 29800.0
The ORM challenge
The biggest adjustment is the ORM paradigm shift. Whatever your platform, you’re used to thinking in objects and relationships. For this example, we will use C# and Entity Framework.
The ORM way (C#):
// Think in entities and relationships
public class FlightData
{
    public int Id { get; set; }
    public string FlightId { get; set; }
    public List<FlightMetric> Metrics { get; set; }
}

// Query with navigation properties
var flightWithMetrics = context.FlightData
    .Include(f => f.Metrics.Where(m => m.Timestamp > yesterday))
    .FirstOrDefault(f => f.FlightId == "AA1234");
The time series way (C#):
// Think in measurements at specific time points
public async Task RecordFlightMetrics(string flightId, double altitude,
    double speed, DateTime timestamp)
{
    var point = PointData
        .Measurement("flight_metrics")
        .Tag("flight_id", flightId)
        .Field("altitude", altitude)
        .Field("speed", speed)
        .Timestamp(timestamp, WritePrecision.Ms);

    await _influxClient.GetWriteApiAsync()
        .WritePointAsync(point, "aviation", "my-org");
}
Can You Use an ORM with a Time Series Database?
Short answer: not really, and you wouldn’t want to. ORMs are designed for modeling objects and their relationships through foreign keys and navigation properties, not for how those objects evolve over time. Time series data is fundamentally different: it’s measurements over time, not related entities.
Instead of fighting this, embrace the directness. Time series databases give you more control and better performance by working directly with the data model rather than abstracting it through object mappings.
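To see what that directness buys you, here’s a sketch of reading data back with the same v2-style InfluxDB.Client used in the write examples, reusing the “aviation” bucket and “my-org” org from above (this one queries with Flux; InfluxDB 3 also accepts plain SQL, as shown earlier):

// Average altitude per minute, read straight from the measurement.
// No entities, no navigation properties: just a question about time.
var flux = @"
from(bucket: ""aviation"")
  |> range(start: -24h)
  |> filter(fn: (r) => r._measurement == ""flight_metrics"" and r.flight_id == ""AA1234"")
  |> filter(fn: (r) => r._field == ""altitude"")
  |> aggregateWindow(every: 1m, fn: mean)";

var tables = await _influxClient.GetQueryApi().QueryAsync(flux, "my-org");

foreach (var record in tables.SelectMany(t => t.Records))
{
    Console.WriteLine($"{record.GetTime()}: {record.GetValue()}");
}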
The trade-offs are real, though.
What you lose moving away from ORMs:
- Rich object models with navigation properties
- Automatic SQL generation and change tracking
- Language-integrated queries (LINQ, Criteria API, QuerySets, etc.)
What you gain with direct time series access:
- Massive performance improvements for time-based queries
- Schema flexibility without migrations
- Purpose-built time aggregation functions
For many applications dealing with high-frequency data, the performance gains outweigh the development convenience you lose. Most time series databases also offer language-specific SDKs (like the InfluxDB 3 SDK for C#) and integrate with data collectors like Telegraf for simplified data ingestion.
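One concrete example of that SDK support: the InfluxDB.Client package ships a background-batching write API that buffers points and flushes them off the hot path. A rough sketch (the batch size and flush interval are illustrative values, and the exact option names may vary by client version):

// GetWriteApi() (unlike GetWriteApiAsync()) buffers points in memory and
// flushes them in batches on a background thread, so callers never block.
using var writeApi = _influxClient.GetWriteApi(
    WriteOptions.CreateNew()
        .BatchSize(5_000)     // points per flush
        .FlushInterval(1_000) // flush at least once a second (milliseconds)
        .Build());

// 'point' is a PointData built exactly as in the earlier examples.
writeApi.WritePoint(point, "aviation", "my-org"); // buffered; returns immediately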
The reality check
Here’s what I learned: most applications don’t need time series databases. If your primary use case is about managing the current state of an object, or you need consistent views across multiple tables, time series databases are likely not the right tools. If your data volume is manageable and you’re not seeing concurrent read/write conflicts, stick with what you know.
Time series databases make sense when:
- High-frequency data ingestion is causing database locks
- You’re building something that acts like a “data historian”
- Patterns over time matter as much or more than the current values
- Storage costs are becoming significant due to data volume
Stick with relational databases when:
- Individual records have a critical business context
- You need complex queries across different data types
- Data volume isn’t causing performance issues
Start with a quick test and hybrid approach
My advice: don’t overthink it. Take your most demanding, high-frequency API endpoint and try routing it to a time series database instead. Set it up in parallel with your existing system and see what happens.
The usefulness becomes clear quickly. Either you’ll immediately see the benefit and start thinking of other places to apply it, or you’ll realize your current approach is working fine.
High-frequency insert pattern – relational approach (C#):
// Instead of this high-frequency insert pattern...
public async Task LogUserActivity(int userId, string action, DateTime timestamp)
{
    var activity = new UserActivity
    {
        UserId = userId,
        Action = action,
        Timestamp = timestamp
    };

    _context.UserActivities.Add(activity);
    await _context.SaveChangesAsync(); // This can cause locks under load
}
High-frequency insert pattern – time series approach (C#):
// Try this approach for high-frequency data
public async Task LogUserActivity(int userId, string action, DateTime timestamp)
{
    var point = PointData
        .Measurement("user_activity")
        .Tag("user_id", userId.ToString())
        .Field("action", action)
        .Timestamp(timestamp, WritePrecision.Ms);

    await _influxClient.GetWriteApiAsync()
        .WritePointAsync(point, "analytics", "my-org"); // Append-only: no lock contention
}
Most real applications end up using both databases. Keep user accounts, orders, and business logic in your relational database; route high-frequency measurements, events, and analytics data to a time series database.
This gives you the best of both worlds: rich relational context where it matters and efficient time-based storage where volume is the challenge.
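In code, the hybrid often lives inside a single service. Here’s a sketch of the split; OrderService, AppDbContext, and the Order type are hypothetical names I’ve invented for illustration:

public class OrderService
{
    private readonly AppDbContext _db;       // relational: current business state
    private readonly InfluxDBClient _influx; // time series: high-frequency events

    public OrderService(AppDbContext db, InfluxDBClient influx)
    {
        _db = db;
        _influx = influx;
    }

    public async Task PlaceOrderAsync(Order order)
    {
        // The order itself is individually meaningful: keep it relational.
        _db.Orders.Add(order);
        await _db.SaveChangesAsync();

        // The event trail is meaningful in aggregate: route it to time series.
        var point = PointData
            .Measurement("order_events")
            .Tag("status", "placed")
            .Field("amount", order.Total)
            .Timestamp(DateTime.UtcNow, WritePrecision.Ms);

        await _influx.GetWriteApiAsync()
            .WritePointAsync(point, "analytics", "my-org");
    }
}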
Making the call
After years of building with SQL databases, I can tell you there’s a clear breaking point. When you start spending more time optimizing database performance than shipping features, that’s your signal.
InfluxDB 3’s SQL support eliminates the learning curve barrier that stopped many of us before. If the problems in this post sound familiar, try it for free to see the difference immediately.