A Practical Guide to SCADA Security

Critical infrastructure is under siege. The systems that control our power grids, water treatment plants, and oil pipelines weren’t designed for a connected world. This post covers what security measures teams need to understand and how time series monitoring can help turn SCADA’s weaknesses into a security advantage.

The stakes for SCADA security have never been higher

Somewhere right now, a programmable logic controller is opening a valve, adjusting a turbine’s speed, or regulating the chlorine levels in a city’s drinking water. These actions are orchestrated by Supervisory Control and Data Acquisition (SCADA) systems. They run power grids, water treatment facilities, oil and gas pipelines, manufacturing plants, and transportation networks.

For decades, these systems operated in relative obscurity. They sat on isolated networks, spoke proprietary protocols, and were managed by operational technology (OT) engineers who rarely crossed paths with the IT security team.

The convergence of IT and OT networks, driven by the demand for remote access, operational analytics, and cost efficiency, has dragged SCADA systems into a threat landscape they were never built to survive. The results have been dramatic. In 2015 and 2016, coordinated cyberattacks took down portions of Ukraine’s power grid, leaving hundreds of thousands without electricity. In 2021, the Colonial Pipeline ransomware attack shut down fuel distribution across the U.S. East Coast, triggering panic buying and fuel shortages.

These aren’t theoretical risks. They’re documented events, and they only represent the incidents that became public. The reality is that SCADA systems are being probed, scanned, and targeted every day, and many operators lack the visibility to even know it’s happening.

SCADA security challenges

Securing SCADA and industrial control systems is fundamentally different from securing a corporate IT environment. The assumptions, priorities, and constraints are almost inverted.

Availability Over Confidentiality

In IT security, the classic triad is confidentiality, integrity, and availability, usually prioritized in roughly that order. In OT, the priorities flip. A power plant cannot tolerate downtime. A water treatment facility cannot go offline for a patch cycle. The consequences of a disrupted industrial process aren’t a lost spreadsheet; they’re potential physical harm, environmental damage, or loss of life. This means that many standard IT security practices, such as aggressive patching, frequent reboots, and network scanning, can be dangerous or even impossible in OT environments.

Legacy Systems and Long Lifecycles

SCADA components often have operational lifecycles of 20 to 30 years. It’s not uncommon to find PLCs running firmware from the early 2000s, human-machine interfaces (HMIs) on Windows XP, or historians on unsupported database platforms. These systems were engineered for reliability and determinism, not security. Replacing them is expensive and operationally risky, so they persist despite the vulnerabilities.

Protocols Without Security

Modbus, DNP3, and OPC Classic are the workhorses of industrial communication, but they were designed in an era when network isolation was considered sufficient protection. Modbus, for instance, has no authentication, no encryption, and no way to verify the identity of a device sending commands. These protocols are deeply embedded in operational infrastructure and cannot be easily replaced.
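The absence of security is easy to see at the byte level. Below is a minimal Python sketch (standard library only) that constructs a Modbus/TCP "Write Single Register" frame. Note what is missing: no credentials, no signature, no session token; any device that can reach TCP port 502 on a PLC can send it.

```python
import struct

def modbus_write_single_register(transaction_id: int, unit_id: int,
                                 register: int, value: int) -> bytes:
    """Build a Modbus/TCP 'Write Single Register' (function code 0x06) frame.

    The MBAP header carries only a transaction ID, a protocol ID (always 0),
    the remaining byte count, and a unit ID -- nothing identifies or
    authenticates the sender.
    """
    pdu = struct.pack(">BHH", 0x06, register, value)   # function code, address, value
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu

# 12 bytes total: a 7-byte MBAP header plus a 5-byte PDU
frame = modbus_write_single_register(transaction_id=1, unit_id=1,
                                     register=0x0010, value=500)
```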

The Air Gap Myth

Many organizations still believe their OT networks are air-gapped. In practice, true air gaps are rare. Remote access solutions, vendor support connections, shared file servers, USB drives, and even cellular modems on RTUs create pathways between networks.

Key strategies for SCADA security

Effective SCADA security is layered, OT-aware, and built around the operational realities of industrial environments. There is no single solution, but a combination of strategies dramatically reduces risk.

Network Segmentation

The foundation of SCADA security is proper network architecture. At a minimum, there should be a demilitarized zone (DMZ) between the corporate IT network and the OT network, with no direct traffic flowing between them. Within the OT network, further segmentation between supervisory systems, control systems, and field devices helps limit lateral movement.

Asset Inventory and Visibility

You cannot protect what you don’t know exists. Many organizations lack a complete, accurate inventory of their OT assets, including PLCs, RTUs, HMIs, historians, network switches, and communication links. Passive network discovery tools designed for OT environments can build and maintain this inventory without disrupting operations.

Access Control and Authentication

Every access point into the OT environment should require strong authentication, ideally multi-factor. Least-privilege principles should govern who can access what, and remote access should be tightly controlled, monitored, and time-limited. Shared accounts should be eliminated wherever possible.

OT-Aware Patch Management

Patching in OT requires a risk-based approach. Not every vulnerability needs an immediate patch, and not every system can be patched without operational impact. Organizations need a process that evaluates vulnerability severity in the context of their specific environment, tests patches in a staging environment where possible, and schedules maintenance windows that align with operational needs.

Deep Packet Inspection for Industrial Protocols

Traditional firewalls see Modbus traffic as TCP on port 502 and nothing more. OT-aware firewalls and intrusion detection systems can parse the actual protocol content, inspecting function codes and register addresses and enforcing policy at that level.
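To make the distinction concrete, here is a simplified sketch of protocol-aware inspection in Python: it parses the Modbus function code out of the TCP payload and flags write commands from sources outside a hypothetical allowlist, a decision a port-based firewall cannot make.

```python
import struct

# Modbus function codes that modify device state (a representative subset)
WRITE_CODES = {0x05: "Write Single Coil", 0x06: "Write Single Register",
               0x0F: "Write Multiple Coils", 0x10: "Write Multiple Registers"}

def inspect_modbus(payload: bytes, src_ip: str, write_allowlist: set) -> str:
    """Parse the MBAP header and function code, then apply a simple policy:
    write commands are only permitted from sources on the allowlist."""
    if len(payload) < 8:
        return "malformed"
    _txn, proto_id, _length, _unit, func = struct.unpack(">HHHBB", payload[:8])
    if proto_id != 0:          # Modbus/TCP always uses protocol ID 0
        return "not-modbus"
    if func in WRITE_CODES and src_ip not in write_allowlist:
        return f"ALERT: {WRITE_CODES[func]} from unauthorized source {src_ip}"
    return "ok"

# A 'Write Multiple Registers' request seen from an unexpected workstation
payload = struct.pack(">HHHBB", 1, 0, 6, 1, 0x10) + b"\x00\x00\x00\x01\x02\x00\x2a"
verdict = inspect_modbus(payload, "10.1.5.23", write_allowlist={"10.1.5.10"})
```

The IP addresses and the allowlist policy here are illustrative; real OT inspection engines also validate register ranges and value bounds per device.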

Incident Response Planning

OT incident response is not IT incident response: the playbook must account for the physical consequences of containment actions. Isolating a network segment might stop an attacker, but could also trip a safety system or halt a process. Response plans need to be developed collaboratively between security teams, OT engineers, and plant operations.

Continuous monitoring for SCADA security

All of the strategies above are essential, but there’s a fundamental truth about SCADA security that defenders can exploit: industrial processes are inherently predictable.

A temperature sensor in a chemical reactor reports a value every second. A PLC cycles through its logic on a fixed schedule. A pump runs at a consistent speed. Network traffic between a SCADA server and its RTUs follows regular, repeatable patterns. This predictability means that anomalies like equipment failure, operator error, or a cyberattack create detectable deviations from established baselines.

This is where time series data becomes a security team’s most powerful tool.

Baselining Normal Behavior

By collecting and storing high-resolution time series data from sensors, PLCs, network flows, and protocol logs, you can build a detailed behavioral profile of “normal” for every asset and process in your environment. What does normal Modbus traffic look like between the SCADA server and PLC-07? What’s the typical temperature range for reactor vessel 3 during a batch run? How often does the engineering workstation initiate write commands?

With enough historical data, these baselines become remarkably precise, and deviations become immediately apparent.
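As a minimal illustration of the idea, the sketch below builds a baseline from historical readings and flags deviations; the 3-sigma threshold and the example signal are assumptions. A production deployment would compute these statistics continuously over data stored in the time series database.

```python
from statistics import mean, stdev

def build_baseline(history):
    """Summarize 'normal' for a single signal as (mean, standard deviation)."""
    return mean(history), stdev(history)

def is_anomalous(reading, baseline, z_threshold=3.0):
    """Flag readings more than z_threshold standard deviations from the mean."""
    mu, sigma = baseline
    if sigma == 0:
        return reading != mu
    return abs(reading - mu) / sigma > z_threshold

# e.g., an hour of 1-second temperature readings hovering around 72 degrees
history = [72.0 + 0.1 * ((i % 7) - 3) for i in range(3600)]
baseline = build_baseline(history)
```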

Detecting Process Manipulation

An attacker who gains access to a SCADA system may try to subtly alter process parameters, such as changing a setpoint, opening a valve, or adjusting a chemical dosing rate. If you’re monitoring time series data from those processes, you can detect changes that fall outside historical norms.
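One simple detector for this, sketched below, compares each reading's rate of change against a bound derived from history (the bound and the dosing-rate series here are hypothetical). A manipulated setpoint tends to show up as a step change that normal process dynamics never produce.

```python
def rate_of_change_alerts(readings, max_rate):
    """Flag timestamps where |d(value)/dt| exceeds a historically derived bound.

    readings: (timestamp_seconds, value) pairs in time order.
    max_rate: assumed to come from the signal's historical distribution.
    """
    alerts = []
    for (t0, v0), (t1, v1) in zip(readings, readings[1:]):
        if t1 > t0 and abs(v1 - v0) / (t1 - t0) > max_rate:
            alerts.append(t1)
    return alerts

# A dosing rate that drifts normally, then jumps as if a setpoint were changed
series = [(0, 5.0), (1, 5.01), (2, 5.02), (3, 7.5), (4, 7.5)]
```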

Spotting Anomalous Network Behavior

Industrial network traffic is highly structured. By logging protocol-level metadata, you can detect unusual patterns. A “write multiple registers” command from an IP address that has only ever issued read commands is suspicious. A burst of DNP3 unsolicited responses at an unusual time deserves investigation. These signals are only visible if you’re capturing and analyzing this data.
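A sketch of that kind of per-source profiling (Modbus function codes, hypothetical IP addresses): learn which function codes each source issues, then flag the first write from a source that has only ever read.

```python
from collections import defaultdict

WRITE_CODES = {0x05, 0x06, 0x0F, 0x10}  # Modbus write function codes

class SourceProfile:
    """Track the Modbus function codes each source IP issues, and flag the
    first time a previously read-only source issues a write command."""

    def __init__(self):
        self.seen = defaultdict(set)  # src_ip -> set of function codes observed

    def observe(self, src_ip, func_code):
        known = self.seen[src_ip]
        first_write = bool(func_code in WRITE_CODES
                           and known            # source already profiled...
                           and not known & WRITE_CODES)  # ...as read-only
        known.add(func_code)
        if first_write:
            return f"ALERT: first write (0x{func_code:02X}) from {src_ip}"
        return None
```

A brand-new source issuing a write would deserve its own alert; this sketch only covers the "read-only source starts writing" case.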

Correlating Across IT and OT

The most sophisticated attacks traverse the IT/OT boundary. Detecting them requires correlating events across both domains on a unified timeline. Consider a failed VPN login attempt at 1:47 AM, followed by a successful login at 1:52 AM, then an unusual engineering workstation session at 1:55 AM, then a PLC configuration change at 2:03 AM. Each of these events in isolation might not trigger an alert, but together, on a single timeline, the pattern is unmistakable. Time series data makes this correlation possible.
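A minimal sketch of that correlation (event names and timestamps are illustrative): merge events from both domains onto one timeline and check whether a suspicious sequence completes within a time window.

```python
from datetime import datetime, timedelta

def correlate(events, sequence, window):
    """Return True if the event types in `sequence` occur in order,
    all within `window` of the first match, on the merged timeline.

    events: (timestamp, domain, event_type) tuples from IT and OT sources.
    """
    idx, start = 0, None
    for ts, _domain, etype in sorted(events):
        if idx > 0 and ts - start > window:
            idx, start = 0, None      # sequence timed out; start matching over
        if etype == sequence[idx]:
            if idx == 0:
                start = ts
            idx += 1
            if idx == len(sequence):
                return True
    return False

# The scenario described above, as a merged IT/OT timeline
t = lambda h, m: datetime(2024, 1, 1, h, m)
events = [(t(1, 47), "IT", "vpn_login_failed"),
          (t(1, 52), "IT", "vpn_login_success"),
          (t(1, 55), "OT", "eng_workstation_session"),
          (t(2, 3),  "OT", "plc_config_change")]
SEQUENCE = ["vpn_login_failed", "vpn_login_success",
            "eng_workstation_session", "plc_config_change"]
```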

Why a time series database beats a SIEM or relational database for OT security data

If you’re convinced that this kind of monitoring is critical for SCADA security, the next question is where to store and analyze all this data. The three common options are a traditional relational database, a Security Information and Event Management (SIEM) platform, or a time series database like InfluxDB. For OT security data, the time series database wins decisively. Here’s why.

Data Volume

A single SCADA environment can generate enormous volumes of data. Consider a modest facility with 500 sensors reporting every second, 20 PLCs, a network tap capturing protocol metadata, and authentication logs from access points. The sensors alone produce over 43 million data points per day, and larger environments generate orders of magnitude more.

Relational databases like PostgreSQL or MySQL were designed for transactional workloads: inserts, updates, deletes, and joins across normalized tables. They handle time series data poorly at scale. Write throughput degrades as tables grow, and time-based queries over millions of rows become expensive without careful indexing and partitioning, which creates operational complexity.

SIEMs are built for log ingestion, but they're optimized for text-based event logs, not numerical telemetry. Ingesting raw sensor data at one-second intervals into a SIEM is technically possible, but economically painful, as SIEM licensing is typically based on data volume, and the cost of ingesting OT data can be prohibitive. Many organizations end up sampling or aggregating data before it reaches the SIEM, losing the granularity needed for effective anomaly detection.

InfluxDB and other time series databases are built for this workload. They use storage engines optimized for high-volume writes of timestamped data and compressed, columnar storage that keeps disk usage manageable even at scale. InfluxDB can handle hundreds of thousands of writes per second on modest hardware.
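For context, points enter InfluxDB in its line protocol format (`measurement,tags fields timestamp`). The sketch below formats one sensor reading with only the standard library; the measurement and tag names are hypothetical, and in practice the official client libraries or Telegraf handle this (including escaping, which this sketch omits).

```python
def to_line_protocol(measurement, tags, fields, ts_ns):
    """Format one point as InfluxDB line protocol:
    measurement,tag=v,... field=v,... timestamp   (no escaping handled)"""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in fields.items())
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

line = to_line_protocol(
    "reactor_temp",
    {"site": "plant-a", "vessel": "3"},   # hypothetical tag values
    {"celsius": 72.4},
    1700000000000000000,                  # nanosecond timestamp
)
# reactor_temp,site=plant-a,vessel=3 celsius=72.4 1700000000000000000
```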

Query Performance

OT security analysis is fundamentally time-focused. You need to answer questions like: "What was the average pressure in vessel 4 between 2:00 and 2:15 AM?", "Show me all Modbus write commands to PLC-12 in the last 24 hours alongside the corresponding sensor readings," or "Alert me if the rate of change of this temperature exceeds the 99th percentile of its 30-day historical distribution."

In a relational database, these queries require careful SQL with window functions, CTEs, and often materialized views to perform well. The query language wasn’t designed for time series operations, and performance tuning is an ongoing burden.

SIEMs offer search languages that handle event correlation well but are awkward for continuous numerical analysis. Calculating rolling averages, derivatives, or statistical distributions over sensor data in a SIEM is possible but cumbersome.

Time series databases provide native query primitives for exactly these operations. InfluxDB includes built-in functions for windowed aggregation, moving averages, derivatives, percentiles, and histogram analysis. A query that would require 30 lines of carefully optimized SQL can often be expressed in a few lines with InfluxDB. This matters not just for convenience but for enabling security analysts and OT engineers to explore data and build detection logic without being database specialists.
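To illustrate the first of those primitives, here is windowed aggregation written out in plain Python; a time series database performs this kind of time-bucketed grouping natively in a single query clause rather than in application code.

```python
def windowed_mean(points, window_s):
    """Group (timestamp_s, value) points into fixed time windows and average
    each one -- the core of a windowed-aggregation query."""
    buckets = {}
    for ts, v in points:
        buckets.setdefault(ts - ts % window_s, []).append(v)
    return {w: sum(vs) / len(vs) for w, vs in sorted(buckets.items())}

# Four readings collapsed into two one-minute averages
points = [(0, 10.0), (30, 12.0), (60, 20.0), (90, 22.0)]
result = windowed_mean(points, 60)  # {0: 11.0, 60: 21.0}
```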

Data Retention

OT security data has a natural tiered value structure. The last 24 hours of raw sensor data are extremely valuable for investigating an active incident. The last 30 days at full resolution are important for anomaly detection baselines. Data from six months ago is useful for trend analysis, but doesn’t need high granularity. Data from a year ago might only need hourly averages for compliance purposes.

Relational databases require you to manage this lifecycle manually by writing ETL jobs to downsample old data, archive tables, and manage storage. SIEMs typically offer hot/warm/cold storage tiers, but with limited control over how data is aggregated as it ages. InfluxDB has retention policies and downsampling built into the database itself. You can define policies that automatically downsample data from one-second to one-minute resolution after 30 days, then to five-minute resolution after 90 days, and delete raw data after a year. This happens transparently, without external tooling, and keeps storage costs predictable while preserving long-term visibility.
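The arithmetic behind that tiering is worth seeing. The sketch below takes its day counts and resolutions from the policy described above (the per-tier boundaries are the only inputs) and compares a year of raw one-second data for a single sensor against its tiered equivalent.

```python
# Tiered retention: days 0-30 at 1 s, days 30-90 at 1 min, days 90-365 at 5 min
TIERS = [
    {"days": 30,  "resolution_s": 1},    # full resolution
    {"days": 60,  "resolution_s": 60},   # 1-minute averages
    {"days": 275, "resolution_s": 300},  # 5-minute averages
]

def points_per_sensor_per_day(resolution_s):
    """86,400 seconds in a day divided by the sampling resolution."""
    return 86_400 // resolution_s

# One sensor over one year: raw vs. tiered point counts
raw_year = points_per_sensor_per_day(1) * 365
tiered_year = sum(points_per_sensor_per_day(t["resolution_s"]) * t["days"]
                  for t in TIERS)
```

For a single one-second sensor this shrinks roughly 31.5 million points per year to under 3 million, while keeping a full year of queryable history.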

Moving forward

SCADA security is not a problem that can be solved with a single product, a one-time assessment, or a policy document. It requires sustained commitment to understanding your environment, monitoring it continuously, and building the organizational capacity to detect and respond to threats.

The good news is that the same characteristic that makes SCADA systems vulnerable, their reliance on predictable, deterministic processes, is also what makes them uniquely defensible through data-driven monitoring. Industrial processes generate time series data that reveals anomalies clearly when you have the right tools to capture and analyze it.

A time series database like InfluxDB, paired with a well-designed collection pipeline and visualization layer, enables security teams to see their OT environment with a level of clarity that was previously impractical. Not as a replacement for network segmentation, access control, and the other foundational security measures, but as the monitoring layer that ties everything together and ensures that when something goes wrong, you know about it in seconds rather than weeks.