Ten Years After Ukraine’s Power Grid Cyberattack: Lessons Learned and Questions Answered About CRASHOVERRIDE

By Robert M. Lee, CEO of Dragos, and Tim Conway, ICS Curriculum Lead at SANS Institute

Thank you to everyone who joined us for the webinar, “A Decade After the First Cyber Attack on Civilian Power Infrastructure.” With over 2,000 people registered, we received far more questions than we could answer live. What follows are answers to the most common questions, organized by theme.

A bit of context: On Christmas Eve 2015, we each received messages that would define the next decade of our careers: Ukrainian electric utilities were reporting outages they suspected were cyberattacks, and we were being called to help investigate what turned out to be the first cyberattack to successfully take down power.

In December 2016, it happened again with CRASHOVERRIDE, malware purpose-built to attack electric transmission substations. Then came TRISIS in 2017, the first malware to directly target safety systems with lethal intent. Over the following years, we’ve tracked the evolution of these and many other threats as operational technology (OT) cybersecurity emerged as a critical discipline.

Ten years later, the questions you asked in the webinar reveal where the community is: more sophisticated in understanding threats, but still wrestling with fundamental challenges in detection, response, and resource allocation. Here we dig into your questions.

Q: What were the key factors that made the Ukraine attacks possible? Could this happen in other regions of the world?

First, understand that the Ukrainian utilities had firewalls, antivirus, patching programs, access controls—standard IT security. The problem was that IT security approaches don’t address the fundamental risk in OT environments.

In 2015, the adversary gained access, learned the systems, and then used intended functionality to take control away from operators. Opening circuit breakers through the distribution management system isn’t a vulnerability—it’s a feature. The vulnerability was that an adversary had access and the operators had no way to detect the takeover or prevent it.
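To make that concrete, here is a minimal, hypothetical sketch—using Modbus/TCP and the pymodbus library purely as a generic stand-in, since the 2015 attackers worked through the utilities’ own SCADA and distribution management applications—of why a control action is a feature rather than a vulnerability. The host and coil address are invented for illustration; the point is that the command is an ordinary, well-formed protocol write that any client with access can issue.

```python
# Illustrative only: Modbus/TCP is a stand-in protocol; the 2015 attackers used the
# utilities' own SCADA/DMS applications. Host, port, and coil address are hypothetical.
from pymodbus.client import ModbusTcpClient

RTU_HOST = "10.0.0.50"   # hypothetical remote terminal unit
BREAKER_COIL = 12        # hypothetical coil mapped to a breaker trip output

client = ModbusTcpClient(RTU_HOST, port=502)
if client.connect():
    # A single, perfectly valid protocol write: no exploit, no malware signature.
    # On the wire this is indistinguishable from a legitimate operator action.
    result = client.write_coil(BREAKER_COIL, True)
    print("write accepted" if not result.isError() else "write rejected")
    client.close()
```

Nothing in that exchange trips an exploit signature, which is why the defensive questions become who has access and whether anyone is watching the commands.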

Could this happen in other regions? Yes, absolutely. We were frustrated in early 2016 when the narrative came out in the US that “it can’t happen here” or “regulations protect us.” We give credit to the power company CEOs who pushed back and said publicly: “It can happen here, and here’s what we’re doing about it.” That honesty shifted the conversation.

The attacks demonstrated sophisticated operational planning. They coordinated across three different utilities and over 50 substations. They caused outages and had a second campaign to inhibit restoration by corrupting firmware, destroying workstations, and severing operator communications.

Q: How did Ukraine restore power so quickly after the 2015 cyberattack, and what does that tell us about resilience?

The six-hour restoration time is often misunderstood as a defensive success. Ukraine’s ability to restore power was largely because their grid operators, some of the most mission-focused, dedicated operators you’d ever meet, still knew how to run things manually. They already knew what to do when equipment failed from a lightning strike or a hardware fault. But this was the first time we had to ask: “What do we do if someone else is controlling the system?”

That’s a resilience challenge we’re still addressing: planning for scenarios where adversaries control operational systems. The attack also revealed a fundamental gap. While power was restored in six hours, they lost automation capabilities for almost a year. If you’re in the electric utility business, you can appreciate how operationally significant that is.

Q: You track threat groups called KAMACITE and ELECTRUM that were behind the Ukraine attacks. What have they been doing over the past decade?

In 2016, we saw them codify their human expertise into CRASHOVERRIDE malware. What took 20 people and 45 minutes of manual operations in 2015 became something they could execute in 45 seconds. They could scale it to every transmission substation in the world running that equipment.

The move from manual operations to this use of malware was more significant than people realized at the time. CRASHOVERRIDE had protocol capabilities it didn’t need for that specific attack. It could swap in and out different protocols for different targets, load different exploits for different protection relays. It became a force multiplier—not just a tool for one target, but a capability enabling teams across multiple different targets.
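As a conceptual illustration only—this is not CRASHOVERRIDE’s code, and every name below is hypothetical—here is what a swappable protocol-module design looks like in the abstract. The value to an adversary is that the orchestration logic never changes; only the protocol “driver” does.

```python
# Conceptual sketch only, not CRASHOVERRIDE's actual implementation. It illustrates
# why a payload built around swappable protocol modules scales: the campaign logic
# stays constant while the protocol "driver" is chosen per target.
from abc import ABC, abstractmethod

class ProtocolModule(ABC):
    """Interface every protocol driver implements; all names are hypothetical."""
    @abstractmethod
    def discover(self, network: str) -> list[str]: ...
    @abstractmethod
    def operate(self, device: str, point: str) -> bool: ...

class Iec104Module(ProtocolModule):
    def discover(self, network): return ["rtu-a", "rtu-b"]   # stub
    def operate(self, device, point): return True            # stub

class Iec61850Module(ProtocolModule):
    def discover(self, network): return ["relay-1"]          # stub
    def operate(self, device, point): return True            # stub

def run(module: ProtocolModule, network: str) -> None:
    # Identical campaign logic regardless of which protocol the target site speaks.
    for device in module.discover(network):
        module.operate(device, "breaker")

run(Iec104Module(), "10.1.0.0/24")    # one target environment
run(Iec61850Module(), "10.2.0.0/24")  # a different one, same orchestration
```

That same plug-in structure is why defenders should expect the capability to reappear against protocols and regions it has not touched yet.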

We’ve tracked KAMACITE and ELECTRUM continuing to target energy infrastructure globally over the past decade. They’ve expanded capabilities and compromised additional utilities. In testimony to Congress in 2018, Dragos shared tracking of five activity groups specifically targeting industrial control systems (ICS). Today Dragos tracks over twenty. That’s not just better visibility—that’s more adversaries making the strategic investment in OT capabilities.

Q: How does PIPEDREAM compare to previous threats like CRASHOVERRIDE?

PIPEDREAM represents the irreversible shift we’ve been warning about for years. The Ukraine attacks, while sophisticated, were still site-specific. PIPEDREAM is different. It’s the first reusable cross-industry capability that can achieve disruptive or destructive effects on ICS equipment.

It was initially targeted toward energy assets—liquefied natural gas and electric transmission—but it can work in almost any OT environment: data centers, manufacturing facilities, even military control systems.

What made infrastructure defensible for decades was heterogeneity—every site was different enough that adversaries had to customize attacks. We’ve moved toward more homogeneous infrastructure for good operational reasons, but this allows threats to scale.

PIPEDREAM takes advantage of that shift. Once it’s in target networks, it’s reliable because it uses native functionality and common software deployed across infrastructure sites. This demands an approach focused not just on prevention, but on detection and response, because we must assume adversaries will achieve access.

The fact that we identified PIPEDREAM before it was deployed represents one of the most significant public-private partnership wins in cybersecurity history. Dragos, working with an undisclosed partner and with NSA, FBI, CISA, and DOE, identified and analyzed it “left of boom.” But the time bought by that discovery only matters if the community acts on it.

Q: Are we seeing more OT cyber attacks now, or are we just detecting more because we’re finally looking?

Both, and that’s important to understand.

In 2018, the Senate Energy Committee held a hearing to better understand the industrial threat landscape, which was still largely unknown at that time. The collection methods that worked for IT threats didn’t translate to OT environments. We simply weren’t looking for OT cyber threats in the right places.

Adoption of purpose-built OT security is much more recent than IT security. As more organizations deploy capabilities to see what’s happening in their industrial networks, we’re discovering threats that were likely present before. But it’s also true that adversary focus on OT has increased significantly.

Q: What’s the biggest gap in OT security a decade after Ukraine?

Acknowledgement and understanding of the risk to businesses and national security. A lot of CEOs are stunned when they discover how little is actually being done on the OT security side—where they generate revenue, where they impact communities, where consequences are most severe. Government officials and policymakers conflate IT security with OT security, and are often drawn into discussions around emerging technologies before focusing on the fundamentals that can make a difference now at scale.

The disparities are stark. There are electric utilities that are among the most well-protected organizations in the world. Many more are in the middle, working to keep up. And there are small sites—electric co-ops, water utilities, gas pipelines, refineries—where basic security is lacking. These environments are critical to modern civilization, yet adversaries can train and prepare there undetected.

On the technical side, visibility and access remain fundamental challenges. Preventing access is the first line of defense, but monitoring is essential to detect threats and, when a breach happens, to let the organization perform root cause analysis and mitigate the threat. Without monitoring, they might never know what happened or how to properly address it.

Q: Does NERC CIP actually protect the U.S. electric grid?

NERC CIP has been an overall good initiative. The community’s efforts to comply with these standards have made the North American bulk electric system resilient and well-defended, and these standards have been modeled around the world.

But regulations and standards are the trailing end of best practices. They provide base levels of security, not complete protection against determined adversaries. Malware and vulnerabilities aren’t the threat—the human adversary is, and we cannot regulate them away.

NERC CIP is a comprehensive standard you can learn from whether it applies to you or not. It balances the IT technologies that sit north of the control systems with the OT, ICS, and field technologies below them. The main questions are: where does it apply, and if it had been in place at the targeted assets, would adversaries have had to change their approach? The answer is yes—but that doesn’t mean they would have failed. Adversaries are sophisticated enough that they would have adapted. That’s the nature of the threat we face.

Q: What about vulnerability management and patching in OT? How effective are these practices?

This is where the data becomes sobering. Dragos’s analysis of industrial control system vulnerabilities found that 64% of vulnerability patches don’t actually eliminate risk because the components being patched were already insecure by design. Additionally, 72% of ICS vulnerabilities provided no alternative mitigation guidance beyond patching—no way to reduce risk until after an update cycle.

This doesn’t mean patching shouldn’t occur—it reduces attack surface and is important. But we must understand it doesn’t reduce risk against human adversaries nearly as much as the community believes. We need active defense: monitoring for threats, responding to them, and learning from them in our environments. Regulations focused primarily on patching miss this reality.
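A minimal sketch of how that triage logic might be encoded, assuming hypothetical vulnerability records with fields for network exploitability, insecure-by-design components, and available mitigations—this is illustrative, not Dragos’s methodology:

```python
# Minimal triage sketch, not Dragos's methodology. The record fields and decision
# rules are illustrative assumptions about how one might encode the idea that many
# OT patches don't remove risk and many advisories offer no alternative mitigation.
from dataclasses import dataclass

@dataclass
class OtVuln:
    cve: str
    network_exploitable: bool      # reachable without local/physical access?
    insecure_by_design: bool       # patch doesn't remove the underlying weakness
    has_mitigation_guidance: bool  # vendor offered something besides "patch"

def triage(v: OtVuln) -> str:
    if v.insecure_by_design:
        # Patching won't eliminate the risk; lean on monitoring and segmentation.
        return "compensating controls + monitor"
    if v.network_exploitable and not v.has_mitigation_guidance:
        # No interim mitigation exists, so schedule the patch at the next safe window.
        return "patch at next outage window"
    return "mitigate now, patch on normal cycle"

print(triage(OtVuln("CVE-0000-0001", True, True, False)))  # hypothetical record
```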

Q: Why is root cause analysis so important in OT incidents, and what data is required?

When cyber is involved in an OT incident, understanding root cause isn’t just about fixing what broke. It’s about understanding adversary behavior and evolution.

The challenge is that root cause analysis in OT requires different data than most organizations collect. You need network visibility into OT protocols and commands being issued between systems—understanding what’s happening between system 1, system 2, and the physical manifestation in system 3. That’s network traffic data, specifically ICS protocol communications.

Much of this is transient data. If you don’t collect it ahead of time, it’s gone. The adversary sends a command, achieves the effect, and there’s no resident forensic information to find later.
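As one illustration of collecting that transient data ahead of time, here is a minimal sketch assuming the scapy library and a hypothetical SPAN or tap interface named span0. It continuously writes IEC 60870-5-104 (TCP/2404) and Modbus/TCP (TCP/502) traffic into hourly pcap files so the commands still exist when an investigation begins; a production deployment would use purpose-built sensors and deliberate retention sizing, but the principle is the same.

```python
# Minimal collection sketch, assuming the scapy library and a SPAN/tap interface
# named "span0" (hypothetical). It captures IEC 104 (TCP/2404) and Modbus/TCP
# (TCP/502) traffic into hourly pcap files so the commands are still available
# for root cause analysis after an incident.
from datetime import datetime
from scapy.all import sniff, wrpcap

def record(pkt):
    # Rotate output hourly; appending keeps a continuous forensic record.
    out = f"ics-{datetime.now():%Y%m%d-%H}.pcap"
    wrpcap(out, pkt, append=True)

sniff(
    iface="span0",                           # hypothetical monitoring port
    filter="tcp port 2404 or tcp port 502",  # IEC 60870-5-104 and Modbus/TCP
    prn=record,
    store=False,                             # don't hold packets in memory
)
```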

We’re seeing cases now where organizations didn’t collect data ahead of incidents, so they don’t know whether cyber was involved. The NERC CIP-015 INSM (Internal Network Security Monitoring) standard addresses exactly this gap. It will massively raise our level of insight, understanding, and ability to detect and respond to threats across the bulk electric system. Even organizations not under NERC regulation should look at that standard as a model.

Q: When you detect adversary activity in OT systems, what decisions can or should operators make?

The foundation is understanding that causing outages isn’t the worst outcome anymore. We’ve seen adversaries targeting protection relays and safety systems not just to cause outages but to create conditions for equipment damage or worse.

Utilities might legitimately choose to de-energize a line or de-rate a generation unit if they detect adversary manipulation—similar to how they intentionally cause outages during wildfire risk conditions to prevent catastrophic damage.

You need cyber operators who can see behavior at specific substations and say: stop planned maintenance, move to conservative operations until we verify integrity. That requires visibility into your OT environment and the ability to act on what you see.

This is exactly the kind of capability that defenders need—you cannot defend what you cannot see, and you cannot respond to threats you don’t understand. Organizations need purpose-built OT security that provides threat detection and response capabilities specifically designed for industrial environments.

Q: What should organizations prioritize for OT security? Where do we start?

We looked at every major OT cyber incident we could find and asked: what security controls actually worked? Not what sounds good in theory or what’s popular in IT security, but what demonstrably stopped or detected these specific attacks.

We distilled this into five critical controls for ICS security, published through SANS as the ICS Five Critical Controls. These aren’t theoretical—they’re based on what would have prevented or detected the 2015 Ukraine attack, CRASHOVERRIDE, TRISIS, ransomware in OT, and other real attacks.

If you implement these five controls, you’re addressing threat scenarios that are real risks because they’ve happened in the industry. From there you can advance to secure-by-design thinking and future architecture, but those five controls establish a defensible baseline quickly.

The five controls work because you can map them to existing frameworks like ISA/IEC 62443, NERC CIP, NIST CSF, or the Cybersecurity Performance Goals. They provide strategy where other frameworks provide structure. The five controls give you a starting point that’s actually doable.

Q: What about smaller utilities with limited resources? What’s realistic for them?

This is as much a policy challenge as a technical one. In the US, there are thousands of gas, water, and electric utilities that share IT contractors just for basic IT support, let alone cybersecurity. Free government assessments or new technology development miss the mark for these organizations.

These smaller co-op and public utility sites need direct resourcing—either through state-level policy changes or federal funding—to hire talent and purchase technologies they deem appropriate. Many aren’t allowed to spend money on cybersecurity without state regulator approval, creating fifty-plus different interpretations of risk and requirements.

There’s also a legitimate policy debate about whether federal or national security requirements should drive costs onto local ratepayers. These are solvable problems, but they require policy decisions, not just technology solutions.

The private sector has a role to play as well. Organizations like Dragos have programs specifically to support smaller utilities. Through OT-CERT, we provide free access to information and educational resources. Through our Community Defense Program, we also provide our platform, threat intelligence, training, and incident response support for qualifying utilities.

Q: What roles should governments play? Should there be more regulation?

When government speaks with one voice and coordinates across agencies, the infrastructure community listens and responds. When operators receive competing guidance from field offices, base commanders, and national and local regulators—each with different “top priorities”—you get analysis paralysis.

The industry benefits when government explains why cybersecurity is needed, shares the outcomes it is driving toward, and lets infrastructure operators determine how to achieve those outcomes. Operators understand these environments far better than regulators do, and telling them how to run security in systems they’ve operated for decades will fail.

Q: What keeps you up at night about OT security?

I worry we will waste the time we’ve been given. Finding PIPEDREAM before it was deployed—that was a gift. We identified this cross-industry capability while we still had time to prepare defenses.

But we see organizations that know they should deploy better threat detection waiting because they’re uncertain what regulation is coming. We see resources going to compliance checkboxes rather than real threat mitigation. We see vendors allowed into critical infrastructure discussions without supply chain requirements.

Looking at the 2015 and 2016 events, and then the 2017 TRISIS/TRITON attack on safety systems—we’ve seen the movement toward not just causing outages but creating conditions for equipment damage and threats to human safety. Understanding that trajectory and having appropriate detection to make informed operational decisions is critical. We need to mature into a model where our cyber operators have the skills, tools, and appropriate support and guardrails from their organization and governments to protect their OT environments.

Q: Are you optimistic about the next ten years?

Yes, not because threats are diminishing or this is getting easier, but because the community that’s emerged to address the challenge of critical infrastructure protection over the past decade is exceptional.

The infrastructure operators we work with live in the communities they serve. They feel the responsibility deeply. The partnerships between government and private sector are stronger than in any other domain. The technology and expertise available now didn’t exist ten years ago.

Defense is doable. It requires the right focus, appropriate partnerships, and an understanding of progress and remaining gaps. Based on the questions we received from the webinar, that’s exactly the mindset this community has.

Skills evolve, technology changes, new threats emerge. The key is looking at current scenarios with fresh eyes: where are you today with your team, capabilities, training, and technology? What parts of these historical attacks would still work in your environment, and what would adversaries need to modify?

Every time we talk about this with a new utility or new entity, we learn something new about how the threats would manifest in their specific context.

Ten years after Ukraine, we’ve learned a tremendous amount. We’ve also learned how much we still don’t know—and that’s okay. The people working this problem are some of the best we’ve ever worked with, and they’re only getting better. That’s what gives us confidence about the next decade.

For more information on the Ukraine attacks, see the SANS/E-ISAC report “Analysis of the Cyber Attack on the Ukrainian Power Grid” in the National Security Archive. To learn more about the ICS Five Critical Controls, visit sans.org/ics-five-critical-controls.

Here are some upcoming events where OT cybersecurity community members will gather—new people are always welcome!

SANS ICS Security Summit & Training 2026
Dragos Forums (US, Canada, Europe, APAC, Middle East)
Dragos Industrial Security Conference (DISC) 2026

Robert M. Lee is a recognized authority in the industrial cybersecurity community. He is CEO and co-founder of Dragos, a global technology leader in cybersecurity for operational technology (OT)/industrial control systems (ICS) environments.