Preparing Critical Infrastructure for the AI Revolution: Reflections from Davos 2026


Last month at the World Economic Forum’s Annual Meeting in Davos, I was privileged to engage with global leaders on the challenges facing our critical infrastructure. I participated in the “On Guard: from Deep Sea to Orbit” panel alongside experts from space technology and other sectors, and led a workshop breakout session exploring how organizations can maintain safe operations when digital systems are disrupted or unreliable.

These conversations reinforced something I’ve been observing across our customer base and the broader industrial community: we’re racing toward an AI-powered future, one that holds tremendous promise for efficiency, optimization, and competitive advantage, without fully addressing the risks it could introduce into the systems that make modern life possible. Industrial and critical infrastructure sectors are experiencing unprecedented technological change that demands new approaches to resilience and security. The cybersecurity of traditional information technology (IT) systems and of the AI models themselves is receiving attention, but cybersecurity investment in operational technology (OT) trails behind. You cannot have an AI or space revolution without energy and infrastructure, and you cannot keep your energy and infrastructure for long without OT cybersecurity.

Over the past 40 years, OT environments have evolved from mechanical controls to digital systems; over the last 30 years, from isolated networks to IP-connected infrastructure; and over the last 20 years, into complex automation environments made of systems of systems comprised of OT, IT, and Internet of Things (IoT) devices. The last 10 years have seen this trend massively accelerate. This digital transformation brings enormous benefits in efficiency and capability, but it has also created new attack surfaces and dependencies that we have still not appropriately secured.

Now we’re entering the next phase: AI integration into OT. This time, the pace of adoption will outstrip our ability to manage the risks if we don’t act now to ensure security and resiliency for our future.

Across manufacturing plants, electric grids, and other industrial facilities, organizations are starting to move AI from supporting roles into the control loop: the fundamental capability of these systems to impact the physical world around us. For years, we’ve seen machine learning used for data science, modeling, weather forecasting, and other operational support functions. It carried risk, but it wasn’t particularly concerning because humans remained in direct control of critical decisions.

What’s changing now is that early adopters are deploying AI applications, AI controllers, and AI software directly into operations to drive efficiency and optimization. We’re seeing serious conversations, and some early deployments, around using agentic AI to make decisions in battery farms, solar farms, wind farms, the mining industry, and other critical infrastructure. AI is starting to have a major role in operations, making autonomous decisions about systems that affect the physical world.

Asset owners and operators are feeling the pressure as CEOs, boards, and investors push for AI adoption to improve productivity, decrease costs, accelerate innovation, and surpass competition. If the products to be adopted were named anything other than “AI,” they would not be adopted so aggressively or quickly. This leads to pressure on testing and validation timelines, misplaced confidence in new vendors, and a lack of shared understanding between the teams managing risk and those managing adoption. Most of all, it is leading to more complexity than ever before. This creates two critical failure scenarios that organizations must prepare for.

The first is an AI market reset. There will likely be a reset at some point, with winners, losers, and consolidation as the world figures out AI adoption. When that happens, organizations need to be prepared for what comes next.

What happens if an AI application or technology, and its associated data sets, go away overnight? What if, as AI vendors shift through consolidation, innovations introduce vulnerabilities or are misaligned with some industrial systems? What happens if a major AI system that spans sectors or supply chains stops working?

Now is the time for organizations to ask themselves these questions:

On the supply side:

What warranties and representations are you getting from AI vendors, as well as from partners, suppliers, and infrastructure ecosystems that use AI, regarding your data, access to your software, and continuity of service, especially in the event of bankruptcy or acquisition? What happens to your operations if these systems become unavailable abruptly?

On the operational side:

How would you run operations without the AI systems you’re adopting? Do you maintain the capability to operate manually or with legacy systems if AI becomes unavailable? And what about the partners, suppliers, and infrastructure ecosystems themselves if they are negatively impacted by the AI they use?

These aren’t hypothetical concerns. The pace of AI adoption, combined with the near inevitable market correction, means organizations could find themselves dependent on systems that suddenly cease to exist.

The second scenario is even more concerning for long-term operational capability. As AI makes operations environments more complex, understanding what’s happening in your systems becomes exponentially more difficult.

When something goes wrong, whether it’s a cyber attack, a malfunction, or an operational error, organizations need the ability to conduct root cause analysis. This isn’t just about cybersecurity. It’s about understanding why production decreased, why quality suffered, why a process failed, or why a safety incident occurred. This is an area I’ve highlighted before; across numerous incidents, we are already seeing how the lack of this capability hampers recovery.

Root cause analysis is essential for:

  • Restoring operations correctly
  • Preventing future incidents
  • Meeting regulatory requirements
  • Making informed business decisions
  • Maintaining safety and reliability

The problem is that too many organizations already fail to monitor their OT environments despite technology that exists today. They either rely on IT monitoring under the mistaken impression that it covers OT, or they lack resources due to decisions by boards or regulatory commissions that limit funding for OT cybersecurity. With the introduction of AI, these organizations vastly increase their attack surface and the complexity of their environments while remaining blind to what’s happening in their OT networks, and, as a result, with limited ability to determine what happened in the case of an outage or safety incident.

So what should organizations do? First, establish governance processes for AI adoption that balance innovation with security and safety. Apply the same testing, validation, and continuous assessment rigor to AI deployment that you’ve historically applied to control systems. Help your leaders understand the significant risk to the business in letting competitive or economic pressures override the engineering discipline that has kept operations safe and reliable.

Second, deploy OT-native visibility and monitoring capabilities that detect and help log today’s threats to OT environments and are on pace to keep up with AI-enabled complexity. With AI, understanding what’s happening in your OT systems becomes even more critical for root cause analysis when things go wrong. OT monitoring solutions like the Dragos Platform exist today, and vendors must prioritize innovation that keeps up with the needs that future AI technologies introduce into operational systems, as well as making these capabilities accessible to the OT community. That’s our commitment at Dragos: to protect OT environments today and for the future.

Third, plan for failure scenarios by understanding intelligence-driven threat scenarios and consequence scenarios that are already occurring. You cannot protect yourself against the unknowns if you cannot handle the knowns. Document your AI dependencies. Understand what happens if an AI system or vendor fails. Maintain the capability to operate without AI when necessary.
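To make "document your AI dependencies" concrete, here is a minimal sketch of what a dependency register might look like in practice. This is not a Dragos tool or a prescribed method; the system names, fields, and criteria are hypothetical, chosen only to illustrate that the register needs to answer two questions for every AI dependency: can we operate without it, and do we keep our data if the vendor disappears?

```python
# Hypothetical AI dependency register for failure-scenario planning.
# All names and fields are illustrative, not a real inventory schema.

from dataclasses import dataclass

@dataclass
class AIDependency:
    name: str              # AI system or vendor
    function: str          # the operational role it plays
    manual_fallback: bool  # can operations continue without it?
    data_escrowed: bool    # do we retain our data if the vendor fails?

def gaps(register: list[AIDependency]) -> list[str]:
    """List dependencies lacking a manual fallback or data continuity."""
    return [d.name for d in register
            if not (d.manual_fallback and d.data_escrowed)]

register = [
    AIDependency("battery-dispatch-optimizer", "charge/discharge scheduling",
                 manual_fallback=True, data_escrowed=False),
    AIDependency("predictive-maintenance-model", "equipment failure prediction",
                 manual_fallback=True, data_escrowed=True),
]

print(gaps(register))  # prints ['battery-dispatch-optimizer']
```

Even a register this simple forces the conversation the questions above demand: any entry returned by the gap check is a dependency whose abrupt loss, through vendor bankruptcy, acquisition, or consolidation, would leave operations exposed.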

We’ve never been better positioned to address this challenge. The talent exists. The technology exists. The knowledge of how to secure complex industrial systems exists. Countries can align around protecting civilians, families, and shared infrastructure, as the conversations in Davos demonstrated.

What’s required is awareness that this is a problem, dialogue to address it across organizations and sectors, and execution to implement the right safeguards. Organizations that establish proper governance, invest in OT monitoring and cybersecurity, plan for failure scenarios, and manage vendor AI dependencies will capture the benefits of AI while managing the risks.

The pressure to innovate with AI is real and will only intensify. But the organizations that succeed long-term will be those that prepare beforehand. We have the opportunity now to get this right. Let’s do it.

Robert M. Lee is a recognized authority in the industrial cybersecurity community. He is CEO and co-founder of Dragos, a global technology leader in cybersecurity for operational technology (OT)/industrial control systems (ICS) environments.