Separating Signal from Noise
Every threat briefing now includes a section on AI. Most of them conflate three very different things: what AI can do today to accelerate attacks, what it might enable in 2-3 years, and what makes for a good conference talk title but has no operational basis.
OT security teams need to plan for the first category, watch the second, and ignore the third. Here is a breakdown.
Three AI Threat Vectors That Are Real Right Now
1. Accelerated Reconnaissance
Large language models can parse OT documentation at machine speed. This matters because OT environments have historically relied on obscurity as a partial defense — the assumption that attackers don't understand Modbus function codes, or how a specific DCS vendor structures its network architecture.
That assumption is gone. An attacker with access to:
- Leaked or publicly available vendor manuals
- Protocol specifications (Modbus, DNP3, OPC-UA, EtherNet/IP)
- Conference presentations on ICS security research
- Shodan/Censys scan results for industrial devices
...can feed all of it into an LLM and get a structured attack plan in hours instead of weeks. The LLM won't write the exploit, but it will identify which devices are likely vulnerable, which protocol commands to target, and which network paths to prioritize.
Concrete example: An attacker targeting a water utility can prompt an LLM with the utility's SCADA vendor (often publicly listed in procurement records), the protocol specifications for that vendor's RTUs, and known CVEs — then receive a prioritized list of attack paths without any prior ICS expertise.
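For defenders, the same harvesting logic can be turned inward: audit your own public artifacts (job postings, procurement records, case studies) for the terms an attacker's LLM would key on. A minimal Python sketch; the watchlist and the `exposure_hits` helper are illustrative examples, not a product:

```python
import re

# Illustrative watchlist of terms that reveal OT architecture details in
# public documents. Extend with your own vendor and product names.
OT_EXPOSURE_TERMS = [
    r"\bmodbus\b", r"\bdnp3\b", r"\bopc[- ]?ua\b", r"\bethernet/ip\b",
    r"\brtu\b", r"\bplc\b", r"\bscada\b", r"\bhmi\b",
]

def exposure_hits(text):
    """Return the OT-revealing terms found in a public-facing document."""
    lowered = text.lower()
    return [term for term in OT_EXPOSURE_TERMS if re.search(term, lowered)]

# A hypothetical job posting leaks four architecture details in one sentence.
posting = "Seeking engineer familiar with SCADA, Modbus RTU polling, and HMI design."
print(exposure_hits(posting))  # matches modbus, rtu, scada, hmi
```

Running this against everything your organization publishes will not stop reconnaissance, but it tells you what the attacker already knows, which is the first step toward the "assume attacker has documentation" posture.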
2. AI-Assisted Protocol Fuzzing
AI-guided fuzzing finds vulnerabilities in industrial protocols faster than traditional approaches. Traditional fuzzing sends random or semi-random inputs to a target and watches for crashes; AI-assisted fuzzing uses models trained on protocol specifications to generate inputs that are more likely to trigger edge cases.
For OT, this means:
- Modbus TCP: AI-guided fuzzers can systematically test function codes, register ranges, and malformed packet structures that manual fuzzing would take months to cover.
- OPC-UA: The protocol's complexity (hundreds of service calls, nested data structures) makes it a prime target for AI-assisted discovery of parsing vulnerabilities.
- DNP3: Stateful protocol interactions that are difficult to fuzz manually become tractable when an AI model understands the protocol state machine.
This doesn't require nation-state resources. Open-source AI fuzzing frameworks already exist, and adapting them to industrial protocols requires moderate effort, not a research lab.
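To make the structure-aware idea concrete, here is a minimal Python sketch of protocol-aware test-case generation for Modbus TCP. It only builds frames and sends nothing; the boundary values are illustrative, and a real fuzzing framework would add delivery, target instrumentation, and crash triage on top of generation like this:

```python
import random
import struct

def build_modbus_tcp_frame(transaction_id, unit_id, function_code, payload):
    """Assemble a Modbus TCP ADU: 7-byte MBAP header followed by the PDU."""
    pdu = struct.pack(">B", function_code) + payload
    # MBAP: transaction id, protocol id (always 0), length (unit id + PDU), unit id
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu

def protocol_aware_cases(seed=0):
    """Yield Read Holding Registers (0x03) requests at the boundary values a
    spec-aware generator prioritizes: register address edges and the quantity
    limit, rather than uniformly random bytes."""
    rng = random.Random(seed)
    addresses = [0x0000, 0x0001, 0xFFFE, 0xFFFF]           # register address edges
    quantities = [0x0000, 0x0001, 0x007D, 0x007E, 0xFFFF]  # 0x7D is the spec maximum
    for addr in addresses:
        for qty in quantities:
            payload = struct.pack(">HH", addr, qty)
            yield build_modbus_tcp_frame(rng.randrange(0x10000), 1, 0x03, payload)

frames = list(protocol_aware_cases())
# 20 frames of 12 bytes each, covering the spec's edge combinations in one pass
```

The point of the sketch is coverage per attempt: twenty frames hit every combination of address edge and quantity limit, something blind random generation would reach only by luck. An AI-assisted fuzzer applies the same principle across an entire protocol specification.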
3. Social Engineering at Scale
AI-generated phishing targeting OT operators is qualitatively different from generic phishing. An attacker can now:
- Generate vendor-impersonation emails that reference specific product models, firmware versions, and maintenance procedures
- Create realistic voice deepfakes of known vendor contacts for phone-based social engineering
- Produce fake but convincing technical bulletins, firmware update notices, and safety advisories
- Customize phishing at scale — different lures for different roles (plant manager vs. control engineer vs. maintenance tech)
OT operators are accustomed to receiving technical communications from vendors. AI makes it trivial to produce communications that match the expected format, terminology, and tone.
What Is NOT Happening (Yet)
The hype cycle around AI and OT security includes several scenarios that have no current operational basis:
- Autonomous AI exploit chains against PLCs. Writing a working exploit for a specific PLC model requires understanding the hardware, the firmware execution environment, and the specific vulnerability. LLMs cannot do this autonomously today.
- AI-driven process manipulation. Changing a control loop to cause physical damage requires understanding the process physics — temperature ramp rates, pressure limits, chemical reaction kinetics. AI models have no grounding in the physical process.
- Self-propagating AI malware in OT networks. OT networks are heterogeneous (different vendors, protocols, firmware versions per segment). Autonomous lateral movement across this diversity is an unsolved problem for AI.
- AI that understands control loop physics. As we detailed in our analysis of how attackers map control loops to manipulate physical processes, Stuxnet required years of human engineering to manipulate centrifuge speeds within specific parameters. No AI system can replicate this kind of domain-specific physical reasoning today.
These scenarios may become relevant in 5-10 years. They are not relevant for 2026-2027 planning.
AI Threat Vectors: Real vs. Hypothetical
| Threat Vector | Status | Timeline | Example | Defensive Control |
|---|---|---|---|---|
| LLM-assisted reconnaissance | Active now | Current | Parsing vendor docs and CVE databases to build OT attack plans | Minimize public exposure of OT architecture details; assume attacker has documentation |
| AI-guided protocol fuzzing | Early adoption | 1-2 years for widespread use | Fuzzing Modbus/OPC-UA with ML-guided input generation | Patch known protocol vulnerabilities; monitor for anomalous protocol traffic |
| AI-generated phishing | Active now | Current | Vendor-impersonation emails with correct product terminology | Out-of-band verification for any action requested via email; access brokering for vendor sessions |
| Deepfake voice/video | Emerging | 1-2 years | Fake vendor support call requesting remote access | Pre-registered vendor contacts with callback verification |
| Autonomous PLC exploitation | Not observed | 5+ years | AI independently discovers and exploits PLC vulnerability | Standard patching and segmentation (same defense regardless) |
| AI-driven process manipulation | Not observed | 5-10+ years | AI understands process physics to cause targeted damage | Safety instrumented systems (SIS); air-gapped safety controllers |
| Self-propagating AI worm in OT | Not observed | 5-10+ years | AI autonomously moves across heterogeneous OT devices | Network segmentation; protocol-aware firewalling |
Defenses That Work Regardless of Whether the Attacker Uses AI
The defensive playbook for AI-accelerated attacks is the same playbook that works against human-driven attacks. AI changes the speed and scale of certain attack phases, not the fundamental attack outcomes. Defend against the outcomes:
Visibility
You cannot defend what you cannot see. Passive network discovery across IT and OT segments gives you:
- A complete asset inventory (including devices that "aren't supposed to be there")
- Baseline traffic patterns that make anomalous reconnaissance detectable
- Protocol-level visibility into what commands are being sent to OT devices
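As a sketch of what protocol-level visibility buys you, the following hypothetical `CommandBaseline` learns which Modbus function codes each device normally receives from passively captured frames, then flags anything new, such as a first-ever write to a device that has only ever been read. The class, frames, and addresses are illustrative, not a specific product's API:

```python
from collections import Counter

def modbus_function_code(frame: bytes) -> int:
    """Extract the function code from a Modbus TCP frame (byte 7, after the MBAP header)."""
    return frame[7]

class CommandBaseline:
    """Learn which Modbus function codes are normal per device, flag new ones."""
    def __init__(self):
        self.seen = {}  # device address -> Counter of observed function codes

    def observe(self, device, frame):
        self.seen.setdefault(device, Counter())[modbus_function_code(frame)] += 1

    def is_anomalous(self, device, frame):
        # A function code never seen for this device is worth an alert, e.g. a
        # write (0x06/0x10) appearing on a device that has only ever been polled.
        return modbus_function_code(frame) not in self.seen.get(device, ())

baseline = CommandBaseline()
read_frame = bytes.fromhex("000100000006010300000001")   # Read Holding Registers (0x03)
write_frame = bytes.fromhex("000200000006010600000001")  # Write Single Register (0x06)
baseline.observe("10.0.5.20", read_frame)
print(baseline.is_anomalous("10.0.5.20", write_frame))   # True: first write ever seen
```

This is exactly the traffic pattern that makes AI-accelerated reconnaissance visible: the probing itself generates commands the baseline has never seen.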
Segmentation
AI-accelerated reconnaissance is only valuable if the attacker can reach the target. Segmentation limits what an attacker can see and touch:
- Isolate OT zones from IT zones at Layer 3
- Restrict cross-zone traffic to explicitly allowed flows
- Segment vendor access to only the assets covered by the vendor's scope
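The "explicitly allowed flows" rule can be sketched as a default-deny check. The zone names, address ranges, and `ALLOWED_FLOWS` table below are hypothetical examples, not a recommended layout for any real plant:

```python
from ipaddress import ip_address, ip_network

# Illustrative zone map; real deployments derive this from the firewall config.
ZONES = {
    "it_corp":    ip_network("10.1.0.0/16"),
    "ot_dmz":     ip_network("10.2.0.0/24"),
    "ot_control": ip_network("10.3.0.0/24"),
}

# Explicitly allowed cross-zone flows: (source zone, destination zone, dest port).
ALLOWED_FLOWS = {
    ("it_corp", "ot_dmz", 443),     # historian mirror over TLS
    ("ot_dmz", "ot_control", 502),  # Modbus TCP from the DMZ data collector only
}

def zone_of(ip):
    for name, net in ZONES.items():
        if ip_address(ip) in net:
            return name
    return None

def flow_allowed(src_ip, dst_ip, dst_port):
    """Default-deny: a flow passes only if its (zone, zone, port) tuple is listed."""
    return (zone_of(src_ip), zone_of(dst_ip), dst_port) in ALLOWED_FLOWS

print(flow_allowed("10.2.0.5", "10.3.0.10", 502))  # True: DMZ collector to PLC
print(flow_allowed("10.1.4.9", "10.3.0.10", 502))  # False: corp host straight to PLC
```

Note what default-deny does to AI-accelerated reconnaissance: however good the attack plan, a corp-zone host probing Modbus on the control network is dropped before the first frame reaches a device.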
Access Control
AI-powered social engineering is designed to obtain credentials or trick operators into granting access. Brokered access neutralizes this:
- No persistent credentials for vendor or remote sessions — access is provisioned per session
- MFA on every session — a compromised password alone is insufficient
- Session recording — even if an attacker gains access, their actions are captured
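A per-session grant can be sketched as a short-lived, asset-scoped, signed token issued by the broker. The `grant_session`/`validate_session` helpers and the vendor name are illustrative; a real broker layers MFA and session recording on top of a scheme like this:

```python
import hashlib
import hmac
import secrets
import time

SIGNING_KEY = secrets.token_bytes(32)  # held by the broker, never by the vendor

def grant_session(vendor_id, asset, ttl_seconds=3600, now=None):
    """Provision a single-session grant: scoped to one asset, expiring, signed."""
    now = time.time() if now is None else now
    expires = int(now + ttl_seconds)
    claims = f"{vendor_id}|{asset}|{expires}"
    sig = hmac.new(SIGNING_KEY, claims.encode(), hashlib.sha256).hexdigest()
    return f"{claims}|{sig}"

def validate_session(token, asset, now=None):
    """A grant is valid only for its named asset and only before expiry."""
    now = time.time() if now is None else now
    try:
        vendor_id, granted_asset, expires, sig = token.split("|")
    except ValueError:
        return False
    claims = f"{vendor_id}|{granted_asset}|{expires}"
    expected = hmac.new(SIGNING_KEY, claims.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    return granted_asset == asset and now < int(expires)

token = grant_session("acme_support", "plc-07", ttl_seconds=3600, now=1000)
print(validate_session(token, "plc-07", now=2000))  # True: in scope, within TTL
print(validate_session(token, "plc-09", now=2000))  # False: wrong asset
print(validate_session(token, "plc-07", now=9000))  # False: expired
```

This is why brokered access blunts AI-powered social engineering: even a perfectly convincing phishing email cannot yield a credential that outlives the session or reaches beyond one asset.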
Configuration Integrity
If an attacker reaches an OT device (by any method, AI-assisted or not), the final defense is detecting unauthorized changes:
- Monitor PLC programs and HMI configurations for unexpected modifications
- Maintain offline backups of all OT configurations
- Alert on firmware changes outside approved maintenance windows
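Detecting unauthorized changes reduces to comparing current configuration hashes against an approved baseline. A minimal sketch with hypothetical file names; real tools also verify the logic running online in the PLC, not just exported project files:

```python
import hashlib
from pathlib import Path

def snapshot(config_dir):
    """Hash every configuration artifact (PLC program exports, HMI project files)."""
    return {
        p.name: hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(Path(config_dir).iterdir()) if p.is_file()
    }

def detect_drift(baseline, current):
    """Report anything changed, added, or removed since the approved baseline."""
    return {
        "changed": [n for n in baseline if n in current and current[n] != baseline[n]],
        "added":   [n for n in current if n not in baseline],
        "removed": [n for n in baseline if n not in current],
    }

# Hypothetical hashes standing in for two snapshot() runs around a maintenance window.
approved = {"plc07.acd": "a1b2", "hmi_main.mer": "c3d4"}
observed = {"plc07.acd": "ffff", "pump_station.acd": "e5f6"}
print(detect_drift(approved, observed))
# {'changed': ['plc07.acd'], 'added': ['pump_station.acd'], 'removed': ['hmi_main.mer']}
```

Any non-empty drift report outside an approved maintenance window is an alert, regardless of whether the change was made by a human operator, a compromised vendor session, or an AI-assisted intruder.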
Plan for Acceleration, Not Revolution
AI is making existing attack techniques faster and cheaper — accelerating the trends documented in the Dragos 2026 OT threat report. It is not creating new categories of OT attacks. The operators who will handle this shift well are the ones who already have visibility into their networks, segmentation between zones, and controlled access for every session. If you have those foundations, AI-powered attacks hit the same walls as every other attack. If you don't, the timeline to get compromised just got shorter.