“Zero-click. Zero trace. Zero trust — EchoLeak is a shot across the bow of AI integration. And it doesn’t stop at IT.”
The EchoLeak exploit (CVE-2025-32711) is making headlines for its stealth and sophistication but what’s not being talked about enough is how this same class of attack can impact ICS and OT environments, especially those undergoing digital transformation.
This isn’t just a Microsoft Copilot problem. It’s a problem for any AI-enabled system ingesting data and automating action, whether in the cloud or on the plant floor.
EchoLeak in a Nutshell
The vulnerability exploited Microsoft 365 Copilot through a zero-click indirect prompt injection. A malicious email was enough. No user interaction required. When a user asked a related question, the AI parsed the attacker’s hidden commands and quietly exfiltrated internal data leveraging the AI’s own context engine against itself.
What About ICS/OT?
Let’s connect the dots.
Operational environments are increasingly incorporating AI whether for predictive maintenance, anomaly detection, supply chain optimization, or even interfacing with IT ticketing systems via tools like Copilot or other generative interfaces.
Here’s where the risk starts to bleed across domains:
- AI-Augmented Engineers: Field engineers may use AI copilots to pull documentation, logs, or past troubleshooting steps. If those systems ingest malicious or spoofed input (e.g., fake alert tickets, poisoned log entries), you have a context-based injection scenario.
- Data Leakage from HMI/SCADA Logs: If AI tools are used to summarize or interact with ICS data, they may inadvertently expose sensitive control parameters, device configurations, or operational history especially if prompt injections or poisoned data make it into historian records or support cases.
- IT/OT Convergence Risks: Many organizations now link OT event streams or performance metrics with enterprise IT platforms. If AI copilots on the IT side are exposed to that shared context, an attacker may pivot into operational insights, maintenance schedules, or even vulnerabilities in PLC firmware versioning.
ICS/OT Threat Scenarios from EchoLeak-Like Attacks
- Indirect Prompt Injection via Maintenance Email
A malicious email referencing “safety shutdown procedures” triggers an AI assistant to summarize and forward a critical ICS shutdown process to an attacker-controlled destination. - Poisoned Historian Data
A rogue insider or compromised endpoint inserts AI-readable comments or tags into log entries that get processed by AI analytics engines effectively smuggling out control logic or alarm thresholds. - Cross-Domain Escalation
An attacker targets the AI assistant used by IT helpdesk teams supporting OT. The assistant references an OT system ticket, follows the attacker’s hidden prompt, and leaks credentials or device access information.
Defensive Guidance from The Pointman
For OT-aware security leaders, this is your call to action:
- Keep AI off the plant floor — unless fully risk-modeled. AI should never have unrestricted access to ICS context without strict prompt sanitation and isolation.
- Segment AI systems across IT/OT boundaries. Even in converged environments, hard separation of AI context pools between enterprise data and control systems.
- Red team your AI. Test for prompt injection and context abuse like you would with any other input validation problem because that’s what this is.
- Audit what AI sees. If your AI tools pull from shared document libraries, tickets, emails, or logs, classify and restrict that data like you would crown jewels.
Final Word from The Pointman
EchoLeak isn’t a one-off bug, it’s a preview of where the threat landscape is heading. As AI becomes the bridge between IT and OT, we need to start treating it like both a power tool and a privileged user.
In ICS environments, AI doesn’t need to touch control logic to cause damage. It just needs to know too much and be tricked into sharing it.
Stay sharp. Stay skeptical. Stay resilient.