Why We Can’t Let AI Take the Wheel of Cyber Defense
If you want to waste the incredible potential of artificial intelligence, there is a quick way to do it: confuse automation with actual safety or mistake a shiny new tech feature for true resilience.
We are currently living through a strange and intense moment in the security world. AI development is moving at a speed that most companies honestly can’t handle, yet the market is flooded with sales pitches promising “autonomous” cyber defenses. The narrative is always the same: install this system, and it will clean up your security mess while you go grab a coffee.
Let me be direct: I am extremely skeptical of that promise. AI is an incredibly powerful tool (arguably an indispensable one at this point) but we have to remember that it is still just a tool. The best results don’t come from replacing people with machines; they come from pairing human expertise with AI capabilities.
The Danger of the “Closed Loop”
This distinction isn’t just philosophical; it matters because of how these systems actually function.
When we talk about fully autonomous systems, we are talking about a loop: the AI takes in data, makes a decision, generates an output, and then immediately consumes that output to make the next decision. The entire chain depends on the quality and integrity of that data, because any error introduced in one pass becomes the input to the next.
The problem is that very few organizations can guarantee their data is perfect from start to finish. Supply chains are messy and chaotic. We lose track of where data originated. Models drift away from accuracy over time. If you take human oversight out of that loop, you aren’t building a better system; you are creating a single point of systemic failure and disguising it as sophistication.
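To see why, here is a toy version of that loop in code: a minimal sketch (every name in it is hypothetical) where the model’s output becomes its next input, and a small systematic error quietly compounds because no human checkpoint ever interrupts the cycle.

```python
# A toy closed decision loop: each decision is fed straight back in as
# the next observation. All names here are hypothetical, for illustration.

def model(observation: float) -> float:
    """Stand-in for an AI decision step with a small systematic bias."""
    return observation * 1.05  # a 5% error per pass

signal = 1.0  # the "initial data" the loop starts from
for step in range(10):
    decision = model(signal)
    signal = decision  # closed loop: the output becomes the next input

# Ten unsupervised passes turn a 5% error into roughly 63% drift.
print(f"drift after 10 passes: {signal - 1.0:.2f}")
```

Five percent per pass sounds tolerable; sixty-three percent after ten passes is not. A human checkpoint anywhere in that chain breaks the compounding.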
Transparency is the Only Antidote
To fix this, we need absolute clarity. We need to know exactly where AI is active in our networks, what data it is chewing on, what decisions it is authorized to make, and—crucially—what specific thresholds will trigger an alert for a human to step in.
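To make that concrete, here is a rough sketch of what one entry in such an inventory might record. The field names below are hypothetical, but the questions they answer are exactly the ones above.

```python
# One entry in a hypothetical AI register: where the system runs, what it
# consumes, what it may do, and when a human must step in.
ai_register_entry = {
    "system": "email-triage-model",        # where AI is active
    "data_inputs": ["mail-gateway-logs"],  # what data it is chewing on
    "authorized_actions": ["quarantine_message", "flag_for_review"],
    "forbidden_actions": ["delete_mailbox", "change_firewall_rules"],
    "human_escalation": {
        "min_confidence": 0.90,      # below this, a person decides
        "max_affected_assets": 50,   # wider impact than this triggers an alert
    },
    "owner": "soc-team@example.com",       # who answers for it
}
```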
This requires strong governance and solid policy. But more than that, it requires leaders to look in the mirror and be honest about their appetite for risk. If you wouldn’t put your family in a driverless car that had no steering wheel or brake pedal, why would you hand over your entire cyber defense strategy to an unsupervised algorithm?
Technology fails. It has glitches. Experience has taught me that lesson over and over again.
Resilience is Human
That same experience has taught me something else: when systems go down, they stay down until people fix them. There is no magical self-healing feature that puts everything back together elegantly.
When a breach happens, it is people who rebuild. Engineers are the ones containing the damage and restoring services. Incident commanders are the ones making the tough calls based on imperfect information. AI can and absolutely should support those teams: it is great at surfacing weak signals, prioritizing the flood of alerts, and suggesting possible actions. But the idea that AI will independently put the pieces back together after a major attack is a fantasy.
True resilience ultimately depends on human intervention.
The United Nations’ Scientific Advisory Board is right to say that keeping pace with frontier AI capabilities will be critical if we want to stay resilient over the next decade. The threats are evolving fast. Our adversaries are already using AI to scale up their reconnaissance, fabricate deepfake videos, write more convincing phishing emails, and probe our defenses with relentless speed. We cannot afford to fall behind.
However, “keeping pace” is not the same thing as “ceding control.” Our goal should be responsible acceleration. We need to move fast, yes, but we must do so with governance, transparency, and human judgment baked into the process.
What Does This Look Like in the Real World?
So, how do we actually do this? First, make “human-in-the-loop” the default setting for any AI that can act on your systems or data. Automated containment can save your skin in the first few seconds of an attack, but every autonomous process needs guardrails: it must be auditable, and there must be an explicit hand-off to human operators the moment confidence levels drop or the stakes get too high.
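As an illustration, here is a minimal sketch of that guardrail, assuming a hypothetical containment action that reports its own confidence: the system acts alone only inside a narrow envelope, logs everything, and hands off the moment it leaves that envelope.

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("containment")  # every decision leaves an audit trail

@dataclass
class Verdict:
    action: str        # e.g. "isolate_host"
    confidence: float  # the model's self-reported confidence, 0..1
    blast_radius: int  # how many assets the action would touch

CONFIDENCE_FLOOR = 0.95  # hypothetical thresholds; tune them to your
MAX_BLAST_RADIUS = 1     # own appetite for risk

def handle(verdict: Verdict) -> str:
    """Act autonomously only inside narrow guardrails; otherwise hand off."""
    if verdict.confidence >= CONFIDENCE_FLOOR and verdict.blast_radius <= MAX_BLAST_RADIUS:
        log.info("auto-executing %s (confidence=%.2f)", verdict.action, verdict.confidence)
        return "executed"
    log.warning("handing off %s to a human operator (confidence=%.2f, radius=%d)",
                verdict.action, verdict.confidence, verdict.blast_radius)
    return "escalated"

# The AI may isolate a single host it is sure about...
handle(Verdict("isolate_host", confidence=0.98, blast_radius=1))
# ...but a wide or uncertain action always reaches a person.
handle(Verdict("block_subnet", confidence=0.97, blast_radius=400))
```

The point is not these particular numbers; it is that the thresholds are explicit, written down, and auditable rather than buried inside a vendor’s model.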
Second, get serious about where your data comes from. Map out exactly where your models are getting their input. Validate those sources. Watch for drift. Document why decisions were made. If you cannot trace how an AI arrived at a specific conclusion, you should not let it make changes to your production environment without someone watching.
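One lightweight way to start, sketched here with hypothetical names: attach a provenance record to every feed a model consumes, then compare live inputs against the baseline you validated and pause automated changes when they diverge.

```python
from statistics import mean

# A hypothetical provenance record: enough to answer "where did this
# input come from, and has it changed since we validated it?"
provenance = {
    "source": "edr-telemetry-feed",
    "validated_on": "2024-06-01",
    "baseline_mean": 12.4,   # alerts per host per day at validation time
    "baseline_stdev": 1.8,
}

def drifted(recent_values: list[float], record: dict, z_limit: float = 3.0) -> bool:
    """Flag the feed when the recent mean wanders far from the validated baseline."""
    z = abs(mean(recent_values) - record["baseline_mean"]) / record["baseline_stdev"]
    return z > z_limit

recent = [18.9, 20.1, 19.4, 21.0]  # this week's observations
if drifted(recent, provenance):
    print(f"drift detected in {provenance['source']}: pause automated changes")
```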
Third, treat AI-enabled cyber exercises as a priority for the board, not just the IT department. Run simulations where the tools are wrong, slow, or compromised. Stress-test your escalation paths. Coach your teams to question the AI’s output and to recover when the “smart” system acts stupidly.
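A simple way to build such a drill, again only a sketch with hypothetical names: wrap the real tool in a saboteur that is sometimes wrong and sometimes silent, then score how the overall process copes with the sabotage.

```python
import random

def real_classifier(event: str) -> str:
    """The production tool being exercised (a stand-in here)."""
    return "malicious" if "powershell" in event else "benign"

def sabotaged_classifier(event: str, failure_rate: float = 0.3) -> str:
    """During the drill, the 'smart' tool lies or stalls some of the time."""
    roll = random.random()
    if roll < failure_rate / 2:
        return "benign"  # wrong: waves the attack through
    if roll < failure_rate:
        raise TimeoutError("model unavailable")  # slow or compromised
    return real_classifier(event)

# The drill: feed known-bad events and score how often the process
# still reaches the right outcome despite the sabotage.
random.seed(7)  # reproducible exercise
caught = 0
for _ in range(100):
    try:
        verdict = sabotaged_classifier("encoded powershell download")
    except TimeoutError:
        verdict = "escalate"  # the fallback path the team must rehearse
    if verdict in ("malicious", "escalate"):
        caught += 1
print(f"drill score: {caught}/100 known-bad events handled correctly")
```

Every wrong “benign” verdict the process fails to challenge is exactly the fragility you want to surface in a drill rather than a crisis.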
It is always better to discover fragility during a drill than in the middle of a crisis.
If we do that, if we insist on human plus AI, with integrity at the data layer and accountability at the governance layer, then we can harness the best of this technology without succumbing to its worst risks. That is how we keep pace with frontier capabilities while protecting what matters. Not by outsourcing judgment to a black box, but by making AI an auditable, dependable partner in a resilient human-led defense.