
Governing the promise and perils of Agentic AI

Steve Durbin
Published 4 February 2026
Read the full article at Digital Insurance

Organizations are moving past simple chatbots and embracing agentic AI: a more advanced form of artificial intelligence that plans, executes and adapts on its own in pursuit of specific goals, with little human involvement. The prospects are tremendous: autonomous systems that can enhance operations and decision-making.

Yet the risks are just as substantial. Autonomous systems can increase the likelihood of errors, blur lines of accountability, and broaden the potential avenues for cyberattacks. To gain the benefits, organizations must balance their goals with solid governance, clear safeguards, and a step-by-step rollout.

What sets agentic AI apart?

Agentic AI represents a departure from conventional tools such as ChatGPT and Copilot. Rather than relying on a single model that responds to one prompt at a time, it uses a network of specialized agents that work collaboratively to achieve goals. These agents can break down tasks, manage resources, and carry out actions across different systems, often through ongoing cycles of planning, execution, and evaluation. This shift from reactive assistance to autonomous goal execution opens the door to expanded capabilities, while also increasing the likelihood and impact of potential risks.

The upside: Speed, scale and strategic value

Agentic AI can automate complex workflows such as alert triage, signal enrichment, script execution, report generation, and ticket management, all at machine speed and without pausing. This allows security operations to detect and respond to threats much faster, reduces workloads, and ensures playbook procedures are followed more consistently.

Beyond increasing operational effectiveness, agentic AI provides strategic advantages. These types of systems serve as tireless collaborators that incorporate threat intelligence, asset data, and historical incidents to recommend or take optimal next actions. They can run experiments in contained environments, validate assumptions and refine response tactics before problems intensify.

The downside: Missteps, misuse and misalignment

Yet autonomy cuts both ways. An agent empowered to act can just as easily misread objectives or react in unintended ways. Ambiguous instructions or misaligned objectives can trigger unforeseen changes at scale. The same integrations that let agents fetch data and run tools can be turned to malicious ends—attackers can inject prompts or compromise dependencies to steal data or move laterally across a network.

Because agentic systems operate in loops and chains, small errors can quickly compound. Without adequate oversight, these systems can drift into behaviors that are hard to track or undo. This creates an important governance challenge: who sets the agent's goals, who defines its operational limits, and can its actions be traced and verified?
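One way to keep a looping agent from compounding errors is to bound its autonomy in the run loop itself. The sketch below is illustrative only, not any real framework: it enforces a step budget, an action allowlist, and an escalation path to a human. Every name in it (the actions, the callbacks) is a hypothetical placeholder.

```python
# Illustrative sketch: a bounded agent run loop. All names are hypothetical.

ALLOWED_ACTIONS = {"enrich_alert", "open_ticket", "fetch_logs"}
MAX_STEPS = 10  # step budget: the agent may not loop indefinitely

def run_agent(plan_next_step, execute, escalate):
    """plan_next_step() returns (action, args), or None when the goal is met."""
    for step in range(MAX_STEPS):
        proposal = plan_next_step()
        if proposal is None:
            return "goal_reached"
        action, args = proposal
        if action not in ALLOWED_ACTIONS:
            # Out-of-scope action: stop and hand off to a human reviewer.
            escalate(f"step {step}: blocked action '{action}'")
            return "escalated"
        execute(action, args)
    # Budget exhausted: halt rather than let errors compound.
    escalate(f"step budget of {MAX_STEPS} exhausted")
    return "halted"
```

The point of the sketch is that the limits live outside the agent's own reasoning: even a confused planner cannot exceed the budget or the allowlist.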

Governance: The backbone of responsible autonomy

To reap the benefits of agentic AI without losing control, organizations must pursue a multi-step strategy:

  1. Establish scope and risk tolerance: Set clear limits for agentic AI by defining what agents can do, access and interact with, and document the business reasons for granting them autonomy. This prevents scope creep and keeps agent activity within the organization’s risk appetite.
  2. Least privilege by design: Restrict agent access by using specific identity controls, rotating credentials, network segmentation and permissions for each agent. Grant autonomy only when necessary and always within closely monitored environments.
  3. Human-in/on-the-loop: Critical decisions should always involve human review, especially those with legal, ethical or customer-facing implications. Define explicit escalation procedures to guarantee timely human involvement when needed.
  4. Monitor agent activity: Record all agent actions and detail the reasons behind decisions, information sources and tools used. This supports thorough audits and improves transparency for internal reviews and external oversight.
  5. Kill switches and rollback plans: Establish quick containment procedures and tested rollback plans to reduce damage when things go wrong.
  6. Red-team agents: Regularly test adversarial scenarios, like prompt injection, connector exploitation, and data poisoning, during development and in production to strengthen resilience and refine defenses.
  7. Supply-chain hygiene: Treat agent frameworks and connectors as essential parts of your supply chain. Carefully check all dependencies, fix versions to prevent unexpected updates, and quickly apply patches to reduce security risks.
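Several of the controls above (least privilege, activity logging, a kill switch) can be combined in a single authorization layer that sits between agents and their tools. The sketch below is a minimal illustration under assumed names, not a production control plane: each agent has an explicit permission set, every decision is appended to an audit log, and a kill switch halts all further actions.

```python
# Illustrative sketch of a per-agent authorization layer.
# Names and structure are assumptions, not any real product's API.
import json
import time

class AgentGovernor:
    def __init__(self, permissions):
        self.permissions = permissions   # agent_id -> set of allowed tools
        self.audit_log = []              # append-only record of every decision
        self.killed = False

    def kill(self):
        """Kill switch: immediately stop authorizing any action."""
        self.killed = True

    def authorize(self, agent_id, tool, reason):
        """Check least-privilege permissions and log the decision."""
        allowed = (not self.killed
                   and tool in self.permissions.get(agent_id, set()))
        self.audit_log.append(json.dumps({
            "ts": time.time(), "agent": agent_id, "tool": tool,
            "reason": reason, "allowed": allowed,
        }))
        return allowed
```

Because denials are logged alongside approvals, the same record supports the audit and transparency goals in step 4, and the kill switch in step 5 requires no changes to the agents themselves.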

Cross-functional collaboration

Teams across risk management, legal, compliance, IT and business operations must coordinate on shared goals, usage policies, testing criteria and incident response procedures.

Ethics and transparency are not optional. They are essential to risk management. Organizations must disclose when decisions are automated, explain how data is used, and set clear lines of accountability.

Training is essential. Teams need to develop the ability to define strong objectives and constraints, since vague or poorly constructed prompts can lead to misconfigurations. A shared understanding of the strengths and limitations of agentic AI helps to avoid misuse and supports reliable performance.

Start small, scale smart

Adopting agentic AI should be gradual and deliberate. Start with pilot projects in clear, focused environments that enable measurable results.

Possible use cases include improving alert triage, speeding up evidence collection, reducing change management defects and increasing runbook accuracy. Connect these outcomes to key business metrics and the organization’s acceptable risk levels. Autonomy should be expanded only in parallel with advancements in monitoring, control practices and team readiness.

Intentional autonomy is the way forward

Agentic AI can reshape risk management by automating complex, repetitive and labor-intensive tasks at speed. Its autonomy offers real advantages, but it also carries significant risks. Organizations that pair ambitious goals with strong protections, such as transparency, least-privilege access, human oversight, and ongoing assessment, can capture the benefits while managing the risks.
