News

Understanding The Weaponization Of Agentic AI

Steve Durbin
Published 30 - May - 2025
Read the full article on Forbes
riskemerging threatsforbestechnologygovernancepeopleai

With the rise of agentic AI, artificial intelligence enters a new phase in its iterative development. Unlike traditional AI, which requires human input and relies on predefined rules and programmed logic, agentic AI operates with a level of autonomy and purposeful decision-making. These systems dynamically perceive their environments, synthesizing data from diverse sources. They then reason through complex scenarios, formulate actionable plans and collaborate with other agents to execute tasks—whether that’s autonomously detecting and responding to sudden cyber threats or predicting demand to better manage inventory levels in supply chains.

Gartner estimates that by 2028, AI agents will autonomously make “15% of day-to-day work decisions.”

While AI agents hold immense potential to reshape industries, their ability to operate autonomously makes them highly attractive hacking tools to adversaries and threat actors.

The Dark Side Of AI Revolution: Weaponized AI Agents

As ethically used AI agents transform processes for efficiency gains, cybercriminals can conceivably use agentic AI to automate attacks and create adaptive malware that learns, evolves and evades defenses. Automation has rendered precise, large-scale attacks disturbingly affordable, with 78% of CISOs reporting an increase in AI-based threats.

The Many Faces Of AI-Powered Cybercrime

Cybercriminals can exploit the agility of agentic AI to automate deepfake, social engineering and phishing campaigns; schedule and execute tasks to completion without human oversight; and participate in multi-agent collaboration to provide a multiplier effect.

Agentic AI can automate cyberattacks in several ways, including:

Polymorphic Malware

With agentic AI, the polymorphic ability of malware to transform its code structure or appearance with each infection is further reinforced. It can now evolve to match existing security systems and learn from unsuccessful attacks, self-improving in real time to continually evade defenses.

Discovering Weak Links

Agentic AI can scan vast networks and systems independently, detecting misconfigured devices, open ports and unpatched software, identifying weak points in security frameworks.

Synthetic Identity Fraud

Agentic AI can collect vast amounts of personal information from public sources, including social media accounts, hacked databases and stolen credentials on the dark web. The AI models can fabricate synthetic identities that appear highly realistic, including fake names, Social Security numbers or other identifiers that conform to the expected format. Agentic AI can facilitate the simultaneous use of multiple synthetic identities to execute massive fraud—for instance, taking out loans from several financial institutions or executing fake transactions on credit lines tied to synthetic identities.

Multistage Campaigns

Agentic AI can use information from one interaction to inform subsequent interactions. For instance, a phishing attack can trick an individual into revealing a small piece of information in the initial round of attacks. The AI can then use that information to plan its next maneuver, launching multistage campaigns.

Multimodal Social Engineering

Agentic AI has the potential to transform social engineering through multimodal attacks that exploit a variety of communication channels, like text messages, phone calls or social media, to manipulate targets more persuasively. For instance, AI can approach a victim through social media to build rapport and then follow up with a call using synthesized voices to seal the deception.

When Autonomous Agents Go Rogue

AI agents employ machine learning to continually learn from vast volumes of real-time data. Uncontrolled access to huge volumes of data and autonomy can compromise an organization’s security when AI agents turn rogue and deviate from their intended purpose. This may occur due to an unintentional user error or a programming flaw.

Attackers can leverage AI agents to attack large language models (LLMs) using methods such as indirect prompt injection, where attackers inject malicious commands into external data sources that the AI reads; data poisoning, where the data used to train the AI model is poisoned with false information; or data exfiltration, where attackers use prompts to hack LLMs to reveal sensitive information.

Managing The Threat Landscape: Strategies To Secure AI

The following can help organizations protect themselves from malicious AI agents:

Implement AI-Based Anomaly Detection

AI-enabled monitoring tools can identify anomalous system activities, such as unauthorized data access and unusual user behavior, which could be indicators of a rogue AI presence.

Deploy Data Protection Mechanisms

Use role-based access controls (RBAC) to limit access to sensitive data per the user role within the organization. Grant privileges on a need-to-know basis to avoid unnecessary exposure to sensitive data. Enhance access controls by imposing multiple verification factors for accessing data. Encrypt sensitive data to reduce unauthorized access.

Reinforce AI Resilience

Improve the resilience of AI systems against malicious attacks by retraining them on historical data of past adversarial attacks or subjecting them to hypothetical attacks.

Ensure Data Integrity

Use high-quality, unbiased training datasets to avoid compromising the models. Trustworthy datasets reduce the risk of errors and biases, preventing the model from being trained on malicious data.

The ability of agentic AI to make decisions and self-learn makes them seem formidable, like something out of science fiction. Automated reconnaissance, combined with AI-driven deception, enables cyber adversaries to elude security measures in unprecedented ways. Threats are becoming more intelligent. As I discussed in my previous article, organizations must respond in kind with reciprocal measures and design smarter algorithms to power AI-enabled cybersecurity defenses that can counter these malign agents.

Understanding The Weaponization Of Agentic AI
Read the full article on Forbes