
When AI Turns Rogue: Mitigating Insider Threats in the Age of Autonomous Agents


The prolific rise of AI agents is creating new challenges for security and IT teams. As organizations stand on the cusp of this shift toward agent-automated workflows for business continuity tasks, recent testing has found that AI agents can exhibit unsafe or deceptive behaviors under certain conditions, creating a new insider threat for businesses across industries.

This creates a critical need for organizations to properly monitor AI agents that access sensitive data and act without human oversight, since such agents can introduce new classes of risk that are faster, less predictable, and harder to attribute. The reality of these risks is twofold. On one hand, security teams need to be prepared for bad actors who use AI agents to strengthen social engineering attacks targeting their employees. On the other, they must be prepared for company-sanctioned internal AI agents whose behavior may create an entirely new set of security risks and vulnerabilities.

How Internal AI Agents Can Lead to Insider Threats

AI agents pose two key internal security risks within enterprise networks. First, they operate autonomously, without the ethical boundaries or built-in accountability that guide human workers. Because they act on inferred goals rather than explicit instructions, their efficiency and persistence can push boundaries and lead them to act without approval. Without the necessary checks and balances in place, exposures can easily be created and overlooked.
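
One such check is a human-in-the-loop approval gate that holds an agent's sensitive actions until a person explicitly signs off. The sketch below is a minimal illustration of the idea; the names used here (AgentAction, SENSITIVE_ACTIONS, request_human_approval) are hypothetical and do not belong to any real agent framework.

```python
# Minimal sketch of a human-in-the-loop approval gate for agent actions.
# All names here are illustrative assumptions, not a real framework API.
from dataclasses import dataclass

# Actions an agent must never take on inferred goals alone.
SENSITIVE_ACTIONS = {"delete_records", "export_data", "modify_permissions"}

@dataclass
class AgentAction:
    agent_id: str
    name: str    # e.g. "export_data"
    target: str  # the resource the agent wants to touch

def request_human_approval(action: AgentAction) -> bool:
    """Stand-in for a real approval workflow (ticket, chat prompt, etc.)."""
    answer = input(f"Approve {action.name} on {action.target} "
                   f"by {action.agent_id}? [y/N] ")
    return answer.strip().lower() == "y"

def execute_with_checks(action: AgentAction) -> None:
    # Sensitive actions are held until a human explicitly approves them.
    if action.name in SENSITIVE_ACTIONS and not request_human_approval(action):
        print(f"BLOCKED: {action.name} by {action.agent_id} was not approved.")
        return
    print(f"Executing {action.name} on {action.target}...")

execute_with_checks(AgentAction("report-bot-01", "export_data", "crm/customers"))
```

In practice, the approval step would route through an existing change-management or ticketing workflow rather than a console prompt, but the principle is the same: the agent cannot complete a sensitive action on its own authority.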

These agents may also have access to vast amounts of data but lack the ability to differentiate between privileged and routine information. As a result, even simple tasks such as analyzing datasets could leak data or expose it to external parties. This challenge becomes even more complex when AI is scaled across disparate systems and workflows with different data rules and protocols. Agents operating across regions often violate location-specific data handling rules due to inconsistent policy enforcement, compounding security risks. The reality is that only 30% of U.S. businesses are actively mapping which AI agents have access to critical systems, creating a major security blind spot.
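
A simple way to close that gap is to check every agent read against a data classification map before it happens. The following is a minimal sketch under assumed inputs: the classification labels, resource paths, and agent grants are invented for illustration, and a real deployment would pull them from a data catalog and an IAM system.

```python
# Minimal sketch: distinguish privileged from routine data before an
# agent reads it. Labels, paths, and grants below are hypothetical.
DATA_CLASSIFICATION = {
    "sales/q3_summary.csv": "routine",
    "hr/salaries.csv": "privileged",
    "legal/contracts.db": "privileged",
}

AGENT_GRANTS = {
    "analytics-agent": {"routine"},               # no privileged access
    "hr-audit-agent": {"routine", "privileged"},  # explicitly elevated
}

def agent_can_read(agent_id: str, resource: str) -> bool:
    # Unlabeled resources are treated as privileged (default-deny).
    classification = DATA_CLASSIFICATION.get(resource, "privileged")
    return classification in AGENT_GRANTS.get(agent_id, set())

for agent, resource in [("analytics-agent", "sales/q3_summary.csv"),
                        ("analytics-agent", "hr/salaries.csv")]:
    verdict = "ALLOW" if agent_can_read(agent, resource) else "DENY"
    print(f"{verdict}: {agent} -> {resource}")
```

The default-deny choice matters: an agent encountering data no one has classified yet is blocked, rather than silently treating it as routine.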

How External AI Agents Can Lead to Insider Threats

One of the most common modes of entry for hackers is phishing and social engineering. Bad actors and cyber gangs are using AI agents to enhance these attacks and carry out sophisticated deepfake and impersonation incidents. AI agents can be used to craft phishing and social engineering lures that appear more real and credible to an untrained eye. In 2024, 60% of breaches involved a human element, and almost a quarter of those originated from social engineering. That share is only going to grow as AI agents continue to bolster these methods of entry.

AI agents can be trained to sift through extensive amounts of social media and personal data across platforms and channels to send targeted, personalized communications. Messaging that impersonates a sender's tone and voice is more likely to succeed and harder to distinguish from legitimate communications. AI agents can also learn to adapt their campaigns to improve efficacy, pivoting when an attack doesn't work or emulating one that did.

When it comes to deepfakes, AI agents can generate these impersonations at outstanding speeds, enabling manipulation at scale. Bad actors are also using AI agents to conduct real-time video calls with little to no latency, improving their credibility. They can react to a target's responses in real time and pivot their approach if necessary. These enhanced attacks increase the risk of insider threats from unaware employees who may be tricked into exposing sensitive data, authorizing transactions, or sharing credentials with threat actors.

The Role of Security Teams in Addressing Agentic AI Challenges

Security teams need to take specific actions to limit the blind spots created by AI agents. IT departments should first and foremost limit AI agent access to sensitive data through data governance and real-time visibility controls. Steps to accomplish this include classifying AI agent identities in IAM systems and monitoring their activity with the same rigor as privileged accounts. The visibility piece goes further: network activity should be monitored against established behavioral baselines assigned to each AI agent. With that visibility, IT teams can track privileged access movement and anomalies, not only supporting prevention efforts but also accelerating containment and recovery in the event of a rogue agent.
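
To make the baseline idea concrete, here is a minimal sketch of flagging agent activity that deviates from its established norm, treating the agent like a privileged account. The sample log, baseline counts, and threshold are illustrative assumptions; production monitoring would use richer signals than a simple request count.

```python
# Minimal sketch: flag agent activity that strays from its baseline.
# Baseline values and the z-score threshold are illustrative only.
from statistics import mean, stdev

# Hourly request counts observed while the agent behaved normally.
baseline = [42, 38, 45, 40, 44, 39, 41, 43]
mu, sigma = mean(baseline), stdev(baseline)

def is_anomalous(observed: int, z_threshold: float = 3.0) -> bool:
    """Flag activity more than z_threshold standard deviations from baseline."""
    return abs(observed - mu) > z_threshold * sigma

# Simulated activity log for one agent identity.
for hour, count in [("09:00", 44), ("10:00", 41), ("11:00", 310)]:
    if is_anomalous(count):
        print(f"{hour}: {count} requests -> ANOMALY, alert and contain")
    else:
        print(f"{hour}: {count} requests -> within baseline")
```

The same pattern extends to other dimensions of agent behavior, such as which systems an identity touches or the volume of data it moves, so that a rogue agent surfaces quickly instead of blending into routine traffic.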

In addition, by identifying new patterns and tactics used in AI agent attacks, organizations can update their defense strategies and train their security systems to recognize and thwart similar threats in the future.

Staying Ahead of Insider Threats Introduced by AI Agents

The reality is that AI agents introduce a novel insider risk due to their autonomy and increasing access to sensitive systems. Furthermore, threat actors will continue to use AI agents to more effectively exploit human vulnerabilities through deepfakes, phishing, and social engineering.

As this technology advances, businesses need to improve their visibility into AI agent access and behavior before adopting them into their workflows. This includes investing in proactive, 360-degree, real-time monitoring alongside an AI agent workforce. While AI agents may solve critical business bottlenecks and challenges, they require a proactive approach to limit future insider threats.

With over two decades of experience designing, managing, and implementing advanced software and electronic systems, Todd is a seasoned technologist passionate about harnessing large language models to transform human-computer interaction. Dedicated to advancing natural language technologies, he continuously seeks to develop impactful, cutting-edge applications that redefine the boundaries of human-machine collaboration.