Thought Leaders
Beyond Human: Securing Agentic AI and Non-Human Identities in a Breach-Driven World

If you’ve been anywhere near an enterprise SOC in the past 18 months, you’ve seen it. The alerts that don’t map to a person. The credentials that belong to “something,” not “someone.” The automation that moves faster than your IR playbook can keep up with.
And lately, it’s not just noise; it’s the root cause of the biggest headlines in our industry. From the Victoria’s Secret cyber incident that disrupted sales and triggered a class-action lawsuit, to multi-tenant cloud compromises that exposed sensitive data across customers, a growing number of breaches now originate from identities we can’t see, don’t track, or barely understand.
The rise of agentic AI (autonomous systems that can act across applications, APIs, and infrastructure) has collided with the explosion of non-human identities (NHIs), including service accounts, bots, API keys, and machine credentials. Together, they’ve created a new attack surface that is expanding faster than most organizations can secure it.
What We Mean by “Agentic AI”
Agentic AI refers to AI “agents” that can take goals and autonomously execute multi-step tasks, often across multiple systems, without human oversight.
- Can call APIs, update databases, or trigger workflows.
- Operates at machine speed – seconds, not minutes.
- Risks: prompt injection, poisoned training data, stolen credentials.
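One common mitigation for the risks above is to deny by default: every action an agent requests is checked against an explicit allowlist before anything executes, so a hijacked or prompt-injected agent can still only do what it was narrowly granted. A minimal sketch; the action names and policy here are hypothetical, not from any particular framework:

```python
# Deny-by-default guardrail for an AI agent's tool calls.
# All action names below are illustrative examples.

ALLOWED_ACTIONS = {
    "crm.read_contact",  # read-only lookups are permitted
    "tickets.create",    # the agent may open support tickets
}

def authorize(action: str) -> bool:
    """Permit an action only if it is explicitly allowlisted."""
    return action in ALLOWED_ACTIONS

def execute_agent_action(action: str, payload: dict) -> str:
    """Gate every requested action through the allowlist before executing."""
    if not authorize(action):
        # Anything not allowlisted is refused (and should be logged).
        return f"DENIED: {action}"
    return f"EXECUTED: {action}"

# A prompt-injected request to escalate privileges is refused;
# an allowlisted read-only call goes through.
print(execute_agent_action("iam.update_role", {"role": "admin"}))
print(execute_agent_action("crm.read_contact", {"id": 42}))
```

The key design choice is that the policy lives outside the model: even if an attacker fully controls the agent’s instructions, the allowlist still bounds what it can execute.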
The Scale We’re Dealing With
Machine identities already outnumber humans in most enterprises, in some cases by 20:1 or more. By 2026, that ratio is projected to nearly double. These NHIs are in your cloud workloads, CI/CD pipelines, and API integrations, and now they’re driving LLM-based automation.
The challenge: Most IAM systems were built to manage people, not machines.
- Accounts with no clear owner.
- Static credentials that never expire.
- Privileges far beyond what’s needed.
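All three of those gaps can be surfaced with a simple inventory audit, assuming you can export NHI records with an owner, a last-rotation date, and granted versus actually used permissions. A sketch with hypothetical field names, not any vendor’s schema:

```python
from datetime import date

# Hypothetical NHI inventory export; names and fields are illustrative.
inventory = [
    {"name": "svc-billing", "owner": None, "last_rotated": date(2021, 3, 1),
     "granted": {"read", "write", "admin"}, "used": {"read"}},
    {"name": "ci-deployer", "owner": "alice", "last_rotated": date(2025, 7, 1),
     "granted": {"deploy"}, "used": {"deploy"}},
]

MAX_CREDENTIAL_AGE_DAYS = 90  # illustrative rotation policy

def audit(nhi: dict, today: date) -> list[str]:
    """Flag the three classic gaps: no owner, stale credential, excess privilege."""
    findings = []
    if nhi["owner"] is None:
        findings.append("no owner")
    if (today - nhi["last_rotated"]).days > MAX_CREDENTIAL_AGE_DAYS:
        findings.append("stale credential")
    if nhi["granted"] - nhi["used"]:  # permissions granted but never exercised
        findings.append("over-privileged")
    return findings

for nhi in inventory:
    print(nhi["name"], audit(nhi, date(2025, 9, 1)))
```

Even this crude check separates a well-kept deployer account from an ownerless, over-privileged service account that hasn’t been rotated in years.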
It’s not hypothetical. Breaches at Uber and Cloudflare have been traced back to compromised machine accounts – the kind that never get phishing simulations but can unlock critical infrastructure.
Agentic AI: A Force Multiplier for Risk
On one hand, agentic AI is an operational game-changer. On the other hand, if its access is compromised, you’ve got automation running for the adversary.
Consider:
- An AI agent with read/write rights to SaaS applications could automate data theft without tripping human-centric anomaly detection.
- If that same agent can alter IAM roles or deploy cloud resources, you’ve got privilege escalation on autopilot.
- Prompt injection attacks and model poisoning mean attackers can redirect an AI agent without stealing its credentials.
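The first scenario hinges on detection tuned for people: thresholds calibrated to human behavior miss an agent exfiltrating at machine speed. One simple counter is a rate check on each identity’s API activity. A sketch with illustrative thresholds and identities, not a production detector:

```python
# Rate-based anomaly check: an identity issuing API calls far faster than
# any human could is flagged, even if each individual call looks legitimate.
# The ceiling and the event data below are illustrative.

HUMAN_MAX_CALLS_PER_MIN = 30  # a generous ceiling for a person working a UI

# (identity, timestamp in seconds) pairs, e.g. from an API gateway log
events = [
    ("ai-agent-7", 0.0), ("ai-agent-7", 0.2), ("ai-agent-7", 0.4),  # machine speed
    ("j.doe", 0.0), ("j.doe", 12.0),                                # human pace
]

def calls_per_minute(identity: str, events: list[tuple[str, float]]) -> float:
    """Average call rate for one identity across its observed time span."""
    times = sorted(t for who, t in events if who == identity)
    if len(times) < 2:
        return 0.0
    span = times[-1] - times[0]
    return (len(times) - 1) * 60.0 / span if span > 0 else float("inf")

for who in ("ai-agent-7", "j.doe"):
    rate = calls_per_minute(who, events)
    print(who, "FLAG" if rate > HUMAN_MAX_CALLS_PER_MIN else "ok")
```

A real deployment would baseline each NHI against its own history rather than a human ceiling, but the point stands: machine identities need machine-calibrated detection.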
We’ve seen how dangerous a mismanaged service account can be. Now give that account the ability to make decisions, and the stakes multiply.
Case Studies: Breach Forensics in the AI + NHI Era
Victoria’s Secret (May 2025)
Over the Memorial Day weekend, Victoria’s Secret took down its U.S. website and some in-store services in what appeared to be a ransomware-style incident (The Hacker News, Bitdefender). The outage lasted days and contributed to an estimated $20 million drop in Q2 revenue. A subsequent class-action lawsuit alleges that the retailer failed to encrypt sensitive data, skipped critical security audits, and neglected employee cybersecurity training (Top Class Actions), all gaps that mirror the weaknesses often seen in unmanaged non-human identities (NHIs) with privileged access.
Google Salesforce Breach (June 2025)
In June, attackers linked to the ShinyHunters (UNC6040) group gained access to a corporate Salesforce CRM instance via voice-phishing (vishing) techniques (ITPro). Once inside, they exfiltrated contact data belonging to small- and medium-sized business customers. While this began with human-targeted social engineering, the pivot point was a connected application (a form of machine identity) that allowed the intruders to move laterally without tripping user-behavior analytics.
Retail & Luxury Brand Outages (M&S, Cartier, The North Face)
A wave of attacks on major retailers, including The North Face and Cartier, exposed customer names, emails, and select account metadata (Sangfor, WSJ). Marks & Spencer was hit particularly hard: a ransomware and supply-chain attack attributed to the ‘Scattered Spider’ group disrupted click-and-collect services for more than 15 weeks and is estimated to have cost the retailer up to £300 million. In each case, third-party integrations and API connections, often backed by NHIs, became silent enablers for the attackers.
What is a Non-Human Identity (NHI)?
NHIs are credentials and accounts used by machines, not people. Examples:
- Service accounts for databases or apps.
- API keys for cloud-to-cloud integration.
- Bot credentials for automation scripts.
Risks: over-privileged, under-monitored, and often left active long after their purpose has ended.
Why Governance is Lagging
Even mature IAM programs hit roadblocks here:
- Discovery Gaps: Many orgs can’t produce a complete NHI inventory.
- Lifecycle Neglect: NHIs often persist for years with no regular review.
- Accountability Vacuums: Without a human owner, cleanup falls through the cracks.
- Privilege Creep: Permissions accumulate as roles change, but revocation lags.
This is the same problem we’ve been fighting with human accounts for decades — only now, each NHI can operate 24/7, at scale, without triggering “human” risk controls.
A Vendor-Neutral Playbook for Securing NHIs and AI Agents
- Comprehensive Discovery: Map every NHI and AI agent, including privileges, owners, and integrations.
- Assign Human Owners: Make someone accountable for each NHI’s lifecycle.
- Enforce Least Privilege: Match permissions to actual usage; remove excess.
- Automate Credential Hygiene: Rotate keys, use short-lived tokens, auto-expire dormant accounts.
- Continuous Monitoring: Detect anomalies like unexpected API calls or privilege escalations.
- Integrate AI Governance: Treat AI agents like high-privilege admins – log activity, enforce policy, enable just-in-time access.
(These steps align with NIST CSF, CIS Control 5, and emerging Gartner IVIP best practices.)
Why This Has to Happen Now
This isn’t just about chasing an AI trend. It’s about recognizing that the identity perimeter has shifted from people to processes. Attackers know this. If we keep treating machine identities as an afterthought, we’re giving adversaries the blind spot they need to operate undetected.
The teams that will stay ahead are the ones that make NHI and agent governance a first-class element of identity security, with visibility, ownership, and lifecycle discipline in place before the next breach forces the issue.