From Human to Hybrid: Inside Exabeam’s 2025 Report on AI-Fueled Insider Risk

Exabeam’s new study, From Human to Hybrid: How AI and the Analytics Gap Are Fueling Insider Risk, makes it clear that the threat has flipped: the greatest danger now comes from within the organization. Four numbers stand out—64% of security professionals now see insiders as the top risk, 76% report shadow AI already in use, only 44% have behavior-centric analytics (UEBA) in place, and 74% believe executives underestimate the problem. Together, these four findings define the landscape the report explores in detail.

The risk flipped inward—and that changes the architecture

If the main threat is inside, “more firewall” isn’t the answer. It’s identity, access, and behavior. Think continuous verification of who is doing what, with which data, and whether that pattern is normal. Regionally, most markets now treat insiders as the primary concern; the main outlier is APJ (Asia-Pacific & Japan), where many still fear external attackers more. For leaders, the practical translation is to shift spend toward:

  • Stronger identity controls (MFA that sticks, risk-based access, least privilege that’s actually enforced).
  • Data-aware monitoring across SaaS, endpoints, storage, and email so abnormal movement is visible.
  • Behavioral analytics that learn normal patterns per person, team, and system—and alert on meaningful deviation.

The organizational implication: security and data owners must work together. If you can’t answer “who touched what sensitive data this week and was it typical for them?” you’re blind to the modern breach path (compromised account → quiet data staging → quick exfil).
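As a minimal sketch of what answering that question looks like in practice, the snippet below compares each user’s sensitive-data volume this week against their own historical baseline. The log format, user names, and the three-sigma threshold are illustrative assumptions, not anything prescribed by the report.

```python
from statistics import mean, stdev

# Hypothetical weekly byte totals of sensitive-data reads per user.
# Real telemetry would come from SaaS, endpoint, storage, and email logs.
history = {  # prior weeks (the baseline)
    "alice": [120_000, 95_000, 110_000, 130_000],
    "bob": [20_000, 25_000, 18_000, 22_000],
}
this_week = {"alice": 125_000, "bob": 400_000}

def is_typical(user, volume, threshold=3.0):
    """Flag users whose weekly volume deviates from their own baseline.

    Returns True/False, or None when there is too little history to judge.
    """
    base = history.get(user, [])
    if len(base) < 2:
        return None
    mu, sigma = mean(base), stdev(base)
    if sigma == 0:
        return volume == mu
    return abs(volume - mu) / sigma <= threshold

for user, vol in this_week.items():
    print(user, "typical" if is_typical(user, vol) else "ANOMALOUS")
```

Even this toy version makes the point: the question is never "is 400,000 bytes a lot?" but "is it a lot for Bob?"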

AI has reshaped the definition of “insider”

Shadow AI is the new shadow IT. Staff paste code, contracts, customer lists, or prompts with sensitive context into unapproved models. That’s why the 76% figure matters: it means this is not a niche problem. Treat GenAI like privileged access—approve specific tools, log usage where lawful, and prevent protected data classes (regulated PII, trade secrets) from ever entering third-party models. Pair policy with enablement: give people sanctioned AI options so they don’t feel forced to go rogue.

There’s also a new actor on the inside: AI agents. Teams are wiring agents into workflows with real credentials and API keys. These are “non-human insiders.” They don’t get tired, and they rarely complain—until they drift. That calls for two controls executives should recognize:

  • Scope: every agent needs an owner, a clear job, and minimal permissions.
  • Observability: every agent deserves the same audit trail and anomaly detection a human gets.

UEBA (User & Entity Behavior Analytics) is detection that focuses on behavior, not just signatures—and executives should become familiar with it. It builds a baseline for each user or entity (including bots, service accounts, and agents) by learning:

  • Time-series norms: typical login times, data volumes, or destinations.
  • Peer-group context: how a finance analyst behaves versus other finance analysts.
  • Sequence patterns: unusual orderings (e.g., first-time VPN login → immediate privilege change → bulk download).

When activity strays from the learned patterns, UEBA scores the risk and surfaces outliers. Technically, this leans on statistics and machine learning (unsupervised and semi-supervised methods) that thrive on log data without needing perfect labels. In plain English: UEBA turns piles of events into “is this normal for them right now?”
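The sequence-pattern idea can be sketched in a few lines. The toy detector below checks whether a risky ordering—first-time VPN login, then a privilege change, then a bulk download—appears in a user’s event stream, in order but not necessarily back to back. The event names are assumptions for illustration, not Exabeam’s schema.

```python
# Illustrative risky ordering from the example above.
RISKY_SEQUENCE = ["vpn_login_first_time", "privilege_change", "bulk_download"]

def contains_sequence(events, pattern):
    """True if `pattern` occurs as an in-order subsequence of `events`.

    Uses the iterator idiom: each `in` check consumes the stream up to the
    match, so later steps must occur after earlier ones.
    """
    it = iter(events)
    return all(step in it for step in pattern)

session = [
    "vpn_login_first_time",
    "read_mail",
    "privilege_change",
    "browse_wiki",
    "bulk_download",
]
print(contains_sequence(session, RISKY_SEQUENCE))  # escalate for review
```

Production UEBA scores sequences probabilistically rather than matching them literally, but the intuition is the same: the order of events carries signal that individual events lack.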

Close the analytics gap—and the culture gap

Here’s the real exposure: only 44% of organizations use UEBA even though insider risk is now the headline problem. At the same time, 74% of practitioners say leaders underestimate insider threats. That cultural gap slows hiring, tooling, and policy. Closing both gaps looks like this:

Make behavior a first-class signal. Consolidate identity, endpoint, SaaS admin, email, and data-movement logs so one person (or agent) has one story across systems. Invest in correlation before dashboards. If the SOC can’t stitch identity across tools, they’ll miss quiet abuse and slow-motion exfiltration.

Balance privacy with detection—by design. The most common roadblock to insider programs is privacy resistance. Solve it with purpose-limited analytics, role-based access to telemetry, clear retention windows, and transparent documentation of what you analyze and why. Done right, privacy guardrails enable stronger detection because they unlock the data flows teams need.

Measure outcomes, not tool counts. Executives should ask for three numbers monthly:

  1. Time to detect abnormal behavior
  2. Time to contain insider incidents
  3. Percent of incidents caught by behavior analytics versus luck or after-the-fact audits.

Tie budget to improving those metrics, not to how many point products are “deployed.”
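The three numbers are straightforward to compute once incident records carry timestamps and a detection source. A sketch, assuming a simplified record format of my own invention:

```python
from datetime import datetime, timedelta

# Hypothetical incident records; fields are illustrative, not a standard schema.
incidents = [
    {"started": datetime(2025, 1, 3, 9), "detected": datetime(2025, 1, 3, 15),
     "contained": datetime(2025, 1, 4, 9), "source": "ueba"},
    {"started": datetime(2025, 2, 1, 8), "detected": datetime(2025, 2, 5, 8),
     "contained": datetime(2025, 2, 6, 8), "source": "audit"},
]

def monthly_metrics(incidents):
    """The three executive numbers: mean time to detect, mean time to
    contain, and the share of incidents caught by behavior analytics."""
    n = len(incidents)
    mttd = sum((i["detected"] - i["started"] for i in incidents), timedelta()) / n
    mttc = sum((i["contained"] - i["detected"] for i in incidents), timedelta()) / n
    analytic_share = sum(i["source"] == "ueba" for i in incidents) / n
    return mttd, mttc, analytic_share
```

If the analytic share is low while detection times are long, incidents are being found by luck or audit, which is exactly the exposure the 44% figure describes.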

Treat GenAI like a production system. Establish allow-lists, red-line data categories, and logging for prompts and outputs where lawful. Give product and legal a seat at the table so “move fast” never means “spray data into black boxes.”
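A minimal guardrail along those lines combines an allow-list check with a redaction pass before a prompt leaves the organization. The tool names and red-line patterns below are assumptions for the sketch, not a recommended policy.

```python
import re

# Illustrative allow-list of sanctioned GenAI tools.
APPROVED_TOOLS = {"internal-gpt", "sanctioned-copilot"}

# Crude patterns for red-lined data classes; real programs would use
# proper data-classification tooling, not regexes alone.
REDLINE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def gatekeep(tool, prompt):
    """Block unapproved tools; redact red-lined data classes from the prompt."""
    if tool not in APPROVED_TOOLS:
        raise PermissionError(f"{tool} is not on the GenAI allow-list")
    for label, pattern in REDLINE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED:{label}]", prompt)
    return prompt  # log the redacted prompt where lawful

print(gatekeep("internal-gpt", "Contact jane@example.com re SSN 123-45-6789"))
```

The design point is that the sanctioned path must be easier than the rogue one: the gate both enables approved use and makes protected data classes unable to leak through it.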

Baseline everyone and everything. People, service accounts, RPA scripts, and AI agents each get their own baseline. You’re looking for drift—new data touched, unusual times of day, odd destinations, or sequences that don’t match the job to be done.

Summary

From Human to Hybrid: How AI and the Analytics Gap Are Fueling Insider Risk is more than a snapshot of today’s risks—it’s a preview of where security must go next. Insider threats, amplified by AI, are no longer exceptions but the baseline assumption. For CISOs and CEOs, the path forward means shifting from perimeter defenses to identity-centric strategies, treating GenAI with the same caution as privileged accounts, and giving both humans and AI agents their own behavioral baselines. The organizations that succeed will be those that unify telemetry, embrace outcome-driven metrics, and align leadership with operations. In that sense, Exabeam’s report is less a warning and more a playbook for building resilience in an AI-defined future.

Antoine is a visionary leader and founding partner of Unite.AI, driven by an unwavering passion for shaping and promoting the future of AI and robotics. A serial entrepreneur, he believes that AI will be as disruptive to society as electricity, and is often caught raving about the potential of disruptive technologies and AGI.

As a futurist, he is dedicated to exploring how these innovations will shape our world. In addition, he is the founder of Securities.io, a platform focused on investing in cutting-edge technologies that are redefining the future and reshaping entire sectors.