

Liat Hayun, VP of Product and Research at Tenable Cloud Security – Interview Series


Liat Hayun is the VP of Product and Research at Tenable Cloud Security. Prior to joining Tenable, Liat co-founded and served as CEO of Eureka Security, a data security company that was acquired by Tenable. Before co-founding Eureka Security, Liat spent over a decade leading cybersecurity efforts at the Israeli Cyber Command and at Palo Alto Networks. As VP of Product Management at Palo Alto Networks, Liat led the development of Cortex XDR and the company’s managed threat hunting service.

Tenable is a U.S.-based cybersecurity firm focused on helping organizations identify, understand, prioritize and remediate security vulnerabilities across their entire digital attack surface. It is best known for its exposure management platform and tools like the widely used Nessus vulnerability scanner, enabling businesses to gain visibility into threats spanning IT infrastructure, cloud, OT/IoT and identity systems and take decisive action to reduce business-impacting risk. Tenable’s solutions deliver continuous discovery, prioritization and threat insight to support proactive cyber risk management for tens of thousands of customers worldwide.

What fundamentally makes autonomous AI agents more dangerous than traditional AI models that only respond to user prompts, rather than acting independently?

Autonomous AI agents change the nature of risk because they can initiate tasks, access systems, make decisions, and interact with other services without human oversight.

That independence expands both the speed and scale of potential impact, increasing the likelihood of data exposure, operational disruption, and financial loss. An AI agent can query data stores, trigger workflows, call APIs, or modify infrastructure in real time. If misconfigured or compromised, it can move laterally across systems using the permissions it has been granted, often faster than a human could detect or intervene.

According to Tenable’s Cloud and AI Risk Report 2026, 52% of organizations now have non‑human identities with excessive permissions, and nearly half of those are dormant, creating widespread unmanaged access across production environments. That means many AI‑driven processes already hold access they don’t actively use, but that attackers can.

Attackers increasingly exploit what is known as the Confused Deputy problem. They do not need to compromise the agent itself. Instead, they trick an authorized agent into performing actions on their behalf, often through indirect prompt injection or manipulated inputs. The agent executes the request using its legitimate permissions, effectively doing the attacker’s work. When autonomous systems inherit broad permissions or assume over‑privileged roles, the time between misconfiguration and exploitation effectively disappears, creating a zero‑margin AI exposure gap.
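To make the Confused Deputy risk concrete, here is a minimal Python sketch of one common mitigation: a deny‑by‑default gate on an agent’s tool calls, so a prompt‑injected request cannot exercise the agent’s permissions directly. The names (ALLOWED_ACTIONS, ToolCall, require_human_approval) are invented for illustration and are not drawn from the interview or any specific agent framework.

```python
# Hypothetical sketch: gating an agent's tool calls to blunt confused-deputy abuse.
from dataclasses import dataclass

# Per-agent allowlist: the agent may only invoke actions scoped to its task.
ALLOWED_ACTIONS = {"crm_lookup", "ticket_create"}

# Actions with real-world side effects always require a human in the loop.
SENSITIVE_ACTIONS = {"db_delete", "iam_modify", "funds_transfer"}

@dataclass
class ToolCall:
    action: str
    arguments: dict

def require_human_approval(call: ToolCall) -> bool:
    # Placeholder for an out-of-band approval workflow (e.g., a ticket).
    print(f"Approval required for {call.action}({call.arguments})")
    return False

def authorize(call: ToolCall) -> bool:
    """Deny by default; the agent's assigned task, not the prompt, decides."""
    if call.action in SENSITIVE_ACTIONS:
        return require_human_approval(call)  # never executed autonomously
    return call.action in ALLOWED_ACTIONS

# A prompt-injected request for a sensitive action is refused even though
# the agent itself holds the underlying permission.
injected = ToolCall("iam_modify", {"role": "admin", "add_member": "attacker"})
assert authorize(injected) is False
```

The point of the pattern is that authorization derives from the agent’s assigned function, not from the content of whatever request it happens to receive.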

Many enterprises are experimenting with AI agents informally. What are the first concrete steps security teams should take to assess whether agentic AI is already operating inside their environment?

Closing the AI Exposure Gap begins with discovery that extends beyond traditional asset inventories. The experimentation phase is often the most dangerous because organizations do not fully know what AI is being used, how it is configured, or what access it has been granted.

AI agents rarely live in a single, clearly labeled system. Many operate outside formal governance. They appear in developer tools, SaaS integrations, automation workflows, browser extensions, and cloud services, which creates a distributed deployment that increases blind spots and limits centralized control. This is already widespread, as more than 70% of organizations have integrated at least one third‑party AI or model‑related package, often embedding AI deep into applications and infrastructure with limited centralized oversight.

Recent research on Clawdbot shows that experimental agent deployments can pose serious risks when capabilities ship before security controls or configuration standards are fully understood. Even after experimentation ends and organizations decide which AI applications to adopt, they still need strong governance over how those systems are used, what permissions they hold, and how they interact with critical assets.

The discovery phase should begin by establishing a unified inventory of where AI exists across endpoints, cloud infrastructure, and external attack surfaces. This includes identifying AI libraries, agents, APIs, model services, and third‑party integrations, not just internally deployed systems, but anything externally reachable.
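As one hypothetical starting point for that inventory, a short script can flag AI‑ and model‑related packages in a project’s dependency file. The package list below is a small invented sample rather than an authoritative catalog, and a real inventory would also cover agents, APIs, model services, and SaaS integrations.

```python
# Illustrative sketch: flag AI/model-related third-party packages in a Python
# project's requirements.txt. The package set is a hypothetical sample.
from pathlib import Path

AI_PACKAGES = {"openai", "anthropic", "langchain", "transformers",
               "llama-index", "litellm"}

def find_ai_dependencies(requirements: Path) -> list[str]:
    hits = []
    for line in requirements.read_text().splitlines():
        # Naive pin parsing ("pkg==1.0" / "pkg>=1.0"); a real scanner would
        # use proper requirement parsing and cover lockfiles too.
        name = line.split("==")[0].split(">=")[0].strip().lower()
        if name in AI_PACKAGES:
            hits.append(line.strip())
    return hits

print(find_ai_dependencies(Path("requirements.txt")))
```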

Next, teams must map how those agents are connected: what data they access, which identities they use, what permissions they hold, and which systems they can reach. Visibility requires context, because risk emerges from relationships.

Finally, once experimentation ends, organizations should identify which of those connections create reachable paths to critical assets and close the AI Exposure Gap.
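One concrete signal from the report’s findings, dormant non‑human identities, can be surfaced programmatically. The sketch below assumes AWS and boto3 and flags IAM roles unused for 90+ days; it illustrates the idea only, since a real assessment would correlate this signal with permissions and reachability.

```python
# Hedged sketch (AWS/boto3): flag roles unused for 90+ days as candidate
# dormant non-human identities.
from datetime import datetime, timedelta, timezone
import boto3

iam = boto3.client("iam")
cutoff = datetime.now(timezone.utc) - timedelta(days=90)

for page in iam.get_paginator("list_roles").paginate():
    for role in page["Roles"]:
        # list_roles omits last-used data, so fetch each role individually.
        detail = iam.get_role(RoleName=role["RoleName"])["Role"]
        last_used = detail.get("RoleLastUsed", {}).get("LastUsedDate")
        if last_used is None or last_used < cutoff:
            print(f"Possibly dormant: {role['RoleName']} (last used: {last_used})")
```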

How do risks like exposed control surfaces or unvetted third-party “skills” compare to more familiar threats such as supply-chain attacks or privilege escalation?

Unvetted third‑party skills and exposed control surfaces carry risks similar to supply‑chain compromise and privilege escalation, but with amplified speed, scale, and connectivity.

Third‑party agent skills function much like software supply‑chain dependencies, and the exposure surface already carries significant weight. Eighty‑six percent of organizations host third‑party code packages with critical‑severity vulnerabilities. When these components operate inside autonomous AI agents, execution becomes continuous and automated, eliminating the time between compromise and impact.

Exposed control surfaces expand the identity and access layer of risk by creating new operational interfaces that carry elevated permissions. Eighteen percent of organizations already allow AI services to assume over‑privileged roles, giving automated systems broad reach across environments. Agentic AI links these exposures into a single operational chain. A vulnerable dependency, excessive permissions, and exposed interfaces combine into a reachable attack path. Organizations operate in a zero-margin-for-error environment where exposure and exploitation align almost instantly.

From a defensive standpoint, what does “good hygiene” look like for organizations deploying AI agents with access to internal systems or sensitive data?

Good hygiene begins with treating AI agents as privileged digital actors with real authority across systems and data. Organizations need continuous visibility into every agent, what it does, and what it can reach. Teams should enforce least privilege for machine and service identities, remove dormant access, and tightly scope permissions. Each agent should be dedicated to a specific type of task, with permissions and access limited strictly to that function.

This matters because unused access is widespread. Nearly half of identities with critical excessive permissions are inactive, and more than 70% of default AI execution roles remain unused. These conditions create ready‑made escalation paths that carry risk without delivering value.
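As an illustration of task‑scoped least privilege, the hypothetical policy below confines a support agent to reading one table and writing one queue, and nothing else. Every ARN, action, and the task itself are invented for the example.

```python
# Illustrative least-privilege policy for a single-task agent: read one table,
# write one queue, no delete, no admin. All identifiers are hypothetical.
import json

support_agent_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Read-only access to the one data store the task requires.
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/support-tickets",
        },
        {   # Write access to a single output queue.
            "Effect": "Allow",
            "Action": ["sqs:SendMessage"],
            "Resource": "arn:aws:sqs:us-east-1:123456789012:support-replies",
        },
    ],
}
print(json.dumps(support_agent_policy, indent=2))
```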

Security teams must also understand relationships across infrastructure, data stores, APIs, and applications. Mapping these connections reveals toxic exposure combinations, such as vulnerable workloads reachable through overprivileged agents with access to sensitive data.
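Mapping those relationships lends itself to graph analysis. The sketch below, with invented entities, models agents, roles, and data stores as a directed graph and checks whether any agent has a reachable path to a critical asset, the kind of toxic combination described above; real tooling would ingest cloud inventory and IAM data instead of hard-coded edges.

```python
# Sketch: model agents, permissions, and assets as a graph and check whether
# any agent can reach a critical asset. Entities are invented for illustration.
import networkx as nx

g = nx.DiGraph()
g.add_edge("agent:report-bot", "role:analytics-read")     # agent assumes role
g.add_edge("role:analytics-read", "db:customer-records")  # role grants access
g.add_edge("db:customer-records", "bucket:pii-exports")   # replication path

CRITICAL = {"bucket:pii-exports"}

for agent in (n for n in g if n.startswith("agent:")):
    for asset in CRITICAL:
        if nx.has_path(g, agent, asset):
            path = nx.shortest_path(g, agent, asset)
            print("Toxic combination:", " -> ".join(path))
```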

Strong hygiene also requires continuous governance. Organizations must monitor agent behavior, control integrations, enforce data guardrails, and regularly validate permissions. These practices close the AI Exposure Gap and remove reachable attack paths.

What do you recommend organizations do to protect AI attack surfaces quickly without slowing innovation?

Organizations can protect AI attack surfaces by reducing exposure with speed and precision.

To do that, the first priority is identifying which AI systems create reachable paths to critical assets. More than 80% of organizations run workloads with vulnerabilities already exploited in the wild. Risk already exists inside most environments. Security teams need to focus on the connections that create real impact.

Next, recognize that automation enables scale. Continuous discovery, contextual prioritization, and guided remediation allow teams to reduce risk while maintaining development velocity.

Policy guardrails safeguard adoption: just‑in‑time access, monitored data flows, and controlled integrations help organizations manage AI activity while sustaining innovation.
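Just‑in‑time access, for example, can be implemented with short‑lived credentials. The sketch below assumes AWS STS via boto3; the role ARN and session policy are hypothetical, and the point is that access expires with the task instead of persisting as standing privilege.

```python
# Hedged sketch (AWS STS): grant an agent short-lived, task-scoped credentials
# instead of standing access. Role ARN and session policy are hypothetical.
import json
import boto3

sts = boto3.client("sts")
resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/agent-task-role",
    RoleSessionName="agent-jit-session",
    DurationSeconds=900,  # 15 minutes: access expires with the task
    Policy=json.dumps({   # session policy further narrows the role's permissions
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::agent-inputs/*",
        }],
    }),
)
creds = resp["Credentials"]  # expire automatically at creds["Expiration"]
```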

Together, these steps shrink the AI Exposure Gap by removing high‑risk access and unsafe connections. Rapid exposure reduction protects systems while allowing AI adoption to move forward.

How should CISOs think about identity, permissions, and scope when granting AI agents access to production systems?

CISOs should treat AI agents as high‑speed nonhuman identities with operational authority across systems and data.

Access decisions should focus on precision. Permissions should align with specific tasks, remain time‑bound, and undergo continuous review. Excess privilege expands reach and increases impact when systems interact at machine speed.

Security leaders also need clear insight into effective reach. Permissions, combined with network paths, data access, and service integrations, define what an agent can actually do. Understanding these relationships reveals potential exposure before it becomes an impact.

Identity connects infrastructure, data, and applications across the AI Exposure Gap. Tight permission design limits blast radius and maintains control. When the time from exposure discovery to exploitation is nearly zero, identity governance determines how safely AI operates in production.

Do you expect agentic AI to force a rethink of existing security frameworks, or can today’s models be adapted to handle these new risks?

Security principles remain consistent, but operating models are evolving. Agentic AI connects systems, identities, and data into dynamic environments that change continuously and operate at machine speed. Unlike human operators, agents do not apply judgment to their actions or distinguish between appropriate and inappropriate requests. They execute based on instructions and permissions, which increases the importance of strict control and governance.

Risk emerges from relationships across infrastructure, software supply chains, and identity layers. Organizations inherit exposure faster than remediation cycles can keep pace. Automated deployment and automated exploitation compress response time and increase operational pressure.

Security frameworks need to center on exposure visibility and reduction. Teams require continuous insight into what exists, how systems connect, and which paths lead to critical assets.

The defining challenge is managing interconnected risk across the AI Exposure Gap. Security programs succeed by continuously reducing reachable exposure and maintaining control across complex environments.

Looking ahead, what areas of agentic AI security research do you believe deserve more attention as these tools move from experimental to mission-critical?

Several research areas will shape the future of agentic AI security.

First, the growth of nonhuman identity ecosystems requires deeper analysis. Organizations are rapidly expanding machine identities that operate across infrastructure, data, and services. Understanding privilege patterns, behavior, and lifecycle management will be essential.

Second, research must advance multi‑step attack path modeling. AI systems connect software supply chains, cloud infrastructure, and identity layers. Mapping how these elements interact will improve risk prediction and prioritization.

Third, governance of autonomous decision‑making requires greater focus. Security teams need visibility into how agents access, process, and transfer sensitive data over time.

Finally, exploitation speed continues to accelerate. Studying how attackers and defenders operate at machine speed will shape response strategies.

Future security depends on understanding and reducing the AI Exposure Gap. Research must focus on controlling interconnected exposure across the entire attack surface.

Thank you for the great interview. Readers who wish to learn more should visit Tenable.

Antoine is a visionary leader and founding partner of Unite.AI, driven by an unwavering passion for shaping and promoting the future of AI and robotics. A serial entrepreneur, he believes that AI will be as disruptive to society as electricity, and is often caught raving about the potential of disruptive technologies and AGI.

As a futurist, he is dedicated to exploring how these innovations will shape our world. In addition, he is the founder of Securities.io, a platform focused on investing in cutting-edge technologies that are redefining the future and reshaping entire sectors.