Sandy Dunn, CISO at SPLX – Interview Series

Sandy Dunn, CISO at SPLX, is a veteran security leader with 20+ years of experience in healthcare and startups, providing CISO consulting through QuarkIQ. She leads the OWASP Top 10 for LLM Applications Cybersecurity and Governance Checklist and contributes to the OWASP AI Exchange, OWASP Top 10 for LLM, and the Cloud Security Alliance. An Adjunct Cybersecurity Professor at Boise State University, she is also a frequent speaker, advisor, and board member of Boise State’s Institute for Pervasive Cybersecurity. Sandy holds a master’s in information security management from SANS and numerous certifications, including CISSP, multiple SANS GIAC credentials, Security+, ISTQB, and FAIR.

SPLX is a cybersecurity company that provides end-to-end protection for AI systems through automated red-teaming, runtime protection, governance, remediation, threat inspection, and model security. Its platform runs thousands of adversarial simulations in under an hour to identify vulnerabilities, hardens system prompts before deployment, and includes Agentic Radar, an open-source tool for mapping and analyzing risks in multi-agent AI workflows.

What first drew you to the intersection of AI and cybersecurity, and how did that path lead to your role at SPLX and involvement with OWASP?

AI had been part of cybersecurity conversations for years before ChatGPT, but it often felt like it hadn’t lived up to the hype. So when it launched, I expected to be underwhelmed but instead had the exact opposite reaction. When I first used ChatGPT, I was both amazed by what it could do and terrified by how quickly it could be weaponized for adversarial attacks or privacy abuse. That moment lit a fire. I dove into LLMs headfirst, reading every research paper I could find, joining every relevant Slack and Discord community, and running my own experiments. I lurked for a while in the OWASP Top 10 for LLMs channel, and when the first list was published, I knew it was an important milestone. But as a CISO, I felt security teams needed more information. We had told people what to worry about, but not what to do. I approached Steve Wilson, the project lead, about creating an “LLM Security CISO Checklist.” It became the first OWASP GenAI sub-project, which inspired many additional sub-projects.

Through that work, I met Kristian Kamber and Ante Gojsalic (the founders of SPLX), and also advised numerous AI security startups, some with promising ideas, some less so. At the time, I was CISO at a B2B chatbot company, creating an extensive AI adversarial testing playbook. When I saw SPLX’s demo, I immediately recognized they had solved the very problem I was wrestling with: how to operationalize adversarial testing. When SPLX needed a CISO, I jumped at the opportunity to be part of an amazing company with incredible people solving important challenges.

As CISO at SPLX, what are the most novel attack techniques you’re uncovering, especially with regard to agentic AI?

Due to the nuances of GenAI system design, it is not possible to eliminate GenAI vulnerabilities and attacks that weaponize an agent’s autonomy and capabilities. Memory poisoning attacks such as MINJA are a recent example of this. With MINJA, attackers can subtly corrupt an agent’s memory banks through crafted interactions, implanting harmful “memories” that lead to misleading or dangerous behaviors later. Another example is the Echo Chamber attack, where an adversary creates conversational loops by sending an agent repeated malicious context. Researchers were able to bypass safety mechanisms by reinforcing harmful instructions over multiple turns.

Indirect and cross-modal prompt injection is another example. Malicious instructions hide in external content like images or documents that agents consume. These instructions can hijack autonomous decisions without direct input from the user.

Sophisticated AI agent ecosystem attacks such as tool poisoning require careful review of the entire supply chain for AI agent deployments. Attackers create seemingly legitimate tools for agent platforms but embed malicious instructions within tool descriptions and documentation. When agents load these tools, the embedded instructions become part of the agent’s context, enabling unauthorized actions such as data exfiltration or system compromise.
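To make the tool-poisoning pattern concrete, below is a minimal, hypothetical Python sketch of a pre-registration screen that scans a tool’s description and documentation for instruction-like content before an agent is allowed to load it. The ToolSpec structure and the patterns are illustrative assumptions, not part of any particular agent framework or the SPLX product.

```python
import re
from dataclasses import dataclass

# Phrases that read as instructions to the model rather than documentation
# for a human operator (illustrative, not exhaustive).
SUSPICIOUS_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"do not (tell|mention|reveal).{0,20}(user|operator)",
    r"send .{0,40}https?://",
    r"system prompt",
]

@dataclass
class ToolSpec:
    name: str
    description: str
    docs: str

def screen_tool(tool: ToolSpec) -> list[str]:
    """Return findings for instruction-like content hidden in a tool's metadata."""
    text = f"{tool.description}\n{tool.docs}".lower()
    return [
        f"{tool.name}: matched suspicious pattern '{p}'"
        for p in SUSPICIOUS_PATTERNS
        if re.search(p, text)
    ]

# Tools with findings go to human review instead of being registered with the agent.
```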

How does the SPLX platform help organizations detect and respond to LLM-specific threats such as prompt injection or adversarial jailbreak attacks?

SPLX is an end-to-end security platform that protects LLM-powered applications and multi-agent systems across the entire AI lifecycle, from development through deployment to real-time operation. Our AI Runtime Protection is designed to stop these threats as they occur by continuously monitoring and filtering inputs and outputs. It acts as a real-time firewall for AI, enforcing strict behavioral boundaries. SPLX’s dynamic detection engine flags malicious activity in real time, ensuring AI systems respond safely and stay within intended boundaries.

OWASP’s updated GenAI Security Top 10 expands to cover key risks like system-prompt leakage and vector-database vulnerabilities. How do these new threats reflect the evolving adversarial landscape?

The 2025 OWASP Top 10 updates reflect an evolving understanding of how LLMs and generative AI technologies are used in real-world scenarios. It is important to point out there are many more than ten LLM threats, but the goal is to identify the top 10. The big changes for the 2025 list are that LLM07 Insecure Plugin Design was folded into Supply Chain and LLM10 Model Theft was folded into Unbounded Consumption, which created room to add:

1. System-Prompt Leakage, which identifies the threat that exposing the system prompt can reveal guardrails, logic flows, or even secrets embedded in LLM prompts.

2. Vector-Database Vulnerabilities, which flags potential security issues within RAG systems such as cross-tenant data leakage, embedding inversion, or poisoned documents that later surface as dangerous outputs.

3. The updates also show that AI-focused attacks have evolved from opportunistic, crafted prompt attacks to sophisticated attacks targeting entire AI supply chains. Modern attacks demonstrate strategic thinking about the whole AI system, focusing on persistence, scale, and systemic impact rather than one-off exploits.

The expanded coverage of agentic architectures acknowledges another critical development. As AI systems gain greater autonomy and decision-making capabilities, the potential consequences of security failures multiply exponentially. Reducing human oversight, while enabling more powerful applications, has a compounding effect that amplifies the impact of successful attacks.

In your view, what are the most commonly overlooked vulnerabilities in enterprises deploying agentic AI today?

The most commonly overlooked issue is the same challenge we face with traditional software and system deployments: the principle of least privilege. We are still grappling with least privilege for human users and service accounts, and now organizations face a new challenge in managing non-human identities (NHI). People are deploying agents while agent identity and access is still not fully understood or solved. We see agents given wide-ranging permissions to read documents, access external APIs, and even modify systems. This isn’t a technical flaw in the model itself; it’s a fundamental architectural mistake. A compromised agent with excessive privilege can wreak havoc, from exfiltrating massive amounts of data to initiating financial transactions.

Another often missed or ignored issue is the “trust” relationship between agents. In agentic systems, agents are often designed to communicate and cooperate with each other. We’re seeing a new class of attacks where a compromised agent can impersonate a legitimate one, essentially becoming an “Agent-in-the-Middle.” It’s like a Trojan horse, but at an architectural level.

Can you walk us through actionable steps that enterprise security teams should take when deploying agentic AI tools in production environments? 

1. Start with incident response plans. Ask what the worst day looks like, then work backwards to make sure security controls and visibility are in place. When an AI breach happens, your security operations center needs a playbook. Who gets notified? How do you isolate a compromised agent? What’s the process for rolling back to a known-good state? Having a plan before a crisis takes place is vital.

2. Attack surface inventory and threat assessment. You can’t secure what you don’t know you have. The first step is to get a full inventory of all AI agents, tools in use, and data access. What data does each agent touch? What permissions does it have? What’s the potential impact if it’s compromised? Prioritize the highest-impact and most likely threats. Then have a candid conversation with the executive team on risk appetite. A benefit of the accelerated AI activity is that CISOs, risk officers, legal teams, and executive leadership will be forced to have a real conversation about business objectives, risk appetite, and security budgets. Historically there was an expectation of no incidents with minimal budget. CISOs have (mostly) been able to avoid a big incident by implementing just enough security to make their organization a less attractive target than the organization with less security. AI-enabled adversaries make that strategy unrealistic now.

3. Implement guardrails, least privilege, and monitoring tools (see the sketch after this list). Scope is important for any agent deployment. Define an agent’s purpose, its boundaries, and its permissions. Don’t give an agent access to your entire SharePoint library if it only needs one folder. Implement controls that limit what APIs it can call and what actions it can take. Think of it like a new, very smart, drunk intern: you recognize they have amazing capabilities, but you wouldn’t trust them. You would limit what they could do within the company, you wouldn’t give them access to anything important, you would monitor what they do, and you might have alarm systems in place if they tried to do something they absolutely should not do, like access the CEO’s office.

4. Implement an AI-specific security stack that merges with your traditional security stack. Traditional security tools weren’t designed for GenAI systems or agentic systems. You need to implement tools that are designed for issues unique to GenAI such as prompt validation, output sanitization, and continuous monitoring of agent behavior. These tools need to be able to detect the subtle, semantic-based attacks which are specific to GenAI and agentic systems.

5. Integrate AI red teaming into the CI/CD pipeline. You need to continuously test your agents for vulnerabilities based on the significance of the changes and the organization’s risk appetite. The recent GPT-5 update is an example of how disruptive model changes can be to agentic workflows. Make automated red-teaming a core part of your development lifecycle. This helps you identify issues as you update and change your agents.
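As a rough illustration of the least-privilege idea in step 3 (and the monitoring in step 4), here is a minimal Python sketch of a wrapper that exposes only an explicitly allowlisted set of tools and folder paths to an agent and audits every call. The class and parameter names are hypothetical, not any specific framework’s API.

```python
import logging
from typing import Any, Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-guardrail")

class ScopedToolbox:
    """Expose only allowlisted tools to an agent and audit every call."""

    def __init__(self, allowed_tools: dict[str, Callable[..., Any]],
                 allowed_paths: set[str]):
        self._tools = allowed_tools          # e.g. {"read_file": read_file}
        self._allowed_paths = allowed_paths  # e.g. {"/shared/marketing/"}

    def call(self, tool_name: str, **kwargs: Any) -> Any:
        # Block any tool that was not explicitly granted to this agent.
        if tool_name not in self._tools:
            log.warning("Blocked call to unapproved tool: %s", tool_name)
            raise PermissionError(f"Tool '{tool_name}' is out of scope for this agent")
        # Block file/folder access outside the agent's approved locations.
        path = str(kwargs.get("path", ""))
        if path and not any(path.startswith(p) for p in self._allowed_paths):
            log.warning("Blocked out-of-scope path access: %s", path)
            raise PermissionError(f"Path '{path}' is outside the agent's allowed folders")
        # Every permitted call is logged so monitoring can spot anomalies.
        log.info("Agent call: %s(%s)", tool_name, kwargs)
        return self._tools[tool_name](**kwargs)
```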

How should organizations incorporate automated red-teaming, CIAM, RAG governance, and monitoring into their GenAI risk management strategy?

The key is integration rather than treating them as separate initiatives. Your GenAI risk management strategy needs to be a coherent framework where each component reinforces the others.

Start with automated red teaming as a foundation. It’s important to have continuous adversarial testing that evolves with the threat landscape. The SPLX platform simulates thousands of attack scenarios across different risk categories, testing for prompt injection, jailbreaks, context manipulation, and tool poisoning. The critical aspect is making this part of your CI/CD pipeline so every agent update is security-validated before deployment.
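For illustration only, a CI gate for this kind of testing might look like the sketch below: it replays a curated corpus of adversarial prompts against a staging endpoint and fails the build if any response shows signs of a policy violation. The URL, the prompt file, and the leak markers are placeholder assumptions, not the SPLX platform’s API.

```python
import json
import urllib.request

STAGING_URL = "https://staging.example.com/agent/chat"           # placeholder endpoint
LEAK_MARKERS = ("begin system prompt", "api_key", "drop table")  # illustrative signals

def ask(prompt: str) -> str:
    """Send one adversarial prompt to the staging agent and return its reply."""
    req = urllib.request.Request(
        STAGING_URL,
        data=json.dumps({"message": prompt}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read())["reply"]

def test_adversarial_prompts_are_contained():
    """Run as part of CI (e.g. pytest) so every agent change is security-validated."""
    with open("redteam_prompts.json") as f:
        cases = json.load(f)  # e.g. [{"id": "inj-001", "prompt": "..."}]
    failures = [
        case["id"] for case in cases
        if any(marker in ask(case["prompt"]).lower() for marker in LEAK_MARKERS)
    ]
    assert not failures, f"Red-team cases produced policy violations: {failures}"
```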

CIAM for AI systems requires rethinking traditional identity models. AI agents need granular permissions that can be dynamically adjusted based on context and risk levels. Implement attribute-based access control that considers not just the agent’s identity, but the data it’s processing, the tools it’s requesting access to, and the threat context.
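To show what attribute-based access control for agents can look like in practice, here is a minimal Python sketch: a deny-by-default policy table keyed by tool, constrained by data sensitivity and the current threat context. The policy entries, field names, and threat levels are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    agent_id: str
    tool: str              # e.g. "send_email"
    data_sensitivity: str  # "public" | "internal" | "restricted"
    threat_level: str      # "normal" | "elevated", fed in from monitoring

# Illustrative policy: tool -> (max data sensitivity, allowed threat levels)
POLICY = {
    "search_docs": ("internal",   {"normal", "elevated"}),
    "send_email":  ("internal",   {"normal"}),
    "execute_sql": ("restricted", {"normal"}),
}

SENSITIVITY_RANK = {"public": 0, "internal": 1, "restricted": 2}

def is_allowed(req: AccessRequest) -> bool:
    """Deny by default; tighten automatically when the threat context is elevated."""
    rule = POLICY.get(req.tool)
    if rule is None:
        return False  # unknown tools are never allowed
    max_sensitivity, allowed_threat_levels = rule
    return (SENSITIVITY_RANK[req.data_sensitivity] <= SENSITIVITY_RANK[max_sensitivity]
            and req.threat_level in allowed_threat_levels)
```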

RAG governance is particularly crucial. Establish data lineage tracking for all content ingested into vector stores. Implement content validation pipelines that can detect adversarial examples or malicious instructions embedded in documents.
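As a rough sketch of what lineage tracking plus content validation can look like at ingestion time, the snippet below hashes and timestamps each document and flags instruction-like text before anything reaches the vector store. The patterns and record fields are illustrative assumptions.

```python
import hashlib
import re
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative patterns for instructions hidden inside ingested documents.
INJECTION_HINTS = [
    r"ignore (all|any|previous) instructions",
    r"you are now",
    r"reply with the contents of",
]

@dataclass
class IngestRecord:
    source: str
    content_hash: str
    ingested_at: str
    flags: list[str] = field(default_factory=list)

def validate_and_track(source: str, text: str) -> IngestRecord:
    """Record lineage for a document and flag suspicious embedded instructions."""
    flags = [p for p in INJECTION_HINTS if re.search(p, text, re.IGNORECASE)]
    return IngestRecord(
        source=source,
        content_hash=hashlib.sha256(text.encode()).hexdigest(),
        ingested_at=datetime.now(timezone.utc).isoformat(),
        flags=flags,
    )

# Flagged documents are routed to review instead of the vector store;
# the hash and timestamp give you lineage for every chunk you later embed.
```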

For monitoring, you need telemetry that captures both technical and behavioral indicators. Technical monitoring includes input/output analysis, API call patterns, and resource consumption. Behavioral monitoring focuses on decision quality, task completion patterns, and interaction contexts that might indicate compromise.
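To make the two kinds of telemetry concrete, here is a minimal sketch of an event record that carries both technical and behavioral indicators for each agent interaction; the field names and scores are hypothetical and would come from your own evaluators and log pipeline.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class AgentTelemetryEvent:
    # Technical indicators
    agent_id: str
    tools_called: list[str]
    prompt_tokens: int
    completion_tokens: int
    latency_ms: float
    # Behavioral indicators (produced by separate evaluators; illustrative)
    task_completed: bool
    policy_score: float             # 1.0 = clean output, 0.0 = clear violation
    deviation_from_baseline: float  # how far behavior drifts from this agent's norm

def emit(event: AgentTelemetryEvent) -> None:
    """Ship one event to whatever logging/SIEM pipeline you already run."""
    print(json.dumps({"ts": time.time(), **asdict(event)}))
```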

Integration is important. The red-teaming results should inform the CIAM policies, the monitoring systems should feed back into your RAG governance processes, and all of this needs to be coordinated through a centralized enterprise risk management platform that can correlate signals across all these domains.

With AI-related breaches still in early stages but poised to grow, what trends should security leaders prepare for over the next 12 to 18 months?

I expect to see a significant escalation in supply chain attacks targeting everything including AI infrastructure. AI supply chain attacks poison training datasets, compromise model repositories, or inject malicious code into software dependencies to gain persistent access to AI systems.

There is already a surge in autonomous social engineering, such as deepfake vishing incidents, but the evolution toward fully autonomous generation is what concerns me the most. AI agents handling complete social engineering campaigns across multiple platforms simultaneously, each tailored to specific targets and contexts, create a force multiplier effect that traditional defenses aren’t prepared for.

I believe we’ll see the rise of AI-native attacks that operate at machine speed. Traditional security tools and human analysts can’t keep up with an attacker that can execute complex, multi-stage exploits in milliseconds.

How do you foresee regulatory and compliance frameworks evolving in response to generative AI risks?

I foresee a major shift in regulatory frameworks, aiming to balance innovation with a focus on accountability, transparency, secure development practices, and supply chain security.

I expect to see a focus on data provenance and integrity. Regulatory bodies will want to know where the data used to train and augment AI models came from. They’ll want to see proof that the data was sanitized, that it didn’t contain sensitive information, and that it hasn’t been poisoned.

Finally, I think we’ll see sector-specific regulations. The risks for a financial institution using an AI agent to handle transactions are different from those of a healthcare company using one for diagnostics.

Regulators will start to define specific standards for critical industries, requiring things like automated red-teaming, human-in-the-loop oversight, and strict auditing for AI systems that could have life-or-death consequences.

As an adjunct professor and board member at Boise State’s Institute for Pervasive Cybersecurity, how are you preparing the next generation of AI-security professionals, and what skills do you see as most critical in today’s evolving landscape?

Critical thinking and problem solving are still the most important skills students need for a great career in cybersecurity, but skills such as human psychology and linguistics, once only found within cyber threat intelligence teams, will benefit a variety of cybersecurity job roles in the AI future.

People and communication skills matter too. Cybersecurity is more than IT systems; it’s about people, helping a business meet its objectives, and being able to translate and communicate complex technical risks to non-technical stakeholders so they can make the right decisions for the company. The future of cybersecurity will depend on professionals who are not only technically smart but also grounded and skilled communicators.

Finally, the AI security landscape is evolving so rapidly that it will be important to be able to learn and adapt quickly.

Thank you for the great interview and detailed insight. Readers who wish to learn more should visit SPLX.

Antoine is a visionary leader and founding partner of Unite.AI, driven by an unwavering passion for shaping and promoting the future of AI and robotics. A serial entrepreneur, he believes that AI will be as disruptive to society as electricity, and is often caught raving about the potential of disruptive technologies and AGI.

As a futurist, he is dedicated to exploring how these innovations will shape our world. In addition, he is the founder of Securities.io, a platform focused on investing in cutting-edge technologies that are redefining the future and reshaping entire sectors.