Paul Reid, VP of Adversary Research at AttackIQ – Interview Series

Paul Reid, VP of Adversary Research at AttackIQ, is a seasoned veteran of the fast-paced world of cybersecurity. With more than two decades of experience as a technology strategist for leading tech companies, Paul has guided customers, partners, analysts, and journalists through the evolving cybersecurity landscape. His expertise spans cybersecurity, biometrics, network security, and cryptography.

Most recently, he has led a team of Cyber Threat Hunters focused on using behavioral analytics to detect emerging threats in customer environments. Paul is a published author in the Prentice Hall Series in Computer Networking and Distributed Systems and holds several patents in cybersecurity.

AttackIQ is a leading cybersecurity company specializing in breach and attack simulation (BAS) and continuous security validation. Its Adversarial Exposure Validation platform uses MITRE ATT&CK–based emulation to test security controls, identify vulnerabilities, and prioritize remediation. Founded in 2013, AttackIQ helps organizations improve their defensive posture, enhance SOC efficiency, and reduce risk.

You’ve held leadership roles across a range of cybersecurity domains for over two decades. What first sparked your interest in adversary research, and how did that journey ultimately lead you to AttackIQ?

My journey into cybersecurity started over 25 years ago, setting up Novell networks and working in the world of directory services — Novell, Microsoft Active Directory, LDAP. That early experience taught me the importance of identity, authentication, and access, the foundations of any security strategy.

From there, I transitioned into smart card authentication, where I had the opportunity to write PKCS #11 libraries and immerse myself in public key infrastructure (PKI) during its rapid rise. Working with symmetric and asymmetric cryptography during that era gave me a real appreciation for how encryption shapes trust in digital environments.

Later in my career, I pivoted into data classification, helping organizations understand the value of their data so they could protect what matters most. That experience naturally led to work in user and entity behavior analytics (UEBA), where I gained hands-on exposure to data science and machine learning, including programming in R.

Eventually, I was fortunate to lead a global threat hunting team where we conducted real-time tracking of nation-state adversaries. That was an intense and eye-opening period. There’s no better way to understand the tactics, techniques, and procedures (TTPs) of adversaries than to engage in daily operations against them.

It was during that time that a recurring frustration emerged: we’d often say, “If only they had done X or had Y control in place…” There was a gap between threat awareness and operational defense readiness.

That’s what ultimately brought me to AttackIQ. The opportunity to apply what I’d learned — to emulate real-world adversaries through breach and attack simulation, and to validate whether defenses are truly effective — was too compelling to pass up. Here, we don’t just theorize about threats; we test, measure, and improve against them every day.

Our team operates under a guiding principle: “Think bad, do good.” We think like adversaries not to harm, but to help our customers prepare for and defeat them.

Having led teams at TITUS, Interset, and Micro Focus, how has your experience in threat intelligence and partner enablement shaped your current approach to operationalizing adversary emulation?

Having worked in both technical and go-to-market roles at companies like TITUS, Interset, and Micro Focus, I’ve developed a holistic understanding of how threat intelligence needs to translate into operational outcomes, not just insights. Partner enablement, in particular, taught me how to communicate complex cybersecurity problems in ways that are actionable and meaningful for diverse audiences, from CISOs to frontline SOC analysts.

At AttackIQ, adversary emulation is not just about replaying threat behaviors. It’s about aligning with the MITRE ATT&CK framework, emulating adversaries with fidelity, and helping organizations test whether their defenses will hold up in a real-world scenario. That takes more than technical rigor; it requires education, collaboration, and enabling stakeholders across the security ecosystem.

My prior roles helped me understand how to bridge the gap between intelligence and execution — how to operationalize the threat landscape in a way that’s proactive, measurable, and defensible. That’s the essence of our mission at AttackIQ.

You’re now leading adversary research at a time when attackers are adopting AI at scale. How have you seen offensive AI tactics evolve in recent years—and how are defenders struggling to keep pace?

Adversaries are leveraging AI to increase the speed, precision, and scale of their operations. We’re seeing more personalized and believable phishing lures, AI-generated social engineering, and attacks executed with greater scale and efficiency. These capabilities reduce the time between reconnaissance and compromise, compressing the defender’s window to respond. Many organizations are still reliant on reactive processes and static detection rules, which weren’t designed to handle adversaries that learn and evolve. Defenders need to adopt continuous validation and exposure management to close that gap, testing their defenses under realistic conditions and iterating quickly in response to new adversarial behaviors.

What distinguishes adversarial AI from traditional cyber threats, and why do you believe a shift in mindset—not just tools—is required to respond effectively?

Traditional threats often follow known patterns that defenders can track and mitigate with rule-based detection. Adversarial AI tactics introduce a new level of variability and adaptability that challenges those assumptions: they can generate novel attack paths and evade defenses dynamically. Addressing this shift requires more than deploying new tools. It demands a strategic change in how organizations think about defense. Instead of reacting to incidents after the fact, security teams need to simulate evolving threats using known tactics, techniques, and procedures (TTPs) and proactively validate controls to test their systems’ responses. A threat-informed mindset, supported by real-world emulation, is key to anticipating and countering these new risks.
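As a rough illustration of that kind of proactive validation, the sketch below runs a harmless command that mimics an ATT&CK-style execution technique and then checks whether a detection appeared in a log. The log path, marker string, and technique choice are hypothetical assumptions for illustration, not part of AttackIQ’s platform.

```python
# Minimal control-validation sketch (all paths, markers, and the technique
# choice are hypothetical). It emulates a benign T1059-style command execution
# and polls an assumed EDR/SIEM log to see whether the activity was flagged.
import subprocess
import time
from pathlib import Path

DETECTION_LOG = Path("/var/log/edr/detections.log")   # assumed detection sink
TEST_MARKER = "emulation-validation-T1059"            # benign, traceable marker

def emulate_command_execution() -> None:
    """Run a harmless command that stands in for command-and-scripting execution."""
    subprocess.run(["echo", TEST_MARKER], check=True)

def was_detected(marker: str, timeout_s: int = 30) -> bool:
    """Poll the assumed detection log for an alert referencing the marker."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        if DETECTION_LOG.exists() and marker in DETECTION_LOG.read_text():
            return True
        time.sleep(2)
    return False

if __name__ == "__main__":
    emulate_command_execution()
    verdict = "detected" if was_detected(TEST_MARKER) else "NOT detected: review control coverage"
    print(verdict)
```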

Can you explain how AttackIQ translates threat intelligence into practical defense through adversary emulation, and how that process has changed with the rise of generative AI?

The traditional challenge with threat intelligence is operationalization: bridging the gap between insight and action. Adversary emulation solves that by taking intelligence about known threat behaviors and turning it into executable tests that assess whether current defenses can withstand those behaviors. With generative AI, the threat landscape becomes more fluid, with more variable behaviors. Emulations now need to reflect not just static techniques but also adaptive and context-aware behavior. At AttackIQ, live threat intelligence aligned to MITRE ATT&CK and modeled adversary behaviors are ingested into emulation plans that mirror real-world attacks. These emulations are deployed in production-like environments to validate whether security controls detect, prevent, or respond as expected.
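To make the idea of turning intelligence into executable tests more concrete, here is a minimal Python sketch of an emulation plan keyed to ATT&CK technique IDs. The data model, technique IDs, and placeholder test steps are illustrative assumptions, not AttackIQ’s implementation.

```python
# Illustrative emulation-plan structure: ATT&CK technique IDs pulled from
# threat intelligence become ordered, executable test steps. The lambdas are
# placeholders where safe emulation content would actually run.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class EmulationStep:
    technique_id: str           # e.g. "T1566.001" (spearphishing attachment)
    description: str
    run: Callable[[], bool]     # returns True if the control held (blocked or detected)

@dataclass
class EmulationPlan:
    adversary: str
    steps: List[EmulationStep] = field(default_factory=list)

    def execute(self) -> Dict[str, bool]:
        """Run every step and report pass/fail per technique."""
        return {step.technique_id: step.run() for step in self.steps}

plan = EmulationPlan(
    adversary="Illustrative intrusion set",
    steps=[
        EmulationStep("T1566.001", "Spearphishing attachment delivery", lambda: True),
        EmulationStep("T1021.001", "RDP lateral movement attempt", lambda: False),
    ],
)
print(plan.execute())   # e.g. {'T1566.001': True, 'T1021.001': False}
```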

Continuous Threat Exposure Management (CTEM) is becoming a core component of cyber resilience strategies. How should organizations approach CTEM when facing fast-adapting AI-powered threats?

CTEM represents a shift from static risk assessments to dynamic, intelligence-driven security validation. Facing AI-powered threats, organizations must treat exposure as a moving target. That means identifying and prioritizing exposures based on active testing, not just theoretical risk.

Red and blue teams need to collaborate in simulating adaptive adversaries and continuously test detection and response capabilities. Organizations that embrace this approach are better equipped to adapt quickly, validate their investments and security controls, and maintain resilience amid a rapidly changing landscape.
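As a toy illustration of prioritizing exposures from active testing rather than theoretical risk, the sketch below ranks findings by combining emulation outcomes with asset criticality. The fields, weights, and example exposures are assumptions made for illustration, not a real CTEM scoring model.

```python
# Toy CTEM-style prioritization: exposures confirmed by live emulation on
# critical assets rank highest. Field names and weights are illustrative.
from dataclasses import dataclass

@dataclass
class Exposure:
    name: str
    asset_criticality: int         # 1 (low) .. 5 (crown jewel)
    control_failed_in_test: bool   # outcome of an actual emulation run
    exploited_in_the_wild: bool    # from threat intelligence

def priority(e: Exposure) -> float:
    score = float(e.asset_criticality)
    score *= 2.0 if e.control_failed_in_test else 0.5   # validated gaps dominate
    score *= 1.5 if e.exploited_in_the_wild else 1.0
    return score

exposures = [
    Exposure("Unpatched VPN appliance", 5, True, True),
    Exposure("Legacy file share with weak ACLs", 2, False, False),
]
for e in sorted(exposures, key=priority, reverse=True):
    print(f"{priority(e):5.1f}  {e.name}")
```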

The “Foundations of AI Security” course from AttackIQ covers risk frameworks like MITRE ATLAS and the AI RMF. Which aspects of these frameworks do you find most underutilized or misunderstood in enterprise settings?

One of the most common misunderstandings we see is the tendency to treat frameworks like MITRE ATLAS and the AI Risk Management Framework (AI RMF) as isolated reference materials, rather than as operational tools for building resilience into AI-enabled systems.

MITRE ATLAS, much like ATT&CK in its early days, is often viewed as a static catalog of attack techniques targeting AI/ML systems. In reality, ATLAS is a tactical adversary emulation framework designed to help security teams simulate AI-specific threats — from data poisoning and model evasion to inference manipulation — and validate their detection, logging, and response capabilities. The problem is that most enterprises haven’t yet built the visibility or controls necessary to detect attacks against the ML pipeline, making the proactive use of ATLAS through breach and attack simulation strategies all the more critical. It’s an underleveraged tool for testing how AI systems behave under adversarial pressure.
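As one small, concrete example of testing how an ML system behaves under adversarial pressure, the scikit-learn sketch below simulates a label-flipping data poisoning attack and measures the resulting drop in test accuracy. The dataset, model, and flip fractions are toy choices, not a procedure prescribed by ATLAS.

```python
# Toy ATLAS-style experiment: flip a fraction of training labels (data
# poisoning) and observe how test accuracy degrades. Assumes scikit-learn
# and NumPy are available; everything here is synthetic and illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

def accuracy_with_poisoning(flip_fraction: float) -> float:
    """Train on partially poisoned labels and score on the clean test set."""
    rng = np.random.default_rng(0)
    y_poisoned = y_tr.copy()
    idx = rng.choice(len(y_tr), size=int(flip_fraction * len(y_tr)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]   # flip binary labels
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
    return accuracy_score(y_te, model.predict(X_te))

for frac in (0.0, 0.1, 0.3):
    print(f"label flip {frac:.0%}: test accuracy {accuracy_with_poisoning(frac):.3f}")
```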

On the other hand, NIST’s AI RMF is frequently misunderstood as a compliance checklist. In truth, it’s a strategic governance framework — one that supports organizations in mapping AI use cases, measuring risks (including those posed by adversaries), managing them through prioritization and mitigation, and embedding oversight across the system lifecycle. Where ATLAS is tactical, AI RMF is strategic. The two frameworks are highly complementary: ATLAS enables the validation of risk through real-world simulation, while AI RMF provides the structure to govern those risks, define ownership, and align AI assurance with business priorities.

In our Foundations of AI Security course, we teach that one of the most underutilized aspects of the AI RMF is the “Map” and “Measure” functions — especially in early-stage deployments. These functions encourage organizations to model not only system use and misuse scenarios but also identify adversarial threats in context. Pairing this with ATLAS empowers organizations to move beyond theoretical concerns and begin operationalizing AI security in a meaningful, testable way.

Ultimately, the missed opportunity lies in treating these frameworks as academic. When used together, AI RMF and ATLAS enable a threat-informed, risk-driven approach to securing AI, turning high-level governance into real-world assurance.

From prompt injection to model theft, the OWASP Top 10 for LLMs highlights a new class of vulnerabilities. Which of these threats do you think CISOs are most unprepared for—and why?

Supply chain vulnerabilities (LLM03:2025) exploit critical blind spots in existing governance and enterprise trust assumptions by slipping through controls not originally designed for AI/ML systems. Traditional security programs focus on software packages and code dependencies, but AI models, often treated as data assets, lack the same scrutiny. This allows compromised pre-trained models, poisoned LoRA adapters, or tampered Hugging Face merges to be ingested into production environments without verification, code signing, or behavioral evaluation. Because these models don’t trigger static analysis or malware signatures, they behave as sleeper threats, activating only under specific prompts or conditions that evade detection.

The enterprise assumption that reputable AI ecosystems or registries enforce trustworthy standards compounds the problem. Security teams may believe their DevSecOps pipelines and TPRM programs cover AI risks, but in reality, most do not audit dataset lineage, enforce model provenance, or apply SBOM-equivalent controls to AI components. Attackers exploit this misplaced trust by manipulating open-source model tools, backdooring adapters, or poisoning fine-tuning data to silently embed malicious behavior. Without adversarial red teaming and governance that explicitly accounts for these gaps, even well-secured enterprises risk operational compromise through trusted but unverified AI artifacts.
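A minimal provenance check of the kind described above might look like the sketch below, which refuses to load a model artifact unless its SHA-256 digest matches an internally approved allowlist. The artifact name, path, and digest are placeholders; a real pipeline would also verify signatures and keep an SBOM-style record for each model.

```python
# Illustrative model-provenance gate: compute the artifact's SHA-256 and
# compare it to a pinned digest recorded when the model was approved.
import hashlib
from pathlib import Path

APPROVED_DIGESTS = {
    # "artifact name": "sha256 digest pinned at approval time" (placeholder)
    "sentiment-lora-adapter": "<pinned-sha256-digest>",
}

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(name: str, path: Path) -> bool:
    expected = APPROVED_DIGESTS.get(name)
    return expected is not None and sha256_of(path) == expected

if __name__ == "__main__":
    artifact = Path("models/sentiment-lora-adapter.safetensors")   # hypothetical path
    if not verify_artifact("sentiment-lora-adapter", artifact):
        raise SystemExit("Refusing to load: unknown artifact or digest mismatch")
```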

As red and blue teams begin to simulate AI-enabled adversaries, how does adversary emulation evolve? Are we entering a new era of simulation-based security validation?

As adversaries integrate AI into their operations, adversary emulation needs to evolve in step, moving beyond fixed playbooks. Red teams must simulate dynamic, AI-like behaviors: pivoting, privilege escalation, and adaptive tactics, techniques, and procedures (TTPs), all modeled to reflect how a real AI-driven attacker might act.

We are entering a new era where emulation becomes continuous and intelligence-led. No longer episodic exercises, simulations integrate real-time threat intelligence and iterate in production-like environments, testing controls under emergent, unpredictable conditions. Combined with CTEM, this approach ensures security validation becomes a strategic, operational function rather than a checkbox.

Looking ahead, what emerging AI risks concern you most? And where do you see the greatest opportunity for defenders to get ahead of the curve?

The most concerning risk is how AI is lowering the barrier to entry for sophisticated attacks at scale. What once required deep technical skill is now increasingly accessible through commoditized AI tools. AI can automate reconnaissance, generate exploit code, and craft tailored phishing campaigns, operating faster than traditional defenses can adapt.

On the flip side, defenders have a parallel opportunity: to harness AI and automation for proactive defense. By automating emulation, accelerating detection and prioritizing exposures based on real-world risk, security teams can anticipate adversarial moves rather than simply react to them.

Thank you for the great interview. Readers who wish to learn more should visit AttackIQ.

Antoine is a visionary leader and founding partner of Unite.AI, driven by an unwavering passion for shaping and promoting the future of AI and robotics. A serial entrepreneur, he believes that AI will be as disruptive to society as electricity, and is often caught raving about the potential of disruptive technologies and AGI.

As a futurist, he is dedicated to exploring how these innovations will shape our world. In addition, he is the founder of Securities.io, a platform focused on investing in cutting-edge technologies that are redefining the future and reshaping entire sectors.