The Boardroom Gap: Why CISOs Struggle to Talk About Deepfakes — and How to Frame It

Cybersecurity is entering a pivotal moment, driven by the widespread adoption of AI by enterprises, governments, and individuals. With 82% of US companies either using or exploring AI in their business, organizations are unlocking new efficiencies, but so are attackers. The same tools powering innovation now let threat actors create synthetic content (images, audio, and video) and malicious deepfakes (manipulated audio, video, or images used to impersonate a real person) at unprecedented speed and sophistication. In just a few clicks, anyone with a computer and an internet connection can manipulate images, audio, and video, injecting distrust and doubt into the information ecosystem.
In an age where companies, governments, and media organizations depend on digital communication for their livelihood, they cannot afford to underestimate the risks posed by deepfakes, synthetic identity fraud, and impersonation attacks. These threats are no longer hypothetical: financial losses from deepfake-enabled enterprise fraud exceeded $200 million in Q1 2025 alone, underscoring the scale and urgency of the issue. A new threat landscape requires a new approach to cybersecurity, and CISOs need to act fast to keep their companies secure. However, requesting new capital and clearly communicating an organization's threat exposure can be daunting when board members' understanding of the deepfake threat varies widely. As deepfake attacks continue to evolve and scale, every CISO needs to be at the forefront of bringing this conversation to the boardroom.
Below is a framework for CISOs and executives to facilitate stakeholder conversations at the board, organizational, and community levels.
Use Familiar Frameworks: Deepfakes as Advanced Social Engineering
Boards have been conditioned to think about cybersecurity in familiar terms: phishing emails, ransomware attacks, and the looming question of whether their company will be breached. That mindset shapes how they prioritize threats and where they allocate security budgets. But when it comes to AI-generated content, especially deepfakes, there’s no built-in reference point. Framing deepfakes as a standalone, novel threat often leads to confusion, skepticism, or inaction.
To combat this, CISOs should anchor the conversation in something boards already understand: social engineering. At its core, the deepfake threat isn't entirely new; it's an evolved, more dangerous form of phishing, which has plagued the industry for years and remains the number one social engineering attack vector. Boards already recognize phishing as a credible risk, and they're comfortable approving resources to defend against it. In many respects, deepfakes represent a more convincing, more scalable, and more capable form of social engineering, targeting both organizations and individuals with devastating precision.
Framing deepfakes in this way allows CISOs to tap into existing education, budget lines, and institutional muscle memory. Rather than asking for entirely new resources, they can position the request as an evolution of already-approved security investments. The more CISOs lean into this narrative, the more likely they are to secure the resources needed to address this larger, more immediate issue.
Anchor the Risk in Realism, Not Sensationalism
Pointing to real-world examples is a great way to deepen a board's understanding of how deepfake threats could impact the organization. However, CISOs should choose those examples carefully, as the wrong ones can have the opposite effect. Infamous stories such as the $25 million wire-fraud incident in Hong Kong make for great headlines, but they can backfire in the boardroom. Such extreme examples often feel remote or unrealistic, creating a sense that "something that catastrophic could never happen to us." That bias kicks in immediately and drains the urgency to invest in protection.
Instead, CISOs should use more relatable scenarios to show how this risk could play out internally, such as executive impersonation or interview fraud.
In one case, North Korean threat actors staged a fake Zoom call featuring AI-generated executives, attempting to trick a cryptocurrency company employee into downloading malware that would give them access to sensitive company information and, ultimately, the firm's crypto assets. The attack failed, but the danger such schemes pose to a brand's integrity should be a wake-up call for enterprise boards.
Another growing tactic involves fake job candidates using AI-generated identities and deepfake credentials to infiltrate enterprise organizations. These individuals often act on behalf of U.S. adversaries such as Russia, North Korea, or China, seeking access to sensitive systems and data. This trend drains internal resources and exposes organizations to national security risks and financial exploitation.
Often, these threats fly under the radar: for every example in the news, dozens go unreported, making it difficult to grasp the full magnitude of the threat. Paradoxically, the more mundane the attack, the more unsettling and relatable it becomes. By sharing realistic examples that hit closer to home, CISOs can ground the deepfake conversation in everyday business operations and reinforce why this evolving threat demands serious attention at the board level.
Tie Deepfake Defense to Existing Resilience Metrics
CISOs are continually asked the same questions by their boards: What's our likelihood of being breached? Where are we most vulnerable? How do we reduce risk? Phishing, ransomware, and data breaches haven't gone away, but CISOs must show how these vulnerabilities have fundamentally shifted and now extend well beyond traditional attack surfaces.
HR, finance, and procurement teams, roles not traditionally seen as frontline defenders, are now frequent targets of synthetic impersonation, and the average person's ability to spot these threats is extremely low: only 1 in 1,000 people can accurately detect AI-generated content. CISOs are now tasked with delivering advanced social engineering education and building cyber resilience across the organization, because everyone needs to be trained, tested, and aware in order to help with mitigation.
Deepfake defense needs to become an extension of enterprise-wide resilience, with continuous education delivered the same way teams are already trained: phishing simulations, awareness training, and red team exercises. CISOs should use the results of those trainings and simulations to frame the issue in numbers their board already understands, such as failure and reporting rates, as the sketch below illustrates. If a board has already bought into resilience as a strategic priority for the organization, deepfakes become a natural next frontier.
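As a purely illustrative sketch, here is one way a security team might fold deepfake-simulation results into the same per-department failure and reporting rates boards already review for phishing campaigns. The data structure and field names below are assumptions for the example, not the schema of any real awareness-training product.

```python
from dataclasses import dataclass

# Hypothetical record of one employee's result in a simulated
# deepfake exercise (e.g., a spoofed "CFO" voice call). Field
# names are illustrative assumptions, not a real tool's schema.
@dataclass
class SimulationResult:
    department: str   # e.g., "finance", "hr", "procurement"
    complied: bool    # acted on the fake request (a failure)
    reported: bool    # escalated it to security (a success)

def resilience_metrics(results: list[SimulationResult]) -> dict[str, dict[str, float]]:
    """Compute per-department failure and reporting rates,
    mirroring the phishing-simulation metrics boards already see."""
    by_dept: dict[str, list[SimulationResult]] = {}
    for r in results:
        by_dept.setdefault(r.department, []).append(r)
    return {
        dept: {
            "failure_rate": sum(r.complied for r in rs) / len(rs),
            "report_rate": sum(r.reported for r in rs) / len(rs),
        }
        for dept, rs in by_dept.items()
    }

if __name__ == "__main__":
    sample = [
        SimulationResult("finance", complied=True, reported=False),
        SimulationResult("finance", complied=False, reported=True),
        SimulationResult("hr", complied=False, reported=False),
    ]
    for dept, m in resilience_metrics(sample).items():
        print(f"{dept}: fail {m['failure_rate']:.0%}, reported {m['report_rate']:.0%}")
```

Presenting deepfake readiness in this familiar form lets a board compare it directly against phishing benchmarks it already tracks, rather than treating it as a new and unquantified category of risk.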
AI-generated threats aren't coming; they're already here. The widespread adoption of AI has transformed the scale and frequency of deepfake and identity-based attacks, making the threat landscape unpredictable and ever-evolving. It's time to ensure the boardroom is ready to listen and lead.
But boards don't need a primer on deepfakes or voice cloning. They need clear business context and a better understanding of the threat these attacks pose to their organizations. CISOs should ground the conversation in risk, cost, and operational continuity. Those who align their deepfake narrative with familiar paradigms (phishing, social engineering, resilience) give their board a framework and context in which it can act, not just react.