Aarti Samani, Founder and CEO of Shreem Growth Partners – Interview Series

Aarti Samani, Founder and CEO of Shreem Growth Partners, is a technology leader and AI strategist with over two decades of experience driving growth across AI, digital identity, and biometric security. She has held senior roles at iProov and Digital Surgery (acquired by Medtronic), leading global product and marketing strategies and securing major funding rounds. A frequent BBC commentator and keynote speaker at global events including Money20/20 and CogX Festival, she advises boards and executives on AI ethics, governance, and risk, helping organisations build trust and resilience in the age of artificial intelligence.
Shreem Growth Partners helps organisations protect against deepfake fraud and AI-driven impersonation by focusing on the human layer of defence. Through tailored training, executive briefings, vulnerability assessments, and strategic consultations, the firm equips teams and leaders to recognise, respond to, and prevent manipulation attempts. By fostering awareness, preparedness, and cultural resilience, Shreem empowers organisations to safeguard their identity, reputation, and workforce against evolving AI threats.
You founded Shreem Growth Partners after senior roles at iProov, Medtronic Digital Surgery, and other AI-driven firms. What inspired you to start a company devoted specifically to deepfake fraud resilience?
After the release of ChatGPT in late 2022, we witnessed an explosion of AI-powered media tools capable of cloning voices and faces in minutes. Most organisations viewed them as creative accelerators for design and marketing. But criminals saw a new opportunity for more impactful social engineering.
Deepfakes quickly became the tool of choice for fraudsters to manipulate people. Security teams responded by investing in detection technology, yet detection alone cannot solve a problem rooted in human trust. Psychology was the industry’s blind spot.
That insight led me to found Shreem Growth Partners, a firm dedicated to building deepfake fraud resilience through awareness, simulation, and cognitive resilience. Strengthening people’s ability to think critically and resist manipulation is the key to mitigating this form of fraud.
At iProov, you worked at the frontier of facial biometrics and identity verification. How did that experience reveal the growing threat that deepfakes pose to authentication systems?
As Chief Product and Marketing Officer at iProov, I helped bring biometric face verification to security-sensitive organisations across both public and private sectors. These institutions are constant targets for organised cybercrime. Their adversaries are intelligent, well-funded, and equipped with cutting-edge technology.
We began to see attempts to bypass biometric systems using deepfakes built from stolen identities. Because iProov operated as a cloud-based managed service, we could observe these attacks in real time and analyse how each iteration evolved. It became clear that the criminals were learning faster than the market was adapting.
That experience gave me an inside view of the evolution of deepfake-enabled fraud and a deep appreciation for how creativity and psychology drive cybercrime. That same understanding of the criminal mindset now informs my work in deepfake fraud resilience.
Deepfakes are increasingly used for executive impersonation, voice cloning, and social-engineering attacks. Which scenarios are you seeing most often in the field today?
It is always easier to gain access through social engineering than by breaking the security systems themselves. As people grow more aware of traditional scams, criminals are turning to new ways to manipulate trust.
Deepfake technology has become the most powerful social-engineering tool we have ever seen. It enables perpetrators to generate high-fidelity trust signals: realistic faces, voices, and narratives that bypass human judgment.
The criminal advantage is not limited to executive impersonation or payments. Increasingly, the targets are intellectual property, sensitive data, and access credentials. By training AI agents and giving them cloned voices, criminals can now execute fraud with unprecedented scale, speed, and precision. That is the threat horizon we are stepping into.
What distinguishes a deepfake fraud resilience program from traditional cybersecurity or anti-phishing training?
Cybersecurity training as we know it was created in the early 2000s, when the internet and commercial computing were still taking shape. Since then, the content has evolved to meet compliance needs, with a focus on LMS platforms and gamification for engagement. But the threat landscape has moved on.
Today’s fraud is intelligent, creative, and psychologically engineered. Criminals use AI with the same sophistication as academia or enterprise. Training must therefore go beyond compliance and into cognition.
Deepfake fraud resilience training cannot be reduced to whether someone clicks a link. It has to teach people to think critically, to question the authenticity of faces and voices they interact with, and to recognise how easily perception can be manipulated.
Equal emphasis has to be placed on cognitive resilience. Heightened emotions, distracted minds, and constant multitasking all weaken critical thinking. Building cognitive resilience trains employees to maintain emotional equilibrium and to remain analytical and alert. That is exactly the mindset needed to resist manipulation.
Your firm offers Deepfake Vulnerability Assessments and Tabletop Exercises. Can you walk us through what one of those simulations looks like and what insights clients typically gain?
In our vulnerability assessments and tabletop exercises, we replicate the latest deepfake-enabled attack vectors drawn from real incidents. Common scenarios include a fake job applicant using a cloned identity, an IT helpdesk compromised to reset multi-factor authentication credentials, or a video call with an AI-generated persona that convinces staff to download malicious software.
These simulations expose how people respond under pressure and reveal blind spots in communication, process, and incident response. Executives often discover missing expertise and unclear decision ownership. The outcome is typically a sustained resilience programme: one that strengthens fraud literacy, drives process and culture change, and builds crisis readiness across the organisation.
You often talk about human-centric risk management. How can companies empower their people to detect and resist deepfake manipulation rather than rely solely on technical tools?
Deepfakes exploit the same neural circuitry that helps us evaluate trust and emotion. Our brains are wired to prioritise visual and auditory cues, as these are the first senses we develop. But we have not yet evolved to instinctively question these cues, so we must learn and continually reinforce that skill.
Companies can empower their employees by training them to treat digital interactions with caution and curiosity. When we are on a voice or video call, we are engaging with an artefact. That artefact may be a real person projected on the screen, or it may be an artificial one. We cannot determine that visually.
The only defence is active verification: asking the right questions, cross-checking details, and noticing inconsistencies in the narrative. The new mantra must be simple: zero trust, always verify.
Deepfakes blur the line between misinformation and fraud. How can executives prepare for reputational and operational crises triggered by synthetic media incidents?
Leadership teams must now treat deepfake incidents as a core business risk, not a hypothetical one. Every executive around the table should know their role when a crisis unfolds.
Just as boards review risk registers, they should also be reviewing incident response plans, and these cannot be static. The threat landscape evolves too quickly. Plans need to be stress-tested and updated at least twice a year, ideally following a tabletop exercise that simulates a real event.
Deepfake crises rarely sit neatly within one function. They demand coordination between communications, legal, security, and HR. The organisations that respond best are those whose leaders rehearse that collaboration before a real incident occurs.
There’s also a psychological cost when employees or executives become targets of convincing deepfakes. What support or protocols should organizations have in place to address that human impact?
“Scam shame” must be avoided at all costs. Organisations should build a culture of psychological safety long before an incident occurs. When employees or executives become targets of deepfakes, the experience can feel like an assault: a loss of control over one’s own likeness, voice, and digital identity.
The response must be both procedural and human. Clear reporting protocols should sit alongside mental health support, confidential debriefs, and access to digital forensics so individuals understand what happened.
Leaders set the tone. When executives speak openly about manipulation attempts and recovery, it removes stigma and encourages transparency. That openness is what turns individual vulnerability into collective resilience.
As the founder of a consultancy in this space, what are your biggest concerns about how fast generative AI is evolving—and where do you see the next wave of deepfake capabilities emerging?
Generative AI is evolving at extraordinary speed. That in itself is not the concern. The real risk lies in its rapid adoption and stickiness across everyday business operations. The pace of threat management and awareness simply has not kept up. That gap between threat and resilience is exactly where fraudsters thrive, with deepfakes as their most effective enabler.
Every new innovation becomes another surface for exploitation. Right now, AI agents pose a growing risk. When combined with deepfake technology, they can execute fraud at a scale, speed, and precision that no human attacker could match.
Conversely, what technologies or collaborative efforts give you optimism that we can stay ahead of deepfake-driven fraud and disinformation?
The fact that this conversation is happening in a respected publication is itself a sign of progress. It shows that innovators, regulators, media, and enterprises all recognise how hard disinformation hits.
What gives me optimism is the growing willingness to collaborate. Deepfake resilience will not come from one tool or one organisation but from shared intelligence: learning to exchange information and educate the community as efficiently as the criminals do.
We still need to refine how that collaboration happens, but the intent is there. And that collective intent is what will ultimately restore trust.
Thank you for the great interview; readers who wish to learn more should visit Aarti Samani.