Dr Mathilde Pavis, Head of Legal, OpenOrigins – Interview Series

Dr Mathilde Pavis, Head of Legal at OpenOrigins, is a leading expert in AI regulation and digital media governance, specializing in deepfakes, synthetic media, and content provenance. She advises companies, governments, and trade unions on compliance, licensing, and risk in generative AI, and has worked with Microsoft and ElevenLabs on AI policy and strategy. She has also advised UNESCO on AI and intellectual property, and regularly contributes expert evidence to UK policymakers.
OpenOrigins develops technology to combat misinformation and deepfakes by creating verifiable, tamper-proof records of digital content. Its platform focuses on establishing clear provenance, allowing media, creators, and platforms to prove when and how content was created, edited, and distributed—an increasingly critical capability as synthetic media becomes more advanced and harder to detect.
You have advised governments, global institutions like UNESCO, and companies such as Microsoft and ElevenLabs on AI regulation. What led you to focus specifically on deepfakes, digital replicas, and synthetic media, and how did that journey shape your decision to found Replique?
My work on deepfakes didn’t begin with the technology—it began with a much older legal puzzle. When I started researching intellectual property for my PhD in 2013, I was struck by how much less protection performers receive compared to authors, composers, or filmmakers. In practice, it means that your words or your music end up better protected in law than your voice, your face, and your body. That imbalance felt odd, and it pushed me to ask a deeper question: how do we culturally and legally value the work of someone whose contribution is their face, voice, and body on screen?
That question led me into performers’ rights and data. At the time, it was considered a niche area with little commercial relevance. I was actively advised to move into more “lucrative” fields like patents or traditional copyright. The assumption was that issues around a person’s likeness or voice were largely managed informally—through industry norms or “gentlemen’s agreements in Hollywood”. But to me, that lack of formal protection signalled a gap, not a dead end for my research, so I stuck with it.
What’s changed is that today, almost everyone is a performer. Our lives are mediated through cameras—on phones, laptops, video calls, and social platforms. Whether for work or personal use, people are constantly recording and sharing versions of themselves. The legal questions that once applied mainly to actors or musicians now apply to anyone with a smartphone.
Deepfakes didn’t create these issues—they exposed and accelerated them. The research I was doing from 2013 onwards suddenly became urgent. Around 2017 and 2018, developments in neural networks—particularly coming out of places like MIT and UC Berkeley—began demonstrating how convincingly a person’s face, voice, and body could be digitally manipulated. Within a year, that capability became widely known as “deepfakes,” and it first gained traction in deeply harmful ways, especially through non-consensual sexual content targeting women and children.
Only later did the commercial implications emerge, as the creative industries began adopting synthetic media. That’s when the contractual and economic questions I had been working on came to the forefront. Almost overnight, what had been seen as a largely theoretical or doctrinal area of law became a highly practical, commercially significant, and socially urgent field.
At its core, the legal challenge hasn’t changed: people want to share aspects of themselves, but still retain meaningful control. Existing frameworks struggle with that nuance. They tend to treat individuals as either entirely private or fully public—either protected or fair game. But most people exist somewhere in between. That tension is now central not just for professional performers, but for anyone participating in digital life.
I became known as someone who researched and worked in this space, which led me to work with governments interested in protecting people against deepfakes, and companies wanting to make digital cloning products safe to use, like ElevenLabs. At Replique, I bring everything I’ve learnt to people and companies who want to use digital cloning or digital replica technology responsibly and safely. I’ve basically turned my ‘blue sky’ research into an advisory business that brings specialist legal advice to the creative industries.
As Head of Legal at OpenOrigins, a company focused on establishing an immutable record of content provenance to combat deepfakes, how do you see provenance-based systems competing with or replacing traditional deepfake detection approaches?
Comparing deepfake detection tools can quickly become an apples-and-oranges exercise, because their effectiveness depends on context and purpose. From a policy perspective, what we need is a range of complementary tools – there is no single “best” solution, and OpenOrigins is one part of that wider ecosystem. Where OpenOrigins’ technology stands out as a deepfake detection solution is in situations where a content creator or information organisation needs to prove the authenticity of the content they share with partners, audiences, or the public.
By providing verifiable provenance and “receipts” at the point of creation, it offers a strong form of prevention by demonstrating that content is not a deepfake. However, this approach is less useful for everyday internet users who want to quickly assess content they encounter online. In those cases, detection relies more on probabilistic and content-analysis methods rather than provenance-based verification. We need different tools for different needs, and we need to accept there is no silver bullet against deepfakes.
From a legal standpoint, what is currently the biggest gap in how jurisdictions are handling consent and ownership in AI-generated or AI-replicated content?
Oof, how long have you got? The answer depends on what we mean by AI-generated or AI-replicated content. The issues vary depending on whether you’re looking at an AI-generated image of a house or a cat, or a digital recreation of a person’s face or voice. Let’s stick to the topic of deepfakes and digital replicas, and answer your question in the context of ‘digital cloning’.
On consent, the core issue is that most contracts – whether employment agreements or platform terms – contain broad, vague clauses that grant extensive rights over user content. These can be interpreted as a form of “backdoor consent” where agreeing to terms may be taken to mean consenting to uses like cloning, even though most people would strongly dispute that interpretation. This creates a significant gap between legal interpretation and user expectation, one that currently benefits companies while regulation lags behind.
On ownership, there is no clear legal answer to who owns a digital clone, because existing frameworks like data protection, copyright, and personality rights were not designed for this technology. Today, most people get scanned and cloned at work, at the request and with the financing of an employer or a client. And those entities usually expect a high degree of control over this asset, which is understandable but often problematic because that asset is a digital imitation of your face or your voice, and can make you say things you’ve never said, or do things you’ve never done.
The question of ‘who owns your clone?’ is very important, yet unanswered in law today.
You have worked closely on voice cloning technologies. What are the most misunderstood legal risks when it comes to synthetic voices, both for companies and individuals?
The most misunderstood issue in legal compliance is the balance between a company’s commercial interest in funding and exploiting a digital clone, and the individual’s right to privacy and digital dignity. This tension sits across multiple legal regimes (primarily intellectual property, data protection, and privacy) which were never designed to operate together and interpret cloning in fundamentally different ways. As a result, translating this into workable, business-friendly practices is complex and often unclear. Companies therefore either overlook key risks or incur significant costs to navigate them properly. That creates a perverse outcome where responsible compliance becomes the harder, more expensive path, rather than the default.
How should enterprises think about consent architecture in AI systems, especially when dealing with likeness, identity, and training data?
Companies should design their systems around three core capabilities. First, they need to secure informed, contextual consent at onboarding. Second, they must make it easy for users to withdraw that consent and delete some or all of their data, something that is technically challenging and often overlooked, but essential for compliance with laws like the UK and EU GDPR and similar regimes in the US. Maintaining consent over time means building systems where withdrawal is operationally smooth and aligned with the business model.
And third, consent must be granular: users should be able to manage permissions at the level of individual files, update their likeness data, and understand how it is being used. That requires transparency and control – tools that allow users to monitor, review, and moderate how their digital clones are deployed. This level of flexibility is still rare, but it’s where competitive advantage increasingly lies.
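To make those three capabilities concrete, here is a minimal, hypothetical sketch of a consent record. All names and fields are illustrative assumptions, not OpenOrigins’ or any vendor’s API; it simply shows how onboarding consent, withdrawal, and per-file permissions could be represented in one structure.

```python
# Hypothetical sketch of a granular consent record: consent at onboarding,
# per-file permissions, and withdrawal. Illustrative only.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ConsentRecord:
    user_id: str
    purpose: str                       # e.g. "voice_clone_training"
    granted_at: datetime
    revoked_at: datetime | None = None
    # Per-file permissions: file identifier -> set of allowed uses
    file_permissions: dict[str, set[str]] = field(default_factory=dict)

    def grant_file(self, file_id: str, uses: set[str]) -> None:
        """Record which uses the user has approved for a specific file."""
        self.file_permissions[file_id] = uses

    def revoke(self) -> None:
        """Withdraw consent; downstream systems must stop processing and delete data."""
        self.revoked_at = datetime.now(timezone.utc)

    def is_permitted(self, file_id: str, use: str) -> bool:
        """Check whether a given use of a given file is still covered by consent."""
        if self.revoked_at is not None:
            return False
        return use in self.file_permissions.get(file_id, set())


# Example: a user consents to clone training on one recording, then withdraws.
record = ConsentRecord("user-42", "voice_clone_training",
                       granted_at=datetime.now(timezone.utc))
record.grant_file("recording-001.wav", {"training", "preview"})
print(record.is_permitted("recording-001.wav", "training"))  # True
record.revoke()
print(record.is_permitted("recording-001.wav", "training"))  # False
```

The point of the sketch is that withdrawal and per-file granularity are first-class properties of the data model, not afterthoughts bolted onto it.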
In your experience advising both startups and governments, where is the biggest disconnect between how AI is being built and how it is being regulated?
The disconnect between how AI is built and how it is regulated comes down to fundamentally different missions. Governments regulate in the public interest, while AI companies (often venture-backed) are primarily driven by growth, revenue, and profit. Those priorities don’t always conflict, but they frequently pull in different directions, with regulation seen as a constraint rather than a support.
This creates a structural tension: regulators and innovators are operating with different incentives, values, and even languages. That makes alignment difficult in practice, even if it’s not impossible. We are starting to see a new wave of tech companies aligning more closely with public interest goals, but they remain the exception rather than the rule – especially among those that successfully scale.
OpenOrigins focuses on verifying content at the point of creation using cryptographic provenance. How critical is this origin-first approach compared with post-distribution safeguards?
This loops back to my answer above. Authenticating content at creation (‘upstream’) is far more effective than trying to verify it at the point of distribution or consumption (‘downstream’). Authenticating content at creation is like tracing food from the moment it’s grown on the farm, rather than trying to work it out from what’s on your plate. If you know where the chicken was raised, how it was handled, and how it moved through the supply chain, you can trust what you’re eating. If you’re instead trying to infer all of that just by looking at the finished dish, you’re relying on guesswork. It’s the same with discerning between human-created and AI-generated content online: provenance at the source gives you verifiable assurance, while downstream detection is inherently more uncertain and reactive.
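A toy sketch can show the principle behind “receipts at the point of creation”. This is not OpenOrigins’ actual system and not a C2PA implementation, just an assumed minimal example: hash the content the moment it is captured, sign the hash and basic metadata with the capture device’s key, and let anyone downstream check that the file on their “plate” still matches the signed record from the “farm”.

```python
# Toy provenance-at-creation example (illustrative only): hash and sign content
# at capture time, then verify the signed "receipt" downstream.
import json
import hashlib
from datetime import datetime, timezone
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Key pair standing in for a capture device or newsroom signing identity.
device_key = Ed25519PrivateKey.generate()
device_pub = device_key.public_key()


def create_receipt(content: bytes) -> dict:
    """At the point of creation: hash the content and sign hash + metadata."""
    manifest = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "captured_at": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    return {"manifest": manifest, "signature": device_key.sign(payload)}


def verify_receipt(content: bytes, receipt: dict) -> bool:
    """Downstream: check the content still matches its signed manifest."""
    if hashlib.sha256(content).hexdigest() != receipt["manifest"]["sha256"]:
        return False
    payload = json.dumps(receipt["manifest"], sort_keys=True).encode()
    try:
        device_pub.verify(receipt["signature"], payload)
        return True
    except InvalidSignature:
        return False


original = b"frame data captured by the camera"
receipt = create_receipt(original)
print(verify_receipt(original, receipt))                # True: provenance intact
print(verify_receipt(b"tampered frame data", receipt))  # False: content altered
```

Downstream detection, by contrast, has no signed manifest to check against and must infer authenticity from the content alone, which is why it remains probabilistic.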
What role do you see standards like C2PA playing in the future of media, and are they sufficient on their own to restore trust online?
C2PA is a welcome initiative and, in many ways, supports the same movement for content authenticity as OpenOrigins. It is an important part of the content safety and content authenticity ecosystem. As with every cybersecurity tool, though, there is no silver bullet.
For creators and talent in industries like film, music, and gaming, what practical steps should they take today to protect themselves from unauthorized digital replication?
Artists today face two distinct risks: the replication of their work (such as music, images, or writing) and the replication of their likeness, including their face, voice, and body. With minimal input, AI systems can now reproduce both with a high degree of fidelity. In practical terms, protection starts with being deliberate about what you share online, recognising that any content posted may be scraped and used in training datasets, often without clear consent or visibility.
That risk is now a baseline reality of operating online. But the more immediate and controllable risk often lies in contracts. Agreements artists make with their collaborators, distributors, or platforms may include clauses that allow AI use, reuse, or resale of content for training purposes – frequently without meaningful participation in downstream revenue.
For artists, this makes contract scrutiny critical. Understanding how your work and likeness can be used, licensed, or repurposed is now as important as the creative process itself. Much of the current debate (across unions, industry bodies, and platforms) centres on correcting this imbalance and ensuring creators retain both control and fair compensation.
So, two key pieces of advice: be careful what you share online, and read your contracts for AI clauses before you sign.
Looking ahead three to five years, do you believe we will reach a point where every piece of digital content must carry verifiable provenance, or will trust remain fragmented across platforms and jurisdictions?
I’d like to say yes, but realistically, no—not within five years. In tech, five years feels long; in terms of changing user behaviour and habits, it’s very short. Most consumers are unlikely to base their decisions on whether content comes with authenticated provenance. Platforms tend to follow user demand, optimising for engagement rather than provenance.
That could shift if regulation intervenes. We’re already seeing early moves in places like California, where labelling and moderation requirements are emerging, but scaling that globally will take time – likely closer to a decade than five years.
Another area of change is sector-specific: industries like journalism, finance, insurance, and healthcare may begin to require provenance and authentication because trust is fundamental to their operations.
Last but not least, consumers may not care about provenance information in the short term, but they will likely care about quality of content, and quality of information. If AI-generated content becomes too homogeneous or “bland”, audiences may start to value human-created content more explicitly. That could drive a segmentation of the market, with some platforms prioritising scale and AI-generated content, and others curating for authenticity, provenance and high-trust, human-led material – but that shift remains an unknown.
Thank you for your great answers. Readers who wish to learn more should visit OpenOrigins.












