Thought Leaders
Copyright in the Age of AI: A Turning Point for Copyright Law

Artificial intelligence is forcing legal systems around the world to confront the most fundamental question in copyright: What does it mean to be an author?
For decades, the doctrine evolved slowly, adapting to new formats, new industries, and new technologies. But the rise of generative AI has accelerated that evolution more than any other innovation in the last century. Suddenly, judges and lawmakers must decide whether learning from copyrighted material is “stealing,” whether algorithmic output can ever be protected, and how to balance innovation with the rights of creators.
These questions no longer reside in academic circles or policy papers. They are being fought in courtrooms today, shaping the rules for how AI tools are trained, how they operate, and who bears responsibility for their outputs. The answers emerging from these cases will fundamentally define the future of global AI development.
What’s unfolding now is not the collapse of copyright, but its transformation. And the U.S. courts — historically a global reference point — are at the center of the debate.
Thomson Reuters v. ROSS Intelligence: A Turning Point for AI Training
One case that illustrates the changing legal landscape against the backdrop of AI rollout is Thomson Reuters v. ROSS Intelligence. In February 2025, a U.S. court in Delaware ruled that using editorial headnotes from Westlaw, an online legal research service, to train a competing AI legal research tool did not qualify as fair use.
The judge reasoned that if an AI system learns from copyrighted material in order to build a competing product, that training is unlikely to qualify as “transformative” and therefore cannot be excused as fair use. This ruling set a major precedent: not all AI training is equal, and the purpose of the model, especially its commercial overlap with the source material, matters.
Yet the legal picture is far from uniform. Just months later, two California judges adopted a more cautious, nuanced approach in Kadrey v. Meta and Bartz v. Anthropic, related disputes involving authors whose copyrighted works were used to train AI models. They signaled that training large language models could indeed be considered fair use, provided that the underlying data was lawfully acquired and the training caused no market harm, meaning the models neither reproduced substantial chunks of books nor undermined the market for licensing them.
While these rulings did not contradict the Delaware decision, they refined its approach and clarified the legal landscape. Together, these cases demonstrate that U.S. courts are actively calibrating how the traditional four-factor fair use test should apply to cutting-edge AI technologies.
A Familiar Pattern: AI Echoes Past Legal Battles
AI may feel unprecedented, but the legal dilemmas surrounding it are not new. Throughout U.S. history, novel technologies have repeatedly forced courts to redefine creativity, ownership, and permissible use:
- Photography was once doubted as art until the Supreme Court ruled in 1884, in Burrow-Giles v. Sarony, that producing a photograph involves human creativity, including composition, lighting, and artistic intention, and therefore deserves copyright protection.
- The VCR, in the 1984 Betamax decision, survived Hollywood’s attempt to outlaw it when the Court ruled that recording TV broadcasts for personal viewing was not infringement. This established that a device capable of substantial non-infringing uses should not be outlawed merely because it can also be used to copy protected content.
The pattern is unmistakable: every transformative technology arrives with fear, confusion, and intense litigation. And every time, courts adapt long-standing legal principles to new contexts. Today’s AI debates closely mirror those early disputes: Is AI primarily an instrument of infringement or a powerful tool for creativity and progress?
A Global Patchwork of AI Copyright Rules
Other legal systems are grappling with the same tensions, each through its own lens:
- China’s Beijing Internet Court (2023) ruled that AI-assisted images can be copyrighted if the human demonstrates meaningful aesthetic control.
- The European Union’s AI Act (2024) introduced the world’s first transparency requirement for AI developers, mandating disclosure of summaries of copyrighted training data.
- Canada, the U.K., and Australia are exploring hybrid approaches that balance innovation with creator protection.
Despite the differences, one theme is global: copyright law is adjusting not by discarding old rules or inventing new principles, but by recalibrating existing doctrine and reinterpreting human creativity in the age of automation.
The Baseline Principle: Human Authorship Still Reigns
Both the U.S. Copyright Office’s 2023 guidance and the D.C. Circuit’s 2025 Thaler v. Perlmutter decision reaffirm that purely machine-generated works cannot be copyrighted.
What matters is “sufficient human creativity”, the human contribution that shapes, selects, curates, or meaningfully transforms AI output into a final work. AI may produce infinite possibilities, but authorship still depends on human judgment. As cases multiply, courts will refine this line — but they will not erase it.
The Legal Battlefield Widens: Music, Film, and Beyond
In 2024–2025, the focus of AI-related litigation expanded from training to output. Major record labels have filed cases against AI song generators Suno and Udio, claiming the start-ups operate unlicensed services that exploit artists’ recordings to generate similar tracks for profit. The labels argue that such use is not transformative and threatens the licensed music market. Film studios, including Disney, Universal, and Warner Bros. Discovery, are suing image-generation platforms such as Midjourney for enabling users to create depictions of protected film and TV characters, which the studios say infringes their copyrights.
These cases are no longer focused solely on how AI is trained, but also on what it produces and who is responsible for that content. If an AI system produces infringing content, who is liable: the developer, the user, or the model itself? How close must an AI-generated output be to a protected work to cross the line? The answers will define the rules for generative media in every creative industry.
Law in Motion: Copyright’s Next Chapter Is Being Written Now
Copyright is under stress — but not collapse. The same legal principles that applied to photography, radio, and television are now being used to define the rules of machine learning. Copyright is not dying; it is being rewritten in real time and remains loyal to its oldest purpose: safeguarding human creativity while allowing innovation to flourish. Courts are not abandoning foundational principles; they are stretching them to fit new realities. And each ruling brings the system closer to a stable, functional framework for AI.
The true transformation is not in the law itself, but in how quickly it must now evolve. Historically, copyright adapted over decades. Today, it must adapt in real time through rapid rulings, legislative updates, and international coordination.
These are not merely legal puzzles. They will shape how AI is built, deployed, and monetized for decades. The legal community is not witnessing a crisis; it is participating in one of the most significant rewritings of intellectual property law in modern history. The privilege for today’s lawyers, creators, and businesses is extraordinary: to define the legal architecture of the AI age.