Alex Hudek is the Co-Founder & CTO of Kira Systems. He holds Ph.D. and M.Math degrees in Computer Science from the University of Waterloo, and a B.Sc. from the University of Toronto in Physics and Computer Science.
His past research in the field of bioinformatics focused on finding similarities between DNA sequences. He has also worked in the areas of proof systems and database query compilation.
Today he published a new book, AI for Lawyers, which explores five major topics around the ethics of lawyers using – and not using – AI.
What was the inspiration behind writing AI for Lawyers?
My co-founder, Noah Waisberg, and I run a legal AI software company (Kira Systems) and have been working on legal AI for almost a decade. We’re among the longest-active people in the industry. Throughout this time, we’ve realized that there’s a wide range of acceptance and resistance, and mixed feelings when it comes to the role of AI in law. We felt that there was an opportunity to provide a perspective of where we are today, and dispel some of the myths and fears around AI.
The reality is that AI is here to stay, and we wanted to write a book to help lawyers realize that and to get them onboard if they aren’t already.
What are some of the topics that are discussed in the book?
AI for Lawyers delivers information crucial to understanding the future of law, in an accessible and readable format meant to demystify the jargon surrounding AI and show the powerful practicality that comes with embracing it.
New and aspiring lawyers will find this book indispensable, as it details exciting career options and trajectories that weren’t possible only a few years ago. It also provides a framework for adopting AI in practical detail, while highlighting how AI is being used today to achieve substantially better results for law firms around the world.
The book further explains how AI will likely shape the legal world in the years to come in areas that have yet to be transformed by digital technology, the ethics of lawyers using – and not using – AI, and more.
Why should lawyers begin to adopt AI?
Adopting AI is a good business decision. For law firms and other legal services providers, it offers opportunities to do better quality work, increase realization rates, win new business, retain and upsell existing business, and do fixed-fee work more profitably. For companies, it enables them to do work faster and with less effort and – more importantly – better run their businesses, knowing rather than guessing at, for example, the details of their business relationships (as documented in their contracts).
AI empowers adopters – through teaching AI systems new skills – to create competitive differentiation, build value in the organization rather than its individual lawyers, and potentially make money from capturing and distributing their expertise.
Can you discuss some examples of how AI is being used to achieve impactful results for law firms?
Machine learning is effective where large amounts of data are part of a legal process, and where scaling that process with human labor and intelligence alone is a challenge.
The first applications in legal came in litigation, when lawyers began to face the problem of more and more data being generated and stored in electronic formats. Increasingly, lawyers found that the parties to a lawsuit had large volumes of information (and potentially relevant evidence) stored in email, document management systems, and other digital media. The availability of data outpaced the ability of humans to review and identify all the information relevant to lawsuits, and this led to the application of machine learning in the discovery phase of litigation. Machines, it turns out, are very good at quickly and accurately identifying data that might be relevant to a discovery request.
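The core of the discovery application described above is a document classifier: a model learns from a small set of lawyer-labeled documents which ones are likely relevant to a request, then scores the much larger unreviewed pile. The following is a minimal sketch of that idea using a tiny bag-of-words Naive Bayes classifier; the documents, labels, and function names here are invented for illustration and are not Kira's method or any specific e-discovery product.

```python
# A toy illustration of technology-assisted review: learn word
# statistics from a few lawyer-labeled documents, then flag
# unreviewed documents whose words look more "relevant" than not.
# All example documents below are invented for illustration.
from collections import Counter
import math

def tokenize(text):
    return text.lower().split()

def train(labeled_docs):
    """labeled_docs: list of (text, is_relevant) pairs."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for text, label in labeled_docs:
        for tok in tokenize(text):
            counts[label][tok] += 1
            totals[label] += 1
    vocab = set(counts[True]) | set(counts[False])
    return counts, totals, vocab

def score_relevant(model, text):
    """Log-odds that a document is relevant (add-one smoothing)."""
    counts, totals, vocab = model
    v = len(vocab)
    log_odds = 0.0
    for tok in tokenize(text):
        p_rel = (counts[True][tok] + 1) / (totals[True] + v)
        p_irr = (counts[False][tok] + 1) / (totals[False] + v)
        log_odds += math.log(p_rel / p_irr)
    return log_odds

# A lawyer labels a seed set of documents...
training = [
    ("merger agreement termination fee negotiation", True),
    ("indemnification clause draft for the acquisition", True),
    ("office holiday party catering menu", False),
    ("parking pass renewal reminder", False),
]
model = train(training)

# ...and the model ranks the unreviewed pile by likely relevance.
docs = [
    "draft termination fee schedule for merger",
    "catering invoice for holiday party",
]
flagged = [d for d in docs if score_relevant(model, d) > 0]
print(flagged)
```

In practice, e-discovery systems use far richer features and models, and keep a human reviewer in the loop to validate the machine's suggestions; the sketch only shows why machines can triage document volumes that outpace manual review.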
Contract analysis, where Kira plays, is another example of a field where there are large amounts of data (M&A deals can involve thousands of contracts). Traditional processes (like manual contract reviews) simply can’t keep up with the volume of data or the need for accurate identification and analysis of contract clauses.
Legal research is another area where machine learning has enhanced the process, making it easier to research legal precedents and to extract meaning and insight from sets of legal documents that manual methods previously struggled to surface.
What are some of the ethical obligations of lawyers practicing in the AI space?
As technology becomes a bigger and bigger part of our lives, its ethical implications get more attention. This extends from the positioning of security cameras, to anonymity online, to the training data that is used to teach machine learning algorithms. Unsurprisingly, it’s also an issue around legal AI.
The technology may be new, but the ethical duties that lawyers using AI face are the same as before wide-scale AI adoption. The book addresses the ways that AI impacts a lawyer’s duty of competency (including knowing the benefits and risks associated with relevant technology); duty of communication; duty to supervise and restrictions on the unauthorized practice of law; duty of loyalty; and more.
Could you discuss how big of an issue AI bias is?
In recent years we’ve seen real impacts from biased AI systems. For example, facial recognition is increasingly used by law enforcement agencies to identify people on scales that were previously unimaginable. When you factor in that current facial recognition systems often have lower accuracy for many minorities, potential problems become immediately apparent. This has led many companies to back away from some of these uses. Similarly, the use of AI to make judgments or decisions that affect individuals, like predicting the risk of recidivism in the criminal system, or determining credit scores, has starkly illustrated how large an impact bias in AI systems can have.
That said, there are also applications that don’t suffer from bias in the same way. For example, in M&A due diligence review, an area where Kira is frequently used, you wouldn’t see the type of bias described in the previous examples. AI models can still be biased in some ways, for instance by being over- or under-inclusive, but the impact of this isn’t as critical.
So the answer is both that bias is a big issue in some applications, and not a big issue in others. It depends on how you use it.
How can attorneys best handle AI bias?
The responsible use of AI requires those applying it to consider bias issues in their application. Attorneys can best approach this by understanding how AI works, and how to spot situations where the application of AI can have unintended negative consequences. That’s pretty general advice, because there are many kinds of AI and it’s put to use in many different contexts.
The main thing to watch out for is situations where incorrect or biased decisions supported by AI could lead to negative outcomes for people, or violate their privacy or personal integrity.
Who should be responsible and/or liable when an AI makes a mistake?
This is a function of contract, tort, and product liability law more than anything else. Typically, user agreements for AI systems limit the vendor’s warranties and liability. There are good reasons for this. First, most responsible vendors know their systems make errors, and wouldn’t pledge otherwise. Second, AI systems often supplement lawyers, and are not really making the final decisions that would lead to liability. Third, vendors would have to charge a lot more to take on the risk (though they could potentially cover themselves by purchasing their own “errors and omissions” insurance policies).
It’s imperative that lawyers work strictly with their client’s best interests in mind and recognize that AI and other tools of technology are just that, tools. They are invaluable assistants, and they are here to stay, but they still don’t make the rules, or completely train themselves – that’s still the responsibility of humans.
Is there anything else that you would like to share about AI for Lawyers?
AI is the latest step in driving the practice of law forward. It’s heavily used in law, and offers real advantages for lawyers who embrace it, and perils for those who don’t. I’m happy to be a part of this change and hope this book can be a guide that pushes the industry even further. Despite the common fears and uncertainty around AI, Noah and I firmly believe that it’s a tool that can and will be used in positive ways for the world for years to come.
You can learn more about AI for Lawyers by visiting our website.