

Bradford Newman, Chair of North America Trade Secrets Practice – Interview Series




Bradford specializes in matters related to trade secrets and Artificial Intelligence. He is the Chair of the AI Subcommittee of the ABA. Recognized by the Daily Journal in 2019 as one of the Top 20 AI attorneys in California, Bradford has been instrumental in proposing federal AI workplace and IP legislation that in 2018 was turned into a United States House of Representatives Discussion Draft bill. He has also developed AI oversight and corporate governance best practices designed to ensure algorithmic fairness.

What was it that initially ignited your interest in artificial intelligence? 

I have represented the world's leading innovators and producers of AI products and technology for many years. My interest has always been to go behind the curtain and understand the legal and technical facets of machine learning, and watch AI evolve. I am fascinated by what is possible for applications across various domains.


You are a fierce advocate for rational regulation of artificial intelligence, specifically regulation that protects public health. Can you discuss what some of your major concerns are?

I believe we are in the early stages of one of the most profound revolutions humankind has experienced. AI has the potential to impact every aspect of our lives, from the minute we wake up in the morning to the moment we go to sleep — and also, while we are sleeping. Many of AI's applications will positively impact the quality of our lives, and likely our longevity as a species.

Right now, from a computer science and machine learning standpoint, humans are still very involved in the process, from coding the algorithms, to understanding the training data sets, to processing the results, recognizing the shortcomings and productizing the technology.

But we are in a race against time on two major fronts. First, what is commonly referred to as the “black box” problem: human involvement in and understanding of AI will decrease over time as AI's sophistication (think ANNs) evolves. And second, the use of AI by governments and private interests will increase.

My concern is that AI will be used, both purposely and unintentionally, in ways that are at odds with Western Democratic ideals of individual liberty and freedom.


How do we address these concerns? 

Society is at the point where we must resolve not what is possible with respect to AI, but what should be prohibited and/or partially constrained.

First, we must specifically identify the decisions that can never be made in whole or in part by the algorithmic output generated by AI.  This means that even in situations where every expert agrees that the data in and out is totally unbiased, transparent and accurate, there must be a statutory prohibition on utilizing it for any type of predictive or substantive decision-making.

Admittedly, this is counter-intuitive in a world where we crave mathematical certainty, but establishing an AI “no fly zone” is essential to preserving the liberties we all hold dear and that serve as the bedrock for our society.

Second, for other identified decisions based on AI analytics that are not outright prohibited, we need legislation that clearly defines those where a human must be involved in the decision-making process.


You’ve been instrumental in proposing federal AI workplace and IP legislation that in 2018 was turned into a United States House of Representatives Discussion Draft bill. Can you discuss some of these proposals?

The AI Data Protection Act is intended to promote innovation and designed to (1) increase transparency in the nature and use of, and to build public trust in, artificial intelligence; (2) address the impact of artificial intelligence on the labor market; and (3) protect public health and safety.

It has several key components.  For example, it prohibits covered companies' sole reliance on artificial intelligence to make certain decisions, including decisions regarding the employment of individuals or the denial or limitation of medical treatment, and prohibits medical insurance issuers from making decisions regarding coverage of a medical treatment based solely on AI analytics.  It also establishes the Artificial Intelligence Board — a new federal agency charged with specific responsibilities for regulating AI as it pertains to public health and safety.  And it requires covered entities to appoint a Chief Artificial Intelligence Officer.


You’ve also developed AI oversight and corporate governance best practices designed to ensure algorithmic fairness. What are some of the current issues that you see with fairness or bias in AI systems? 

This subject has been the focus of intense scrutiny from academics and is now drawing the interest of U.S. government agencies, like the Equal Employment Opportunity Commission (EEOC), and the plaintiff's bar.  Most of the time, the cause is either a flaw in the training data sets or a lack of understanding of, and transparency into, the testing environment. This is compounded by the lack of central ownership and oversight of AI by senior management.

This lack of technical understanding and situational awareness is a significant liability concern.  I have spoken to several prominent plaintiff's attorneys who are on the hunt for AI bias cases.


Deep learning often suffers from the black box problem, whereby we input data into an Artificial Neural Network (ANN), and we then receive an output, with no means of knowing how that output was generated. Do you believe that this is a major problem?  

I do. And as algorithms and neural networks continue to evolve, and humans are increasingly not “in the loop,” there is a real risk of passing the tipping point where we will no longer be able to understand critical elements of function and output.


With COVID-19, countries all over the world have introduced AI-powered state surveillance systems. How concerned are you about the potential for abuse of this type of surveillance?

It is naïve, and frankly, irresponsible from an individual rights and liberty perspective, to ignore or downplay the risk of abuse.  While contact tracing seems prudent in the midst of a global pandemic, and AI-based facial recognition provides an effective measure to do what humans alone would not be capable of accomplishing, society must institute legal prohibitions on misuse along with effective oversight and enforcement mechanisms.  Otherwise, we are surrendering to the state a core element of our individual fundamental rights. Once given away in wholesale fashion, this basic element of our freedom and privacy will not be returned.


You previously stated, “We must establish an AI ‘no-fly zone’ if we want to preserve the liberties that Americans all hold dear and that serve as the bedrock of our society.” Could you elaborate on these concerns?

When discussing AI, we must always focus on AI’s essential purpose: to produce accurate predictive analytics from very large data sets which are then used to classify humans and make decisions.  Next, we must examine who the decision-makers are, what are they deciding, and on what are they basing their decisions.

If we understand that the decision-makers are those with the largest impact on our health, livelihood and freedoms —  employers, landlords, doctors, insurers, law enforcement and every other private, commercial and governmental enterprise that can generate, collect or purchase AI analytics — it becomes easy to see that in a Western liberal democracy, as opposed to a totalitarian regime, there should be decisions which should not, and must not, be left solely to AI.

While many obvious decisions come to mind, like prohibiting the incarceration of someone before a crime is committed, AI's widespread adoption into every aspect of our lives presents much more vexing ethical conundrums.  For example, if an algorithm accurately predicts that workers who post photos on social media of beach vacations where they are drinking alcohol quit or get fired from their jobs an average of 3.5 years earlier than those who post photos of themselves working out, should the former category be denied a promotion or raise based solely on the algorithmic output?  If an algorithm correctly determines that teenagers who play videogames an average of more than 2 hours per day are less likely to graduate from a four-year university, should such students be denied admission?  If Asian women in their 60s admitted to the ICU for COVID-19 related symptoms are determined to have a higher survival rate than African-American men in their 70s, should those women receive preferential medical treatment?

These over-simplified examples are just a few of the numerous decisions where reliance on AI alone contradicts our views of what individual human rights require.


Is there anything else that you would like to share regarding AI or Baker McKenzie? 

I am extremely energized to help lead Baker McKenzie's truly international AI practice amidst the evolving landscape. We recognize that our clients are hungry for guidance on all things AI, from negotiating contracts, to establishing internal oversight, to avoiding claims of bias, to understanding the emerging domestic and international regulatory framework.

Companies want to do the right thing, but there are very few law firms that have the necessary expertise and understanding of AI and machine learning to be able to help them.  For those of us who are both AI junkies and attorneys, this is an exciting time to add real value for our clients.

Thank you for the fantastic answers regarding some of these major societal concerns involving AI. Readers who wish to learn more about Bradford Newman should click here.

A founding partner of unite.AI & a member of the Forbes Technology Council, Antoine is a futurist who is passionate about the future of AI & robotics.

He is also the Founder of, a website that focuses on investing in disruptive technology.