
What the White House’s AI Bill of Rights Means for America & the Rest of the World


The White House Office of Science and Technology Policy (OSTP) recently released a whitepaper called “The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People”. This framework was released one year after OSTP announced the launch of a process to develop “a bill of rights for an AI-powered world.”

The foreword makes clear that the White House understands the imminent threats AI poses to society. It states:

“Among the great challenges posed to democracy today is the use of technology, data, and automated systems in ways that threaten the rights of the American public. Too often, these tools are used to limit our opportunities and prevent our access to critical resources or services. These problems are well documented. In America and around the world, systems supposed to help with patient care have proven unsafe, ineffective, or biased. Algorithms used in hiring and credit decisions have been found to reflect and reproduce existing unwanted inequities or embed new harmful bias and discrimination. Unchecked social media data collection has been used to threaten people’s opportunities, undermine their privacy, or pervasively track their activity—often without their knowledge or consent.”

What this Bill of Rights and the framework it proposes will mean for the future of AI remains to be seen. What we do know is that new developments are emerging at an exponential rate. Instant language translation, once seen as impossible, is now a reality, and at the same time we are witnessing a revolution in natural language understanding (NLU) led by OpenAI and its famous platform GPT-3.

Since then we have seen the near-instant generation of images via diffusion models such as Stable Diffusion, which may soon become mainstream consumer products. In essence, a user can type in any prompt they can imagine, and as if by magic the AI will generate an image that matches it.
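To make this concrete, here is a minimal sketch of text-to-image generation using the open-source diffusers library. The checkpoint name, prompt, and hardware assumption (a CUDA GPU) are illustrative, not something the blueprint prescribes.

```python
# A minimal text-to-image sketch using the open-source diffusers library.
# The checkpoint name and prompt are illustrative; a CUDA GPU is assumed.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # a publicly hosted Stable Diffusion checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Any prompt the user can imagine becomes the query.
image = pipe("an astronaut riding a horse on mars").images[0]
image.save("generated.png")
```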

When factoring in exponential growth and the Law of Accelerating Returns, there will soon come a time when AI has permeated every aspect of daily life. The individuals and companies that recognize this paradigm shift and take advantage of it will profit. Unfortunately, a large segment of society may fall victim to both ill-intentioned and unintended consequences of AI.

The AI Bill of Rights is intended to support the development of policies and practices that protect civil rights and promote democratic values in the building, deployment, and governance of automated systems. How it will compare with China's approach remains to be seen, but it has the potential to shift the AI landscape, and similar frameworks may well be adopted by allies such as Australia, Canada, and the EU.

That being said, the AI Bill of Rights is non-binding and does not constitute U.S. government policy. It does not supersede, modify, or direct an interpretation of any existing statute, regulation, policy, or international instrument. In practice, this means it will be up to enterprises and governments to voluntarily abide by the policies outlined in the whitepaper.

The blueprint identifies five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence. The five principles are outlined below.

1. Safe and Effective Systems

Abusive AI systems, specifically those that rely on deep learning, pose a clear and present danger to society. This principle attempts to address that danger:

“You should be protected from unsafe or ineffective systems. Automated systems should be developed with consultation from diverse communities, stakeholders, and domain experts to identify concerns, risks, and potential impacts of the system. Systems should undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring that demonstrate that they are safe and effective based on their intended use, mitigation of unsafe outcomes including those beyond the intended use, and adherence to domain-specific standards. Outcomes of these protective measures should include the possibility of not deploying the system or removing a system from use. Automated systems should not be designed with an intent or reasonably foreseeable possibility of endangering your safety or the safety of your community. They should be designed to proactively protect you from harms stemming from unintended, yet foreseeable, uses or impacts of automated systems. You should be protected from inappropriate or irrelevant data use in the design, development, and deployment of automated systems, and from the compounded harm of its reuse. Independent evaluation and reporting that confirms that the system is safe and effective, including reporting of steps taken to mitigate potential harms, should be performed and the results made public whenever possible.”
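The “ongoing monitoring” and “removing a system from use” language lends itself to a concrete check. The sketch below, with a hypothetical metric and tolerance, flags a deployed system for review or removal when its live error rate drifts past a threshold agreed on before deployment.

```python
# A minimal post-deployment monitoring sketch: track a safety metric against
# a pre-agreed tolerance, and flag the system for review or removal when it
# drifts. The metric (error rate) and threshold are hypothetical.
from dataclasses import dataclass

@dataclass
class MonitorResult:
    error_rate: float
    within_tolerance: bool

def monitor_outcomes(errors: int, decisions: int,
                     max_error_rate: float = 0.02) -> MonitorResult:
    """Compare the live error rate against the tolerance set before deployment."""
    rate = errors / decisions if decisions else 0.0
    return MonitorResult(error_rate=rate, within_tolerance=rate <= max_error_rate)

result = monitor_outcomes(errors=37, decisions=1_000)
if not result.within_tolerance:
    # The blueprint's remedy: escalate, and consider removing the system from use.
    print(f"Error rate {result.error_rate:.1%} exceeds tolerance; review or remove.")
```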

2. Algorithmic Discrimination Protections

These policies address some of the elephants in the room when it comes to enterprises harming individuals through automated systems.

A common problem with AI-assisted hiring is that the deep learning system is often trained on biased data before reaching its hiring conclusions. This essentially means that discriminatory hiring practices in the past can result in gender or racial discrimination by the hiring algorithm. One study has indicated the difficulty of attempting to de-gender training data.

Another core problem with biased data in government systems is the risk of wrongful incarceration, or, even worse, criminality-prediction algorithms that recommend longer prison sentences for minorities.

“You should not face discrimination by algorithms and systems should be used and designed in an equitable way. Algorithmic discrimination occurs when automated systems contribute to unjustified different treatment or impacts disfavoring people based on their race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law. Depending on the specific circumstances, such algorithmic discrimination may violate legal protections. Designers, developers, and deployers of automated systems should take proactive and continuous measures to protect individuals and communities from algorithmic discrimination and to use and design systems in an equitable way. This protection should include proactive equity assessments as part of the system design, use of representative data and protection against proxies for demographic features, ensuring accessibility for people with disabilities in design and development, pre-deployment and ongoing disparity testing and mitigation, and clear organizational oversight. Independent evaluation and plain language reporting in the form of an algorithmic impact assessment, including disparity testing results and mitigation information, should be performed and made public whenever possible to confirm these protections.”
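The “pre-deployment and ongoing disparity testing” the principle calls for can start as something as simple as comparing selection rates across groups. Below is a minimal sketch using the four-fifths (80%) rule of thumb from US employment guidelines; the group labels and counts are hypothetical.

```python
# A minimal disparity-testing sketch: a group whose selection rate falls below
# 80% of the highest group's rate is flagged for adverse impact, per the
# four-fifths rule of thumb. Group labels and counts are hypothetical.
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (number selected, total applicants)."""
    return {g: selected / total for g, (selected, total) in outcomes.items()}

def adverse_impact(outcomes: dict[str, tuple[int, int]],
                   threshold: float = 0.8) -> dict[str, bool]:
    """Flag any group whose rate is below threshold * the best group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: (rate / best) < threshold for g, rate in rates.items()}

flags = adverse_impact({"group_a": (48, 100), "group_b": (30, 100)})
print(flags)  # group_b's 30% rate is 62.5% of group_a's 48% rate -> flagged
```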

It should be noted that the USA has taken a very transparent approach to AI. These are policies designed to protect the general public, in clear contrast to the approach taken by China.

3. Data Privacy

This data privacy principle is the one most likely to affect the largest segment of the population. The first half of the principle seems to concern itself with the collection of data, specifically data collected over the internet, a known problem especially for social media platforms. This same data can then be used to sell advertisements or, even worse, to manipulate public sentiment and sway elections.

“You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used. You should be protected from violations of privacy through design choices that ensure such protections are included by default, including ensuring that data collection conforms to reasonable expectations, and that only data strictly necessary for the specific context is collected. Designers, developers, and deployers of automated systems should seek your permission and respect your decisions regarding collection, use, access, transfer, and deletion of your data in appropriate ways and to the greatest extent possible; where not possible, alternative privacy by design safeguards should be used. Systems should not employ user experience and design decisions that obfuscate user choice or burden users with defaults that are privacy invasive. Consent should only be used to justify collection of data in cases where it can be appropriately and meaningfully given. Any consent requests should be brief, be understandable in plain language, and give you agency over data collection and the specific context of use; current hard-to-understand notice-and-choice practices for broad uses of data should be changed.”
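The requirement that “only data strictly necessary for the specific context is collected” maps naturally onto an allowlist applied before anything is stored. Here is a minimal sketch of that idea; the contexts and field names are hypothetical.

```python
# A minimal data-minimization sketch: per-context allowlists name the only
# fields each context strictly needs, applied before anything is stored.
# The contexts and field names below are hypothetical.
NECESSARY_FIELDS = {
    "shipping": {"name", "address", "postal_code"},
    "newsletter": {"email"},
}

def minimize(record: dict, context: str) -> dict:
    """Drop every field the stated context does not strictly need."""
    allowed = NECESSARY_FIELDS.get(context, set())
    return {k: v for k, v in record.items() if k in allowed}

raw = {
    "name": "A. User",
    "address": "1 Main St",
    "postal_code": "12345",
    "email": "a@example.com",
    "browsing_history": ["..."],  # a field that should never reach this context
}
print(minimize(raw, "shipping"))  # email and browsing_history are never stored
```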

The second half of the Data Privacy principle seems to be concerned with surveillance from both governments and enterprises.

Currently, enterprises are able to monitor and spy on employees. In some cases this may be done to improve workplace safety; during the COVID-19 pandemic it was done to enforce the wearing of masks; most often it is simply done to monitor how time at work is being used. In many of these cases, employees feel they are being monitored and controlled beyond what is acceptable.

“Enhanced protections and restrictions for data and inferences related to sensitive domains, including health, work, education, criminal justice, and finance, and for data pertaining to youth should put you first. In sensitive domains, your data and related inferences should only be used for necessary functions, and you should be protected by ethical review and use prohibitions. You and your communities should be free from unchecked surveillance; surveillance technologies should be subject to heightened oversight that includes at least pre-deployment assessment of their potential harms and scope limits to protect privacy and civil liberties. Continuous surveillance and monitoring should not be used in education, work, housing, or in other contexts where the use of such surveillance technologies is likely to limit rights, opportunities, or access. Whenever possible you should have access to reporting that confirms your data decisions have been respected and provides an assessment of the potential impact of surveillance technologies on your rights, opportunities, or access.”

It should be noted that AI can also be used for good, to protect people's privacy.

4. Notice and Explanation

This principle should be a call to arms for enterprises to create AI ethics advisory boards and to accelerate the development of explainable AI. Explainable AI is necessary because when an AI model makes a mistake, understanding how the model works makes the problem far easier to diagnose.

Explainable AI will also allow the transparent sharing of information on how data is being used and why the AI made a given decision. Without it, complying with these policies will be impossible due to the black-box problem of deep learning.

Enterprises that focus on improving these systems will also benefit from understanding the nuances and complexities behind why a deep learning algorithm made a specific decision.
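As one illustration of what that understanding can look like in practice, the sketch below uses permutation importance from scikit-learn, a common model-agnostic explanation technique that measures how much a model's score drops when each feature is shuffled. The data here is synthetic; a real audit would need domain-valid features.

```python
# A minimal explainability sketch using scikit-learn's permutation importance:
# shuffling a feature and watching the score drop hints at which inputs drove
# the model's decisions. The dataset is synthetic, for illustration only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")  # larger drop = more influential
```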

“You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you. Designers, developers, and deployers of automated systems should provide generally accessible plain language documentation including clear descriptions of the overall system functioning and the role automation plays, notice that such systems are in use, the individual or organization responsible for the system, and explanations of outcomes that are clear, timely, and accessible. Such notice should be kept up-to-date and people impacted by the system should be notified of significant use case or key functionality changes. You should know how and why an outcome impacting you was determined by an automated system, including when the automated system is not the sole input determining the outcome. Automated systems should provide explanations that are technically valid, meaningful and useful to you and to any operators or others who need to understand the system, and calibrated to the level of risk based on the context. Reporting that includes summary information about these automated systems in plain language and assessments of the clarity and quality of the notice and explanations should be made public whenever possible.”

5. Human Alternatives, Consideration, and Fallback

Unlike most of the above principles, this principle is most applicable to government entities or to private institutions that work on behalf of the government.

Even with an AI ethics board and explainable AI, it is important to fall back on human review when lives are at stake. There is always potential for error, and having a human review a case on request could avoid scenarios such as an AI sending the wrong person to jail.

The judicial and criminal justice systems have the most room to cause irreparable harm to marginalized members of society and should take special note of this principle.

“You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter. You should be able to opt out from automated systems in favor of a human alternative, where appropriate. Appropriateness should be determined based on reasonable expectations in a given context and with a focus on ensuring broad accessibility and protecting the public from especially harmful impacts. In some cases, a human or other alternative may be required by law. You should have access to timely human consideration and remedy by a fallback and escalation process if an automated system fails, it produces an error, or you would like to appeal or contest its impact on you. Human consideration and fallback should be accessible, equitable, effective, maintained, accompanied by appropriate operator training, and should not impose an unreasonable burden on the public. Automated systems with an intended use within sensitive domains, including, but not limited to, criminal justice, employment, education, and health, should additionally be tailored to the purpose, provide meaningful access to oversight, include training for any people interacting with the system, and incorporate human consideration for adverse or high-risk decisions. Reporting that includes a description of these human governance processes and assessment of their timeliness, accessibility, outcomes, and effectiveness should be made public whenever possible.”
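In engineering terms, the “fallback and escalation process” the principle describes can begin as a simple routing rule: low-confidence or contested automated decisions go to a human reviewer instead of being finalized. The sketch below is a hypothetical illustration; the confidence threshold and decision shape are assumptions, not anything the blueprint specifies.

```python
# A minimal human-fallback sketch: route any appealed or low-confidence
# automated decision to a human reviewer rather than finalizing it.
# The threshold and decision fields are hypothetical.
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str
    confidence: float
    appealed: bool = False

def route(decision: Decision, min_confidence: float = 0.9) -> str:
    """Return who finalizes the decision: the automated system or a human."""
    if decision.appealed or decision.confidence < min_confidence:
        return "human_review"  # timely human consideration, per the principle
    return "automated"

print(route(Decision(outcome="deny", confidence=0.62)))     # human_review
print(route(Decision(outcome="approve", confidence=0.97)))  # automated
```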

Summary

The OSTP deserves credit for attempting to introduce a framework that provides the safety protections society needs without introducing draconian policies that could hamper progress in the development of machine learning.

After outlining the principles, the blueprint provides a technical companion that discusses each principle in detail along with the best ways to move forward with implementation.

Savvy business owners and enterprises should study this bill, as implementing these policies as soon as possible can only be advantageous.

Explainable AI will continue to grow in importance, as can be seen in this quote from the bill:

“Across the federal government, agencies are conducting and supporting research on explainable AI systems. NIST is conducting fundamental research on the explainability of AI systems. A multidisciplinary team of researchers aims to develop measurement methods and best practices to support the implementation of core tenets of explainable AI. The Defense Advanced Research Projects Agency has a program on Explainable Artificial Intelligence that aims to create a suite of machine learning techniques that produce more explainable models, while maintaining a high level of learning performance (prediction accuracy), and enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners. The National Science Foundation’s program on Fairness in Artificial Intelligence also includes a specific interest in research foundations for explainable AI.”

What should not be overlooked is that the principles outlined herein may eventually become the new standard.

A founding partner of Unite.AI & a member of the Forbes Technology Council, Antoine is a futurist who is passionate about the future of AI & robotics.

He is also the Founder of Securities.io, a website that focuses on investing in disruptive technology.