
Researchers Challenge Long-Held Machine Learning Assumption

Researchers at Carnegie Mellon University are challenging a long-held machine learning assumption that there is a trade-off between accuracy and fairness in algorithms used to make public policy decisions. 

Machine learning is increasingly used in areas such as criminal justice, hiring, health care delivery and social service interventions. With this growth come heightened concerns that these new applications could worsen existing inequities, with particular harm to racial minorities and people facing economic disadvantage.

Adjusting a System

To guard against bias, practitioners constantly adjust data, labels, model training, scoring systems and other aspects of these systems. The long-standing theoretical assumption, however, has been that the more of these adjustments are made, the less accurate the system becomes.

The team at CMU set out to challenge this theory in a new study published in Nature Machine Intelligence.

Rayid Ghani is a professor in the School of Computer Science’s Machine Learning Department (MLD) and the Heinz College of Information Systems and Public Policy. He was joined by Kit Rodolfa, a research scientist in MLD, and Hemank Lamba, a postdoctoral researcher in SCS.

Testing Real-World Applications

The researchers tested this assumption in real-world applications and found that the trade-off is negligible in practice across many policy domains.

“You actually can get both. You don't have to sacrifice accuracy to build systems that are fair and equitable,” Ghani said. “But it does require you to deliberately design systems to be fair and equitable. Off-the-shelf systems won't work.”

The team focused on situations where in-demand resources are limited and machine learning systems are used to help allocate them; a minimal sketch of this setting follows the list below.

They focused on systems in four areas:

  • prioritizing limited mental health care outreach based on a person’s risk of returning to jail to reduce reincarceration;
  • predicting serious safety violations to better deploy a city’s limited housing inspectors;
  • modeling the risk of students not graduating from high school in time to identify those most in need of additional support;
  • and helping teachers reach crowdfunding goals for classroom needs.
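
To make the setting concrete, here is a minimal, hypothetical sketch of this kind of allocation problem: a model assigns each individual a risk score, and a fixed budget of k interventions goes to the highest-scoring cases. The function and data below are illustrative assumptions, not code from the study.

```python
# A minimal sketch of limited-resource allocation guided by model scores:
# a model scores individuals by predicted risk, and a fixed budget of k
# interventions goes to the highest-scoring cases.
import numpy as np

def allocate_top_k(risk_scores: np.ndarray, k: int) -> np.ndarray:
    """Return the indices of the k highest-risk individuals."""
    # argsort ranks ascending, so the last k indices are the top scorers
    return np.argsort(risk_scores)[-k:]

# Example: eight individuals, budget for three interventions
scores = np.array([0.91, 0.15, 0.62, 0.88, 0.30, 0.77, 0.05, 0.54])
print(sorted(allocate_top_k(scores, k=3).tolist()))  # -> [0, 3, 5]
```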

The researchers found that models optimized for accuracy could effectively predict the outcomes of interest, but those models also produced considerable disparities in who was recommended for intervention.

The key results came when the researchers applied fairness-targeted adjustments to the models’ outputs: disparities based on race, age or income could be removed with no loss of accuracy.
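One way such a post-hoc adjustment can work, sketched here under the assumption of recall-equalizing group quotas (the variable names and allocation rule are illustrative, not the authors' code), is to split the fixed budget across demographic groups in proportion to each group's share of true positives, estimated on labeled validation data, so that each group's recall ends up roughly equal:

```python
# A sketch of a recall-equalizing adjustment applied to model outputs:
# split the fixed budget k across groups in proportion to each group's
# share of true positives, then pick the highest scorers within each group.
import numpy as np

def equalized_recall_selection(scores, groups, labels, k):
    """Select ~k individuals so that each group's recall
    (selected positives / group positives) is roughly equal."""
    scores, groups, labels = map(np.asarray, (scores, groups, labels))
    total_positives = labels.sum()
    selected = []
    for g in np.unique(groups):
        idx = np.where(groups == g)[0]
        # the group's slice of the budget, proportional to its positives
        k_g = int(round(k * labels[idx].sum() / total_positives))
        if k_g > 0:
            # within the group, still prioritize the highest-risk cases
            selected.extend(idx[np.argsort(scores[idx])[-k_g:]].tolist())
    return sorted(selected)

# Tiny worked example: two groups of four, budget of four interventions
scores = np.array([0.9, 0.8, 0.7, 0.6, 0.95, 0.5, 0.4, 0.3])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
labels = np.array([1,   1,   0,   0,   1,   1,   0,   0])  # true outcomes
print(equalized_recall_selection(scores, groups, labels, k=4))  # -> [0, 1, 4, 5]
```

Because each group's quota is rounded to an integer, a production system would need a rule for assigning any leftover slots; the point of the sketch is only that the adjustment changes who is selected without changing how well the underlying model scores anyone.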

“We want the artificial intelligence, computer science and machine learning communities to stop accepting this assumption of a trade-off between accuracy and fairness and to start intentionally designing systems that maximize both,” Rodolfa said. “We hope policymakers will embrace machine learning as a tool in their decision making to help them achieve equitable outcomes.”

Alex McFarland is an AI journalist and writer exploring the latest developments in artificial intelligence. He has collaborated with numerous AI startups and publications worldwide.