Alyssa Simpson Rochwerger, Co-Author of Real World AI – Interview Series
Alyssa Rochwerger is a customer-driven product leader dedicated to building products that solve hard problems for real people. She has held numerous product leadership roles for machine learning organizations. She served as VP of product for Figure Eight (acquired by Appen), VP of AI and data at Appen, and director of product at IBM Watson. She recently left the space to pursue her dream of using technology to improve healthcare. Currently, she serves as director of product at Blue Shield of California, where she is happily surrounded by lots of data, many hard problems, and nothing but opportunities to make a positive impact.
We discuss her new book, Real World AI: A Practical Guide for Responsible Machine Learning.
In the book’s introduction you describe how, as an IBM product manager, you first encountered an issue with an AI system delivering biased output: a picture of a person in a wheelchair was classified by the algorithm as “loser”. How much of a wake-up call was this for you about AI bias?
I wouldn’t call it a wake-up call so much as my first time building a machine learning-based product (I was only a few months into the role), and I didn’t know enough yet about how this technology worked to put in appropriate guards and actively mitigate unwanted bias. It was an eye-opening experience that sharpened my attention on this issue – and made me acutely aware moving forward. Equity, access, and inclusion are topics I’m passionate about – and have been for a long time – I even won an award in college for my advocacy for students with disabilities. This experience at IBM helped me understand from a technical perspective how easy it is for systemic societal bias to be encoded into machine learning-based products if the team isn’t actively mitigating it. I was happy to be working at an institution that cares deeply about equity and put resources into mitigation.
What did you personally learn researching and writing this book?
On a personal note – I had to carve out time for writing this book while switching jobs and caring for a 1-year-old, all while navigating COVID. I learned how to make this a priority, and how to ask for help from my family, which afforded me the time to give book writing my attention.
Professionally – it was wonderful to have so many participants who willingly and graciously shared their stories with us for publication. Machine learning professionals, in my experience, are an incredibly thoughtful and gracious group of people – willing to help others and share mistakes and lessons learned. Unfortunately, many of these lessons-learned stories were not included in the book, or had to be significantly anonymized, because of concerns about going public with behind-the-scenes information that could make a company or individual look bad if taken in the wrong light. While that’s certainly par for the course, personally I feel it’s too bad – I’m a big believer in learning and growing from past experiences and mistakes if they can be helpful to others.
What are some of the most important lessons that you hope people will take from reading this?
I hope people will learn that machine learning is not super scary or hard to understand; that it’s a powerful but at times brittle technology that needs guidance and structure to successfully solve hard problems; and that responsible, ethical use of this technology is critical to maturity and success – focusing on mitigating harmful bias early on is key to business success.
One example of AI gender bias depicted in the book was the Apple Credit Card issuing lower lines of credit to women than to men. This was an example of how omitting gender as an input failed to account for other variables that can serve as a proxy for gender. The example showcased that without the “gender” input it was impossible to determine that the outcome was biased until after the final product was released. What are some types of data inputs that you believe should never be omitted, to avoid bias against women or minorities?
There is no hard and fast rule – every data set, use case, and situation is different. I would encourage practitioners to get into the details and nuance of what problem a machine learning algorithm is being applied to solve – and what harmful bias could be coded into that decision.
The book describes how a primary responsibility when communicating with the AI team is to precisely define the outcomes that are important to the business. In your opinion how often do businesses fail at this task?
I would say in my experience, most of the time, the outcomes are either not defined or only defined at a loose or high level. Getting into the details about the specific outcomes is an easy way to set up the team for success early on.
The book speaks about the importance of realizing that an AI system is not a “Set it and forget it” type of system. Could you briefly discuss this?
This is the classic mistake that most companies make when launching a new ML system into production. Reality changes – time passes, what was true yesterday (the training data) might not be true tomorrow. It depends on your circumstances, but in most cases, it’s important to be able to learn and adjust and make better decisions over time based on more recent information.
Machine learning-based products essentially are decision-makers. To equate this to a human example – it’s like a referee in a high-stakes football game. Many times, if it’s a well-trained referee with experience, the referee makes a good decision and the game goes on – but at times, that referee either makes a bad call, or isn’t sure what call to make, and needs to go back and review the video or consult a few other folks in order to make a decision on a particular play. Similarly – ML products need feedback and training, and at times aren’t confident. They need back-up options to fall back on, as well as new information to learn from, to get better over time. A good referee will learn over time and get better at making judgments.
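The referee analogy above can be sketched in code. This is a minimal, hypothetical illustration (the function name, threshold, and return fields are all illustrative, not from the book): when the model's confidence is high, its call stands; when it is low, the case falls back to human review and is flagged so it can become new training data later.

```python
# Hypothetical sketch of a confidence-threshold fallback, analogous to a
# referee deferring to video review when unsure. All names are illustrative.

def route_prediction(label: str, confidence: float, threshold: float = 0.8) -> dict:
    """Return the model's call when confident; otherwise flag for human review."""
    if confidence >= threshold:
        # High confidence: the model's decision stands, like a routine call.
        return {"decision": label, "source": "model"}
    # Low confidence: fall back to a human reviewer, and keep the model's
    # tentative label so the reviewed case can feed future retraining.
    return {"decision": None, "source": "human_review", "flagged": label}

print(route_prediction("approve", 0.95))  # model decides
print(route_prediction("approve", 0.55))  # routed to a person
```

The exact threshold and fallback path would depend on the stakes of the decision; the point is simply that the system has somewhere to turn when it isn't sure.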
Could you speak to the importance of creating a cross-functional team that can identify what problems are best tackled by using AI?
Machine learning technology is typically well suited to very hard, specific problems that are not solved by other approaches. Any hard problem takes a team to solve. When companies are new to AI, there is often a false narrative that a lone machine learning scientist, or even a machine learning team, can solve the problem by themselves. I have never found that to be true. It takes a team with different backgrounds and approaches to tackle a hard problem – and certainly to deploy machine learning technology successfully to production.
Thank you for the great interview. For readers (and especially business executives) who are interested in learning more, I recommend the book Real World AI: A Practical Guide for Responsible Machine Learning.