Reid Blackman, Ph.D., Founder and CEO of Virtue Consultants – Interview Series

Reid Blackman is the Founder and CEO of Virtue Consultants. In that capacity he has worked with companies to integrate ethics and ethical risk mitigation into company culture and into the development, deployment, and procurement of emerging technology products. He is also a Senior Advisor to Ernst & Young, sits on their Artificial Intelligence Advisory Board, and is a member of IEEE’s Ethically Aligned Design Initiative.

Reid’s work has been profiled in The Wall Street Journal and Dell Perspectives, and he has contributed pieces to Harvard Business Review, TechCrunch, VentureBeat, and Risk & Compliance Magazine. He has been quoted in numerous news articles, and he speaks regularly at venues including The World Economic Forum, SAP, Cannes Lions, Forbes, NYU Stern School of Business, Columbia University, and AIG.

You were a Philosophy Professor at Colgate University from 2009 to 2018. At what point did you begin to incorporate AI ethics into your classes?

I often taught a course on Social and Political Philosophy, where I covered Marx. One of Marx’s central claims is that capitalism will ultimately give way to communism, due to a massive increase in the “means of production.” In other words, capitalism pushes greater and greater efficiency in the name of competition and opening new markets, which means an increase in the creation of technologies that can output more and more in shorter and shorter time. Marx also predicted this would increasingly put money in the hands of the few and push more and more people into poverty, at which point capitalist structures would be overturned by a revolution led by the growing numbers of the destitute masses. All of this leads to a discussion around the ethics of technology obviating the need for human labor, which is a central element of AI ethics.

Small side-story if you’re interested: Back in 2002 I was a graduate student leading a discussion on Marx with undergraduates at Northwestern University. At some point, a student raised his hand and said, “Eventually we won’t need humans to do any work.” The class was confused. I was confused. So I said, “well then who’s going to do the work?” He replied in a very matter of fact kind of way: “robots.” The class erupted in laughter. I stifled my own. But it’s pretty obvious who got the last laugh.

In 2018, you launched Virtue Consultants, an ethics consultancy that empowers Data and AI leaders to identify and mitigate the ethical risks of their products. What inspired you to begin this entrepreneurial journey?

Jealousy. Well, sort of. I started a fireworks wholesaling company when I was a graduate student, I think around 2003 or 2004. That went better than I anticipated, and the company still exists, though now I’m an advisor and no longer take care of day-to-day operations. Anyway, it’s relevant because it explains how I came to be a mentor to startups in Colgate’s entrepreneurship program (called TIA, Thought Into Action, led by two awesome VCs, Andy Greenfield and Wills Hapworth, who run TIA Ventures). As a mentor I saw students embarking on exciting projects as they tried to figure out how to establish and scale their for-profit or non-profit startups, and I thought, “I want that!” But what would my new venture be? It had to speak to my love of philosophy and ethics, and the first thing that made sense was an ethics consultancy. I didn’t see the market for such services at the time, because there wasn’t one to see, and so I waited. And then Cambridge Analytica, and BLM, and #MeToo made national headlines, and suddenly there was a greater awareness of the need.

How important is it for companies to introduce an AI Ethics Statement?

An AI Ethics Statement is not essential, but it’s an extremely useful tool for setting your goals. When you’re introducing an AI ethics program into your organization, you want it to identify, mitigate, and manage various ethical, reputational, regulatory, and legal risks. That’s its main function. An ethics statement helps in articulating what things will look like once you have the infrastructure, processes, and practices in place to achieve that function. Insofar as a strategy needs a goal – which it always does – an AI Ethics Statement is a nice way to articulate those goals, though it isn’t the only way.

How can companies ensure the ethics statement is transferred into process and practice?

An ethics statement is just a tiny step in the right direction. If you want to keep going, the next natural step is to do an assessment of where you are relative to the goals articulated in that statement. Once you know where the largest, riskiest gaps are – that is, where you are most at risk of falling short of your goals – then you can start devising the solutions to narrow those gaps. Maybe it’s an ethics committee. Maybe it’s a due diligence process during product development. Maybe it’s getting better about how you handle data in non-product departments, like marketing and HR. Probably it’s all those things and more.

What are some solutions that companies should implement to avoid bias in the actual AI algorithm?

There are a bunch of technical tools out there for identifying bias, but they’re limited. They’ll allow you to compare your model’s outputs against the dozens of quantitative metrics that have been offered in the academic ML ethics literature, but you have to be careful because those metrics are not mutually compatible. So a substantive, ethical decision needs to be made: which of these metrics, if any, is the appropriate one in this context?
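To make that point concrete, here is a minimal sketch of the kind of comparison such tools automate, using two common metrics from the fairness literature. The data, group labels, and function names are hypothetical, purely for illustration:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between the two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute difference in true-positive rates between the two groups."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

# Toy labels and predictions for eight people in two groups (0 and 1) -- invented data.
y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 0, 1, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print("demographic parity gap:", demographic_parity_gap(y_pred, group))          # 0.25
print("equal opportunity gap: ", equal_opportunity_gap(y_true, y_pred, group))   # 0.0
```

On this toy data, demographic parity reports a disparity while equal opportunity reports none, which is exactly why choosing among the metrics is a substantive ethical decision rather than a purely technical one.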

Aside from using a tool like that, supplemented with a responsible way of answering that question, product teams would do well to think about where bias can creep in before they start building. Could it be contained in or reflected by the training data sets? How about in determining the objective function? What about in setting the threshold? (See the sketch below for that last one.) There are many places bias can creep in, and forethought about where it could enter one’s current project, and how, is essential for identifying and mitigating it.
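As one hedged illustration of the threshold point: the risk scores below are invented, but they show how the same model can produce equal or unequal selection rates across groups depending on nothing more than where the decision cutoff is set:

```python
import numpy as np

# Invented risk scores for eight people in two groups (0 and 1).
scores = np.array([0.62, 0.55, 0.48, 0.41, 0.71, 0.58, 0.52, 0.45])
group  = np.array([0,    0,    0,    0,    1,    1,    1,    1])

for threshold in (0.50, 0.60):
    selected = scores >= threshold
    rate_0 = selected[group == 0].mean()
    rate_1 = selected[group == 1].mean()
    print(f"threshold={threshold:.2f}: group 0 selected {rate_0:.0%}, group 1 selected {rate_1:.0%}")
```

Here the 0.60 cutoff selects both groups at the same rate while 0.50 does not; the model never changed, only the threshold.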

AI companies are notorious for being dominated by white males who may unintentionally program biases into the AI system. To avoid this, what type of traits should human resource departments look for?

I’m all for greater opportunity and greater diversity among engineers and product teams generally. That said, I think this is looking at things through the wrong lens. The primary problem when it comes to biased algorithms is not that some white guy’s biases lead to biased code. It’s that the training data sets are biased. In fact, a recent paper out of Columbia – “Biased Programmers? Or Biased Data? A Field Experiment in Operationalizing AI Ethics” – concluded that “[p]rogrammers who understand technical guidance successfully reduce bias,” and that “[a]lgorithmic predictions by female and minority AI programmers do not exhibit less algorithmic bias or discrimination.” So while HR should pay attention to diversity issues, it is far from clear that strategies for reducing biased AI outputs should primarily – let alone exclusively – focus on hiring decisions in relation to diversity efforts.

Could you discuss what ethical risk due diligence is and why companies should implement it?

An ethical risk due diligence is an attempt to spot the various ethical risks that can be realized with the product you’re creating, including how it is deployed, how it may be used and misused, etc. You want to focus on features of the product – both those it has and those it lacks – and the ways those can lead, when deployed in various contexts, to ethical wrongdoing. When it’s done well, it’s a systematic and exhaustive inspection. Of course, while you can try your very best to look around the corner, there are quite possibly some things you’ll miss, which is why continuous monitoring is important.

As for why companies should implement it: they only need to consider the ethical, reputational, regulatory, and legal risks of not doing so. Think about Optum, in the news and under regulatory investigation for an (allegedly) biased algorithm that recommended healthcare practitioners pay more attention to white patients than to sicker Black patients. Or Goldman Sachs, under investigation for the credit limits on the Apple Card, which allegedly discriminate against women. Or Amazon’s hiring software, which was scrapped due to concerns about bias before it was deployed. Or IBM, sued by Los Angeles for allegedly misappropriating data collected through the Weather Channel app. Or Facebook….

Is there anything else that you would like to share about Virtue Consultants?

Virtue helps senior leaders put AI ethics into practice, whether it’s helping to educate and upskill people on the topic, writing an AI Ethics Statement, creating and implementing an actionable AI ethical risk framework, or simply serving as advisors on AI ethics. If that sounds interesting, people should come say hi.

Thank you for the great interview. Readers who wish to learn more about Reid should visit Reid Blackman, or visit Virtue Consultants.

A founding partner of Unite.AI & a member of the Forbes Technology Council, Antoine is a futurist who is passionate about the future of AI & robotics.

He is also the Founder of Securities.io, a website that focuses on investing in disruptive technology.