
Juliette Powell & Art Kleiner, Authors of The AI Dilemma – Interview Series


The AI Dilemma is written by Juliette Powell & Art Kleiner.

Juliette Powell is an author, a television creator with 9,000 live shows under her belt, a technologist, and a sociologist. She is also a commentator on Bloomberg TV/Business News Networks and a speaker at conferences organized by the Economist and the International Finance Corporation. Her TED talk has 130K views on YouTube. Juliette identifies the patterns and practices of successful business leaders who bank on ethical AI and data to win. She is on the faculty at NYU's ITP, where she teaches four courses, including Design Skills for Responsible Media, a course based on her book.

Art Kleiner is a writer, editor and futurist. His books include The Age of Heretics, Who Really Matters, Privilege and Success, and The Wise Advocate. He was editor of strategy+business, the award-winning magazine published by PwC. Art is also a longstanding faculty member at NYU-ITP and IMA, where his courses include co-teaching Responsible Technology and the Future of Media.

“The AI Dilemma” is a book that focuses on the dangers of AI technology in the wrong hands while still acknowledging the benefits AI offers to society.

Problems arise because the underlying technology is so complex that it becomes impossible for the end user to truly understand the inner workings of a closed-box system.

One of the most significant issues highlighted is how the definition of responsible AI is always shifting, as societal values often do not remain consistent over time.

I quite enjoyed reading “The AI Dilemma”. It's a book that doesn't sensationalize the dangers of AI or delve deeply into the potential pitfalls of Artificial General Intelligence (AGI). Instead, readers learn about the surprising ways our personal data is used without our knowledge, as well as some of the current limitations of AI and reasons for concern.

Below are some questions designed to show our readers what they can expect from this groundbreaking book.

What initially inspired you to write “The AI Dilemma”?

Juliette went to Columbia in part to study the limits and possibilities of regulation of AI. She had heard firsthand from friends working on AI projects about the tension inherent in those projects. She came to the conclusion that there was an AI dilemma, a much bigger problem than self-regulation. She developed the Apex benchmark model — a model of how decisions about AI tended toward low responsibility because of the interactions among companies and groups within companies. That led to her dissertation.

Art had worked with Juliette on a number of writing projects. He read her dissertation and said, “You have a book here.” Juliette invited him to coauthor it. In working on it together, they discovered they had very different perspectives but shared a strong view that this complex, highly risky AI phenomenon would need to be understood better so that people using it could act more responsibly and effectively.

One of the fundamental problems highlighted in The AI Dilemma is that it is currently impossible to tell, simply by studying its source code, whether an AI system is responsible or whether it perpetuates social inequality. How big of a problem is this?

The problem is not primarily with the source code. As Cathy O'Neil points out, when there's a closed-box system, it's not just the code. It's the sociotechnical system — the human and technological forces that shape one another — that needs to be explored. The logic that built and released the AI system involved identifying a purpose, identifying data, setting the priorities, creating models, setting up guidelines and guardrails for machine learning, and deciding when and how a human should intervene. That's the part that needs to be made transparent — at least to observers and auditors. The risk of social inequality, along with other risks, is much greater when these parts of the process are hidden. You can't really reengineer the design logic from the source code.

Can focusing on Explainable AI (XAI) ever address this?

To engineers, explainable AI is currently thought of as a group of technological constraints and practices, aimed at making the models more transparent to people working on them. For someone who is being falsely accused, explainability has a whole different meaning and urgency. They need explainability to be able to push back in their own defense. We all need explainability in the sense of making the business or government decisions underlying the models clear. At least in the United States, there will always be a tension between explainability — humanity's right to know — and an organization's right to compete and innovate. Auditors and regulators need a different level of explainability. We go into this in more detail in The AI Dilemma.

Can you briefly share your views on the importance of holding stakeholders (AI companies) responsible for the code that they release to the world?

So far, for example in the Tempe, AZ self-driving car collision that killed a pedestrian, the operator was held responsible. An individual went to jail. Ultimately, however, it was an organizational failure.

When a bridge collapses, the mechanical engineer is held responsible. That’s because mechanical engineers are trained, continually retrained, and held accountable by their profession. Computer engineers are not.

Should stakeholders, including AI companies, be trained and retrained to make better decisions and take on more responsibility?

The AI Dilemma focused a lot on how companies like Google and Meta can harvest and monetize our personal data. Could you share an example of significant misuse of our data that should be on everyone’s radar?

From The AI Dilemma, page 67ff:

New cases of systematic personal data misuse continue to emerge into public view, many involving covert use of facial recognition. In December 2022, MIT Technology Review published accounts of a longstanding iRobot practice. Roomba household robots record images and videos taken in volunteer beta-testers’ homes, which inevitably means gathering intimate personal and family-related images. These are shared, without testers’ awareness, with groups outside the country. In at least one case, an image of an individual on a toilet was posted on Facebook. Meanwhile, in Iran, authorities have begun using data from facial recognition systems to track and arrest women who are not wearing hijabs.

There’s no need to belabor these stories further. There are so many of them. It is important, however, to identify the cumulative effect of living this way. We lose our sense of having control over our lives when we feel that our private information might be used against us, at any time, without warning.

One dangerous concept that was brought up is how our entire world is designed to be frictionless, with the definition of friction being “any point in the customer's journey with a company where they hit a snag that slows them down or causes dissatisfaction.” How does our expectation of a frictionless experience potentially lead to dangerous AI?

In New Zealand, Pak’n’Save’s savvy meal bot, promoted as a way for customers to use up leftovers and save money, suggested a recipe that would create chlorine gas if followed.

Frictionlessness creates an illusion of control. It’s faster and easier to listen to the app than to look up grandma’s recipe. People follow the path of least resistance and don’t realize where it’s taking them.

Friction, by contrast, is creative. You get involved. This leads to actual control. Actual control requires attention and work, and – in the case of AI – doing an extended cost-benefit analysis.

With the illusion of control, it feels like we live in a world where AI systems are prompting humans, instead of humans remaining fully in control. What are some examples you can give of humans collectively believing they have control when, really, they have none?

San Francisco right now, with robotaxis. The idea of self-driving taxis tends to bring up two conflicting emotions: excitement (“taxis at a much lower cost!”) and fear (“will they hit me?”). Thus, many regulators suggest that the cars get tested with people in them, who can manage the controls. Unfortunately, having humans on the alert, ready to override systems in real time, may not be a good test of public safety. Overconfidence is a frequent dynamic with AI systems. The more autonomous the system, the more human operators tend to trust it and not pay full attention. We get bored watching over these technologies. When an accident is actually about to happen, we don’t expect it and we often don’t react in time.

A lot of research went into this book. Was there anything that surprised you?

One thing that really surprised us was that people around the world could not agree on who should live and who should die in The Moral Machine’s simulation of a self-driving car collision. If we can’t agree on that, then it’s hard to imagine that we could have unified global governance or universal standards for AI systems.

You both describe yourselves as entrepreneurs. How will what you learned and reported on influence your future efforts?

Our AI Advisory practice is oriented toward helping organizations grow responsibly with the technology. Lawyers, engineers, social scientists, and business thinkers are all stakeholders in the future of AI. In our work, we bring all these perspectives together and practice creative friction to find better solutions. We have developed frameworks like the calculus of intentional risk to help navigate these issues.

Thank you for the great answers. Readers who wish to learn more should visit The AI Dilemma.

A founding partner of unite.AI & a member of the Forbes Technology Council, Antoine is a futurist who is passionate about the future of AI & robotics.

He is also the Founder of Securities.io, a website that focuses on investing in disruptive technology.