Michael Schrage is a Research Fellow at the MIT Sloan School of Management’s Initiative on the Digital Economy. A sought-after expert on innovation, metrics, and network effects, he is the author of Who Do You Want Your Customers to Become?, The Innovator’s Hypothesis: How Cheap Experiments Are Worth More than Good Ideas (MIT Press), and other books.
In this interview we discuss his book “Recommendation Engines” which explores the history, technology, business, and social impact of online recommendation engines.
What inspired you to write a book on such a narrow topic as “Recommendation Engines”?
The framing of your question gives the game away… When I looked seriously at the digital technologies and touchpoints that truly influenced people’s lives all over the world, I almost always found a ‘recommendation engine’ driving decisions. Spotify’s recommenders determine the music and songs people hear; TikTok’s recommendation engines define the ‘viral videos’ people put together and share; Netflix’s recommenders have been architected to facilitate ‘binge watching’ and ‘binge watchers;’ Google Maps and Waze recommend the best and/or fastest and/or simplest ways to get there; Tinder and Match.com recommend who you might like to be with or, you know, ‘be’ with; Stitch Fix recommends what you might want to wear that makes you ‘you;’ Amazon will recommend what you really should be buying; Academia.edu and ResearchGate will recommend the most relevant research you should be up to date on… I could go on – and do, in the book – but both technically and conceptually, ‘Recommendation Engines’ are the antithesis of ‘narrow.’ Their point and purpose covers the entire sweep of human desire and decision.
A quote in your book is as follows: “Recommenders aren’t just about what we might buy, they’re about who we might want to become”. How could this be abused by enterprises or bad actors?
There’s no question or doubt that recommendation can be abused. The classic question – Cui bono? ‘Who benefits?’ – applies. Are the recommendations truly intended to benefit the recipient or the entity/enterprise making the recommendation? Just as it’s easy for a colleague, acquaintance or ‘friend’ who knows you to offer up advice that really isn’t in your best interest, it’s a digital snap for ‘data driven’ recommenders to suggest you buy something that increases ‘their’ profit at the expense of ‘your’ utility or satisfaction. On one level, I am very concerned about the potential – and reality – of abuse. On the other, I think most people catch on pretty quickly to when they’re being exploited or manipulated by people or technology. Fool me once, shame on you; fool me twice or thrice, shame on me. Recommendation is one of those special domains where it’s smart to be ethical and ethical to be smart.
Are echo chambers where users are just fed what they want to see regardless of accuracy a societal issue?
Eli Pariser coined the excellent phrase ‘the filter bubble’ to describe this phenomenon and pathology. I largely agree with his perspective. In truth, I think it now fair to say that ‘confirmation bias’ – not sex – is what really drives most adult human behavior. Most people are looking for agreement most of the time. Recommenders have to navigate a careful course between novelty, diversity, relevance and serendipity because – while too much confirmation is boring and redundant – too much novelty and challenge can annoy and offend. So, yes, the quest for confirmation is both a personal and social issue. That said, recommenders offer a relatively unobnoxious way to bring alternative perspectives and options to people’s attention. However, I do, indeed, wonder whether regulation and legal review will increasingly define the recommendation future.
Filter bubbles currently limit exposure to conflicting, contradictory, and/or challenging viewpoints. Should there be some type of regulation that discourages this type of over-filtering?
I prefer light-touch to heavy-handed regulatory oversight. Most platforms I see do a pretty poor job of labelling ‘fake news’ or establishing quality control. I’d like to see more innovative mechanisms explored: swipe left for a contrarian take; embed links that elaborate on stories or videos in ways that deepen understanding or decontextualize the ‘bias’ that’s being confirmed. But let’s be clear: choice architectures that ‘discourage’ or create ‘frictions’ require different data and design sensibilities than those that ‘forbid’ or ‘censor’ or ‘prevent.’ I think this is a very hard problem for people and machines alike. What makes it particularly hard is that human beings – in fact – are less predictable than a lot of psychologists and social scientists believe. There are a lot of competing ‘theories of the mind’ and ‘agency’ these days. The more personalized recommendations and recommenders become, the more challenging and anachronistic ‘one size fits all’ approaches become. It’s one of the many reasons this domain interests me so.
Should end users and society demand explainability as to why specific recommendations are made?
Yes, yes and yes. Not just ‘explainability’ but ‘visibility,’ ‘transparency’ and ‘interpretability,’ too. People should have the right to see and understand the technologies being used to influence them. They should be able to appreciate the algorithms used to nudge and persuade them. Think of this as the algorithmic counterpart to ‘informed consent’ in medicine. Patients have the right to get – and doctors have the duty to provide – the reasons and rationales for choosing ‘this’ course of action over ‘that’ one. Indeed, I argue that ‘informed consent’ – and its future – in medicine and health care offers a good template for the future of ‘informed consent’ for recommendation engines.
Do you believe it is possible to “hack” the human brain using Recommender Engines?
The brain or the mind? Not kidding. Are we materially – electrically and chemically – hacking neurons and lobes? Or are we using less invasive sensory stimuli to evoke predictable behaviors? Bluntly, I believe some brains – and some minds – are hackable some of the time. But do I believe people are destined to become ‘meat puppets’ who dance to recommendation’s tunes? I do not. Look, some people do become addicts. Some people do lose autonomy and self-control. And, yes, some people do want to exploit others. But the preponderance of evidence doesn’t make me worry about the ‘weaponization of recommendation.’ I’m more worried about the abuse of trust.
A research paper by Jason L. Harman et al. states the following: “The trust that humans place on recommendations is key to the success of recommender systems”. Do you believe that social media has betrayed that trust?
I believe in that quote. I believe that trust is, indeed, key. I believe that smart and ethical people truly understand and appreciate the importance of trust. With apologies to Churchill’s comment on courage, trust is the virtue that enables healthy human connection and growth. That said, I’m comfortable arguing that most social media platforms – yes, Twitter and Facebook, I’m looking at you! – aren’t built around or based on trust. They’re based on facilitating and scaling self-expression. The ability to express one’s self at scale has literally nothing to do with creating or building trust. There was nothing to betray. With recommendation, there is.
You state your belief that the future of Recommender Engines will feature the best recommendations to enhance one’s mind. In your opinion are any Recommendation Engines currently working on such a system?
Not yet. I see that as the next trillion-dollar market. I think Amazon and Google and Alibaba and Tencent want to get there. But, who knows, there may be an entrepreneurial innovator who surprises us all: perhaps a Spotify incorporating mindfulness and just-in-time whispered ‘advice’ will be the mind-enhancing breakthrough.
How would you summarize the way Recommendation Engines enable users to better understand themselves?
Recommendations are about good choices… sometimes, even great choices. What are the choices you embrace? What are the choices you ignore? What are the choices you reject? Having the courage to ask – and answer – those questions gives you remarkable insight into who you are and who you might want to become. We are the choices we make; whatever influences those choices has remarkable impact and influence on us.
Is there anything else that you would like to share about your book?
Yes – in the first and final analysis, my book is about the future of advice and the future of who you ‘really’ want to become. It’s about the future of the self – your ’self.’ I think that’s both an exciting and important subject, don’t you?
Thank you for taking the time to share your views.