Three Ways to Build ‘Emotional Infrastructure’ in GenAI Products and Win Consumer Trust

In a recent conversation with a potential investor, I was asked if I knew what the most effective secret to scaling was. For me, this was a no-brainer – it’s emotion. The investor, however, scoffed. Like so many in the tech world, he believed logic was scalable, but emotion was not.

Though we argued for a while, I didn’t change his mind (or win his investment). But I spent over a decade working in sports marketing – you will never convince me that logic is what fills stadiums with hysterical fans. That’s all emotion.

While some might counter that comparing tech to sports is like comparing apples to oranges, I’d say tech could learn a lesson or two from the sports industry. Because when it comes to so many tech products, not taking human emotion into account is working against us. This myopic perspective is what led to the worst aspects of social media in the past, and what is currently fueling the most damaging headlines regarding generative AI today.

If tech entrepreneurs want to win consumer trust in the promise and potential of AI, we need to design products that are built with what I like to call “emotional infrastructure.” Here’s how.

Use Memory to Support, Not Exploit

The vast functionality promised by AI agents will be made possible by improved memory capabilities – but that also means storing more personal data, and historically, user data has largely been leveraged to manipulate users into spending money.

Building AI with emotional infrastructure, however, means leveraging that memory of user data to fill in knowledge gaps the user may not even realize they have.

For example, if a user asks an AI agent which local beaches are dog-friendly, as it stands, most agents will spit back a list of beaches and maybe ask if the user would also be interested in a list of nearby pet-friendly restaurants.

However, an agent designed with emotional infrastructure incorporates an added layer of emotional intelligence. In this case, it would remember that the dog’s vaccine appointment was the previous day and notify the user that taking their dog into water too soon could lead to infection, exhaustion, or unexpected side effects.
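To make that concrete, here’s a minimal sketch of what such a memory layer might look like. It’s illustrative only: the names (UserMemory, answer_with_care), the stored fact, and the three-day rule are assumptions for the sake of the example, not a description of any real product.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta


@dataclass
class UserMemory:
    """Hypothetical store of facts an agent has remembered about a user."""
    facts: dict = field(default_factory=dict)


def answer_with_care(query: str, memory: UserMemory, today: date) -> list[str]:
    """Answer the literal question, then layer on warnings drawn from memory."""
    response = []

    # Baseline behavior: answer exactly what was asked.
    if "dog-friendly beaches" in query:
        response.append("Dog-friendly beaches near you: Sunset Cove, Driftwood Point.")

    # Emotional-infrastructure layer: check remembered context for conflicts
    # with the request before the answer goes out.
    last_vaccine = memory.facts.get("dog_last_vaccine_date")
    if last_vaccine and (today - last_vaccine) < timedelta(days=3):
        response.append(
            "Heads up: your dog was vaccinated very recently. Swimming this soon "
            "can raise the risk of infection or exhaustion, so consider waiting a few days."
        )

    return response


if __name__ == "__main__":
    memory = UserMemory(facts={"dog_last_vaccine_date": date(2024, 6, 1)})
    for line in answer_with_care("Which dog-friendly beaches are nearby?", memory, today=date(2024, 6, 2)):
        print(line)
```

The point of the design is that remembered context is consulted before the answer ships, so the agent can volunteer a caution the user never thought to ask for.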

In this way, AI agents would behave more like trusted consultants who are willing to push back when simply following orders could lead to less-than-optimal results. Doing so is also more likely to build user trust, which in turn encourages greater use.

Build Revenue Through Trust, Not Targeting

Early genAI adopters have already had access to products from the biggest players in the market for years now, free of charge. This may tempt B2C AI startup founders into thinking the only path to profitability is a free product that makes money selling data to third-party vendors or via ad sales.

But defaulting to this business model could endanger already precarious consumer trust if users aren’t treated with a much higher level of respect, care, and responsibility. Take the recent story of Meta AI’s app, in which users appeared to be unintentionally posting highly personal queries onto a public feed.

Not all products have social elements, but this example illustrates the level of personal detail users are willing to share with AI agents. And considering research shows that chatbots are better able to persuade human counterparts when equipped with personal details, advertising within these platforms could be done in ethically questionable ways – like reports of Meta serving ads for beauty products after teen users deleted selfies.

Instead, emotional infrastructure would optimize for care over clicks. This doesn’t have to come at the expense of revenue, either.

AI could make these experiences even more effective by serving only ads that authentically meet real user needs: a raincoat when bad weather is on the way, pet insurance plans within budget for a new kitten, vitamin C supplements at the earliest sign of cold symptoms.

But true care isn’t just about accurate targeting – much like the beach example above, true care fills in the blanks users don’t realize they’re missing. For example, a pet health tracker may alert a user that their particular breed is prone to vitamin B12 deficiencies and therefore recommend a brand of kibble that would better meet their dog’s needs. Uncovering these types of overlooked details is where AI can truly shine and bring real value to end users.
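As a rough sketch of that idea, the filter below keeps only ads that match a need the user actually has right now and that fit their budget. The Ad class, the need flags, and the budget threshold are hypothetical simplifications, not how any real ad platform works.

```python
from dataclasses import dataclass


@dataclass
class Ad:
    product: str
    meets_need: str   # the user need this ad addresses, e.g. "rain_expected"
    cost: float


def ads_that_serve_real_needs(ads: list[Ad], user_needs: dict[str, bool], budget: float) -> list[Ad]:
    """Keep only ads that answer a current, genuine user need and fit the budget."""
    return [
        ad for ad in ads
        if user_needs.get(ad.meets_need, False) and ad.cost <= budget
    ]


if __name__ == "__main__":
    inventory = [
        Ad("Raincoat", "rain_expected", 40.0),
        Ad("Kitten insurance plan", "new_kitten", 12.0),
        Ad("Luxury cat tower", "new_kitten", 250.0),
    ]
    needs = {"rain_expected": True, "new_kitten": True}
    for ad in ads_that_serve_real_needs(inventory, needs, budget=50.0):
        print(ad.product)  # prints the raincoat and the insurance plan, not the cat tower
```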

About half of global consumers are willing to share personal data if it means a better user experience, but that figure drops to just 15% in the US market. AI entrepreneurs have a real opportunity to turn that perception around through emotionally intelligent design.

Design Emotional Guardrails That Put People First

The most crucial aspect of emotional infrastructure is building guardrails that protect users’ best interests. This involves carefully selecting the types of content that can be AI-generated, clearly labeling AI-generated material, and designing clear interventions when users are venturing into emotionally risky territory.

For example, when building a pet care app, my team and I decided that we would not turn to AI for any health- or nutrition-related content. The risk of hallucinations or misinformation was too great, so we made the decision to pay a professional veterinarian to write and curate these articles. Obviously, this was a more expensive choice, but when literal lives are on the line, it’s a worthy investment.

Not only does it lend credibility to our product, it also keeps our users and their pets safe. It’s also what consumers want – 79% agree that companies should disclose their use of AI.

Finally, it is in the best interest of AI companies to design their products to communicate their limitations clearly and point users toward human help when appropriate. There have already been stories of emotionally vulnerable users turning toward self-harm or suicide due to the lack of adequate guardrails in chatbots, and understandably, these have led to lawsuits.
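Pulling these guardrails together, here’s a simplified sketch of how a product might encode them: a policy table for which topics AI may generate at all, a visible label on anything it does produce, and an escalation path to a human when a message looks risky. The topic names, keywords, and canned responses are illustrative assumptions, not the actual design of our app or anyone else’s.

```python
from enum import Enum, auto


class ContentPolicy(Enum):
    AI_ALLOWED_WITH_LABEL = auto()   # AI may generate it, but it must be labeled
    HUMAN_ONLY = auto()              # too risky to delegate to a model


# Which content types the product allows AI to generate (illustrative).
POLICY = {
    "general_tips": ContentPolicy.AI_ALLOWED_WITH_LABEL,
    "health_advice": ContentPolicy.HUMAN_ONLY,
    "nutrition_advice": ContentPolicy.HUMAN_ONLY,
}

# Phrases that route the conversation to a human (illustrative, not exhaustive).
RISK_KEYWORDS = ("hurt myself", "can't go on", "emergency")


def handle_request(topic: str, user_message: str) -> str:
    """Apply the guardrails: escalate risky messages, refuse AI generation
    for human-only topics, and label anything the model does produce."""
    if any(kw in user_message.lower() for kw in RISK_KEYWORDS):
        return "This sounds serious. We're connecting you with a human right now."

    policy = POLICY.get(topic, ContentPolicy.HUMAN_ONLY)
    if policy is ContentPolicy.HUMAN_ONLY:
        return "This topic is written and reviewed by our veterinary team, not generated by AI."

    draft = f"(model-generated draft about {topic})"
    return f"{draft}\n\n[This content was generated with AI.]"


if __name__ == "__main__":
    print(handle_request("health_advice", "What should I feed my puppy?"))
    print(handle_request("general_tips", "Any tips for a first-time dog owner?"))
```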

Though I believe the future of AI in our lives is bright, we have to be honest about where its weaknesses currently lie and not let the pressures of speed and competition take priority over safety. If we do, we’ll lose out on the trust of the consumers we claim to want to serve. Ignoring that responsibility hurts the entire ecosystem.