Thought Leaders
How To Earn Trust for AI Across the Age Spectrum

In insurance, AI-powered automation is already driving measurable business value, transforming key processes, and promising faster, more efficient service.
But it also raises questions of fairness and accountability.
As this powerful tech makes its way into more insurance touchpoints, trust is becoming an increasingly valuable currency – especially across generational lines. Younger users expect AI to power most interactions. Older users are wary of it. The challenge, then, becomes designing AI experiences that meet the needs and expectations of customers across generations.
To do so, insurers deploying AI must look beyond its technical capabilities and prioritize transparency, progressive onboarding, and AI-human collaboration. For the insurance industry, where decisions often occur at sensitive moments in people’s lives and directly impact them in far-reaching ways, building trust in AI-driven decisions isn’t just an option: It will be the DNA of insurance well into the future.
AI Expectations: A Generational Divide
Digitally native generations like Millennials and Gen Z are already highly accustomed to AI-driven experiences in banking, retail, and media. It’s not surprising, then, that younger users are more likely to feel at home with AI applications and services as these tools spread into other sectors.
For instance, insurers often use AI-powered chatbots or virtual assistants to offer quote comparisons or policy recommendations within seconds. Young users who are already primed to prioritize speed and personalization likely won’t bat an eye, even if they don’t fully understand the mechanics at play.
Gen X and Baby Boomers, on the other hand, are often significantly warier of these AI bots, especially when it comes to decisions about money or investments. These older demographics value explainability and reassurance, preferring hybrid models where real people remain accessible as touchpoints – to walk them through coverage decisions or explain why a claim was approved or denied – even if humans aren’t carrying out every operation.
It’s important to remember that comfort with AI varies not just by age, but by perceived stakes, too. When decisions carry high risk or high reward – as with financial loss or insurance coverage – trust in AI’s hidden logic shrinks.
Transparency: The Foundation of Trust
A whopping 80% of AI projects fail due to a “lack of trust” on the part of users. That figure only climbs in an industry like insurance, where trust and confidence have proven to be key elements in most transactions.
To build confidence, companies must proactively explain how AI works and what data it uses. Take Capital One, for instance. The company openly publishes information on how it uses AI and ML for fraud detection, credit risk assessment, and customer experience personalization – how it prompts the models, what data its AI is trained on, and more – and shares its AI governance standards with customers.
Transparency efforts help users feel in control and at ease, even when a human isn’t in the loop. To bridge the trust gap, especially for older customers, insurers should consider providing “why we made this decision” popups, clear opt-in pages for data policies, and easy access to the appeal processes throughout the digital customer journey.
Low-Stakes Use Cases
The most successful AI strategies introduce users to AI’s value in low-risk contexts before scaling to high-impact decisions.
In an insurance context, this could mean utilizing AI to assist new policyholders only with basic coverage queries or helping agents draft routine customer emails – both low-risk use cases that allow employees and customers alike to build confidence in AI without fear of negative consequences. PayPal’s customer-facing AI use cases always begin with features that boost safety without touching money directly, such as using AI to detect suspicious login attempts or recommend password updates.
These smaller interactions help new users, especially those from older generations, acclimate to AI, while simultaneously reinforcing the brand’s tech-forward identity for younger users. Over time, these comfort-building strategies will allow companies to expand AI into higher-stakes workflows like credit, claims, or lending.
The Human Touch Still Matters
AI has proven to enhance human productivity, but allowing it to take over completely is a sure way to erode trust. Particularly in insurance and financial services, human empathy, judgment, and contextual understanding remain irreplaceable.
Consider Morgan Stanley’s recently introduced AI co-pilot for financial advisors. The system helps them analyze client portfolios faster, but advisors remain in full control of the client relationship. For older users, knowing a human is involved often provides reassurance, while younger customers may see it as a signal of credibility and accountability for moments when AI reaches its limits.
A BCG global AI trust survey found that across all age demographics, consumers prefer a “human failsafe” model, where AI makes suggestions but final decisions rest with a person. Even as AI improves, the winning formula will be hybridized – where AI accounts for speed and scale, humans for nuance and trust.
Trust the Processing
The next phase of AI adoption in insurance will be shaped as much by trust in the technology as by its capabilities.
But that trust won’t come overnight.
The companies that succeed will be those that create AI-powered experiences that are fast and explainable, automated and personal – building bridges across demographics through transparency, empathy, and thoughtful design. Because in a world where AI is making more decisions than ever, trust is the most important product you can deliver.