Thought Leaders
Stop Asking What AI Can Do. Start Asking What Your Users Actually Need.

Most companies building AI products start by asking, "What can AI do?" That's the wrong question.
The technology is widely accessible now. Anyone can plug in an API, train a model, or add generative AI to an existing product. The roadblock isn’t access to the technology. It’s understanding your users deeply enough to know what problems you’re actually solving.
The lessons below, learned while building a platform, apply whether you're building for creators, healthcare workers, enterprise sales teams, or any users whose trust you need to earn.
Start with the people, not the technology
When you ask users what frustrates them most, you rarely hear complaints about the tools themselves; the issues are usually more fundamental. Customer service teams are drowning in tickets they can't answer fast enough. Sales teams need personalized outreach at scale but have limited staff. Creators need to be discovered, but algorithms favor established audiences.
The pattern is the same across industries: no one wants AI to replace them. They want AI to handle the repetitive work so they can focus on what actually matters.
Take creators looking to build a following, for example. Fifty-four percent cite "making sure my content gets found" as their top challenge, and it takes an average of 6.5 months to earn their first dollar. The AI tools that exist produce generic content that doesn't reflect individual voices or aesthetics. What creators need isn't more content generation; it's AI built around how they actually work, one that knows their voice and style well enough to handle the mundane tasks and leave them the ones that matter most.
If you start with those insights rather than with the technology itself, the product looks different. Customers and users are looking for AI that solves the problems they face, not just the ones that are easiest to automate.
None of that happens if you start with what the technology can do and work backward. The best question isn’t what AI can do; it’s what your users need that doesn’t exist yet.
Transparency isn’t a feature, it’s infrastructure
When you’re building AI for any business where trust is at the forefront, the same fear comes up: “If users find out they were interacting with AI and we didn’t tell them, we lose credibility.”
This isn’t creator paranoia. It’s what consumers expect. Nearly 75 percent of consumers want to know if they’re communicating with an AI agent. The stakes are even higher in industries where the entire business model depends on trust — financial services, healthcare, legal, or any platform built on personal relationships.
The instinct for many companies is to hide AI interactions, make them seamless, and avoid drawing attention to them. The assumption is that transparency will reduce engagement or make the experience feel less premium.
The opposite is true. When transparency is built into the foundation rather than added as an afterthought, it actually increases comfort and trust. Creators use AI more freely when there’s no risk of a “gotcha” moment, and fans appreciate knowing what’s happening.
The challenge is that you can only be transparent if you control how the AI works. Third-party tools don’t show you what’s happening under the hood. You can’t explain how they work or what data they’re trained on. If you can’t explain it, you can’t be truly transparent about it.
If trust matters to your business, transparency has to be built into the infrastructure – it’s not something you can add on later.
When to build versus when to buy
The default is to use what’s already out there, because it’s faster and cheaper. That works fine when AI is a nice extra feature, but it doesn’t work when AI is a focal point of what you’re building.
There are three questions worth asking.
- Do you need per-user customization? If every user needs AI that behaves differently based on their individual style, voice, or preferences, off-the-shelf tools won’t cut it.
- Can you explain how your AI actually works? With third-party tools, you can’t tell users what’s happening behind the scenes or what data they’re trained on.
- Do you control the data’s safety and privacy? If you’re handling sensitive content or user information, you can’t outsource that responsibility.
If you answer yes to all three, you probably need to build.
The 42 percent of companies that scrapped their AI initiatives in 2025, up from 17 percent in 2024, learned the hard way that off-the-shelf tools often can’t deliver on specific needs. Speed isn’t worth much if the product doesn’t work.
This won’t be the right call for everyone. But if AI is central to what you’re building and your users need to trust you, buying gives you speed. Building gives you control.
What matters most
After building AI tools in a space where trust is everything, a few principles have become clear.
- Start with the people using it, not the technology powering it. Spend real time understanding their problems before you build anything.
- Design transparency from day one. You can’t add it later. If trust matters to your business, make it part of the architecture.
If AI is central to what you’re doing and you need customization, privacy, and the ability to explain how it works, build. Don’t settle for off-the-shelf tools that can’t deliver what your users actually need.
When you build AI for people, the technology is never the hardest part – understanding your users is.