Thought Leaders
Responsible by Design – Why AI Must Be Human-First

Artificial intelligence is changing the digital experiences that define our everyday lives. From personalized product recommendations and predictive healthcare to autonomous driving and even amusement park lines, AI is making those experiences cheaper, more efficient and even enjoyable. At least, that’s the promise.
But all too often, we encounter headlines about so-called “AI fails” — moments when users feel misled, frustrated, or simply not understood. And when trust breaks down, so too does AI’s potential to be truly meaningful and effective.
Public trust is eroding. Why? Because most systems aren’t designed with human experience in mind—they’re designed for what’s fast, scalable, and profitable. That should be a major cause for concern for companies of all sizes that are doubling down on their AI investments.
To build trust, companies need to pause and ask deeper questions: Why are we building this system? Should it even be built in the first place? In short, are we designing AI to serve human needs – or forcing humans to adapt to machine logic?
The Trust Gap with AI
Too often, AI is developed in isolated technical environments, where success is measured by accuracy or speed, and not social impact or usability. Ethical thinking, a core component of trust, isn’t automatically baked into the AI development pipeline. This disconnect results in systems that might be innovative in theory, but fall short in practice.
Take Air Canada’s chatbot, which confidently misinformed a customer about its bereavement fare policy, only for the company to argue that it wasn’t responsible for what the chatbot said. Or Meta’s AI chatbot, which offered factually incorrect statements in search results. These examples reflect more than technical glitches; they expose a systemic failure to design AI applications with empathy, safety guardrails and real-world context.
The public has taken notice. According to the Pew Research Center, 59% of Americans and 55% of AI experts have little to no confidence in U.S. companies to develop AI responsibly. That’s a trust gap we can’t ignore.
Human-Centered Design Is Not a Luxury
Design isn’t window dressing for AI. It’s foundational to how it behaves and how it’s perceived by end users. Human-centered design begins with understanding the people we’re designing for: their goals, frustrations, values, and lived realities.
During the design process, teams must ask specific questions to ensure the technology serves human needs, not the other way around:
- Who are we designing for?
- What are their goals, values, and challenges?
- How do they interact with systems emotionally and functionally?
- Is the product trustworthy, and does it do what it promises?
- Is it promoting inclusivity and accessibility for all different kinds of people?
These aren’t abstract questions. They directly shape how AI performs in the wild. And, in high-stakes contexts like healthcare, security, or education, they can determine whether a system is inclusive and fair or confusing and harmful.
What a Better Design Looks Like
Designers bridge human needs with machine capabilities through the process of prototyping, testing, and iterating, which ensures products make sense to the people using them. That includes questioning how AI communicates, what decisions it automates, and how much control it offers to the user.
Take amusement parks as an example. AI is being deployed this summer to reduce wait times, personalize experiences, and manage crowd flow. It’s a promising use case. But success isn’t just about throughput. A well-designed system will prioritize human experience, not just efficiency. That means transparent messaging, intuitive interfaces, clear opt-ins, and fallback options for users with unique needs (like families without smartphones or guests with accessibility needs).
The opportunity here isn’t just to optimize logistics and operations in favor of the bottom line; it’s to elevate joy, reduce friction, and create shared experiences that feel magical, not mechanical. That’s an opportunity for design.
Testing the Human Element
In human-centered AI design, testing is critical. Teams should bring potential end users into the process early and often to identify blind spots. When users can’t understand or trust a system, the system has failed, no matter how impressive its backend may be.
Testing also ensures accessibility, which is often overlooked in AI-driven experiences. A chatbot might be technically functional, but if it doesn’t serve neurodiverse users or non-native speakers, it’s not truly functional. Inclusive design doesn’t just benefit users at the margins; it strengthens products for everyone.
Responsibility Starts with Design
Policies can help set boundaries and prevent the worst outcomes. So frameworks like the EU AI Act and the Blueprint for an AI Bill of Rights are crucial steps. But compliance is just the floor. Design is how we reach the ceiling.
Companies must go beyond checklists to build AI systems that support dignity, agency, and oversight. That means resisting the urge to fully automate human judgment and instead designing tools that augment it. Responsible AI doesn’t erase human control; on the contrary, it amplifies it.
This work isn’t only technical. It requires multidisciplinary teams of designers, engineers, policy experts and ethicists all working together from day one. It means designing with people, not just for them.
The Role of Designers in Shaping AI
Designers play a unique role in shaping AI’s trajectory. They are translators, fluent in both the structured logic of machines and the complexity of human life. Designers are trained to recognize friction points, emotional cues, and social implications that data alone can’t capture.
Just as the design questions above keep technology grounded in human needs, designers must advocate for questions that don’t fit neatly into training data, like:
- How does this make someone feel?
- Is the user in control?
- What happens when things go wrong?
Too often, design is brought in at the end of the AI pipeline to “make it look good.” But the true power of design is strategic. It should shape how problems are defined in the first place, not just how interfaces look.
Human-First, Not Machine-First
AI isn’t just a technical challenge; it’s a design challenge. And to meet it, we must center human values from the start.
Human-centered AI isn’t a luxury; it’s a necessity. It creates systems that are trustworthy, reliable, unbiased, auditable, and compliant. But more than that, it builds products that benefit the end user and elevate human productivity and potential.
We have the tools and we have the responsibility to design a future where technology serves people, not the other way around. That future starts with design.
