
Ryan Peterson, Chief Product Officer at Concentrix – Interview Series


Ryan Peterson serves as the Executive Vice President and Chief Product Officer at Concentrix, driving global product and technology strategy. A transformative leader with deep expertise in product development, data, and AI, Ryan fuels innovation and growth. Previously, he held executive roles at Amazon Web Services (AWS), leading Amazon Connect (CXE) and AWS Storage, where he expanded both businesses and gained expertise in go-to-market strategies, sales, and solution architecture. With over two decades in data management, Ryan brings a sharp perspective on AI deployment. He holds eight patents in data storage and security.

Concentrix is a publicly traded global provider of customer experience solutions and technology, offering services across design, build, and run phases to help organizations modernize operations and enhance performance. It operates in more than 70 countries across six continents, and since spinning off in 2020 it has expanded through major acquisitions such as Convergys, Webhelp, and BlinkCX, strengthening its scale and capabilities.

Let’s start personal—your journey spans startups, Amazon, and now Concentrix. What’s a lesson about customer experience or product development that’s stuck with you across all of these roles?

It’s simple: Start with the customer first and work backwards from there. That holds true whether you're at a three-person startup or scaling systems globally for a Fortune 500 company. It’s easy for teams to get caught up in building what they think is exciting and interesting, but if it’s not solving a real problem for real people, it’s a waste of time. The most impactful solutions I’ve seen always come from first understanding what the customer is actually trying to accomplish and then building toward that.

A recent Concentrix Consumer Survey shows only 31% of U.S. consumers are familiar with the concept of Agentic AI. Why do you think that awareness is still so low despite the industry buzz?

Because we’ve made it more complicated than it needs to be. Ask five people what “agentic AI” means, and you’ll probably get five different answers. The term hasn’t settled into a shared definition because the industry hasn’t done a great job at explaining it in everyday terms. That’s the core issue. When I describe it as AI acting on behalf of another entity without requiring a prompt, people usually get it. Until that becomes the standard explanation, most consumers are going to stay confused, and it will continue to be more of a buzzword than a mainstream idea.

One of the most striking stats is that 76% of people still find brands more trustworthy when a human, not AI, helps them. How do you interpret that in the context of building AI-native customer experiences?

That stat isn’t anti-AI, it’s pro-trust. Let’s face it, people generally don’t trust AI, and they certainly don’t want it when they need help. Just think about the last time you called into a customer support line and were first connected to an AI bot. Most people hit zero or try to bypass it to get to a human agent. AI, in its current state, doesn’t always solve the problem; it often creates friction. If AI adds friction to the customer journey, it’s not enhancing the experience; it’s getting in the way. What customers really want is a smooth experience that helps them get their problem solved. When AI works in harmony with human judgment, that’s when trust is earned. The future isn’t about AI-only, it’s about AI that’s backed by real human intelligence, making the experience seamless and effective.

Were there any surprising or counterintuitive insights in the data that made you rethink how Agentic AI should be rolled out—either in terms of design, messaging, or use cases?

The biggest surprise wasn’t in the numbers, but in what they exposed. A lot of companies are rushing to launch AI and entirely skipping over the work needed to prep their knowledge bases. It’s like trying to build a self-driving car without a map to guide it. You can’t expect an AI to give reliable answers if the underlying content is outdated or incomplete. But beyond that, we’ve seen that optimizing AI purely for efficiency – like full automation or containment – can backfire. Removing humans entirely can lead to missed signals, higher churn, or lost revenue opportunities that only surface in real conversations. The real challenge isn’t just getting AI to perform, it’s designing systems where humans and AI can work in tandem to drive the right outcomes.

Getting the deployment foundation right isn’t always glamorous. It’s like how a human agent has to train before taking calls. It may not be the most exciting part, but it’s a critical step for the agent to do their job well. The same goes for AI. Even the best agentic strategy won’t work if the data and preparation behind it aren’t solid.

Trust is clearly not one-size-fits-all. With consumers far more skeptical of AI in finance and healthcare than in tech or travel, how should companies in high-stakes sectors approach AI integration?

In high-stakes sectors like finance and healthcare, trust isn’t optional, it’s foundational. While consumers may welcome AI for things like speeding up document processing or streamlining internal workflows, they’re far more skeptical when it touches anything personal, clinical, or high-impact. That means companies need to be extremely intentional about which use cases they automate and how. AI can absolutely drive value in these sectors – in finance, rapid fraud detection, credit risk assessment, automated collections, and personalized insights for wealth management – but only when governed tightly, with clear boundaries and human oversight. The moment AI starts making decisions that directly impact people’s money, health, or safety, the bar for trust rises dramatically. In those cases, customers expect AI to support, not replace, human judgment.

The survey shows people want transparency, override options, and strong privacy guarantees. How are you embedding these trust anchors into Concentrix’s intelligent experience platform?

We have 40+ years of experience supporting the world’s best brands, and in that time we’ve learned what customers want and expect. We’re helping our clients overcome that lack of consumer trust by giving them the tools to control exactly how their AI agents behave – from adversarial LLMs that act like watchdogs, calling out anything sketchy before it hits the customer, to built-in policy engines that let security teams define what’s allowed or blocked. But managing consumer trust doesn’t stop at the launch of AI. Business needs are always evolving and answers change, so we’ve introduced Agentic Engineering as a purposeful way to design, deploy, and continuously manage AI. That includes bringing humans back into the loop to ensure AI is aligned, auditable, and always grounded in good judgment. We’ve also launched our new Agentic Readiness offering, which helps companies understand where they have opportunities to deploy agentic AI responsibly. We guide them through key considerations, such as what to avoid in AI deployments and how to scale safely, with the goal of bringing the most benefit to customers and to their business. AI needs structure to be trustworthy, and we’ve made that part of our DNA, which in turn helps deliver the kind of positive experiences that drive brand loyalty.
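The policy-engine idea described above can be pictured with a minimal sketch. This is hypothetical illustrative code, not Concentrix’s actual platform or API: a security team defines which topics an AI agent may answer, which must escalate to a human, and which are blocked outright, and every draft response passes through that gate before it reaches the customer.

```python
# Hypothetical sketch of a "policy engine" guardrail: security teams define
# rules, and every candidate agent response is checked before delivery.
from dataclasses import dataclass, field

@dataclass
class PolicyEngine:
    # Topics the AI agent must never handle on its own.
    blocked_topics: set = field(default_factory=set)
    # Topics that must be routed to a human agent.
    escalation_topics: set = field(default_factory=set)

    def review(self, topic: str, draft_reply: str) -> tuple:
        """Return (action, message): 'block', 'escalate', or 'send'."""
        if topic in self.blocked_topics:
            return ("block", "This request can't be handled automatically.")
        if topic in self.escalation_topics:
            return ("escalate", "Routing you to a human agent.")
        return ("send", draft_reply)

# Example rules a security team might configure (names are invented).
engine = PolicyEngine(
    blocked_topics={"medical_advice"},
    escalation_topics={"refund_dispute"},
)

print(engine.review("refund_dispute", "Your refund is approved."))
# → ('escalate', 'Routing you to a human agent.')
```

The design choice worth noting is that the gate sits between the model and the customer, so policy changes take effect without retraining the model, and escalation paths keep a human in the loop exactly as described in the answer above.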

Only 8% of consumers are comfortable with fully autonomous AI support, yet the industry is charging ahead with automation. How do you reconcile that gap between what’s possible and what’s acceptable?

The problem is that a lot of AI is being deployed in the places where customers notice it most, and not in a good way. Companies forget that people hate feeling tricked. We all appreciate AI when it quietly makes things easier, like smart recommendations from Netflix or YouTube, because they work and feel invisible. But when you’re locked out of your Airbnb in the pouring rain, that’s not a moment for AI – it’s a moment for a human who can solve the problem fast and with empathy. Trust isn’t lost because AI exists; it’s lost when AI creates friction and frustration, and shows up in the wrong places, creating a negative brand experience and ultimately driving customers away.

There’s a growing call for regulation, but many companies don’t want to wait. What role should enterprise leaders play in setting their own ethical standards while regulatory frameworks catch up?

If you’re deploying AI and haven’t created guardrails, you’re already behind – that’s negligence, not innovation. Picture a scenario where an airline rolls out an AI-powered chat assistant to help with bookings and changes, but when customers start asking about compensation for delayed flights, the system gives out the wrong refund information. Not because the AI was broken, but because no one set clear policy boundaries or escalation rules, and no one took the time to prepare the agentic knowledge base properly. That kind of mistake creates confusion for customers, damages brand trust, and may open the door to legal exposure. The companies that will lead in this space are the ones who treat ethics and governance as launch requirements, not afterthoughts.

Looking ahead, how do you see Agentic AI transforming enterprise operations internally—beyond just customer-facing use cases?

The internal side is where we’re seeing the fastest ROI. AI is already helping employees with all kinds of day-to-day tasks, from writing emails and summarizing meetings to pulling insights out of noisy systems, which frees them up to focus on higher-impact work – and nobody’s pushing back because it actually helps. In our own internal AI-powered application, iX Hello, we’ve seen up to an 80% reduction in time to complete work, with as much as 20% more output across business functions. This low-risk space is where companies should be experimenting first. If you can’t get AI to work well for your own teams, you’ve got no business putting it in front of your customers. Use it in-house first, then scale it with confidence. In the long run, this will only help to ensure the efficiency and productivity of a company’s employees, while maintaining consistency across its brand.

If we checked back in 12 months, what would success look like for Concentrix’s AI initiatives—not just in adoption, but in earning and sustaining consumer trust?

Success for us is about helping our clients transform their operations from simply launching bots to running AI as an integral, first-class part of their business. But it's not just about improving their operations, it's about making consumers' lives easier. The real measure of success will be seeing a marked improvement in consumer satisfaction. When AI is used in the right way, it should enhance the experience for consumers, making their interactions smoother and more efficient. We’ve already invested heavily in building AI technology that creates better experiences throughout the customer journey, and we've deployed over 7,000 bots at Concentrix. Now, we’re scaling the operational model, not just the tech. When AI is fully integrated, accountable, and managed within businesses, it leads to better outcomes for consumers, which in turn builds lasting trust.

Thank you for the great interview. Readers who wish to learn more should visit Concentrix, or they can access the referenced Concentrix Consumer Survey here.

Antoine is a visionary leader and founding partner of Unite.AI, driven by an unwavering passion for shaping and promoting the future of AI and robotics. A serial entrepreneur, he believes that AI will be as disruptive to society as electricity, and is often caught raving about the potential of disruptive technologies and AGI.

As a futurist, he is dedicated to exploring how these innovations will shape our world. In addition, he is the founder of Securities.io, a platform focused on investing in cutting-edge technologies that are redefining the future and reshaping entire sectors.