Matt Walz, CEO at Trialbee – Interview Series

Matt Walz is the CEO of Trialbee, a global leader in technology-driven patient recruitment. He brings more than 20 years of software and leadership experience in the life sciences industry. Matt started his career as a developer and held various technical and leadership roles at Rollins Corporation, PSCI, Microsoft, Morgan Lewis, and Datalabs. In 2006, Matt co-founded NextDocs, which grew to be a global leader in clinical, quality, and regulatory document management, where he served as CTO, CSO, and Board Director for 9 years. Before joining Trialbee, Matt spent 5 years as General Manager for Life Sciences and VP Strategic Accounts for Aurea Software, which had acquired NextDocs.
Trialbee is a healthcare-technology company that streamlines patient recruitment for clinical trials. By leveraging data analytics, digital outreach, and real-world evidence, it matches, engages, and pre-qualifies patients to accelerate enrollment. Its platform offers transparency across sources and partners, helping sponsors, CROs, and trial sites manage the recruitment pipeline more efficiently while reducing burden on sites.
You’ve worked across both health tech startups and large-scale clinical research platforms. What personal experiences or moments in your career led you to recognize the potential—and pitfalls—of AI in patient recruitment?
AI has been the fastest-moving technology trend I’ve seen in over two decades working in clinical development – faster even than the early days of cloud adoption. What’s been most striking to me is how AI has moved from conceptual to operational in an industry that is typically slow to adopt new technology – and is being prioritized even at regulatory authorities such as the FDA. For clinical trial patient recruitment specifically, we’re still in the early phases of learning where it fits best. Vendors and sponsors alike are exploring AI for protocol development, personas and targeting, data enrichment, localizations, and communications and engagement – all of which are major friction points for research teams.
With that said, there’s still some risk that comes alongside that potential. I’ve spoken to leaders at major pharmaceutical companies who reinforce that while AI is showing up in more points of the workflow, it can’t run unchecked. Human oversight is foundational.
This is partly for quality and security reasons, but also because, at our core, companies like Trialbee connect with patients and families searching for hope – a deeply human and empathetic experience that AI cannot meaningfully replace for the patients we all serve.
Clinical trial recruitment has historically faced issues of diversity, speed, and accuracy. In your view, how is AI helping to address these challenges—and where does it still fall short?
AI is helping streamline some of the slowest and most resource-intensive parts of the recruitment process. For example, things that used to take weeks – like translating study materials across dozens of languages – are now being compressed into hours. That means we can start recruiting faster across more global markets.
When it comes to accuracy, AI-powered agents are starting to help us deliver more consistent, criteria-aligned interactions, from the materials we create to pre-screening to chatbots. These tools are especially useful for reducing drop-off points which slow down the recruitment process.
Diversity remains a challenge, though. AI is only as representative as the data it’s trained on, and representation is also shaped by factors outside the technology – including country-by-country regulatory restrictions that limit how AI can be used in patient-facing roles. Building trust with trial participants has been a challenge throughout the entire history of clinical research, and engagement with AI tools is met with varying degrees of skepticism. With that in mind, we strongly support an approach that gives people the option to interact with a live medical professional or, for example, an AI agent. This can help reach participants with varying comfort levels around AI while ensuring strong oversight – especially for agentic AI, where safeguards like separate reasoning engines must be incorporated to make it successful.
You mentioned before that AI tools are being deployed faster than any prior innovation in patient recruitment. But with global regulators struggling to keep pace, what are the most urgent oversight gaps you see in multinational clinical trial campaigns?
The biggest gap is the lack of regulatory alignment across geographies. In the U.S., agencies like the FDA are embracing AI with new frameworks and early review processes. In contrast, Europe is understandably treading more carefully and applying more stringent regulatory reviews.
For companies like ours that operate globally, this creates a challenge: What’s acceptable in one country may not be in another. And the variance isn’t just in regulations, but also in how different channels or social media platforms like Facebook can be used for recruitment, how personal data is handled, or how patient consent is collected. These are nuances that require operational agility and a deep understanding of regional ethics and compliance standards.
This is where our history of innovation and inherent global culture are major assets as we navigate the exciting yet highly dynamic AI landscape.
How can that lack of global alignment in regulatory frameworks derail AI adoption in clinical trials? Have you witnessed any real-world consequences of this?
Absolutely. The digital advertising strategies we rely on for patient recruitment are a good example here. Facebook is one of the most effective platforms globally, but even within the countries where it’s permitted, the level of targeting you’re allowed to do, and what data you can use, varies widely. We’re building internal expertise to overcome those differences, and we’re expecting AI regulation to follow a similar path.
In practical terms, the limitations that this dilemma imposes on recruitment teams can result in delayed campaign launches, additional cycles with ethics committees, and more complex compliance workflows. If you’re not deeply aware of how each country interprets AI use, especially in patient-facing applications, you risk slowing down trials or running into serious approval barriers.
Trialbee operates at the intersection of data, technology, and patient engagement. How do you ensure that AI-driven recruitment strategies don’t reduce patients to data points, but instead enhance the human side of research?
An excellent and important question for all of us. As I mentioned earlier, I view AI as a tool to empower humans – not replace them. This is especially true in the very personal industry we work in, where we are trying to help generations of patients live healthier lives around the world. Our business is a warm one, about connecting people, and human beings will always be at the heart of it.
When it comes to day-to-day operations, the best AI we can provide – for example, within our Honey Platform™ – would be to analyze data and trends, and prompt sites and study teams where action may be needed. We do much of this already, and will continue adding capabilities to ensure the valuable data being captured is put to immediate use that makes a difference in the trial. This could mean providing daily insights on recruitment progress or prompting follow-up with specific patients with predictive modeling.
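Trialbee hasn’t published how its Honey Platform prompts study teams, but the general idea – monitoring recruitment trends and flagging sites that may need follow-up – can be sketched with a simple rule. All field names and thresholds below are hypothetical, illustrative stand-ins, not the platform’s actual logic:

```python
# Hypothetical sketch of a rule that prompts study teams when a site's
# recruitment is lagging. Thresholds and field names are illustrative.

def sites_needing_follow_up(sites, target_rate):
    """Flag sites whose weekly enrollment rate falls below the target."""
    flagged = []
    for site in sites:
        rate = site["enrolled"] / max(site["weeks_active"], 1)
        if rate < target_rate:
            flagged.append((site["name"], round(rate, 2)))
    return flagged

sites = [
    {"name": "Site A", "enrolled": 12, "weeks_active": 4},
    {"name": "Site B", "enrolled": 2, "weeks_active": 5},
]
# with target_rate=1.0, Site B (0.4 enrollments/week) would be flagged
```

In practice a predictive model would replace the fixed threshold, but the output is the same kind of daily, actionable prompt described above.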
Internally, we’re using AI throughout our organization in a systematic and collaborative way. A couple of good examples here could be translation of recruitment materials and AI-driven suppression of potential PII data – these will always be overseen by an experienced human. So you’ll hopefully see how we’re using AI to make our amazing team stronger, and not the other way around.
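The interview doesn’t describe how this PII suppression works internally, but the core pattern – automatically redacting likely identifiers while surfacing every suppression for human review – can be illustrated with a minimal sketch. The patterns here are simplistic placeholders; a real system would use far more robust detection:

```python
import re

# Illustrative patterns only – a production system would use much more
# robust detection (NER models, locale-aware phone formats, and so on).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def suppress_pii(text):
    """Replace likely PII with placeholders and report what was found,
    so an experienced human can review every suppression."""
    findings = []
    for label, pattern in PII_PATTERNS.items():
        for match in pattern.findall(text):
            findings.append((label, match))
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text, findings

redacted, findings = suppress_pii(
    "Contact Jane at jane@example.com or +1 555 867 5309."
)
# redacted: "Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED]."
```

Returning the findings list alongside the redacted text is what makes the human-oversight step concrete: nothing is suppressed silently.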
What specific skill sets are most critical for clinical research teams to responsibly guide and govern AI tools today?
The most critical skill sets sit at the intersection of clinical expertise, AI literacy, and regulatory fluency. Teams need to understand how to engage with AI platforms effectively by prompting them with precision and reviewing their outputs critically.
There’s also a growing need for regulatory insight. Like I mentioned before, this is especially needed for areas like agentic AI, where we’re building separate reasoning engines to serve as guardrails in patient interactions. Teams must also be able to evaluate AI-translated content and verify its accuracy and cultural relevance before materials are submitted to ethics committees.
AI adoption is accelerating. What advice would you give to clinical trial stakeholders who are hesitant or overwhelmed by the complexity of integrating AI into their workflows?
Someone once said, when you begin to work with AI, make sure you use Actual Intelligence. Machine learning can enable amazing things – provided it has the expertise, context, and guardrails of domain experts behind it.
My advice is to start small and stay grounded in what you can deliver today. One of the biggest mistakes I see is companies leaning too far into vague promises about AI transformation without articulating how it actually works or when it will be ready. While those promises may sound great in the moment, they can erode confidence because they don’t show evidence of a true plan.
The better path is to break adoption into small, defined steps with clear outcomes. Choose one or two high‑impact areas where AI can remove friction and make sure those are backed by the right oversight. Be specific about the tools you’re using, how they’re set up, and most importantly, how you’re protecting sensitive information. This is the approach we take at Trialbee. We only talk with stakeholders about capabilities we’re actively building, typically no more than three months out, because we want to ensure we’re communicating what’s real.
At Trialbee, we are currently asking a different department or team each week to present use cases that have worked for them. We discuss the how as well as the why to share learnings, challenges, and solutions so others may replicate their AI successes to improve efficiency, customer delivery, or recruitment outcomes.
We also emphasize transparency about the tools we’re using to build those capabilities. If we’re using OpenAI’s ChatGPT or Anthropic’s Claude, for example, we describe the setup to stakeholders, including how we isolate sensitive information and apply human oversight. Once they see the gains in action, like time savings in translation workflows or increased speed in early patient screening, they’re more likely to get on board with the next AI use case. And so, it’s less about selling the big vision and more about proving value step by step.
The FDA and other regulators are starting to ask tougher questions about AI models used in drug development. What kinds of transparency, validation, or auditability standards do you believe should become industry norms?
The industry needs to move toward full transparency and must ensure there is human oversight in every AI-assisted decision. To cite a few examples:
When we talk about agentic AI, we’re already working on ways to embed regulatory logic into a separate reasoning engine that can evaluate and correct conversations in real time. That kind of internal control system should become standard in any patient-facing application. Validation protocols also need to be formalized, including benchmark testing and ongoing performance evaluations.
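Trialbee hasn’t disclosed its design, but the concept of a separate reasoning engine that reviews each patient-facing draft before it is sent can be sketched roughly as follows. Every rule, phrase, and function name here is a hypothetical stand-in for real regulatory logic:

```python
# Hypothetical sketch: a conversational agent's draft replies pass through
# a separate guardrail engine before reaching the patient. The rules below
# are illustrative stand-ins for actual regulatory logic.

PROHIBITED_CLAIMS = ("guaranteed to work", "cure", "no side effects")

def guardrail_review(draft_reply):
    """Return (approved, reasons). A failed check would block the reply
    and route the conversation to a human reviewer."""
    reasons = []
    lowered = draft_reply.lower()
    for phrase in PROHIBITED_CLAIMS:
        if phrase in lowered:
            reasons.append(f"prohibited claim: '{phrase}'")
    return (len(reasons) == 0, reasons)

ok, why = guardrail_review("This investigational drug is guaranteed to work.")
# blocked: ok is False, why lists the prohibited claim
```

The key design point is separation: the guardrail engine is not the same model generating the reply, so a generation failure cannot also be an oversight failure.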
Most importantly, these standards should be integrated into the product development process and not bolted on afterward. That level of rigor will be essential for maintaining patient safety, earning regulatory trust, and scaling AI responsibly across global clinical research campaigns.
AI models often rely on historical datasets that may reflect systemic healthcare biases. How do you approach ensuring fairness and diversity in patient recruitment, especially for underrepresented populations?
The absence of AI isn’t what has held diversity in clinical research back – the absence of a prioritized plan is, and AI can’t fix that on its own. Once there’s a real commitment, AI can be a powerful tool that absolutely helps us reach underrepresented groups more effectively, but only if we’re intentional. That’s why at Trialbee we broaden the data our models use, build community partnerships, and constantly monitor recruitment outcomes to make sure no group is being left behind.
You mentioned your team is rolling out new AI-related products later this year. Can you offer a high-level preview of the problems you’re solving—and how these innovations reflect your broader philosophy around responsible AI use?
Trialbee has a culture of innovation, and AI is a major and growing component of it. This year alone, our Honey Platform rolled out new site workflows, a sponsor-specific patient registry, and use cases supporting trial finder websites for global biopharma brands such as BMSClinicalTrials.com. With AI specifically, you’ll see new features and enhancements rolling out over the next 3, 6, and 12 months and beyond. We’re developing chatbots, smart tools, and more inside Honey while also evaluating new ways of streamlining processes for our customers. Internally, we’re using AI to become more targeted, more intentional, more inclusive, and more efficient in everything that we do – with an experienced team member driving every decision and interpreting context for all of the AI models we use.
Looking five years ahead, how do you envision Trialbee’s role evolving as AI becomes more deeply embedded in clinical research? What part do you see your company playing in shaping a more ethical, efficient, and globally harmonized future for patient recruitment?
Five years from now, I see Trialbee standing as the leading AI-enabled service provider for patient recruitment in clinical research. We’re already integrating generative AI into every part of the recruitment workflow where it can accelerate speed, improve accuracy, or increase patient optionality. As I mentioned, we’re actively evaluating tools that would give patients a choice between engaging with a live medical professional or an AI agent, depending on their preference and comfort level. We believe giving people that choice is key to increasing trust and participation over time.
Ethically, we’re committed to ensuring AI is implemented with regulatory rigor and transparency. That means embedding oversight mechanisms into the technology itself and being open about how our systems work. We’re also building AI into the culture of our organization – every department and every team – so that we’re ready to adapt as technology evolves. Ultimately, we want to be a company that helps define how AI is used responsibly throughout clinical research. If we do that right, we can help shape a future where trials are faster, more inclusive, and easier for patients to access regardless of where they live or what language they speak.
Thank you for the great interview. Readers who wish to learn more should visit Trialbee.