Matthew Crowson, MD, Director of AI/GenAI Product Management at Wolters Kluwer Health – Interview Series

Dr. Matt Crowson is a healthcare technology leader and practicing surgeon focused on applying AI to clinical practice. He is Director of AI/Generative AI Product at Wolters Kluwer Health, where he leads initiatives to improve evidence synthesis and real-world data analysis. Previously, he led Deloitte’s healthcare provider AI practice, developing generative AI solutions to enhance documentation, revenue cycles, and research. He also serves as an assistant professor at Harvard Medical School and has authored over 90 peer-reviewed publications.
Wolters Kluwer is a global provider of professional information, software, and services, supporting clients in healthcare, tax and accounting, legal and regulatory, financial compliance, and ESG. Headquartered in the Netherlands, the company leverages deep industry expertise and advanced technology to deliver tools that streamline workflows, ensure compliance, and support critical decision-making. Its operations span more than 180 countries, with offerings organized across divisions such as Health, Tax & Accounting, Legal & Regulatory, Financial & Corporate Compliance, and Corporate Performance & ESG.
Let's start with a personal one—how do you balance your dual roles as a practicing surgeon and an AI product leader? Has your clinical work shaped your view of what AI should or shouldn't be in healthcare?
Honestly? It starts with ruthless time-boxing and an industrial-strength coffee machine. Clinic mornings keep my patient care skills honest, while the rest of the day is spent turning that frontline pain into product specs. The two roles fuel each other: seeing a resident click through ten screens to order Tylenol is all the market research I need.
Artificial intelligence (AI) projects flame out when no one in the room has felt that pain. Our Future Ready Healthcare Survey shows 80% of leaders say “optimize workflows” is a top priority. Still, only 63% think they’re prepared to do it with generative AI (GenAI). This is the classic strategy-execution gap that domain experts can close by asking the right clinical “why” before writing a single line of code.
My clinical lens also keeps the mission practical. Frontline staff told us their top imperatives are fixing staffing shortages (82%), squeezing out admin overhead (77%), and crushing burnout (76%). If an algorithm doesn’t move one of those needles, it’s just theater. Clinicians tune out fast.
That lens also makes me cautious about where AI shouldn’t roam. In fact, 57% of professionals worry that over-reliance on GenAI could erode clinical judgment, yet only 18% say their organizations have published guardrails. Until governance catches up, the mandate is clear: automate paperwork, not thinking.
So, for me, the balance isn’t really coffee versus calendar. It is about keeping one foot in the clinic – so I never forget who AI is supposed to serve – and one foot in product, so that knowledge ships. Do that well, and the caffeine is just a nice bonus.
The Future Ready Healthcare Survey Report from Wolters Kluwer highlights a strong gap between GenAI enthusiasm and execution. Were you surprised by any of the results? What stood out most to you personally?
I wasn’t surprised in the least. I’ve yet to meet an anti-automation clinician. What slows rollout isn’t fear of some “Skynet in scrubs” scenario but rather the daily grind of healthcare operations. The survey crystallizes that reality. Eight out of 10 leaders rank workflow optimization as a top priority, yet barely six in 10 say they’re ready to let GenAI tackle it. That delta is exactly what I see: liability landmines, data that looks more like a junk drawer than a data lake, and financial incentives that still reward volume over efficiency. There are other blockers, too, including a training vacuum, shadow-IT fatigue, and regulatory fog.
What struck me most was how mundane those obstacles are. Staffing shortages, administrative drag, and burnout dominate the worry list, but only 18% of organizations have formal GenAI policies. If you don’t know who signs off on a model or how its outputs get audited, enthusiasm dies in the compliance office. Additionally, 68% of respondents say labor costs are their most significant financial pressure, and it’s no wonder executives want proof of return on investment (ROI) before signing another software invoice. The headline isn’t “AI panic”; it’s “Great idea—show me the workflow and the business case.”
Over half of healthcare professionals surveyed worry that GenAI could erode clinical decision-making skills. Do you think that fear is valid—or does it reflect deeper concerns about trust and transparency in AI systems?
Some of the anxiety is real, but it has less to do with sci-fi fears of a HAL-9000-style rogue AI and more to do with plain-old accountability. When a tool offers differential diagnoses in seconds, you need crystal-clear provenance: Where did the recommendation come from, who signs off, and how does it get audited? Today, only a small minority of organizations have formal GenAI governance, so clinicians default to caution. That shows up in our data as 57% saying “over-reliance could erode judgment.” To me, this is a signal that they don’t want a black box intruding on their license to practice.
I see the issue through a historical lens. When spreadsheets hit finance departments, some accountants worried their analytical muscles would atrophy. Instead, spreadsheet software became the new baseline, raising the floor for accuracy. Healthcare is overdue for a similar leap. We lose far too many patients to variation in care; medical error remains a leading cause of injury and death. GenAI’s superpower can be narrowing those error bars by surfacing guidelines, highlighting contraindications, and flagging outliers faster than any human can sift through the chart. But it must stay an assistant, not an autonomous decision-maker, especially in the next three-to-five-year window.
So yes, the fear is valid, but it’s solvable. Transparent datasets, audit trails, and human-in-the-loop checkpoints turn “AI erosion” into “AI augmentation.” Give clinicians traceable recommendations and clear lines of accountability, and that 57% will melt away. It’s not about replacing expertise; it’s about augmenting it with better tools.
Only 18% of respondents say they’re aware of clear GenAI policies at their organizations. What are the potential risks of deploying GenAI tools without such governance in place?
Think of it as launching a new medication without a dosing label. Healthcare data are highly sensitive, and GenAI models become smarter only when they absorb that protected health information (PHI)-rich context. Without strict data stewardship policies to govern who can upload information, how that data is logged, and where it resides, an organization is just one clipboard snapshot away from a privacy breach that could make headlines.
Liability is the next landmine. When an algorithm hallucinates a contraindicated dose, who eats the malpractice claim? The vendor, the hospital, or the clinician who clicked “accept”? Right now, that answer is fuzzy because fewer than one in five organizations have codified “rules of the road” for GenAI. In a vacuum, lawyers often default to the deepest pockets, and that uncertainty alone can stall innovation.
Governance also guards against subtler risks like model drift and silent bias. An oncology bot trained on last quarter’s guidelines may quietly fall out of date, nudging care off evidence-based rails. Policies that mandate version control, outcome monitoring, and sunset triggers keep algorithms from aging into safety hazards.
Finally, trust is on the line. Clinicians worry that over-reliance on GenAI could blunt their clinical judgment; rolling out opaque tools only confirms those fears. Clear governance, with transparency on data lineage, validation protocols, and human-in-the-loop checkpoints, turns “black box” anxiety into confidence that AI is an assistive teammate, not a rogue resident.
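To make “version control, outcome monitoring, and sunset triggers” concrete, here is a minimal sketch in Python of the kind of lifecycle check such a policy might mandate. Every name, field, and threshold below is an illustrative assumption, not a description of any specific product or hospital policy.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical governance record for a deployed clinical model. Field names,
# thresholds, and the guideline snapshot are illustrative assumptions.

@dataclass
class ModelRecord:
    name: str
    version: str
    guideline_snapshot: str       # which guideline release the model was validated against
    last_validated: date          # most recent outcome-monitoring review
    sunset_after_days: int = 180  # sunset trigger: block use when validation goes stale

def check_deployment(record: ModelRecord, today: date) -> list[str]:
    """Return governance violations that should block or flag this model."""
    issues = []
    age_days = (today - record.last_validated).days
    if age_days > record.sunset_after_days:
        issues.append(
            f"{record.name} v{record.version}: last validated {age_days} days ago; "
            "sunset trigger fired, route to re-validation"
        )
    return issues

oncology_bot = ModelRecord(
    name="onc-recommender",  # hypothetical model
    version="2.3",
    guideline_snapshot="2024-Q4 oncology guidelines",
    last_validated=date(2024, 11, 1),
)
for issue in check_deployment(oncology_bot, date.today()):
    print("GOVERNANCE:", issue)  # in production this feeds the audit trail, not stdout
```

The point of a check like this is that the “oncology bot trained on last quarter’s guidelines” fails loudly instead of quietly aging into a safety hazard.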
Based on your work with Wolters Kluwer and in the OR, what is the most realistic near-term use case for GenAI in healthcare?
Forget robot surgeons. Over the next three years, the killer GenAI opportunity is administrative annihilation. Two lanes are already proving themselves:
- Front-of-house note-taking. Ambient listening tools now draft the progress note while a physician is talking to their patient, then drop it straight into the electronic health record (EHR). Our survey shows 41% of respondents put this on their GenAI wishlist, and the technology is already live in early-adopter health systems. Several studies have shown that ambient dictation systems can cut cognitive load by 51% and after-hours “pajama time” by more than 60%. That’s hard ROI you can feel quickly.
- Back-office revenue protection. The next dominoes are the prior-authorization packets, denial-appeal letters, and other revenue-cycle sludge. For reference, 67% of leaders say prior authorization alone is choking productivity, and 62% call out EHR admin drag. Large language models that read the chart and auto-populate these forms are already shaving days off claims and freeing staff for higher-value work (see the sketch after this list).
Why these two? They hit the trifecta of low clinical risk, high workforce relief, and clear dollars-and-cents justification. In a market where 68% of executives list staffing costs as the top financial pressure, tools that give hours back without changing the care plan are the easiest “yes.” Autonomous diagnosis will come later; now, GenAI earns its keep by making the clipboard disappear.
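As a rough illustration of that back-office pattern, the sketch below shows an LLM drafting a prior-authorization justification from structured chart fields, with a mandatory human-review gate. The function and field names are hypothetical, and `call_llm` is a stand-in for whatever approved, PHI-compliant model endpoint an organization actually uses.

```python
# Hypothetical pipeline: the LLM drafts the packet, a human signs off.
# Every field name here is an assumption made for illustration.

def call_llm(prompt: str) -> str:
    # Placeholder response so the sketch runs end to end; swap in your
    # organization's approved, PHI-compliant endpoint.
    return "[draft prior-authorization justification would appear here]"

def draft_prior_auth(chart: dict) -> dict:
    prompt = (
        "Draft a prior-authorization justification using only the facts below.\n"
        f"Diagnosis: {chart['diagnosis']}\n"
        f"Requested service: {chart['requested_service']}\n"
        f"Failed conservative therapies: {', '.join(chart['prior_therapies'])}\n"
        "Cite each fact you use; do not invent clinical details."
    )
    return {
        "draft": call_llm(prompt),
        "status": "PENDING_HUMAN_REVIEW",       # a clinician or revenue-cycle specialist signs off
        "source_fields": sorted(chart.keys()),  # provenance for the audit trail
    }

packet = draft_prior_auth({
    "diagnosis": "chronic rhinosinusitis",  # illustrative example only
    "requested_service": "CT sinus without contrast",
    "prior_therapies": ["saline irrigation", "intranasal steroids"],
})
print(packet["status"])  # PENDING_HUMAN_REVIEW: nothing is submitted automatically
```

The design choice worth noting is the status flag: the model drafts, but a human submits, which keeps this squarely in the low-clinical-risk lane described above.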
The survey notes that data isn’t the top risk cited by respondents—which is surprising given how often data privacy dominates headlines. What risks do clinicians and administrators see as more pressing?
I was also surprised. The headlines would have us believe HIPAA breaches keep every hospital CIO awake at night. Yet our data shows that only 56% of professionals cite privacy as a top GenAI risk, while an even larger slice (57%!) worries about “dumbing down” clinical judgment. That tells me the frontline fear isn’t hackers, it’s accountability.
Here’s what clinicians and administrators are sweating:
- Liability roulette. If the algorithm nudges care off course, who signs the malpractice check? Lack of clear regulations and standards ranks alongside transparency gaps at 55%, signaling real unease about the legal blast radius.
- Regulatory whiplash. Seventy-six percent of leaders already feel whipsawed by shifting Medicare and Medicaid rules; layering opaque GenAI on top of that is a hard sell until guardrails solidify.
- Model drift and bias. Fifty-five percent flag bias from under-trained models as a critical risk, a reminder that stale data can be as dangerous as missing data.
In short, most organizations assume their firewalls are decent; what they lack is a clear chain of accountability when a large language model (LLM) output ends up in a care plan. Until governance frameworks spell out ownership, audit trails, and update cadences, GenAI rollouts will keep stalling, regardless of how tight the security stack is.
Do you believe that GenAI tools will ultimately enhance or dilute clinician autonomy? How do we design systems that support decision-making without overstepping it?
GenAI is poised to expand, not shrink, clinical autonomy. Right now, much of that autonomy is hindered by inbox triage, prior-authorization paperwork, and EHR gymnastics. No surprise, then, that frontline staff rank “optimizing workflows” as their number one use case for GenAI (a top priority for 80%), even though only 63% feel technically ready to execute. Pharmacists and allied-health professionals are already betting on the upside: 41% and 47%, respectively, expect GenAI to carve out enough administrative fat to reduce support staffing needs. Freeing clinicians from data entry means more face time with patients. That’s the autonomy everyone wants.
Still, the survey reminds us that autonomy cuts both ways as we’ve touched on earlier: 57% of respondents worry that overreliance on GenAI could dull clinical judgment. The antidote is thoughtful design, not throttle-back. Systems must show their work with provenance flags, citations, and confidence scores, so humans stay the final arbiters. Version control and post-deployment monitoring catch silent model drift before it poisons care pathways, while “always-visible override” buttons make it clear the algorithm is an assistant, not the attending.
Governance is the last mile. Only 18% of professionals say their organization has a published GenAI policy. Without a transparent chain of accountability, even the best user experience will stall in legal limbo. Robust policies need to spell out data stewardship, audit trails, and role delineation, and they need to be socialized across physicians, nurses, and the physician assistant who pushes the button. When we pair those guardrails with workflow-native design, GenAI stops feeling like a threat to autonomy and starts acting like the co-pilot clinicians have been begging for.
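One way to picture a system that “shows its work” is a recommendation payload that carries its provenance and cannot act until a human accepts it. The sketch below is a design illustration with hypothetical fields, not a real product schema.

```python
from dataclasses import dataclass

# Hypothetical recommendation payload with provenance flags, citations, and a
# confidence score, per the design principles above. Illustrative only.

@dataclass
class Recommendation:
    text: str
    citations: list[str]            # sources surfaced to the clinician, never hidden
    confidence: float               # model-reported score, always displayed
    model_version: str              # ties the output back to the registry and audit trail
    accepted_by: str | None = None  # stays None until a human signs off

    def accept(self, clinician_id: str) -> None:
        # Accepting (or overriding) is always an explicit, logged human action.
        self.accepted_by = clinician_id

rec = Recommendation(
    text="Consider dose adjustment for reduced renal function",
    citations=["renal dosing guideline, section 3.2"],  # illustrative citation
    confidence=0.82,
    model_version="cds-helper 1.4",
)
assert rec.accepted_by is None  # nothing executes until a human accepts
rec.accept("dr_smith")
```

The `accepted_by` gate is the “always-visible override” in data-structure form: the algorithm proposes, the attending disposes.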
What’s holding back adoption most—technology limitations, regulatory uncertainty, workflow friction, or something deeper like cultural resistance?
It’s an execution deficit wrapped in legacy incentives. Most health-system leaders can articulate a slick GenAI vision, but their operating muscle hasn’t caught up. Our survey shows the disconnect in one line: 80% of respondents rank “optimize workflows” as a top priority, yet only 63% believe they’re ready to do it. Vision is cheap; integration engineers, change-management playbooks, and graphics processing unit (GPU) budgets are not.
Governance is the next sinkhole. Only 18% of professionals are even aware of a published GenAI policy at their hospital. Without clear data use, validation, and liability rules, every promising pilot risks becoming a compliance grenade. That legal fog is amplified by macro uncertainty. In fact, 75% of leaders worry that fast-shifting state and federal regulations will upend whatever solution they roll out.
Then comes trench-level friction: nearly half of executives cite dirty data and EHR integration nightmares as primary barriers, and just 42% say they have a process to bolt GenAI tools into existing workflows. If the model can’t see the chart or adds clicks, clinicians will abandon it before lunch.
Finally, there’s “pilot purgatory.” Numerous external studies peg the success rate of AI pilots graduating to enterprise scale at roughly one in ten. Boards celebrate the demo, issue a press release, and move on because nobody funds the unglamorous plumbing work that follows. GenAI will remain a PowerPoint promise until hospitals staff up with product owners who’ve shipped software before.
In short, tech and culture aren’t separate blockers. They’re fused. Solve for accountable leadership, real integration budgets, explicit guardrails, and the appetite for GenAI will match its hype.
You’ve built AI systems focused on pragmatic, evidence-based outcomes. What advice would you give to healthcare leaders trying to navigate hype and identify truly valuable AI investments?
Start with a diagnosis, not a demo. Before you let a shiny hammer hunt for nails, quantify the nail: Is operating room utilization down 8% for two straight quarters? Are denial appeals languishing and bleeding revenue? Is nursing unit three spending two hours a shift on EHR “toggle time” (time spent switching between screens and tasks)? Once the pain is explicit, the right tool tends to introduce itself. As Sir William Osler reminded the medical community generations ago, “Listen to the patient; [they] will tell you the diagnosis.”
With the problem pinned, interrogate the business case like a CFO. Demand hard numbers: baseline metrics, projected deltas, and payback windows that survive a boardroom sniff test. Remember that only about one in ten AI pilots graduate to enterprise scale; if the vendor can’t show a live customer who moved the key performance indicator (KPI) you care about, keep walking.
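As a back-of-the-envelope example of the payback math a board will expect, here is a quick sketch; every figure in it is an illustrative assumption, not survey data or a vendor benchmark.

```python
# Back-of-the-envelope payback math of the kind a boardroom sniff test demands.
# Every figure below is an illustrative assumption.

annual_license_cost = 250_000        # vendor quote
hours_saved_per_clinician_week = 3   # measured in your pilot, not the brochure
clinicians = 120
loaded_hourly_cost = 95              # salary plus benefits, per hour

annual_savings = hours_saved_per_clinician_week * 52 * clinicians * loaded_hourly_cost
payback_months = 12 * annual_license_cost / annual_savings

print(f"Annual savings: ${annual_savings:,.0f}")       # $1,778,400
print(f"Payback period: {payback_months:.1f} months")  # about 1.7 months
```

If a vendor can’t survive arithmetic this simple with your baseline numbers plugged in, that tells you everything you need to know.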
Next, decide on buy, build, or partner. Buying can accelerate time-to-value, but watch for vaporware dressed in buzzwords. Building gives you control, but only if you have a tiger team led by a profit-and-loss owner who has shipped production machine learning before. Hybrid partnerships often strike the balance: your data, their model, shared upside, shared risk.
Finally, prioritize small, cross-functional teams with clear accountability. Think of a two-pizza squad including the CMO, CIO, head of data engineering, and a frontline champion, rather than large steering committees. Align their incentives to multi-year outcome goals rather than short-term metrics, and give them a dedicated infrastructure budget—GPUs, data engineering, machine learning operations (MLOps)—so the project progresses beyond the pilot stage.
Finally, looking ahead: what would a responsible, fully integrated GenAI system look like in a hospital setting five years from now? What are the milestones we need to hit to get there?
Imagine walking into a clinic where the physician never swivels to the keyboard. The conversation flows, and a discreet ambient-listening agent captures the dialogue, drafts a note, cues guideline-based orders, and generates the prior-authorization packet before the doctor’s hand is on the doorknob. Early pilots are already proving the concept, and 41% of clinicians in our survey say this is precisely the GenAI feature they want next.
What makes that scene possible isn’t sci-fi robotics; it’s an invisible architecture that fuses clean, interoperable data with a real-time orchestration layer and “governance-as-code.” We still have homework to do. To close the gaps, think about data plumbing first, then embed the guardrails (rather than bolting them on) to turn hype into habit.
Milestones fall naturally once the foundation is set. In year one, I recommend that hospitals and health systems wire up the data fabric, publish enterprise-wide GenAI guidelines, and build an MLOps pipeline. In year two, scale ambient documentation across ambulatory clinics and measure the impact on documentation time and after-hours “pajama time.” In year three, let GenAI draft denial appeals and prior-authorization packets (67% of leaders said that burden is ripe for elimination). In years four and five, evolve into real-time clinical decision support with provenance and, ultimately, conversation-based care planning where the system executes orders the moment they’re spoken.
Thank you for the great interview. Readers who wish to learn more should visit Wolters Kluwer or read the Future Ready Healthcare Survey Report.