Andreas Cleve, Co-Founder and CEO of Corti – Interview Series

Andreas Cleve, Co-Founder and CEO of Corti, is an entrepreneur focused on advancing artificial intelligence in healthcare. His work in the sector began with Ovivo, a conversational workforce planning platform for hospitals that rapidly expanded across Denmark before being acquired in 2013. He later co-founded Hyvi, a research initiative exploring context-aware language models capable of understanding complex conversations in real time, which ultimately evolved into Corti in 2018. Beyond building companies, Cleve has played a key role in strengthening the Nordic AI ecosystem through initiatives like Nordic.ai and advisory roles with organizations including DIGITALEUROPE and Denmark’s National Digitization Council.

Corti is a Copenhagen-based healthcare AI company developing specialized models designed to understand medical conversations and support clinicians in real time. Its platform acts as an AI assistant for healthcare professionals by generating clinical documentation, surfacing insights during patient interactions, and automating administrative workflows. By offering its technology through APIs and integrations with healthcare systems, Corti aims to reduce clinician workload while improving efficiency and decision-making across hospitals and digital health platforms.

You grew up in a family where healthcare was a constant part of everyday life… How did those early experiences shape the founding of Corti, and what specific problems were you determined to solve from day one?

Growing up around healthcare made two things painfully clear: expertise matters enormously, and the processes that transfer that expertise are fragile and often fail the people who most need them. Those early household experiences, which included seeing caregivers struggle, watching knowledge get lost in hand-offs, and feeling the fear that comes from inconsistent care, seeded the belief that healthcare should be predictable and that clinicians should never be alone when a hard decision arrives. That translated directly to Corti’s founding mission: build systems that underwrite expertise, so clinicians always have reliable, real-time decision support.

From day one we set out to address the supply and demand imbalance in healthcare: the gap between the complexity of modern medicine and the limited human capacity to apply it everywhere. We do that by creating AI that reduces variance, speeds detection, and supports safer decisions in the moments that matter most.

Corti positions itself as healthcare AI infrastructure rather than a standalone AI assistant. What does infrastructure mean in this context, and what capabilities does it unlock that point solutions or chat-based tools cannot?

When we talk about infrastructure, we mean that we’re not shipping a single assistant or widget; we’re building the foundational stack that makes clinical-grade AI possible across many workflows. Infrastructure here means: healthcare-native models and data (not generic web data), a clinical reasoning layer that surfaces answers with clinical context, lifecycle and governance tools (model cards, audit trails, verifiable lineage), deployment options that meet regulators (sovereign clouds, on-prem or private endpoints), and developer-facing APIs and SDKs that let product teams plug clinical intelligence into their apps without becoming ML or compliance experts.

That approach unlocks three things point solutions cannot: (1) deployability, meaning models and runtimes that survive real clinical constraints (latency, data residency, auditability); (2) scale across specialties, meaning reusable, certified building blocks (speech, coding, clinically scoped endpoints) that reduce the cost of building many vertical apps; and (3) regulatory and enterprise trust, meaning policies, BAAs, and compliance primitives built into the platform so customers can move from pilots to production. In short, infrastructure turns clinical R&D into deployable services that developers and hospitals can ship, certify, and scale.
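To make the developer-facing side of this concrete, here is a rough sketch of what a clinically scoped endpoint with built-in governance metadata might look like. All names below (the request/response shapes, fields, and `document_encounter` function) are hypothetical illustrations of the ideas described above, not Corti's actual API or SDK:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical request/response shapes for a clinically scoped endpoint.
# The point is the governance metadata: every output carries verifiable
# model lineage and an audit identifier tying it back to its inputs.

@dataclass
class ClinicalRequest:
    transcript: str   # the patient conversation to document
    specialty: str    # e.g. "emergency_medicine"
    region: str       # drives data-residency routing, e.g. "eu-west"

@dataclass
class ClinicalResponse:
    note: str            # structured, EHR-ready note text
    model_version: str   # verifiable lineage for the audit trail
    audit_id: str        # links this output back to its inputs
    created_at: str      # UTC timestamp for the audit record

def document_encounter(req: ClinicalRequest) -> ClinicalResponse:
    """Stub standing in for a network call to a certified endpoint."""
    return ClinicalResponse(
        note=f"[{req.specialty}] note for transcript of {len(req.transcript)} chars",
        model_version="clinical-notes-1.2.0",
        audit_id=f"audit-{req.region}-0001",
        created_at=datetime.now(timezone.utc).isoformat(),
    )

resp = document_encounter(
    ClinicalRequest("Patient reports chest pain...", "emergency_medicine", "eu-west")
)
print(resp.audit_id)
```

The design choice being illustrated: a product team consumes a typed endpoint and gets the compliance primitives (lineage, audit IDs) for free, instead of building them per application.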

General-purpose AI models are often applied to clinical settings with mixed results. What are the most common ways these models fall short when used in real healthcare environments?

General-purpose models have made remarkable progress, and for many tasks they work well. But healthcare rewards depth in ways that horizontal AI can’t easily replicate. Clinical reasoning depends on subtle cues, specialised terminology, institutional context, and an understanding of how documentation flows through regulatory and reimbursement systems. Getting that right requires training on clinical data, validating against clinical benchmarks, and building compliance into the stack from the start. It’s not a prompting problem; it’s a research problem, which is why we think healthcare needs a dedicated AI lab, one that can go deep on the domain rather than broad across many.

Corti operates across Europe, the U.S., and beyond, each with different care models and governance. How do you design AI systems that adapt to this real-world complexity?

We design for complexity by owning more of the stack and by making deployment and governance first-class citizens. Practically, that means training on healthcare-only data and tuning models for clinical reasoning; building audit trails, model cards, and BAA-ready APIs; and architecting routing so compliance controls are selected by geography and risk profile. For customers who need it, we offer sovereign cloud and on-prem deployment options, so providers can choose where their data lives and maintain control over the models running on it.

That flexibility lets us run the same clinical AI across different care models while honouring local documentation standards, privacy laws, and institutional governance. Importantly, we treat research as laddering to production; every advance must be traceable, testable, and deployable in the real world, not just promising in the lab. That’s what it means to be built to thrive in clinical reality.
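The idea of routing compliance controls by geography and risk profile can be sketched as a simple lookup that fails closed. The region names, risk tiers, and deployment targets below are illustrative assumptions, not Corti's actual routing logic:

```python
# Toy sketch of geography- and risk-aware deployment routing: the same
# clinical AI workload is sent to different runtimes depending on data
# residency rules and the customer's risk profile.

DEPLOYMENT_TARGETS = {
    ("eu", "high"): "on_prem",
    ("eu", "standard"): "sovereign_cloud_eu",
    ("us", "high"): "private_endpoint_us",
    ("us", "standard"): "cloud_us",
}

def route(region: str, risk: str) -> str:
    """Pick where a workload runs based on residency and risk profile."""
    try:
        return DEPLOYMENT_TARGETS[(region, risk)]
    except KeyError:
        # Fail closed: unknown combinations default to the most
        # restrictive deployment option rather than a public cloud.
        return "on_prem"

print(route("eu", "standard"))  # sovereign_cloud_eu
```

The fail-closed default matters: in a regulated setting, an unrecognized geography or risk tier should degrade toward more control, never less.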

Looking at frontline clinical workflows today, where does Corti deliver the most immediate, measurable impact, and why do those areas matter most for overburdened clinicians?

Corti’s most immediate impact today is in the clinical and administrative workflows that carry the greatest burden. Our models and APIs power ambient documentation, coding, and agent-driven automation inside healthcare software used by clinicians every day.

Those areas matter because documentation and billing are among the most time-consuming and error-prone parts of care delivery. When conversations become structured, EHR-ready notes in real time, when coding is more complete and accurate, and when routine workflows are automated safely inside regulated systems, clinicians spend less time on paperwork and organisations see measurable improvements in efficiency and reimbursement quality.

Healthcare is not one monolithic problem but thousands of specialty-specific workflows operating under regulatory pressure. By building production-grade AI that thrives in clinical reality, we enable software companies and health systems to address those problems at scale. That is where healthcare’s AI lab delivers practical, measurable return.

Corti supports hundreds of thousands of patient interactions every day. What lessons have emerged operating AI at that scale that aren’t obvious in pilots or lab environments?

Operating at scale exposes friction that pilots hide: heterogeneous data quality (no two EHRs or call transcripts look the same), production latency and streaming constraints, legal and contractual complexity across customers and geographies, and the perpetual edge cases that only show up under load. Labs can measure accuracy on curated sets; production forces you to solve routing, observability, drift detection, model rollback, and accountable audit trails. Another lesson: real trust is earned by making models explainable, repeatable, and certifiable, rather than by single-site performance. Finally, pilots understate total cost of ownership: in production, developers need SDKs, consistent endpoints, and governance primitives to maintain safety and iterate productively.
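Of the production concerns mentioned above, drift detection is one of the easiest to make concrete. A minimal sketch, where the quality metric, window, and threshold are illustrative assumptions rather than any real monitoring setup:

```python
# Minimal sketch of production drift detection: compare a recent window
# of a quality metric (e.g. transcription accuracy) against a baseline
# window, and flag when it degrades beyond a tolerance.

def drift_detected(
    baseline: list[float], recent: list[float], tolerance: float = 0.05
) -> bool:
    """Flag drift when the recent mean falls more than `tolerance`
    below the baseline mean."""
    base_mean = sum(baseline) / len(baseline)
    recent_mean = sum(recent) / len(recent)
    return (base_mean - recent_mean) > tolerance

# A sustained drop from ~0.92 to ~0.81 exceeds the 0.05 tolerance.
print(drift_detected([0.92, 0.93, 0.91], [0.80, 0.82, 0.81]))  # True
```

Real systems layer alerting, rollback triggers, and statistical tests on top of a check like this, but the core loop is the same: a live metric, a baseline, and an accountable threshold.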

Healthcare demands higher explainability than consumer AI. How do you approach clinical reasoning, transparency, and accountability when AI influences medical decisions?

Healthcare demands a higher standard because the cost of error is real. Clinical AI cannot just generate plausible language; it has to reason over complex, regulated, high-stakes information in a way that is transparent and inspectable.

That is why we developed GIM, our Gradient Interaction Modifications method, to make clinical reasoning more interpretable at the model level. GIM recently ranked #1 among interpretability approaches on the Hugging Face Mechanistic Interpretability Benchmark leaderboard. That matters because interpretability is not an academic exercise in healthcare – it is foundational to trust, safety, and regulatory adoption.

Beyond research, transparency has to carry through to deployment. We provide model cards, validation benchmarks, audit trails, and version control so customers know exactly what is running and how it was evaluated. Outputs are tied to evidence, uncertainty is explicit, and systems are designed to support clinicians as an underwriter of decisions, not replace them with an opaque black box.

In healthcare, explainability is not a feature. It is a prerequisite for trust. That is why we approach clinical AI as a lab discipline first and ensure research ships in production-grade systems that can be inspected, governed, and safely deployed.

AI sovereignty is a critical topic in regulated sectors. What does sovereignty mean in healthcare, and how can providers maintain control while still benefiting from advanced AI?

In healthcare, sovereignty means that providers retain control over data residency, model choice, and operational governance. Practically, sovereignty is achieved with options for local or regional hosting (sovereign clouds and on-prem), private model endpoints, full audit and lifecycle control, and contractual and technical guarantees (BAAs, SLAs, DPIAs). Sovereignty is not anti-cloud; it is about giving providers the ability to choose where their workloads run and to have verifiable control and traceability over models and data. That combination lets providers access cutting-edge capabilities while meeting legal and institutional obligations.

As a founder and advisor to EU initiatives, how do you see regulation evolving, and where do policymakers still underestimate technical realities of clinical AI?

Europe is right to take regulation seriously. In healthcare, auditability, traceability, and accountability are not optional – they’re prerequisites for trust.

Where policymakers sometimes underestimate reality is in how operational clinical AI is. Certification is not a one-time approval; it requires continuous monitoring, version control, and ongoing validation. At the same time, we have to avoid over-regulating. If compliance becomes disproportionate, innovation slows and useful tools never reach clinicians.

At Corti, we assume regulation from day one. We build auditability, model governance, and sovereign deployment options directly into our models and APIs, so startups and established vendors don’t have to retrofit for compliance later. Healthcare is complex and fragmented, and the only way to move at pace is to bake regulatory readiness into the foundation. The balance Europe needs is rigorous but practical: protect patients but make it possible to build and deploy safely at scale.

Looking ahead 12–24 months, what major shifts should healthcare leaders expect from Corti, and how do those plans set the foundation for 2026?

Expect Corti to double down on the lab-to-production pathway: shipping research-backed, clinical-grade models and packaging them as deployable infrastructure (speech, coding, and agent endpoints, a clinical reasoning layer, and sovereign deployment options). Upcoming roadmap plans include improved speech-to-text (STT) and latency benchmarks, voice agents, medical coding models going into production, and multiple sovereign cloud launches, all explicitly designed to move customers from pilots to certified production. Corti is not a single application; it is healthcare’s AI lab, built to enable whole classes of safe, auditable clinical software – the foundation for our 2026 ambitions.

Thank you for the great interview; readers who wish to learn more should visit Corti.

Antoine is a visionary leader and founding partner of Unite.AI, driven by an unwavering passion for shaping and promoting the future of AI and robotics. A serial entrepreneur, he believes that AI will be as disruptive to society as electricity, and is often caught raving about the potential of disruptive technologies and AGI.

As a futurist, he is dedicated to exploring how these innovations will shape our world. In addition, he is the founder of Securities.io, a platform focused on investing in cutting-edge technologies that are redefining the future and reshaping entire sectors.