Interviews
Vasili Razhnou, CEO and Founder of MEDvidi – Interview Series

Vasili Razhnou is the CEO and Founder of MEDvidi, an AI-powered mental health platform. As a serial founder with over 15 years in healthcare and business, he has built five technology startups. At MEDvidi, Vasili is leading the development of AI-powered clinical tools that reduce administrative burden and enable providers to deliver faster, more consistent care. Under his leadership, the company reached $30M in ARR.
You’ve spent over a decade building healthcare infrastructure, from early clinic digitization to scaling multiple telehealth ventures before founding MEDvidi. What specific problem or moment pushed you to start the company, and how did those earlier experiences shape your approach to building AI-driven clinical systems?
It started long before MEDvidi. In 2008, when I joined my first clinic, everything was still running on paper. Our offices were full of medical records, which created physical and mental clutter. It used to take about five days to locate and retrieve patient records.
I bought a scanner and a shredder to digitize everything. That single change transformed how the clinic operated. It saved money and time and made patient records easily accessible. That simple step showed me that operational infrastructure is often the foundation of good care.
From there, we built an online interface with cloud storage, then a small intake and EHR system, adding features year by year.
MEDvidi originally emerged from traditional offline clinics in San Francisco and Miami in 2019 and transitioned to a custom telehealth platform in 2020 to make mental health care accessible across the U.S. While building the company, we realized that providers are overwhelmed – they spend an average of 16 hours per week on administrative tasks.
To address this bottleneck, we developed an AI-powered clinical tool. Today, MEDvidi provides care for common conditions like ADHD, anxiety, and depression across the U.S., while automating workflows and prescription medication management for clinicians with AI. By reducing friction in documentation and administrative work, we expand both patient access and provider capacity.
You’ve seen healthcare evolve from manual workflows to large-scale telehealth platforms. What are the biggest operational inefficiencies that still persist today, and why have they been so difficult to solve without AI?
The biggest problem in healthcare is still provider capacity. Clinicians spend too much time on admin tasks, leaving little time for new patients. At MEDvidi, we see it firsthand – within three months of joining us, most providers are 80% booked with follow-up patients.
During those visits, the majority of time is spent on routine admin tasks such as verifying patient identity, charting, pulling PDMP reports, assessing for drug-seeking behavior, and reviewing medical history. These are important tasks, but they don’t require the clinical judgment reserved for complex diagnoses.
AI changed that – we can now automate most of it. For example, the AI Chart Generator transcribes visits in real time, updating documentation every 60 seconds and cutting charting time tenfold. The AI Chart Reviewer monitors 100% of clinical encounters for SOP adherence, reducing chart review time by 80% while handling ID verification, drug-seeking detection, and guideline compliance. An AI Receptionist handles rescheduling via SMS and voice, gathers prescription-related issues from patients, provides updates, and integrates the information into workflows.
Your platform focuses heavily on automating routine psychiatric workflows while keeping physicians in the loop. How do you define the right boundary between automation and clinical decision-making?
Healthcare providers remain at the center of care. This is the only right way to do it. MEDvidi’s AI is designed to support and empower clinicians, not to replace them. Every clinical decision, prescription, and treatment plan is reviewed and approved by a licensed medical provider.
I believe healthcare needs more proof that the technology can improve efficiency without compromising safety. Our goal is to make sure providers aren’t wasting their judgment on tasks that don’t require it. When a stable patient comes in for a routine follow-up, and the case is straightforward, AI can handle preparation, documentation, and review, and the provider confirms the decision. The human is always in the loop, but we’re making sure their time is spent where it actually matters.
The AI Prescribing Assistant is trained on real clinical data and requires physician approval for every decision. How do you think about safety, accountability, and auditability when deploying AI in such high-stakes environments?
When you operate in a highly regulated space like healthcare, you can’t afford to get this wrong.
Unlike other AI health tools trained on non-specific medical data, MEDvidi AI is trained on 130,000+ real psychiatric visits, providing domain-specific accuracy. It’s a unique infrastructure, purpose-built and trained for psychiatric workflows, regulations, and controlled-substance requirements.
Our AI system works as a clinical verification layer, grounded in evidence-based guidelines and a proprietary dataset of thousands of real historical visits. It ensures every prescription aligns with standards and provides regulators with transparent oversight. Crucially, the AI does not make independent decisions. That’s the architecture we intentionally built.
Many telehealth platforms have faced scrutiny around overprescribing and misaligned incentives. How can AI systems actually improve compliance and rebuild trust rather than amplify those risks?
In healthcare, there are always two components: the business side and the clinical side. Many telehealth companies blurred that line during the boom years, prioritizing growth and, in some cases, compromising clinical rigor.
At MEDvidi, we’ve always kept those functions strictly separated. Clinical decisions are never influenced by business incentives. Our AI systems actually reinforce that separation rather than weaken it.
One of the key ways we do this is through AI-powered chart review. Every patient encounter is checked against standardized clinical SOPs to ensure the treatment plan is appropriate and compliant. These SOPs are not created by business teams – they are developed and continuously reviewed by a committee of licensed medical professionals and aligned with all applicable laws and regulations. They are designed with one goal in mind: delivering the best possible care for each individual patient. Importantly, these protocols are fully auditable and can be reviewed by regulators at any time.
AI becomes a layer of consistency and accountability. It helps ensure that care decisions are based on clinical standards, not subjective pressure, time constraints, or patient demand. That also means we sometimes say no. If a patient comes in expecting a specific medication because they read about it online, but it’s not clinically appropriate, our providers won’t prescribe it – and AI helps enforce that standard consistently.
There is a tradeoff. Patients who don’t receive the treatment they expect may leave negative reviews. But that’s the cost of practicing responsible medicine. In the long run, this kind of transparent, protocol-driven, and auditable system is what strengthens compliance and rebuilds trust across patients, providers, and regulators.
You’ve highlighted that up to 80% of psychiatric visits are routine follow-ups. How does automating these interactions fundamentally change access to care and the economics of mental health delivery?
Today, access to mental health care is constrained not by demand but by how clinician time is allocated. Up to 80% of psychiatric visits are routine follow-ups – often driven by regulatory requirements rather than clinical complexity. In many of these cases, the provider is verifying that a stable patient is continuing the same treatment, with no meaningful changes.
That creates a structural bottleneck. Clinicians spend most of their time maintaining existing patients, while new patients wait 6 to 9 weeks to be seen. This is exactly where automation has the most impact. For stable patients, the workflow is highly structured: symptom checks, side effect monitoring, adherence verification, and compliance review.
These are protocol-driven interactions that AI can handle consistently and at scale. When something falls outside expected parameters – an adverse reaction, a change in symptoms, or any red flag – the case is immediately escalated to a provider.
By shifting these routine interactions to AI, we fundamentally rebalance capacity. Clinicians can redirect their time toward new patients and more complex cases where human judgment is critical. That alone expands access without increasing the number of providers.
The economics change as well. The cost of servicing a stable patient drops significantly, while provider productivity increases. Instead of being a limiting factor, clinician time becomes a leveraged resource. At scale, this means shorter wait times, lower costs, and the ability to serve populations that were previously underserved – including rural patients and those who can’t take time off work.
In short, automation doesn’t replace care – it reallocates it. It removes the regulatory and administrative burden from clinicians and converts it into scalable infrastructure, which is what ultimately unlocks access.
In your recent article, “Why AI in Healthcare Is Being Deployed in the Wrong Place,” you argue that the industry is focusing too much on replacing clinicians instead of fixing administrative bottlenecks. What are the biggest misconceptions driving this misalignment?
People still tend to think that “AI in healthcare” only means ChatGPT talking to patients instead of real doctors and prescribing medication with no oversight.
AI infrastructure in healthcare is highly complex and always requires human oversight. When companies try to shortcut that process and jump straight to autonomous clinical decision-making, they run into trust, regulatory, and safety problems.
The right entry point is the administrative layer. Fix that first, showcase and prove safety, build trust, and then expand from there. That’s the path MEDvidi is on.
If administrative automation is the highest return entry point for AI in healthcare, what specific workflows should organizations prioritize first to see immediate impact?
The biggest mistake is trying to layer AI on top of broken workflows. The goal shouldn’t be incremental improvement – it should be rethinking where entirely new workflows can be built with AI.
Start by mapping the clinical and operational process end-to-end and identifying where time is actually spent. In most organizations, the largest bottlenecks are scheduling, patient flow, and documentation. These are high-volume, repetitive tasks where AI can deliver immediate ROI. Automating scheduling reduces no-shows and idle provider time. AI-driven documentation – like real-time transcription and chart generation – removes one of the heaviest burdens on clinicians.
But the real opportunity goes beyond optimization. Some workflows, especially routine follow-ups or compliance checks, can be fully redesigned around AI rather than just assisted by it. That’s where step-function gains happen.
Compliance monitoring is another good example. Today, organizations manually audit a small percentage of encounters. With AI, you can review 100% of interactions in real time, flagging documentation gaps, SOP deviations, and potential risks before they escalate.
In some cases, these new AI-native workflows may not fit neatly into existing regulatory frameworks. That means organizations need to be prepared to validate their approach, generate evidence, and work closely with regulators to demonstrate safety and compliance.
The companies that will see the biggest impact are not the ones adding AI features, but the ones willing to rebuild core workflows around what AI makes possible.
Healthcare is uniquely complex with layered regulations, fragmented data, and high consequences for errors. What does a production-ready AI architecture actually look like in this environment compared to a demo or pilot system?
The AI should be trained on domain-specific, real clinical data and built around real workflows. Every output should be auditable. This means all charts, flagged prescriptions, and SOP checks are reviewable and traceable.
A production-ready system also needs to account for how care is actually delivered. Providers are very protocol-based, and when you hire independent clinicians, they bring habits from previous settings. AI helps standardize those habits while supporting providers’ existing workflows.
Again, the human oversight layer is crucial. AI should handle the administrative and analytical workload, while clinicians remain responsible for final decisions.
Most importantly, the system should be built from the ground up with compliance, security, and reliability in mind.
Looking ahead, how do you see AI reshaping telehealth and prescribing over the next three years, especially as regulators begin to respond to early deployments like AI-assisted prescription workflows?
The regulatory environment is shifting. AI is already here in healthcare. States like Utah are creating sandboxes to let technology companies demonstrate what AI can do, including prescribing controlled substances.
Over the next few years, we’ll see fully automated follow-up care for stable patients: AI-managed visits with physicians in a supervisory role, confirming decisions. That model makes care faster and cheaper for people who currently can’t access it at all. That’s the standard we’re trying to set.
Thank you for the great interview. Readers who wish to learn more should visit MEDvidi.