Akilesh Bapu is the Founder & CEO of DeepScribe, which uses natural language processing (NLP) and advanced deep learning to generate accurate, compliant, and secure notes of doctor-patient conversations.
What was it that introduced and attracted you to AI and natural language processing?
If I remember correctly, Jarvis from “Iron Man” was the first thing that really attracted me to the world of natural language processing and AI. In particular, I found it fascinating how much faster a human could not only get through tasks but also dive into certain tasks in incredible depth, unveiling information they never would have known about if it weren’t for the AI.
It was this concept of “AI by itself won’t be as good as humans at most tasks but put a human and AI together and that combination will dominate.” Natural language processing is the most efficient way for this human/AI combination to happen.
From then on, I was obsessed with Siri, Google Now, Alexa, and the others. While they didn’t work as seamlessly as Jarvis, I so badly wanted to make them work as Jarvis did. What became apparent was that commands such as “Alexa do this,” “Alexa do that,” were pretty easy and accurate to handle with the current state of technology. But something like Jarvis, which can actually learn, understand, filter, and pick up on important topics during a conversational exchange—that hadn’t really been done before. This directly relates to one of my core motivations in founding DeepScribe. While we are solving the issue of documentation for physicians, we’re attempting a whole new wave of intelligence while doing it: ambient intelligence. AI that can dig through your day-to-day utterances, find useful information, and use that information to help you out.
You previously did some research using deep learning and NLP at UC Berkeley College of Engineering. What was your research on?
Back at the Berkeley AI Research Lab, I was working on a gene ontology annotator project where we were summarizing PubMed articles with specific output parameters.
The high-level overview: Take a task like CNN news article summarization. In that task you’re taking news articles and summarizing them into roughly a few sentences. In your favor, you have data and the ability to train these models on over a million articles. However, the problem space is enormous since you have limited structure to the summaries. In addition, there is hardly any structure to the actual articles. While there have been quite a few improvements in the 2.5 years since I worked on this project, this is still an unsolved problem.
In our research project, however, we were developing structured summaries of articles. A structured summary in this case is similar to a typical summary except we know the exact structure of the output summary. This is helpful since it dramatically reduces the output options for our machine learning model—the challenge was that there was not enough annotated training data to run a data-hungry deep learning model and get usable results.
The core of the work I did on this project was to leverage the knowledge we have around the input data and develop an ensemble of shallow ML models to support it—a technique we invented called the 2-step annotator. The 2-step annotator benchmarked at roughly 15x the accuracy of the previous best (54 percent vs. 3.6 percent).
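The 2-step annotator itself isn’t spelled out in the interview, so the following is only a toy sketch of the general idea: a first shallow model selects the relevant sentences, and a second assigns each one to a known slot in the structured summary, so neither stage faces the full unstructured output space. The sentences, labels, and model choices (scikit-learn TF-IDF plus logistic regression) are illustrative assumptions, not the actual research setup.

```python
# Toy sketch of a "2-step" structured annotator (assumed details, not the
# original research pipeline).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical annotated data: which sentences are relevant, and which
# structured-summary slot each relevant sentence belongs to.
sentences = ["Gene X regulates apoptosis.", "The weather was sunny.",
             "Protein Y binds DNA.", "We thank our funders."]
relevant = [1, 0, 1, 0]
relevant_sents = [s for s, keep in zip(sentences, relevant) if keep]
slots = ["function", "binding"]  # one slot label per relevant sentence

# Step 1: a shallow classifier that selects relevant sentences.
step1 = make_pipeline(TfidfVectorizer(), LogisticRegression())
step1.fit(sentences, relevant)

# Step 2: a second shallow classifier that maps a relevant sentence
# to its slot in the known output structure.
step2 = make_pipeline(TfidfVectorizer(), LogisticRegression())
step2.fit(relevant_sents, slots)

# Inference: select, then structure. Each stage's output space is small.
selected = [s for s, keep in zip(sentences, step1.predict(sentences)) if keep]
summary = {slot: s for slot, s in zip(step2.predict(selected), selected)} if selected else {}
print(summary)
```

The point of the decomposition is that each model solves a narrow, well-posed problem, which is far more forgiving on a small annotated dataset than a single end-to-end generator would be.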
While this project and DeepScribe may sound entirely different on the surface, they are highly similar in how they use the 2-step annotation method to vastly improve results on a limited dataset.
What was the inspiration behind launching DeepScribe?
It all started with my father, who was a medical oncologist. Before electronic health record systems took over health care, physicians would jot down things on paper and spend very little time on notes. However, once EHRs started becoming popular as part of the HITECH Act of 2009, I started noticing that my dad spent more and more time at the computer. He’d start coming home later. On the weekends, he’d be sitting on the couch dictating notes. Simple things like him picking me up from school or basketball practice became a thing of the past as he’d be spending most of his evening hours catching up on documentation.
As a nerdy kid growing up, I would try to find solutions for him by searching the web and having him try them out. Sadly, nothing worked well enough to save him from the long hours of documentation.
Fast forward several years to the summer of 2017—I’m a researcher at the Berkeley AI Research Lab, working on projects in document summarization. One summer when I’m back at home, I notice that my dad is still spending copious amounts of time documenting. I ask, “What’s new in the world of documentation? Alexa is everywhere, Google Assistant is so good now. Tell me, what’s the latest in the medical space?” And his answer was, “Nothing has changed.” I thought it was just him, but when I surveyed several of his colleagues, the issue was the same: the conversation wasn’t about the latest in cancer treatment or the novel problems their patients were facing—it was about documentation. “How can I get rid of documentation? How can I save time on documentation? It’s taking so much of my time.”
I also noticed several companies that had emerged to try to solve documentation. However, either they were too expensive (thousands of dollars per month) or they were too minimal in terms of technology. The physicians at that time had very few options. That was when the opportunity opened up that if we could create an artificially intelligent medical scribe, a technology that could follow physicians’ patient visits and summarize them, and offer it at a cost that could make it accessible for everyone, it could truly bring the joy of care back to medicine.
You were only 22 years old when you launched DeepScribe. Can you describe your journey as an entrepreneur?
At Berkeley, I continued to delve into the world of entrepreneurship as much as possible, primarily with their wide array of classes. My favorites were:
- The Newton Lecture Series—people like Jessica Mah from InDinero or Diane Greene from VMware, Cal alums who gave highly relatable talks about their time at Berkeley and how they started their own companies
- Challenge Lab—I actually met my co-founder Matt Ko through this class. We were placed in groups and went through a semester-long journey of creating a product and being mentored on what it takes during the early stages to get an idea going.
- Lean Launchpad—By far my favorite of the three; this was a grueling and rigorous process where we were guided by Steve Blank (acclaimed billionaire and the man behind the lean startup movement) to take an idea, validate it through 100 customer interviews, build a financial model, and more. This was the type of class where we pitched our “startup” only to get stopped on slide 1 or 2 and get grilled. If that wasn’t hard enough, we were also expected to interview 10 customers a week. Our idea at the time was to create a patent search that would give similar results to an expensive prior art search, which meant we were pitching to 10 enterprise customers a week. It was great because it taught us to think fast on our feet and be extra resourceful.
DeepScribe started when an investor group called The House Fund was writing checks for students who would turn down their summer internships and spend their summer building their company. We had just shut down Delphi (the patent search engine), and Matt and I had been constantly talking about medical documentation, so everything fell into place—it was the perfect time to give it a shot.
With DeepScribe, we were lucky to have just come fresh out of Lean Launchpad since one of the most important factors in building a product for physicians was to iterate and refine the product around customer feedback. A historical issue with the medical industry has been that software has rarely had physicians in the design loop, therefore resulting in software that wasn’t optimized for the end user.
Since DeepScribe was happening at the same time as my final year at Berkeley, it was a heavy balancing act. I’d show up to class in a suit so I could be on time for a customer demo right after. I’d use all the EE facilities and professors not for anything to do with class but 100 percent for DeepScribe. My meetings with my research mentor even turned into DeepScribe brainstorming sessions.
Looking back, if I had to change one thing about my journey, it would’ve been to put college on hold so I could spend 150 percent of my time on DeepScribe.
Can you describe for a medical professional what the advantages of using DeepScribe are versus the more traditional method of voice dictation or even taking notes?
Using DeepScribe is meant to be very similar to using an actual human scribe. As you talk naturally to your patient, DeepScribe will listen in, pick up on the medically relevant speech that usually goes in your notes, and put it in there for you, using the same medical language that you yourself use. We like to think of it as a new AI-powered member of your medical staff that you can train as you’d like to help with documentation in your electronic health record system. It’s very different from using a voice dictation service, as it eliminates the entire step of having to go back and document. While typical dictation services turn 10 minutes of documentation into 7-8 minutes, DeepScribe turns it into a few seconds. Our physicians report anywhere from 1.5 to 3 hours of time saved per day depending on how many patients they see.
DeepScribe is device-agnostic, operable from an iPhone, Apple Watch, browser (for telemedicine), or hardware device.
What are some of the speech recognition or NLP challenges that DeepScribe may encounter due to complex medical terminology?
Contrary to popular opinion, complex medical terminology is actually the easiest part for DeepScribe to pick up. The trickiest part for DeepScribe is to pick up on unique contextual statements a patient may give a physician. The more they stray from a typical conversation, the more we see the AI stumble. But as we collect more conversational data, we see it improve on this dramatically every day.
What are the other machine learning technologies that are used with DeepScribe?
The large umbrellas of speech recognition and NLP tend to cover most of the machine learning we’re doing at DeepScribe.
Can you name some of the hospitals, nonprofits, or academic institutions that are using DeepScribe?
DeepScribe started out through a pilot program with the UC Berkeley Health Center. Hartford Healthcare, Texas Medical Center, and Cedar Valley Medical Specialists are a handful of the larger systems DeepScribe is working with.
However, the larger share of DeepScribe users comes from 50 private practices stretching from Alaska to Florida. Our most popular specialties are primary care, orthopedics, gastroenterology, cardiology, psychiatry, and oncology, but we support a handful of other specialties as well.
DeepScribe has recently launched a program to assist with COVID-19. Could you walk us through this program?
COVID-19 has hit our doctors hard. Practices are only seeing 30-40 percent of their patient load, scribe staffing is being cut, and providers are being forced to rapidly switch all their patients on to telemedicine. All this ends up leading to more clerical work for providers—we at DeepScribe firmly believe that in order for this pandemic to come to a halt, physicians must devote 100 percent of their attention and time to taking care of their patients.
To aid this cause, we are proud to launch a free telemedicine solution for health care professionals fighting this pandemic. Our telemedicine solution is fully integrated with our AI-powered medical scribe, eliminating the need for clinical documentation for encounters made on our platform.
We’re also offering our scribe service for free during the pandemic. This means that any physician can get access to a scribe for free to handle their documentation. Our hope is that by doing this, physicians will be able to focus more of their attention on their patients and spend less time thinking about documentation, helping bring the COVID-19 outbreak to a halt faster.
Thank you for the great interview, I really enjoyed learning about DeepScribe and your entrepreneurial journey. Anyone who wishes to learn more should visit DeepScribe.
New Advancements in AI for Clinical Use
Researchers from Radboudumc helped advance artificial intelligence (AI) in the clinical setting by demonstrating how AI can diagnose problems similarly to a doctor, while also showing how it reaches the diagnosis. AI already plays a role in this environment, being used to quickly detect abnormalities that experts could label as a disease.
AI in the Clinical Setting
Artificial intelligence has been increasingly used in the diagnosis of medical imaging. What was traditionally done by a doctor studying an X-ray or biopsy to identify abnormalities can now be done with AI. Through the use of deep learning, these systems can make diagnoses by themselves, often proving as accurate as or even more accurate than human doctors.
The systems are not perfect, however. One issue is that the AI does not demonstrate how it is analyzing the images and reaching a diagnosis. Another is that these systems do nothing beyond their target task, meaning they stop once they reach a specific diagnosis. This can lead to the system missing some abnormalities even when the diagnosis is correct.
In this scenario, the human doctor is better at observing the patient, X-ray, or other images overall.
Advancements in the AI
These problems for AI in the clinical setting are now being addressed by researchers. Christina González Gonzalo is a Ph.D. candidate at the A-eye Research and Diagnostic Image Analysis Group of Radboudumc.
González Gonzalo developed a new method for diagnostic AI, using eye scans to find abnormalities of the retina. These specific abnormalities can be found easily by both human doctors and AI, and they often appear in groups.
In the case of the AI system, it would diagnose one or a few of the abnormalities and stop, demonstrating one of the downsides of using such a system. In order to address this, González Gonzalo developed a process where the AI goes over the picture multiple times. When it does this, it learns to ignore the places that it had already covered, which allows it to discover new ones. On top of that, the AI also highlights suspicious areas, making the whole diagnostic process more transparent for humans to observe.
This new method is different from the traditional AI systems used in these settings, which base their diagnosis on one assessment of the eye scan. Now, researchers can see how the new AI system reached its diagnosis.
In order to ignore the already detected abnormalities, the AI system digitally fills them with healthy tissue from around the abnormalities. The diagnosis is then made based on all of the assessment rounds being added together.
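The paper’s exact pipeline isn’t reproduced here, but the round-by-round procedure described above can be sketched in a few lines: detect the most suspicious region, record it, replace it with a “healthy” fill value so it is ignored next time, and repeat until nothing suspicious remains. The stand-in detector, the threshold, and the 1-D toy scan below are all illustrative assumptions.

```python
# Illustrative sketch of iterative detection with inpainting (assumed
# details; the real system works on 2-D retinal scans with a learned model).
import numpy as np

def detect_most_suspicious(scan, threshold=0.8):
    """Stand-in detector: the brightest spot above threshold, else None."""
    idx = np.unravel_index(np.argmax(scan), scan.shape)
    return idx if scan[idx] > threshold else None

def inpaint(scan, idx, fill):
    """Replace the detected spot with a 'healthy tissue' value."""
    scan = scan.copy()
    scan[idx] = fill
    return scan

def iterative_diagnosis(scan, healthy_value, max_rounds=10):
    findings = []
    for _ in range(max_rounds):
        spot = detect_most_suspicious(scan)
        if spot is None:           # nothing suspicious left: stop
            break
        findings.append(spot)      # record it, then hide it from the next round
        scan = inpaint(scan, spot, healthy_value)
    return findings                # all rounds added together = final diagnosis

# Toy 1-D "scan": background 0.1 with two abnormalities
scan = np.array([0.1, 0.95, 0.1, 0.9, 0.1])
found = iterative_diagnosis(scan, healthy_value=0.1)
print(found)  # both abnormal positions, found across two rounds
```

A single-pass detector would only report the brightest abnormality; the inpainting step is what forces each subsequent round to surface something new, and the per-round findings double as the highlighted suspicious areas that make the process transparent.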
The study found that this new system improved the sensitivity of the detection of diabetic retinopathy and age-related macular degeneration by 11.2 ± 2.0 percent.
This new system could really change how AI is used to diagnose diseases based on abnormalities, and the biggest advancement is the new transparency it can demonstrate during this process. This transparency is what will allow even more corrections and advancements in the future, with the end goal being an AI system that can diagnose problems far more accurately and quickly than the best human experts in the field. All of this could also lead to a more trustworthy system, possibly resulting in its widespread adoption across the larger field.
Naheed Kurji, Co-Founder, President and CEO of Cyclica – Interview Series
Naheed Kurji is the President and CEO of Cyclica, a Toronto-based biotechnology company that leverages artificial intelligence and computational biophysics to reshape the drug discovery process. Cyclica provides the pharmaceutical industry with an integrated, holistic, and end-to-end enabling platform that enhances how scientists design, screen, and personalize medicines for patients, and has recently been named by Deep Knowledge Analytics as one of the top 20 AI in Pharma companies globally.
Cyclica leverages artificial intelligence and computational biophysics to reshape the drug discovery process. Can you discuss in what way AI is used in this process?
Technology has played a critical role in drug discovery dating back to the ’80s. However, the drug discovery and development process is still very inefficient, time-consuming, and expensive, costing more than 2 billion dollars over 12 years. This poor efficiency often results in high rates of attrition and failure to meet drug safety and efficacy milestones. Researchers are aware of this, and they are actively seeking tools to holistically understand the qualities that define the best drugs in order to develop safer and more effective medicines.
Recent advances in cloud computing, AI, and biophysics have created an opportunity to gain deep insight from the vast amounts of biochemical, biological, healthcare, and patient data that are now available in order to better understand disease. These advances have also enabled medicinal chemists to enhance the design of novel therapies and use AI to drive greater predictive insights earlier in the drug development process. At Cyclica we have developed proprietary deep-learning engines, MatchMaker and POEM, to support the drug design process. MatchMaker predicts how chemical compounds and drugs interact with multiple proteins, known as polypharmacology. We found that the combination of a knowledge-based and a structure-based approach yielded the greatest predictive accuracy and performance. POEM (Pareto-Optimal Embedded Modeling) is a parameter-free supervised learning approach for building drug property prediction models; it addresses several limitations of other ML approaches, resulting in less overfitting and increased interpretability.
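POEM itself is proprietary, but one way to picture a “parameter-free” property predictor is nearest-neighbor prediction over molecular fingerprints: a new compound inherits the label of its most similar known compound, with no trained weights to fit. The 8-bit fingerprints and solubility labels below are purely hypothetical, and this is a generic illustration of the concept rather than Cyclica’s method.

```python
# Generic, parameter-free property prediction via nearest-neighbor lookup
# over binary molecular fingerprints (hypothetical toy data).

def tanimoto(a, b):
    """Tanimoto similarity between two binary fingerprints: |a & b| / |a | b|."""
    inter = sum(x & y for x, y in zip(a, b))
    union = sum(x | y for x, y in zip(a, b))
    return inter / union if union else 0.0

def predict_property(query, library):
    """Return the property label of the most similar known compound."""
    fingerprint, label = max(library, key=lambda entry: tanimoto(query, entry[0]))
    return label

# Toy 8-bit fingerprints with a made-up solubility label
library = [
    ([1, 1, 0, 0, 1, 0, 0, 0], "soluble"),
    ([0, 0, 1, 1, 0, 1, 1, 0], "insoluble"),
]
query = [1, 1, 0, 0, 1, 0, 1, 0]
print(predict_property(query, library))  # → "soluble" (closest to entry 1)
```

Because there are no parameters to tune, the predictor cannot overfit in the usual sense, and every prediction can be traced back to the specific labeled neighbors that produced it, which is the kind of interpretability benefit the interview describes.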
At Cyclica, we are using AI to provide scientists with a robust and validated platform to accelerate decision-making and hypothesis generation in order to increase the overall efficiency of the drug discovery process and to reduce the number of downstream failures.
Cyclica has designed the Ligand Design and Ligand Express platform, what is this precisely?
We are the first company to approach computational polypharmacology (an appreciation that drugs interact with multiple targets) with an integrated drug discovery platform that interrogates molecular interactions on a proteome-wide scale. Our platform is comprised of two key pieces, Ligand Express, our first generation off-target profiling and target deconvolution platform, and Ligand Design, our next generation single and multi-targeted in silico drug design technology. Ligand Express and Ligand Design are powered by two internally built, validated, and patented machine learning and deep learning engines: MatchMaker and POEM. Rooted deeply in protein biophysics, MatchMaker is a deep learning drug-target interaction engine that generalizes across both data-rich and data-poor targets (see validation notes here and here). POEM, a machine learning technology implemented for Absorption, Distribution, Metabolism, and Excretion (ADME) property prediction, is a novel, parameter-free approach to model building.
Taken together, Ligand Design and Ligand Express offer a powerful end-to-end AI-augmented drug discovery platform for the design of advanced, chemically novel lead-like molecules, simultaneously prioritizing compounds based on their polypharmacological profile and effectively minimizing undesirable off-target effects. Our differentiated platform opens new opportunities for drug discovery, including multi-targeted and multi-objective drug design, lead optimization, ADMET-property prediction, target deconvolution, and drug repurposing. Driven by a diverse and highly talented team with deep expertise across machine learning, computational biophysics/chemistry/biology, biochemistry, and medicinal chemistry, we are continuing to innovate through our robust R&D pipeline.
How important is decentralizing the discovery of medicine to the Cyclica business model?
Our vision is to decentralize the discovery of better medicines by combining our deep roots in Artificial Intelligence (AI) and protein biophysics with an innovative business model. And at the very core of Cyclica’s ethos is the steadfast desire to help patients by advancing the discovery and development of better medicines by taking a holistic yet personalized approach.
To this end, we believe that the future of drug discovery is in the hands of innovative research institutions and emerging biotech companies (we wrote about this in Forbes here). Supporting our philosophy, in 2019 IQVIA reported that emerging biopharma companies account for over 70% of the total R&D pipeline (up from 50% in 2003), and that these companies patented over two-thirds of new drugs in 2018 (up from 50% in 2010). While emerging biotech companies will lead innovation in drug discovery, big pharma will continue to invest in advancing late-stage clinical trials and market penetration through their sales channels.
With our Series B funding, we will accelerate commercial plans to advance a growing pipeline of pre-clinical and clinical assets through an innovative decentralized partnership model. Our goal is to create and own hundreds of drug discovery programs across multiple therapeutic areas. These programs are created via spin outs and joint ventures (JVs) with top tier research institutions, facilitated largely through the Cyclica Academic Partnership Program (“CAPP”).
Propelled by a rapidly growing portfolio of more than 30 active and advancing drug discovery programs, we will continue to spark innovation through a combination of venture creation and partnerships with early-stage and emerging biotech companies. Recent partnerships include EntheogeniX Biosciences, NineteenGale Therapeutics, Rosetta Therapeutics, the Rare Diseases Medicine Accelerator, and two stealth JVs encompassing over 50 programs across multiple therapeutic areas. By executing on our decentralized business model, creating new companies through spin-outs and joint ventures and helping them scale, we are in effect creating the biotech pipeline of the future.
Many of your technologies are cloud-based, why is this so important?
Access to the cloud allows us to computationally scale the workflows that we are conducting, as well as benefit from regulated security infrastructure. Also, as an early stage company, the ability to get up and running with the cloud without the overhead of investing in our own hardware was critical for the financial viability in our early days. Looking forward, while much of our R&D work is done on the cloud, over the past couple of years we have become less cloud-dependent with the ability to run projects on single machines. We are also aiming to support private cloud installations since that’s something we feel our partners may desire. Technological advancements have made it possible to do on a personal laptop what used to take many machines on the cloud, but by continuing to utilize the cloud we are able to greatly expand the scope of the problems we are solving.
Cyclica often takes equity positions in companies that they partner with. Can you discuss the business reasoning behind this?
Smaller biotechnology companies and academic groups are generally overlooked by the market in terms of partnership opportunities. While they may not have the resources, infrastructure or facilities in comparison to mature big pharma counterparts, small biotechs are increasingly entering the spotlight with a combination of deep subject-matter expertise in specific indications and the benefits of a lean organization conducive to rapid innovation.
This led us to think about how we could engage with these smaller companies through an avant-garde strategy. We partner with scientists in research organizations who are interested in spinning out a company, or with early-stage biotech companies, and enable them with our AI-augmented drug discovery platform through in-kind contributions. In return, we take equity in the companies and/or share in the ownership of the compounds and assets that are created and pursued. By sparking a surge of innovation through a combination of venture creation and partnerships, we can capture greater value and develop long-term relationships with our partners to address a spectrum of unmet medical needs and better the lives of patients.
EntheogeniX Biosciences is a joint venture between Cyclica and ATAI Life Sciences. What exactly is EntheogeniX Biosciences?
There is a unique opportunity for innovation in the neuropsychiatric landscape to better serve patients suffering from complex mental ailments. Current medicines and therapies that rely on single-targeted drug interventions often fall short, requiring patients to take multiple medications that may present potential safety issues as well as reduce medication adherence. We have partnered with ATAI Life Sciences to leverage their deep experience in mental health and psychedelics, while empowering them with our AI-augmented drug discovery platform to create not only new medicines, but the right ones to tackle mental ailments. EntheogeniX Biosciences is one of the many joint ventures we have formed and is a testament to our belief in changing the paradigm in which mental health disorders are treated by bringing our disease-agnostic, robust, and scientifically validated computational platform into the hands of subject-matter experts and world-class scientists.
Is there anything else that you would like to share about Cyclica?
While we are very excited to share the announcement of our Series B round of financing, we are just as eager to share the launch of the Cyclica Academic Partnership Program (CAPP) and new partnerships over the next few months.
Thank you for the interview. I look forward to following the future progress of Cyclica.
Groundbreaking Research Shows How Sensors Can Be 3D Printed on Contracting Organs
Major research has come out of the University of Minnesota that could have huge implications in healthcare. Mechanical Engineers and computer scientists have developed a new 3D printing technique that allows electronic sensors to be directly printed on organs that are expanding and contracting.
The new technique uses motion capture technology similar to that used to make movies, and besides having implications for the general field of healthcare, it could be applied specifically to diagnose and monitor the lungs of individuals with COVID-19.
The research was published in Science Advances, a scientific journal published by the American Association for the Advancement of Science (AAAS).
3D Printing Technique
The research is based on a 3D printing technique discovered two years ago. The technique was first used on a hand that rotated and moved left to right, with electronics printed directly on the skin of the hand. It has now been developed further to work on organs such as the lungs or heart, which expand and contract, changing shape and deforming as they do.
Michael McAlpine is a University of Minnesota mechanical engineering professor and senior researcher on the study.
“We are pushing the boundaries of 3D printing in new ways we never even imagined years ago,” said McAlpine. “3D printing on a moving object is difficult enough, but it was quite a challenge to find a way to print on a surface that was deforming as it expanded and contracted.”
Development and Future Applications
The researchers first used a balloon-like surface and a specialized 3D printer. They utilized motion capture tracking markers, like the ones used to create special effects in movies, in order to help the 3D printer adapt to the expansion and contraction movements on the surface.
After using the balloon-like surface, the researchers tested it on an animal lung that was artificially inflated. It proved to be a success, and a soft hydrogel-based sensor was printed directly on the surface.
According to McAlpine, this technology could be used in the future to print directly on a pumping heart.
“The broader idea behind this research is that this is a big step forward to the goal of combining 3D printing technology with surgical robots,” said McAlpine. “In the future, 3D printing will not be just about printing but instead be part of a larger autonomous robotic system. This could be important for diseases like COVID-19, where health care providers are at risk when treating patients.”
The research team also included lead author Zhijie Zhu, a mechanical engineering Ph.D. candidate at the University of Minnesota, as well as Hyun Soo Park, assistant professor in the University of Minnesota Department of Computer Science and Engineering.
The work was supported by Medtronic and the National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health.