Tal Wenderow, President and CEO of Vocalis Health – Interview Series

Tal Wenderow is the President and CEO of Vocalis Health, an AI healthtech company pioneering the development of vocal biomarkers. Previously, Mr. Wenderow co-founded Corindus Vascular Robotics in 2002, a company listed on the New York Stock Exchange at the time of its acquisition by Siemens Healthineers in 2019.

What was the genesis story behind Vocalis Health?

Vocalis Health was founded in 2019 by Dr. Shady Hassan and Daniel Aronovich. Dr. Hassan, through treating patients in the hospital, realized that he was habitually listening to their voices to gauge the state of their health. As there was no standard system of quantifying their disease based on the voice, he was using his subjective experience and expertise to measure their health status. He then realized that he could help millions of patients before they even entered the hospital by utilizing artificial intelligence and vocal analysis technology to screen patients for a variety of diseases.

Vocalis Health was created with the mission of standardizing voice screening to raise the alerts as early as possible to improve patient outcomes, and to do so in an accessible, cost-effective, and non-invasive way.

Could you discuss the types of vocal biomarkers that are used to assess the risk level of a person being COVID-positive or requiring further testing?

The vocal biomarkers we develop are AI algorithms that deploy proprietary Machine Learning/Deep Learning processing techniques to assess the correlation between a patient’s voice and a variety of diseases, symptoms and medical conditions. The algorithm converts audio recordings into visual images called spectrograms, then applies computer vision techniques, along with information from the patient’s medical records, to detect small changes in the spectrogram.
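The audio-to-spectrogram step described above can be sketched with standard signal-processing tools. This is a minimal illustration using NumPy and SciPy; the sample rate and window parameters are illustrative assumptions, not Vocalis Health’s actual pipeline.

```python
import numpy as np
from scipy import signal

def voice_to_spectrogram(audio: np.ndarray, sample_rate: int = 16000) -> np.ndarray:
    """Convert a mono voice recording into a log-magnitude spectrogram image."""
    freqs, times, sxx = signal.spectrogram(
        audio,
        fs=sample_rate,
        nperseg=512,   # ~32 ms analysis window at 16 kHz (assumed values)
        noverlap=256,  # 50% overlap between windows
    )
    # Log scaling roughly mimics perceptual loudness and tames the dynamic range.
    return np.log(sxx + 1e-10)

# Example: one second of a synthetic 220 Hz tone standing in for a voice recording.
t = np.linspace(0, 1, 16000, endpoint=False)
spec = voice_to_spectrogram(np.sin(2 * np.pi * 220 * t))
print(spec.shape)  # (frequency bins, time frames)
```

The resulting 2-D array can then be treated as an image, which is what allows computer-vision techniques to be applied to voice data.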

To develop the COVID-19 biomarker, we collected data in clinical trials of PCR-confirmed COVID-positive patients as well as their voice recordings, and we trained an AI algorithm to recognize signs and patterns of the disease, essentially creating a “digital signature” of COVID-19. Every new patient’s voice that we analyze is compared to our COVID-19 vocal biomarker. We measure the correlation between their voice and the COVID-19 “signature,” assessing the probability that the person is infected with COVID-19.

Is age, language or accent a barrier to risk assessment?

Our algorithm only applies to adults, as we have not yet collected data on people under the age of 18. Participants in our trials and pilots have ranged in age from 19 to 83, with no observed effect of age. For children under 18, however, ongoing and variable growth and development – which is known to affect the voice – requires further research, which we are planning.

Regarding accents, we have tested our algorithm on many accents in English – with participants speaking English from diverse geographies including India, South Africa, Israel and the US – and found no significant effects, meaning accents do not affect the success of the algorithm.

Our database also includes several languages, and early results have shown our COVID-19 vocal biomarker is language-agnostic, which we demonstrated through a pilot we conducted on languages that the algorithm was not trained on. Even so, we do perform a validation on every new language to ensure the algorithm is successful and optimize it as needed.

Based on the current data what percentage of false positives or false negatives have been observed?

The terms false positive / false negative are applicable to diagnostic tools; our COVID-19 vocal biomarker is a screening tool, not a diagnostic tool. By definition, a screening tool will identify some COVID-negative people as high risk and vice versa. Our COVID-19 screening tool on its own has demonstrated an Area Under the Curve (AUC) of 0.73. When a symptom assessment is added to the tool, the AUC increases to 0.85, which led us to include a symptom questionnaire in our current product. As we continue to collect data, our algorithms become more sensitive and accurate over time.
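For readers unfamiliar with the metric quoted above: AUC measures how well a model’s risk scores rank positive cases above negative ones, which is exactly what matters for a screening tool that triages people toward further testing. A toy computation, with made-up scores and labels, using the rank-sum (Mann–Whitney U) formulation:

```python
import numpy as np

def auc(labels: np.ndarray, scores: np.ndarray) -> float:
    """AUC = probability a random positive is scored higher than a random negative."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)  # ranks 1..n, ascending by score
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    # Mann-Whitney U statistic, normalized by the number of pos/neg pairs.
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

labels = np.array([1, 1, 1, 0, 0, 0, 0])                 # illustrative data only
scores = np.array([0.9, 0.8, 0.4, 0.7, 0.3, 0.2, 0.1])
print(round(auc(labels, scores), 2))
```

An AUC of 0.5 corresponds to random ranking and 1.0 to perfect ranking, so 0.73 alone and 0.85 with the symptom questionnaire represent meaningful screening signal.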

Can you discuss the machine learning technologies that are used to assess the risk of a COVID-19 infection with a person’s voice?

Our approach to vocal biomarkers is converting the voice from the audio domain to the visual domain by creating a spectrogram of the voice. We then apply machine learning techniques to the image domain to correlate the algorithm with our data and generate the vocal biomarker.
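The “machine learning on the image domain” idea can be illustrated with the simplest possible model: a logistic-regression classifier over flattened spectrogram images. The random data below stands in for real spectrograms, and the label rule is synthetic; this is a sketch of the general technique, not Vocalis Health’s proprietary model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake dataset: 100 "spectrograms" of 32 frequency bins x 20 time frames,
# flattened into feature vectors. Labels follow a synthetic linear rule.
X = rng.normal(size=(100, 32 * 20))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = np.zeros(X.shape[1])
b = 0.0
for _ in range(500):                         # plain batch gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid probabilities
    grad_w = X.T @ (p - y) / len(y)
    grad_b = (p - y).mean()
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

accuracy = ((p > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

In practice a deep convolutional network would replace the linear model, but the pipeline shape is the same: image in, risk score out.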

Vocalis Health recently announced a collaboration with Mayo Clinic to research and develop new voice-based tools for screening, detecting and monitoring patient health. Could you discuss some details behind this partnership?

The Mayo Clinic is one of the leading healthcare systems in the world, and we previously conducted our first pivotal clinical trial with them on pulmonary hypertension (PH). Now, we are collaborating with them to further optimize and validate our vocal biomarker for PH and, in parallel, are exploring other disease states where vocal biomarkers can potentially have a positive impact on patient outcomes.

In a previous trial with Vocalis Health, the Mayo research team established a relationship between certain vocal characteristics and pulmonary hypertension (PH). Could you describe what is PH and what are the vocal characteristics that are looked for?

Pulmonary hypertension is high blood pressure in the pulmonary arteries, the blood vessels that supply blood to the lungs, and can eventually damage the right side of the heart. The walls of the pulmonary arteries become thick and stiff and cannot expand well to allow blood through, leading to reduced blood flow which makes it harder for the right side of the heart to pump blood through the arteries.

Pulmonary hypertension often goes unnoticed, as its symptoms are similar to those of other pulmonary diseases, and even once suspected, it’s challenging to diagnose. Our goal is to screen patients and to raise the flag as early as possible that a patient is at high risk for PH. PH is a chronic disease, currently without a cure, but treatment can help alleviate symptoms and improve quality of life. The earlier PH is diagnosed, the more effective treatment will be, and the better prognosis a patient will have.

What are some of the clinical implications for telemedicine and remote patient monitoring in the future?

Our goal with vocal biomarkers is to provide early detection, to screen, manage, monitor and predict a variety of symptoms and conditions to enable the healthcare system to become more proactive, not only reacting to severe symptoms or events.

Ours is not the only solution in development to accomplish these tasks, but we believe that the voice, being non-invasive, scalable and accessible without requiring any additional devices or equipment, is the ideal method of screening. By utilizing voice technology for remote patient monitoring, signs of disease can be noticed significantly earlier and patients can be treated accordingly.

Is there anything else that you would like to share about Vocalis Health?

At its core, Vocalis Health is a voice computing company applying AI and ML technology to enhance healthcare. We believe that our vocal biomarkers can be applicable to a myriad of diseases, ranging from the acute to the chronic, and allow doctors to be proactive in their treatment of patients.

Thank you for the great interview. Readers who wish to learn more should visit Vocalis Health.

Antoine Tardif is a futurist who is passionate about the future of AI and robotics. He is the CEO of BlockVentures.com and has invested in over 50 AI & blockchain projects. He is the co-founder of Securities.io, a news website focusing on digital securities, and a founding partner of unite.AI. He is also a member of the Forbes Technology Council.