Interviews

Anton Dolgikh, Head of AI, Healthcare and Life Sciences at DataArt – Interview Series

Anton Dolgikh leads AI and ML-oriented projects in the Healthcare and Life Sciences practice at DataArt and runs education and training for developers focused on solving business problems with ML methods. Before joining DataArt, Dolgikh worked in the Department of Complex Systems at the Université Libre de Bruxelles, a leading Belgian private research university.

What was it that originally inspired you to pursue AI and life sciences as a career?

A passion for searching out the links between phenomena and facts. I have always liked to read; I love books. At university, I discovered a new source of information: articles. At some point, it became clear that getting a complete picture, crystallizing the beautiful truth from a mass of information, is almost impossible. And here comes AI. Statistics, machine learning of course, and natural science with AI at the top all act to build a bridge between the human brain’s thirst for knowledge and a world where all the laws are known and there are no black boxes.


You currently educate and train developers who are focused on solving business problems with ML methods. Is there a specific field of machine learning that you focus on more, for example, deep learning?

Yes, deep learning is a very popular and, let’s be honest, powerful instrument; we cannot neglect it. I personally prefer the Bayesian interpretation of classical algorithms, or even a combination of neural networks and a Bayesian approach, for example, a Bayesian Variational Autoencoder. But I believe that the most important thing to teach newcomers to ML is not to use the ML machinery blindly, like a magic black box, but rather to grasp the basic principles behind each and every method. A must-have skill is the ability to explain the predictions obtained to a business audience.
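
The interview does not include any code, but the combination mentioned here can be illustrated with a minimal sketch: a variational autoencoder pairs neural networks (an encoder and a decoder) with a Bayesian element, namely an approximate posterior over latent variables that is regularized toward a prior through a KL-divergence term. The layer sizes and loss choices below are illustrative assumptions, not DataArt's or Dolgikh's implementation.

```python
# Minimal sketch (not DataArt's code) of a variational autoencoder: neural
# networks learn an approximate posterior q(z|x) that is pulled toward a
# prior p(z) through a KL-divergence term. Layer sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, input_dim=784, hidden_dim=256, latent_dim=16):
        super().__init__()
        self.encoder = nn.Linear(input_dim, hidden_dim)
        self.mu = nn.Linear(hidden_dim, latent_dim)       # posterior mean
        self.log_var = nn.Linear(hidden_dim, latent_dim)  # posterior log-variance
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, input_dim),
        )

    def forward(self, x):
        h = F.relu(self.encoder(x))
        mu, log_var = self.mu(h), self.log_var(h)
        # Reparameterization trick: draw z ~ q(z|x) in a differentiable way.
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
        return self.decoder(z), mu, log_var

def elbo_loss(x, x_recon, mu, log_var):
    # Negative evidence lower bound: reconstruction error plus the KL term
    # that keeps the approximate posterior close to a standard normal prior.
    recon = F.mse_loss(x_recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return recon + kl
```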


In March 2019, you wrote an article called ‘Are we Ready for Machine Radiologists, and their Mistakes?’. In the article you outlined the pros and cons of accepting results from machine radiologists versus human radiologists. If you had to choose between a human and a machine giving you results, which would you choose and why?

I prefer a human radiologist. Not because I have some special knowledge that AI is heavily prone to errors or that its decisions are intrinsically erroneous. No, it’s more a question of empathy and psychology. I want to support human doctors during this difficult period. Moreover, I believe that in the near future, we will only see AI augmenting human ability.


You recently wrote a white paper called ‘The Impact of Artificial Intelligence on Lifespan.’ In this paper, you stated that AI should be viewed as a tool in the search for longer life. What are some of the more promising methodologies that AI could apply to the quest to extend human lifespan?

Today, AI as a new tool is beginning to operate in scientific laboratories on par with classical instruments and approaches. This fact itself is promising. AI is here to help us, not replace us, in the struggle to cope with the huge quantities of data flooding not only laboratories but also our personal lives.


Also discussed in the same white paper is a claim by Biogerontology Research Foundation AI Director and CEO of Insilico Medicine, Dr. Alex Zhavoronkov, that increasing lifespan to 150 years is not an unrealistic goal. Do you believe that a child born in 2020 will be able to live to 120 or even 150 years?

I want to believe. Being a scientist by education and belief, I have to base my decisions on facts, on understanding the progress of scientific methods in the area. We’ve made an impressive leap in the fields of genetics, biotechnology and medicine in general, and this strengthens my belief. And don’t forget that a substantial part of the success in increasing lifespan is a healthy environment and a healthy lifestyle, so we have to work on this.


In this same paper you mention the potential for mind uploading (transhumanism). Do you believe that this could eventually be a reality, and how does it make you feel personally?

I’ve thought a lot about it. Frankly, it makes me feel frustrated. I think that we associate personality with what we see in a mirror, and for me, it’s hard to detach my character from my body. Nevertheless, this doesn’t mean it is not possible. And, yes, I believe that sooner or later mind uploading will become feasible. The consequences are much harder to foresee.


You’re currently the Head of AI, Healthcare and Life Sciences at DataArt. What are some of the most interesting projects DataArt is currently working on?

We have a project dedicated to new drug development. It’s inspiring how computational methods have developed to fuel and direct the progress in medicinal chemistry and pharmacology. We also do a lot of work on applying AI to extract information from medical texts such as clinical trial reports, medical articles, and specialized forums. It’s hard work, but it takes us closer to the digitalization of healthcare, and I find this exciting.


You are an avid book lover, so I also need to ask: what books do you recommend?

I don’t want to call the list below a “must-read”. While there are standard textbooks suited for large groups, most “must-read” books reflect the personal experience and background of the person recommending them.


Is there anything else that you would like to share about DataArt?

DataArt is an excellent example of the recent trend toward digitalization of almost every aspect of life and activity. This trend increases responsibility in software development because today it’s not only about building a site for a shop, for example, in which case a mistake by the developer will have minimal consequences. Today a developer’s mistake could become a national or worldwide catastrophe if it involves a program controlling the functioning of, for example, a nuclear plant. DataArt’s responsible approach to software development in a broad sense gives me confidence in what we develop, and I am very proud to be part of the company and the work that we are doing.

As for another recent project of ours, last year DataArt launched a prototype application called ‘SkinCareAI’, which analyses skin images to detect early signs of melanoma. Featuring the latest advancements in machine learning (ML) technology, SkinCareAI was developed by DataArt ML expert Andrey Sorokin for the International Skin Imaging Collaboration (ISIC) challenge.

To learn more about some of our other projects and case studies, please go to DataArt’s Healthcare and Life Sciences page.


Antoine Tardif is a Futurist who is passionate about the future of AI and robotics. He is the CEO of BlockVentures.com, and has invested in over 50 AI & blockchain projects. He is the Co-Founder of Securities.io, a news website focusing on digital securities, and is a founding partner of unite.AI. He is also a member of the Forbes Technology Council.

Healthcare

Scientists Detect Loneliness Through The Use Of AI And NLP


Researchers from the University of California San Diego School of Medicine have made use of artificial intelligence algorithms to quantify loneliness in older adults and determine how older adults might express loneliness in their speech.

Over the past twenty years or so, social scientists have described a trend of rising loneliness in the population. Studies done over the past decade in particular have documented rising loneliness rates across large swaths of society, which has impacts on depression rates, suicide rates, drug use, and general health. These problems are only exacerbated by the Covid-19 pandemic, as people are unable to safely meet up and socialize in person. Certain groups are more vulnerable to extreme loneliness, such as marginalized groups and older adults. As MedicalXpress reported, one study done by UC San Diego found that senior housing communities had loneliness rates approaching 85% when counting those who reported experiencing moderate or severe loneliness.

In order to determine solutions to this problem, social scientists need to get an accurate view of the situation, determining both the depth and breadth of the issue. Unfortunately, most methods of gathering data on loneliness are limited in notable respects. Self-reporting, for instance, can be biased towards the more extreme cases of loneliness. In addition, questions that directly ask study participants to quantify how “lonely” they feel can sometimes be inaccurate due to social stigmas surrounding loneliness.

In an effort to design a better metric for quantifying loneliness, the authors of the study turned to natural language processing and machine learning. The NLP methods employed by the researchers are used alongside traditional loneliness measurement tools, and it is hoped that analyzing the natural ways people use language will lead to a less biased, more honest representation of people’s loneliness.

The new study’s senior author was Ellen Lee, assistant professor of psychiatry at the School of Medicine, UC San Diego. Lee and the other researchers focused their study on 80 participants between the ages of 66 and 94. Participants in the study were encouraged by the researchers to answer questions in a way that was more natural and unstructured than in most other studies. The researchers weren’t just asking questions and classifying answers. First author Varsha Badal, Ph.D., explained that using machine learning and NLP allowed the research team to take these long-form interview answers and find how subtle word choice and speech patterns could be indicative of loneliness when taken together:

“NLP and machine learning allow us to systematically examine long interviews from many individuals and explore how subtle speech features like emotions may indicate loneliness. Similar emotion analyses by humans would be open to bias, lack consistency, and require extensive training to standardize.”

According to the research team, individuals who were lonely had noticeable differences in the ways they responded to the questions compared to non-lonely respondents. Lonely respondents would express more sadness when asked questions regarding loneliness and had longer responses in general. Men were less likely to admit feeling lonely than women. In addition, men were more likely to use words expressing joy or fear than women were.

The researchers of the study explained that the results helped elucidate the differences between typical research metrics for loneliness and the way individuals subjectively experience and describe loneliness. The results of the study imply that loneliness could be detected through the analysis of speech patterns, and if these patterns prove to be reliable they could help diagnose and treat loneliness in older adults. The machine learning models designed by the researchers were able to predict qualitative loneliness with approximately 94% accuracy. More research will need to be conducted to see if the model is robust and if its success can be replicated. In the meantime, members of the research team are hoping to explore how NLP features might be correlated with wisdom and loneliness, which have an inverse correlation in older adults.
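
The study's actual pipeline is not reproduced in the article. Purely as an illustration of the general approach, the sketch below turns transcripts into simple word-level features and trains a classifier to separate participants who reported loneliness from those who did not. The TF-IDF features, logistic-regression model, and toy transcripts are all assumptions standing in for the researchers' much richer emotion and linguistic features.

```python
# Illustrative sketch only: a simplified text-classification pipeline for
# predicting a loneliness label from interview transcripts. The actual study
# used richer emotion and linguistic features; everything below is assumed.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical data: one transcript per participant and a self-reported label.
transcripts = [
    "I mostly stay at home, the days feel very long and quiet",
    "My friends visit every week and we play cards together",
    "Nobody calls anymore, I feel left out of everything",
    "I volunteer at the library and enjoy meeting new people",
]
labels = [1, 0, 1, 0]  # 1 = reported loneliness, 0 = did not

# Bag-of-words features and a linear classifier stand in for the study's
# NLP-derived emotion and word-choice features.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
scores = cross_val_score(model, transcripts, labels, cv=2)
print("cross-validated accuracy:", scores.mean())
```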


Healthcare

Updesh Dosanjh, Practice Leader, Technology Solutions, IQVIA – Interview Series


Updesh Dosanjh is Practice Leader of Technology Solutions at IQVIA, a world leader in using data, technology, advanced analytics and expertise to help customers drive healthcare – and human health – forward.

What is it that drew you initially to life sciences?

I’ve worked in multiple industries over the last 30 years, including the life sciences industry at the start of my career. When I chose to come back to the life sciences industry 15 years ago, it was to achieve three ambitions: work in an industry that contributed to the well-being of people; work in an area of industry that could be significantly helped by technology; and work in an industry that gave me the chance to work with nice people. Working with a pharmacovigilance team in life sciences has helped me to meet all three of these goals.

Can you discuss what human data science is and its importance to IQVIA?

The volume of human health data is growing rapidly—by more than 878 percent since 2016. Increasingly, advanced analytics are needed to bring important insights to light. Data science and technology are progressing rapidly; however, there continue to be challenges with the collection and analysis of structured and unstructured data, especially when it comes from disparate and siloed data sources.

The emerging discipline of human data science integrates the study of human science with breakthroughs in data technology to tap into the potential value big data can provide in advancing the understanding of human health. In essence, the human data scientist serves as a translator between the world of the clinician and the world of the data specialist. This new paradigm is helping to tackle the challenges facing 21st-century health care.

IQVIA is uniquely positioned to collect, protect, classify and study the data that helps us answer questions about human health. As a leader in human data science, IQVIA has a deep level of life sciences expertise as well as sophisticated analytical capabilities to glean insights from a plethora of data points that can help life science customers bring new medications to market faster and drive toward better health outcomes. By understanding today’s challenges and being creative about how new innovations can accelerate new answers, IQVIA has leaned into the concept of human data science—transforming the way the life sciences industry finds patients, diagnoses illness, and treats conditions.

How can AI best assist drug researchers in narrowing down which specific drugs deserve more industry resources?

Bringing new medications to market is incredibly costly and time-consuming—on average, it takes about 10 years and costs $2.6 billion to do so. When drug developers explore a molecule’s potential to treat or prevent a disease, they analyze any available data relevant to that molecule, which requires significant time and resources. Furthermore, once a drug is introduced and brought to market, companies are responsible for pharmacovigilance in which they need to leverage technology to monitor adverse events (AEs)—any undesirable experiences associated with the use of a given medication—thus helping to ensure patient safety.

Artificial intelligence (AI) tools can help life sciences organizations automate manual data processing tasks to look for and track patterns within data. Rather than having to manually sift through hundreds or thousands of data points to uncover the most relevant insights pertaining to a particular treatment, AI can help life sciences teams effectively uncover the most important information and bring it to the forefront for further exploration and actionable insights. This ensures more time and resources from life science teams are reserved for strategic analysis and decision-making rather than for data reporting.

You recently wrote an article detailing how biopharmaceutical companies that use natural language processing will have a competitive edge. Why do you believe this is so important?

Life sciences companies are under more pressure than ever to innovate, as they strive to advance global health and stay competitive in a highly saturated marketplace. Natural language processing (NLP) is currently being leveraged by life science companies to help mine and “read” unstructured, text-based documents. However, there is still significant untapped potential for leveraging NLP in pharmacovigilance to further protect patient safety, as well as assure regulatory compliance. NLP has the potential to meet evolving compliance requirements, understand new data sources, and elevate new opportunities to drive innovation. It does so by combining and comparing AEs from decades of legacy statistical data with new incoming patient data, which can be processed in real time, giving an unprecedented amount of visibility and clarity around information being mined from critical data sources.

Pharmacovigilance (the detection, collection, assessment, monitoring, and prevention of adverse effects with pharmaceutical products) is increasingly reliant on AI. Can you discuss some of the efforts being applied by IQVIA towards this?

As mentioned, one of the primary roles of pharmacovigilance (PV) departments is collecting and analyzing information on AEs. Today, approximately 80 percent of healthcare data resides in unstructured formats, like emails and paper documents, and AEs need to be aggregated and correlated from disparate and expansive data sources, including social media, online communities and other digital formats. What is more, language is subjective, and definitions are fluid. Although two patients taking the same medication may describe similar AE reactions, each patient may experience, measure, and describe pain or discomfort levels on a dynamic scale based on various factors. PV and safety professionals working at life sciences organizations that still rely on manual data reporting and processing need to review these extensive, varied, and complex data sets via inefficient processes. This not only slows down clinical trials but also potentially delays the introduction of new drugs to the marketplace, preventing patients from getting access to potentially life-saving medications.

The life sciences industry is highly data-driven, and there is no better ally for data analysis and pattern detection than AI.  These tools are especially useful in processing and extrapolating large, complex PV data sets to help automate manual workloads and make the best use of the human assets on safety teams. Indeed, the adoption of AI and NLP tools within the life sciences industry is making it possible to take these large, unstructured data sets and turn them into actionable insights at unprecedented speed. Here are a few of the ways AI can improve operational efficiency for PV teams, which IQVIA actively delivers to its customers today:

  1. Speed literature searches for relevant information
  2. Scan social media across the globe to pinpoint AEs
  3. Listen and absorb audio calls (e.g. into a call center) for mentions of a company or drug
  4. Translate large amounts of information from one language into another
  5. Transform scanned documents on AEs into actionable information
  6. Read and interpret case narratives with minimal human guidance
  7. Determine whether any patterns in adverse reaction data are providing new, previously unrealized information that could improve patient safety
  8. Automate case follow-ups to verify information and capture any missing data
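
As a toy illustration of the text-scanning items in the list above (for example, scanning posts or scanned documents for adverse-event mentions), the snippet below flags sentences that mention both a monitored product and a possible adverse-event term. The product names, term lists, and matching rule are invented for demonstration; real pharmacovigilance systems rely on trained NLP models and controlled vocabularies such as MedDRA rather than keyword lists.

```python
# Toy illustration: flag sentences that mention both a monitored product and a
# possible adverse-event term. The product names and term lists are invented;
# real systems use trained NLP models and vocabularies such as MedDRA.
import re

DRUG_TERMS = {"examplinib", "samplumab"}               # hypothetical products
AE_TERMS = {"nausea", "headache", "rash", "dizziness"}

def flag_possible_aes(text: str) -> list[str]:
    """Return sentences that pair a monitored product with an AE term."""
    hits = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        words = set(re.findall(r"[a-z]+", sentence.lower()))
        if words & DRUG_TERMS and words & AE_TERMS:
            hits.append(sentence.strip())
    return hits

post = ("Started examplinib last month. The headache and dizziness after "
        "taking examplinib were rough. Otherwise feeling fine.")
print(flag_possible_aes(post))
```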

Is there anything else you would like to share about IQVIA?

IQVIA leverages its large data sets, advanced technology and deep domain expertise to provide the critical differentiator in providing AI tools that are specifically built and trained for the life sciences industry. This unique combination of attributes is what has contributed to the successful implementation of IQVIA technology across a wide array of industry players. This supports integrated global compliance efforts for the industry as well as improving patient safety.

Thank you for the great interview. Readers who wish to learn more should visit IQVIA.


Healthcare

AI Algorithms Can Enhance the Creation of Bioscaffold Materials and Help Heal Wounds


Artificial intelligence and machine learning could help heal injuries by boosting the development speed of 3D printed bioscaffolds. Bioscaffolds are materials that allow organic objects, like skin and organs, to grow on them. Recent work done by researchers at Rice University applied AI algorithms to the development of bioscaffold materials, with the goal of predicting the quality of printed materials. The researchers found that controlling the speed of the printing is crucial to the development of useful bioscaffold implants.

As reported by ScienceDaily, a team of researchers from Rice University collaborated to use machine learning to identify possible improvements to bioscaffold materials. Computer scientist Lydia Kavraki, from the Brown School of Engineering at Rice, led a research team that applied machine learning algorithms to predict scaffold material quality. The study was co-authored by Rice bioengineer Antonios Mikos, who works on bone-like bioscaffolds that serve as tissue replacements, intended to support the growth of blood vessels and cells and enable wounded tissue to heal more quickly. The bioscaffolds Mikos works on are intended to heal musculoskeletal and craniofacial wounds, and they are produced with the assistance of 3D printing techniques that yield scaffolds fitted to the perimeter of a given wound.

The process of 3D printing bioscaffold material requires a lot of trial and error to get the printed batch just right. Various parameters like material composition, structure, and spacing must be taken into account. The application of machine learning techniques can reduce much of this trial and error, giving the engineers useful guidelines that reduce the need to fiddle around with parameters. Kavraki and other researchers were able to give the bioengineering team feedback on which parameters were most important, those most likely to impact the quality of the printed material.

The research team started by analyzing data on printing scaffolds from a 2016 study on biodegradable polypropylene fumarate. Beyond this data, the researchers came up with a set of variables that would help them design a machine learning classifier. Once all the necessary data was collected, the researchers were able to design models, test them, and get the results published in just over half a year.

In terms of the machine learning models used by the research team, the team experimented with two different approaches, both based on random forest algorithms, which aggregate decision trees to achieve a more robust and accurate model. The first model the team tested was a binary classifier that predicted whether a particular set of parameters would result in a low- or high-quality product. The second approach used a regression model that estimated which parameter values would yield a high-quality result.
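
The study's code and data are not included in this article, but the two approaches described above can be sketched with scikit-learn: a random forest classifier that labels a parameter set as low or high quality, and a random forest regressor that predicts a continuous quality score from the same parameters. The parameter names, ranges, and synthetic quality formula below are assumptions for illustration only, not the study's measurements.

```python
# Illustrative sketch of the two random-forest approaches described above,
# using made-up printing parameters (spacing, layer height, pressure,
# material fraction, print speed) rather than the study's real data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)
n = 200
# Columns: spacing, layer height, pressure, material fraction, print speed.
X = rng.uniform([0.2, 0.1, 20, 0.1, 1], [1.5, 0.5, 80, 0.9, 20], size=(n, 5))
# Synthetic quality score; the formula is an arbitrary stand-in for real data.
quality = (1.0 - 0.03 * X[:, 4] + 0.2 * X[:, 3]
           - 0.002 * np.abs(X[:, 2] - 50) + rng.normal(0, 0.05, n))

# Approach 1: binary classification into low- vs. high-quality prints.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X, (quality > np.median(quality)).astype(int))

# Approach 2: regression on the continuous quality score.
reg = RandomForestRegressor(n_estimators=200, random_state=0)
reg.fit(X, quality)

# Feature importances suggest which parameters matter most for print quality.
names = ["spacing", "layer height", "pressure", "material", "print speed"]
for name, imp in zip(names, clf.feature_importances_):
    print(f"{name:12s} {imp:.2f}")
```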

According to the results of the research, the most important parameters for high-quality bioscaffolds were spacing, layering, pressure, material composition, and print speed. Print speed was the most important variable overall, followed by material composition. It is hoped that the results of the study will lead to better, faster printing of bioscaffolds, thereby enhancing the reliability of 3D printing body parts like cartilage, kneecaps, and jawbones.

According to Kavraki, the methods used by the research team have the potential to be used at other labs. As Kavraki was quoted by ScienceDaily:

“In the long run, labs should be able to understand which of their materials can give them different kinds of printed scaffolds, and in the very long run, even predict results for materials they have not tried. We don’t have enough data to do that right now, but at some point we think we should be able to generate such models.”
