
Healthcare

Groundbreaking Research Shows How Sensors Can Be 3D Printed on Contracting Organs

Major research has come out of the University of Minnesota that could have huge implications for healthcare. Mechanical engineers and computer scientists have developed a new 3D printing technique that allows electronic sensors to be printed directly on organs that are expanding and contracting.

The new technique uses motion capture technology similar to that used in filmmaking. Beyond its implications for healthcare in general, it could be applied specifically to diagnose and monitor the lungs of individuals with COVID-19.

The research was published in Science Advances, a scientific journal published by the American Association for the Advancement of Science (AAAS). 

3D Printing Technique

The research builds on a 3D printing technique first demonstrated two years ago. The technique was initially used on a hand that rotated and moved left to right, with electronics printed directly on the skin of the hand. It has now been developed further to work on organs such as the lungs or heart, which expand and contract and therefore change shape as they deform.

Michael McAlpine is a University of Minnesota mechanical engineering professor and senior researcher on the study.

“We are pushing the boundaries of 3D printing in new ways we never even imagined years ago,” said McAlpine. “3D printing on a moving object is difficult enough, but it was quite a challenge to find a way to print on a surface that was deforming as it expanded and contracted.”

Development and Future Applications

The researchers started with a balloon-like surface and a specialized 3D printer. They used motion capture tracking markers, like those used to create special effects in movies, to help the 3D printer adapt to the surface's expansion and contraction.
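
In broad terms, the idea is to track the marker positions in real time, estimate how the surface has moved since the toolpath was planned, and warp the planned path to match. The sketch below is a minimal illustration of that closed-loop idea, using an assumed affine correction in NumPy; it is not the team's published algorithm.

```python
# Conceptual sketch (not the authors' code): warp a planned print path so it
# follows a surface whose tracked markers have moved since the path was planned.
import numpy as np

def estimate_affine(ref_markers: np.ndarray, cur_markers: np.ndarray) -> np.ndarray:
    """Least-squares affine transform (3x4) mapping reference marker positions
    to their currently tracked positions. Both arrays have shape (N, 3)."""
    ones = np.ones((ref_markers.shape[0], 1))
    A = np.hstack([ref_markers, ones])                   # (N, 4) homogeneous reference points
    T, *_ = np.linalg.lstsq(A, cur_markers, rcond=None)  # (4, 3) least-squares solution
    return T.T                                           # (3, 4)

def warp_toolpath(toolpath: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Apply the transform to every planned nozzle position (N, 3)."""
    ones = np.ones((toolpath.shape[0], 1))
    return (T @ np.hstack([toolpath, ones]).T).T

# Per frame: read marker positions from the motion-capture system, re-fit the
# transform, and send the warped waypoints to the printer so the nozzle stays
# registered to the expanding and contracting surface.
```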

After the balloon-like surface, the researchers tested the technique on an artificially inflated animal lung. The test was a success: a soft hydrogel-based sensor was printed directly on the lung's surface.

According to McAlpine, this technology could be used in the future to print directly on a pumping heart.

“The broader idea behind this research is that this is a big step forward to the goal of combining 3D printing technology with surgical robots,” said McAlpine. “In the future, 3D printing will not be just about printing but instead be part of a larger autonomous robotic system. This could be important for diseases like COVID-19, where health care providers are at risk when treating patients.”

The research team also included lead author Zhijie Zhu, a mechanical engineering Ph.D. candidate at the University of Minnesota, as well as Hyun Soo Park, assistant professor in the University of Minnesota Department of Computer Science and Engineering. 

The work was supported by Medtronic and the National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health.

 


Healthcare

Scientists Detect Loneliness Through the Use of AI and NLP


Researchers from the University of California San Diego School of Medicine have used artificial intelligence algorithms to quantify loneliness in older adults and to determine how they might express loneliness in their speech.

Over the past twenty years or so, social scientists have described a trend of rising loneliness in the population. Studies done over the past decade in particular have documented rising loneliness rates across large swaths of society, with impacts on depression rates, suicide rates, drug use, and general health. These problems are only exacerbated by the COVID-19 pandemic, as people are unable to safely meet up and socialize in person. Certain groups, such as marginalized groups and older adults, are more vulnerable to extreme loneliness. As MedicalXpress reported, one study by UC San Diego found that loneliness rates in senior housing communities approached 85% when counting those who reported experiencing moderate or severe loneliness.

In order to determine solutions to this problem, social scientists need to get an accurate view of the situation, determining both the depth and breadth of the issue. Unfortunately, most methods of gathering data on loneliness are limited in notable respects. Self-reporting, for instance, can be biased towards the more extreme cases of loneliness. In addition, questions that directly ask study participants to quantify how “lonely” they feel can sometimes be inaccurate due to social stigmas surrounding loneliness.

In an effort to design a better metric for quantifying loneliness, the authors of the study turned to natural language processing and machine learning. The NLP methods were used alongside traditional loneliness measurement tools, and it is hoped that analyzing the natural ways people use language will lead to a less biased, more honest representation of people's loneliness.

The new study's senior author was Ellen Lee, assistant professor of psychiatry at the UC San Diego School of Medicine. Lee and the other researchers focused their study on 80 participants between the ages of 66 and 94. Participants were encouraged to answer questions in a way that was more natural and unstructured than in most other studies; the researchers were not simply asking questions and classifying answers. First author Varsha Badal, Ph.D., explained that machine learning and NLP allowed the research team to take these long-form interview answers and find how subtle word choices and speech patterns, taken together, could be indicative of loneliness:

“NLP and machine learning allow us to systematically examine long interviews from many individuals and explore how subtle speech features like emotions may indicate loneliness. Similar emotion analyses by humans would be open to bias, lack consistency, and require extensive training to standardize.”

According to the research team, individuals who were lonely had noticeable differences in the ways they responded to the questions compared to non-lonely respondents. Lonely respondents would express more sadness when asked questions regarding loneliness and had longer responses in general. Men were less likely to admit feeling lonely than women. In addition, men were more likely to use words expressing joy or fear than women were.

The researchers of the study explained that the results helped elucidate the differences between typical research metrics for loneliness and the way individuals subjectively experience and describe loneliness. The results of the study imply that loneliness could be detected through the analysis of speech patterns, and if these patterns prove to be reliable they could help diagnose and treat loneliness in older adults. The machine learning models designed by the researchers were able to predict qualitative loneliness with approximately 94% accuracy. More research will need to be conducted to see if the model is robust and if its success can be replicated. In the meantime, members of the research team are hoping to explore how NLP features might be correlated with wisdom and loneliness, which have an inverse correlation in older adults.
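
To give a rough sense of how such a pipeline fits together, the toy sketch below converts a handful of invented transcript snippets into simple text features and fits a classifier. It is only an illustration of the general pattern, not the study's model, which also drew on emotion analysis and other linguistic features.

```python
# Toy illustration (not the study's actual pipeline): turn interview transcripts
# into text features and fit a classifier that predicts a loneliness label.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical example transcripts and self-reported loneliness labels (1 = lonely).
transcripts = [
    "I mostly stay home; the days feel long and quiet.",
    "My daughter visits every week and we play cards with the neighbors.",
    "Since my husband passed, evenings are the hardest part.",
    "I volunteer at the library and see friends most mornings.",
]
labels = [1, 0, 1, 0]

# Word and bigram features feed a simple linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(transcripts, labels)

print(model.predict(["The phone rarely rings and I eat alone most nights."]))
```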


Healthcare

Updesh Dosanjh, Practice Leader, Technology Solutions, IQVIA – Interview Series


Updesh Dosanjh is Practice Leader of Technology Solutions at IQVIA, a world leader in using data, technology, advanced analytics, and expertise to help customers drive healthcare – and human health – forward.

What is it that drew you initially to life sciences?

I’ve worked in multiple industries over the last 30 years, including the life sciences industry at the start of my career. When I chose to come back to the life sciences industry 15 years ago, it was to achieve three ambitions: to work in an industry that contributed to the well-being of people; to work in an area that could be significantly helped by technology; and to work in an industry that gave me the chance to work with nice people. Working with a pharmacovigilance team in life sciences has helped me to meet all three of these goals.

Can you discuss what human data science is and its importance to IQVIA?

The volume of human health data is growing rapidly—by more than 878 percent since 2016. Increasingly, advanced analytics are needed to bring critical insights to light. Data science and technology are progressing rapidly; however, there continue to be challenges with the collection and analysis of structured and unstructured data, especially when it comes from disparate and siloed data sources.

The emerging discipline of human data science integrates the study of human science with breakthroughs in data technology to tap into the potential value big data can provide in advancing the understanding of human health. In essence, the human data scientist serves as a translator between the world of the clinician and the world of the data specialist. This new paradigm is helping to tackle the challenges facing 21st-century health care.

IQVIA is uniquely positioned to collect, protect, classify and study the data that helps us answer questions about human health. As a leader in human data science, IQVIA has a deep level of life sciences expertise as well as sophisticated analytical capabilities to glean insights from a plethora of data points that can help life science customers bring new medications to market faster and drive toward better health outcomes. By understanding today’s challenges and being creative about how new innovations can accelerate new answers, IQVIA has leaned into the concept of human data science—transforming the way the life sciences industry finds patients, diagnoses illness, and treats conditions.

How can AI best assist drug researchers in narrowing down which specific drugs deserve more industry resources?

Bringing new medications to market is incredibly costly and time-consuming—on average, it takes about 10 years and costs $2.6 billion to do so. When drug developers explore a molecule’s potential to treat or prevent a disease, they analyze any available data relevant to that molecule, which requires significant time and resources. Furthermore, once a drug is introduced and brought to market, companies are responsible for pharmacovigilance in which they need to leverage technology to monitor adverse events (AEs)—any undesirable experiences associated with the use of a given medication—thus helping to ensure patient safety.

Artificial intelligence (AI) tools can help life sciences organizations automate manual data processing tasks to look for and track patterns within data. Rather than having to manually sift through hundreds or thousands of data points to uncover the most relevant insights pertaining to a particular treatment, AI can help life sciences teams effectively uncover the most important information and bring it to the forefront for further exploration and actionable insights. This ensures more time and resources from life science teams are reserved for strategic analysis and decision-making rather than for data reporting.

You recently wrote an article detailing how biopharmaceutical companies that use natural language processing will have a competitive edge. Why do you believe this is so important?

Life sciences companies are under more pressure than ever to innovate, as they strive to advance global health and stay competitive in a highly saturated marketplace. Natural language processing (NLP) is currently being leveraged by life science companies to help mine and “read” unstructured, text-based documents. However, there is still significant untapped potential for leveraging NLP in pharmacovigilance to further protect patient safety, as well as assure regulatory compliance. NLP has the potential to meet evolving compliance requirements, understand new data sources, and elevate new opportunities to drive innovation. It does so by combining and comparing AEs from decades of statistical legacy data and new incoming patient data—which can be processed in real time—giving an unprecedented amount of visibility and clarity around information being mined from critical data sources.
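
As a very rough illustration of the underlying idea, the sketch below scans a piece of unstructured text for a hypothetical product name alongside a small set of assumed adverse-event terms. Production pharmacovigilance NLP relies on trained models and medical dictionaries such as MedDRA rather than keyword lists; this is only a minimal stand-in for the concept.

```python
# Minimal sketch of the idea: flag unstructured text that mentions a product
# together with adverse-event terms. The product name and term list are invented.
import re

AE_TERMS = {"nausea", "dizziness", "rash", "headache", "fatigue"}  # assumed terms
PRODUCTS = {"drugx"}                                                # hypothetical product name

def flag_possible_ae(text: str) -> dict:
    """Return which product and adverse-event terms appear in the text."""
    tokens = set(re.findall(r"[a-z]+", text.lower()))
    return {
        "product_mentioned": bool(tokens & PRODUCTS),
        "ae_terms_found": sorted(tokens & AE_TERMS),
    }

print(flag_possible_ae("Started DrugX last week and have had constant nausea and a rash."))
```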

Pharmacovigilance (the detection, collection, assessment, monitoring, and prevention of adverse effects with pharmaceutical products) is increasingly reliant on AI. Can you discuss some of the efforts being applied by IQVIA towards this?

As mentioned, one of the primary roles of pharmacovigilance (PV) departments is collecting and analyzing information on AEs. Today, approximately 80 percent of healthcare data resides in unstructured formats, like emails and paper documents, and AEs need to be aggregated and correlated from disparate and expansive data sources, including social media, online communities and other digital formats. What is more, language is subjective, and definitions are fluid. Although two patients taking the same medication may describe similar AE reactions, each patient may experience, measure, and describe pain or discomfort levels on a dynamic scale based on various factors. PV and safety professionals working at life sciences organizations that still rely on manual data reporting and processing need to review these extensive, varied, and complex data sets via inefficient processes. This not only slows down clinical trials but also potentially delays the introduction of new drugs to the marketplace, preventing patients from getting access to potentially life-saving medications.

The life sciences industry is highly data-driven, and there is no better ally for data analysis and pattern detection than AI. These tools are especially useful in processing and extrapolating large, complex PV data sets to help automate manual workloads and make the best use of the human assets on safety teams. Indeed, the adoption of AI and NLP tools within the life sciences industry is making it possible to take these large, unstructured data sets and turn them into actionable insights at unprecedented speed. Here are a few of the ways AI can improve operational efficiency for PV teams, which IQVIA actively delivers to its customers today (a brief sketch of pattern detection in adverse reaction data follows the list):

  1. Speed literature searches for relevant information
  2. Scan social media across the globe to pinpoint AEs
  3. Listen and absorb audio calls (e.g. into a call center) for mentions of a company or drug
  4. Translate large amounts of information from one language into another
  5. Transform scanned documents on AEs into actionable information
  6. Read and interpret case narratives with minimal human guidance
  7. Determine whether any patterns in adverse reaction data are providing new, previously unrealized information that could improve patient safety
  8. Automate case follow-ups to verify information and capture any missing data
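
To make item 7 concrete, the sketch below computes the proportional reporting ratio (PRR), a standard disproportionality measure used in pharmacovigilance signal detection. The counts are invented, and real screening adds statistical thresholds and clinical review on top of the raw ratio.

```python
# Illustrative sketch: proportional reporting ratio (PRR) from a 2x2 table of
# spontaneous reports. Counts are invented; this is not IQVIA's implementation.

def prr(a: int, b: int, c: int, d: int) -> float:
    """a: reports of the event for the drug of interest
    b: reports of other events for that drug
    c: reports of the event for all other drugs
    d: reports of other events for all other drugs"""
    return (a / (a + b)) / (c / (c + d))

# A PRR well above 1 suggests the event is reported disproportionately often
# for this drug; PRR >= 2 is a common screening threshold.
print(round(prr(a=12, b=488, c=150, d=49850), 2))
```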

Is there anything else you would like to share about IQVIA?

IQVIA leverages its large data sets, advanced technology, and deep domain expertise to provide AI tools that are specifically built and trained for the life sciences industry. This unique combination of attributes has contributed to the successful implementation of IQVIA technology across a wide array of industry players, supporting integrated global compliance efforts for the industry as well as improved patient safety.

Thank you for the great interview. Readers who wish to learn more should visit IQVIA.


Healthcare

AI Algorithms Can Enhance the Creation of Bioscaffold Materials and Help Heal Wounds


Artificial intelligence and machine learning could help heal injuries by speeding the development of 3D printed bioscaffolds. Bioscaffolds are materials that allow tissues, such as skin and organs, to grow on them. Recent work by researchers at Rice University applied AI algorithms to the development of bioscaffold materials, with the goal of predicting the quality of printed materials. The researchers found that controlling the speed of printing is crucial to producing useful bioscaffold implants.

As reported by ScienceDaily, a team of researchers from Rice University collaborated to use machine learning to identify possible improvements to bioscaffold materials. Computer scientist Lydia Kavraki, from the Brown School of Engineering at Rice, led a research team that applied machine learning algorithms to predict scaffold material quality. The study was co-authored by Rice bioengineer Antonios Mikos, who works on bone-like bioscaffolds that serve as tissue replacements, intended to support the growth of blood vessels and cells and enable wounded tissue to heal more quickly. The bioscaffolds Mikos works on are intended to heal musculoskeletal and craniofacial wounds, and they are produced with 3D printing techniques that create scaffolds fitted to the perimeter of a given wound.

The process of 3D printing bioscaffold material requires a lot of trial and error to get the printed batch just right. Various parameters like material composition, structure, and spacing must be taken into account. The application of machine learning techniques can reduce much of this trial and error, giving the engineers useful guidelines that reduce the need to fiddle around with parameters. Kavraki and other researchers were able to give the bioengineering team feedback on which parameters were most important, those most likely to impact the quality of the printed material.

The research team started by analyzing data on printing scaffolds from a 2016 study on biodegradable polypropylene fumarate. Beyond this data, the researchers came up with a set of variables that would help them design a machine learning classifier. Once all the necessary data was collected, the researchers were able to design models, test them, and get the results published in just over half a year.

In terms of the machine learning models used by the research team, the team experimented with two different approaches, both based on random forest algorithms, which aggregate decision trees to achieve a more robust and accurate model. The first was a binary classification model that predicted whether a particular set of parameters would result in a low- or high-quality product. The second was a regression model that estimated which parameter values would give a high-quality result.
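
A minimal sketch of what such a two-model setup could look like is shown below. The feature names and data are invented for illustration and are not the study's dataset or code; the feature importances a random forest exposes are the kind of signal behind the parameter ranking described next.

```python
# Minimal sketch (synthetic data, assumed feature names): one random forest
# classifies a parameter set as high/low quality, another regresses a numeric
# quality score, and feature importances hint at which parameters matter most.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)
features = ["spacing", "layering", "pressure", "material_fraction", "print_speed"]
X = rng.uniform(size=(200, len(features)))                    # synthetic print-parameter sets
quality = 1 - abs(X[:, 4] - 0.5) - 0.3 * abs(X[:, 3] - 0.5)   # toy quality score
y_class = (quality > quality.mean()).astype(int)              # high (1) vs. low (0) quality

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y_class)
reg = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, quality)

print(reg.predict(X[:1]))  # estimated quality score for one parameter set

# Rank parameters by how much they drive the classifier's decisions.
for name, importance in sorted(zip(features, clf.feature_importances_),
                               key=lambda t: -t[1]):
    print(f"{name}: {importance:.2f}")
```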

According to the results of the research, the most important parameters for high-quality bioscaffolds were spacing, layering, pressure, material composition, and print speed. Print speed was the most important variable overall, followed by material composition. It's hoped that the results of the study will lead to better, faster printing of bioscaffolds, thereby enhancing the reliability of 3D printing body parts like cartilage, kneecaps, and jawbones.

According to Kavraki, the methods used by the research team have the potential to be used at other labs. As Kavraki was quoted by ScienceDaily:

“In the long run, labs should be able to understand which of their materials can give them different kinds of printed scaffolds, and in the very long run, even predict results for materials they have not tried. We don’t have enough data to do that right now, but at some point we think we should be able to generate such models.”
