
Interviews

Dr. Lingjia Tang, CTO and Co-Founder, Clinc – Interview Series


Dr. Lingjia Tang, CTO and Co-Founder of Clinc, is a professor of Computer Science at The University of Michigan. Dr. Tang’s research in building large-scale production infrastructure for intelligent applications is widely recognized and respected in the academic community. In addition to working at both Microsoft and Google, Lingjia received her PhD in Computer Science from the University of Virginia. Lingjia has recently received prestigious honors including induction into the ISCA Hall of Fame, a Facebook Research Award, and a Google Research Award.

What initially attracted you to AI? When did you first discover that you wanted to launch an AI business?

In the mid-2000s I was performing research around large-scale systems that support various applications and how we can design servers as a software system to run those applications more efficiently. At the time, we were shifting from working with traditional web applications to more machine learning-driven functions. That’s when I started to pay attention to the algorithms associated with AI and gained interest in fundamentally understanding how AI applications work. Soon after, the research team I was working with decided to pivot and basically build our own AI applications as benchmarks to study, which is what led us to publishing our first few research papers and developing our first product, Sirius—an open end-to-end voice and vision personal assistant.

As open-source software, Sirius allowed people to build conversational virtual assistants on their own. At the time, this capability was largely out of reach for the general public and was controlled by big companies such as Google and Apple. We saw that we were filling a critical gap when we released the software and it had tens of thousands of downloads in the first week! That was the turning point where we knew there was a lot of market demand for this type of software.

Come 2015, we launched Clinc with the mindset that we would provide everyone who wants to build a virtual assistant, whether a developer, a company, or an individual, with access to the expertise, tooling, and innovation to do so.

Clinc offers conversational AI solutions without relying on keywords or scripts. Could you go into some details regarding how this is achieved? What are some of the Natural Language Processing (NLP) challenges that had to be overcome?

What really sets Clinc apart from other conversational AI platforms on the market is its underlying AI algorithms that enable its “human in the room” experience, which understands messy and unscripted language. This allows for corrections to backtrack and “heal” mistakes made in human conversation and enables complex conversational flows—conversations that a real human would be able to understand. In contrast to a speech-to-text word matching algorithm, Clinc analyzes dozens of factors from the user’s input including wording, sentiment, intent, tone of voice, time of day, location and relationships, and uses those factors to deliver an answer that represents a composite of knowledge extracted from its trained brain. For example, if I ask my virtual assistant, “how much money did I spend on a burger?” it needs to understand that I am asking about money and spending, that I am asking specifically about a hamburger and that a hamburger is a type of food and should be matched to my recent spending at a restaurant.

Achieving this level of understanding is not easy. In general, we would break down conversational AI into two components: Natural Language Understanding (NLU) and dialog management. So, the challenge that we had to overcome was figuring out how to build a system that can extract key pieces of information accurately and can anticipate what the user is asking.
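The two subtasks described above can be illustrated with a deliberately tiny sketch. This is not Clinc's actual system; the intents, example phrases, and food lexicon below are invented for the example, and a real NLU model would be learned rather than keyword-scored.

```python
# Toy illustration of the two NLU subtasks: classifying the user's intent
# and extracting the key entities ("slots") the dialog manager needs.

INTENT_EXAMPLES = {
    "spending_history": ["how much money did i spend", "show my spending",
                         "what did i pay for"],
    "check_balance": ["what is my balance", "how much money do i have",
                      "show me the cash in my account"],
}

# Maps a mentioned item to the spending category it implies (e.g. a
# hamburger is food, so match it to restaurant spending).
FOOD_LEXICON = {"burger": "restaurant", "coffee": "cafe", "pizza": "restaurant"}

def classify_intent(utterance: str) -> str:
    """Score each intent by word overlap with its example phrases."""
    words = set(utterance.lower().split())
    def score(examples):
        return max(len(words & set(e.split())) for e in examples)
    return max(INTENT_EXAMPLES, key=lambda i: score(INTENT_EXAMPLES[i]))

def extract_slots(utterance: str) -> dict:
    """Pull out any known item word and the category it implies."""
    slots = {}
    for word in utterance.lower().split():
        if word in FOOD_LEXICON:
            slots["item"] = word
            slots["category"] = FOOD_LEXICON[word]
    return slots

query = "how much money did I spend on a burger"
print(classify_intent(query), extract_slots(query))
# → spending_history {'item': 'burger', 'category': 'restaurant'}
```

The burger question from the example resolves to a spending-history intent plus a restaurant category, which is exactly the information the dialog manager needs to answer.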

We are able to do this through sophisticated, contextual NLU that is trained to be intuitive, keeping up with the natural flow of conversation and understanding slang and context. This contrasts with competing solutions that take a top-down, rules-based approach to Natural Language Processing (NLP) and do not allow for conversational healing: if the customer makes an error, those solutions send them back to square one, wasting time and frustrating the user. We also use crowdsourcing to gather our language data, creating a richer, more diverse data set that can be used immediately to train AI models.

Could you discuss how deep learning is used with the Clinc AI system?

Clinc uses a hybrid approach to deep learning: we use traditional models to some degree and leverage deep learning where needed. Specifically, we use deep learning to understand words and language and to determine the dialogue flow. Generally, our entire dialogue system is a combination of deep learning and symbolic AI. We don’t use deep learning for language generation yet because our customers, who are primarily in the banking industry, face a lot of regulations that dictate what a virtual assistant can and cannot say to their customers. So, there is still a lot of uncertainty around whether deep learning would be able to follow those language restrictions.
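The regulated-language constraint Dr. Tang describes is one reason to keep generation symbolic: a learned model handles understanding, while every response the user sees comes from a fixed, pre-approved template. The sketch below is a minimal illustration of that split, not Clinc's implementation; the intents, templates, and keyword-based stand-in for the learned model are all hypothetical.

```python
# Hybrid pattern: learned understanding, symbolic (templated) generation.
# Because responses can only come from APPROVED_TEMPLATES, the assistant
# can never produce wording that compliance hasn't signed off on.

APPROVED_TEMPLATES = {
    "check_balance": "Your available balance is {balance}.",
    "make_transfer": "I can help with that. How much would you like to transfer?",
    "fallback": "I'm sorry, I can't help with that request.",
}

def understand(utterance: str) -> str:
    """Stand-in for a learned NLU model; returns a predicted intent."""
    text = utterance.lower()
    if "balance" in text:
        return "check_balance"
    if "transfer" in text:
        return "make_transfer"
    return "fallback"

def respond(utterance: str, account_state: dict) -> str:
    """Symbolic stage: deterministic template selection and filling."""
    intent = understand(utterance)
    return APPROVED_TEMPLATES[intent].format(**account_state)

print(respond("What's my balance?", {"balance": "$1,204.33"}))
# → Your available balance is $1,204.33.
```

Swapping in a better NLU model changes nothing on the generation side, which is what makes the hybrid attractive when the output language itself is regulated.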

As of right now, I don’t think the conversational AI community is completely ready to fully adopt deep learning, whereas the academic community is 100% all in, but I do look forward to seeing what the new models can do.

What’s the process for a company that wishes to customize the AI’s responses to target a specific audience? Could you give some examples of how Clinc is currently being used by clients?

We allow clients to either license a platform they can build on however they like, or take our fully built and trained chatbot, Finie, and customize it and integrate it into their apps or messaging services. Finie can handle matters related to balances, transactions, spending history, locating an ATM, making a transfer and more.

My favorite example of how a client has customized Clinc’s AI to target a specific audience is İşbank. As Turkey’s largest private bank, they turned to us to develop their digital banking assistant, Maxi, back in 2018. To infuse Maxi with a unique personality, İşbank held 14 focus groups to gauge what sort of traits and skills bank customers wanted in a virtual assistant. They also hired a voice actress to recite sentences in Turkish related to banking tasks. İşbank’s conversational banking team came up with these sentences by considering the way real people would phrase their needs. Upon our recommendation, the team paid participants on crowdsourcing marketplaces such as Amazon Mechanical Turk to supply different ways they might express the same questions, such as a request to view their balances (“what is my balance,” “how much money do I have in my account,” “show me the cash in my account”) or pay a bill (“pay my bill,” “bill payments”).

This example really shows how invested İşbank is in offering a digital banking assistant to help their customers better navigate their accounts. With Clinc, İşbank launched Maxi to more than 7.5 million people, in Turkish. Since its launch, İşbank has seen widespread adoption by more than 5.5 million users, with an average of 9.8 interactions per user. In recent months, as COVID-19 cases increased in Turkey, İşbank swiftly trained Maxi to be responsive to COVID-19-related queries. Since March 2020, Maxi has answered more than 1.2 million customer queries related to COVID-19, a more than 62% increase in usage.

What would you tell women who are interested in learning more about AI but are reluctant to get involved due to it being a male dominated field?

Off the bat, I don’t think there is any reason why AI should be considered a male-dominated field. I think there are a lot of women pioneers in AI who are doing really well and making an impact. I think AI coupled with social policy is a unique area that has the potential to affect people’s everyday lives. This is where I do think more diverse insights across the board would really benefit us, especially since there are a lot of conversations around AI bias involving race and gender. I believe that a narrowly scoped community of AI developers will continue to have a disproportionate impact on society and policy.

For the women out there who are interested in joining the AI field, I highly recommend it especially if you are interested in making an impact. AI has had so much growth and innovation over the years and it really is an exciting time to be a part of it.

Is there anything else that you would like to share about Clinc?

Clinc is making huge strides right now. Personally, I have just stepped into a new role as CTO of Clinc and I am really excited to focus on how we can further work with developers and data scientists to grow the reach of our technology. As I look toward the future, I see the demand for AI-powered applications shifting to enable people who don’t have years of data science experience and machine learning background to be able to use it too. For example, you don’t have to have a graphic design degree to be able to use Photoshop. I think AI is heading in that direction where developers with no AI or machine learning training will be able to achieve results and produce high quality applications. Overall, we want to reiterate that we are not only devoted to the end-user but also to the developers, no matter what level, who show interest in our solution.

Thank you for the great interview, I look forward to following your progress. Anyone who wishes to learn more should visit Clinc.


Antoine Tardif is a Futurist who is passionate about the future of AI and robotics. He is the CEO of BlockVentures.com, and has invested in over 50 AI & blockchain projects. He is the Co-Founder of Securities.io, a news website focusing on digital securities, and is a founding partner of unite.AI. He is also a member of the Forbes Technology Council.

Autonomous Vehicles

Andrew Stein, Software Engineer Waymo – Interview Series


Andrew Stein is a Software Engineer who leads the perception team for Waymo Via, Waymo’s autonomous delivery efforts. Waymo is an autonomous driving technology development company that is a subsidiary of Alphabet Inc, the parent company of Google.

What initially attracted you to AI and robotics?

I always liked making things that “did something” ever since I was very young. Arts and crafts could be fun, but my biggest passion was working on creations that were also functional in some way. My favorite parts of Mister Rogers’ Neighborhood were the footage of conveyor belts and actuators in automated factories, seeing bottles and other products filled or assembled, labeled, and transported. I was a huge fan of Legos and other building toys. Then, thanks to some success in Computer Aided Design (CAD) competitions through the Technology Student Association in middle and high school, I ended up landing an after-school job doing CAD for a tiny startup company, Clipper Manufacturing. There, I was designing factory layouts for an enormous robotic sorter and associated conveyor equipment for laundering and organizing uniforms on hangers for the retail garment industry. From there, it was off to Georgia Tech to study electrical engineering, where I participated in the IEEE Robotics Club and took some classes in Computer Vision. Those eventually led me to the Robotics Institute at Carnegie Mellon University for my PhD. Many of my fellow graduate students from CMU have been close colleagues ever since, both at Anki and now at Waymo.

You previously worked as a lead engineer at Anki, a robotics startup. What are some of the projects that you had the opportunity to work on at Anki?

I was the first full-time hire on the Cozmo project at Anki, where I had the privilege of starting the code repository from scratch and saw the product through to over one million cute, lifelike robots shipped into people’s homes. That work transitioned into our next product, Vector, which was another, more advanced and self-contained version of Cozmo. I got to work on many parts of those products, but was primarily responsible for computer vision for face detection, face recognition, 3D pose estimation, localization, and other aspects of perception. I also ported TensorFlow Lite to run on Vector’s embedded OS and helped deploy deep learning models to run onboard the robot for hand and person detection.

I also built Cozmo’s and Vector’s eye rendering systems, which gave me the chance to work particularly closely with much of Anki’s very talented and creative animation team, which was also a lot of fun.

In 2019, Waymo hired you and twelve other robotics experts from Anki to adapt its self-driving technology to other platforms, including commercial trucks. What was your initial reaction to the prospect of working at Waymo?

I knew many current and past engineers at Waymo and certainly was aware of the company’s reputation as a leader in the field of autonomous vehicles. I very much enjoyed the creativity of working on toys and educational products for kids at Anki, but I was also excited to join a larger company working in such an impactful space for society, to see how software development and safety are done at this organizational scale and level of technical complexity.

Can you discuss what a day working at Waymo is like for you?

Most of my role is currently focused on guiding and growing my team as we identify and solve trucking-specific challenges in close collaboration with other engineering teams at Waymo. That means my days are spent meeting with my team, other technical leads, and product and program managers as we plan for technical and organizational approaches to develop and deploy our self-driving system, called the Waymo Driver, and extend its capabilities to our growing fleet of trucks. Besides that, given that we are actively hiring, I also spend significant time interviewing candidates.

What are some of the unique computer vision and AI challenges that are faced with autonomous trucks compared to autonomous vehicles?

While we utilize the same core technology stack across all of our vehicles, there are some new considerations specific to trucking that we have to take into account. First and foremost, the domain is different: compared to passenger cars, trucks spend a lot more time on freeways, which are higher-speed environments. Due to a lot more mass, trucks are slower to accelerate and brake than cars, which means the Waymo Driver needs to perceive things from very far away. Furthermore, freeway construction uses different markers and signage and can even involve median crossovers to the “wrong” side of the road; there are freeway-specific laws like moving over for vehicles stopped on shoulders; and there can be many lanes of jammed traffic to navigate. Having a potentially larger blind spot caused by a trailer is another challenge we need to overcome.

Waymo recently began testing a driverless fleet of heavy-duty trucks in Texas with trained drivers on board. At this point in the game, what are some of the things that Waymo hopes to learn from these tests?

Our trucks test in the areas in which we operate (AZ / CA / TX / NM) to gain meaningful experience and data in all different types of situations we might encounter driving on the freeway. This process exercises our software and hardware, allowing us to learn how we can continue to improve and adapt our Waymo Driver for the trucking domain.

Looking at Texas specifically: Dallas and Houston are known to be part of the biggest freight hubs in the US. Operating in that environment, we can test our Waymo Driver on highly dense highways and shipper lanes, further understand how other truck and passenger car drivers behave on these routes, and continue to refine the way our Waymo Driver reacts and responds in these busy driving regions. Additionally, it also enables us to test in a place with unique weather conditions that can help us drive our capabilities in that area forward.

Can you discuss the Waymo Open Dataset which includes both sensor data and labeled data, and the benefits to Waymo for sharing this valuable dataset?

At Waymo, we’re tackling some of the hardest problems that exist in machine learning. To aid the research community in making advancements in machine perception and self-driving technology, we’ve released the Waymo Open Dataset, which is one of the largest and most diverse publicly available fully self-driving datasets. Available at no cost to researchers at waymo.com/open, the dataset consists of 1,950 segments of high-resolution sensor data and covers a wide variety of environments, from dense urban centers to suburban landscapes, as well as data collected during day and night, at dawn and dusk, in sunshine and rain. In March 2020, we also launched the Waymo Open Dataset Challenges to provide the research community a way to test their expertise and see what others are doing.

In your personal opinion, how long will it be until the industry achieves true level 5 autonomy?

We have been working on this for over ten years now and so we have the benefit of that experience to know that this technology will come to the world step by step. Self-driving technology is so complex and we’ve gotten to where we are today because of advances in so many fields from sensing in hardware to machine learning. That’s why we’ve been taking a gradual approach to introduce this technology to the world. We believe it’s the safest and most responsible way to go, and we’ve also heard from our riders and partners that they appreciate this thoughtful and measured approach we’re taking to safely deploy this technology in their communities.

Thank you for the great interview, readers who wish to learn more should visit Waymo Via.


Interviews

Michael Schrage, Author of Recommendation Engines (The MIT Press) – Interview Series


Michael Schrage is a Research Fellow at the MIT Sloan School of Management’s Initiative on the Digital Economy. A sought-after expert on innovation, metrics, and network effects, he is the author of Who Do You Want Your Customers to Become?, The Innovator’s Hypothesis: How Cheap Experiments Are Worth More than Good Ideas (MIT Press), and other books.

In this interview we discuss his book “Recommendation Engines” which explores the history, technology, business, and social impact of online recommendation engines.

What inspired you to write a book on such a narrow topic as “Recommendation Engines”?

The framing of your question gives the game away. When I looked seriously at the digital technologies and touchpoints that truly influenced people’s lives all over the world, I almost always found a ‘recommendation engine’ driving decisions. Spotify’s recommenders determine the music and songs people hear; TikTok’s recommendation engines define the ‘viral videos’ people put together and share; Netflix’s recommenders have been architected to facilitate ‘binge watching’ and ‘binge watchers;’ Google Maps and Waze recommend the best and/or fastest and/or simplest ways to get there; Tinder and Match.com recommend who you might like to be with or, you know, ‘be’ with; Stitch Fix recommends what you might want to wear that makes you ‘you;’ Amazon will recommend what you really should be buying; Academia and ResearchGate will recommend the most relevant research you should be up to date on. I could go on, and do in the book, but both technically and conceptually, ‘recommendation engines’ are the antithesis of ‘narrow.’ Their point and purpose cover the entire sweep of human desire and decision.
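The common thread behind the systems Schrage lists can be shown with a deliberately tiny item-to-item recommender: suggest things that co-occur with what a user already liked. This is a sketch of the general idea only; real engines at Spotify, Netflix, or Amazon are vastly more elaborate, and the listening histories below are invented.

```python
# Minimal co-occurrence recommender: score each unseen item by how often
# it appears in histories alongside the user's liked items.

from collections import Counter

histories = [
    {"jazz", "blues", "soul"},
    {"jazz", "soul", "funk"},
    {"rock", "metal"},
    {"jazz", "funk"},
]

def recommend(liked: set, histories: list, k: int = 2) -> list:
    scores = Counter()
    for history in histories:
        overlap = len(liked & history)
        if not overlap:
            continue  # this user shares nothing with us; skip
        for item in history - liked:
            scores[item] += overlap
    return [item for item, _ in scores.most_common(k)]

print(recommend({"jazz"}, histories))
```

For a user who likes jazz, the co-occurrence counts surface soul and funk, while metal (which only ever appears with rock) stays out of the list. That bias toward what similar users already chose is also exactly where the filter-bubble concerns discussed below come from.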

A quote in your book is as follows: “Recommenders aren’t just about what we might buy, they’re about who we might want to become”.  How could this be abused by enterprises or bad actors?

There’s no question or doubt that recommendation can be abused. The classic question – Cui bono? ‘Who benefits?’ – applies. Are the recommendations truly intended to benefit the recipient or the entity/enterprise making the recommendation? Just as it’s easy for a colleague, acquaintance or ‘friend’ who knows you to offer up advice that really isn’t in your best interest, it’s a digital snap for ‘data driven’ recommenders to suggest you buy something that increases ‘their’ profit at the expense of ‘your’ utility or satisfaction. On one level, I am very concerned about the potential – and reality – of abuse. On the other, I think most people catch on pretty quickly to when they’re being exploited or manipulated by people or technology. Fool me once, shame on you; fool me twice or thrice, shame on me. Recommendation is one of those special domains where it’s smart to be ethical and ethical to be smart.

Are echo chambers, where users are just fed what they want to see regardless of accuracy, a societal issue?

Eli Pariser coined the excellent phrase ‘the filter bubble’ to describe this phenomenon and pathology. I largely agree with his perspective. In truth, I think it now fair to say that ‘confirmation bias’ – not sex – is what really drives most adult human behavior. Most people are looking for agreement most of the time. Recommenders have to navigate a careful course between novelty, diversity, relevance and serendipity because – while too much confirmation is boring and redundant – too much novelty and challenge can annoy and offend. So, yes, the quest for confirmation is both a personal and social issue. That said, recommenders offer a relatively unobnoxious way to bring alternative perspectives and options to people’s attention. However, I do, indeed, wonder whether regulation and legal review will increasingly define the recommendation future.

Filter bubbles currently limit exposure to conflicting, contradictory, or challenging viewpoints. Should there be some type of regulation that discourages this type of over-filtering?

I prefer light-touch to heavy-handed regulatory oversight. Most platforms I see do a pretty poor job of labelling ‘fake news’ or establishing quality control. I’d like to see more innovative mechanisms explored: swipe left for a contrarian take; embed links that elaborate on stories or videos in ways that deepen understanding or decontextualize the ‘bias’ that’s being confirmed. But let’s be clear: choice architectures that ‘discourage’ or create ‘frictions’ require different data and design sensibilities than those that ‘forbid’ or ‘censor’ or ‘prevent.’ I think this is a very hard problem for people and machines alike. What makes it particularly hard is that human beings – in fact – are less predictable than a lot of psychologists and social scientists believe. There are a lot of competing ‘theories of the mind’ and ‘agency’ these days. The more personalized recommendations and recommenders become, the more challenging and anachronistic ‘one size fits all’ approaches become. It’s one of the many reasons this domain interests me so.

Should end users and society demand explainability as to why specific recommendations are made?

Yes, yes and yes. Not just ‘explainability’ but ‘visibility,’ ‘transparency’ and ‘interpretability,’ too. People should have the right to see and understand the technologies being used to influence them. They should be able to appreciate the algorithms used to nudge and persuade them. Think of this as the algorithmic counterpart to ‘informed consent’ in medicine. Patients have the right to get, and doctors have the duty to provide, the reasons and rationales for choosing ‘this’ course of action over ‘that’ one. Indeed, I argue that ‘informed consent’ – and its future – in medicine and health care offers a good template for the future of ‘informed consent’ for recommendation engines.

Do you believe it is possible to “hack” the human brain using Recommender Engines?

The brain or the mind? Not kidding. Are we materially – electrically and chemically – hacking neurons and lobes? Or are we using less invasive sensory stimuli to evoke predictable behaviors? Bluntly, I believe some brains – and some minds – are hackable some of the time. But do I believe people are destined to become ‘meat puppets’ who dance to recommendation’s tunes? I do not. Look, some people do become addicts. Some people do lose autonomy and self control. And, yes, some people do want to exploit others. But the preponderance of evidence doesn’t make me worry about the ‘weaponization of recommendation.’ I’m more worried about the abuse of trust.

A quote in a research paper by Jason L. Harman states the following: “The trust that humans place on recommendations is key to the success of recommender systems”. Do you believe that social media has betrayed that trust?

I believe in that quote. I believe that trust is, indeed, key. I believe that smart and ethical people truly understand and appreciate the importance of trust. With apologies to Churchill’s comment on courage, trust is the virtue that enables healthy human connection and growth. That said, I’m comfortable arguing that most social media platforms – yes, Twitter and Facebook, I’m looking at you! – aren’t built around or based on trust. They’re based on facilitating and scaling self-expression. The ability to express one’s self at scale has literally nothing to do with creating or building trust. There was nothing to betray. With recommendation, there is.

You state your belief that the future of Recommender Engines will feature the best recommendations to enhance one’s mind. In your opinion are any Recommendation Engines currently working on such a system?

Not yet. I see that as the next trillion dollar market. I think Amazon and Google and Alibaba and Tencent want to get there. But, who knows, there may be an entrepreneurial innovator who surprises us all: maybe a Spotify incorporating mindfulness and just-in-time whispered ‘advice’ may be the mind-enhancing breakthrough.

How would you summarize the way Recommendation Engines enable users to better understand themselves?

Recommendations are about good choices…. sometimes, even great choices. What are the choices you embrace? What are the choices you ignore? What are the choices you reject?  Having the courage to ask – and answer – those questions gives you remarkable insight into who you are and who you might want to become. We are the choices we make; whatever influences those choices has remarkable impact and influence on us.

Is there anything else that you would like to share about your book?

Yes – in the first and final analysis, my book is about the future of advice and the future of who you ‘really’ want to become. It’s about the future of the self – your ’self.’ I think that’s both an exciting and important subject, don’t you?

Thank you for taking the time to share your views.

To our readers, I highly recommend this book; it is currently available on Amazon in Kindle or paperback. You can also view more ordering options on the MIT Press page.


Healthcare

Updesh Dosanjh, Practice Leader, Technology Solutions, IQVIA – Interview Series


Updesh Dosanjh is Practice Leader of Technology Solutions at IQVIA, a world leader in using data, technology, advanced analytics and expertise to help customers drive healthcare – and human health – forward.

What is it that drew you initially to life sciences?

I’ve worked in multiple industries over the last 30 years, including the life sciences industry at the start of my career. When I chose to come back to the life sciences industry 15 years ago, it was to achieve three ambitions: work in an industry that contributed to the well-being of people; work in an area of industry that could be significantly helped by technology; and work in an industry that gave me the chance to work with nice people. Working with a pharmacovigilance team in life sciences has helped me to meet all three of these goals.

Can you discuss what human data science is and its importance to IQVIA?

The volume of human health data is growing rapidly – by more than 878 percent since 2016. Increasingly, advanced analytics are needed to bring the insights buried in that data to light. Data science and technology are progressing rapidly; however, there continue to be challenges with the collection and analysis of structured and unstructured data, especially when it comes from disparate and siloed data sources.

The emerging discipline of human data science integrates the study of human science with breakthroughs in data technology to tap into the potential value big data can provide in advancing the understanding of human health. In essence, the human data scientist serves as a translator between the world of the clinician and the world of the data specialist. This new paradigm is helping to tackle the challenges facing 21st-century health care.

IQVIA is uniquely positioned to collect, protect, classify and study the data that helps us answer questions about human health. As a leader in human data science, IQVIA has a deep level of life sciences expertise as well as sophisticated analytical capabilities to glean insights from a plethora of data points that can help life science customers bring new medications to market faster and drive toward better health outcomes. By understanding today’s challenges and being creative about how new innovations can accelerate new answers, IQVIA has leaned into the concept of human data science—transforming the way the life sciences industry finds patients, diagnoses illness, and treats conditions.

How can AI best assist drug researchers in narrowing down which specific drugs deserve more industry resources?

Bringing new medications to market is incredibly costly and time-consuming—on average, it takes about 10 years and costs $2.6 billion to do so. When drug developers explore a molecule’s potential to treat or prevent a disease, they analyze any available data relevant to that molecule, which requires significant time and resources. Furthermore, once a drug is introduced and brought to market, companies are responsible for pharmacovigilance in which they need to leverage technology to monitor adverse events (AEs)—any undesirable experiences associated with the use of a given medication—thus helping to ensure patient safety.

Artificial intelligence (AI) tools can help life sciences organizations automate manual data processing tasks to look for and track patterns within data. Rather than having to manually sift through hundreds or thousands of data points to uncover the most relevant insights pertaining to a particular treatment, AI can help life sciences teams effectively uncover the most important information and bring it to the forefront for further exploration and actionable insights. This ensures more time and resources from life science teams are reserved for strategic analysis and decision-making rather than for data reporting.

You recently wrote an article detailing how biopharmaceutical companies that use natural language processing will have a competitive edge. Why do you believe this is so important?

Life sciences companies are under more pressure than ever to innovate, as they strive to advance global health and stay competitive in a highly saturated marketplace. Natural language processing (NLP) is currently being leveraged by life science companies to help mine and “read” unstructured, text-based documents. However, there is still significant untapped potential for leveraging NLP in pharmacovigilance to further protect patient safety, as well as assure regulatory compliance. NLP has the potential to meet evolving compliance requirements, understand new data sources, and elevate new opportunities to drive innovation. It does so by combining and comparing AEs from decades of statistical legacy data and new incoming patient data, which can be processed in real time, giving an unprecedented amount of visibility and clarity around information being mined from critical data sources.

Pharmacovigilance (the detection, collection, assessment, monitoring, and prevention of adverse effects with pharmaceutical products) is increasingly reliant on AI. Can you discuss some of the efforts being applied by IQVIA towards this?

As mentioned, one of the primary roles of pharmacovigilance (PV) departments is collecting and analyzing information on AEs. Today, approximately 80 percent of healthcare data resides in unstructured formats, like emails and paper documents, and AEs need to be aggregated and correlated from disparate and expansive data sources, including social media, online communities and other digital formats. What is more, language is subjective, and definitions are fluid. Although two patients taking the same medication may describe similar AE reactions, each patient may experience, measure, and describe pain or discomfort levels on a dynamic scale based on various factors. PV and safety professionals working at life sciences organizations that still rely on manual data reporting and processing need to review these extensive, varied, and complex data sets via inefficient processes. This not only slows down clinical trials but also potentially delays the introduction of new drugs to the marketplace, preventing patients from getting access to potentially life-saving medications.

The life sciences industry is highly data-driven, and there is no better ally for data analysis and pattern detection than AI.  These tools are especially useful in processing and extrapolating large, complex PV data sets to help automate manual workloads and make the best use of the human assets on safety teams. Indeed, the adoption of AI and NLP tools within the life sciences industry is making it possible to take these large, unstructured data sets and turn them into actionable insights at unprecedented speed. Here are a few of the ways AI can improve operational efficiency for PV teams, which IQVIA actively delivers to its customers today:

  1. Speed literature searches for relevant information
  2. Scan social media across the globe to pinpoint AEs
  3. Listen and absorb audio calls (e.g. into a call center) for mentions of a company or drug
  4. Translate large amounts of information from one language into another
  5. Transform scanned documents on AEs into actionable information
  6. Read and interpret case narratives with minimal human guidance
  7. Determine whether any patterns in adverse reaction data are providing new, previously unrealized information that could improve patient safety
  8. Automate case follow-ups to verify information and capture any missing data
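As a toy illustration of items 2 and 5 above, the sketch below scans free text for possible adverse-event mentions so a human reviewer can triage them. Production pharmacovigilance NLP is far more sophisticated (handling negation, severity, dynamic patient descriptions of discomfort, and many languages); the term list and posts here are invented, and this is not IQVIA's implementation.

```python
# Flag texts that mention a known adverse-event (AE) term, case-insensitively,
# so a safety reviewer sees candidate reports instead of raw text streams.

import re

AE_TERMS = ["nausea", "dizziness", "headache", "rash", "fatigue"]
AE_PATTERN = re.compile(r"\b(" + "|".join(AE_TERMS) + r")\b", re.IGNORECASE)

def flag_adverse_events(posts: list) -> list:
    """Return (post, matched AE terms) pairs for posts mentioning an AE term."""
    flagged = []
    for post in posts:
        terms = sorted({m.lower() for m in AE_PATTERN.findall(post)})
        if terms:
            flagged.append((post, terms))
    return flagged

posts = [
    "Started the new medication last week, no issues so far.",
    "Day 3 on it and I have constant Nausea and a headache.",
]
print(flag_adverse_events(posts))
```

The value of even this crude filter is the triage ratio: only posts with candidate AE mentions reach a human, which is the manual-workload reduction the list above describes.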

Is there anything else you would like to share about IQVIA?

IQVIA leverages its large data sets, advanced technology and deep domain expertise to provide the critical differentiator in providing AI tools that are specifically built and trained for the life sciences industry. This unique combination of attributes is what has contributed to the successful implementation of IQVIA technology across a wide array of industry players. This supports integrated global compliance efforts for the industry as well as improving patient safety.

Thank you for the great interview, readers who wish to learn more should visit IQVIA.
