COVID-19

Owkin Launches the Collaborative COVID-19 Open AI Consortium (COAI)

After a fresh round of funding, Owkin recently launched the COVID-19 Open AI Consortium (COAI). The consortium will enable advanced collaborative research and accelerate the clinical development of effective treatments for patients infected with COVID-19.

The first stage of the project focuses on understanding and treating cardiovascular complications in COVID-19 patients. This work will be performed in collaboration with CAPACITY, an international registry working with over 50 centers around the world. Other areas of research will include patient outcomes and triage, and the prediction and characterization of immune response.

Owkin’s manifesto states the company’s vision:

“We are fully engaged in this new frontier with the goal of improving drug development and patient outcomes. Founded in 2016, Owkin has quickly emerged as a leader in bringing Artificial Intelligence (AI) and Machine Learning (ML) technologies to the healthcare industry. Our solutions improve the traditional medical research paradigm by turning a previously siloed, disjointed system into an innovative and collaborative one that, above all, puts the privacy of patients first.”

Federated Learning

To understand Owkin’s model, one must first understand federated learning. Federated learning is a framework for AI development that enables enterprises to train machine learning models on data distributed at scale across multiple medical institutions, without centralizing the data. The benefits are two-fold: privacy is preserved because the data is never directly linked to any specific patient, and the data remains at the healthcare institution that collected it.

Federated learning thereby gives researchers access to a significantly wider range of data than any single organization possesses in-house. The more data a machine learning system can train on, the more accurate the resulting AI becomes.
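To make the mechanics concrete, here is a minimal sketch of federated averaging (FedAvg), one common federated learning algorithm. The linear model, the gradient-descent details, and the hospital dataset sizes are all hypothetical stand-ins; Owkin’s actual tooling is far more sophisticated.

```python
# A minimal sketch of federated averaging (FedAvg): each institution
# trains locally, and only model weights are shared and averaged.
import numpy as np

def local_update(weights, X, y, lr=0.01, epochs=5):
    """Train a linear model on one institution's data; the raw data never leaves."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_round(global_weights, institutions):
    """One round: every institution trains locally, then weights are averaged."""
    local_weights = [local_update(global_weights, X, y) for X, y in institutions]
    sizes = [len(y) for _, y in institutions]
    # Weighted average, proportional to each institution's dataset size
    return np.average(local_weights, axis=0, weights=sizes)

# Hypothetical datasets from three hospitals (features X, outcomes y)
rng = np.random.default_rng(0)
institutions = [(rng.normal(size=(n, 4)), rng.normal(size=n)) for n in (120, 80, 200)]

weights = np.zeros(4)
for _ in range(10):
    weights = federated_round(weights, institutions)
print(weights)  # a global model trained without centralizing any data
```

Note how raw patient data never leaves `local_update`: only trained weights travel between the institutions and the coordinator.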

There are currently multiple national efforts to use AI to tackle COVID-19. The problem with many of these disjointed national efforts is that the data is specific to one country. Collecting data from a single region may fail to reveal important information about how environmental exposure, ethnic makeup, genetics, age, and gender play roles in this disease. This is why collaboration is so important, and why gathering data from multiple jurisdictions matters even more.

As described by Owkin, the consortium seeks to use federated learning for the following:

“We aim to help them understand why drug efficacy varies from patient to patient, enhance the drug development process and identify the best drug for the right patient at the right time, to improve treatment outcomes.”

Understanding and treating cardiovascular health issues will be the first challenge undertaken by Owkin. As important as data is, even more important are the efforts of the researchers and contributors spearheading this work. This is why Unite.AI will be releasing three interviews with researchers who are contributing to the COAI project.

The Interviews

Sanjay Budhdeo, MD, Business Development:

Sanjay is a practicing physician. He holds Medical Sciences and Medical degrees from Oxford University and a master’s degree from Cambridge University. Sanjay has research experience in neuroimaging, epidemiology, and digital health. Prior to joining Owkin as a Partnership Manager, he was a Senior Associate at Boston Consulting Group, where he focused on data and digital in healthcare. He sits on the Patient Safety Committee at the Royal Society of Medicine and was previously a Specialist Advisor at the Care Quality Commission.

Click Here to read the interview with Sanjay.

Dr. Stephen Weng, Principal Researcher:

Stephen is an Assistant Professor of Integrated Epidemiology and Data Science who leads the data science research within the Primary Care Stratified Medicine Research Group.

He integrates traditional epidemiological methods and study design with new informatics-based approaches, harnessing and interrogating “big health care data” from electronic medical records for risk prediction modeling, phenotyping chronic diseases, data science methods research, and the translation of stratified medicine into primary care.

Click Here to read the interview with Stephen.

Folkert W. Asselbergs, Principal Investigator:

Folkert is professor of precision medicine in cardiovascular disease at the Institute of Cardiovascular Science, UCL; Director of the NIHR BRC Clinical Research Informatics Unit at UCLH; professor of cardiovascular genetics and consultant cardiologist at the department of Cardiology, University Medical Center Utrecht; and chief scientific officer of the Durrer Center for Cardiovascular Research, Netherlands Heart Institute. Prof. Asselbergs has published more than 275 scientific papers and has obtained funding from the Leducq Foundation, the British and Dutch Heart Foundations, the EU (FP7, ERA-CVD, IMI, BBMRI), and the National Institutes of Health (R01).

Click Here to read the interview with Folkert.

Our Hope

The hope of Unite.AI is that using biomedical images, genomics, and clinical data to discover biomarkers and mechanisms associated with diseases and treatment outcomes will propel the next generation of treatments to tackle COVID-19. We are contributing by highlighting the personalities behind this important global effort.

Antoine Tardif is a futurist who is passionate about the future of AI and robotics. He is the CEO of BlockVentures.com and has invested in over 50 AI & blockchain projects. He is also the co-founder of Securities.io, a news website focusing on digital securities, and a founding partner of Unite.AI.

Big Data

Every Second of Every Day, 1.7MB of Data is Created for Every Person – Here’s How We Can Get Control of it

Today, AI is everywhere.

From our digital voice assistants that help us stay on top of tasks, to our reliance on Google Maps for directions, to the recommendation engines that help us decide what to watch on Netflix, AI has become an integral part of our lives. Though some may posit that the term has become an almost meaningless cliche, in truth, it’s more important than ever.

Novelty uses aside (like the $220 AI-powered toothbrush that’ll shine your pearly whites to perfection), AI is being put to use in incredible and impactful ways. It helps banks determine whether transactions are fraudulent or legitimate, it enables hospitals to improve patient care and, incredibly, it helps identify people with suicidal tendencies and get them the necessary help before they cause harm to themselves or others, among other important uses.

How AI Creates So. Much. Data

But as AI becomes more commonplace, the amount of personal data organizations hold grows at an exponential rate—and in fact, that’s exactly how AI trains itself. The more data it’s given, the more it learns and the better it performs. The result? Currently, more than 1.7MB of data is created on each person in every second of every day. This is a staggering number and AI-enabled technologies are a major contributing factor.
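For a sense of scale, here is a quick back-of-the-envelope calculation, taking the 1.7MB-per-second figure at face value:

```python
# Rough arithmetic behind the headline figure, assuming a constant
# 1.7 MB created per person per second
mb_per_second = 1.7
seconds_per_day = 60 * 60 * 24                      # 86,400 seconds
gb_per_day = mb_per_second * seconds_per_day / 1024
print(f"{gb_per_day:.0f} GB per person per day")    # ≈ 143 GB
```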

Interestingly though, in recent years, consumers have grown more aware and concerned regarding the ways in which their data is used, and sometimes abused. In part thanks to AI, but by way of many other tools as well, personal data has become the lifeblood of razor-targeted marketing efforts. Data helps organizations understand buying patterns, customer behavior, click-through rates, and more, to reach a slew of new insights.

Consider how, for example, our beloved recommendation engines work. After a long day at the office (or kitchen table-turned-makeshift workplace), you want to stretch out and unwind with a good show. You turn on your favorite streaming service, waiting to see your myriad viewing options.

Just how do those smart people over at your streaming service know what you’re interested in? The data science team collects thousands of data points per user, such as how long you spent watching any particular show, the time of day you typically watch, the devices you use, etc. The more you watch, the more personal data the AI collects on you, enabling it to make better, more accurate predictions about what you’ll be interested in. This cycle of collecting personal data, and thereby creating even more personal data, never ceases, resulting in the mind-boggling amount of data that’s “born” each moment.
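As a toy illustration of how those viewing signals can drive recommendations, here is a minimal sketch using item-based cosine similarity. The watch matrix is made up, and real streaming services use far richer models and many more signals:

```python
# A minimal item-based recommender: score unwatched shows by their
# similarity to shows the user has already watched
import numpy as np

# Rows are users, columns are shows; values are the fraction of each show watched
watch_matrix = np.array([
    [0.9, 0.0, 0.8, 0.0],
    [0.7, 0.2, 0.9, 0.0],
    [0.0, 0.8, 0.1, 0.9],
])

def cosine_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def recommend(user_idx, k=2):
    """Rank each unwatched show by similarity to the user's watched shows."""
    user = watch_matrix[user_idx]
    scores = {}
    for show in range(watch_matrix.shape[1]):
        if user[show] > 0:  # skip shows already watched
            continue
        scores[show] = sum(
            cosine_sim(watch_matrix[:, show], watch_matrix[:, s]) * user[s]
            for s in range(watch_matrix.shape[1]) if user[s] > 0
        )
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend(0))  # shows ranked by similarity to what user 0 already watches
```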

The Problem With Too Much Information

But now that your streaming service has collected and created all this data, it needs to be stored, managed, and kept safe. This is an expensive proposition. Moreover, the nature of data is to spread out. Sure, there’s nominally a single database, likely thought of as the central bucket for all data, but the reality is far more complicated and messy: data science teams constantly create copies in various formats as part of training and testing models, and employees unintentionally create more copies by sending PII over email, generating reports, and so on.

The result is a huge amount of personal data over which there is little supervision and even less control. More than that, most of it is of no use to the organization and could be deleted once it has served its purpose, but who even remembers or knows that it exists? This leaves organizations wide open to censure and penalties, as well as security risks. So how can all this data, which seems just about uncontainable, be reconciled with the need to adhere to privacy regulations such as GDPR and CCPA?

Turns out the problem is also the solution.

AI To Rein In All Your Data

Human attempts to impose order on the unending personal data problem obviously fall short. Very short. That’s because getting a handle on everything you’ve got requires knowing that you have it in the first place, which, as we’ve established, is well-nigh impossible. But AI, which is all about scale, speed, accuracy, and automation, is perfectly suited to keeping personal data in check.

To start, AI is a whole lot faster at sorting and organizing massive volumes of information than humans are (sorry, humans). It can read data far more precisely and quickly than we can. It can automatically categorize data into GDPR- or CCPA-sensitive categories, extract PII from both structured and unstructured data, merge duplicate PII records, and identify potentially sensitive documents in images – and it never gets tired of doing this.
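Here is a minimal sketch of what automated PII discovery can look like. It uses simple regular expressions; the patterns and categories are illustrative only, and production systems lean on NLP-based entity extraction rather than regexes alone:

```python
# A minimal sketch of regex-based PII discovery across free text
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def scan_document(text):
    """Categorize the PII found in a document, as a compliance scanner might."""
    return {category: matches
            for category, pattern in PII_PATTERNS.items()
            if (matches := pattern.findall(text))}

report = "Contact jane.doe@example.com or 555-867-5309; SSN 123-45-6789."
print(scan_document(report))
# {'email': ['jane.doe@example.com'], 'ssn': ['123-45-6789'], 'phone': ['555-867-5309']}
```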

AI can also identify data in places it shouldn’t be and can track and control all data movements, enabling it to monitor for risk. Speaking of risk, by automatically discovering unknown uses of sensitive data and eliminating all unneeded copies, AI enables you to drastically reduce your attack surface.

For example, let’s say you have an AI engine that can perform entity extraction, understand entity relationships and the meaning of data elements, and recognize categories of information such as health-related or criminal-record information. With such an engine, you can analyze endless copies across data types – data in motion, data at rest, structured, and unstructured – to gain real control over and management of that data. Lastly, with AI, organizations can perform large-scale multilingual data analysis to draw out unique business insights.

The Disease Is The Cure

In one of my all-time favorite movies, The Incredibles, Mr. Incredible realizes that the only thing powerful enough to destroy the robot is the robot itself. AI is an incredibly powerful tool, and as we continue to feed the great monster, it will only grow more powerful. Now is the time to ensure it’s being harnessed properly and put to good use, by using it to give organizations far greater control over their most precious asset.


Big Data

Power Your ML and AI Efforts with Data Transformation – Thought Leaders

The greater the variety, velocity, and volume of data we have, the more feasible it becomes to use predictive analytics and modeling to forecast growth and identify areas of opportunity and improvement. However, getting the greatest value from reporting, machine learning (ML), and artificial intelligence (AI) tools requires an organization to access data from many sources and ensure that data is high-quality and trusted. This is often the greatest barrier to transforming big data into business strategy.

Data professionals spend so much time gathering and validating data to prepare it for use that they have little time left to focus on their primary purpose: analyzing the data and deriving business value from it. Unsurprisingly, 76 percent of data scientists say data preparation is the least enjoyable part of their job. Moreover, current data preparation efforts like data wrangling and traditional ETL require manual effort from IT professionals and are not enough to handle the scale and complexity of big data.

Companies that want to leverage the power of AI need to break away from these tedious and largely manual processes, which increase the risk of “garbage in, garbage out” results. Instead, they need data transformation processes that extract raw data from multiple sources and formats, join and normalize it, and add value with business logic and metrics to make it ready for analytics. With complex data transformation, they can be sure that AI/ML models are based on clean, accurate data that delivers trustworthy results.

Leveraging the power of the cloud with ELT

The best place to prepare and transform data today is a cloud data warehouse (CDW) such as Amazon Redshift, Google BigQuery, Microsoft Azure Synapse, or Snowflake. Traditional approaches to data warehousing require data to be extracted and transformed before it can be loaded. A CDW, by contrast, leverages the scalability and performance of the cloud for faster data ingestion and transformation, making it possible to extract and load data from many disparate sources before transforming it inside the CDW.

Ideally, the ELT model initially moves data into a section of the CDW reserved for raw staging data. From there, the CDW can apply its near-unlimited computing resources to data integration and ETL jobs that cleanse, aggregate, filter, and join the staged data. The data can then be transformed into a different schema – a data vault or star schema, for example – optimizing it for reporting and analytics.
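As a minimal sketch of the load-then-transform pattern, here is the idea expressed in Python, with the built-in sqlite3 module standing in for a real CDW connector. The table names, columns, and business logic are all hypothetical:

```python
# A minimal sketch of ELT: load raw data first, then transform it
# inside the warehouse with SQL
import sqlite3  # stand-in for a cloud data warehouse connector

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# 1. Extract & Load: raw data lands in a staging table untransformed
cur.execute("CREATE TABLE raw_orders (id INTEGER, amount TEXT, country TEXT)")
cur.executemany("INSERT INTO raw_orders VALUES (?, ?, ?)",
                [(1, "19.99", "us"), (2, "5.00", "US"), (3, None, "ca")])

# 2. Transform inside the warehouse: cleanse, normalize, apply business logic
cur.execute("""
    CREATE TABLE fact_orders AS
    SELECT id,
           CAST(amount AS REAL) AS amount,   -- normalize types
           UPPER(country)       AS country   -- normalize casing
    FROM raw_orders
    WHERE amount IS NOT NULL                 -- filter out bad records
""")

print(cur.execute("SELECT * FROM fact_orders").fetchall())
# [(1, 19.99, 'US'), (2, 5.0, 'US')]
```

Because the raw staging table is preserved, the same data can later be transformed a different way for a new use case, which is the flexibility the next paragraph describes.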

The ELT approach also allows you to replicate raw data within the CDW for later preparation and transformation when and as needed. This lets you use business intelligence tools that determine schema on read and produce specific transformations on demand, effectively letting you transform the same data in multiple ways as you discover new uses for it.

Accelerating machine learning models

These real-world examples show how two companies in different industries are leveraging data transformation in a CDW to drive AI initiatives.

A boutique marketing and advertising agency built a proprietary customer management platform to help its clients better identify, understand, and motivate their customers. By transforming data within a CDW, the platform quickly and easily integrates real-time customer data across channels into a 360-degree customer view that informs the platform’s AI/ML models for making customer interactions more consistent, timely, and personalized.

A global logistics firm making 100 million deliveries to 37 million unique customers in 72 countries needs vast amounts of data to power its daily operations. Adopting data transformation within a CDW enabled the company to deploy 200 machine learning models in a single year. These models make 500,000 predictions every day, significantly improving efficiency and driving superior customer service that has reduced inbound call center calls by 40 percent.

Best practices for getting started

Companies that want to support their AI/ML initiatives with the power of data transformation in the cloud need to understand their specific use case and needs. Beginning with what you want to do with your data – reducing fuel costs by optimizing delivery routes, boosting sales by delivering next-best offers to customer service agents in real time, etc. – lets you reverse-engineer your processes so you can identify which data will deliver relevant results.

Once you determine what data your AI/ML project needs to build its models, you need a cloud-native ELT solution that will make your data fit for use. Look for a solution that:

  • Is vendor-neutral and able to work with your current technology stack

  • Is flexible enough to scale up and down and adapt as your technology stack changes

  • Can handle complex data transformations from multiple data sources

  • Offers a pay-as-you-go pricing model in which you pay only for what you use

  • Is purpose-built for your preferred CDW so you can fully leverage that CDW’s features to run jobs faster and transform data seamlessly

A cloud data transformation solution that caters to the common denominators of all CDWs may provide a consistent experience, but only one that enables the powerful differentiating features of your chosen CDW can deliver the high performance that speeds time to insight. The right solution will enable you to power your AI/ML projects with more clean, trusted data from more sources in less time – and generate faster, more reliable results that drive previously unrealized business value and innovation.


Big Data

Ingo Mierswa, Founder & President at RapidMiner, Inc – Interview Series

Ingo Mierswa is the Founder & President at RapidMiner, Inc. RapidMiner brings artificial intelligence to the enterprise through an open and extensible data science platform. Built for analytics teams, RapidMiner unifies the entire data science lifecycle from data prep to machine learning to predictive model deployment. More than 625,000 analytics professionals use RapidMiner products to drive revenue, reduce costs, and avoid risks.

What was your inspiration behind launching RapidMiner?

I had worked in the data science consultancy business for many years and saw the need for a platform that was more intuitive and approachable for people without a formal education in data science. Many of the existing solutions at the time relied on coding and scripting, and they simply were not user-friendly. Furthermore, the solutions developed within those platforms were difficult to manage and maintain. Basically, I realized that these projects didn’t need to be so difficult, so we started to create the RapidMiner platform to allow anyone to be a great data scientist.

Can you discuss the full transparency governance that is currently being utilized by RapidMiner?

When you can’t explain a model, it’s quite hard to tune, trust and translate. A lot of data science work is the communication of the results to others so that stakeholders can understand how to improve processes. This requires trust and deep understanding. Also, issues with trust and translation can make it very hard to overcome the corporate requirements to get a model into production. We are fighting this battle in a few different ways:

As a visual data science platform, RapidMiner inherently maps out an explanation for all data pipelines and models in a highly consumable format that can be understood by data scientists and non-data scientists alike. It makes models transparent and helps users understand model behavior, evaluate strengths and weaknesses, and detect potential biases.

In addition, all models created in the platform come with extensive visualizations for the user – typically the user creating the model – to gain model insights, understand model behavior and evaluate model biases.

RapidMiner also provides model explanations, even in production: for each prediction a model makes, RapidMiner generates and attaches the influence factors that led to or influenced that decision.

Finally – and this is very important to me personally as I was driving this with our engineering teams a couple of years ago – RapidMiner also provides an extremely powerful model simulator capability, which allows users to simulate and observe the model behavior based on input data provided by the user. Input data can be set and changed very easily, allowing the user to understand the predictive behavior of the models on various hypothetical or real-world cases. The simulator also displays factors that influence the model’s decision. The user – in this case even a business user or domain expert – can understand model behavior, validate the model’s decision against real outcomes or domain knowledge and identify issues. The simulator allows you to simulate the real world and have a look into the future – into your future, in fact.
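To illustrate the general idea of per-prediction influence factors, here is a minimal sketch for a linear model, where each feature’s contribution is simply its weight times its value. This is not RapidMiner’s actual algorithm, which is more general; the feature names and weights are hypothetical:

```python
# A minimal sketch of influence factors for a linear model: each
# feature's contribution to a prediction is its weight times its value
import numpy as np

feature_names = ["age", "smoker", "exercise_hours"]   # hypothetical features
weights = np.array([0.8, 1.2, -0.5])                  # hypothetical trained weights

def explain(x):
    """Return the model score and the features ranked by influence on it."""
    contributions = weights * x
    ranked = sorted(zip(feature_names, contributions),
                    key=lambda pair: abs(pair[1]), reverse=True)
    return contributions.sum(), ranked

score, factors = explain(np.array([0.6, 1.0, 0.2]))
print(score)    # the model's output for this case
print(factors)  # e.g. [('smoker', 1.2), ('age', 0.48), ('exercise_hours', -0.1)]
```

A simulator in this spirit would simply re-run `explain` as the user changes the input values and display how the score and ranking shift.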

How does RapidMiner use deep learning?

RapidMiner’s use of deep learning is something we are very proud of. Deep learning can be very difficult to apply, and non-data-scientists often struggle to set up those networks without expert support. RapidMiner makes this process as simple as possible for users of all types. Deep learning is, for example, part of our automated machine learning product, RapidMiner Go. Here the user does not need to know anything about deep learning to make use of these sophisticated models. In addition, power users can go deeper and use popular deep learning libraries like TensorFlow, Keras, or DeepLearning4J right from the visual workflows they are building with RapidMiner. This is like playing with building blocks, and it simplifies the experience for users with fewer data science skills. Through this approach our users can build flexible network architectures with different activation functions and user-defined numbers of layers and nodes, and choose from different training techniques.
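For a rough sense of the “building blocks” a power user might assemble through such a workflow, here is a minimal Keras sketch; the layer widths and activations are illustrative only, not a RapidMiner workflow itself:

```python
# A minimal Keras network of the "building blocks" variety: user-chosen
# layer counts, widths, and activation functions
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(20,)),                      # 20 input features (hypothetical)
    keras.layers.Dense(64, activation="relu"),     # user-defined width
    keras.layers.Dense(32, activation="tanh"),     # a different activation function
    keras.layers.Dense(1, activation="sigmoid"),   # binary prediction output
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```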

What other type of machine learning is used?

All of them! We offer hundreds of different learning algorithms as part of the RapidMiner platform – everything you can apply in the widely-used data science programming languages Python and R. Among others, RapidMiner offers methods for Naive Bayes, regression such as Generalized Linear Models, clustering such as k-Means, FP-Growth, Decision Trees, Random Forests, Parallelized Deep Learning, and Gradient Boosted Trees. These and many more are all a part of the modeling library of RapidMiner and can be used with a single click.

Can you discuss how the Auto Model knows the optimal values to be used?

RapidMiner Auto Model uses intelligent automation to accelerate everything users do and ensure that accurate, sound models are built. This includes instance selection and automatic outlier removal, feature engineering for complex data types such as dates or texts, and full multi-objective automated feature engineering to select the optimal features and construct new ones. Auto Model also includes other data cleaning methods to fix common issues such as missing values, data profiling that assesses the quality and value of data columns, data normalization, and various other transformations.

Auto Model also extracts data-quality metadata – for example, how much a column behaves like an ID, or whether there are lots of missing values. This metadata is used alongside the basic metadata to automate the choice of optimal values and to assist users in dealing with data quality issues.
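Here is a minimal sketch of this kind of data-quality metadata extraction, using pandas; the thresholds and column names are illustrative, not RapidMiner’s actual heuristics:

```python
# A minimal sketch of data-quality profiling: flag columns that behave
# like IDs or have too many missing values
import pandas as pd

def profile(df: pd.DataFrame) -> pd.DataFrame:
    unique_ratio = df.nunique() / len(df)
    missing_ratio = df.isna().mean()
    return pd.DataFrame({
        "missing_ratio": missing_ratio,
        "unique_ratio": unique_ratio,
        "looks_like_id": unique_ratio > 0.95,      # nearly all values distinct
        "too_many_missing": missing_ratio > 0.3,   # illustrative threshold
    })

df = pd.DataFrame({
    "patient_id": range(1000),
    "age": [25, None, 40, 33] * 250,
    "outcome": ["a", "b", None, None] * 250,
})
print(profile(df))  # patient_id is flagged as ID-like; outcome as too sparse
```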

For more detail, we’ve mapped it all out in our Auto Model Blueprint.

There are four basic phases where the automation is applied:

– Data prep: automatic analysis of data to identify common quality problems like correlations, missing values, and stability.
– Model selection and optimization: automated selection, with full validation and performance comparison, that suggests the best machine learning techniques for the given data and determines the optimal parameters.
– Model simulation: helps determine the specific (prescriptive) actions to take in order to achieve the desired outcome predicted by the model.
– Model deployment and operations: users are automatically shown factors like drift, bias, and business impact, with no extra work required.

Computer bias is an issue with any type of AI. Are there any controls in place to prevent bias from creeping into results?

Yes, this is indeed extremely important for ethical data science. The governance features mentioned before ensure that users can always see exactly what data has been used for model building, how it was transformed, and whether there is bias in the data selection. In addition, our drift detection features are another powerful tool for detecting bias. If a model in production demonstrates a lot of drift in the input data, this can be a sign that the world has changed dramatically, but it can also be an indicator that there was severe bias in the training data. In the future, we are considering going one step further and building machine learning models that can be used to detect bias in other models.

Can you discuss the RapidMiner AI Cloud and how it differentiates itself from competing products?

The requirements for a data science project can be large, complex and compute intensive, which is what has made the use of cloud technology such an attractive strategy for data scientists. Unfortunately, the various native cloud-based data science platforms tie you to cloud services and data storage offerings of that particular cloud vendor.

The RapidMiner AI Cloud is simply our cloud service delivery of the RapidMiner platform. The offering can be tailored to any customer’s environment, regardless of their cloud strategy. This is important these days as most businesses’ approach to cloud data management is evolving very quickly in the current climate. Flexibility is really what sets RapidMiner AI Cloud apart. It can run in any cloud service, private cloud stack or in a hybrid setup. We are cloud portable, cloud agnostic, multi-cloud – whatever you prefer to call it.

RapidMiner AI Cloud is also very low-hassle: we offer the ability to manage all or part of the deployment for clients so they can focus on running their business with AI, not the other way around. There’s even an on-demand option, which allows you to spin up an environment as needed for short projects.

RapidMiner Radoop eliminates some of the complexity behind data science. Can you tell us how Radoop benefits developers?

Radoop is mainly for non-developers who want to harness the potential of big data. RapidMiner Radoop executes RapidMiner workflows directly inside Hadoop in a code-free manner. We can also embed the RapidMiner execution engine in Spark so it’s easy to push complete workflows into Spark without the complexity that comes from code-centric approaches.

Would a government entity be able to use RapidMiner to analyze data to predict potential pandemics, similar to how BlueDot operates?

As a general data science and machine learning platform, RapidMiner is meant to streamline and enhance the model creation and management process, no matter what subject matter or domain is at the center of the data science/machine learning problem. While our focus is not on predicting pandemics, with the right data a subject matter expert (like a virologist or epidemiologist, in this case) could use the platform to create a model that could accurately predict pandemics. In fact, many researchers do use RapidMiner – and our platform is free for academic purposes.

Is there anything else that you would like to share about RapidMiner?

Give it a try! You may be surprised how easy data science can be and how much a good platform can improve your own and your team’s productivity.

Thank you for the great interview. Readers who wish to learn more should visit RapidMiner.
