
Erik Gfesser, Principal Architect for the Data Practice of SPR – Interview Series


Erik joined the data practice of SPR’s Emerging Technology Group as Principal Architect in 2018.

Erik specializes in data, open source development using Java, and practical enterprise architecture, including the building of PoCs, prototypes, and MVPs.

What initially attracted you to machine learning?

Its enablement of applications to continuously learn. I had started my development career as a senior data analyst using SPSS at what became a global market research firm, and later incorporated use of a business rules engine called Drools into applications that I built for clients, but the output for all of this work was essentially static.

I later worked through process improvement training, during which time instructors demonstrated in detail how they were able to improve, through statistics and other methods, business processes used by their clients, but here again the output was largely focused on points in time. My experience working to improve a healthcare product my colleagues and I built during this same time period is what showed me why continuous learning is necessary for such efforts, but the resources now available did not exist back then.

Interestingly, my attraction to machine learning has come full circle, as my graduate adviser cautioned me against a specialty in what was then called artificial intelligence, due to the AI winter at the time. I chose instead to use terms such as ML because these hold fewer connotations, and because even AWS acknowledges that its AI services layer is really just a higher-level abstraction built on top of its ML services layer. While some of the ML hype out there is unrealistic, it provides powerful capabilities from the perspective of developers, as long as these same practitioners acknowledge the fact that the value which ML provides is only as good as the data processed by it.

 

You’re a huge open source advocate, could you discuss why open source is so important?

One aspect about open source that I’ve needed to explain to executives over the years is that the primary benefit of open source is not that use of such software is made available without monetary cost, but that the source code is made freely available.

Additionally, developers making use of this source code can modify it for their own use, and if suggested changes are approved, make these changes available to other developers using it. In fact, the movement behind open source software started due to developers waiting at length for commercial firms to make changes to products they licensed, so developers took it upon themselves to write software with the same functionality, opening it up to be improved upon by other developers.

Commercialized open source takes advantage of these benefits, the reality being that many modern products make use of open source under the covers, even whilst commercial variants of such software typically provide additional components not available as part of a given open source release, providing differentiators as well as support if this is needed.

My first experiences with open source took place while building the healthcare product I mentioned earlier, making use of tooling such as Apache Ant, used to build software, and an early DevOps product at the time called Hudson (the code base of which later became Jenkins). The primary reason behind our decisions to use these open source products was that they either provided better solutions than commercial alternatives, or were innovative solutions not even offered by commercial entities. Additionally, the commercial licensing of some of the products we had been using was overly restrictive, leading to excessive red tape when it came time to acquire more licenses, due to the costs involved.

Over time, I’ve seen open source offerings continue to evolve, providing much needed innovation. For example, many of the issues with which my colleagues and I wrestled building this healthcare product were later solved by an innovative open source Java product we started using called Spring Framework, which is still going strong after more than a decade. Its ecosystem now stretches far beyond some of the innovations it initially provided, such as dependency injection, which are now seen as commonplace.
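The dependency-injection idea mentioned above is easy to show in miniature. Spring wires Java components together via configuration and annotations; the sketch below, in Python purely for brevity and with made-up class names, only illustrates the underlying concept of handing a component its dependencies from the outside.

```python
# Dependency injection in a nutshell: a component declares what it needs,
# and the wiring happens outside the component itself. (Illustrative
# sketch; all class names are invented.)

class SqlPatientRepository:
    """Hypothetical data-access component."""
    def find_all(self):
        return ["patient-1", "patient-2"]

class PatientService:
    def __init__(self, repository):
        # The repository is injected rather than constructed here,
        # so a test can pass in a fake implementation instead.
        self.repository = repository

    def count_patients(self):
        return len(self.repository.find_all())

# "Container" code wires the pieces together in one place.
service = PatientService(SqlPatientRepository())
print(service.count_patients())  # 2
```

Because `PatientService` never names a concrete repository, swapping the database implementation for an in-memory fake requires no change to the service itself.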

 

You’ve used open source for the building of PoCs, prototypes, and MVPs. Could you share your journey behind some of these products?

As explained in one of the guiding principles I presented to a recent client, build-outs for the data platform we built for them should continue to be iteratively carried out as needed over time. The components built out for this platform should not be expected to remain static, as needs change and new components and component features will be made available over time.

When building out platform functionality, always start with what is minimally viable before adding unneeded bells and whistles, which in some cases even includes configuration. Start with what is functional, make sure you understand it, and then evolve it. Don’t waste time and money building what has low likelihood of being used, but make an effort to get ahead of future needs.

The MVP we built for this product expressly needed to be built so that additional use cases could continue to be built on top of it, even though it came packaged with the implementation of a single use case: expense anomaly detection. Unlike this client’s product, an earlier product that I built had some history behind it prior to my arrival. In that case, stakeholders had been debating for three years (!) how they should approach a product they were looking to build. A client executive explained that one of the reasons he brought me in was to help the firm get past some of these internal debates, especially because the product that he was looking to build needed to satisfy the hierarchy of organizations involved.

I came to find that these turf wars were largely associated with the data owned by the client, its subsidiaries, and its external customers, so in this case the entire product backlog revolved around how this data would be ingested, stored, secured, and consumed for a single use case generating on-the-fly networks of healthcare providers for cost analyses.

Earlier in my career, I came to understand that an architectural quality called “usability” is not limited to just end users, but extends to software developers themselves: the code that is written needs to be usable just as user interfaces need to be usable by end users. For a product to become usable, proofs of concept need to be built to demonstrate that developers will be able to do what they set out to do, especially with respect to the specific technology choices they are making. But proofs of concept are just the beginning, as products are best when evolved over time. In my view, the foundation for an MVP should ideally be built on prototypes exhibiting some stability, so that developers will be able to continue to evolve it.
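As a toy illustration of the kind of single use case an MVP like the one mentioned above might ship with, expense anomaly detection can be sketched minimally by flagging expenses that sit far from the mean. The actual platform’s approach is not described in the interview; all numbers below are invented.

```python
# Minimal expense anomaly detection via z-scores (illustrative only).
from statistics import mean, stdev

def find_anomalies(expenses, threshold=2.0):
    """Return expenses more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(expenses), stdev(expenses)
    return [x for x in expenses if abs(x - mu) > threshold * sigma]

expenses = [120, 95, 130, 110, 105, 99, 4500, 115]  # one obvious outlier
print(find_anomalies(expenses))  # [4500]
```

A production version would of course score per category, per employee, and over time, but the shape of the use case is the same: a function over incoming data that surfaces the records worth a human look.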

 

While reviewing the book ‘Machine Learning at Enterprise Scale’ you stated that ‘use of open source products, frameworks, and languages alongside an agile architecture composed of a mix of open source and commercial components provides the nimbleness that many firms need but don’t immediately realize at the outset’. Could you go into some details as to why you believe that firms which use open source are more nimble?

Many commercial data products use key open source components under the covers, and enable developers to use popular programming languages such as Python. The firms which build these products know that the open source components they’ve chosen to incorporate give them a jump start when these are already widely used by the community.

Open source components with strong communities are easier to sell, due to the familiarity that these bring to the table. Commercially available products which consist mainly of closed source, or even open source that is largely only used by specific commercial products, often require either training by these vendors, or licenses in order to make use of the software.

Additionally, documentation for such components is largely not made publicly available, forcing the continued dependency of developers on these firms. When widely accepted open source components such as Apache Spark are the central focus, as with products such as Databricks Unified Analytics Platform, many of these items are already made available in the community, minimizing the portions on which development teams need to depend on commercial entities to do their work.

Moreover, because components such as Apache Spark are broadly accepted as de facto industry standard tooling, code can also be more easily migrated across commercial implementations of such products. Firms will always be inclined to incorporate what they view as competitive differentiators, but many developers don’t want to use products that are completely novel, because this makes it challenging to move between firms, and tends to cut their ties with the strong communities they have come to expect.

From personal experience, I’ve worked with such products in the past, and it can be challenging to get competent support. This is ironic, given that such firms sell their products with the customer expectation that support will be provided in a timely manner. I’ve had the experience of submitting a pull request to an open source project and seeing the fix incorporated into the build that same day; I cannot say the same about any commercial product with which I have worked.

 

Something else that you believe about open source is that it leads to ‘access to strong developer communities.’ How large are some of these communities and what makes them so effective?

Developer communities around a given open source product can reach into the hundreds of thousands. Adoption rates don’t necessarily prove community strength, but they are a good indicator of it, due to their tendency to produce virtuous cycles. I consider communities to be strong when they produce healthy discussion and effective documentation, and when active development is taking place.

When an architect or senior developer works through the process to choose which such products to incorporate into what they are building, many factors typically come into play, not only about the product itself and what the community looks like, but about the development teams who will be adopting these, whether these are a good fit for the ecosystem being developed, what the roadmap looks like, and in some cases whether commercial support can be found in the case this may be needed. However, many of these aspects fall by the wayside in the absence of strong developer communities.

 

You have reviewed hundreds of books on your website. Are there three that you could recommend to our readers?

These days I read very few programming books, and while there are exceptions, the reality is that these are typically outdated very quickly, and the developer community usually provides better alternatives via discussion forums and documentation. Many of the books I currently read are made freely available to me, whether via technology newsletters to which I subscribe, from authors and publicists who reach out to me, or from Amazon. For example, Amazon sent me a pre-publication uncorrected proof of “The Lean Startup” for my review in 2011, introducing me to the concept of the MVP, and just recently sent me a copy of “Julia for Beginners”.

(1) One book from O’Reilly that I’ve recommended is “In Search of Database Nirvana”. The author covers in detail the challenges for a data query engine to support workloads spanning the spectrum of OLTP on one end, to analytics on the other end, with operational and business intelligence workloads in the middle. This book can be used as a guide to assess a database engine or combination of query and storage engines, geared toward meeting one’s workload requirements, whether these be transactional, analytical, or a mix of these two. Additionally, the author’s coverage of the “swinging database pendulum” in recent years is especially well done.

(2) While much has changed in the data space over the last few years, since new data analytics products continue to be introduced, “Disruptive Analytics” presents an approachable, short history of the last 50 years of innovation in analytics that I haven’t seen elsewhere, and discusses two types of disruption: disruptive innovation within the analytics value chain, and industry disruption by innovations in analytics. From the perspective of startups and analytics practitioners, success is enabled by disrupting their industries, because using analytics to differentiate a product is a way to create a disruptive business model or to create new markets. From the perspective of investing in analytics technology for their organizations, taking a wait-and-see approach might make sense because technologies at risk of disruption are risky investments due to abbreviated useful lifespans.

(3) One of the best technology business texts I’ve read is “The Limits of Strategy”, by a co-founder of Research Board (acquired by Gartner), an international think tank that investigates developments in the computing world and how corporations should adapt. The author presents very detailed notes from many of his conversations with business leaders, providing insightful analysis throughout about his experiences building (with his wife) a group of clients, major firms that needed to mesh their strategies with the exploding world of computing. As I commented in my review, what sets this book apart from other related efforts are two seemingly opposed characteristics: industry-wide breadth, and intimacy that is only available through face-to-face interaction.

 

You are the Principal Architect for the data practice of SPR. Could you describe what SPR does?

SPR is a digital technology consultancy based in the Chicago area, delivering technology projects for a range of clients, from Fortune 1000 enterprises to local startups. We build end-to-end digital experiences using a range of technology capabilities, everything from custom software development, user experience, data, and cloud infrastructure, to DevOps coaching, software testing, and project management.

 

What are some of your responsibilities with SPR?

As principal architect, my key responsibility is to drive solution delivery for clients, leading architecture and development for projects. This often means wearing other hats, such as product owner, because being able to relate to how products are built from a hands-on perspective weighs heavily in how work should be prioritized, especially when building from scratch. I’m also pulled into discussions with potential clients when my expertise is needed, and the company recently asked me to start an ongoing series of sessions with fellow architects in the data practice to discuss client projects, side projects, and what my colleagues are doing to keep abreast of technology. This is similar to what I had run for a prior consultancy, although the internal meetups, so to speak, at that firm involved its entire technology practice rather than being specific to data work.

For the bulk of my career, I’ve specialized in open source development using Java, performing an increasing amount of data work along the way. In addition to these two specializations, I also do what my colleagues and I have come to call “practical” or “pragmatic” enterprise architecture, which means performing architecture tasks in the context of what is to be built, and actually building it, rather than just talking about it or drawing diagrams about it, realizing of course that these other tasks are also important.

In my view, these three specializations overlap with one another and are not mutually exclusive. I’ve explained to executives over the last few years that the line traditionally drawn by the technology industry between software development and data work is no longer well defined, partially because the tooling between these two spaces has converged, and partially because, as a result of this convergence, data work itself has largely become a software development effort. However, since traditional data practitioners typically don’t have software development backgrounds, and vice versa, I help bridge this gap.

 

What is an interesting project that you are currently working on with SPR?

Just recently, I published the first post in a multi-part case study series about the earlier mentioned data platform that my team and I implemented in AWS from scratch this past year for the CIO of a Chicago-based global consultancy. This platform consists of data pipelines, data lake, canonical data models, visualizations, and machine learning models, to be used by corporate departments, practices, and end customers of the client. While the core platform was to be built by the corporate IT organization run by the CIO, the goal was that this platform would be used by other organizations outside corporate IT as well to centralize data assets and data analysis across the company using a common architecture, building on top of it to meet the use case needs of each organization.

As with many established firms, use of Microsoft Excel was commonplace, with spreadsheets commonly distributed within and across organizations, as well as between the firm and external clients. Additionally, business units and consultancy practices had become siloed, each making use of disparate processes and tooling. So in addition to centralizing data assets and data analysis, another goal was to implement the concept of data ownership, and enable the sharing of data across organizations in a secure, consistent manner.

 

Is there anything else that you would like to share about open source, SPR or another project that you are working on?  

Another project (read about it here and here) that I recently led involved successfully implementing Databricks Unified Analytics Platform, and migrating the execution of machine learning models to it from Azure HDInsight, a Hadoop distribution, for the director of data engineering of a large insurer.

All of these migrated models were intended to predict the level of consumer adoption that can be expected for various insurance products, with some having been migrated from SAS a few years prior at which time the company moved to making use of HDInsight. The biggest challenge was poor data quality, but other challenges included lack of comprehensive versioning, tribal knowledge and incomplete documentation, and immature Databricks documentation and support with respect to R usage at the time (the Azure implementation of Databricks had just been made generally available a few months prior to this project).

To address these key challenges, as a follow-up to our implementation work I made recommendations around automation, configuration and versioning, separation of data concerns, documentation, and needed alignment across their data, platform, and modeling teams. Our work convinced an initially very skeptical Chief Data Scientist that Databricks is the way to go, their stated goal following our departure being to migrate their remaining models to Databricks as quickly as possible.

This has been a fascinating interview touching on many subjects; I feel like I have learned a lot about open source. Readers who may wish to learn more may visit the SPR corporate website or Erik Gfesser’s website.


Antoine Tardif is a futurist who is passionate about the future of AI and robotics. He is the CEO of BlockVentures.com, and has invested in over 50 AI & blockchain projects. He is also the Co-Founder of Securities.io a news website focusing on digital securities, and is a founding partner of unite.ai

Big Data

Power Your ML and AI Efforts with Data Transformation – Thought Leaders


The greater the variety, velocity, and volume of data we have, the more feasible it becomes to use predictive analytics and modeling to forecast growth and identify areas of opportunity and improvement. However, getting the greatest value from reporting, machine learning (ML), and artificial intelligence (AI) tools requires an organization to access data from many sources and ensure that data is high-quality and trusted. This is often the greatest barrier to transforming big data into business strategy.

Data professionals spend so much time gathering and validating data to prepare it for use that they have little time left to focus on their primary purpose: analyzing the data and deriving business value from it. Unsurprisingly, 76 percent of data scientists say data preparation is the least enjoyable part of their job. Moreover, current data preparation efforts like data wrangling and traditional ETL require manual effort from IT professionals and are not enough to handle the scale and complexity of big data.

Companies that want to leverage the power of AI need to break away from these tedious and largely manual processes that increase the risk of “garbage in, garbage out” results. Instead, they need data transformation processes that extract raw data from multiple sources and formats, join and normalize it, and add value with business logic and metrics to make it ready for analytics. With complex data transformation, they can be sure that AI/ML models are based on clean, accurate data that delivers trustworthy results.
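The extract-join-normalize-add-metrics flow just described can be sketched at toy scale. All sources, field names, and numbers below are invented, and real pipelines run inside a warehouse or dedicated engine rather than in plain Python.

```python
# Toy version of a data transformation step: join two raw sources,
# normalize field names, and add a derived business metric.

orders = [  # source 1: e.g. rows from an operational database
    {"order_id": 1, "cust": "A", "amount_usd": 250.0},
    {"order_id": 2, "cust": "B", "amount_usd": 100.0},
]
customers = [  # source 2: e.g. a CRM export with different naming
    {"CustomerKey": "A", "Region": "EMEA"},
    {"CustomerKey": "B", "Region": "AMER"},
]

# Normalize the second source's schema, then join on the customer key.
regions = {c["CustomerKey"]: c["Region"] for c in customers}
enriched = [{**o, "region": regions[o["cust"]]} for o in orders]

# Business logic: revenue per region, ready for reporting or ML features.
revenue_by_region = {}
for row in enriched:
    revenue_by_region[row["region"]] = (
        revenue_by_region.get(row["region"], 0.0) + row["amount_usd"]
    )

print(revenue_by_region)  # {'EMEA': 250.0, 'AMER': 100.0}
```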

Leveraging the power of the cloud with ELT

The best place to prepare and transform data today is a cloud data warehouse (CDW) such as Amazon Redshift, Google BigQuery, Microsoft Azure Synapse, or Snowflake. While traditional approaches to data warehousing require data to be extracted and transformed before it can be loaded, a CDW leverages the scalability and performance of the cloud for faster data ingestion and transformation and makes it possible to extract and load data from many disparate data sources before transforming it inside the CDW.

Ideally, the ELT model initially moves data into a section of the CDW reserved for raw staging data. From there, the CDW can use its near-unlimited computing resources for data integration jobs that cleanse, aggregate, filter, and join the staged data. The data can then be transformed into a different schema – a data vault or star schema, for example – optimizing the data for reporting and analytics.

The ELT approach also allows you to replicate raw data within the CDW for later preparation and transformation when and as needed. This lets you use business intelligence tools that determine schema on read and produce specific transformations on demand, effectively letting you transform the same data in multiple ways as you discover new uses for it.
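The staging-then-transform pattern can be shown in miniature with SQLite standing in for the CDW. All table and column names below are invented; a real warehouse such as Redshift, BigQuery, Synapse, or Snowflake would run the same kind of SQL at far larger scale.

```python
# ELT in miniature: land raw rows first, then transform inside the
# "warehouse" with SQL. The raw table stays available for other
# transformations later (schema on read).
import sqlite3

db = sqlite3.connect(":memory:")

# "Load": land raw rows untouched in a staging table.
db.execute("CREATE TABLE raw_events (user_id TEXT, amount REAL)")
db.executemany("INSERT INTO raw_events VALUES (?, ?)",
               [("u1", 10.0), ("u1", 5.0), ("u2", 7.5)])

# "Transform": reshape the staged data with SQL, e.g. into an
# aggregate table optimized for reporting.
db.execute("""
    CREATE TABLE user_totals AS
    SELECT user_id, SUM(amount) AS total
    FROM raw_events
    GROUP BY user_id
""")

rows = db.execute(
    "SELECT user_id, total FROM user_totals ORDER BY user_id"
).fetchall()
print(rows)  # [('u1', 15.0), ('u2', 7.5)]
```

Because `raw_events` is never modified, a second team could later build a completely different transformation over the same staged rows.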

Accelerating machine learning models

These real-world examples show how two companies in different industries are leveraging data transformation in a CDW to drive AI initiatives.

A boutique marketing and advertising agency built a proprietary customer management platform to help its clients better identify, understand, and motivate their customers. By transforming data within a CDW, the platform quickly and easily integrates real-time customer data across channels into a 360-degree customer view that informs the platform’s AI/ML models for making customer interactions more consistent, timely, and personalized.

A global logistics firm making 100 million deliveries to 37 million unique customers in 72 countries needs vast amounts of data to power its daily operations. Adopting data transformation within a CDW enabled the company to deploy 200 machine learning models in a single year. These models make 500,000 predictions every day, significantly improving efficiency and driving superior customer service that has reduced inbound call center calls by 40 percent.

Best practices for getting started

Companies that want to support their AI/ML initiatives with the power of data transformation in the cloud need to understand their specific use case and needs. Beginning with what you want to do with your data – reducing fuel costs by optimizing delivery routes, boosting sales by delivering next-best offers to customer service agents in real time, etc. – lets you reverse-engineer your processes so you can identify which data will deliver relevant results.

Once you determine what data your AI/ML project needs to build its models, you need a cloud-native ELT solution that will make your data fit for use. Look for a solution that:

  • Is vendor-neutral and able to work with your current technology stack

  • Is flexible enough to scale up and down and adapt as your technology stack changes

  • Can handle complex data transformations from multiple data sources

  • Offers a pay-as-you-go pricing model in which you pay only for what you use

  • Is purpose-built for your preferred CDW so you can fully leverage that CDW’s features to run jobs faster and transform data seamlessly.

A cloud data transformation solution that caters to the common denominators of all CDWs may provide a consistent experience, but only one that enables the powerful differentiating features of your chosen CDW can deliver the high performance that speeds time to insight. The right solution will enable you to power your AI/ML projects with more clean, trusted data from more sources in less time – and generate faster, more reliable results that drive previously unrealized business value and innovation.


Big Data

Ingo Mierswa, Founder & President at RapidMiner, Inc – Interview Series


Ingo Mierswa is the Founder & President at RapidMiner, Inc. RapidMiner brings artificial intelligence to the enterprise through an open and extensible data science platform. Built for analytics teams, RapidMiner unifies the entire data science lifecycle from data prep to machine learning to predictive model deployment. More than 625,000 analytics professionals use RapidMiner products to drive revenue, reduce costs, and avoid risks.

What was your inspiration behind launching RapidMiner?

I had worked in the data science consultancy business for many years, and I saw a need for a platform that was more intuitive and approachable for people without a formal education in data science. Many of the existing solutions at the time relied on coding and scripting, and they simply were not user-friendly. Furthermore, it was difficult to manage data and maintain the solutions that were developed within those platforms. Basically, I realized that these projects didn’t need to be so difficult, so we started to create the RapidMiner platform to allow anyone to be a great data scientist.

Can you discuss the full transparency governance that is currently being utilized by RapidMiner?

When you can’t explain a model, it’s quite hard to tune, trust and translate. A lot of data science work is the communication of the results to others so that stakeholders can understand how to improve processes. This requires trust and deep understanding. Also, issues with trust and translation can make it very hard to overcome the corporate requirements to get a model into production. We are fighting this battle in a few different ways:

As a visual data science platform, RapidMiner inherently maps out an explanation for all data pipelines and models in a highly consumable format that can be understood by data scientists and non-data scientists alike. It makes models transparent and helps users understand model behavior, evaluate a model’s strengths and weaknesses, and detect potential biases.

In addition, all models created in the platform come with extensive visualizations for the user – typically the user creating the model – to gain model insights, understand model behavior and evaluate model biases.

RapidMiner also provides model explanations – even when in production: For each prediction created by a model, RapidMiner generates and adds the influence factors that have led to or influenced the decisions made by that model in production.

Finally – and this is very important to me personally as I was driving this with our engineering teams a couple of years ago – RapidMiner also provides an extremely powerful model simulator capability, which allows users to simulate and observe the model behavior based on input data provided by the user. Input data can be set and changed very easily, allowing the user to understand the predictive behavior of the models on various hypothetical or real-world cases. The simulator also displays factors that influence the model’s decision. The user – in this case even a business user or domain expert – can understand model behavior, validate the model’s decision against real outcomes or domain knowledge and identify issues. The simulator allows you to simulate the real world and have a look into the future – into your future, in fact.
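The general idea behind such a simulator can be sketched for a toy linear model. This is not RapidMiner’s implementation; the model, weights, and feature names below are all invented.

```python
# Toy model simulator: score hypothetical inputs and report which factors
# pushed the prediction up or down.

WEIGHTS = {"age": 0.02, "income": 0.00001, "prior_claims": -0.3}

def predict(inputs):
    """Score of a toy linear model."""
    return sum(WEIGHTS[k] * v for k, v in inputs.items())

def influence_factors(inputs):
    """Per-feature contribution to the score, largest magnitude first."""
    contribs = {k: WEIGHTS[k] * v for k, v in inputs.items()}
    return sorted(contribs.items(), key=lambda kv: abs(kv[1]), reverse=True)

# The "simulator": change one input and observe how the prediction moves.
base = {"age": 40, "income": 50000, "prior_claims": 1}
what_if = {**base, "prior_claims": 4}

print(predict(base), predict(what_if))  # score drops as claims increase
print(influence_factors(what_if))       # prior_claims dominates
```

A business user playing with such a tool can validate the model’s decisions against domain knowledge without reading any code, which is the point being made above.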

How does RapidMiner use deep learning?

RapidMiner’s use of deep learning is something we are very proud of. Deep learning can be very difficult to apply, and non-data-scientists often struggle with setting up those networks without expert support. RapidMiner makes this process as simple as possible for users of all types. Deep learning is, for example, part of our automated machine learning (ML) product called RapidMiner Go. Here the user does not need to know anything about deep learning to make use of those types of sophisticated models. In addition, power users can go deeper and use popular deep learning libraries like TensorFlow, Keras, or DeepLearning4J right from the visual workflows they are building with RapidMiner. This is like playing with building blocks, and it simplifies the experience for users with fewer data science skills. Through this approach our users can build flexible network architectures with different activation functions, user-defined numbers of layers and nodes, multiple layers with different numbers of nodes, and a choice of different training techniques.

What other type of machine learning is used?

All of them! We offer hundreds of different learning algorithms as part of the RapidMiner platform – everything you can apply in the widely-used data science programming languages Python and R. Among others, RapidMiner offers methods for Naive Bayes, regression such as Generalized Linear Models, clustering such as k-Means, FP-Growth, Decision Trees, Random Forests, Parallelized Deep Learning, and Gradient Boosted Trees. These and many more are all a part of the modeling library of RapidMiner and can be used with a single click.

Can you discuss how the Auto Model knows the optimal values to be used?

RapidMiner Auto Model uses intelligent automation to accelerate everything users do and ensure accurate, sound models are built. This includes instance selection and automatic outlier removal, feature engineering for complex data types such as dates or texts, and full multi-objective automated feature engineering to select the optimal features and construct new ones. Auto Model also includes other data cleaning methods to fix common issues in data, such as missing values; data profiling to assess the quality and value of data columns; data normalization; and various other transformations.

Auto Model also extracts data quality meta data – for example, how much a column behaves like an ID or whether there are lots of missing values. This meta data is used in addition to the basic meta data in automating and assisting users in ‘using the optimal values’ and dealing with data quality issues.
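The kinds of cleanup steps described above can be sketched for a single numeric column: impute missing values, drop gross outliers, then normalize. This is illustrative only and not Auto Model’s actual logic; the cutoff and data are invented.

```python
# Sketch of automated data prep for one numeric column.
from statistics import mean, stdev

def auto_prepare(values, outlier_z=1.5):
    # 1. Impute missing values (None) with the column mean.
    present = [v for v in values if v is not None]
    mu = mean(present)
    filled = [v if v is not None else mu for v in values]

    # 2. Remove gross outliers via a z-score cutoff.
    sigma = stdev(filled)
    kept = [v for v in filled if abs(v - mu) <= outlier_z * sigma]

    # 3. Normalize to zero mean and unit variance for modeling.
    mu2, sigma2 = mean(kept), stdev(kept)
    return [(v - mu2) / sigma2 for v in kept]

clean = auto_prepare([10.0, 12.0, None, 11.0, 13.0, 500.0])
print(len(clean))  # 5 values survive: the outlier 500.0 is dropped
```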

For more detail, we’ve mapped it all out in our Auto Model Blueprint.

There are four basic phases where the automation is applied:

– Data prep: automatic analysis of the data to identify common quality problems like correlations, missing values, and stability.
– Automated model selection and optimization: full validation and performance comparison that suggests the best machine learning techniques for the given data and determines the optimal parameters.
– Model simulation: helps determine the specific (prescriptive) actions to take in order to achieve the desired outcome predicted by the model.
– Model deployment and operations: users are shown factors like drift, bias, and business impact automatically, with no extra work required.


Computer bias is an issue with any type of AI. Are there any controls in place to prevent bias from creeping into results?

Yes, this is indeed extremely important for ethical data science. The governance features mentioned before ensure that users can always see exactly what data was used for model building, how it was transformed, and whether there is bias in the data selection. In addition, our drift detection features are another powerful tool for detecting bias. If a model in production shows a lot of drift in the input data, this can be a sign that the world has changed dramatically. However, it can also be an indicator that there was severe bias in the training data. In the future, we are considering going one step further and building machine learning models that can be used to detect bias in other models.
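One common way to quantify input drift of this kind is the population stability index (PSI). The sketch below is a generic pure-Python illustration, not RapidMiner's implementation, and the thresholds in the docstring are a widely used rule of thumb rather than a RapidMiner setting.

```python
import math

def population_stability_index(expected, actual, bins=10):
    """PSI between a training-time sample and production data.
    Common rule of thumb (an assumption, not a RapidMiner threshold):
    < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 severe drift."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0  # avoid zero-width bins

    def histogram(values):
        counts = [0] * bins
        for v in values:
            i = min(max(int((v - lo) / width), 0), bins - 1)
            counts[i] += 1
        # Smooth empty bins so the logarithm below is always defined
        return [max(c, 1) / max(len(values), 1) for c in counts]

    e, a = histogram(expected), histogram(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train = [x / 100 for x in range(100)]            # uniform on [0, 1)
production_same = [x / 100 for x in range(100)]  # same distribution
production_shifted = [0.8 + x / 500 for x in range(100)]  # mass near 0.9

print(population_stability_index(train, production_same))     # ~0: stable
print(population_stability_index(train, production_shifted))  # >> 0.25: drift
```

A high PSI on a supposedly stable feature is exactly the kind of signal that can indicate either a changed world or biased training data, as described above.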

Can you discuss the RapidMiner AI Cloud and how it differentiates itself from competing products?

The requirements of a data science project can be large, complex, and compute intensive, which is what has made cloud technology such an attractive strategy for data scientists. Unfortunately, the various native cloud-based data science platforms tie you to the cloud services and data storage offerings of that particular vendor.

The RapidMiner AI Cloud is simply our cloud service delivery of the RapidMiner platform. The offering can be tailored to any customer’s environment, regardless of their cloud strategy. This is important these days as most businesses’ approach to cloud data management is evolving very quickly in the current climate. Flexibility is really what sets RapidMiner AI Cloud apart. It can run in any cloud service, private cloud stack or in a hybrid setup. We are cloud portable, cloud agnostic, multi-cloud – whatever you prefer to call it.

RapidMiner AI Cloud is also very low hassle: we offer the ability to manage all or part of the deployment for clients so they can focus on running their business with AI, not the other way around. There’s even an on-demand option, which allows you to spin up an environment as needed for short projects.

RapidMiner Radoop eliminates some of the complexity behind data science. Can you tell us how Radoop benefits developers?

Radoop is mainly for non-developers who want to harness the potential of big data. RapidMiner Radoop executes RapidMiner workflows directly inside Hadoop in a code-free manner. We can also embed the RapidMiner execution engine in Spark so it’s easy to push complete workflows into Spark without the complexity that comes from code-centric approaches.

Would a government entity be able to use RapidMiner to analyze data to predict potential pandemics, similar to how BlueDot operates?

As a general data science and machine learning platform, RapidMiner is meant to streamline and enhance the model creation and management process, no matter what subject matter or domain is at the center of the data science/machine learning problem. While our focus is not on predicting pandemics, with the right data a subject matter expert (like a virologist or epidemiologist, in this case) could use the platform to create a model that could accurately predict pandemics. In fact, many researchers do use RapidMiner – and our platform is free for academic purposes.

Is there anything else that you would like to share about RapidMiner?

Give it a try! You may be surprised how easy data science can be and how much a good platform can improve your and your team’s productivity.

Thank you for this great interview. Readers who wish to learn more should visit RapidMiner.


Big Data

Owkin Launches the Collaborative COVID-19 Open AI Consortium (COAI)


After a fresh round of funding, Owkin recently launched the Covid-19 Open AI Consortium (COAI). This consortium will enable advanced collaborative research and accelerate clinical development of effective treatments for patients who are infected with COVID-19.

The first stage of the project focuses on fully understanding and treating cardiovascular complications in COVID-19 patients; this work will be performed in collaboration with CAPACITY, an international registry working with over 50 centers around the world. Other areas of research will include patient outcomes and triage, and the prediction and characterization of immune response.

Owkin’s manifesto perfectly states the company’s vision:

“We are fully engaged in this new frontier with the goal of improving drug development and patient outcomes. Founded in 2016, Owkin has quickly emerged as a leader in bringing Artificial Intelligence (AI) and Machine Learning (ML) technologies to the healthcare industry. Our solutions improve the traditional medical research paradigm by turning a previously siloed, disjointed system into an innovative and collaborative one that, above all, puts the privacy of patients first.”

Federated Learning

To understand the model Owkin is pursuing, one must first understand a technology called federated learning. Federated learning offers a framework for AI development that enables enterprises to train machine learning models on data distributed at scale across multiple medical institutions, without centralizing the data. The benefits are two-fold: there is no loss of privacy, since the data is not directly linked to any specific patient, and the data remains at the healthcare institution that collected it.

The use of federated learning thereby enables access to a significantly wider range of data than any single organization possesses in-house. By using federated learning, researchers have access to as much data as is available, and the more data a machine learning system trains on, the more accurate it tends to become.
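The core federated averaging idea can be sketched in a few lines. This is a toy illustration with a one-parameter linear model, not Owkin's actual code; each "institution" computes a local update on data that never leaves it, and only the model weights are pooled.

```python
def local_update(w, local_data, lr=0.05):
    """One gradient step of a one-parameter model (y = w * x) on data
    that never leaves the institution."""
    grad = sum(2 * (w * x - y) * x for x, y in local_data) / len(local_data)
    return w - lr * grad

def federated_average(global_w, institutions, rounds=50):
    """Federated averaging: each site trains locally on its own data;
    only the resulting weights are pooled and averaged."""
    w = global_w
    for _ in range(rounds):
        local_ws = [local_update(w, data) for data in institutions]
        w = sum(local_ws) / len(local_ws)  # share weights, never raw data
    return w

# Two hospitals, each holding private samples of the same relation y = 3x
hospital_a = [(1.0, 3.0), (2.0, 6.0)]
hospital_b = [(3.0, 9.0), (4.0, 12.0)]
print(round(federated_average(0.0, [hospital_a, hospital_b]), 3))
```

The averaged model converges toward the shared relationship even though neither hospital ever transmits a patient record, which is the privacy property the consortium relies on.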

There are currently multiple national efforts to use AI to tackle COVID-19. The problem with many of these disjointed national efforts is that the data is specific to one country. Collecting data from a single region may fail to reveal important information that would enable researchers to fully understand how environmental exposure, ethnic makeup, genetics, age, and gender play roles in this disease. This is why collaboration is so important, and why gathering data from multiple jurisdictions is even more so.

As described by Owkin, they seek to use federated learning for the following:

“We aim to help them understand why drug efficacy varies from patient to patient, enhance the drug development process and identify the best drug for the right patient at the right time, to improve treatment outcomes.”

Understanding and treating cardiovascular health issues will be the first challenge undertaken by Owkin. As important as the data is, even more important are the efforts of the researchers and contributors spearheading this work. This is why Unite.AI will be releasing three interviews with researchers contributing to the COAI project.

The Interviews

Sanjay Budhdeo, MD, Business Development:

Sanjay is a practicing physician. He holds Medical Sciences and Medical degrees from Oxford University and a Masters Degree from Cambridge University. Sanjay has research experience in neuroimaging, epidemiology and digital health. Prior to joining Owkin as a Partnership Manager, he was a Senior Associate at Boston Consulting Group, where he focused on data and digital in healthcare. He sits on the Patient Safety Committee at the Royal Society of Medicine and was previously a Specialist Advisor at the Care Quality Commission.

Click Here to read the interview with Sanjay.

Dr. Stephen Weng, Principal Researcher:

Stephen is an Assistant Professor of Integrated Epidemiology and Data Science who leads the data science research within the Primary Care Stratified Medicine Research Group.

He integrates traditional epidemiological methods and study design with new informatics-based approaches, harnessing and interrogating “big health care data” from electronic medical records for the purposes of risk prediction modeling, phenotyping chronic diseases, data science methods research, and the translation of stratified medicine into primary care.

Click Here to read the interview with Stephen.

Folkert W. Asselbergs, Principal Investigator

Folkert is professor of precision medicine in cardiovascular disease at the Institute of Cardiovascular Science, UCL; Director of the NIHR BRC Clinical Research Informatics Unit at UCLH; professor of cardiovascular genetics and consultant cardiologist at the department of Cardiology, University Medical Center Utrecht; and chief scientific officer of the Durrer Center for Cardiovascular Research, Netherlands Heart Institute. Prof. Asselbergs has published more than 275 scientific papers and has obtained funding from the Leducq Foundation, the British and Dutch Heart Foundations, the EU (FP7, ERA-CVD, IMI, BBMRI), and the National Institutes of Health (R01).

Click Here to read the interview with Folkert.

Our Hope

Unite.AI’s hope is that using biomedical images, genomics, and clinical data to discover biomarkers and mechanisms associated with diseases and treatment outcomes will propel the next generation of treatments to tackle COVID-19. We are contributing to this important project by highlighting the personalities behind this global effort.
