
Interviews

Allan Hanbury, Co-Founder of contextflow – Interview Series


Allan Hanbury is Professor for Data Intelligence at the TU Wien, Austria, and Faculty Member of the Complexity Science Hub, where he leads research and innovation to make sense of unstructured data. He is initiator of the Austrian ICT Lighthouse Project, Data Market Austria, which is creating a Data-Services Ecosystem in Austria. He was scientific coordinator of the EU-funded Khresmoi Integrated Project on medical and health information search and analysis, and is co-founder of contextflow, the spin-off company commercialising the radiology image search technology developed in the Khresmoi project. He also coordinated the EU-funded VISCERAL project on evaluation of algorithms on big data, and the EU-funded KConnect project on technology for analysing medical text.

contextflow is a spin-off from the Medical University of Vienna and the European research project Khresmoi. Could you tell us about the Khresmoi project?

Sure! The goal of Khresmoi was to develop a multilingual, multimodal search and access system for biomedical information and documents, which required us to effectively automate the information extraction process, develop adaptive user interfaces and link both unstructured and semi-structured text information to images. Essentially, we wanted to make the information retrieval process for medical professionals reliable, fast, accurate and understandable.

 

What’s the current dataset which is powering the contextflow deep learning algorithm?

Our dataset contains approximately 8000 lung CTs. As our AI is rather flexible, we’re moving towards brain MRIs next.

 

Have you seen improvements with how the AI performs as the dataset has become larger?

We’re frequently asked this question, and the answer is likely not satisfying to most readers. To a certain extent, yes, the quality improves with more scans, but after a particular threshold, you don’t gain much more simply from having more. How much is enough really depends on various factors (organ, modality, disease pattern, etc), and it’s impossible to give an exact number. What’s most important is the quality of the data.

 

Is contextflow designed for all cases, or to simply be used for determining differential diagnosis for difficult cases?

Radiologists are really good at what they do. For the majority of cases, the findings are obvious, and external tools are unnecessary. contextflow has differentiated itself in the market by focusing on general search rather than automated diagnosis. There are a few use cases for our tools, but the main one is helping with difficult cases where the findings aren’t immediately apparent. Here radiologists must consult various resources, and that process takes time. contextflow SEARCH, our 3D image-based search engine, aims to reduce the time it takes for radiologists to search for information during image interpretation by allowing them to search via the image itself. Because we also provide reference information helpful for differential diagnosis, training new radiologists is another promising use case.

 

Can you walk us through the process of how a radiologist would use the contextflow platform?

contextflow SEARCH Lung CT is completely integrated into the radiologist’s workflow (or else they would not use it). The radiologist performs their work as usual, and when they require additional information for a particular patient, they simply select a region of interest in that scan and click the contextflow icon in their workstation to open our system in a new browser window. From there, they receive reference cases from our database of patients with similar disease patterns to the patient they are currently evaluating, plus statistics and medical literature (e.g. Radiopaedia). They can scroll through their patient in our system normally, selecting additional regions to search for more information and compare side-by-side with the reference cases. There are also heatmaps providing a visualization of the overall distribution of disease patterns, which helps with reporting findings as well. We really tried to put everything a radiologist needs to write a report in one place, available within seconds.
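To make the retrieval step more concrete, here is a minimal, hypothetical sketch of how searching a case database by a selected image region could work in principle: an embedding is computed for the query region and compared against precomputed embeddings of reference cases with a nearest-neighbour search. This is a generic illustration using scikit-learn, not contextflow’s actual implementation; the embeddings, case IDs, and the placeholder feature extractor are assumptions.

```python
# Generic content-based retrieval sketch (not contextflow's implementation).
# Assumes each scan region has already been reduced to a fixed-length feature vector.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Hypothetical precomputed embeddings for reference cases in the database.
reference_embeddings = rng.normal(size=(1000, 128))   # 1000 cases, 128-D features
reference_case_ids = [f"case_{i:04d}" for i in range(1000)]

# Index the reference cases once, offline.
index = NearestNeighbors(n_neighbors=5, metric="cosine").fit(reference_embeddings)

def find_similar_cases(region_embedding: np.ndarray) -> list[str]:
    """Return the IDs of the most similar reference cases for a query region."""
    distances, indices = index.kneighbors(region_embedding.reshape(1, -1))
    return [reference_case_ids[i] for i in indices[0]]

# Query with an embedding of the radiologist's selected region of interest.
query = rng.normal(size=128)  # placeholder for a real feature extractor's output
print(find_similar_cases(query))
```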

This was designed initially for lung CT scans. Will contextflow be expanding to other types of scans?

Yes! We have a list of organs and modalities requested by radiologists that we are eager to add. The ultimate goal is to provide a system that covers the entire human body, regardless of organ or type of scan.

 

contextflow has received the support of two amazing incubator programs, INiTS and i2c TU Wien. How beneficial have these programs been, and what have you learned from the process?

We owe a lot of gratitude to these incubators. Both connected us with mentors, consultants and investors who challenged our business model and ultimately clarified our who/why/how. They also act very practically, providing funding and office space so that we could really focus on the work and not worry so much about administrative topics. We truly could not have come as far as we have without their support. The Austrian startup ecosystem is still small, but there are programs out there to help bring innovative ideas to fruition.

 

You are also the initiator of the Austrian ICT Lighthouse Project which aims to build a sustainable Data-Services Ecosystem in Austria. Could you tell us more about this project and about your role in it?

The amount of data produced daily is growing exponentially, and its importance to most industries is also exploding… it’s really one of the world’s most important resources! The Data Market Austria Lighthouse project aims to develop or reform the requirements for successful data-driven businesses, ensuring low cost, high quality and interoperability. I coordinated the project for its first year, 2016-2017. This led to the creation of the Data Intelligence Offensive, where I am on the board of directors. The DIO’s mission is to exchange information and know-how between members regarding data management and security.

 

Is there anything else that you would like to share with our readers about contextflow?  

Radiology workflows are not on the average citizen’s mind, and that’s how it should be. The system should just work. Unfortunately, once you become a patient, you realize that is not always the case. contextflow is working to transform that process for both radiologists and patients. You can expect a lot of exciting developments from us in the coming years, so stay tuned!

Please visit contextflow to learn more.


Antoine Tardif is a futurist who is passionate about the future of AI and robotics. He is the CEO of BlockVentures.com and has invested in over 50 AI & blockchain projects. He is also the Co-Founder of Securities.io, a news website focusing on digital securities, and a founding partner of unite.ai.

Data Science

Three Uses Of Automation Within Supply Chain 4.0


The increased availability of advanced technologies has revolutionized the traditional supply chain model. Supply Chain 4.0 responds to modern customer expectations by relying heavily on the Internet of Things (IoT), advanced robotics, big data analytics, and blockchain. These tools enable automation and thus give organizations a chance to close information gaps and optimally match supply and demand.

“The reorganization of supply chains […] is transforming the model of supply chain management from a linear one, in which instructions flow from supplier to producer to distributor to consumer, and back, to a more integrated model in which information flows in an omnidirectional manner to the supply chain.” – Understanding Supply Chain 4.0 and its potential impact on global value chains

Industry giants like Netflix, Tesla, UPS, Amazon, and Microsoft rely heavily on automation within their supply chain to lead their respective industries. Let us take a closer look at three powerful automation use cases.

Three Uses Of Automation Within Supply Chain 4.0:

1. Managing demand uncertainty

A painful aspect of supply chain ecosystems is the demand uncertainty and the inability to accurately forecast demand. Generally, this leads to a set of performance issues, from increased operational cost to excess inventory and suboptimal production capacity. Automation tools can forecast demand, remove uncertainty from the equation, and thus improve operational efficiency at each step along the supply chain.

Big data analytics is an established tool that helps organizations manage demand uncertainty. It consists of data collection & aggregation infrastructure combined with powerful ML algorithms, designed to forecast demand based on historical (or even real-time) data. Modern storage solutions (such as data lakes) make it possible to aggregate data from a variety of sources: market trends, competitor information, and consumer preferences. 

Machine learning (ML) algorithms continually analyze this rich data to find new patterns, improve the accuracy of demand forecasting, and enhance operational efficiency. This is the recipe that Amazon uses to predict demand for a product before it is purchased and stocked in their warehouses. By examining tweets and posts on websites and social media, they understand customer sentiment about products and have a data-based way to model demand uncertainty.
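As a rough illustration of the idea, the sketch below trains a gradient-boosting model on historical sales with simple calendar and lag features and uses it to forecast demand for a held-out period. It is a minimal, hypothetical example on synthetic data, not a description of Amazon’s systems or any specific vendor’s product.

```python
# Minimal demand-forecasting sketch on synthetic historical sales data.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)

# Two years of synthetic daily demand with weekly seasonality and noise.
days = pd.date_range("2023-01-01", periods=730, freq="D")
demand = 100 + 20 * np.sin(2 * np.pi * days.dayofweek / 7) + rng.normal(0, 5, len(days))
df = pd.DataFrame({"date": days, "demand": demand})

# Simple calendar and lag features.
df["dayofweek"] = df["date"].dt.dayofweek
df["month"] = df["date"].dt.month
df["lag_7"] = df["demand"].shift(7)    # demand one week ago
df["lag_1"] = df["demand"].shift(1)    # demand yesterday
df = df.dropna()

features = ["dayofweek", "month", "lag_7", "lag_1"]
train, test = df.iloc[:-30], df.iloc[-30:]          # hold out the last 30 days

model = GradientBoostingRegressor().fit(train[features], train["demand"])
forecast = model.predict(test[features])
print("Mean absolute error:", np.mean(np.abs(forecast - test["demand"].values)))
```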

The good news is that such powerful analytics tools are not restricted to industry giants anymore. Out-of-the-box solutions (such as Amazon Forecast) make such capabilities widely available to all organizations that wish to handle demand uncertainty. 

2. Managing process uncertainties

Organizations operating in today’s supply chain industry need to handle increasingly complex logistics processes. The competitive environment, together with ever-increasing customer expectations, makes it imperative to minimize uncertainties across all areas of supply chain management.

From production and inventory to order management, packing, and shipping of goods, automation tools can tackle uncertainties and minimize process flaws. AI, robotics, and IoT are well-known technologies that facilitate an optimal flow of resources, minimize delays, and promote optimized production schedules.

The Internet of Things (IoT) is playing an important role in overcoming process uncertainties in the supply chain. One major IoT application is the accurate tracking of goods and assets. IoT sensors are used for tracking in the warehouse and during the loading, in-transit, and unloading phases. This enables applications such as live monitoring, which increases process visibility and enables managers to act on real-time information. It also makes it possible to further optimize a variety of other processes, from loading operations to payment collection.

[Image: IoT increases process visibility and enables managers to act on real-time information. Source: Canva]
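As a simple, hypothetical illustration of acting on real-time tracking data, the snippet below processes a shipment’s IoT events and flags temperature excursions or stale updates. The event format, thresholds, and IDs are assumptions for the sketch, not a specific vendor’s API.

```python
# Hypothetical processing of IoT tracking events for a shipment.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class TrackingEvent:
    shipment_id: str
    timestamp: datetime
    location: str
    temperature_c: float

def check_shipment(events: list[TrackingEvent],
                   now: datetime,
                   max_temp_c: float = 8.0,
                   max_silence: timedelta = timedelta(hours=2)) -> list[str]:
    """Return human-readable alerts for a shipment's event history."""
    alerts = []
    latest = max(events, key=lambda e: e.timestamp)
    if now - latest.timestamp > max_silence:
        alerts.append(f"{latest.shipment_id}: no update since {latest.timestamp}")
    for e in events:
        if e.temperature_c > max_temp_c:
            alerts.append(f"{e.shipment_id}: temperature {e.temperature_c}C at {e.location}")
    return alerts

now = datetime(2021, 6, 1, 12, 0)
events = [
    TrackingEvent("SHP-001", datetime(2021, 6, 1, 7, 30), "warehouse", 5.1),
    TrackingEvent("SHP-001", datetime(2021, 6, 1, 9, 0), "in transit", 9.4),
]
print(check_shipment(events, now))
```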

Since 2012, Amazon fulfillment warehouses have used AI-powered robots that are doing real magic. One can see robots and humans working side by side, coordinating through wireless communication and handling orders that are unique in size, shape, and weight. Thousands of Wi-Fi-connected robots gather merchandise for each individual order. These robots have two powered wheels that let them rotate in place, infrared sensors for obstacle detection, and built-in cameras to read QR codes on the ground. The robots use these QR codes to determine their location and direction. In this way, efficiency is increased, the physical strain on employees is reduced, and process uncertainty is kept to a minimum.

Another example of how automation helps make process improvements comes from vehicle transport company CFR Rinkens. They have utilized automation in their accounting and billing departments to speed up payment processing. Through automatically created invoices, they have decreased costs and errors, which in turn reduces delays.

“An area of need that we applied automation was within the accounting department for billing and paying vendors. With tons of invoices coming in and out, automation here ensures nothing falls through the cracks, and clients receive invoices on time providing them with enough time to process payment.”   -Joseph Giranda, CFR Rinkens

The biggest benefit of automation is transparency. In an organized supply chain, each step eliminates grey areas for both clients and businesses.

3. Synchronization among supply chain partners and customers

Digital supply chains are characterized by synchronization among hundreds of departments, vendors, suppliers, and customers. In order to orchestrate activities all the way from planning to execution, supply chains require information to be collected, analyzed, and utilized in real-time. A sure way to achieve a fully synchronized supply chain is to leverage the power of automation. 

CFR Rinkens uses a dynamic dashboard to keep track of cargo as they deliver vehicles across the world. This dashboard is automatically updated with relevant information that increases transparency and efficiency. High transparency allows for excellent customer service and satisfaction. 

“Upon a vehicle’s arrival, images are taken and uploaded onto a CFR dashboard that our clients are able to access. All vehicle documents, images, and movements are automatically displayed within this dashboard. This automation helps on the customer service side because it allows for full transparency and accountability for quality control, delivery window times, and real-time visibility.”   -Joseph Giranda, CFR Rinkens

Automation offers an effective solution to the synchronization issue through blockchain. Blockchain is a distributed digital ledger with many applications and can be used for any exchange, tracking, or payment. It allows information to be instantly visible to all supply chain partners and enables a multitude of applications. Documents, transactions, and goods can easily be tracked. Payments and pricing can also be historically recorded, all in a secure and transparent manner.
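To make the ledger idea concrete, here is a minimal, hypothetical sketch of a hash-chained record of supply chain events. Each block commits to the hash of the previous one, so tampering with an earlier record is detectable by every partner holding a copy. This illustrates the principle only; production blockchains add consensus, signatures, and distribution across nodes.

```python
# Minimal hash-chained ledger of supply chain events (illustration only).
import hashlib
import json
from datetime import datetime, timezone

def block_hash(block: dict) -> str:
    """Hash the block's contents (everything except its own stored hash)."""
    payload = {k: block[k] for k in ("timestamp", "event", "previous_hash")}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(event: dict, previous_hash: str) -> dict:
    block = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event,
        "previous_hash": previous_hash,
    }
    block["hash"] = block_hash(block)
    return block

def verify_chain(chain: list[dict]) -> bool:
    """Check that every block matches its hash and links to its predecessor."""
    if any(block_hash(b) != b["hash"] for b in chain):
        return False
    return all(cur["previous_hash"] == prev["hash"] for prev, cur in zip(chain, chain[1:]))

ledger = [make_block({"event": "genesis"}, previous_hash="0" * 64)]
ledger.append(make_block({"shipment": "VIN-123", "status": "loaded"}, ledger[-1]["hash"]))
ledger.append(make_block({"shipment": "VIN-123", "status": "delivered"}, ledger[-1]["hash"]))

print(verify_chain(ledger))             # True
ledger[1]["event"]["status"] = "lost"   # tamper with an earlier record
print(verify_chain(ledger))             # False: the tampering is detected
```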

[Image: Digital supply chains increase transparency and efficiency. Source: Canva]

The shipping giant FedEx has joined the Blockchain in Transport Alliance (BiTA) and launched a blockchain-powered pilot program to help resolve customer disputes. Similarly, UPS joined BiTA as early as 2017, reaching for increased transparency and efficiency across its entire partner network. Such real-life use cases show the potential of blockchain technology and the impact that automation can have on the entire freight industry.

Blockchain increases the transparency of the supply chain and removes information latency for all partners on the network. The resulting benefits include increased productivity and operational efficiency as well as better service levels. Its massive potential makes blockchain a top priority for supply chain organizations and their digital automation journey.

Conclusion

Automation is playing a major role in defining the Supply Chain 4.0 environment. With powerful technological tools available to them, leading organizations are taking serious leaps towards efficiency and productivity. Automation gives them the power to accelerate and optimize the whole end-to-end supply chain journey. It also enables them to use data to their advantage and close information gaps across their networks.

Where To Go From Here?

Data can be the obstacle or the solution to all these potential benefits. Fortunately, experts for hire are easy to reach. Blue Orange Digital, a top-ranked AI development agency in NYC, specializes in cloud data storage solutions and supply chain optimization. They provide custom solutions to meet each business’s unique needs, but also have many pre-built options for supply chain leaders. From a technology point of view, we have outlined several different ways to improve the efficiency of the supply chain. Taken together, these improvements give you Supply Chain 4.0.

All images source: Canva


Data Science

Jean Belanger, Co-Founder & CEO at Cerebri AI – Interview Series


Jean Belanger is the Co-Founder & CEO of Cerebri AI, a pioneer in artificial intelligence and machine learning and the creator of Cerebri Values™, the industry’s first universal measure of customer success. Cerebri Values quantifies each customer’s commitment to a brand or product and dynamically predicts “Next Best Actions” at scale, which enables large companies to focus on accelerating profitable growth.

What was it that initially attracted you to AI?

Cerebri AI is my second data science startup. My first used operations research modelling to optimize order processing for major retail and ecommerce operations; four of the top 10 US retailers, including Walmart, used our technology. AI has a huge advantage, which really attracted me: models learn, which means they are more scalable, which in turn means we can build and scale awesome technology that really, really adds value.

Can you tell us about your journey to become a co-founder of Cerebri AI?

I was mentoring at a large accelerator here in Austin, Texas – Capital Factory – and I was asked to write the business plan for Cerebri AI.  So, I leveraged my experience of doing data science, with over 80 data science-based installs using our technology. Sometimes you just need to go for it.

What are some of the challenges that enterprises currently face when it comes to CX and customer/brand relationships?

The simple answer is that every business tries to understand their customers’ behavior so they can satisfy their needs. You cannot get into someone’s head to sort out why they buy a product or service when they do, so brands must do the best they can: surveys, tracking market share, or measuring market segmentation. There are thousands of ways of tracking or understanding customers. However, the underlying basis for everything is rarely thought about, and that is Moore’s Law. More powerful, cheaper semiconductors and processors from Intel, Apple, Taiwan Semi, and others make our modern economy work at a compute-intense level relative to a few years ago. Today, the cost of cloud computing and memory resources makes AI doable. AI is VERY compute intensive. Things that were not possible even five years ago can now be done. In terms of customer behavior, we can now process all the info and data that we have digitally recorded in one customer journey per customer. So customer behavior is suddenly much easier to understand and react to. This is key, and that is the future of selling products and services.

Cerebri AI personalizes the enterprise by combining machine learning and cloud computing to enhance brand commitment. How does the AI increase brand commitment?

When Cerebri AI looks at a customer, the first thing we establish is their commitment to the brand we are working with. We define commitment to the brand as the customer’s willingness to spend in the future. It’s fine to be in business and have committed customers, but if they do not buy your goods and services, then in effect, you are out of business. The old saying goes: if you cannot measure something, you cannot improve it. Now we can measure commitment and other key metrics, which means we can use our data monitoring tools and study a customer’s journey to see what works and what does not. Once we find a tactic that works, our campaign-building tools can instantly build a cohort of customers that might be similarly impacted. All of this is impossible without AI and the cloud infrastructure at the software layer, which allows us to move in so many directions with customers.
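As a purely illustrative sketch, one way to quantify “commitment” as predicted future spend and then assemble a cohort of similar customers could look like the following. This is a generic, synthetic example, not Cerebri AI’s actual method, features, or data; all column names and numbers are invented for the illustration.

```python
# Illustrative only: score customers by predicted future spend, then build a
# cohort of customers similar to those who responded to a successful tactic.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(7)
n = 5000

# Synthetic customer-journey features and observed future spend.
customers = pd.DataFrame({
    "visits_90d": rng.poisson(5, n),
    "support_tickets": rng.poisson(1, n),
    "tenure_months": rng.integers(1, 60, n),
    "last_purchase_days": rng.integers(0, 180, n),
})
future_spend = (
    20 * customers["visits_90d"]
    - 15 * customers["support_tickets"]
    + 0.5 * customers["tenure_months"]
    + rng.normal(0, 10, n)
).clip(lower=0)

# "Commitment" here is simply the model's predicted future spend per customer
# (for brevity we score on the training data).
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(customers, future_spend)
customers["commitment_score"] = model.predict(customers)

# Build a cohort similar to customers who responded well to a past tactic.
responders = customers.sample(50, random_state=1)
nn = NearestNeighbors(n_neighbors=20).fit(customers.drop(columns="commitment_score"))
_, idx = nn.kneighbors(responders.drop(columns="commitment_score"))
cohort = customers.iloc[np.unique(idx.ravel())]
print(len(cohort), "customers selected for the next campaign")
```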

What type of data does Cerebri collect? Or use within its system? How does this comply with PII (Personally Identifiable Information) restrictions?

Until now we have operated only behind the customer’s firewall, so PII has not been an issue. We are going to open a direct-access website in the fall, so that will require the use of anonymized data. We are excited about the prospect of bringing our advanced technology to a broader array of companies and organizations.

You are working with the Bank of Canada, Canada’s central bank, to introduce AI to their macroeconomic forecasting. Could you describe this relationship, and how your platform is being used?

The Bank of Canada is an awesome customer: brilliant people and macroeconomic experts. We started 18 months or so ago, introducing AI into the technology choices the bank’s team would have at their disposal. We started with predictions of quarterly GDP for Canada. That was great; now we are expanding the dataset used in the AI-based forecasts to increase accuracy. To do this, we developed an AI optimizer, which automates the thousands of choices facing a data scientist when they carry out a modelling exercise. Macroeconomic time series require a very sophisticated approach when you are dealing with decades of data, all of which may have an impact on overall GDP. The AI optimizer was so successful that we decided to incorporate it into Cerebri AI’s standard CCX platform offering. It will be used in all future engagements. Amazing technology, and one of the reasons we have filed 24 patents to date.
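To give a flavour of what automating a modeller’s choices can mean, the snippet below grid-searches model settings for a quarterly, GDP-style series using time-series cross-validation, so later quarters are never used to predict earlier ones. This is a generic sketch on synthetic data, not the Cerebri AI optimizer or the Bank of Canada’s models.

```python
# Generic sketch of automated model selection for a quarterly time series.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit

rng = np.random.default_rng(3)

# Synthetic quarterly growth series with trend, seasonality, and noise.
quarters = 120
t = np.arange(quarters)
y = 0.02 * t + 0.5 * np.sin(2 * np.pi * t / 4) + rng.normal(0, 0.3, quarters)

# Lagged values of the series as features (the last 4 quarters).
n_lags = 4
X = np.column_stack([np.roll(y, lag) for lag in range(1, n_lags + 1)])[n_lags:]
target = y[n_lags:]

# Automate a few of the modelling choices a data scientist would otherwise tune by hand.
param_grid = {
    "n_estimators": [100, 300],
    "max_depth": [2, 3],
    "learning_rate": [0.05, 0.1],
}
search = GridSearchCV(
    GradientBoostingRegressor(),
    param_grid,
    cv=TimeSeriesSplit(n_splits=5),          # respects temporal ordering
    scoring="neg_mean_absolute_error",
)
search.fit(X, target)
print("Best settings:", search.best_params_)
print("Cross-validated MAE:", -search.best_score_)
```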

Cerebri AI launched CCX v2 in the autumn of last year. What is this platform exactly?

Our CCX offering has three components.

First, there is our CCX platform, which consists of a 10-stage software pipeline that our data scientists use to build their models and produce insights. It is also our deployment system, from data intake to our UX and insights. We have several applications in our offering, such as QM for quality management of the entire process, and Audit, which tells users what features drive the insights they are seeing.

Then, we have our Insights themselves, which are generated from our modelling technology. Our flagship insight is our Cerebri Values: a customer’s commitment to your brand, which is, in effect, a measure of how much money a customer is willing to spend in the future on a brand’s products and services.

We derive a host of customer engagement and revenue KPI insights from our core offering and we can help with our next best action{set}s to drive engagement, up-selling, cross-selling, reducing churn, etc.

You sat down to interview representatives from four major faith traditions in the world today — Islam, Hinduism, Judaism and Christianity. Have your views of the world shifted since these interviews, and is there one major insight that you would like to share with our readers during the current pandemic?

Diversity matters. Not because it is a goal in and of itself, but because treating anyone in anything less than a totally equitable manner is just plain stupid. Period. When I was challenged to put in a program to reinforce Cerebri AI’s commitment to diversity, it was apparent to me that what we used to learn as children, in our houses of worship, has been largely forgotten.  So, I decided to ask the faith communities and their leaders in the US to tell us how they think through treating everyone equally. The sessions have proved to be incredibly popular, and we make them available to anyone who wants to use them in their business.

On the pandemic, I have an expert at home: my wife is a world-class epidemiologist. She told me on day one to make sure the people most at risk are properly isolated; she called this epi-101. This did not happen, and the effects have been devastating. Age discrimination is not just an equity problem at work; it is also about how we treat our parents, grandparents, etc., wherever they are residing. We did not distinguish ourselves in the pandemic in how we dealt with nursing home residents, for example, a total disaster in many communities. I live in Texas; we are the 2nd biggest state population-wise, and our pandemic-related deaths per population rank 40th in the US among all states. Arguably the best in Europe is Germany, with 107 pandemic deaths per million; Texas sits at 77, so our state authorities have done a great job so far.

You’ve stated that a lot of the media focuses on the doom and gloom of AI but does not focus enough on how the technology can be useful to make our lives better. What are your views on some of the improvements in our lives that we will witness from the further advancement of AI?

Our product helps eliminate spam email from the vendors you do business with. Does it get better than that? Just kidding. There are so many fields where AI is helping, it is difficult to imagine a world without AI.

Is there anything else that you would like to share about Cerebri AI?

The sky’s the limit, as understanding customer behavior is really only just beginning, enabled for the first time by AI and the massive compute power available on the cloud thanks to Moore’s Law.

Thank you for the great interview. Readers who wish to learn more should visit Cerebri AI.


Data Science

Omer Har, Co-Founder and CTO, Explorium – Interview Series


Omer Har is a data science and software engineering veteran with nearly a decade of experience building AI models that drive big businesses forward.

Omer Har is the Co-Founder and CTO of Explorium, a company that offers a first-of-its-kind data science platform powered by augmented data discovery and feature engineering. By automatically connecting to thousands of external data sources and leveraging machine learning to distill the most impactful signals, the Explorium platform empowers data scientists and business leaders to drive decision-making by eliminating the barrier to acquiring the right data and enabling superior predictive power.

When did you first discover that you wanted to be involved in data science?

My interest in data science goes back over a decade, which is about how long I’ve been practicing and leading data science teams. I started out as a software engineer but was always drawn to the complex data and algorithmic challenges from early on. I was lucky to have learned the craft at Microsoft Research, which was one of the few places at the time where you could really work on complex applied machine learning challenges at scale.

 

You Co-Founded Explorium in 2017, could you discuss the inspiration behind launching this start-up?

Explorium is based on a simple and very powerful need — there is so much data around us that could potentially help build better models, but there is no way to know in advance which data sources are going to be impactful, and how. The original idea came from Maor Shlomo, Explorium Co-founder and CEO, who was dealing with unprecedented data variety in his military service and tackling ways to leverage it for decision making and modeling. When the three of us first came together, it was immediately clear to us that this experience echoed the needs we were dealing with in the business world, particularly in fast-growing, data science-driven fields like advertising and marketing technology unicorns, where both I and Or Tamir (Explorium Co-founder and COO) were leading growth through data.

Before Explorium, finding relevant data sources that really made an impact — to improve your machine learning model’s accuracy — was a labor-intensive, time-consuming, and expensive process with low chances of success. The reason is that you are basically guessing, and using your most expensive people — data scientists — to experiment. Moreover, data acquisition itself is a complex business process and data science teams usually do not have the ability to commercially engage with multiple data providers.

As a data science leader who was measured by the business impact generated by models, I didn’t have the luxury of sending my team on a wild goose chase. As a result, you often prefer to deploy your efforts on things that can have a much lower impact than a relevant new data source, just because they are much more within your realm of control.

 

Explorium recently raised an additional $31M in a Series B funding round. Have you been surprised by how fast your company has grown?

It has definitely been a rocket ship ride so far, and you can never take that for granted. I can’t say I was surprised by how widespread the need for better data is, but it’s always an incredible experience to see the impact you generate for customers and their business. The greatest analytical challenge organizations will face over the next decade is finding the right data to feed their models and automated processes. The right data assets can crown new market leaders, so our growth really reflects the rapidly growing number of customers that realize that and are making data a priority. In fact, the number of “Data Hunters” — people looking for data as part of their day-to-day job — is growing exponentially in our experience.

 

Could you explain what Explorium’s data platform is and what the automated data discovery process is?

Explorium offers an end-to-end data science platform powered by augmented data discovery and feature engineering. We are focused on the “data” part of data science — which means automatically connecting to thousands of external data sources and leveraging machine learning processes to distill the most impactful signals and features. This is a complex and multi-stage process, which starts by connecting to a myriad of contextually relevant sources in what we call the Explorium data catalog. Then we automate the process that explores this interconnected data variety by testing hundreds of thousands of ideas for meaningful features and signals to create the optimal feature set, build models on top of it, and serve them to production in flexible ways.
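As a simplified illustration of that idea (not Explorium’s actual pipeline), candidate external features can be ranked by the measurable lift they add to a baseline model trained on the customer’s own data. The feature names and synthetic data below are assumptions made for the sketch.

```python
# Simplified sketch: rank candidate external features by cross-validated lift.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(11)
n = 2000

# The customer's internal features, an unobserved driver, and the target (all synthetic).
latent = rng.normal(size=n)                       # unobserved driver of the target
internal = rng.normal(size=(n, 3))
target = ((0.7 * internal[:, 0] + 0.7 * latent + rng.normal(scale=0.5, size=n)) > 0).astype(int)

# Hypothetical candidate features from external sources; only one is informative.
candidates = {
    "weather_index": rng.normal(size=n),                      # pure noise
    "foot_traffic": latent + rng.normal(scale=0.3, size=n),   # proxies the latent driver
    "social_mentions": rng.normal(size=n),                    # pure noise
}

def score(features: np.ndarray) -> float:
    """Mean cross-validated AUC for a model trained on the given features."""
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    return cross_val_score(model, features, target, cv=5, scoring="roc_auc").mean()

baseline = score(internal)
for name, column in candidates.items():
    lift = score(np.column_stack([internal, column])) - baseline
    print(f"{name}: lift of {lift:+.3f} AUC over baseline {baseline:.3f}")
```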

By automating the search for the data you need, not just the data you have internally, the Explorium platform is doing to data science what search engines did for the web — we are scouring, ranking, and bringing you the most relevant data for the predictive question at hand.

This empowers data scientists and business leaders to drive decision-making by eliminating the barrier to acquire the right data and enabling superior predictive power.

 

What types of external data sources does Explorium tap into?

We hold access to thousands of sources across pretty much any data category you can think of, including company, geospatial, behavioral, time-based, website data, and more. We have multiple expert teams that specialize in data acquisition through open, public, and premium sources, as well as partnerships. Our access to unique talent out of Israel’s top intelligence and technology units brings substantial know-how and experience in leveraging data variety for decision making.

 

How does Explorium use machine learning to understand which types of data are relevant to clients?

This is part of our “secret sauce” so I can’t dive in, but on a high level, we use machine learning to understand the meaning behind the different parts of your datasets and employ constantly improving algorithms to identify which sources in our evolving catalog are potentially relevant. By actually connecting these sources to your data, we are able to perform complex data discovery and feature engineering processes, specifically designed to be effective for external and high-dimensional data, to identify the most impactful features from the most relevant sources. Doing it all in the context of machine learning models makes the impact statistically measurable and allows us to constantly learn and improve our matching, generation, and discovery capabilities.

 

One of the solutions that is offered is mitigating application fraud risk for online lenders by using augmented data discovery. Could you go into details on how this solution works?

Lending is all about predicting and mitigating risk — whether it comes from the borrower’s ability to repay the loan (e.g. financial performance) or their intention to do so (e.g. fraud). Loan applications are inherently a tradeoff between the lender’s desire to collect more information and their ability to compete with other providers, as longer and more cumbersome questionnaires have lower completion rates, are biased by definition, and so on.

With Explorium, both incumbent banks and online challengers are able to automatically augment the application process with external and objective sources that add immediate context and uncover meaningful relationships. Without giving away too much to help fraudsters, you can imagine that in the context of fraud this could mean different behaviors and properties that stand out versus real applicants if you are able to gather a 360-degree view of them. Everything from online presence, official records, behavioral patterns on social media, and physical footprints leaves breadcrumbs that could be hypothesized and tested as potential features and indicators if you can access the relevant data and vertical know-how. Simply put, better data ensures better predictive models, which translates into reduced risk and higher revenue on lenders’ bottom line.
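Purely as a schematic illustration, enriching loan applications with an external record source and deriving consistency features for a fraud model could look like the sketch below. This is not Explorium’s product or any lender’s real data; the sources, feature names, and synthetic labels are all invented for the example.

```python
# Schematic sketch: enrich loan applications with external records and derive
# consistency features that a fraud model can use (illustration only).
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n = 3000

applications = pd.DataFrame({
    "applicant_id": np.arange(n),
    "stated_income": rng.normal(60_000, 15_000, n).round(-3),
    "loan_amount": rng.normal(20_000, 5_000, n).round(-2),
})

# Hypothetical external source keyed on the same applicant (e.g. public records).
external = pd.DataFrame({
    "applicant_id": np.arange(n),
    "estimated_income": rng.normal(60_000, 15_000, n).round(-3),
    "years_online_presence": rng.integers(0, 15, n),
})

df = applications.merge(external, on="applicant_id")

# Consistency features: large gaps between stated and external figures,
# or a very thin external footprint, are treated as potential fraud signals.
df["income_gap"] = (df["stated_income"] - df["estimated_income"]).abs()
df["thin_footprint"] = (df["years_online_presence"] < 1).astype(int)

# Synthetic fraud labels correlated with those signals, for demonstration only.
fraud_prob = 0.02 + 0.2 * df["thin_footprint"] + 0.15 * (df["income_gap"] > 30_000)
df["is_fraud"] = rng.random(n) < fraud_prob

features = ["loan_amount", "income_gap", "thin_footprint", "years_online_presence"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["is_fraud"], test_size=0.3, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```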

In a wider view, since COVID-19 hit on a global scale, we’ve been seeing an increase in new fraud patterns as well as lenders’ need to go back to basics, as the pandemic broke all the models. No one really took this sort of a “Black Swan” event into account, and part of our initial response to help these companies has been generating custom signals that help assess business risk in these uncertain and dynamic times.

You can read more about it in an excellent post written by Maor Shlomo, Explorium Co-Founder and CEO.

Thank you for the great interview, readers who wish to learn more should visit Explorium.
