
Nir Bar-Lev, CEO & Co-Founder of Allegro AI – Interview Series



Nir Bar-Lev is the CEO & Co-Founder of Allegro AI. Allegro AI specializes in helping companies develop, deploy, and manage machine & deep learning solutions. With Allegro AI, organizations bring to market and manage higher-quality products faster and more cost-effectively. The products are based on Allegro Trains, the open source ML & DL experiment manager and ML-Ops package.

What initially attracted you to AI?

What I have been most drawn to in my career is bringing cutting-edge technology to bear on problems or opportunities (and they really are two sides of the same coin) at huge scale. I must admit that my time at Google certainly helped shape this inclination.

AI certainly ticks both of these boxes. It sits at the cutting edge of today's technology frontiers, and it has the potential to affect almost every single aspect of our lives on this planet.

You have had an impressive career starting at Google as the founding product lead for Google’s voice recognition platform. Could you discuss these early days of working at Google and what you learned from this experience?

Coming straight out of business school at Wharton, I was struck by how Google functioned at extreme odds with the established norms of how to run a successful business, as taught in the best business schools in the world and as I had experienced in my career before business school. I vividly remember discussing this with a couple of colleagues who joined Google at the same time, also straight out of an MBA.

It turns out Google changed the business playbook to some extent, but it also enjoyed an immense virtual firehose of money from its ad business, which allowed it to experiment in ways most companies could not afford to. I can attest that over the decade I spent at Google, it increasingly adopted "mainstream" established business practices and thought processes as it grew.

Leading the voice recognition platform as the head product manager, I also had to work with research scientists. This was one of the earliest, if not the first, research teams at Google that was really about applied research. To me this was a big challenge: researchers have very different mindsets than engineers, and here I was trying to work with accomplished researchers in a company that is extremely engineering-oriented.

It turns out that the challenges I faced back then, almost 15 years ago, are very similar to the problems companies face today when trying to assimilate AI data scientists into their organizations.

In 2016 you went on to co-found Allegro AI. What was your inspiration behind launching Allegro AI?

In founding Allegro AI, I teamed up with two amazing partners who are out-of-this-world engineering talents. One of them was the first PhD student in one of Israel's first, and currently leading, AI labs, in what is arguably one of the leading AI hubs globally. To me, he was really part of the founding team of applied AI in the local community. He had the vision to see how applying ML/DL in practice would mean dealing with a new set of challenges around scale, automation, reliability, quality, and more. In talking to them, it became clear that I could contribute from my experience at Google and earlier, and that together we had a real shot at creating a company with an immense impact on AI through the tools we provide. Google and some of the other tech giants are in an enviable position in their ability to throw virtually unlimited top-quality resources at these challenges, but pretty much everyone else cannot afford that (whether in terms of access to talent, monetary resources, company focus, etc.). So this was an opportunity aligned exactly with what I love to do most (see the first question) and a chance to help the whole ecosystem.

Allegro AI serves as an open source machine learning & deep learning management platform. Could you discuss the benefits of using open source software?

Open source has several benefits. Most importantly, it leverages the wider community to improve the product itself: users find bugs and issues, there is a wide discourse on features of interest, and integration with other [open source] tools is much easier to facilitate than it would be between two commercial organizations with closed-source proprietary tools.

It also provides a great win-win model for both the community and the company behind it. It lends itself easily to trying, testing, and even extending by organizations that do not or will not pay, while at the same time enabling larger potential customers to pay for extended features and services built on top of a widely used (and therefore less risky) piece of software.

Allegro AI offers data management services. Could you discuss the type of tools that are offered for this?

Allegro AI offers both structured and unstructured data management. However, whereas there is a host of proven structured data management solutions, we provide a unique solution for unstructured data.

Specifically, it is important to qualify the type of data management we provide. The idea is not physical data management but rather data management from an AI angle. For AI, it is critical for the data science team to understand what data they have at their disposal. With unstructured data that is quite difficult. Imagine thousands or hundreds of thousands of hours of video, or audio. Imagine billions of sensor signals, etc.

Data scientists need to know the variance of their data, and how it aligns with the different situations they will encounter, so they can effectively train their models. They need to understand if there are critical pieces of data that are missing, or if there are biases or skews in the data.

And then, on the flip side, they need tools to address these situations cost-effectively and quickly, without having to go out and source new physical data and annotate/label it (a very costly and time-consuming undertaking).

This is in essence the type of tooling we provide in this area: powerful tools to do "AI BI (business intelligence)" on your data at an unprecedented level of granularity and detail, and, on the flip side, tools to tightly integrate the data into experiments and models so that, with zero code, data scientists can set up effective training runs with the data at hand.
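To make the "AI BI" idea concrete, here is a minimal, generic sketch of this kind of dataset introspection: counting class balance and spotting combinations of situations that never occur in the annotations. The record fields and helper functions are hypothetical and purely illustrative; this is not Allegro AI's product API.

```python
# Illustrative sketch of "AI BI"-style introspection over unstructured-data annotations
# (generic example with made-up fields; not Allegro AI's product API).
from collections import Counter

# Hypothetical annotation records, e.g. from a labeled video dataset.
annotations = [
    {"frame": "vid01/000123.jpg", "label": "pedestrian", "weather": "rain",  "time": "night"},
    {"frame": "vid01/000124.jpg", "label": "car",        "weather": "clear", "time": "day"},
    {"frame": "vid02/000005.jpg", "label": "pedestrian", "weather": "clear", "time": "day"},
    # ... typically millions of records
]

def summarize(records, keys=("label", "weather", "time")):
    """Per-key value counts: a quick view of dataset variance and skew."""
    return {key: Counter(r[key] for r in records) for key in keys}

def missing_combinations(records, key_a, key_b):
    """Combinations that never occur, e.g. no 'pedestrian' frames at 'night'."""
    seen = {(r[key_a], r[key_b]) for r in records}
    values_a = {r[key_a] for r in records}
    values_b = {r[key_b] for r in records}
    return [(a, b) for a in values_a for b in values_b if (a, b) not in seen]

print(summarize(annotations))
print(missing_combinations(annotations, "label", "time"))  # gaps to source or rebalance
```

In a real workflow the same questions would be asked over hundreds of thousands of hours of video or billions of sensor signals, which is where dedicated tooling matters.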

On top of that, we provide additional value in optimizing data flow and data movement. Since we are talking about processing terabytes of data, moving it around is expensive, and companies need a solution to optimize that as well.

Allegro AI also offers the outsourcing of data engineering services. What are some of the offerings that are available?

Allegro AI is primarily a product company, and we see ourselves as providing the tools, infrastructure, or scaffolding for companies to develop, deploy, and/or manage products with AI (DL/ML) models integrated in them.

That said, this is a new area, and our customers at times need help setting up their specific pipelines on top of our tools, or even help jump-starting the models themselves. When these situations arise, we provide ancillary services alongside our core software offering.

Could you discuss the importance of Federated Learning and how Allegro AI can be used in this context?

Federated learning is essentially the ability to train a single AI model on datasets located in different physical locations, without bringing those datasets to a single location. We also provide an enhanced version of this, which we call "blind federated learning" or "blind collaborative learning," where no single entity in the scenario has access to data that does not belong to it, including the entity that receives the final model.
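As a rough illustration of the basic mechanism (a generic sketch, not Allegro AI's implementation), the example below performs federated averaging: each site refines the model on its own private data, and only the learned weights, never the raw data, are sent to a coordinator for averaging. The toy linear model, function names, and data are assumptions made for the example.

```python
# Minimal federated-averaging sketch (illustrative only).
# Each "site" trains a local linear model on its own data; only the weights
# (never the raw data) are shared with a coordinator, which averages them.
import numpy as np

def local_train(X, y, global_w, lr=0.1, epochs=20):
    """One site refines the global weights on its private data (plain SGD on MSE)."""
    w = global_w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(sites, global_w):
    """Average the locally trained weights, weighted by each site's sample count."""
    updates = [(local_train(X, y, global_w), len(y)) for X, y in sites]
    total = sum(n for _, n in updates)
    return sum(w * (n / total) for w, n in updates)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    # Two hospitals / agencies, each holding data that never leaves the site.
    sites = []
    for _ in range(2):
        X = rng.normal(size=(200, 2))
        y = X @ true_w + rng.normal(scale=0.1, size=200)
        sites.append((X, y))
    w = np.zeros(2)
    for _ in range(10):
        w = federated_round(sites, w)
    print("recovered weights:", w)  # should approach [2, -1]
```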

Federated learning is important in situations where data privacy, regulation, or IP/confidentiality is critical to preserve, while at the same time there is an interest in leveraging the different datasets. For example, two or more hospitals or medical institutions that want to collaborate on training a model for CT scans; or two governmental agencies that want to collaborate on homeland security data to build an anti-terrorism model but for legal reasons cannot expose the data even to one another.

It also applies when a single entity cannot move its various stores of data because it is prohibitively expensive – for example, a global automotive OEM looking to train autonomous vehicles on data collected from cars driving all over the world.

Allegro AI is one of fewer than a handful of companies worldwide with a proven, tested commercial platform that facilitates federated learning.

Is there anything else that you would like to share about Allegro AI?

Allegro AI is a rising force in the world of AI tools and ML-Ops. Just this past quarter, in the midst of the first wave of the COVID-19 crisis, we more than doubled our customer base in that three-month period.

Thank you for the interview; readers who wish to learn more should visit Allegro AI.


Antoine Tardif is a futurist who is passionate about the future of AI and robotics. He is the CEO of BlockVentures.com and has invested in over 50 AI & blockchain projects. He is also the Co-Founder of Securities.io, a news website focusing on digital securities, and a founding partner of unite.ai.


Three Uses Of Automation Within Supply Chain 4.0


The increased availability of advanced technologies has revolutionized the traditional supply chain model. Supply Chain 4.0 responds to modern customer expectations by relying heavily on the Internet of Things (IoT), advanced robotics, big data analytics, and blockchain. These tools enable automation and thus give organizations a chance to close information gaps and optimally match supply and demand.

“The reorganization of supply chains […] is transforming the model of supply chain management from a linear one, in which instructions flow from supplier to producer to distributor to consumer, and back, to a more integrated model in which information flows in an omnidirectional manner to the supply chain.” – Understanding Supply Chain 4.0 and its potential impact on global value chains

Industry giants like Netflix, Tesla, UPS, Amazon, and Microsoft rely heavily on automation within their supply chain to lead their respective industries. Let us take a closer look at three powerful automation use cases.

Three Uses Of Automation Within Supply Chain 4.0:

1. Managing demand uncertainty

A painful aspect of supply chain ecosystems is the demand uncertainty and the inability to accurately forecast demand. Generally, this leads to a set of performance issues, from increased operational cost to excess inventory and suboptimal production capacity. Automation tools can forecast demand, remove uncertainty from the equation, and thus improve operational efficiency at each step along the supply chain.

Big data analytics is an established tool that helps organizations manage demand uncertainty. It consists of data collection & aggregation infrastructure combined with powerful ML algorithms, designed to forecast demand based on historical (or even real-time) data. Modern storage solutions (such as data lakes) make it possible to aggregate data from a variety of sources: market trends, competitor information, and consumer preferences. 

Machine learning (ML) algorithms continually analyze this rich data to find new patterns, improve the accuracy of demand forecasting, and enhance operational efficiency. This is the recipe Amazon uses to predict demand for a product before it is purchased and stocked in its warehouses. By examining tweets and posts on websites and social media, the company understands customer sentiment about products and has a data-based way to model demand uncertainty.

The good news is that such powerful analytics tools are not restricted to industry giants anymore. Out-of-the-box solutions (such as Amazon Forecast) make such capabilities widely available to all organizations that wish to handle demand uncertainty. 
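As a simple illustration of the kind of forecasting these tools automate (a hedged sketch, not tied to Amazon Forecast or any specific vendor API), the example below fits a trend-plus-weekly-seasonality regression on historical daily sales and projects the next two weeks; the data and function names are invented for the example.

```python
# Illustrative demand-forecasting sketch (not tied to any specific vendor tool).
# Fits a simple trend + weekly-seasonality regression on historical daily sales
# and projects the next 14 days.
import numpy as np

def fit_forecast(sales, horizon=14):
    """sales: 1-D array of daily unit sales, oldest first."""
    t = np.arange(len(sales))
    # Design matrix: intercept, linear trend, and one-hot day-of-week seasonality.
    dow = np.eye(7)[t % 7]
    X = np.column_stack([np.ones_like(t), t, dow])
    coef, *_ = np.linalg.lstsq(X, sales, rcond=None)

    t_future = np.arange(len(sales), len(sales) + horizon)
    dow_f = np.eye(7)[t_future % 7]
    X_f = np.column_stack([np.ones_like(t_future), t_future, dow_f])
    return X_f @ coef

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    days = np.arange(120)
    # Synthetic history: upward trend plus a weekend bump plus noise.
    history = 50 + 0.3 * days + 10 * (days % 7 >= 5) + rng.normal(0, 3, size=120)
    print(fit_forecast(history).round(1))  # projected demand for the next two weeks
```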

2. Managing process uncertainties

Organizations operating in today's supply chain industry need to handle increasingly complex logistics processes. The competitive environment, together with ever-increasing customer expectations, makes it imperative to minimize uncertainties across all areas of supply chain management.

From production and inventory, to order management, packing, and shipping of goods, automation tools can tackle uncertainties and minimize process flaws. AI, robotics, and IoT are well-known methods that facilitate an optimal flow of resources, minimize delays, and promote optimized production schedules.

The Internet of Things (IoT) plays an important role in overcoming process uncertainties in the supply chain. One major IoT application is the accurate tracking of goods and assets: IoT sensors track items in the warehouse and during the loading, in-transit, and unloading phases. This enables applications such as live monitoring, which increases process visibility and lets managers act on real-time information. It also makes it possible to further optimize a variety of other processes, from loading operations to payment collection.

IoT increases process visibility and enables managers to act on real-time information. Source: Canva
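To make the live-monitoring idea concrete, here is a small, hypothetical sketch that flags shipments whose sensors have gone silent for too long; the event format, field names, and threshold are assumptions made for illustration, not any specific IoT platform's API.

```python
# Illustrative sketch of live shipment monitoring from IoT tracking events
# (hypothetical event format; not a specific vendor's API).
from datetime import datetime, timedelta, timezone

# Each event: (shipment_id, phase, timestamp) emitted by a sensor or scanner.
events = [
    ("SHP-001", "loading",    datetime(2020, 7, 1, 8, 0,  tzinfo=timezone.utc)),
    ("SHP-001", "in-transit", datetime(2020, 7, 1, 9, 30, tzinfo=timezone.utc)),
    ("SHP-002", "warehouse",  datetime(2020, 7, 1, 6, 15, tzinfo=timezone.utc)),
]

def stale_shipments(events, now, max_silence=timedelta(hours=4)):
    """Return shipments whose latest sensor report is older than max_silence."""
    latest = {}
    for shipment_id, phase, ts in events:
        if shipment_id not in latest or ts > latest[shipment_id][1]:
            latest[shipment_id] = (phase, ts)
    return {
        sid: (phase, now - ts)
        for sid, (phase, ts) in latest.items()
        if now - ts > max_silence
    }

now = datetime(2020, 7, 1, 12, 0, tzinfo=timezone.utc)
print(stale_shipments(events, now))  # SHP-002 has been silent for almost six hours
```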

Since 2012, Amazon fulfillment warehouses have used AI-powered robots that do real magic. Robots and humans work side by side, coordinating over wireless communication and handling orders that are unique in size, shape, and weight. Thousands of Wi-Fi-connected robots gather merchandise for each individual order. These robots have two powered wheels that let them rotate in place, infrared sensors for obstacle detection, and built-in cameras to read QR codes on the ground; the robots use these QR codes to determine their location and direction. In this way, efficiency is increased, the physical activity of employees is reduced, and process uncertainty is kept to a minimum.

Another example of how automation helps make process improvements comes from the vehicle transport company CFR Rinkens. They have used automation in their accounting and billing departments to speed up payment processing. Through automatically created invoices, they have decreased costs and errors, which in turn reduces delays.

“An area of need that we applied automation was within the accounting department for billing and paying vendors. With tons of invoices coming in and out, automation here ensures nothing falls through the cracks, and clients receive invoices on time providing them with enough time to process payment.”   -Joseph Giranda, CFR Rinkens

The biggest benefit of automation is transparency. Each step of an organized supply chain eliminates grey areas for both clients and businesses. 

3. Synchronization among supply chain partners and customers

Digital supply chains are characterized by synchronization among hundreds of departments, vendors, suppliers, and customers. In order to orchestrate activities all the way from planning to execution, supply chains require information to be collected, analyzed, and utilized in real-time. A sure way to achieve a fully synchronized supply chain is to leverage the power of automation. 

CFR Rinkens uses a dynamic dashboard to keep track of cargo as they deliver vehicles across the world. This dashboard is automatically updated with relevant information that increases transparency and efficiency. High transparency allows for excellent customer service and satisfaction. 

“Upon a vehicle’s arrival, images are taken and uploaded onto a CFR dashboard that our clients are able to access. All vehicle documents, images, and movements are automatically displayed within this dashboard. This automation helps on the customer service side because it allows for full transparency and accountability for quality control, delivery window times, and real-time visibility.”   -Joseph Giranda, CFR Rinkens

Automation also offers an effective solution to the synchronization issue through blockchain. Blockchain is a distributed digital ledger with many applications that can be used for any exchange, tracking, or payment. It allows information to be instantly visible to all supply chain partners and enables a multitude of applications: documents, transactions, and goods can easily be tracked, and payments and pricing can be historically recorded, all in a secure and transparent manner.
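As a toy illustration of why a shared ledger is tamper-evident (real supply chain deployments run on full blockchain platforms, not this simplified structure), the sketch below chains supply chain records together by hash so that any retroactive edit is immediately detectable.

```python
# Minimal hash-chained ledger sketch to illustrate tamper-evident tracking
# of supply chain records (illustrative toy only).
import hashlib
import json

def hash_block(block):
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_record(chain, record):
    """Append a record linked to the hash of the previous block."""
    prev_hash = hash_block(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "record": record, "prev_hash": prev_hash})
    return chain

def verify(chain):
    """Any retroactive edit breaks the hash links and is detected here."""
    return all(
        chain[i]["prev_hash"] == hash_block(chain[i - 1]) for i in range(1, len(chain))
    )

ledger = []
append_record(ledger, {"doc": "bill_of_lading", "shipment": "SHP-001"})
append_record(ledger, {"event": "customs_cleared", "shipment": "SHP-001"})
print(verify(ledger))        # True
ledger[0]["record"]["shipment"] = "SHP-999"  # tampering with an earlier record...
print(verify(ledger))        # False: the chain no longer validates
```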

Digital supply chains increase transparency and efficiency. Source: Canva

The shipping giant FedEx has joined the Blockchain in Transport Alliance (BiTA) and launched a blockchain-powered pilot program to help resolve customer disputes. Similarly, UPS joined BiTA as early as 2017, aiming for increased transparency and efficiency across its entire partner network. Such real-life use cases show the potential of blockchain technology and the impact that automation can have on the entire freight industry.

Blockchain increases the transparency of the supply chain and removes information latency for all partners on the network. The resulting benefits include increased productivity and operational efficiency as well as better service levels. Its massive potential makes blockchain a top priority for supply chain organizations and their digital automation journey.

Conclusion

Automation is playing a major role in defining the Supply Chain 4.0 environment. With powerful technological tools available to them, leading organizations are taking serious leaps in efficiency and productivity. Automation gives them the power to accelerate and optimize the whole end-to-end supply chain journey. It also enables them to use data to their advantage and close information gaps across their networks.

Where To Go From Here?

Data can be the obstacle or the solution to all these potential benefits. Fortunately, experts-for-hire in this area are easy to reach. Blue Orange Digital, a top-ranked AI development agency in NYC, specializes in cloud data storage solutions and facilitates the development of supply chain optimization. They provide custom solutions to meet each business's unique needs, and also offer many pre-built options for supply chain leaders. From a technology point of view, we have outlined several ways to improve the efficiency of the supply chain; taken together, these improvements give you Supply Chain 4.0.

All images source: Canva



Jean Belanger, Co-Founder & CEO at Cerebri AI – Interview Series


Jean Belanger is the Co-Founder & CEO of Cerebri AI. Cerebri AI, a pioneer in artificial intelligence and machine learning, is the creator of Cerebri Values™, the industry's first universal measure of customer success. Cerebri Values quantifies each customer's commitment to a brand or product and dynamically predicts "Next Best Actions" at scale, which enables large companies to focus on accelerating profitable growth.

What was it that initially attracted you to AI?

Cerebri AI is my second data science startup. My first used operations research modelling to optimize order processing for major retail and ecommerce operations; four of the top 10 US retailers, including Walmart, used our technology. AI has a huge advantage, which really attracted me: models learn, which means they are more scalable, which means we can build and scale awesome technology that really, really adds value.

Can you tell us about your journey to become a co-founder of Cerebri AI?

I was mentoring at a large accelerator here in Austin, Texas – Capital Factory – and I was asked to write the business plan for Cerebri AI. So I leveraged my experience of doing data science, with over 80 data science-based installs using our technology. Sometimes you just need to go for it.

What are some of the challenges that enterprises currently face when it comes to CX and customer/brand relationships?

The simple answer is that every business tries to understand its customers' behavior so it can satisfy their needs. You cannot get into someone's head to sort out why they buy a product or service when they do, so brands must do the best they can: surveys, tracking market share, measuring market segmentation. There are thousands of ways of tracking or understanding customers. However, the underlying basis for everything is rarely thought about, and that is Moore's Law. More powerful, cheaper semiconductors, processors, etc., from Intel, Apple, Taiwan Semi, and others make our modern economy work at a compute-intense level relative to a few years ago. Today, the cost of cloud computing and memory resources makes AI doable; AI is VERY compute-intensive. Things that were not possible even five years ago can now be done. In terms of customer behavior, we can now process all the info and data that we have digitally recorded in one customer journey per customer. So customer behavior is suddenly much easier to understand and react to. This is key, and that is the future of selling products and services.

Cerebri AI personalizes the enterprise by combining machine learning and cloud computing to enhance brand commitment. How does the AI increase brand commitment?

When Cerebri AI looks at a customer, the first thing we establish is their commitment to the brand we are working with. We define commitment to the brand as the customer's willingness to spend in the future. It's fine to be in business and have committed customers, but if they do not buy your goods and services, then in effect you are out of business. The old saying goes: if you cannot measure something, you cannot improve it. Now we can measure commitment and other key metrics, which means we can use our data monitoring tools and study a customer's journey to see what works and what does not. Once we find a tactic that works, our campaign-building tools can instantly build a cohort of customers that might be similarly impacted. All of this is impossible without AI and the cloud infrastructure at the software layer, which allows us to move in so many directions with customers.

What type of data does Cerebri collect? Or use within its system? How does this comply with PII (Personally Identifiable Information) restrictions?

Until now we have only operated behind the customer's firewall, so PII has not been an issue. We are going to open a direct-access website in the fall, and that will require the use of anonymized data. We are excited about the prospect of bringing our advanced technology to a broader array of companies and organizations.

You are working with the Bank of Canada, Canada’s central bank, to introduce AI to their macroeconomic forecasting. Could you describe this relationship, and how your platform is being used?

The Bank of Canada is an awesome customer – brilliant people and macroeconomic experts. We started 18 months or so ago, introducing AI into the set of technology choices the bank's team would have at their disposal. We started with predictions of quarterly GDP for Canada. That went well, and now we are expanding the dataset used in the AI-based forecasts to increase accuracy. To do this, we developed an AI optimizer, which automates the thousands of choices facing a data scientist when they carry out a modelling exercise. Macroeconomic time series require a very sophisticated approach when you are dealing with decades of data, all of which may have an impact on overall GDP. The AI Optimizer was so successful that we decided to incorporate it into Cerebri AI's standard CCX platform offering, and it will be used in all future engagements. Amazing technology – one of the reasons we have filed 24 patents to date.
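To give a flavor of what automating those modelling choices can look like (a generic, hypothetical sketch; not Cerebri AI's AI Optimizer), the example below searches over model families and hyperparameters with time-series-aware cross-validation and keeps whichever candidate forecasts best. All data and names are invented.

```python
# Hypothetical sketch of automating modelling choices via cross-validated search
# (a generic stand-in for the idea of an "AI optimizer").
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit

def pick_best_model(X, y):
    """Search over model families and hyperparameters with time-series-aware CV."""
    cv = TimeSeriesSplit(n_splits=5)  # respects the temporal ordering of the data
    candidates = [
        (Ridge(), {"alpha": [0.1, 1.0, 10.0]}),
        (RandomForestRegressor(random_state=0),
         {"n_estimators": [100, 300], "max_depth": [3, 6]}),
    ]
    best = None
    for model, grid in candidates:
        search = GridSearchCV(model, grid, cv=cv, scoring="neg_mean_squared_error")
        search.fit(X, y)
        if best is None or search.best_score_ > best.best_score_:
            best = search
    return best.best_estimator_, -best.best_score_

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy quarterly indicator matrix and a GDP-growth-like target.
    X = rng.normal(size=(120, 8))
    y = X[:, :3] @ np.array([0.5, -0.2, 0.1]) + rng.normal(scale=0.1, size=120)
    model, mse = pick_best_model(X, y)
    print(type(model).__name__, round(mse, 4))
```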

Cerebri AI launched CCX v2 in the autumn last year. What is this platform exactly?

Our CCX offering has three components.

Our CCX platform, which consists of a 10-stage software pipeline that our data scientists use to build their models and produce insights. It is also our deployment system, from data intake to our UX and insights. We have several applications in our offering, such as QM for quality management of the entire process, and Audit, which tells users what features drive the insights they are seeing.

Then we have the insights themselves, which are generated by our modelling technology. Our flagship insight is Cerebri Values: a customer's commitment to your brand, which is, in effect, a measure of how much money a customer is willing to spend in the future on a brand's products and services.

We derive a host of customer engagement and revenue KPI insights from our core offering, and we can help with our next-best-action sets to drive engagement, up-selling, cross-selling, reducing churn, etc.

You sat down to interview representatives from four major faith traditions in the world today — Islam, Hinduism, Judaism and Christianity. Have your views of the world shifted since these interviews, and is there one major insight that you would like to share with our readers during the current pandemic?

Diversity matters. Not because it is a goal in and of itself, but because treating anyone in anything less than a totally equitable manner is just plain stupid. Period. When I was challenged to put in a program to reinforce Cerebri AI’s commitment to diversity, it was apparent to me that what we used to learn as children, in our houses of worship, has been largely forgotten.  So, I decided to ask the faith communities and their leaders in the US to tell us how they think through treating everyone equally. The sessions have proved to be incredibly popular, and we make them available to anyone who wants to use them in their business.

On the pandemic, I have an expert at home: my wife is a world-class epidemiologist. She told me on day one to make sure the people most at risk are properly isolated – she called this epi-101. This did not happen, and the effects have been devastating. Age discrimination is not just an equity problem in the workplace; it is also about how we treat our parents, grandparents, etc., wherever they are residing. We did not distinguish ourselves in the pandemic in how we dealt with nursing home residents, for example – a total disaster in many communities. I live in Texas; we are the 2nd biggest state population-wise, and our pandemic-related deaths per capita rank 40th in the US among all states. Arguably the best in Europe is Germany, with 107 pandemic deaths per million; Texas sits at 77, so our state authorities have done a great job so far.

You’ve stated that a lot of the media focuses on the doom and gloom of AI but does not focus enough on how the technology can be useful to make our lives better. What are your views on some of the improvements in our lives that we will witness from the further advancement of AI?

Our product helps eliminate spam email from the vendors you do business with. Does it get better than that? Just kidding. There are so many fields where AI is helping, it is difficult to imagine a world without AI.

Is there anything else that you would like to share about Cerebri AI?

The sky's the limit, as understanding customer behavior is really only just beginning – enabled for the first time by AI and the massive compute power available in the cloud thanks to Moore's Law.

Thank you for the great interview; readers who wish to learn more should visit Cerebri AI.



Omer Har, Co-Founder and CTO, Explorium – Interview Series


Omer Har is a data science and software engineering veteran with nearly a decade of experience building AI models that drive big businesses forward.

Omer Har is the Co-Founder and CTO of Explorium, a company that offers a first-of-its-kind data science platform powered by augmented data discovery and feature engineering. By automatically connecting to thousands of external data sources and leveraging machine learning to distill the most impactful signals, the Explorium platform empowers data scientists and business leaders to drive decision-making by eliminating the barriers to acquiring the right data and enabling superior predictive power.

When did you first discover that you wanted to be involved in data science?

My interest in data science goes back over a decade, which is about how long I've been practicing and leading data science teams. I started out as a software engineer but was drawn to complex data and algorithmic challenges from early on. I was lucky to have learned the craft at Microsoft Research, which was one of the few places at the time where you could really work on complex applied machine learning challenges at scale.

 

You co-founded Explorium in 2017. Could you discuss the inspiration behind launching this start-up?

Explorium is based on a simple and very powerful need: there is so much data around us that could potentially help build better models, but there is no way to know in advance which data sources are going to be impactful, and how. The original idea came from Maor Shlomo, Explorium's Co-Founder and CEO, who was dealing with unprecedented data variety in his military service and tackling ways to leverage it for decision making and modeling. When the three of us first came together, it was immediately clear to us that this experience echoed the needs we were dealing with in the business world, particularly in fast-growing, data science-driven fields like the advertising and marketing technology unicorns where both I and Or Tamir (Explorium's Co-Founder and COO) were leading growth through data.

Before Explorium, finding relevant data sources that really made an impact — to improve your machine learning model’s accuracy — was a labor-intensive, time-consuming, and expensive process with low chances of success. The reason is that you are basically guessing, and using your most expensive people — data scientists — to experiment. Moreover, data acquisition itself is a complex business process and data science teams usually do not have the ability to commercially engage with multiple data providers.

As a data science leader who was measured by the business impact generated by models, I didn't have the luxury of sending my team on a wild goose chase. As a result, you often prefer to deploy your efforts on things that can have a much lower impact than a relevant new data source, just because they are much more within your realm of control.

 

Explorium recently successfully raised an additional $31M in funding in a Series B round. Have you been surprised at how fast your company has grown?

It has definitely been a rocket-ship ride so far, and you can never take that for granted. I can't say I was surprised by how widespread the need for better data is, but it's always an incredible experience to see the impact you generate for customers and their business. The greatest analytical challenge organizations will face over the next decade is finding the right data to feed their models and automated processes. The right data assets can crown new market leaders, so our growth really reflects the rapidly growing number of customers who realize that and are making data a priority. In fact, in our experience the number of "Data Hunters" – people looking for data as part of their day-to-day job – is growing exponentially.

 

Could you explain what Explorium’s data platform is and what the automated data discovery process is?

Explorium offers an end-to-end data science platform powered by augmented data discovery and feature engineering. We are focused on the "data" part of data science, which means automatically connecting to thousands of external data sources and leveraging machine learning processes to distill the most impactful signals and features. This is a complex, multi-stage process that starts by connecting to a myriad of contextually relevant sources in what we call the Explorium data catalog. Then we automate the process of exploring this interconnected data variety by testing hundreds of thousands of ideas for meaningful features and signals to create the optimal feature set, building models on top of it, and serving them to production in flexible ways.
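To give a flavor of what "testing ideas for meaningful features" can look like in practice, here is a simplified, hypothetical sketch that ranks candidate external features by how much they lift a model's cross-validated AUC. It illustrates the general idea only; it is not Explorium's pipeline, and the data, sources, and names are invented.

```python
# Illustrative sketch of scoring candidate external features by model lift
# (a simplified stand-in for augmented data discovery).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

def feature_lift(X_base, y, candidates, cv=5):
    """Rank candidate feature columns by how much they improve cross-validated AUC."""
    model = GradientBoostingClassifier(random_state=0)
    base_auc = cross_val_score(model, X_base, y, cv=cv, scoring="roc_auc").mean()
    lifts = {}
    for name, col in candidates.items():
        X_aug = np.column_stack([X_base, col])
        auc = cross_val_score(model, X_aug, y, cv=cv, scoring="roc_auc").mean()
        lifts[name] = auc - base_auc
    return base_auc, dict(sorted(lifts.items(), key=lambda kv: -kv[1]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 1000
    X_base = rng.normal(size=(n, 3))            # internal features
    signal = rng.normal(size=n)                 # hidden driver of the label
    y = (signal + 0.5 * X_base[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)
    candidates = {
        "external_geo_feature": signal + rng.normal(scale=0.3, size=n),  # relevant
        "external_noise_feature": rng.normal(size=n),                    # irrelevant
    }
    base, ranked = feature_lift(X_base, y, candidates)
    print(f"baseline AUC: {base:.3f}")
    print(ranked)  # the relevant external source should show the largest lift
```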

By automating the search for the data you need, not just the data you have internally, the Explorium platform is doing to data science what search engines did for the web — we are scouring, ranking, and bringing you the most relevant data for the predictive question at hand.

This empowers data scientists and business leaders to drive decision-making by eliminating the barriers to acquiring the right data and enabling superior predictive power.

 

What types of external data sources does Explorium tap into?

We hold access to thousands of sources across pretty much any data category you can think of including company, geospatial, behavioral, time-based, website data, and more. We have multiple expert teams that specialize in data acquisition through open, public, and premium sources, as well as partnerships. Our access to unique talent out of Israel’s top intelligence and technology units brings substantial know-how and experience in leveraging data variety for decision making.

 

How does Explorium use machine learning to understand which types of data are relevant to clients?

This is part of our “secret sauce” so I can’t dive in, but on a high level, we use machine learning to understand the meaning behind the different parts of your datasets and employ constantly improving algorithms to identify which sources in our evolving catalog are potentially relevant. By actually connecting these sources to your data, we are able to perform complex data discovery and feature engineering processes, specifically designed to be effective for external and high-dimensional data, to identify the most impactful features from the most relevant sources. Doing it all in the context of machine learning models makes the impact statistically measurable and allows us to constantly learn and improve our matching, generation, and discovery capabilities.

 

One of the solutions that is offered is mitigating application fraud risk for online lenders by using augmented data discovery. Could you go into details on how this solution works?

Lending is all about predicting and mitigating risk — whether it comes from the borrower’s ability to repay the loan (e.g. financial performance) or their intention to do so (e.g. fraud). Loan applications are inherently a tradeoff between the lender’s desire to collect more information and their ability to compete with other providers, as longer and more cumbersome questionnaires have lower completion rates, are biased by definition, and so on.

With Explorium, both incumbent banks and online challengers are able to automatically augment the application process with external, objective sources that add immediate context and uncover meaningful relationships. Without giving away too much to help fraudsters, you can imagine that in the context of fraud this could mean different behaviors and properties that stand out versus real applicants, if you are able to gather a 360-degree view of them. Everything from online presence, official records, and behavioral patterns on social media to physical footprints leaves breadcrumbs that can be hypothesized and tested as potential features and indicators, if you can access the relevant data and vertical know-how. Simply put, better data ensures better predictive models, which translates into reduced risk and higher revenue on lenders' bottom lines.

In a wider view, since COVID-19 hit on a global scale, we’ve been seeing an increase in new fraud patterns as well as lenders’ need to go back to basics, as the pandemic broke all the models. No one really took this sort of a “Black Swan” event into account, and part of our initial response to help these companies has been generating custom signals that help assess business risk in these uncertain and dynamic times.

You can read more about it in an excellent post written by Maor Shlomo, Explorium Co-Founder and CEO.

Thank you for the great interview; readers who wish to learn more should visit Explorium.
