
Ian Wong, Co-founder & CTO of Opendoor – Interview Series


Can you summarize the concept behind Opendoor, and how it differs from competitors such as Zillow?

Opendoor gives people a simple and convenient way to buy, sell and trade in homes. We’re turning a fragmented, inflexible real estate model into an end-to-end, digital and on-demand experience. As the pioneer of “iBuying,” Opendoor has served over 70,000 customers to date and expanded to 21 U.S. markets.

Opendoor is able to provide near-instant fair market values for homes using a proprietary valuation model that leverages first- and third-party data, along with machine learning, AI and human review. With just a few taps in the Opendoor app, sellers can receive an offer from Opendoor within 24 hours. Selling to Opendoor provides more choice and certainty, as homeowners can choose their move-out date and avoid the hassle and stress of home showings and repairs.

In addition, we’ve begun solving for other pain points in the home transaction with the launch of a new product that reimagines the home buying process, the launch of a home loans business and the acquisition of a title and escrow company. Our goal is to make moving seamless, on-demand and stress-free.


What was it that attracted you to Opendoor?

We have the chance to reimagine the real-estate transaction, thereby redefining people’s relationships with their largest asset. What if, instead of treating the home as a liability, homeowners could tap into the liquidity it affords in the same way you and I can withdraw from our checking accounts? What if buyers and sellers could skip months of stress and uncertainty, and become more confident moving forward with the next chapter of their lives? The vision of enabling more geographic mobility and financial freedom is super exciting, and it feels like we’re just embarking on that journey.


Opendoor analyzes a large collection of historical market transactions. What type of data points are you assembling?

Accurate real estate data with the level of granularity we need is not easy to come by. We use a combination of large proprietary and third-party data sets to understand historical market transactions, including listing-level and home-level details. This means we look at common data points from a listing, like the sale date and price and when the home was listed, as well as data points about individual homes, like the number of bedrooms and bathrooms, kitchen attributes or square footage. On top of this, we incorporate features that denote a home’s quality or uniqueness, allowing us to better select comparables and ultimately price the home as accurately as possible. We also take into account similar data from homes currently on the market. Ultimately, these data points help us predict the fair market value of a home and the amount of time it will likely take to resell the home.
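To make that concrete, here is a hypothetical sketch of the kind of listing-level record such a model might consume; the field names are illustrative assumptions, not Opendoor’s actual schema.

```python
# Hypothetical listing record (field names are assumptions, not Opendoor's schema).
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ListingRecord:
    home_id: str
    list_date: date
    sale_date: Optional[date]     # None while the home is still on the market
    list_price: float
    sale_price: Optional[float]   # None until the home sells
    bedrooms: int
    bathrooms: float
    square_footage: int
    kitchen_quality: int          # e.g. an operator-assessed 1-5 score
    lot_uniqueness: float         # e.g. a learned score of how atypical the home is
```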


Opendoor also analyzes homes that are taken off the market without transacting. How is this data used differently compared to homes that have sold?

We look at similar data for both active homes and homes that are taken off the market without transacting — homes we call “delistings.” Our data set looks at a variety of home-level and listing-level details, including square footage and list price, for each transaction. We examine those insights for delistings, but do not get to observe our target variable of days-on-market. Additionally, we look at the market holistically to understand supply and demand. By incorporating non-transacted listings, we’re able to have a more comprehensive picture of the market.


Opendoor uses ensembling as part of its house pricing. Can you explain what ensembling is and how Opendoor uses this technique?

When a buyer wants to buy a house or a seller decides to list their home on the market, the way they determine the home’s value will depend on why they are buying or selling. And this can be very different depending on the buyer and seller type. We incorporate this in our model to understand how buyers and sellers view the market, which is where ensembling comes in. Ensembling allows us to use different pricing models together to compute a weighted average of home values. Some models may weigh certain variables differently than others. We’ve found that ensembling generally results in more accurate pricing than any single model.
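As a rough illustration of that weighted-average idea (a toy sketch under simple assumptions, not Opendoor’s actual models), each component model produces its own estimate and the ensemble blends them:

```python
# Toy ensemble: blend the estimates of several hypothetical pricing models
# with weights that reflect how much we trust each one.
def ensemble_price(home, models, weights):
    """models: callables returning a price estimate; weights: same length, summing to 1."""
    return sum(w * m(home) for m, w in zip(models, weights))

# Hypothetical component models (stand-ins for real ones)
comps_model   = lambda home: 310_000   # e.g. comparable-sales based estimate
hedonic_model = lambda home: 298_000   # e.g. regression on home attributes
ml_model      = lambda home: 305_000   # e.g. gradient-boosted model

print(ensemble_price({}, [comps_model, hedonic_model, ml_model], [0.5, 0.2, 0.3]))  # 306100.0
```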


Opendoor imports big data from various sources, which can be challenging because of how the data was originally labeled or formatted. Opendoor uses a Markov Random Field to help with this issue. Can you explain what this is?

The challenge stems from mutations in the text data, from abbreviations and misspellings to inconsistent ordering of words and numeric spellings. Poor quality data impacts our home valuation models, which is why we implemented a mathematical approach to help standardize text and improve the quality of labels. A Markov Random Field enables us to score all labels jointly and more accurately interpret characteristics like subdivisions. The score of each labeling comes from two different components: 1) how well the final labels relate to the original text and 2) how spatially continuous the labels are among neighbors. With the mathematics of Markov random fields, we make the data more than just the sum of its parts.
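A minimal sketch of that joint scoring idea (illustrative only, with made-up subdivision labels and a simple iterated-conditional-modes solver rather than Opendoor’s actual implementation):

```python
# Each home has a raw subdivision string; we pick a canonical label by balancing
# text similarity against agreement with spatial neighbors, solved with a few
# iterated-conditional-modes (ICM) sweeps over a simple Markov Random Field.
from difflib import SequenceMatcher

CANONICAL = ["Sunset Ridge", "Sunset Ridge Estates", "Oak Hollow"]  # hypothetical label set

def unary(raw: str, label: str) -> float:
    """How well a candidate label explains the raw (possibly misspelled) text."""
    return SequenceMatcher(None, raw.lower(), label.lower()).ratio()

def pairwise(label: str, neighbor_label: str, weight: float = 0.5) -> float:
    """Reward spatially continuous labels: neighbors tend to share a subdivision."""
    return weight if label == neighbor_label else 0.0

def relabel(raw_text, neighbors, n_sweeps=5):
    """raw_text: {home_id: raw string}; neighbors: {home_id: [adjacent home_ids]}."""
    labels = {h: max(CANONICAL, key=lambda c: unary(t, c)) for h, t in raw_text.items()}
    for _ in range(n_sweeps):
        for h, raw in raw_text.items():
            labels[h] = max(CANONICAL, key=lambda c: unary(raw, c)
                            + sum(pairwise(c, labels[n]) for n in neighbors.get(h, [])))
    return labels

print(relabel({"a": "sunsett ridge", "b": "snst rdg", "c": "oak hollow"},
              {"a": ["b"], "b": ["a"], "c": []}))
```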


You use a technique called survival analysis to model the average holding time of a home that is listed for sale. What is survival analysis, and how does it apply in Opendoor’s case?

Fundamentally, we need to understand liquidity on a per home basis, and be able to update our view of the liquidity profile of a home as we get more information. Survival analysis is a statistical method that analyzes the anticipated amount of time it will take until one or more events happen. In our case, we use survival analysis to help us understand and predict how long a house will take to sell. Using this method, we dramatically improve our ability to respond to evolving market conditions, and more accurately predict our unit economics. This helps us determine a risk threshold for each home and make smarter investments, which is vital to our business.
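For illustration, a basic Kaplan-Meier estimate of days-on-market can be sketched with the open-source lifelines library, treating delistings as right-censored observations (the column names are hypothetical, and this is not Opendoor’s model):

```python
# Illustrative only: estimate how long listings take to sell, where delistings
# are right-censored observations. Assumes the `lifelines` library and a
# DataFrame with hypothetical columns `days_on_market` and `sold`.
import pandas as pd
from lifelines import KaplanMeierFitter

listings = pd.DataFrame({
    "days_on_market": [12, 45, 30, 90, 21, 60],
    "sold":           [1,  1,  0,  1,  1,  0],   # 0 = taken off market without selling
})

kmf = KaplanMeierFitter()
kmf.fit(durations=listings["days_on_market"], event_observed=listings["sold"])

print(kmf.median_survival_time_)   # typical days-on-market estimate
print(kmf.predict(30))             # probability a listing is still unsold at day 30
```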


There are often factors that affect the value of a home which are very location dependent, such as road noise. How do you use machine learning to account for such factors in your valuation model?

The Opendoor Valuation Model (OVM) combines machine intelligence with human expertise to provide accurate and competitive offers, taking less apparent factors, like road noise, into account. To do so, we rely on our human operators to identify variables and our machines to predict how much they matter in the pricing algorithm. OpenStreetMap (OSM) is a freely available data set for road geometries and helps us identify homes adjacent to roads. We also look for previous human adjustments on homes to compute the average adjustment value. We’re able to refine these values with scale, and as we collect more human adjustment data for markets, the data set grows and improves the OVM performance. Most importantly, we enrich readily available third-party data with our own proprietary data. As a result, the overall location dependent signals improve dramatically over time.
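A simplified sketch of how a road-noise signal like this could be wired up (the threshold, helper names, and data shapes are assumptions, not Opendoor’s code): flag homes that sit close to a road geometry, then apply the average of past human adjustments.

```python
# Toy road-noise feature: distance from a home to road segments (projected
# coordinates in meters, e.g. derived from OpenStreetMap geometries), plus the
# average of historical human price adjustments for road-adjacent homes.
from math import hypot

NOISE_RADIUS_M = 60.0   # hypothetical "adjacent to road" threshold

def dist_point_to_segment(p, a, b):
    """Shortest distance from point p to segment ab (x/y in meters)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return hypot(px - (ax + t * dx), py - (ay + t * dy))

def near_major_road(home_xy, road_segments):
    return any(dist_point_to_segment(home_xy, a, b) <= NOISE_RADIUS_M
               for a, b in road_segments)

def road_noise_adjustment(home_xy, road_segments, past_adjustments):
    """past_adjustments: historical human adjustments (fractions of price)."""
    if not near_major_road(home_xy, road_segments) or not past_adjustments:
        return 0.0
    return sum(past_adjustments) / len(past_adjustments)   # e.g. -0.02 => -2% on the estimate
```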


Is there anything else that you would like to share about Opendoor?

What makes working at Opendoor particularly special for me is that we’re using technology, data science and operational excellence to help solve real world pain points for millions of consumers. This marriage of the online and offline worlds has never been done and comes with lots of new and interesting challenges.

To learn more, visit Opendoor.


Antoine Tardif is a futurist who is passionate about the future of AI and robotics. He is the CEO of BlockVentures.com and has invested in over 50 AI & blockchain projects. He is also the Co-Founder of Securities.io, a news website focusing on digital securities, and a founding partner of unite.ai.


Three Uses Of Automation Within Supply Chain 4.0


The increased availability of advanced technologies has revolutionized the traditional supply chain model. Supply Chain 4.0 responds to modern customer expectations by relying heavily on the Internet of Things (IoT), advanced robotics, big data analytics, and blockchain. These tools enable automation and thus give organizations a chance to close information gaps and optimally match supply and demand.

“The reorganization of supply chains […] is transforming the model of supply chain management from a linear one, in which instructions flow from supplier to producer to distributor to consumer, and back, to a more integrated model in which information flows in an omnidirectional manner to the supply chain.” – Understanding Supply Chain 4.0 and its potential impact on global value chains

Industry giants like Netflix, Tesla, UPS, Amazon, and Microsoft rely heavily on automation within their supply chain to lead their respective industries. Let us take a closer look at three powerful automation use cases.

Three Uses Of Automation Within Supply Chain 4.0:

1. Managing demand uncertainty

A painful aspect of supply chain ecosystems is demand uncertainty and the inability to accurately forecast demand. Generally, this leads to a set of performance issues, from increased operational cost to excess inventory and suboptimal production capacity. Automation tools can forecast demand, reduce uncertainty, and thus improve operational efficiency at each step along the supply chain.

Big data analytics is an established tool that helps organizations manage demand uncertainty. It consists of data collection & aggregation infrastructure combined with powerful ML algorithms, designed to forecast demand based on historical (or even real-time) data. Modern storage solutions (such as data lakes) make it possible to aggregate data from a variety of sources: market trends, competitor information, and consumer preferences. 

Machine learning (ML) algorithms continually analyze this rich data to find new patterns, improve the accuracy of demand forecasting, and enhance operational efficiency. This is the recipe that Amazon uses to predict demand for a product before customers purchase it, so it can be stocked in their warehouses. By examining tweets and posts on websites and social media, they understand customer sentiments about products and have a data-based way to model demand uncertainty.
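As a hedged, minimal example of this kind of forecasting (not Amazon’s system; the file and column names are assumptions), a model can learn next-period demand from simple lag features:

```python
# Forecast weekly demand from historical sales using lag features and a
# gradient-boosted regressor. Column names are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

sales = pd.read_csv("weekly_sales.csv")            # hypothetical: columns ["week", "units_sold"]
for lag in (1, 2, 4):                              # demand in recent weeks as predictors
    sales[f"lag_{lag}"] = sales["units_sold"].shift(lag)
sales = sales.dropna()

features = [c for c in sales.columns if c.startswith("lag_")]
train, test = sales.iloc[:-12], sales.iloc[-12:]   # hold out the last 12 weeks

model = GradientBoostingRegressor().fit(train[features], train["units_sold"])
forecast = model.predict(test[features])           # predicted weekly demand
```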

The good news is that such powerful analytics tools are not restricted to industry giants anymore. Out-of-the-box solutions (such as Amazon Forecast) make such capabilities widely available to all organizations that wish to handle demand uncertainty. 

2. Managing process uncertainties

Organizations operating in today’s supply chain industry need to handle increasingly complex logistics processes. The competitive environment, together with ever-increasing customer expectations, makes it imperative to minimize uncertainties across all areas of supply chain management.

From production and inventory, to order management, packing, and shipping of goods, automation tools can tackle uncertainties and minimize process flaws. AI, robotics, and IoT are well-known methods that facilitate an optimal flow of resources, minimize delays, and promote optimized production schedules.

The Internet of Things (IoT) plays an important role in overcoming process uncertainties in the supply chain. One major IoT application is the accurate tracking of goods and assets. IoT sensors are used for tracking in the warehouse and during the loading, in-transit, and unloading phases. This enables applications such as live monitoring, which increases process visibility and enables managers to act on real-time information. It also makes it possible to further optimize a variety of other processes, from loading operations to payment collection.
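To illustrate the live-monitoring idea (a toy event format, not any specific vendor’s API), sensor check-ins can be modeled as events, and shipments whose last check-in is overdue can be flagged for managers in real time:

```python
# Toy tracking event and a simple rule that surfaces shipments with stale data.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class TrackingEvent:
    shipment_id: str
    phase: str                 # "warehouse", "loading", "in-transit", "unloading"
    location: tuple            # (lat, lon) reported by the sensor
    timestamp: datetime

def overdue_shipments(latest_events, now, max_silence=timedelta(hours=4)):
    """latest_events: {shipment_id: most recent TrackingEvent}."""
    return [sid for sid, ev in latest_events.items() if now - ev.timestamp > max_silence]
```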

IoT increases process visibility and enables managers to act on real-time information. Source: Canva

Since 2012, Amazon fulfillment warehouses have used AI-powered robots that do real magic. One can see robots and humans working side by side through wireless communication, handling orders that are unique in size, shape, and weight. Thousands of Wi-Fi-connected robots gather merchandise for each individual order. These robots have two powered wheels that let them rotate in place, infrared sensors for obstacle detection, and built-in cameras to read QR codes on the ground. Robots use these QR codes to determine their location and direction. In this way, efficiency is increased, the physical workload of employees is reduced, and process uncertainty is kept to a minimum.

Another example of how automation helps make process improvements comes from vehicle transport company CFR Rinkens. They have utilized automation in their accounting and billing departments to speed up payment processing. Through automatically created invoices, they have decreased costs and errors, which in turn reduces delays.

“An area of need that we applied automation was within the accounting department for billing and paying vendors. With tons of invoices coming in and out, automation here ensures nothing falls through the cracks, and clients receive invoices on time providing them with enough time to process payment.” – Joseph Giranda, CFR Rinkens

The biggest benefit of automation is transparency. Each step of an organized supply chain eliminates grey areas for both clients and businesses. 

3. Synchronization among supply chain partners and customers

Digital supply chains are characterized by synchronization among hundreds of departments, vendors, suppliers, and customers. In order to orchestrate activities all the way from planning to execution, supply chains require information to be collected, analyzed, and utilized in real-time. A sure way to achieve a fully synchronized supply chain is to leverage the power of automation. 

CFR Rinkens uses a dynamic dashboard to keep track of cargo as it delivers vehicles across the world. This dashboard is automatically updated with relevant information that increases transparency and efficiency. High transparency allows for excellent customer service and satisfaction.

“Upon a vehicle’s arrival, images are taken and uploaded onto a CFR dashboard that our clients are able to access. All vehicle documents, images, and movements are automatically displayed within this dashboard. This automation helps on the customer service side because it allows for full transparency and accountability for quality control, delivery window times, and real-time visibility.” – Joseph Giranda, CFR Rinkens

Automation offers an effective solution to the synchronization issue with blockchain. Blockchain is a distributed digital ledger with many applications and can be used for any exchange, tracking, or payment. Blockchain allows information to be instantly visible to all supply chain partners and enables a multitude of applications. Documents, transactions, and goods can easily be tracked. Payments and pricing can also be historically recorded, all in a secure and transparent manner.
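A toy sketch of the distributed-ledger idea (illustrative only, not a production blockchain): each supply chain event becomes a block whose hash covers the previous block, so any partner can verify that the shared history has not been altered.

```python
# Minimal hash-chained ledger of supply chain events.
import hashlib, json, time

def make_block(event: dict, prev_hash: str) -> dict:
    block = {"event": event, "timestamp": time.time(), "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def verify(chain: list) -> bool:
    """Recompute every hash and check each block points at its predecessor."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block({"cargo": "vehicle-123", "status": "loaded"}, prev_hash="0")]
chain.append(make_block({"cargo": "vehicle-123", "status": "in-transit"}, chain[-1]["hash"]))
print(verify(chain))   # True until any recorded event is tampered with
```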

Digital Supply Chains increase transparency and efficiency. Source: Canva

The shipping giant FedEx has joined the Blockchain in Transport Alliance (BiTA) and launched a blockchain-powered pilot program to help solve customer disputes. Similarly, UPS joined BiTA as early as 2017, aiming for increased transparency and efficiency across its entire partner network. Such real-life use cases show the potential of blockchain technology and the impact that automation can have on the entire freight industry.

Blockchain increases the transparency of the supply chain and removes information latency for all partners on the network. The resulting benefits include increased productivity and operational efficiency as well as better service levels. Its massive potential makes blockchain a top priority for supply chain organizations and their digital automation journey.

Conclusion

Automation is playing a major role in defining the Supply Chain 4.0 environment. With powerful technological tools available to them, leading organizations are taking serious leaps towards efficiency and productivity. Automation gives them the power to accelerate and optimize the whole end-to-end supply chain journey. It also enables them to use data to their advantage and close information gaps across their network.

Where To Go From Here?

Data can be the obstacle or the solution to realizing all these potential benefits. Fortunately, expert help is easy to reach. Blue Orange Digital, a top-ranked AI development agency in NYC, specializes in cloud data storage solutions and supply chain optimization. They provide custom solutions to meet each business’s unique needs, but also have many pre-built options for supply chain leaders. From a technology point of view, we have outlined several different ways to improve the efficiency of the supply chain. Taken together, these improvements give you Supply Chain 4.0.

All images source: Canva



Jean Belanger, Co-Founder & CEO at Cerebri AI – Interview Series


Jean Belanger is the Co-Founder & CEO of Cerebri AI, a pioneer in artificial intelligence and machine learning and the creator of Cerebri Values™, the industry’s first universal measure of customer success. Cerebri Values quantifies each customer’s commitment to a brand or product and dynamically predicts “Next Best Actions” at scale, which enables large companies to focus on accelerating profitable growth.

What was it that initially attracted you to AI?

Cerebri AI is my second data science startup. My first used operations research modelling to optimize order processing for major retail and ecommerce operations; four of the top 10 US retailers, including Walmart, used our technology. AI has a huge advantage, which really attracted me: models learn, which means they are more scalable, which in turn means we can build and scale awesome technology that really, really adds value.

Can you tell us about your journey to become a co-founder of Cerebri AI?

I was mentoring at a large accelerator here in Austin, Texas – Capital Factory – and I was asked to write the business plan for Cerebri AI.  So, I leveraged my experience of doing data science, with over 80 data science-based installs using our technology. Sometimes you just need to go for it.

What are some of the challenges that enterprises currently face when it comes to CX and customer/brand relationships?

The simple answer is that every business tries to understand their customers’ behavior, so they can satisfy their needs. You cannot get into someone’s head to sort out why they buy a product or service when they do, so brands must do the best they can: surveys, tracking market share, or measuring market segmentation. There are thousands of ways of tracking or understanding customers. However, the underlying basis for everything is rarely thought about, and that is Moore’s Law. More powerful, cheaper semiconductors and processors from Intel, Apple, Taiwan Semi, and others make our modern economy work at a far more compute-intensive level than a few years ago. Today, the cost of cloud computing and memory resources makes AI doable. AI is VERY compute intensive. Things that were not possible even five years ago can now be done. In terms of customer behavior, we can now process all the info and data that we have digitally recorded in one customer journey per customer. So, customer behavior is suddenly much easier to understand and react to. This is key, and that is the future of selling products and services.

Cerebri AI personalizes the enterprise by combining machine learning and cloud computing to enhance brand commitment. How does the AI increase brand commitment?

When Cerebri AI looks at a customer, the first thing we establish is their commitment to the brand we are working with. We define commitment to the brand as the customer’s willingness to spend in the future. It’s fine to be in business and have committed customers, but if they do not buy your goods and services, then in effect, you are out of business. The old saying goes: if you cannot measure something, you cannot improve it. Now we can measure commitment and other key metrics, which means we can use our data monitoring tools and study a customer’s journey to see what works and what does not. Once we find a tactic that works, our campaign building tools can instantly build a cohort of customers that might be similarly impacted. All of this is impossible without AI and the cloud infrastructure at the software layer, which allows us to move in so many directions with customers.

What type of data does Cerebri collect? Or use within its system? How does this comply with PII (Personally Identifiable Information) restrictions?

Until now, we have only operated behind the customer’s firewall, so PII has not been an issue. We are going to open a direct-access website in the fall, so that will require the use of anonymized data. We are excited about the prospect of bringing our advanced technology to a broader array of companies and organizations.

You are working with the Bank of Canada, Canada’s central bank, to introduce AI to their macroeconomic forecasting. Could you describe this relationship, and how your platform is being used?

The Bank of Canada is an awesome customer: brilliant people and macroeconomic experts. We started 18 months or so ago, introducing AI into the technology choices the bank’s team would have at their disposal. We started with predictions of quarterly GDP for Canada. That was great, and now we are expanding the dataset used in the AI-based forecasts to increase accuracy. To do this, we developed an AI optimizer, which automates the thousands of choices facing a data scientist when they carry out a modelling exercise. Macroeconomic time series require a very sophisticated approach when you are dealing with decades of data, all of which may have an impact on overall GDP. The AI Optimizer was so successful that we decided to incorporate it into Cerebri AI’s standard CCX platform offering. It will be used in all future engagements. Amazing technology, and one of the reasons we have filed 24 patents to date.
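To give a flavor of what automating those modelling choices can look like (a hedged sketch with hypothetical file and column names, not Cerebri AI’s AI Optimizer), one can search over lag depths and model families, scoring each combination with time-series cross-validation:

```python
# Automated search over modelling choices for a quarterly GDP series.
import pandas as pd
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import TimeSeriesSplit, cross_val_score

gdp = pd.read_csv("quarterly_gdp.csv")            # hypothetical: column ["gdp_growth"]

def make_lagged(series, n_lags):
    df = pd.DataFrame({f"lag_{i}": series.shift(i) for i in range(1, n_lags + 1)})
    df["target"] = series
    return df.dropna()

best = None
for n_lags in (2, 4, 8):                          # choices a data scientist would normally try by hand
    data = make_lagged(gdp["gdp_growth"], n_lags)
    X, y = data.drop(columns="target"), data["target"]
    for model in (Ridge(), RandomForestRegressor(n_estimators=200)):
        score = cross_val_score(model, X, y, cv=TimeSeriesSplit(n_splits=5),
                                scoring="neg_mean_absolute_error").mean()
        if best is None or score > best[0]:
            best = (score, n_lags, model)

print(best)   # the lag depth / model pair with the lowest forecast error
```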

Cerebri AI launched CCX v2 in the autumn last year. What is this platform exactly?

Our CCX offering has three components.

1. Our CCX platform, which consists of a 10-stage software pipeline that our data scientists use to build their models and product insights. It is also our deployment system, from data intake to our UX and insights. We have several applications in our offering, such as QM for quality management of the entire process, and Audit, which tells users what features drive the insights they are seeing.

2. Our insights themselves, which are generated from our modelling technology. Our flagship insight is Cerebri Values, a customer’s commitment to your brand, which is, in effect, a measure of how much money a customer is willing to spend in the future on a brand’s products and services.

3. A host of customer engagement and revenue KPI insights derived from our core offering, along with our next best action sets that help drive engagement, up-selling, cross-selling, reducing churn, etc.

You sat down to interview representatives from four major faith traditions in the world today — Islam, Hinduism, Judaism and Christianity. Have your views of the world shifted since these interviews, and is there one major insight that you would like to share with our readers during the current pandemic?

Diversity matters. Not because it is a goal in and of itself, but because treating anyone in anything less than a totally equitable manner is just plain stupid. Period. When I was challenged to put in a program to reinforce Cerebri AI’s commitment to diversity, it was apparent to me that what we used to learn as children, in our houses of worship, has been largely forgotten.  So, I decided to ask the faith communities and their leaders in the US to tell us how they think through treating everyone equally. The sessions have proved to be incredibly popular, and we make them available to anyone who wants to use them in their business.

On the pandemic, I have an expert at home. My wife is a world-class epidemiologist. She told me on day one: make sure the people most at risk are properly isolated; she called this epi-101. This did not happen. The effects have been devastating. Age discrimination is not just an equity problem at work; it is also about how we treat our parents, grandparents, etc., wherever they are residing. We did not distinguish ourselves in the pandemic in how we dealt with nursing home residents, for example, a total disaster in many communities. I live in Texas, the second-biggest state by population, and our pandemic-related deaths per capita rank 40th in the US among all states. Arguably the best in Europe is Germany, with 107 pandemic deaths per million; Texas sits at 77, so our state authorities have done a great job so far.

You’ve stated that a lot of the media focuses on the doom and gloom of AI but does not focus enough on how the technology can be useful to make our lives better. What are your views on some of the improvements in our lives that we will witness from the further advancement of AI?

Our product helps eliminate spam email from the vendors you do business with. Does it get better than that? Just kidding. There are so many fields where AI is helping, it is difficult to imagine a world without AI.

Is there anything else that you would like to share about Cerebri AI?

The sky’s the limit, as understanding customer behavior is really only just beginning, enabled for the first time by AI and the totally massive compute power available on the cloud thanks to Moore’s Law.

Thank you for the great interview. Readers who wish to learn more should visit Cerebri AI.



Omer Har, Co-Founder and CTO, Explorium – Interview Series


Omer Har is a data science and software engineering veteran with nearly a decade of experience building AI models that drive big businesses forward.

Omer Har is the Co-Founder and CTO of Explorium, a company that offers a first-of-its-kind data science platform powered by augmented data discovery and feature engineering. By automatically connecting to thousands of external data sources and leveraging machine learning to distill the most impactful signals, the Explorium platform empowers data scientists and business leaders to drive decision-making by eliminating the barrier to acquiring the right data and enabling superior predictive power.

When did you first discover that you wanted to be involved in data science?

My interest in data science goes back over a decade, which is about how long I’ve been practicing and leading data science teams. I started out as a software engineer but was drawn to complex data and algorithmic challenges from early on. I was lucky to have learned the craft at Microsoft Research, which was one of the few places at the time where you could really work on complex applied machine learning challenges at scale.


You Co-Founded Explorium in 2017, could you discuss the inspiration behind launching this start-up?

Explorium is based on a simple and very powerful need — there is so much data around us that could potentially help build better models, but there is no way to know in advance which data sources are going to be impactful, and how. The original idea came from Maor Shlomo, Explorium Co-founder and CEO, who was dealing with unprecedented data variety in his military service and tackling ways to leverage it for decision making and modeling. When the three of us first came together, it was immediately clear to us that this experience echoes the needs we were dealing with in the business world, particularly in fast-growing, data science-driven fields like advertising and marketing technology unicorns, where both I and Or Tamir (Explorium Co-founder and COO) were leading growth through data.

Before Explorium, finding relevant data sources that really made an impact — to improve your machine learning model’s accuracy — was a labor-intensive, time-consuming, and expensive process with low chances of success. The reason is that you are basically guessing, and using your most expensive people — data scientists — to experiment. Moreover, data acquisition itself is a complex business process and data science teams usually do not have the ability to commercially engage with multiple data providers.

As a data science leader that was measured by business impact generated by models, I didn’t have the luxury of sending my team on a wild goose chase. As a result, you often prefer to deploy your efforts on things that can have a much lower impact than a relevant new data source, just because they are much more within your realm of control.


Explorium recently successfully raised an additional $31M in funding in a Series B round. Have you been surprised at how fast your company has grown?

It has definitely been a rocket ship ride so far, and you can never take that for granted. I can’t say I was surprised by how widespread the need for better data is, but it’s always an incredible experience to see the impact you generate for customers and their business. The greatest analytical challenge organizations will face over the next decade is finding the right data to feed their models and automated processes. The right data assets can crown new market leaders, so our growth really reflects the rapidly growing number of customers that realize that and are making data a priority. In fact, the number of “Data Hunters” — people looking for data as part of their day to day job — is growing exponentially in our experience.


Could you explain what Explorium’s data platform is and what the automated data discovery process is?

Explorium offers an end-to-end data science platform powered by augmented data discovery and feature engineering. We are focused on the “data” part of data science — which means automatically connecting to thousands of external data sources and leveraging machine learning processes to distill the most impactful signals and features. This is a complex and multi-stage process, which starts by connecting to a myriad of contextually relevant sources in what we call the Explorium data catalog. Then we automate the process that explores this interconnected data variety, by testing hundreds of thousands of ideas for meaningful features and signals to create the optimal feature set, build models on top of it, and serve them to production in flexible ways.

By automating the search for the data you need, not just the data you have internally, the Explorium platform is doing to data science what search engines did for the web — we are scouring, ranking, and bringing you the most relevant data for the predictive question at hand.

This empowers data scientists and business leaders to drive decision-making by eliminating the barrier to acquiring the right data and enabling superior predictive power.
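As a greatly simplified sketch of that discovery loop (hypothetical file, column, and threshold choices, not Explorium’s pipeline), each candidate external source can be joined onto the core training table and kept only if it measurably lifts cross-validated performance:

```python
# Rank candidate external data sources by the model-performance lift they add.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

core = pd.read_csv("training_data.csv")                 # hypothetical: has "key" and "target" columns
candidate_sources = {
    "geospatial": pd.read_csv("geo_features.csv"),      # hypothetical external tables keyed by "key"
    "company":    pd.read_csv("company_features.csv"),
}

def cv_auc(df):
    X, y = df.drop(columns=["key", "target"]), df["target"]
    return cross_val_score(GradientBoostingClassifier(), X, y, cv=5, scoring="roc_auc").mean()

baseline = cv_auc(core)
kept = []
for name, source in candidate_sources.items():
    enriched = core.merge(source, on="key", how="left").fillna(0)
    lift = cv_auc(enriched) - baseline
    if lift > 0.002:                                    # keep sources with measurable uplift
        kept.append((name, lift))

print(sorted(kept, key=lambda kv: -kv[1]))              # most impactful external sources first
```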


What types of external data sources does Explorium tap into?

We hold access to thousands of sources across pretty much any data category you can think of including company, geospatial, behavioral, time-based, website data, and more. We have multiple expert teams that specialize in data acquisition through open, public, and premium sources, as well as partnerships. Our access to unique talent out of Israel’s top intelligence and technology units brings substantial know-how and experience in leveraging data variety for decision making.


How does Explorium use machine learning to understand which types of data are relevant to clients?

This is part of our “secret sauce” so I can’t dive in, but on a high level, we use machine learning to understand the meaning behind the different parts of your datasets and employ constantly improving algorithms to identify which sources in our evolving catalog are potentially relevant. By actually connecting these sources to your data, we are able to perform complex data discovery and feature engineering processes, specifically designed to be effective for external and high-dimensional data, to identify the most impactful features from the most relevant sources. Doing it all in the context of machine learning models makes the impact statistically measurable and allows us to constantly learn and improve our matching, generation, and discovery capabilities.


One of the solutions that is offered is mitigating application fraud risk for online lenders by using augmented data discovery. Could you go into details on how this solution works?

Lending is all about predicting and mitigating risk — whether it comes from the borrower’s ability to repay the loan (e.g. financial performance) or their intention to do so (e.g. fraud). Loan applications are inherently a tradeoff between the lender’s desire to collect more information and their ability to compete with other providers, as longer and more cumbersome questionnaires have lower completion rates, are biased by definition, and so on.

With Explorium, both incumbent banks and online challengers are able to automatically augment the application process with external and objective sources that add immediate context and uncover meaningful relationships. Without giving away too much to help fraudsters, you can imagine that in the context of fraud this could mean different behaviors and properties that stand out versus real applicants if you are able to gather a 360-view of them. Everything from online presence, official records, behavioral patterns on social media, and physical footprints leave breadcrumbs that could be hypothesized and tested as potential features and indicators if you can access the relevant data and vertical know-how. Simply put, better data ensures better predictive models, which helps translate the reduced risk and higher revenue to lenders’ bottom line.

In a wider view, since COVID-19 hit on a global scale, we’ve been seeing an increase in new fraud patterns as well as lenders’ need to go back to basics, as the pandemic broke all the models. No one really took this sort of a “Black Swan” event into account, and part of our initial response to help these companies has been generating custom signals that help assess business risk in these uncertain and dynamic times.

You can read more about it in an excellent post written by Maor Shlomo, Explorium Co-Founder and CEO.

Thank you for the great interview, readers who wish to learn more should visit Explorium.
