
Liran Hason, Co-Founder & CEO of Aporia – Interview Series


Liran Hason is the Co-Founder and CEO of Aporia, a full-stack ML observability platform used by Fortune 500 companies and data science teams across the world to ensure responsible AI. Aporia integrates seamlessly with any ML infrastructure, whether it’s a FastAPI server on top of Kubernetes, an open-source deployment tool like MLflow, or a machine learning platform like AWS SageMaker.

Prior to founding Aporia, Liran was an ML Architect at Adallom (acquired by Microsoft), and later an investor at Vertex Ventures.

You started coding when you were 10. What initially attracted you to computers, and what were you working on?

It was 1999, and a friend of mine called me and said he had built a website. After typing a 200-character-long address into my browser, I saw a website with his name on it. I was amazed that he had created something on his computer and I was able to see it on my own. This made me super curious about how it worked and how I could do the same. I asked my mom to buy me an HTML book, which was my first step into programming.

I find great joy in taking on tech challenges, and as time went by my curiosity only grew. I learned ASP, PHP, and Visual Basic, and really consumed anything I could.

When I was 13, I was already taking on some freelance jobs, building websites and desktop apps.

When I didn’t have any active work, I worked on my own projects – usually websites and applications aimed at helping other people achieve their goals:

Blue-White Programming – a Hebrew programming language, similar to HTML, that I built after realizing that kids in Israel who don’t have a high level of English are limited or pushed away from the world of coding.

Blinky – My grandparents are deaf and use sign language to communicate with their friends. When video conferencing software like Skype and ooVoo emerged, it enabled them for the first time to talk with friends even when they weren’t in the same room (like we all do with our phones). However, as they can’t hear, they had no way of knowing when they had an incoming call. To help them out, I wrote software that identifies incoming video calls and alerts them by blinking an LED array in a small hardware device I built and connected to their computer.

These are just a few of the projects I built as a teenager. My curiosity never stopped, and I found myself learning C, C++, Assembly, and how operating systems work, really trying to learn as much as I could.

Could you share the story of your journey as a Machine Learning Architect at Microsoft-acquired Adallom?

I started my journey at Adallom following my military service. After 5 years in the army as a Captain, I saw a great opportunity to join an emerging company and market as one of the first employees. The company was led by great founders, whom I knew from my military service, and backed by top-tier VCs like Sequoia. Cloud technologies were still in their relative infancy, and we were building one of the very first cloud security solutions at the time. Enterprises were just beginning to transition from on-premise to cloud, and we saw new industry standards emerge, such as Office 365, Dropbox, Marketo, Salesforce, and others.

Within my first few weeks, I already knew that I wanted to start my own company one day. I really felt, from a tech perspective, that I was up for any challenge thrown my way, and if I couldn’t solve something myself, I knew the right people to help me overcome it.

Adallom needed someone who had in-depth knowledge of the tech but could also be customer-facing. Fast forward about a month, and I was on a plane to the US for the first time in my life, going to meet with people from LinkedIn (pre-Microsoft). A couple of weeks later, they became our first paying customer in the US. LinkedIn was just one of many major corporations – Netflix, Disney, and Safeway among them – that I helped solve critical cloud issues. It was super educational and a strong confidence builder.

For me, joining Adallom was really about joining a place where I believe in the market, I believe in the team, and I believe in the vision. I'm extremely thankful for the opportunity that I was given there.

The purpose of what I'm doing was, and is, very important to me; it was the same in the army. I could easily see how the Adallom approach – connecting to the SaaS solutions, monitoring the activity of users and resources, finding anomalies, and so on – was how things were going to be done. I realized this would be the approach of the future, so I definitely saw Adallom as a company that was going to be successful.

I was responsible for the entire architecture of our ML infrastructure, and I saw and experienced firsthand the lack of proper tooling in the ecosystem. It was clear to me that there had to be a dedicated solution in one centralized place where you can see all your models, see what decisions they're making for your business, and track and become proactive with your ML goals. For example, there were times when we learned about issues in our machine learning models far too late, which is not great for the users and definitely not for the business. This is where the idea for Aporia started to take shape.

Could you share the genesis story behind Aporia?

My own personal experience with machine learning started in 2008, as part of a collaborative project at the Weizmann Institute, along with the University of Bath and a Chinese research center. There, I built a biometric identification system that analyzed images of the iris, and I was able to achieve 94% accuracy. The project was a success and was applauded from a research standpoint. But I had been building software since I was 10 years old, and something felt, in a way, not real. You couldn’t really use the biometric identification system I built in real life, because it worked well only on the specific dataset I used. It wasn’t deterministic enough.

This is just a bit of background. When you’re building a machine learning system, for example for biometric identification, you want the predictions to be deterministic – you want to know that the system accurately identifies a certain person, right? Think of how your iPhone doesn’t unlock if it doesn’t recognize the right person at the right angle – that is the desired outcome. But this really wasn’t the case with machine learning back then, when I first got into the space.

About seven years later, at Adallom, I was experiencing firsthand the reality of running production models without reliable guardrails, as they made decisions for our business that affected our customers. Then I was fortunate enough to work as an investor at Vertex Ventures for three years. I saw more and more organizations use ML, and watched companies transition from just talking about machine learning to actually doing it. However, these companies adopted ML only to be challenged by the same issues we had faced at Adallom.

Everyone rushed to use ML, and they were trying to build monitoring systems in-house. Obviously, that wasn’t their core business, and these challenges are quite complex. That is when I realized this was my opportunity to make a huge impact.

AI is being adopted across almost every industry – healthcare, financial services, automotive, and others – and it will touch everyone’s lives and impact us all. This is where Aporia shows its true value: enabling all of these life-changing use cases to function as intended and help improve our society. Like any software, machine learning is going to have bugs. If left unchecked, these ML issues can really hurt business continuity and impact society with unintentionally biased outcomes. Take Amazon’s attempt to implement an AI recruiting tool – unintentional bias caused the machine learning model to heavily recommend male candidates over female ones. This is obviously an undesired outcome, so there needs to be a dedicated solution that detects unintentional bias before it makes the news and affects end users.

For organizations to properly rely on and enjoy the benefits of machine learning, they need to know when it’s not working right, and now, with new regulations, ML users will often need ways to explain their model predictions. In the end, it’s critical to research and develop new models and innovative projects, but once those models meet the real world and make real decisions for people, businesses, and society, there’s a clear need for a comprehensive observability solution to ensure that organizations can trust their AI.

Can you explain the importance of transparent and explainable AI?

While they may seem similar, there is an important distinction to be made between traditional software and machine learning. In software, you have a software engineer writing code and defining the logic of the application, so we know exactly what will happen in each flow of the code. It's deterministic. That's how software is usually built: the engineers create test cases, test edge cases, and get to around 70%–80% coverage, at which point you feel good enough to release to production. If any alerts surface, you can easily debug, understand which flow went wrong, and fix it.

This is not the case with machine learning. Instead of a human defining the logic, it’s defined as part of the training process of the model. And unlike traditional software, this logic is not a set of rules, but rather a matrix of millions or billions of numbers that represent the mind – the brain – of the machine learning model. This is a black box: we don't really know the meaning of each and every number in the matrix. We only know how it behaves statistically, so it is probabilistic, not deterministic. It might be accurate 83% or 93% of the time. This brings up a lot of questions, right? First, how can we trust a system when we cannot explain how it arrives at its predictions? Second, how can we explain predictions in highly regulated industries, such as the financial sector? For example, in the US, financial firms are obligated by law to explain to their customers why they were rejected for a loan application.

The inability to explain machine learning predictions in human-readable text could be a major blocker for mass adoption of ML across industries. As a society, we want to know that the model is not making biased decisions. We want to make sure we understand what leads the model to a specific decision. This is where explainability and transparency are crucial.
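For illustration, here is a minimal sketch of the general idea behind per-prediction explanations for tabular models, using the open-source shap library. This is a generic example, not Aporia's implementation; the loan-style features, data, and model are hypothetical.

```python
# Minimal sketch: attributing one model prediction to its input features
# with SHAP values. Generic illustration, not Aporia's implementation;
# the loan-style features and data are hypothetical.
import numpy as np
import pandas as pd
import shap
import xgboost

rng = np.random.default_rng(7)

# Hypothetical features that might drive a loan-approval model.
X = pd.DataFrame({
    "income": rng.normal(60_000, 15_000, 500),
    "debt_ratio": rng.uniform(0.0, 1.0, 500),
    "credit_age_years": rng.uniform(0.0, 30.0, 500),
})
y = ((X["income"] / 100_000 - X["debt_ratio"]) > 0).astype(int)

model = xgboost.XGBClassifier(n_estimators=50).fit(X, y)

# TreeExplainer turns the model's "matrix of numbers" into per-feature
# contributions for a single prediction, in human-readable terms.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X.iloc[[0]])

for feature, value in zip(X.columns, contributions[0]):
    print(f"{feature}: {value:+.3f}")
```

Each signed contribution says how much a feature pushed this particular prediction toward approval or rejection, which is the kind of per-decision explanation regulators ask for.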

How does Aporia’s transparent and explainable AI toolbox solution work?

The Aporia explainable AI toolbox works as part of a unified machine learning observability system. Without deep visibility into production models and a reliable monitoring and alerting solution, it’s hard to trust explainable AI insights – there’s no point in explaining predictions if the output is unreliable. That’s where Aporia comes in, providing single-pane-of-glass visibility over all running models, customizable monitoring, alerting capabilities, debugging tools, root cause investigation, and explainable AI: a dedicated, full-stack observability solution for any and every issue that comes up in production.

The Aporia platform is agnostic and equips AI-oriented businesses and data science and ML teams with a centralized dashboard and complete visibility into their models’ health, predictions, and decisions – enabling them to trust their AI. By using Aporia’s explainable AI, organizations are able to keep every relevant stakeholder in the loop, explaining machine learning decisions at the click of a button – human-readable insights into specific model predictions, or simulations of “What if?” situations. In addition, Aporia constantly tracks the data that’s fed into the model as well as the predictions, and proactively sends alerts on important events, including performance degradation, unintentional bias, data drift, and even opportunities to improve your model. Finally, with Aporia’s investigation toolbox, you can get to the root cause of any event to remediate and improve any model in production.

Some of the functionalities offered include the Data Points and Time Series investigation tools. How do these tools assist in preventing AI bias and drift?

Data Points provides a live view of the data the model is receiving and the predictions it is making for the business. You get a live feed of that and understand exactly what’s going on in your business, and this visibility is crucial for transparency. Then, sometimes things change over time, and there are correlations between multiple changes over time – surfacing those is the role of Time Series investigation.
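To make the time-series idea concrete, here is a hedged sketch of tracking a model input alongside its predictions over time with generic pandas code. It is not Aporia's actual tooling; the column names and data are hypothetical.

```python
# Minimal sketch of a time-series investigation: track a model input and
# the model's predictions side by side to spot correlated shifts.
# Generic pandas illustration; column names and data are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
days = pd.date_range("2022-01-01", periods=90, freq="D")

# Simulate a feature that shifts upward mid-series, with predictions
# that follow it (a correlated change over time).
feature = np.concatenate([rng.normal(50, 5, 45), rng.normal(65, 5, 45)])
predictions = feature * 0.8 + rng.normal(0, 2, 90)

df = pd.DataFrame(
    {"avg_basket_size": feature, "predicted_demand": predictions},
    index=days,
)

# 7-day rolling means smooth daily noise so level shifts stand out.
rolling = df.rolling("7D").mean()
print(rolling.loc["2022-02-10":"2022-02-20"])

# A high correlation suggests the input shift explains the change in
# predictions, pointing investigation toward the root cause.
print("Correlation:", df["avg_basket_size"].corr(df["predicted_demand"]).round(2))
```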

Recently, major retailers have had all of their AI prediction tools fail when it came to predicting supply chain issues. How would the Aporia platform resolve this?

The main challenge in identifying these kinds of issues is rooted in the fact that we are talking about future predictions. That is, we predicted that something will or won’t happen in the future – for example, how many people are going to buy a specific shirt or a new PlayStation.

Then it takes some time to gather all the actual results – more than a few weeks – before we can summarize and say, okay, this was the actual demand we saw. Altogether, we’re talking about a few months from the moment the model makes the prediction until the business knows exactly whether it was right or wrong. And by that time it’s usually too late: the business has either lost potential revenue or had its margins squeezed, because it has to sell overstock at huge discounts.

This is a challenge, and this is exactly where Aporia comes into the picture and becomes very helpful to these organizations. First, it allows organizations to easily get transparency and visibility into what decisions are being made – are there any fluctuations? Is there anything that doesn’t make sense? Second, since we are talking about large retailers with enormous amounts of inventory, tracking it manually is near impossible. Here is where businesses and machine learning teams value Aporia most: as a 24/7 automated and customizable monitoring system. Aporia constantly tracks the data and the predictions, analyzes the statistical behavior of those predictions, and can anticipate and identify changes in the behavior of consumers and of the data as soon as they happen. Instead of waiting six months to realize that the demand forecasting was wrong, you can identify within a matter of days that you’re on the wrong path with your demand forecasts. Aporia shortens this timeframe from a few months to a few days, which is a huge game changer for any ML practitioner.
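The statistical idea behind this kind of drift monitoring can be sketched with a generic two-sample test: compare the distribution of live predictions against a stable baseline and alert when they diverge. This is an illustration of the concept, not Aporia's actual implementation; the numbers are hypothetical.

```python
# Minimal sketch of data/prediction drift detection: compare a live window
# of demand predictions against a training-period baseline using a
# two-sample Kolmogorov-Smirnov test. Generic illustration of the idea,
# not Aporia's actual implementation; all numbers are hypothetical.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Baseline: daily demand predictions the model made during a stable period.
baseline = rng.normal(loc=1000, scale=100, size=5000)

# Live window: consumer behavior has shifted, so predictions drift upward.
live_window = rng.normal(loc=1200, scale=150, size=500)

statistic, p_value = ks_2samp(baseline, live_window)

# A small p-value means the live distribution no longer matches the
# baseline, so an alert can fire within days instead of months.
ALERT_THRESHOLD = 0.01
if p_value < ALERT_THRESHOLD:
    print(f"Drift detected (KS={statistic:.3f}, p={p_value:.2e}) - alert!")
else:
    print("No significant drift in this window.")
```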

Is there anything else that you would like to share about Aporia?

We are constantly growing and looking for amazing people with brilliant minds to join the Aporia journey. Check out our open positions.

Thank you for the great interview. Readers who wish to learn more should visit Aporia.

A founding partner of unite.AI & a member of the Forbes Technology Council, Antoine is a futurist who is passionate about the future of AI & robotics.

He is also the Founder of Securities.io, a website that focuses on investing in disruptive technology.