
Victor Thu, President of Datatron – Interview Series


Victor Thu is the President of Datatron, a platform that helps enterprises harness the power of machine learning by speeding up deployments, detecting problems early, and increasing the efficiency of managing multiple models at scale.

Your background is in Product Marketing, Go-to-market, & Product Management. How did this background lead you to working in machine learning and AI?

I love technology and some of my close friends even refer to me as the “technology-whisperer.” I enjoy taking complex technology topics and translating them into a language that people can relate to, and educating myself on new technologies to get to “the why” behind technologies that matter most to people.

My first encounter with what I call “modern AI” was when I watched a keynote presentation by a famous Stanford AI professor, Dr. Fei-Fei Li. Dr. Li’s keynote was so captivating that it served as a turning point in my career. That presentation convinced me that this is where I wanted to be next. I wanted to be part of the next wave of technology, where we use AI and ML to solve business challenges.

Since then, I have been with a number of AI/ML startups, working to use the technology to address real business needs. I have worked very closely with Ph.D.-level ML scientists, who have provided me with tremendous knowledge in AI/ML. And I’m still learning today, as the space is evolving so rapidly.

So, it truly was my passion for technology and how to leverage it to help others that brought me to working closely with AI/ML.

Datatron focuses on MLOps. For readers who are unfamiliar with this term, could you describe specifically what it is?

MLOps is essentially codifying and simplifying the highly artisanal process of getting AI and ML models from prototype to production.

One of the biggest misconceptions is that once data scientists have built their AI models, they can get them out into production quickly. However, the reality is that it can take up to a year before a model can be deployed.

The main reason for this delay is that people who have expertise in developing models do not necessarily have software engineering expertise as well. A good comparison is the architects who design skyscrapers – they are not also the builders who construct them.

MLOps is essentially the bridge between model developers and software engineering. Instead of having to spend more than 12 months to get models into production, MLOps can cut that once lengthy process down to just a matter of days.

In an article that you wrote for us in September 2021, you discussed how “The main hurdle of bringing solutions into production isn’t the quality of the models, but rather the lack of infrastructure in place to allow companies to do so.” Why is this such a hurdle for most companies?

There are a few contributing factors to this.

  • The over-romanticization of “free” open-source software. I want to first emphasize that we love open-source software and strongly believe it has helped the industry move forward by leaps and bounds. However, many do not understand the complexity of open source in relation to AI and ML. Today, there is a severe scarcity of AI/ML talent. Couple that with the difficulty of finding software engineers (ML engineers or MLOps engineers) who know how to handle the unique properties of AI/ML code, and expecting to hire a team and build an enterprise-scale MLOps platform internally by sorting through the 300+ open-source MLOps projects is setting yourself up for failure.
  • Lack of infrastructure to support engineering teams. Companies need a better environment to set engineers up to succeed. There needs to be proper bandwidth and budget to provide teams with the correct tools. AI is a fairly new technology, and enterprises doing AI don’t always know what they need to do to get models out quickly, which is why MLOps is such a vital tool.

How does using MLOps solve the lack of infrastructure problem?

MLOps solves the lack of infrastructure problem in four ways:

  1. No proprietary code changes: Data scientists want the flexibility to build models that fit business use cases in their own environments, so any MLOps process that requires code changes compromises the integrity of their models (a minimal sketch of this idea follows the list).
  2. Automation/scripting: Many teams script model deployments in a hard-coded fashion, which takes a lot of time. MLOps automates that entire process, saving a lot of time and energy.
  3. Streamlined updates: AI models change on a regular basis to adapt to their environment, so data scientists often have to go back and update them. Without MLOps, there is no way to avoid this repetitive manual work.
  4. Managing the underlying infrastructure: To get models out, you need compute, network and storage that suit the unique properties of AI/ML models. MLOps tools can tap into the correct resources and scale them accordingly.
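
To make the “no proprietary code changes” point concrete, here is a minimal sketch, assuming a Python model object that exposes a predict method: the serving wrapper adapts HTTP requests to the model without touching the model’s own code. The DummyModel class, the endpoint name, and the request shape are illustrative assumptions for this example, not Datatron’s actual API.

```python
# Minimal sketch: wrap any model object exposing .predict() behind an HTTP
# endpoint without modifying the model code itself.
from fastapi import FastAPI
from pydantic import BaseModel


class DummyModel:
    """Stand-in for a data scientist's trained model (e.g. an unpickled
    scikit-learn estimator); in practice the real object is loaded as-is."""

    def predict(self, rows):
        return [sum(row) for row in rows]  # placeholder scoring logic


model = DummyModel()  # e.g. pickle.load(open("model.pkl", "rb")) in practice
app = FastAPI()


class PredictRequest(BaseModel):
    features: list[float]


@app.post("/predict")
def predict(req: PredictRequest):
    # The wrapper adapts the request to the model; the model is unchanged.
    return {"prediction": model.predict([req.features])}
```

If this file were saved as serve.py, it could be run with `uvicorn serve:app`; the point is that the serving and scaling layer is generic, so the data scientist’s model goes in untouched.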

There are also enterprise requirements that are often not considered when building your own MLOps tool, such as role-based access control (RBAC), integration and interoperability, support for different ML tools, addressing security vulnerabilities, and the unexpected departure of core team members.

What are your personal views on the importance of AI governance?

There have been countless horror stories of AI models not working properly, from mislabeling certain groups of people to causing massive financial losses for publicly traded companies.

AI governance is critically important for businesses once they have AI models running in production. With that said, it is no different from other IT or business governance. Today, when IT teams run applications in the cloud or even in their own data centers, they have a series of tools to ensure the applications are working properly.

Once you have AI models running, you need to have mechanisms and tools in place to help give the business and the data scientists visibility on what the models are doing.

Especially at this nascent stage of AI/ML, there’s no ‘set it and forget it’ option. In the beginning, you need to monitor how your model behaves and make appropriate adjustments. Having proper monitoring capabilities in place that can alert you when your models behave outside the desired boundaries is key.
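
As an illustration of the kind of boundary check described above, here is a minimal sketch (not Datatron’s implementation) that compares live prediction scores against a training-time baseline using a population stability index (PSI) and raises an alert when a rule-of-thumb threshold is crossed. The 0.2 cutoff, the 10 buckets, and the simulated data are assumptions made for the example.

```python
# Minimal drift-alert sketch: flag when live scores diverge from the baseline.
import numpy as np


def psi(baseline: np.ndarray, live: np.ndarray, buckets: int = 10) -> float:
    """Population stability index between a baseline and a live distribution."""
    edges = np.quantile(baseline, np.linspace(0, 1, buckets + 1))
    live = np.clip(live, edges[0], edges[-1])          # keep values inside the bins
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)
    base_pct = np.clip(base_pct, 1e-6, None)           # avoid log(0)
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - base_pct) * np.log(live_pct / base_pct)))


rng = np.random.default_rng(0)
train_scores = rng.normal(0.0, 1.0, 5000)              # baseline captured at training time
production_scores = rng.normal(0.6, 1.0, 5000)         # simulated drift in production
if psi(train_scores, production_scores) > 0.2:         # common rule-of-thumb threshold
    print("ALERT: model scores have drifted outside expected boundaries")
```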

Model risk management (MRM) also needs to take into account the different individuals who are involved in model development and deployment. What access controls have you put in place to ensure the integrity of the models? How do you ensure that individuals from different groups do not accidentally use your models for use cases they were not designed for? These are all questions teams need to ask themselves.

How does Datatron help with model risk management?

MLOps allows for quick model updates and changes. For example, if a model is inappropriately rejecting people on a loan application, MLOps allows you to pull the model back and reintroduce a new one, managing that risk in a simple way.
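
To illustrate the pull-back-and-reintroduce idea in the simplest possible terms, here is a hypothetical sketch of a tiny in-memory model registry with promote and rollback operations. A real MLOps platform handles this versioning and traffic routing behind the scenes; the class and method names here are invented for the example.

```python
# Minimal sketch: track which model version serves traffic and roll back in one call.
from dataclasses import dataclass, field
from typing import Any, Dict, Optional


@dataclass
class ModelRegistry:
    versions: Dict[str, Any] = field(default_factory=dict)
    live_version: Optional[str] = None

    def register(self, version: str, model: Any) -> None:
        self.versions[version] = model

    def promote(self, version: str) -> None:
        if version not in self.versions:
            raise ValueError(f"unknown model version: {version}")
        self.live_version = version    # route production traffic to this version

    def rollback(self, to_version: str) -> None:
        # Pulling a misbehaving model back is just promoting a known-good version.
        self.promote(to_version)


# Usage: v2 starts rejecting loan applicants it shouldn't, so we roll back to v1.
registry = ModelRegistry()
registry.register("v1", object())      # stand-ins for real model objects
registry.register("v2", object())
registry.promote("v2")
registry.rollback("v1")
assert registry.live_version == "v1"
```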

It also protects models from bias drift and keeps key metrics in check while in production, through a simple dashboard that presents those metrics, from a high-level overview down to deep, detailed data, in a way business decision makers can easily understand.

The Datatron platform’s AI governance goes a level beyond generic monitoring capabilities, adding context and logic that give clear visibility into the model in terms that are more relevant to the customer’s use cases.

In a blog post on Datatron you described how Datatron was taking up the mantra of Reliable AI™. Could you describe in your view what this is?

When we came up with this, we thought about how comfortable we are flying on commercial airlines today because they are so reliable.

Despite all the talk about ethical AI, responsible AI, and so on, the key need is for businesses to be able to use AI/ML reliably, just as their employees can rely on a commercial airliner.

Terms like ethical AI and responsible AI have really stemmed from the issue of current AI models not doing what they’re supposed to do, and therefore being deemed unreliable. Businesses are not willing to use AI because they do not have confidence that their models are unbiased. That means their models are unreliable, and Datatron is set on changing that.

Is there anything else that you would like to share about Datatron?

We are one of the few MLOps players who are Super Bowl-proven, working successfully in a high-stress scenario, which is not typical for a startup or open-source tool. Our client Domino’s Pizza works with Datatron to easily and rapidly operationalize AI models in production, and those models were then put to the ultimate test during the Super Bowl.

MLOps really is the way to get AI/ML models into production while preserving resources and cutting costs. We are a sustainable source of successful AI/ML models and serve as a catalyst for revenue. Companies can finally get ROI from their AI and ML projects. Regardless of your margins, you can produce results using MLOps.

Thank you for the great interview, readers who wish to learn more should visit Datatron.

A founding partner of unite.AI & a member of the Forbes Technology Council, Antoine is a futurist who is passionate about the future of AI & robotics.

He is also the Founder of Securities.io, a website that focuses on investing in disruptive technology.