
Blair Newman, CTO of Neuton – Interview Series


Blair Newman is the CTO of Neuton, a neural network framework and Automated Machine Learning (AutoML) solution that the company positions as far more effective than other frameworks, non-neural algorithms, or AutoML products on the market. Its mission is to make Artificial Intelligence (AI) available to everyone.

What initially attracted you to machine learning and data science?

From a personal perspective, I have always been intrigued by the possibilities that ML and data science open up, from smart cities to connected cars, and now what TinyML can offer as well. With the democratization of AI, we are literally seeing ML everywhere.

Could you share the genesis story behind Neuton?

We decided to embark on the journey of making AI available to “Everyone” after many years of executing multiple machine learning projects. During this period, we identified a number of barriers that limited exponential growth. To truly make ML available to everyone, we needed to address the technical barriers that existed: the requirement for significant amounts of training data, the need for an automated SaaS solution that eliminates the need for technical expertise, and, lastly, making our platform available for free to remove the final barrier.

For readers who may be unfamiliar with this terminology, could you define what TinyML is?

I typically like to keep it simple: the physical world meets the digital world, and where those two entities intersect is the world of TinyML. TinyML brings intelligence right to the edge.

What is preventing the acceleration of TinyML in the AI community?

TinyML typically requires a tremendous amount of capital from a resource perspective: hardware, embedded engineers, machine learning engineers, and software developers for integration. One of the areas where we excel is collapsing those requirements significantly.

How does Neuton create compact models without compromising accuracy?

The traditional and better-known frameworks (e.g., TensorFlow) start out with a preexisting structure, which inherently includes waste. In addition, building a model is often a very iterative process, and once the model is built, it must be optimized prior to integration. This is what I call a top-down approach. With Neuton, we flip this paradigm completely on its head: we build each model from the bottom up, one neuron at a time, effectively eliminating the inherent waste experienced with other frameworks. In other words, the network structure is not predefined but is grown from a single neuron during training. We couple this approach with constant cross-validation as each neuron is added to the resulting model. So the final model is always built for purpose, with no waste, and accurate upon completion.
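To make the bottom-up idea concrete, here is a minimal sketch of growing a network one neuron at a time and validating at each step. This is not Neuton's patented algorithm, which is proprietary; it simply retrains a scikit-learn MLP with one more hidden neuron per iteration, and the dataset, sizes, and stopping rule are all illustrative assumptions.

```python
# Sketch: grow a network neuron by neuron, keeping a neuron only if
# validation error improves. Illustrative only, not Neuton's method.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

X, y = make_regression(n_samples=500, n_features=10, noise=5.0, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

best_model, best_loss = None, np.inf
for n_neurons in range(1, 33):  # start from a single hidden neuron and grow
    model = MLPRegressor(hidden_layer_sizes=(n_neurons,), max_iter=2000,
                         random_state=0).fit(X_tr, y_tr)
    loss = mean_squared_error(y_val, model.predict(X_val))
    if loss < best_loss:   # the extra neuron helped: keep it
        best_model, best_loss = model, loss
    else:                  # growth stopped paying off: model is "built for purpose"
        break

print(f"Selected {best_model.hidden_layer_sizes[0]} neurons, "
      f"validation MSE = {best_loss:.2f}")
```

Note that this sketch retrains the whole network at each step for simplicity; a production system would presumably reuse previously trained weights rather than starting over.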

Neuton does not use backpropagation or stochastic gradient descent. What was the reasoning behind avoiding these popular methodologies?

Our patented approach uses a global optimization methodology, effectively eliminating the need for these techniques.
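The interview does not spell out the method, so the following is only a hedged illustration of the broader concept: fitting a tiny network's weights with a derivative-free global optimizer (SciPy's differential evolution here) rather than backpropagation or SGD. The toy data, the one-hidden-neuron architecture, and the search bounds are all assumptions made for the sketch.

```python
# Sketch: fit a one-neuron network with a derivative-free global optimizer.
# Illustrative stand-in only; Neuton's actual optimizer is proprietary.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]           # toy target function

def mse(params):
    # One hidden neuron: y_hat = w2 * tanh(X @ w1 + b1) + b2
    w1, b1, w2, b2 = params[:2], params[2], params[3], params[4]
    y_hat = w2 * np.tanh(X @ w1 + b1) + b2
    return np.mean((y - y_hat) ** 2)

bounds = [(-5.0, 5.0)] * 5                    # search box over all 5 parameters
result = differential_evolution(mse, bounds, seed=0, tol=1e-8)
print(f"Final MSE without any gradients: {result.fun:.4f}")
```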

How much more efficient is the Neuton solution compared to traditional machine learning approaches?

In all of the key metrics, such as time to model creation, accuracy, model size, and, subsequently, time to market, we consistently outperform other frameworks and platforms. Typically, our models are 1,000 times smaller, with time to market reduced by over 70%. Lastly, our explainability offering is second to none in providing complete transparency into our models, along with each individual prediction.

Could you provide some details on the AI explainability features that the Neuton platform offers?

Our explainability offering comes in multiple forms. It starts with our EDA (Exploratory Data Analysis) tool, which provides an initial view of the statistics of your data prior to training. From there, our Feature Importance Matrix enables customers to identify the top 10 features influencing their predictions, as well as the bottom 10 features that have minimal influence. At the next level of transparency, customers can analyze each prediction individually to see how it may change if the value of a given feature changes. Lastly, we provide a lifecycle management tool, the Model-to-Data Relevance Indicator, which proactively notifies customers when their model is beginning to decay and needs to be retrained.
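As a rough illustration of the per-prediction, what-if style of analysis described above (Neuton's actual tooling lives in its SaaS platform, so this stand-in uses a generic scikit-learn model and a hypothetical feature index), one can sweep a single feature across its observed range and watch the prediction respond:

```python
# Sketch: what-if analysis for one prediction by perturbing a single feature.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=300, n_features=5, random_state=1)
model = RandomForestRegressor(random_state=1).fit(X, y)

row = X[0].copy()
feature = 2                                   # hypothetical feature to perturb
for value in np.linspace(X[:, feature].min(), X[:, feature].max(), 5):
    probe = row.copy()
    probe[feature] = value
    print(f"feature_{feature} = {value:+.2f} -> prediction = "
          f"{model.predict(probe.reshape(1, -1))[0]:+.1f}")
```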

Is there anything else that you would like to share about Neuton?

Our mission here at Neuton is literally to bring AI to everyone, and we believe we are beginning to realize these possibilities, whether by enabling non-data scientists or by empowering seasoned data scientists with a zero-code, SaaS-based solution. Now, with the acceleration of TinyML, we are well on our way to truly democratizing AI.

Thank you for the great interview, readers who wish to learn more should visit Neuton.

A founding partner of Unite.AI & a member of the Forbes Technology Council, Antoine is a futurist who is passionate about the future of AI & robotics.

He is also the Founder of Securities.io, a website that focuses on investing in disruptive technology.