
Ludovic Larzul, Founder and CEO of Mipsology – Interview Series


Ludovic Larzul is the founder and CEO of Mipsology, a groundbreaking startup focused on state-of-the-art acceleration for deep learning inference. The company has developed technology that accelerates the computation of neural network inference while hiding the hardware accelerator from AI users. Mipsology’s Zebra is the first commercial accelerator that encapsulates such technology to provide high performance and ease of use.

What first got you interested in AI and microchips?

I worked in the design of a specific type of super-computer for about 20 years with my previous company EVE, before it was acquired by Synopsys in 2012. Those computers, also called ASIC emulators, are used by many companies designing ASICs around the world. I quite enjoyed the complexity and diversity of that work. To succeed, you have to (a) understand electronics, software, complex algorithms, how people design chips and how to make sure they work fine, chip architecture, power, and more deep tech, (b) correctly predict the needs of customers a few years in advance, (c) innovate continuously, and (d) as a startup, defeat the competition with far fewer resources. After 20 years of success, I was looking for a new challenge. This was the time when AI had started to come back into the spotlight. AlexNet had made a leap forward into understanding images (and looking back, it was still in its infancy). Deep learning was brand new but promising (Who remembers when it took days to get a result on a simple network?). I found that quite “fun”, but recognized there were many challenges.

 

What was the inspiration behind launching Mipsology?

I don’t know if I would use the word “inspiration.” It was initially more like: “Can we do something that would be different and better?” It started with assumptions of what AI people would like and do, and the next few years were spent finding ever-better solutions based on that. I guess more than inspiration, I would say that the people I work with like to be the best at what they create, in a positive attitude of competition. That makes a strong team that can solve problems others fail to solve adequately.

 

Mipsology uses FPGA boards instead of GPUs. Can you describe what FPGAs are?

FPGAs are electronic components that can be programmed at the hardware level. You can imagine one as a set of Legos — a few million of them. Each little block performs a simple operation like keeping a value, or a slightly more complex operation like addition. By grouping all these blocks, it is possible to create a specific behavior after the chip is manufactured. This is the opposite of GPUs and almost all other chips, which are designed for a specific function and cannot be changed afterwards.

Some, like CPUs and GPUs, can be programmed, but they are not as parallel as FPGAs. At any given moment, an FPGA performs a few million simple operations. And this can happen six to seven hundred million times a second. Because they are programmable, what they do can be changed at any time to adapt to different problems, so that their extraordinary computing power can be used effectively. FPGAs are already almost everywhere, including base stations of mobile phones, networks, satellites, cars, etc. People don’t know them well though, because they are not as visible as a CPU like the one in your laptop.
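To give a rough sense of what that parallelism means in raw numbers, the short Python sketch below simply multiplies the two figures quoted above. The exact values depend on the FPGA and on the design loaded onto it, so treat this as an illustrative back-of-envelope estimate, not a benchmark.

    # Back-of-envelope estimate based on the figures quoted above:
    # "a few million" simple operations in flight per clock cycle,
    # repeated six to seven hundred million times per second.
    ops_per_cycle = 2_000_000   # illustrative assumption: ~2 million operations per cycle
    clock_hz = 650_000_000      # illustrative assumption: ~650 MHz clock

    ops_per_second = ops_per_cycle * clock_hz
    print(f"{ops_per_second:.1e} simple operations per second")  # on the order of 1e15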

 

What makes these FPGA boards the superior solution to the more popular GPUs?

FPGAs are superior in many aspects. Let’s just focus on a couple of important ones.

GPUs are designed for rendering images, mainly for games. They have been found to match well with some computations in AI because of the similarity of the operations. But they remain primarily dedicated to games, which means they come with constraints that do not fit well with neural networks.

Their programming is also limited to the instructions that were decided two or three years before they are available. The problem is that neural networks are advancing more quickly than the design of ASICs, and GPUs are ASICs. So, it is like trying to predict the future: it’s not simple to be right. You can see trends, but the details are what really impact the results, like performance. In contrast, because FPGAs are programmable at the hardware level, we can more easily keep up with the progress of AI. This allows us to deliver a better product with higher performance, and meet the customer’s needs without having to wait for the next silicon generation.

Furthermore, GPUs are designed to be consumer products. Their lifespan is intentionally short, because the companies designing GPUs want to sell new ones a few years later to gamers. This does not work well in electronic systems that need to be reliable for many years. FPGAs are designed to be robust and used 24/7 for many years.

Other well-known advantages of FPGAs include:

  • There are many options that can fit in specific areas like networking or video processing
  • They work as well in data centers as at the edge or in embedded systems
  • They do not require special cooling (let alone the water cooling that big GPUs need)

One major drawback is that FPGAs are difficult to program. It requires specific knowledge. Even though companies selling FPGAs have put great effort into bridging the complexity gap, it is still not as simple as programming a CPU. In truth, GPUs are not simple either. But the software that hides their programming for AI makes that knowledge unnecessary. That is the problem that Mipsology is the first to solve: removing the need for anyone computing AI to program, or have any knowledge of, FPGAs.

 

Are there any current limitations to FPGA boards?

Some FPGA boards are like some GPU boards. They can be plugged into a computer’s PCIe slots. One well-known advantage, on top of the lifespan I mentioned before, is that the power consumption is typically lower than that of GPUs. Another, less well-known advantage is that there is a larger selection of FPGA boards than GPU boards. There are more FPGAs for more markets, which leads to more boards that fit in different areas of those markets. This simply means that there are more possibilities for computing neural networks everywhere at lower cost. GPUs are more limited; they fit in data centers, but not much else.

 

Mipsology’s Zebra is the first commercial accelerator that encapsulates FPGA boards to provide high performance and ease of use. Can you describe what Zebra is?

For those who are familiar with AI and GPUs, the easiest description is that Zebra is to FPGAs what CUDA/cuDNN is to GPUs. It is a software stack that completely hides the FPGA behind usual frameworks like PyTorch or TensorFlow. We are primarily targeting inference for images and videos. Zebra starts with a neural network that was typically trained in floating point and, without any manual user effort or proprietary tool, makes it run on any FPGA-based card. It is as simple as: plug in the FPGA board, load the driver, source the Zebra environment, and launch the same inference application as the one running on CPUs or GPUs. We have our own quantization that retains the accuracy, and performance comes out of the box. There is no proprietary tool that the user must learn, and it doesn’t take hours of engineering time to get high throughput or low latency. This means quick transitions, which also reduces cost and time to market.
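As a concrete illustration of the "same application" point, here is a minimal PyTorch inference sketch. The model and tensor shapes are placeholders invented for this example, and the board/driver/environment steps are only described in the comments; nothing here is Mipsology’s actual API. What it shows is that the script never references the accelerator, so the same code can be launched whether the backend underneath the framework is a CPU, a GPU, or an FPGA.

    # Hypothetical workflow, following the steps described above:
    #   1. plug in the FPGA board and load its driver
    #   2. source the Zebra environment (done outside this script)
    #   3. launch the unchanged inference application below
    import torch
    import torch.nn as nn

    # Stand-in for a trained floating-point network (e.g. an image classifier).
    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(16, 10),
    ).eval()

    # Dummy batch of images (N, C, H, W); a real application would feed
    # decoded frames from storage or a camera pipeline instead.
    batch = torch.randn(8, 3, 224, 224)

    with torch.no_grad():
        logits = model(batch)           # same call on a CPU, GPU, or FPGA backend
        predictions = logits.argmax(1)  # top-1 class per image

    print(predictions.tolist())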

 

What are the different types of applications that Zebra is best designed for?

Zebra is a very generic acceleration engine, so it can accelerate the computation for any application that needs to compute neural networks, with a primary focus on images and video because the computing needs are larger for this kind of data. We have requests from very different markets, but they are all similar when it comes to computing the neural networks. They all typically require classification, segmentation, super resolution, body positioning, etc.

As Zebra runs on top of FPGAs, any kind of boards can be used. Some have high throughput and are typically used in data centers. Others are more appropriate for use at the Edge or embedded. Our vision is that, if an FPGA can fit, users can use Zebra to accelerate their neural network computations right away. And if GPUs or CPUs are used, Zebra can replace them and reduce the costs of the AI infrastructure. Most of the companies we talk to are having similar issues: they could deploy more AI-based applications, but the cost is limiting them.

 

For a company that wishes to use Zebra, what is the process?

Simply let us know at [email protected] and we’ll get you started.

 

Is there anything else that you would like to share about Mipsology?

We are very encouraged about the feedback we get from the AI community for our Zebra solution. Specifically, we are told that this is probably the best accelerator on the market. After only a few months, we continue to add to a growing ecosystem of interested partners including Xilinx, Dell, Western Digital, Avnet, TUL and Advantech, to name a few.

I really enjoyed learning about this groundbreaking technology. Readers who wish to learn more should visit Mipsology.

Mipsology Demonstrates Zebra: the High-performance Deep Learning Computation Engine

A founding partner of unite.AI & a member of the Forbes Technology Council, Antoine is a futurist who is passionate about the future of AI & robotics.

He is also the Founder of Securities.io, a website that focuses on investing in disruptive technology.