
Anthony Tayoun, Co-founder & COO of Dexai Robotics – Interview Series


Anthony is the co-founder and COO of Dexai Robotics, a startup that automates activities in commercial kitchens using flexible robot arms. Prior to Dexai, Anthony worked as a consultant with the Boston Consulting Group, focusing on growth strategies. Anthony holds an MBA from Harvard Business School, and a B.E. in Mechanical Engineering and a B.S. in Mathematics from the American University of Beirut. Outside of work, Anthony enjoys chasing soccer balls and exploring sunken sea treasures.

What is it that attracted you to robotics initially?

I’m amazed by our ability, as humans, to develop “complex tools” out of simple components to improve our standard of living. At the same time, we’re living in a period during which many enabling technologies are being improved by an order of magnitude. Just look back at the past two decades: collaborative robots were created and became affordable for commercial applications, control theory advanced substantially, computer vision is arguably at a superhuman level, machine learning is enabling very rapid decision making, and the internet infrastructure has improved enough to connect all of this together. Right now is really the most exciting time for robotics; for the first time in history, robot performance is soon going to exceed our expectations.

You have a very diverse background including being an Associate for the Boston Consulting Group (BCG). One of your projects was designing a prediction tool to detect illicit activity using advanced statistical methods and big data analysis. Could you talk about this project?

At a high level, that project involved analyzing a very large dataset, comprising demographic and behavioral data for commercial establishments, to unearth predictive patterns. We used advanced statistical modeling techniques, such as binomial regression, to compute the probability of illicit activity based on past, seemingly unrelated data. The results were staggering: from data such as types of licenses owned or historical financial performance, we were able to make predictions an order of magnitude more accurate than the baseline.
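To make the modeling approach concrete, here is a minimal, hypothetical sketch of a binomial (logistic) regression of the kind described above, built with scikit-learn on invented features such as license type and revenue trend. It illustrates the technique only; the data, features, and metrics are not the actual BCG tool or dataset.

```python
# Hypothetical sketch of binomial (logistic) regression for flagging
# establishments likely to be involved in illicit activity.
# Feature names and data are invented for illustration only.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Invented example data: one row per commercial establishment.
df = pd.DataFrame({
    "license_type": ["liquor", "food", "liquor", "tobacco", "food", "liquor"],
    "years_active": [1, 12, 3, 2, 20, 5],
    "revenue_trend": [-0.4, 0.1, -0.2, -0.5, 0.05, -0.1],  # year-over-year change
    "illicit": [1, 0, 1, 1, 0, 0],  # known past violations (label)
})

X, y = df.drop(columns="illicit"), df["illicit"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=0, stratify=y)

model = Pipeline([
    ("prep", ColumnTransformer([
        ("cat", OneHotEncoder(handle_unknown="ignore"), ["license_type"]),
        ("num", StandardScaler(), ["years_active", "revenue_trend"]),
    ])),
    ("clf", LogisticRegression()),  # binomial regression on the engineered features
])
model.fit(X_train, y_train)

# Probability of illicit activity for unseen establishments.
probs = model.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, probs))
```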

Can you discuss how you transitioned away from being an Associate of BCG, to launching Dexai Robotics?

My BCG experience enriched my business knowledge tremendously, as I helped companies navigate various strategic and managerial topics. During this experience, I realized that the projects I enjoy the most are those related to market entry or helping clients set up businesses from the ground up, which pushed me in the entrepreneurial direction. I decided to pursue a Master of Business Administration, and joined Harvard Business School. At HBS, I focused on entrepreneurship and related classes, and had the fortune to experiment with a few ideas at the school’s innovation lab. Midway through the MBA, I met Dave Johnson (now Dexai’s co-founder), and together we started developing business plans to commercialize technology that he and others at Harvard and MIT were developing. A few business competitions and tens of customer calls later, Dexai was born!

Dexai Robotics features Alfred, a robot that automates activities in commercial kitchens and the food industry. What tasks is Alfred capable of?

Alfred is currently capable of end-to-end meal assembly for a variety of recipes. Alfred can use regular utensils such as tongs, dishers (scoops), spoons, and ladles to pick and/or scoop almost any ingredient. It takes Alfred ~1 day to “learn” a new ingredient, as long as it can be manipulated using the mentioned utensils. Alfred can also “see” and identify different ingredients in the workspace, pass bowls around, and perform simple tasks such as opening a rice cooker or an oven door. In the future, Alfred will learn additional tasks such as operating kitchen equipment (e.g., fryer, grill), and performing ingredient preparation tasks (e.g., cutting, slicing).

Is there a learning curve for a restaurant operator who wishes to install Alfred in their commercial kitchen?

There is a slight learning curve, in line with most other kitchen appliances. The initial setup consists of entering supported recipes into Dexai’s software, specifying ingredient portions, and connecting Alfred to the point-of-sale system. After that, Alfred runs pretty much on its own, with restaurant operators only needing to periodically refill food bins with fresh ingredients. Alfred is designed to simplify the lives of restaurant workers: we made a conscious choice to solve the “difficult” problem ourselves, so that our customers don’t have to worry about that. Alfred’s camera, combined with Dexai’s proprietary AI software, allows for seamless adaptation to the majority of layouts and processes. Further, Alfred can adapt to changes in the environment, such as moving a bowl around, or swapping ingredients, to maximize the operator’s flexibility.
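To illustrate what that setup step could look like, here is a purely hypothetical recipe definition and order-handling sketch in Python. The field names, utensils, and portion sizes are invented for illustration and do not reflect Dexai’s actual software or data model.

```python
# Hypothetical recipe definition of the kind an operator might enter during
# setup, plus a toy function turning a point-of-sale order into robot actions.
# All field names and values are invented for illustration only.
burrito_bowl = {
    "name": "Chicken Burrito Bowl",
    "steps": [
        {"ingredient": "rice",            "utensil": "scoop", "portion_oz": 4.0},
        {"ingredient": "black_beans",     "utensil": "spoon", "portion_oz": 3.0},
        {"ingredient": "grilled_chicken", "utensil": "tongs", "portion_oz": 4.0},
        {"ingredient": "salsa",           "utensil": "ladle", "portion_oz": 1.5},
    ],
}

def assemble(order, recipes):
    """Turn an incoming point-of-sale order into a list of pick/scoop actions."""
    recipe = recipes[order["item"]]
    return [(s["ingredient"], s["utensil"], s["portion_oz"]) for s in recipe["steps"]]

# Example: an order arriving from the POS system.
actions = assemble({"item": "Chicken Burrito Bowl"},
                   {"Chicken Burrito Bowl": burrito_bowl})
for ingredient, utensil, oz in actions:
    print(f"pick {oz} oz of {ingredient} with {utensil}")
```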

What’s the initial reaction from restaurateurs who test the Alfred robot?

That’s a very interesting question because the reaction progresses very quickly. The universal initial reaction is to take out your phone and start snapping pictures and videos. There’s something really magical about a robotic arm smoothly moving around in a purposeful manner. Maybe it’s because popular culture has us expecting clunky, abrupt motions, similar to when someone makes a “robot impression”. Contrast that with the robot moving very smoothly, picking up utensils, and scooping food the same way a person would do, and your reaction dramatically changes.

Are there any brand names or large restaurants that are currently using or trialing Alfred?

We deployed a couple of successful trials to test the system, and had to pause due to concerns for our employee safety related to COVID-19. Our customer names are all still confidential and our initial focus is on salads and bowls. Later this year, we will have our first customer-facing deployment, so stay tuned!

One of your earliest robotic projects was the Mule Robot, which assisted users with transporting everyday merchandise. How did this early experience influence your thinking on robotics?

My biggest learning from the Mule Robot project was that solving the technical problem is a necessary but insufficient requirement for success. Without customer focus and a robust business model, even the most elegant technical solution won’t leave the research lab. For the Mule Robot, we developed a solution for residential applications, but struggled to take the project forward. Looking at the same problem through a more commercial lens, however, transporting merchandise inside a building is perfect for “room service” applications in hospitality. Today, a Chicago hotel automates room service with two robots made by a startup that successfully commercialized a similar project.

What do you believe a commercial kitchen of the future will look like? How will robots cooperate with, or in some cases replace, kitchen staff?

I believe that kitchen staff will always be needed; hospitality is incomplete without a human touch. Regarding the kitchen of the future, the answer really depends on how far in the future we’re looking. In the short and medium term, we’ll see dramatic efficiency increases in different areas of the kitchen, either through automated single-use equipment such as sushi rollers and vegetable slicers, or through end-to-end flexible automation such as ingredient assembly through Dexai’s Alfred. Longer term, in 10 years or so, the commercial kitchen will capitalize on efficiencies by combining all these solutions, and will feature novel cooking techniques instead of only efficiency gains. To illustrate this point, imagine a circular, vertically stacked serving counter operated by a robot at the center which can reach inside the oven and make changes to the meal while it cooks. Eventually, the target is to get from raw ingredients to prepared meals through the smallest and most efficient operation.

Is there anything else that you would like to share about Dexai Robotics or AIfred?

We’re really excited to have Alfred’s first public appearance this year. Especially given the health crisis that our world is suffering from, securing access to prepared food is a necessity. We look forward to a future where everyone has access to affordable, healthy foods!

Thank you for the fantastic interview. I look forward to the day when we see different versions of Alfred in commercial kitchens everywhere. Anyone who wishes to learn more should visit Dexai Robotics.


Antoine Tardif is a Futurist who is passionate about the future of AI and robotics. He is the CEO of BlockVentures.com, and has invested in over 50 AI & blockchain projects. He is also the Co-Founder of Securities.io, a news website focusing on digital securities, and is a founding partner of unite.ai.


Jim McGowan, Head of Product at ElectrifAi – Interview Series


Jim McGowan is the Head of Product at ElectrifAi, a company that specializes in extracting massive amounts of disparate data and transforming chaotic structured and unstructured data into actionable business insights.

What is it that attracted you to the world of machine learning and AI?

I first encountered Machine Learning while earning a doctorate for work in cognitive science. At the time, AI systems largely consisted of distilling an expert’s experience down to a flow chart. Intuitively, this seemed to work, but the systems quickly grew too complex and weren’t living up to their promise. Small problems could be solved, but practical solutions to real-world problems were out of reach. You could say that building practical systems was itself impractical. Then Machine Learning came along. That changed everything. Machine Learning unlocked the promise of AI. ElectrifAi fulfills that promise by building solutions to help our clients run their businesses better.

 

ElectrifAi uses something called Practical AI to guide companies to do more with the data that they already have. Can you elaborate on how ElectrifAi defines Practical AI?

We leverage clients’ data to provide clear, actionable insights for real business needs. We help them make better decisions, faster. Practical AI is solving a real-world business problem with a solution that works well, is based on a clear understanding of the data, has a definitive outcome, fits into existing processes and tools, ships on time, and delivers tremendous business value. We don’t ask companies to replace their data systems. We don’t require a specific business model. We don’t take a year to maybe deliver something that’s a compromise on what a client set out to do. We provide a flexible, high-quality solution that is simple to use and does what it’s supposed to do very, very well. That’s Practical AI.

We make sure with every solution that we achieve the following:

  • Best-in-class time to value
  • Best-in-class data cleansing
  • Best-in-class insights
  • Best-in-class ROIs

 

Could you give some details on how ElectrifAi enables companies to use this Practical AI?

We pull data from all systems—whether they are custom developed databases, highly customized solutions from major vendors, or even a data dump from some legacy application. We clean and understand that data, and find clear, meaningful signals in all that chaos. We then use machine learning to extract valuable insights from those signals, and finally, we indicate how to act on those insights. SpendAi is a great example. We use machine learning to clean the data, and then more machine learning to categorize 98-99% of data from all a client’s procurement systems. We even let the client control that categorization at a granular level, in just seconds, through a drag-and-drop interface. That’s unique, and incredibly powerful. Then we apply another group of machine learning algorithms to give a clear, simple view of where spend is going. We use machine learning to parse contracts and extract key clauses. We then apply still more machine learning to make specific recommendations. For instance, a client may be due a discount from a clause buried in a contract term. Or they may be over-relying on a vendor in a category who is at financial risk themselves. A client may be under-leveraging their position with a vendor because the vendor operates under multiple names and across multiple divisions of the company. We surface and clean all that, so the client can reduce their spend, increase their working capital, and reduce their risk.
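As a rough illustration of the spend-categorization idea, here is a minimal, hypothetical sketch that classifies free-text procurement line items into spend categories using TF-IDF features and logistic regression in scikit-learn. The data and categories are invented; this is not ElectrifAi’s actual SpendAi pipeline.

```python
# Hypothetical sketch of ML-based spend categorization: classify free-text
# procurement line items into spend categories. Data and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

line_items = [
    "Dell Latitude 5420 laptop", "HP LaserJet toner cartridge",
    "AWS EC2 compute charges", "Office cleaning services - March",
    "Azure storage subscription", "Janitorial supplies bulk order",
]
categories = ["IT Hardware", "IT Hardware", "Cloud Services",
              "Facilities", "Cloud Services", "Facilities"]

# TF-IDF text features feeding a simple classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(line_items, categories)

# Categorize new, unseen spend records pulled from any procurement system.
new_items = ["Lenovo ThinkPad docking station", "GCP BigQuery usage"]
print(dict(zip(new_items, clf.predict(new_items))))
```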

 

Could you discuss PulmoAi CT, and how it may increase efficiency for radiologists and improve radiological outcomes?

PulmoAi CT is an advanced image analytic product designed specifically for pulmonary CT scans. Combining Practical AI, Machine Learning (ML), and image processing technology, PulmoAi CT automatically segments pulmonary scans pixel by pixel, without the blurring or distortion experienced with similar technologies. The result: crisply rendered 3D imagery, enabling the immediate identification of indications for tumors, nodules, COVID-19, and other anomalies. With PulmoAi CT, radiologists can easily zoom in on pulmonary details, viewing them side-by-side with both clinical analyses and original images. PulmoAi CT quantifies each lung feature with precise metrics, including feature size, and morphological and volumetric extent. This enables the careful monitoring of anomaly progression, even in the presence of multiple morbidities.

PulmoAi CT is a very different technology from any product on the market or even in a research laboratory. The results are game changing. There’s nothing else like it. It’s not a brute-force approach that requires tens of thousands of samples to work. PulmoAi CT produces results while other AI solutions are still looking for training data. It’s powerful and it will change what radiologists can do.
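As a rough illustration of the per-feature quantification described above, here is a minimal, hypothetical sketch that measures the size and volumetric extent of anomalies in a 3D segmentation mask using scikit-image. The mask, voxel spacing, and metrics are invented and do not represent PulmoAi CT’s implementation.

```python
# Hypothetical sketch: quantify segmented lung features (nodules, opacities)
# from a 3D segmentation mask, reporting size and volumetric extent.
import numpy as np
from skimage import measure

# Pretend segmentation output: a 3D binary mask (1 = anomaly voxel).
mask = np.zeros((64, 128, 128), dtype=np.uint8)
mask[30:34, 40:48, 40:48] = 1      # a small synthetic "nodule"
mask[10:12, 90:95, 100:104] = 1    # another one

voxel_volume_mm3 = 0.7 * 0.7 * 1.25  # assumed CT voxel spacing (mm)

labeled = measure.label(mask)        # separate connected anomalies
for region in measure.regionprops(labeled):
    volume_mm3 = region.area * voxel_volume_mm3  # 'area' is the voxel count in 3D
    z, y, x = region.centroid
    print(f"feature at (z={z:.0f}, y={y:.0f}, x={x:.0f}): "
          f"{region.area} voxels, {volume_mm3:.1f} mm^3")
```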

 

Another ElectrifAi product is PulmoAi X-ray, which directly addresses the use of X-rays in crisis zones. Could you discuss this technology?

PulmoAi X-ray directly addresses the use of X-rays in crisis zones today. Adapting to the specific challenges of the pandemic, PulmoAi X-ray goes a step further than distinguishing healthy lungs from COVID-19-infected lungs. The cloud-based solution identifies the crucial differences between coronavirus-positive patients sent home who recover safely, and those sent home who return in need of intubation. Pre-trained on pulmonary scans from hospitals in crisis zones, PulmoAi X-ray leverages deep learning neural network technologies to identify critical abnormalities associated with COVID-19. PulmoAi X-ray is unique because it is narrowly tuned to answer the question that hospitals in crisis zones are facing: will self-quarantine work, or does the patient need hospitalization?
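To give a sense of how such a triage classifier could be structured, here is a minimal, hypothetical transfer-learning sketch in PyTorch that fine-tunes an ImageNet-pretrained ResNet-18 to separate "recovers safely at home" from "likely to need hospitalization". The architecture, labels, and dummy data are assumptions for illustration, not ElectrifAi’s model.

```python
# Hypothetical sketch of a chest X-ray triage classifier. Architecture, labels,
# and the dummy batch are illustrative only; this is not PulmoAi X-ray.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2  # 0 = safe for self-quarantine, 1 = likely needs hospitalization

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # replace the ImageNet head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One training step on a dummy batch (real use would load labeled X-rays).
images = torch.randn(8, 3, 224, 224)          # batch of preprocessed X-rays
labels = torch.randint(0, NUM_CLASSES, (8,))  # triage outcomes

optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("training loss:", loss.item())
```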

 

Another product is ContractAi, which uses Practical AI, Machine Learning, and Natural Language Processing (NLP) to automatically read, analyze, and compare contracts across the enterprise. Could you discuss this product and the best use cases for it?

ContractAi is designed for users who interact with contracts in a day-to-day operational role. For example, ContractAi helps people in a procurement group who are analyzing spend against vendor agreements. Recently, with the economic shock due to COVID-19, the software has been helping companies understand any leverage they may have to exit supplier contracts. When this capability is connected to our SpendAi product, one can immediately understand the financial impact of this leverage. One of the largest advantages of the technology is that it works with contract data in any format—there is no manual entry and no specific format required. Another advantage is that the technology is specifically designed for users who work with contracts in an operational role. Many of the existing contract processing technologies are designed for attorneys, who have a different set of concerns.
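As a simplified illustration of clause extraction, here is a minimal, hypothetical sketch that pulls termination, discount, and payment-term sentences out of contract text with regular expressions. A production system like ContractAi would rely on trained NLP models rather than hand-written patterns, so this is only a sketch of the general idea.

```python
# Hypothetical sketch of contract clause extraction: scan contract text for
# clauses of operational interest. Patterns and sample text are invented.
import re

CLAUSE_PATTERNS = {
    "termination":   r"(?:terminate|termination)[^.]*\.",
    "discount":      r"(?:discount|rebate)[^.]*\.",
    "payment_terms": r"(?:net\s*\d+|payment\s+terms)[^.]*\.",
}

def extract_clauses(contract_text: str) -> dict:
    """Return each clause type mapped to the sentences that mention it."""
    found = {}
    for clause_type, pattern in CLAUSE_PATTERNS.items():
        matches = re.findall(pattern, contract_text, flags=re.IGNORECASE)
        if matches:
            found[clause_type] = [m.strip() for m in matches]
    return found

sample = ("Either party may terminate this agreement with 30 days notice. "
          "Buyer is entitled to a 5% volume discount above 10,000 units. "
          "Invoices are due net 45 from date of receipt.")
print(extract_clauses(sample))
```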

 

Is there anything else that you would like to share about ElectrifAi?

As a global machine learning company, we have a unique view into how various markets are developing and using Machine Learning. One advantage of this view is our ability to understand how machine learning (ML) capabilities can be translated from one geography and/or vertical market to another to help solve substantial problems.

For example, we have spent years around the world helping businesses engage their customers using data science. We have now leveraged that expertise to help the US healthcare industry with patient engagement, helping restart healthcare and get patients back into hospitals for critical elective surgeries.


Phil Duffy, VP of Product, Program & UX Design at Brain Corp – Interview Series


Phil Duffy is the VP of Product, Program & UX Design at Brain Corp, a San Diego-based technology company specializing in the development of intelligent, autonomous navigation systems for everyday machines.

The company was co-founded in 2009 by world-renowned computational neuroscientist, Dr. Eugene Izhikevich, and serial tech entrepreneur, Dr. Allen Gruber. Brain Corp’s initial work involved advanced R&D for Qualcomm Inc. and DARPA. The company is now focused on developing advanced machine learning and computer vision systems for the next generation of self-driving robots.

Brain Corp powers the largest fleet of autonomous mobile robots (AMRs), with over 10,000 robots deployed or enabled worldwide, and works with several Fortune 500 customers like Walmart and Kroger.

What attracted you initially to the field of robotics?

My personal interest in developing robots over the last two decades stems from the fact that intelligent robots are one of the two major unfulfilled dreams of the last century—the other dream being flying cars.

Scientists, science-fiction writers, and filmmakers all predicted we would have intelligent robots doing our bidding and helping us in our daily lives a long time ago. As part of fulfilling that vision, I am passionate about developing robots that tackle the repetitive, dull, dirty, and dangerous tasks that robots excel at, but also building solutions that highlight the unique advantages of humans performing creative, complex tasks that robots struggle with. Developing robots that work alongside humans, both empowering each other, ensures we build advanced tools that help us become more efficient and productive.

I am also driven by being part of a fledgling industry that is building the initial stages of the robotics ecosystem. The robotics industry of the future, like the PC or smartphone industry today, will include a wide array of technical and non-technical staff, developing, selling, deploying, monitoring, servicing, and operating robots. I’m excited to see how that industry grows and how decisions we make today impact the industry’s future direction.

 

In 2014, Brain Corp pivoted from performing research and development for Qualcomm, to the development of machine learning and computer-vision systems for autonomous robots. What caused this change?

It was really about seeing a need and opportunity in the robotics space and seizing it. Brain Corp’s founder, Dr. Eugene Izhikevich, was approached by Qualcomm in 2008 to build a computer based on the human nervous system to investigate how mammalian brains process information and how biological architecture could potentially form the building blocks to a new wave of neuromorphic computing. After completing the project, Eugene and a close-knit team of scientists and engineers decided to apply their computational neuroscience and machine learning approaches to autonomy for robots.

While exploring different product directions, the team realized that the robotics industry of the day looked just like the computer industry before Microsoft—dozens of small companies all adding custom software to a recipe of parts from the same hardware manufacturer. Back then, lots of different types of computers existed, but they were all very expensive and did not work well with each other. Two leaders in operating systems emerged, Microsoft and Apple, with two different approaches: while Apple focused on building a self-contained ecosystem of products and services, Microsoft built an operating system that could work with almost any type of computer.

The Brain Corp team saw the value in creating a “Microsoft of robotics” that would unite all of the disparate robot solutions under one cloud-based software platform. Their goal became to help build out the emerging category of autonomous mobile robots (AMRs) by providing autonomy software that others could use to build their robots. The Brain Corp team decided to focus on making a hardware-agnostic operating system for AMRs. The idea was simple: to enable builders of robots, not build the robot intelligence themselves.

 

What was the inspiration for designing an autonomous scrubber versus other autonomous technologies?

Industrial robotic cleaners were the perfect way to enter the market with our technology. The commercial floor cleaning industry was in the midst of a labor shortage when we started out—constant turnover meant many jobs were simply not getting done. Autonomous mobile cleaning robots would not only help fill the labor gap in an essential industry, they would also be scalable—every environment has a floor and that floor probably needs cleaning. Floorcare was therefore a good opportunity for a first application.

Beyond that, retail companies spend about $13B on floorcare labor annually. Most employ cleaning staff who use large machines to scrub store floors, which is rote, boring work. Workers drive around bulky machines for hours when their time could be better spent on tasks that require acuity. An automated floor cleaning solution would fill in for missing workers while optimizing the efficiency and flow of store operations. By automating the mundane, boring task of scrubbing store floors, retail employees would be able to spend more time with customers and have a greater impact on business, ultimately leading to greater job satisfaction.

 

Can you discuss the challenge of designing robots in an environment that often involves tight spaces and humans who may not be paying attention to their surroundings?

It’s an exciting challenge! Retail was the perfect first implementation environment for Brain Corp’s system because retail stores are complex environments that pose a real autonomy challenge and are rife with edge cases, which allows Brain Corp to collect data that refines the BrainOS navigation platform.

We addressed these challenges of busy and crowded retail environments by building an intelligent system, BrainOS, that uses cameras and advanced LIDAR sensors to map the robot’s environment and navigate routes. The same technology combination also allows the robots to avoid people and obstacles, and find alternate routes if needed. If the robot encounters a problem it cannot resolve, it will call its human operator for help via text message.
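As a simplified illustration of that "call a human for help" behavior, here is a minimal, hypothetical Python sketch: if the simulated robot stays blocked past a timeout, it pauses and notifies an operator. The sensor readings and the messaging call are stand-ins, not BrainOS APIs.

```python
# Hypothetical sketch of the "ask a human for help" fallback described above:
# if the robot remains blocked past a timeout, it pauses and texts its operator.
BLOCKED_TIMEOUT_S = 3  # how long to stay blocked before escalating (illustrative value)

def notify_operator(message: str) -> None:
    """Stand-in for the text-message alert mentioned in the interview."""
    print(f"[SMS to operator] {message}")

def run_route(obstacle_readings):
    """Walk through simulated per-second obstacle readings along a taught route."""
    blocked_for = 0
    for second, blocked in enumerate(obstacle_readings):
        if not blocked:
            blocked_for = 0  # path is clear (or an alternate route was found)
            continue
        blocked_for += 1
        if blocked_for >= BLOCKED_TIMEOUT_S:
            notify_operator(f"Blocked for {blocked_for}s at t={second}s; assistance needed.")
            return "paused"
    return "completed"

# Simulated run: an obstacle appears and does not clear on its own.
print(run_route([False, False, True, True, True, True]))
```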

The robots learn how to navigate their surroundings through Brain Corp’s proprietary “teach and repeat” methodology. A human first drives the robot along the route manually to teach it the right path, and then the robot is able to repeat that route autonomously moving forward. This means BrainOS-powered robots can navigate complex environments without major infrastructure modifications or relying on GPS.
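The general teach-and-repeat idea can be sketched very simply: record the robot’s poses during the human-driven training run, then head toward each stored waypoint in order during replay. The hypothetical Python sketch below ignores localization, obstacle handling, and low-level control, and is not Brain Corp’s implementation.

```python
# Minimal sketch of the general "teach and repeat" idea: record waypoints while
# a person drives the training run, then replay that path. Illustrative only.
import math

def teach(driven_poses):
    """Record waypoints from a human-driven training run (x, y in meters)."""
    return list(driven_poses)

def repeat(waypoints, start):
    """Replay the taught route by heading toward each stored waypoint in order."""
    x, y = start
    for wx, wy in waypoints:
        heading = math.degrees(math.atan2(wy - y, wx - x))
        distance = math.hypot(wx - x, wy - y)
        print(f"drive {distance:.1f} m at heading {heading:.0f} deg -> ({wx}, {wy})")
        x, y = wx, wy  # in reality, localization closes the loop at each step

# Teach run: operator drives a cleaning loop around two aisles.
route = teach([(2, 0), (10, 0), (10, 3), (2, 3), (2, 0)])
repeat(route, start=(0, 0))
```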

 

How has the COVID-19 pandemic accelerated the adoption of Autonomous Mobile Robots (AMRs) in public spaces?

We have seen a significant uptick in autonomous usage across the BrainOS-powered fleet as grocers and retailers look to enhance cleaning efficiency and support workers during the health crisis.

During the first four months of the year, usage of BrainOS-powered robotic floor scrubbers in U.S. retail locations rose 18% compared to the same period last year, including a 24% y-o-y increase in April. Of that 18% increase, more than two-thirds (68%) occurred during the daytime, between 6 a.m. and 5:59 p.m. This means we’re seeing retailers expand usage of the robots to daytime hours when customers are in the stores, in addition to evening or night shifts. We expect this increase to continue as the value of automation comes sharply into focus.

 

What are some of the businesses or government entities that are using Brain Corp robots?

Our customers include top Fortune 500 retail companies including Walmart, Kroger, and Simon Property Group. BrainOS-powered robots are also used at several airports, malls, commercial buildings, and other public indoor environments.

 

Do you feel that this will increase the overall comfort of the public around robots in general?

Yes, people’s perception of robots and automation in general is changing as a result of the pandemic. More people (and businesses) realize how robots can support human workers in meaningful ways. As more businesses reopen, cleanliness will need to be an integral part of their brand and image. As people start to leave their homes to shop, work, or travel, they will look to see how businesses maintain cleanliness. Exceptionally good or poor cleanliness may have the power to sway consumer behavior and attitudes.

As we’ve seen in recent months, retailers are already using BrainOS-powered cleaning robots more often during daytime hours, showing their commitment and investment in cleaning to consumers. Now more than ever, businesses need to prove that they’re providing a safe and clean environment for customers and workers. Robots can help them deliver that next level of clean—a consistent, measurable clean that people can count on and trust.

 

Another application by Brain Corp is the autonomous delivery tug. Could you tell us more about what this is and the use cases for it?

The autonomous delivery tug, powered by BrainOS, enables autonomous delivery of stock carts and loose-pack inventory for any indoor point-to-point delivery needs, enhancing efficiency and productivity. The autonomous delivery tug eliminates inefficient back and forth material delivery and works seamlessly alongside human workers while safely navigating complex, dynamic environments such as retail stores, airports, warehouses, and factories.

A major ongoing challenge for retailers—one that has been exacerbated by the COVID-19 health crisis—is maintaining adequate stock levels in the face of soaring demand from consumers, particularly in grocery. Additionally, moving inventory and goods from the back of a truck, to the stockroom, and then out to store shelves is a laborious and time-consuming process requiring employees to haul heavy, stock-laden carts back and forth multiple times. The autonomous delivery tug aims to help retailers address these restocking challenges, taking the burden off store workers and providing safe and efficient point-to-point delivery of stock without the need for costly or complicated facility retrofitting.

The autonomous delivery application combines sophisticated AI technology with proven manufacturing equipment to create intelligent machines that can support workers by moving up to 1,000 pounds of stock at a time. Based on an in-field pilot program, the autonomous delivery tug will save retail employees 33 miles of back-and-forth travel per week, potentially increasing their productivity by 67%.

 

Is there anything else that you would like to share about Brain Corp?

Brain Corp powers the largest fleet of AMRs operating in dynamic public indoor spaces with over 10,000 floor care robots deployed or enabled worldwide. According to internal network data, AMRs powered by BrainOS are currently collectively providing over 10,000 hours of daily work, freeing up workers so they can focus on other high value tasks during this health crisis, such as disinfecting high-contact surfaces, re-stocking, or supporting customers.

In the long term, robots give businesses the flexibility to address labor challenges, absenteeism, rising costs, and more. From a societal standpoint, we believe robots will gain consumer favor as they’re seen more frequently operating in stores, hospitals, and health care facilities, or in warehouses providing essential support for workers.

We’re also excited about what the future holds for Brain Corp. Because BrainOS is a cloud-based platform that can essentially turn any mobile vehicle built by any manufacturer into an autonomous mobile robot, there are countless other applications for the technology beyond commercial floor cleaning, shelf scanning, and material delivery. Brain Corp is committed to continuously improving and building out our AI platform for powering advanced robotic equipment. We look forward to further exploring new markets and applications.

Thank you for the amazing interview. Readers who wish to learn more should visit Brain Corp.


Adi Singh, Product Manager in Robotics at Canonical – Interview Series


Adi Singh is the Product Manager in Robotics at Canonical. Canonical specializes in open source software, including Ubuntu, the world’s most popular enterprise Linux from cloud to edge, and has a global community of 200,000 contributors.

Ubuntu is the most popular Linux distribution for large embedded systems. As autonomous robots mature, innovative tech companies are turning to Ubuntu. We discuss the advantages of building a robot using open source software, along with other key considerations.

What sparked your initial interest in robotics?

A few years into software programming, I was dissatisfied with seeing my work only running on a screen. I had an urge to see some physical action, some tangible response, some real-world result of my engineering. Robotics was a natural answer to this urge.

Can you describe your day to day role with Canonical?

I define and lead the product strategy for the Robotics and Automotive verticals at Canonical. I am responsible for coordinating product development, executing go-to-market strategies, and managing engagements with external organizations related to my domain.

Why is building a robot on open source software so important?

Building anything on open source software is usually a wise idea, as it allows you to stand on the shoulders of giants. Individuals and companies alike benefit from the volunteer contributions of some of the brightest minds in the world when they decide to build on a foundation of open source software. As a result, popular FOSS repositories are very robustly engineered and very actively maintained, allowing users to focus on their innovation rather than the nuts and bolts of every library going into their product.

Can you describe what the Ubuntu open source platform offers to IoT and robotics developers?

Ubuntu is the platform of choice for developers around the world for frictionless IoT and robotics development. A number of popular frameworks that help with device engineering are built on Ubuntu, so the OS provides several tools for building and deploying products in this area right out of the box. For instance, the most widely used middleware for robotics development – ROS – runs almost entirely on Ubuntu distros (more than 99.5% according to official metrics: https://metrics.ros.org/packages_linux.html).
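For a sense of how little boilerplate a ROS 2 node needs on Ubuntu, here is a minimal publisher written with rclpy, the Python client library that ships with ROS 2. It assumes a working ROS 2 installation and omits the package scaffolding (package.xml, setup.py) a real project would include.

```python
# Minimal ROS 2 publisher node using rclpy. Assumes a working ROS 2 install;
# topic and node names are arbitrary examples.
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class HeartbeatPublisher(Node):
    def __init__(self):
        super().__init__("heartbeat_publisher")
        self.pub = self.create_publisher(String, "heartbeat", 10)
        self.count = 0
        self.create_timer(1.0, self.tick)  # publish once per second

    def tick(self):
        msg = String()
        msg.data = f"robot alive, tick {self.count}"
        self.pub.publish(msg)
        self.count += 1

def main():
    rclpy.init()
    node = HeartbeatPublisher()
    try:
        rclpy.spin(node)
    finally:
        node.destroy_node()
        rclpy.shutdown()

if __name__ == "__main__":
    main()
```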

What are some of the key considerations that should be analyzed when choosing a robot’s operating system?

Choosing the right operating system is one of the most important decisions to be made when building a new robot, and it involves several development factors. Hardware and software stack compatibility is key, as ample time will be spent ensuring components work well together so as not to hinder progress on developing the robot itself.

Also, the dev team’s prior familiarity with the operating system is a huge factor affecting economics, as previous experience will no doubt help accelerate the overall robot development process and thereby cut down on time to market. Ease of system integration and third-party add-ons should also be heavily considered. A robot is rarely a standalone device and often needs to seamlessly interact with other devices. These companion devices may be as simple as a digital twin for hardware-in-the-loop testing, but in general, off-device computation is getting more popular in robotics. Cloud robotics, speech processing, and machine learning are all use cases that can benefit from processing information in a server farm instead of on a resource-constrained robot.

Additionally, robustness and a level of security engineered into the kernel are imperative. Availability of long-term support for the operating system, especially from the community, is another factor. Something to keep in mind is that operating systems are typically only supported for a set amount of time. For example, long-term support (LTS) releases of Android Things are supported for three years, whereas Ubuntu and Ubuntu Core are supported for five years (or for 10 years with Extended Security Maintenance). If the supported lifespan of the operating system is shorter than the anticipated lifespan of the robot in the field, it will eventually stop getting updates and die early.

Thank you for the interview. Readers who wish to learn more should visit Ubuntu Robotics.
