
Robotics

Flexible Robot “Grows” Like a Plant

Engineers from MIT have designed a robot that can extend a chain-like appendage. This makes the robot extremely flexible, allowing it to configure itself in many different ways. At the same time, it is strong enough to support heavy weights or apply torque, making it capable of assembling parts in tight spaces. After completing its tasks, the robot can retract the appendage and extend it again at a different length and shape.

This newly developed robot could make a difference in settings like warehouses, where most robots cannot fit into narrow spaces. The new plant-like robot can be used to grab products at the back of a shelf, or even maneuver around a car’s engine parts to unscrew an oil cap.

The design was inspired by plants and the way they grow. In that process, nutrients are transported to the plant’s tip as a fluid. Once they reach the tip, they are converted into solid material that produces, a little at a time, a supportive stem. 

The plant-like robot has a “growing point” or gearbox, which draws a loose chain of interlocking blocks into the box. Once there, gears lock the chain units together and release the chain, unit by unit, until it forms a rigid appendage. 

Team of Engineers

The new robot was presented this week at the IEEE International Conference on Intelligent Robots and Systems (IROS) in Macau. In the future, the engineers would like to add on grippers, cameras, and sensors that could be mounted onto the gearbox. This would allow the robot to tighten a loose screw after making its way through an aircraft’s propulsion system. It could also retrieve a product without disturbing anything in the near surroundings. 

Harry Asada is a professor of mechanical engineering at MIT.

“Think about changing the oil in your car,” Asada says. “After you open the engine roof, you have to be flexible enough to make sharp turns, left and right, to get to the oil filter, and then you have to be strong enough to twist the oil filter cap to remove it.”

Tongxi Yan is a former graduate student in Asada’s lab, and he led the work.

“Now we have a robot that can potentially accomplish such tasks,” he says. “It can grow, retract, and grow again to a different shape, to adapt to its environment.”

The team of engineers also consisted of MIT graduate student Emily Kamienski and visiting scholar Seiichi Teshigawara.

Plant-Like Robot

After defining the different aspects of plant growth, the team looked to implement it into a robot. 

“The realization of the robot is totally different from a real plant, but it exhibits the same kind of functionality, at a certain abstract level,” Asada says.

The gearbox was designed to represent the robot’s “growing tip,” the equivalent of a bud on a plant: the site where nutrients flow and gradually build a rigid stem. The box contains a system of gears and motors that pull up a fluidized material, which for this robot is a sequence of interconnected 3-D printed plastic units.

The robot is capable of being programmed to choose which units to lock together and which to leave unlocked. This allows it to form specific shapes and “grow” in specific directions.
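The lock-or-leave-unlocked scheme can be illustrated with a small simulation. The sketch below is purely hypothetical (it is not the team’s control code): it models each chain unit as locked straight or locked at a 90° bend, and traces the shape the appendage would take on a grid.

```python
# Hypothetical illustration of shape-from-locking: each unit of the chain is
# locked straight or at a 90-degree bend, and the sequence of choices
# determines the appendage's final shape.

# Turn applied at each unit: 0 = straight, +1 = 90° left, -1 = 90° right.
LEFT, STRAIGHT, RIGHT = 1, 0, -1

def trace_shape(turns):
    """Return the grid cells occupied by a chain locked with these turns."""
    # Headings cycle through east, north, west, south.
    headings = [(1, 0), (0, 1), (-1, 0), (0, -1)]
    h = 0                      # start heading east
    x, y = 0, 0
    cells = [(x, y)]
    for turn in turns:
        h = (h + turn) % 4     # lock this unit straight or at a bend
        dx, dy = headings[h]
        x, y = x + dx, y + dy
        cells.append((x, y))
    return cells

# "Grow" around an obstacle: forward, forward, left, forward, right, forward.
path = trace_shape([STRAIGHT, STRAIGHT, LEFT, STRAIGHT, RIGHT, STRAIGHT])
print(path)  # → [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (3, 2), (4, 2)]
```

The same sequence of units produces a different shape each time the turn choices change, which is the essence of growing "again to a different shape."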

“It can be locked in different places to be curved in different ways, and have a wide range of motions,” Yan says.

The chain is able to support a one-pound weight when locked and rigid. If a gripper were to be attached, the researchers believe it would be able to grow long enough to maneuver through a narrow space, and perform tasks such as unscrewing a cap.

 


Robotics

NASA to Use Machine Learning to Enhance Search for Alien Life on Mars



Researchers at NASA have been hard at work on a pilot AI system intended to help future exploration missions find evidence of life on other planets in our solar system. Machine learning algorithms will help exploration devices analyze soil samples on Mars and return the most relevant data to NASA. The pilot program is currently slated for a test run during the ExoMars mission that will see its launch in mid-2022.

As IEEE Spectrum reports, the decision to use machine learning and artificial intelligence to aid the search for life on other planets was driven largely by Eric Lyness, the head of the Goddard Planetary Environments Lab at NASA. Lyness needed to come up with ways of automating aspects of geochemical analyses of samples taken in other parts of our solar system. He decided machine learning could help automate many of the tasks that exploration craft like the Mars rovers must carry out, including the collection and analysis of Martian soil samples.

The ExoMars rover Rosalind Franklin will be capable of drilling at least two meters deep into the Martian soil. At this depth, any microbes living there won’t have been killed by the sun’s UV light, which makes it possible that the rover could find living bacteria. Even if no living bacteria are found, the drill may turn up fossilized evidence of life on Mars, held over from earlier eras when the planet was more hospitable to life. The samples that the rover’s drill collects will be passed to an instrument called a mass spectrometer for analysis.

The mass spectrometer’s purpose is to study the distribution of mass among the ions in a given sample. This is accomplished by firing a laser at the soil sample, which frees molecules from it, and then calculating the atomic masses of those molecules. The process produces a mass spectrum, which researchers analyze to explain the pattern of peaks they see. There’s a complication, however: different compounds can produce widely varying spectra, so working out which compounds a sample contains from its mass spectrum is a puzzle, and machine learning algorithms might be able to help solve it.

The researchers are studying a mineral called montmorillonite, which is commonly found in Martian soil, and are aiming to understand how it manifests within a mass spectrum. The team adds montmorillonite to samples to see how the output of the mass spectrometer changes, giving them clues as to what the mineral looks like in a spectrum. The AI algorithms will then assist the researchers in extracting meaningful patterns from the mass spectrometer’s data.
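As a rough illustration of the kind of pattern matching involved, the sketch below compares an unknown spectrum against a toy reference library by cosine similarity. The data, the reference names, and the method are all invented for illustration; NASA’s actual pipeline is more sophisticated than a nearest-reference lookup.

```python
# Illustrative sketch (hypothetical data and method, not NASA's pipeline):
# match an unknown mass spectrum against reference spectra by cosine
# similarity, the kind of pattern matching an ML system could automate.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length intensity vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy reference library: intensity at binned mass-to-charge values.
references = {
    "montmorillonite": [0.1, 0.8, 0.05, 0.3, 0.0],
    "plain_basalt":    [0.6, 0.1, 0.4, 0.0, 0.2],
}

def best_match(spectrum):
    """Name of the reference spectrum most similar to the input."""
    return max(references, key=lambda name: cosine(spectrum, references[name]))

unknown = [0.12, 0.75, 0.1, 0.28, 0.02]   # peak pattern resembles montmorillonite
print(best_match(unknown))                 # → montmorillonite
```

A trained model would replace the fixed reference table with patterns learned from many lab spectra, but the goal is the same: quickly rule compounds in or out so scientists can focus on the interesting peaks.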

As Lyness was quoted by IEEE Spectrum:

“It could take a long time to really break down a spectrum and understand why you’re seeing peaks at certain [masses] in the spectrum. So anything you can do to point scientists into a direction that says, ‘Don’t worry, I know it’s not this kind of thing or that kind of thing,’ they can more quickly identify what’s in there.”

According to Lyness, the ExoMars mission will be an excellent test case for the AI algorithms designed to help interpret the mass spectrums generated by samples.

There are other potential applications for AI and machine learning in the field of astrobiology. The Dragonfly drone, and potentially other future missions, will operate farther from Earth and in harsher environments, which will require automating aspects of navigation and data transmission.


Autonomous Vehicles

Phil Duffy, VP of Product, Program & UX Design at Brain Corp – Interview Series


Phil Duffy is the VP of Product, Program & UX Design at Brain Corp, a San Diego-based technology company specializing in the development of intelligent, autonomous navigation systems for everyday machines.

The company was co-founded in 2009 by world-renowned computational neuroscientist, Dr. Eugene Izhikevich, and serial tech entrepreneur, Dr. Allen Gruber. Brain Corp’s initial work involved advanced R&D for Qualcomm Inc. and DARPA. The company is now focused on developing advanced machine learning and computer vision systems for the next generation of self-driving robots.

Brain Corp powers the largest fleet of autonomous mobile robots (AMRs), with over 10,000 robots deployed or enabled worldwide, and works with several Fortune 500 customers such as Walmart and Kroger.

What attracted you initially to the field of robotics?

My personal interest in developing robots over the last two decades stems from the fact that intelligent robots are one of the two major unfulfilled dreams of the last century—the other dream being flying cars.

Scientists, science-fiction writers, and filmmakers all predicted we would have intelligent robots doing our bidding and helping us in our daily lives a long time ago. As part of fulfilling that vision, I am passionate about developing robots that tackle the repetitive, dull, dirty, and dangerous tasks that robots excel at, but also building solutions that highlight the unique advantages of humans performing creative, complex tasks that robots struggle with. Developing robots that work alongside humans, both empowering each other, ensures we build advanced tools that help us become more efficient and productive.

I am also driven by being part of a fledgling industry that is building the initial stages of the robotics ecosystem. The robotics industry of the future, like the PC or smartphone industry today, will include a wide array of technical and non-technical staff, developing, selling, deploying, monitoring, servicing, and operating robots. I’m excited to see how that industry grows and how decisions we make today impact the industry’s future direction.

 

In 2014, Brain Corp pivoted from performing research and development for Qualcomm, to the development of machine learning and computer-vision systems for autonomous robots. What caused this change?

It was really about seeing a need and opportunity in the robotics space and seizing it. Brain Corp’s founder, Dr. Eugene Izhikevich, was approached by Qualcomm in 2008 to build a computer based on the human nervous system to investigate how mammalian brains process information and how biological architecture could potentially form the building blocks to a new wave of neuromorphic computing. After completing the project, Eugene and a close-knit team of scientists and engineers decided to apply their computational neuroscience and machine learning approaches to autonomy for robots.

While exploring different product directions, the team realized that the robotics industry of the day looked just like the computer industry before Microsoft—dozens of small companies all adding custom software to a recipe of parts from the same hardware manufacturer. Back then, lots of different types of computers existed, but they were all very expensive and did not work well with each other. Two leaders in operating systems emerged, Microsoft and Apple, with two different approaches: while Apple focused on building a self-contained ecosystem of products and services, Microsoft built an operating system that could work with almost any type of computer.

The Brain Corp team saw the value in creating a “Microsoft of robotics” that would unite all of the disparate robot solutions under one cloud-based software platform. Their goal became to help build out the emerging category of autonomous mobile robots (AMRs) by providing autonomy software that others could use to build their robots. The Brain Corp team decided to focus on making a hardware-agnostic operating system for AMRs. The idea was simple: to enable builders of robots, not build the robot intelligence themselves.

 

What was the inspiration for designing an autonomous scrubber versus other autonomous technologies?

Industrial robotic cleaners were the perfect way to enter the market with our technology. The commercial floor cleaning industry was in the midst of a labor shortage when we started out—constant turnover meant many jobs were simply not getting done. Autonomous mobile cleaning robots would not only help fill the labor gap in an essential industry, they would also be scalable—every environment has a floor and that floor probably needs cleaning. Floorcare was therefore a good opportunity for a first application.

Beyond that, retail companies spend about $13B on floorcare labor annually. Most employ cleaning staff who use large machines to scrub store floors, which is rote, boring work. Workers drive around bulky machines for hours when their time could be better spent on tasks that require acuity. An automated floor cleaning solution would fill in for missing workers while optimizing the efficiency and flow of store operations. By automating the mundane, boring task of scrubbing store floors, retail employees would be able to spend more time with customers and have a greater impact on business, ultimately leading to greater job satisfaction.

 

Can you discuss the challenge of designing robots in an environment that often involves tight spaces and humans who may not be paying attention to their surroundings?

It’s an exciting challenge! Retail was the perfect first implementation environment for Brain Corp’s system because stores are complex environments that pose an autonomy challenge, and they are rife with edge cases that allow Brain Corp to collect data that refines the BrainOS navigation platform.

We addressed these challenges of busy and crowded retail environments by building an intelligent system, BrainOS, that uses cameras and advanced LIDAR sensors to map the robot’s environment and navigate routes. The same technology combination also allows the robots to avoid people and obstacles, and find alternate routes if needed. If the robot encounters a problem it cannot resolve, it will call its human operator for help via text message.

The robots learn how to navigate their surroundings through Brain Corp’s proprietary “teach and repeat” methodology. A human first drives the robot along the route manually to teach it the right path, and then the robot is able to repeat that route autonomously moving forward. This means BrainOS-powered robots can navigate complex environments without major infrastructure modifications or relying on GPS.
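A minimal sketch of the teach-and-repeat idea follows. The class name, pose format, and structure are illustrative assumptions, not BrainOS internals: a human-driven teach run records poses, and the repeat run replays them as navigation targets.

```python
# Conceptual sketch of a "teach and repeat" workflow (hypothetical, not
# BrainOS internals): record poses during a manual teach run, then replay
# them as waypoints for autonomous runs.

class TeachAndRepeat:
    def __init__(self):
        self.route = []          # recorded (x, y, heading_deg) waypoints

    def teach(self, pose):
        """Called repeatedly while a human drives the robot along the path."""
        self.route.append(pose)

    def repeat(self):
        """Yield the taught waypoints for the robot to follow autonomously."""
        for pose in self.route:
            yield pose           # a real system adds obstacle avoidance here

robot = TeachAndRepeat()
for pose in [(0, 0, 0), (5, 0, 0), (5, 3, 90)]:   # manual teach run
    robot.teach(pose)
print(list(robot.repeat()))      # → [(0, 0, 0), (5, 0, 0), (5, 3, 90)]
```

The appeal of the approach is that the taught route lives entirely in software, so no GPS, floor markers, or other facility retrofitting is needed to redeploy a robot on a new route.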

 

How has the COVID-19 pandemic accelerated the adoption of Autonomous Mobile Robots (AMRs) in public spaces?

We have seen a significant uptick in autonomous usage across the BrainOS-powered fleet as grocers and retailers look to enhance cleaning efficiency and support workers during the health crisis.

During the first four months of the year, usage of BrainOS-powered robotic floor scrubbers in U.S. retail locations rose 18% compared to the same period last year, including a 24% y-o-y increase in April. Of that 18% increase, more than two-thirds (68%) occurred during the daytime, between 6 a.m. and 5:59 p.m. This means we’re seeing retailers expand usage of the robots to daytime hours when customers are in the stores, in addition to evening or night shifts. We expect this increase to continue as the value of automation comes sharply into focus.

 

What are some of the businesses or government entities that are using Brain Corp robots?

Our customers include top Fortune 500 retail companies including Walmart, Kroger, and Simon Property Group. BrainOS-powered robots are also used at several airports, malls, commercial buildings, and other public indoor environments.

 

Do you feel that this will increase the overall comfort of the public around robots in general?

Yes, people’s perception of robots and automation in general is changing as a result of the pandemic. More people (and businesses) realize how robots can support human workers in meaningful ways. As more businesses reopen, cleanliness will need to be an integral part of their brand and image. As people start to leave their homes to shop, work, or travel, they will look to see how businesses maintain cleanliness. Exceptionally good or poor cleanliness may have the power to sway consumer behavior and attitudes.

As we’ve seen in the last months, retailers are already using BrainOS-powered cleaning robots more often during daytime hours, showing their commitment and investment in cleaning to consumers. Now more than ever, businesses need to prove that they’re providing a safe and clean environment for customers and workers. Robots can help them deliver that next level of clean—a consistent, measurable clean that people can count on and trust.

 

Another application by Brain Corp is the autonomous delivery tug. Could you tell us more about what this is and the use cases for it?

The autonomous delivery tug, powered by BrainOS, enables autonomous delivery of stock carts and loose-pack inventory for any indoor point-to-point delivery needs, enhancing efficiency and productivity. The autonomous delivery tug eliminates inefficient back and forth material delivery and works seamlessly alongside human workers while safely navigating complex, dynamic environments such as retail stores, airports, warehouses, and factories.

A major ongoing challenge for retailers—one that has been exacerbated by the COVID-19 health crisis—is maintaining adequate stock levels in the face of soaring demand from consumers, particularly in grocery. Additionally, the process of moving inventory and goods from the back of a truck, to the stockroom, and then out to store shelves, is a laborious and time-consuming process requiring employees to haul heavy, stock-laden carts back and forth multiple times. The autonomous delivery tug aims to help retailers address these restocking challenges, taking the burden off store workers and providing safe and efficient point-to-point delivery of stock without the need for costly or complicated facility retrofitting.

The autonomous delivery application combines sophisticated AI technology with proven manufacturing equipment to create intelligent machines that can support workers by moving up to 1,000 pounds of stock at a time. Based on an in-field pilot program, the autonomous delivery tug will save retail employees 33 miles of back-and-forth travel per week, potentially increasing their productivity by 67%.

 

Is there anything else that you would like to share about Brain Corp?

Brain Corp powers the largest fleet of AMRs operating in dynamic public indoor spaces with over 10,000 floor care robots deployed or enabled worldwide. According to internal network data, AMRs powered by BrainOS are currently collectively providing over 10,000 hours of daily work, freeing up workers so they can focus on other high value tasks during this health crisis, such as disinfecting high-contact surfaces, re-stocking, or supporting customers.

In the long term, robots give businesses the flexibility to address labor challenges, absenteeism, rising costs, and more. From a societal standpoint, we believe robots will gain consumer favor as they’re seen more frequently operating in stores, hospitals, and health care facilities, or in warehouses providing essential support for workers.

We’re also excited about what the future holds for Brain Corp. Because BrainOS is a cloud-based platform that can essentially turn any mobile vehicle built by any manufacturer into an autonomous mobile robot, there are countless other applications for the technology beyond commercial floor cleaning, shelf scanning, and material delivery. Brain Corp is committed to continuously improving and building out our AI platform for powering advanced robotic equipment. We look forward to further exploring new markets and applications.

Thank you for the amazing interview. Readers who wish to learn more should visit Brain Corp.


Interviews

Adi Singh, Product Manager in Robotics at Canonical – Interview Series


Adi Singh is the Product Manager in Robotics at Canonical. Canonical specializes in open source software, including Ubuntu, the world’s most popular enterprise Linux from cloud to edge, and has a global community of 200,000 contributors.

Ubuntu is the most popular Linux distribution for large embedded systems. As autonomous robots mature and innovative tech companies turn to Ubuntu, we discuss the advantages of building a robot using open source software, along with other key considerations.

What sparked your initial interest in robotics?

A few years into software programming, I was dissatisfied with seeing my work only running on a screen. I had an urge to see some physical action, some tangible response, some real-world result of my engineering. Robotics was a natural answer to this urge.

Can you describe your day to day role with Canonical?

I define and lead the product strategy for Robotics and Automotive verticals at Canonical. I am responsible for coordinating product development, executing go-to-market strategies, and engagements with external organizations related to my domain.

Why is building a robot on open source software so important?

Building anything on open source software is usually a wise idea, as it allows you to stand on the shoulders of giants. Individuals and companies alike benefit from the volunteer contributions of some of the brightest minds in the world when they decide to build on a foundation of open source software. As a result, popular FOSS repositories are very robustly engineered and very actively maintained, allowing users to focus on their innovation rather than the nuts and bolts of every library going into their product.

Can you describe what the Ubuntu open source platform offers to IoT and robotics developers?

Ubuntu is the platform of choice for developers around the world for frictionless IoT and robotics development. A number of popular frameworks that help with device engineering are built on Ubuntu, so the OS provides several tools for building and deploying products in this area right out of the box. For instance, the most widely used middleware for robotics development, ROS, runs almost entirely on Ubuntu distros (more than 99.5%, according to official metrics: https://metrics.ros.org/packages_linux.html).

What are some of the key considerations that should be analyzed when choosing a robot’s operating system?

Choosing the right operating system is one of the most important decisions to be made when building a new robot, and several development factors come into play. Hardware and software stack compatibility is key, as ample time will be spent ensuring components work well together so as not to hinder progress on developing the robot itself.

Also, prior familiarity of the operating systems by the dev team is a huge factor affecting economics, as previous experience will no doubt help to accelerate the overall robot development process and thereby cut down on the time to market. Ease of system integration and third-party add-ons should also be heavily considered. A robot is rarely a standalone device and often needs to seamlessly interact with other devices. These companion devices may be as simple as a digital twin for hardware-in-the-loop testing, but in general, off-device computation is getting more popular in robotics. Cloud robotics, speech processing and machine learning are all use-cases that can benefit from processing information in a server farm instead of on a resource-constrained robot.

Additionally, robustness and a level of security engineered into the kernel is imperative. Availability of long-term support for the operating system, especially from the community, is another factor. Something to keep in mind is that operating systems are typically only supported for a set amount of time. For example, long-term support (LTS) releases of Android Things are supported for three years, whereas Ubuntu and Ubuntu Core are supported for five years (or for 10 years with Extended Security Maintenance). If the supported lifespan of the operating system is shorter than the anticipated lifespan of the robot in the field, it will eventually stop getting updates and die early.

Thank you for the interview. Readers who wish to learn more should visit Ubuntu Robotics.
