Engineers at MIT have developed a system that could improve the safety of autonomous vehicles. By sensing small changes in shadows on the ground, it can determine whether a moving object is approaching from around a corner.
One of the major goals for any company developing autonomous vehicles is increased safety. Engineers are constantly working to make the vehicles better at avoiding collisions with other cars and pedestrians, especially those emerging from around a building’s corner.
The system could also be used on robots that eventually navigate hospitals, delivering medication or supplies while avoiding collisions with people.
A paper describing the work will be presented next week at the International Conference on Intelligent Robots and Systems (IROS). It details the researchers’ successful experiments, in which an autonomous car maneuvered around a parking garage and stopped when approaching another vehicle.
In those tests, the shadow-based system sensed an approaching vehicle more than half a second faster than LiDAR, which can only detect objects directly in its line of sight. According to the researchers, fractions of a second can make a huge difference for fast-moving autonomous vehicles.
“For applications where robots are moving around environments with other moving objects or people, our method can give the robot an early warning that somebody is coming around the corner, so the vehicle can slow down, adapt its path, and prepare in advance to avoid a collision,” adds co-author Daniela Rus, director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science. “The big dream is to provide ‘X-ray vision’ of sorts to vehicles moving fast on the streets.”
So far, the new system has only been tested indoors, where light levels are lower and robot speeds are slower — conditions in which shadows are much easier to sense and analyze.
The paper was compiled by Daniela Rus; first author Felix Naser, who is a former CSAIL researcher; Alexander Amini, a CSAIL graduate student; Igor Gilitschenski, a CSAIL postdoc; graduate Christina Liao; Guy Rosman of the Toyota Research Institute; and Sertac Karaman, associate professor of aeronautics and astronautics at MIT.
Prior to this work, the researchers had already developed a system called “ShadowCam,” which uses computer-vision techniques to identify and classify changes in shadows on the ground. Earlier versions of the system, presented in 2017 and 2018, were developed by MIT professors William Freeman and Antonio Torralba, who are not co-authors on the IROS paper.
ShadowCam uses sequences of video frames from a camera targeting a specific area and detects changes in light intensity over time, which indicate whether something is moving away or getting closer. It then classifies each image as containing a stationary object or a moving one, allowing the system to respond accordingly.
ShadowCam was then adapted for use on autonomous vehicles. It originally relied on augmented-reality labels called “AprilTags,” which resemble simplified QR codes, to focus on particular clusters of pixels and check them for shadows. However, relying on AprilTags proved impractical in real-world scenarios.
Because of this, the researchers created a new process that uses image registration and a visual-odometry technique together. Image registration overlays multiple images in order to identify any variations.
The visual-odometry technique the researchers use, called “Direct Sparse Odometry” (DSO), serves a similar purpose to the AprilTags: it plots features of an environment onto a 3D point cloud, and a computer-vision pipeline then selects a region of interest, such as a floor.
Combining DSO with image registration, ShadowCam overlays all of the images taken from the same viewpoint of the robot. Whether the robot is moving or stationary, it can then zero in on the exact same patch of pixels where a shadow is located.
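The intensity-change classification at the heart of this pipeline can be sketched in a few lines. This is a simplified illustration, not the actual ShadowCam implementation: it assumes the frames have already been registered so the same pixels cover the same patch of floor across time, and the threshold value is hypothetical.

```python
import numpy as np

def classify_shadow_patch(frames, threshold=2.0):
    """Classify a registered patch of pixels as 'moving' or 'static'.

    frames: sequence of 2D grayscale arrays, pre-registered so the same
    pixels cover the same patch of floor across time.
    threshold: mean-intensity-change cutoff (hypothetical value).
    """
    stack = np.stack([f.astype(np.float64) for f in frames])
    # Frame-to-frame intensity differences: a moving shadow shows up as
    # a consistent change in brightness across the patch.
    diffs = np.abs(np.diff(stack, axis=0))
    signal = diffs.mean()
    return "moving" if signal > threshold else "static"

# Synthetic example: a dark band sweeping across an otherwise static patch.
rng = np.random.default_rng(0)
static = [100 + rng.normal(0, 0.5, (32, 32)) for _ in range(5)]
moving = []
for t in range(5):
    frame = 100 + rng.normal(0, 0.5, (32, 32))
    frame[:, t * 6:t * 6 + 6] -= 30  # the "shadow" advances each frame
    moving.append(frame)

print(classify_shadow_patch(static))  # static
print(classify_shadow_patch(moving))  # moving
```

Averaging intensity changes over a whole patch, rather than inspecting single pixels, is what lets this kind of classifier pick up shadow shifts too subtle for the human eye.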
The researchers will continue to work on this system, and they will focus on the differences between indoor and outdoor lighting conditions. Ultimately, the team wants to increase the speed of the system as well as automate the process.
Phil Duffy, VP of Product, Program & UX Design at Brain Corp – Interview Series
Phil Duffy is the VP of Product, Program & UX Design at Brain Corp, a San Diego-based technology company specializing in the development of intelligent, autonomous navigation systems for everyday machines.
The company was co-founded in 2009 by world-renowned computational neuroscientist, Dr. Eugene Izhikevich, and serial tech entrepreneur, Dr. Allen Gruber. Brain Corp’s initial work involved advanced R&D for Qualcomm Inc. and DARPA. The company is now focused on developing advanced machine learning and computer vision systems for the next generation of self-driving robots.
Brain Corp powers the largest fleet of autonomous mobile robots (AMRs) with over 10,000 robots deployed or enabled worldwide and works with several Fortune 500 customers like Walmart and Kroger.
What attracted you initially to the field of robotics?
My personal interest in developing robots over the last two decades stems from the fact that intelligent robots are one of the two major unfulfilled dreams of the last century—the other dream being flying cars.
Scientists, science-fiction writers, and filmmakers all predicted we would have intelligent robots doing our bidding and helping us in our daily lives a long time ago. As part of fulfilling that vision, I am passionate about developing robots that tackle the repetitive, dull, dirty, and dangerous tasks that robots excel at, but also building solutions that highlight the unique advantages of humans performing creative, complex tasks that robots struggle with. Developing robots that work alongside humans, both empowering each other, ensures we build advanced tools that help us become more efficient and productive.
I am also driven by being part of a fledgling industry that is building the initial stages of the robotics ecosystem. The robotics industry of the future, like the PC or smartphone industry today, will include a wide array of technical and non-technical staff, developing, selling, deploying, monitoring, servicing, and operating robots. I’m excited to see how that industry grows and how decisions we make today impact the industry’s future direction.
In 2014, Brain Corp pivoted from performing research and development for Qualcomm to developing machine learning and computer-vision systems for autonomous robots. What caused this change?
It was really about seeing a need and opportunity in the robotics space and seizing it. Brain Corp’s founder, Dr. Eugene Izhikevich, was approached by Qualcomm in 2008 to build a computer based on the human nervous system to investigate how mammalian brains process information and how biological architecture could potentially form the building blocks to a new wave of neuromorphic computing. After completing the project, Eugene and a close-knit team of scientists and engineers decided to apply their computational neuroscience and machine learning approaches to autonomy for robots.
While exploring different product directions, the team realized that the robotics industry of the day looked just like the computer industry before Microsoft—dozens of small companies all adding custom software to a recipe of parts from the same hardware manufacturer. Back then, lots of different types of computers existed, but they were all very expensive and did not work well with each other. Two leaders in operating systems emerged, Microsoft and Apple, with two different approaches: while Apple focused on building a self-contained ecosystem of products and services, Microsoft built an operating system that could work with almost any type of computer.
The Brain Corp team saw the value in creating a “Microsoft of robotics” that would unite all of the disparate robot solutions under one cloud-based software platform. Their goal became to help build out the emerging category of autonomous mobile robots (AMRs) by providing autonomy software that others could use to build their robots. The Brain Corp team decided to focus on making a hardware-agnostic operating system for AMRs. The idea was simple: to enable builders of robots, not build the robot intelligence themselves.
What was the inspiration for designing an autonomous scrubber versus other autonomous technologies?
Industrial robotic cleaners were the perfect way to enter the market with our technology. The commercial floor cleaning industry was in the midst of a labor shortage when we started out—constant turnover meant many jobs were simply not getting done. Autonomous mobile cleaning robots would not only help fill the labor gap in an essential industry, they would also be scalable—every environment has a floor and that floor probably needs cleaning. Floorcare was therefore a good opportunity for a first application.
Beyond that, retail companies spend about $13B on floorcare labor annually. Most employ cleaning staff who use large machines to scrub store floors, which is rote, boring work. Workers drive around bulky machines for hours when their time could be better spent on tasks that require acuity. An automated floor cleaning solution would fill in for missing workers while optimizing the efficiency and flow of store operations. By automating the mundane, boring task of scrubbing store floors, retail employees would be able to spend more time with customers and have a greater impact on business, ultimately leading to greater job satisfaction.
Can you discuss the challenge of designing robots in an environment that often involves tight spaces and humans who may not be paying attention to their surroundings?
It’s an exciting challenge! Retail was the perfect first implementation environment for Brain Corp’s system because stores are complex environments that pose a real autonomy challenge, and they are ripe with edge cases that allow Brain Corp to collect data that refines the BrainOS navigation platform.
We addressed these challenges of busy and crowded retail environments by building an intelligent system, BrainOS, that uses cameras and advanced LIDAR sensors to map the robot’s environment and navigate routes. The same technology combination also allows the robots to avoid people and obstacles, and find alternate routes if needed. If the robot encounters a problem it cannot resolve, it will call its human operator for help via text message.
The robots learn how to navigate their surroundings through Brain Corp’s proprietary “teach and repeat” methodology. A human first drives the robot along the route manually to teach it the right path, and then the robot is able to repeat that route autonomously moving forward. This means BrainOS-powered robots can navigate complex environments without major infrastructure modifications or relying on GPS.
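The teach-and-repeat idea can be illustrated with a minimal sketch. Brain Corp’s actual implementation is proprietary; the class, the waypoint format, and the nearest-waypoint lookup below are assumptions made purely for illustration.

```python
import math

class TeachAndRepeat:
    """Toy sketch of a teach-and-repeat navigator (illustrative only)."""

    def __init__(self):
        self.route = []  # recorded (x, y) waypoints

    def teach(self, pose):
        """Record a waypoint while a human drives the robot along the route."""
        self.route.append(pose)

    def next_waypoint(self, pose):
        """During repeat: find the closest recorded waypoint to the robot's
        current pose and return the one after it as the navigation target."""
        if not self.route:
            return None
        dists = [math.dist(pose, wp) for wp in self.route]
        i = dists.index(min(dists))
        return self.route[min(i + 1, len(self.route) - 1)]

# Teach a short L-shaped route, then ask for a target mid-route.
nav = TeachAndRepeat()
for wp in [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]:
    nav.teach(wp)
print(nav.next_waypoint((1.1, 0.2)))  # closest is (1, 0) -> target (2, 0)
```

Because the route is stored as the robot’s own recorded poses, this scheme needs no external infrastructure such as beacons or GPS, which is the practical appeal of teach-and-repeat indoors.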
How has the COVID-19 pandemic accelerated the adoption of Autonomous Mobile Robots (AMRs) in public spaces?
We have seen a significant uptick in autonomous usage across the BrainOS-powered fleet as grocers and retailers look to enhance cleaning efficiency and support workers during the health crisis.
During the first four months of the year, usage of BrainOS-powered robotic floor scrubbers in U.S. retail locations rose 18% compared to the same period last year, including a 24% y-o-y increase in April. Of that 18% increase, more than two-thirds (68%) occurred during the daytime, between 6 a.m. and 5:59 p.m. This means we’re seeing retailers expand usage of the robots to daytime hours when customers are in the stores, in addition to evening or night shifts. We expect this increase to continue as the value of automation comes sharply into focus.
What are some of the businesses or government entities that are using Brain Corp robots?
Our customers include top Fortune 500 retail companies including Walmart, Kroger, and Simon Property Group. BrainOS-powered robots are also used at several airports, malls, commercial buildings, and other public indoor environments.
Do you feel that this will increase the overall comfort of the public around robots in general?
Yes, people’s perception of robots and automation in general is changing as a result of the pandemic. More people (and businesses) realize how robots can support human workers in meaningful ways. As more businesses reopen, cleanliness will need to be an integral part of their brand and image. As people start to leave their homes to shop, work, or travel, they will look to see how businesses maintain cleanliness. Exceptionally good or poor cleanliness may have the power to sway consumer behavior and attitudes.
As we’ve seen in the last months, retailers are already using BrainOS-powered cleaning robots more often during daytime hours, showing their commitment and investment in cleaning to consumers. Now more than ever, businesses need to prove that they’re providing a safe and clean environment for customers and workers. Robots can help them deliver that next level of clean—a consistent, measurable clean that people can count on and trust.
Another application by Brain Corp is the autonomous delivery tug. Could you tell us more about what this is and the use cases for it?
The autonomous delivery tug, powered by BrainOS, enables autonomous delivery of stock carts and loose-pack inventory for any indoor point-to-point delivery needs, enhancing efficiency and productivity. The autonomous delivery tug eliminates inefficient back and forth material delivery and works seamlessly alongside human workers while safely navigating complex, dynamic environments such as retail stores, airports, warehouses, and factories.
A major ongoing challenge for retailers—one that has been exacerbated by the COVID-19 health crisis—is maintaining adequate stock levels in the face of soaring demand from consumers, particularly in grocery. Additionally, the process of moving inventory and goods from the back of a truck, to the stockroom, and then out to store shelves, is a laborious and time-consuming process requiring employees to haul heavy, stock-laden carts back and forth multiple times. The autonomous delivery tug aims to help retailers address these restocking challenges, taking the burden off store workers and providing safe and efficient point-to-point delivery of stock without the need for costly or complicated facility retrofitting.
The autonomous delivery application combines sophisticated AI technology with proven manufacturing equipment to create intelligent machines that can support workers by moving up to 1,000 pounds of stock at a time. Based on an in-field pilot program, the autonomous delivery tug will save retail employees 33 miles of back-and-forth travel per week, potentially increasing their productivity by 67%.
Is there anything else that you would like to share about Brain Corp?
Brain Corp powers the largest fleet of AMRs operating in dynamic public indoor spaces with over 10,000 floor care robots deployed or enabled worldwide. According to internal network data, AMRs powered by BrainOS are currently collectively providing over 10,000 hours of daily work, freeing up workers so they can focus on other high value tasks during this health crisis, such as disinfecting high-contact surfaces, re-stocking, or supporting customers.
In the long term, robots give businesses the flexibility to address labor challenges, absenteeism, rising costs, and more. From a societal standpoint, we believe robots will gain consumer favor as they’re seen more frequently operating in stores, hospitals, and health care facilities, or in warehouses providing essential support for workers.
We’re also excited about what the future holds for Brain Corp. Because BrainOS is a cloud-based platform that can essentially turn any mobile vehicle built by any manufacturer into an autonomous mobile robot, there are countless other applications for the technology beyond commercial floor cleaning, shelf scanning, and material delivery. Brain Corp is committed to continuously improving and building out our AI platform for powering advanced robotic equipment. We look forward to further exploring new markets and applications.
Thank you for the amazing interview. Readers who wish to learn more should visit Brain Corp.
Safety of Self-Driving Cars Improved With New Training Method
One of the most important tasks for a self-driving car when it comes to safety is tracking pedestrians, objects, and other vehicles or bicycles. In order to do this, self-driving cars rely on tracking systems. These systems could become even more effective with a new method developed by researchers at Carnegie Mellon University (CMU).
The new method unlocks far more autonomous driving data than was previously usable, including the road and traffic data that is crucial for training tracking systems. The more data there is, the more capable the self-driving car can become.
Himangi Mittal is a research intern who works alongside David Held, an assistant professor in CMU’s Robotics Institute.
“Our method is much more robust than previous methods because we can train on much larger datasets,” Mittal said.
Lidar and Scene Flow
Most of today’s autonomous vehicles rely on lidar as their main system for navigation. Lidar is a laser device that looks at what is surrounding the vehicle and generates 3D information out of it.
The 3D information comes in the form of a cloud of points, and the vehicle uses a technique called scene flow in order to process the data. Scene flow involves the speed and trajectory of each 3D point being calculated. So, whenever there are other vehicles, pedestrians, or moving objects, they are portrayed to the system as a group of points moving together.
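A crude version of this per-point computation can be sketched with nearest-neighbor matching between two lidar sweeps. Learned scene-flow models replace this brittle matching step with a trained network, so the function below is purely a toy illustration of what a flow estimate looks like.

```python
import numpy as np

def naive_scene_flow(points_t0, points_t1, dt=0.1):
    """Estimate a velocity vector for each 3D point by matching it to its
    nearest neighbor in the next sweep and dividing the displacement by
    the time step between sweeps."""
    # Pairwise distances between the two point clouds (N0 x N1).
    d = np.linalg.norm(points_t0[:, None, :] - points_t1[None, :, :], axis=2)
    nearest = d.argmin(axis=1)                  # index of closest point at t1
    displacement = points_t1[nearest] - points_t0
    return displacement / dt                    # velocity in m/s per point

# A small cluster sliding 0.05 m along x between two sweeps 0.1 s apart:
t0 = np.array([[0.0, 0.0, 0.0], [0.2, 0.0, 0.0], [0.0, 0.2, 0.0]])
t1 = t0 + np.array([0.05, 0.0, 0.0])
print(naive_scene_flow(t0, t1))  # each point moves at ~[0.5, 0, 0] m/s
```

Points that share nearly identical flow vectors, as in this toy cluster, are exactly what the vehicle interprets as a single moving object.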
Traditional methods for training these systems require labeled datasets — sensor data that has been annotated by hand to track the 3D points over time. Because manual labeling is expensive, very little of this data exists. To get around the shortage, simulated data is often used for scene flow training; it is less effective than real-world data, so a small amount of real-world data is typically added to improve the results.
Mittal and Held, along with Ph.D. student Brian Okorn, developed the new method by using unlabeled data in scene flow training. Unlabeled data is much easier to gather, requiring only that a lidar be mounted on top of a car as it drives around.
In order for this to work, the researchers had to find a way for the system to detect its own errors in scene flow. The new system tries to make predictions about where each 3D point will end up and how fast it is traveling, and it then measures the distance between the predicted location and the actual location of the point. This is what forms one type of error to be minimized.
After that process, the system then reverses and works backward from the predicted point location to map where the point originated. By measuring the distance between the predicted position and the origin point, the second type of error is formed from the resulting distance.
After detecting these errors, the system then works to correct them.
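The two error terms described above — the forward prediction error and the backward cycle-consistency error — can be written down in a simplified form. The functions below illustrate the idea only; they are not the researchers’ implementation, and in practice the flow vectors come from a neural network whose weights are adjusted to minimize both errors.

```python
import numpy as np

def forward_error(points_t0, flow, points_t1):
    """Error 1: move each point by its predicted flow, then measure the
    distance to the nearest point actually observed in the next sweep."""
    predicted = points_t0 + flow
    d = np.linalg.norm(predicted[:, None, :] - points_t1[None, :, :], axis=2)
    return d.min(axis=1).mean()

def cycle_error(points_t0, flow_fwd, flow_bwd):
    """Error 2: map each point forward, then back with the reverse flow,
    and measure how far it lands from where it started."""
    round_trip = points_t0 + flow_fwd + flow_bwd
    return np.linalg.norm(round_trip - points_t0, axis=1).mean()

# Points translating 1 m along x between sweeps:
t0 = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
true_flow = np.tile([1.0, 0.0, 0.0], (3, 1))
t1 = t0 + true_flow

print(forward_error(t0, true_flow, t1))         # 0.0 -- correct flow
print(forward_error(t0, np.zeros((3, 3)), t1))  # > 0 -- zero flow is wrong
print(cycle_error(t0, true_flow, -true_flow))   # 0.0 -- the cycle closes
```

Note that neither error term requires a human-provided label: both are computed entirely from the raw point clouds, which is what makes training on cheap unlabeled lidar data possible.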
“It turns out that to eliminate both of those errors, the system actually needs to learn to do the right thing, without ever being told what the right thing is,” Held said.
The results demonstrated scene flow accuracy at 25% when using a training set of synthetic data, and when it was improved with a small amount of real-world data, that number increased to 31%. The number improved even more to 46% when a large amount of unlabeled data was added to train the system.
Bob Leigh, Market Development for Autonomous Systems, Real-Time Innovations (RTI) – Interview Series
Bob Leigh is the Senior Market Development Director of Autonomous Systems at Real-Time Innovations (RTI).
RTI is the largest software framework provider for smart machines and real-world systems. The company’s RTI Connext® product enables intelligent architecture by sharing information in real time, making large applications work together as one.
With over 1,500 deployments, RTI software runs the largest power plants in North America, connects perception to control in vehicles, coordinates combat management on US Navy ships, drives a new generation of medical robotics, controls hyperloop and flying cars, and provides 24/7 medical intelligence for hospital patients and emergency victims.
Could you define what IIoT refers to, and which types of devices it is applicable to?
We define the Industrial Internet of Things (IIoT) as a network of devices, machines and computers. These components are connected into highly intelligent systems that combine industrial operations with advanced data analytics to transform business outcomes. Unlike connecting consumer devices, the IIoT involves large, complex, mission-critical systems. It is ushering in new infrastructure for the most critical societal systems such as the smart grid, public transportation, connected healthcare and autonomous vehicles.
With the IIoT, a standard hospital bed transforms into an intelligent, connected medical system that provides real-time patient data to medical providers, allowing them to draw insights and deliver a higher level of care. These systems must also scale to encompass many thousands or millions of interconnected points, distributed across many different networks and locations. The complexity of these systems and lack of connectivity is often a key obstacle for companies working in these markets. To address these challenges, RTI delivers the software framework that can be used to accelerate development, reduce risk and costs and deliver the resilience, security, performance and scalability necessary to build intelligent architecture, smart machines and real-world systems.
What was it that attracted you to working with IIoT?
I was drawn to work in the IIoT space because there is an opportunity to solve complex technology challenges and to facilitate innovation that will truly help people. From energy to healthcare, from automotive to defense, RTI’s technology enables the sharing of mission critical information in real time, making large applications work together as one. We’ve found that data-centric connectivity, like that provided by RTI, is critical to the success of these high performance, intelligent systems.
How would you describe what Real-Time Innovations (RTI) does in as few words as possible?
Real-Time Innovations (RTI) is the largest software framework provider for smart machines and real-world systems. The company’s RTI Connext® product enables intelligent architecture by sharing information in real time, making large applications work together as one, integrated system. The intelligent, distributed systems that RTI connects help improve medical care, make our roads safer, improve energy use, and protect our freedom. With over 1,500 deployments, RTI software runs the largest power plants in North America, connects perception to control in vehicles, coordinates combat management on US Navy ships, drives a new generation of medical robotics, controls hyperloop and flying cars, and provides 24/7 medical intelligence for hospital patients and emergency victims.
RTI is the leading vendor of products compliant with the Object Management Group® (OMG) Data Distribution Service™ (DDS) standard. RTI is privately held and headquartered in Sunnyvale, California with regional headquarters in Spain and Singapore.
RTI Connext Drive is the first complete connectivity solution for autonomous vehicle development. Could you share some details regarding this technology?
RTI Connext Drive is the first standards-based connectivity framework to handle the complex integration and data distribution challenges of sensor fusion applications in autonomous systems. From research to production, Connext Drive offers automakers and developers the software they need to operate in diverse real-time environments, interoperate with other systems within the vehicle, connect to off-vehicle systems and build in automotive-grade security. From edge to cloud and across remote environments, it distributes and manages real-time data to keep critical systems running. Connext Drive uses DDS, a standard I detail below, and delivers:
- Efficient High-Bandwidth Data Distribution. Communicate rapidly with throughput of over millions of messages per second using a data-centric databus, which allows data to flow when and where it’s needed: securely, at scale and with ultra-low latency.
- Enhanced Performance. With support for the latest Object Management Group® (OMG®) DDS Extensible Types standard, applications benefit from network bandwidth savings, enabling flexibility for multiple Quality of Service (QoS) strategies. An optimized Dynamic Data implementation delivers improved serialization performance.
- Full Redundancy. Any sensor, data source, algorithm, compute platform or even network can be easily duplicated to provide higher reliability. The data-centric design allows the system to resolve this redundancy naturally. Connext Drive supports shared memory, LAN, WAN and Internet transports.
- Broad support for embedded systems and platforms. Connext Drive is integrated with technology from many of the leading automotive technology providers. This interoperability provides automotive customers with the ability to use the software and platforms of choice.
- Safety Certification Pathway. This feature option meets the stringent requirements of ISO 26262 ASIL-D, reducing risk, time and project costs.
- Updated DDS Security. Connext Drive is compliant with the latest OMG DDS Security specification v1.1 and supports the latest OpenSSL v1.1.1. The latest updates to the RTI Security Plugins also support loading keys from an SSL engine to more easily integrate best practice key storage.
- Integration with Standardized Frameworks and Platforms. Through its standards-based architecture, Connext Drive eases integration between OEMs and suppliers, from research through production. It provides interoperability across programming languages, operating systems and CPU families, plus a Gateway Toolkit and adapters for integrating non-DDS components.
Could you share with our readers what DDS is exactly and what makes it so important?
Data Distribution Service™ (DDS) is a standard that aims to enable dependable, high-performance, interoperable, real-time, and scalable data exchanges. DDS is the only open standard for messaging that supports the unique needs of both enterprise and real-time systems. Its open interfaces and advanced integration capabilities slash costs across a system’s life cycle, from initial development and integration through ongoing maintenance and upgrades.
DDS is composed of two primary specifications – one for the application layer interfaces, and another that assures wire-level interoperability between vendor implementations. These layers not only ensure that a different vendor’s DDS implementation can be swapped in without impacting application code, but also that systems built using different implementations of DDS will interoperate.
This standards-based approach delivers enhanced performance and massive scalability while lowering risk. Connext Drive is the first – and only – software that can integrate DDS, ROS 2, AUTOSAR Classic and AUTOSAR Adaptive. This allows automotive companies to work with the standard or standards that best meet their needs at different points in the development cycle.
Could you detail in what capacity Baidu uses RTI technology in autonomous vehicles?
Baidu is developing solutions for autonomous valet parking, fully autonomous mini-buses and more, based on Apollo, a leading global autonomous vehicle technology platform. Baidu uses RTI’s Connext Drive as the connectivity framework for superior reliability — a critical factor in the development of autonomous driving. With Connext Drive, Baidu can guarantee the utilization of bandwidth with TCP + UDP, ensure flexibility through multiple QoS strategies and apply standards-based security and safety.
Can you share with us why building autonomous vehicle connectivity software is so challenging?
Autonomous vehicles are some of the most complex systems ever conceived and hold the promise of profoundly changing daily life. These unmanned machines require real-time, human-level decision-making capabilities with fail-proof protections for safety and security. Auto manufacturers also need to ensure their systems can operate in diverse real-time environments, scale and interoperate.
Autonomous vehicle development is challenging and risky because AVs are built from many subsystems and require full interoperability between components. To provide consistently optimal performance, data must flow correctly, reliably and with extremely low latency. Barriers to entry in this industry are high, with in-house connectivity solutions taking extensive time to develop and requiring deep technical expertise. Moreover, autonomous vehicle designs must last for years, requiring automakers to ensure that their systems not only address today’s connectivity challenges, but also anticipate future ones. They must also track constantly evolving technology and security requirements.
Reliable connectivity is essential to enable the next generation of vehicles and to achieve higher levels of autonomy.
Is there anything else that you would like to share about RTI or autonomous vehicles?
In the past few years, we’ve seen radically different timelines about when autonomous vehicles will become part of our everyday life. Through the first half of 2020, we’ve seen auto manufacturers and companies increase their investments in the underlying technologies that will be the critical foundation for these autonomous systems – machine learning algorithms, sensors and the underlying connectivity software. However, 2020 likely won’t be the year cars will reach full Level 4 or 5 autonomy, as we still have major hurdles to overcome in terms of safety regulations, satisfying security concerns and weather constraints.
This has been a fantastic interview; thank you for taking the time to explain all of these technologies. Readers who wish to learn more should visit the RTI website.