Autonomous Vehicles

Researchers Develop Autonomous Systems Capable of Sensing Changes in Shadows



Engineers at MIT have developed a new system that could significantly improve the safety of autonomous vehicles. The system senses small changes in shadows on the ground to determine whether a moving object is approaching from around a corner. 

One of the major goals for any company seeking to create autonomous vehicles is increased safety. Engineers are constantly working on making the vehicles better at avoiding collisions with other cars or pedestrians, especially those that are coming around a building’s corner. 

The new system could also be used on robots that navigate hospitals, delivering medication or supplies throughout the building while avoiding collisions with people. 

A paper will be presented next week at the International Conference on Intelligent Robots and Systems (IROS). It includes descriptions of the successful experiments conducted by the researchers, including an autonomous car maneuvering around a parking garage and stopping when approaching another vehicle.

The current standard sensor for this task is LiDAR, which can only detect visible objects; in the experiments, the new system beat LiDAR by more than half a second. According to the researchers, fractions of a second can make a huge difference for fast-moving autonomous vehicles.  

“For applications where robots are moving around environments with other moving objects or people, our method can give the robot an early warning that somebody is coming around the corner, so the vehicle can slow down, adapt its path, and prepare in advance to avoid a collision,” adds co-author Daniela Rus, director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science. “The big dream is to provide ‘X-ray vision’ of sorts to vehicles moving fast on the streets.”

The new autonomous system has only been tested indoors, where lighting is lower and robot speeds are slower, making shadows much easier for the system to sense and analyze. 

The paper was co-authored by Daniela Rus; first author Felix Naser, a former CSAIL researcher; Alexander Amini, a CSAIL graduate student; Igor Gilitschenski, a CSAIL postdoc; graduate Christina Liao; Guy Rosman of the Toyota Research Institute; and Sertac Karaman, associate professor of aeronautics and astronautics at MIT. 

ShadowCam System

Prior to the new developments, the researchers had already built a system called “ShadowCam,” which uses computer-vision techniques to identify and classify changes in shadows on the ground. Earlier versions of the system were developed by MIT professors William Freeman and Antonio Torralba, who are not co-authors on the IROS paper; that work was presented in 2017 and 2018. 

ShadowCam uses sequences of video frames from a camera targeting a specific area and detects changes in light intensity over time, which can indicate whether something is moving away or getting closer. It then analyzes that information and classifies each image as containing a stationary or a moving object, allowing the system to respond in the best possible way. 
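The core idea, classifying a ground patch as moving or stationary from changes in its light intensity over time, can be sketched roughly as follows. The patch statistics and the threshold value here are illustrative assumptions, not the actual parameters of ShadowCam.

```python
import numpy as np

def classify_patch(frames, threshold=2.0):
    """Classify a patch of pixels as 'dynamic' or 'static' from its
    mean intensity over time.

    frames: list of 2D numpy arrays (grayscale crops of the same
            ground patch, already registered to a common viewpoint).
    threshold: illustrative intensity-change threshold.
    """
    means = np.array([f.mean() for f in frames])
    # Frame-to-frame change in the patch's average brightness.
    deltas = np.abs(np.diff(means))
    return "dynamic" if deltas.max() > threshold else "static"

# A static patch: constant brightness plus small sensor noise.
rng = np.random.default_rng(0)
static = [np.full((32, 32), 100.0) + rng.normal(0, 0.2, (32, 32))
          for _ in range(10)]

# A dynamic patch: a shadow progressively darkens the patch.
dynamic = [np.full((32, 32), 100.0 - 4.0 * t) for t in range(10)]
```

In practice the classification would run over many patches per frame and feed into the robot's planner, but the principle is the same: a moving obstacle announces itself through a changing shadow before it becomes directly visible.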

ShadowCam was then adapted for use on autonomous vehicles. Originally, it relied on augmented-reality labels called “AprilTags,” which resemble QR codes, to focus on particular clusters of pixels and check them for shadows. However, this approach proved impractical in real-world scenarios. 

Because of this, the researchers created a new process that combines image registration with a visual-odometry technique. Image registration overlays multiple images in order to identify any variations between them. 

The visual-odometry technique the researchers use, called “Direct Sparse Odometry” (DSO), fills a role similar to that of the AprilTags. DSO plots the features of an environment on a 3D point cloud, and a computer-vision pipeline then selects a region of interest, such as the floor. 

Using DSO and image registration, ShadowCam overlays all of the images taken from the same viewpoint of the robot. Whether the robot is moving or standing still, it can then zero in on the exact same patch of pixels where a shadow is located. 
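The overlay step can be sketched as follows, using known per-frame pixel offsets as a stand-in for the camera poses that DSO would estimate; this is a toy illustration, not the actual pipeline.

```python
import numpy as np

def register_to_first(frames, offsets):
    """Shift each frame back by its known camera offset so every frame
    shows the scene from the first frame's viewpoint.

    frames:  list of 2D numpy arrays.
    offsets: per-frame (row, col) pixel shifts of the camera relative
             to the first frame (in a real system, these would come
             from visual odometry such as DSO).
    """
    registered = []
    for frame, (dr, dc) in zip(frames, offsets):
        # Undo the camera motion by rolling the image back.
        registered.append(np.roll(frame, shift=(-dr, -dc), axis=(0, 1)))
    return registered
```

Once the frames are registered, the same patch of pixels can be compared across time even while the robot moves, which is what makes shadow-change classification possible from a moving platform.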

What’s Next

The researchers will continue working on the system, focusing on the differences between indoor and outdoor lighting conditions. Ultimately, the team wants to speed the system up and further automate the process. 




Andrew Stein, Software Engineer Waymo – Interview Series




Andrew Stein is a Software Engineer who leads the perception team for Waymo Via, Waymo’s autonomous delivery efforts. Waymo is an autonomous driving technology development company that is a subsidiary of Alphabet Inc., the parent company of Google.

What initially attracted you to AI and robotics?

I always liked making things that “did something” ever since I was very young. Arts and crafts could be fun, but my biggest passion was working on creations that were also functional in some way. My favorite parts of Mister Rogers’ Neighborhood were the footage of conveyor belts and actuators in automated factories, seeing bottles and other products filled or assembled, labeled, and transported. I was a huge fan of Legos and other building toys. Then, thanks to some success in Computer Aided Design (CAD) competitions through the Technology Student Association in middle and high school, I ended up landing an after-school job doing CAD for a tiny startup company, Clipper Manufacturing. There, I was designing factory layouts for an enormous robotic sorter and associated conveyor equipment for laundering and organizing hangered uniforms for the retail garment industry. From there, it was off to Georgia Tech to study electrical engineering, where I participated in the IEEE Robotics Club and took some classes in Computer Vision. Those eventually led me to the Robotics Institute at Carnegie Mellon University for my PhD. Many of my fellow graduate students from CMU have been close colleagues ever since, both at Anki and now at Waymo.

You previously worked as a lead engineer at Anki, a robotics startup. What are some of the projects that you had the opportunity to work on at Anki?

I was the first full-time hire on the Cozmo project at Anki, where I had the privilege of starting the code repository from scratch and saw the product through to over one million cute, lifelike robots shipped into people’s homes. That work transitioned into our next product, Vector, which was another, more advanced and self-contained version of Cozmo. I got to work on many parts of those products, but was primarily responsible for computer vision for face detection, face recognition, 3D pose estimation, localization, and other aspects of perception. I also ported TensorFlow Lite to run on Vector’s embedded OS and helped deploy deep learning models to run onboard the robot for hand and person detection.

I also built Cozmo’s and Vector’s eye rendering systems, which gave me the chance to work particularly closely with much of Anki’s very talented and creative animation team, which was also a lot of fun.

In 2019, Waymo hired you and twelve other robotics experts from Anki to adapt its self-driving technology to other platforms, including commercial trucks. What was your initial reaction to the prospect of working at Waymo?

I knew many current and past engineers at Waymo and certainly was aware of the company’s reputation as a leader in the field of autonomous vehicles. I very much enjoyed the creativity of working on toys and educational products for kids at Anki, but I was also excited to join a larger company working in such an impactful space for society, to see how software development and safety are done at this organizational scale and level of technical complexity.

Can you discuss what a day working at Waymo is like for you?

Most of my role is currently focused on guiding and growing my team as we identify and solve trucking-specific challenges in close collaboration with other engineering teams at Waymo. That means my days are spent meeting with my team, other technical leads, and product and program managers as we plan for technical and organizational approaches to develop and deploy our self-driving system, called the Waymo Driver, and extend its capabilities to our growing fleet of trucks. Besides that, given that we are actively hiring, I also spend significant time interviewing candidates.

What are some of the unique computer vision and AI challenges that are faced with autonomous trucks compared to autonomous vehicles?

While we utilize the same core technology stack across all of our vehicles, there are some new considerations specific to trucking that we have to take into account. First and foremost, the domain is different: compared to passenger cars, trucks spend a lot more time on freeways, which are higher-speed environments. Due to a lot more mass, trucks are slower to accelerate and brake than cars, which means the Waymo Driver needs to perceive things from very far away. Furthermore, freeway construction uses different markers and signage and can even involve median crossovers to the “wrong” side of the road; there are freeway-specific laws like moving over for vehicles stopped on shoulders; and there can be many lanes of jammed traffic to navigate. Having a potentially larger blind spot caused by a trailer is another challenge we need to overcome.

Waymo recently began testing its fleet of heavy-duty trucks in Texas with trained drivers on board. At this point in the game, what are some of the things that Waymo hopes to learn from these tests?

Our trucks test in the areas in which we operate (AZ / CA / TX / NM) to gain meaningful experience and data in all different types of situations we might encounter driving on the freeway. This process exercises our software and hardware, allowing us to learn how we can continue to improve and adapt our Waymo Driver for the trucking domain.

Looking at Texas specifically: Dallas and Houston are known to be part of the biggest freight hubs in the US. Operating in that environment, we can test our Waymo Driver on highly dense highways and shipper lanes, further understand how other truck and passenger car drivers behave on these routes, and continue to refine the way our Waymo Driver reacts and responds in these busy driving regions. Additionally, it also enables us to test in a place with unique weather conditions that can help us drive our capabilities in that area forward.

Can you discuss the Waymo Open Dataset which includes both sensor data and labeled data, and the benefits to Waymo for sharing this valuable dataset?

At Waymo, we’re tackling some of the hardest problems that exist in machine learning. To aid the research community in making advancements in machine perception and self-driving technology, we’ve released the Waymo Open Dataset, which is one of the largest and most diverse publicly available fully self-driving datasets. Available to researchers at no cost, the dataset consists of 1,950 segments of high-resolution sensor data and covers a wide variety of environments, from dense urban centers to suburban landscapes, as well as data collected during day and night, at dawn and dusk, in sunshine and rain. In March 2020, we also launched the Waymo Open Dataset Challenges to provide the research community a way to test their expertise and see what others are doing.

In your personal opinion, how long will it be until the industry achieves true level 5 autonomy?

We have been working on this for over ten years now and so we have the benefit of that experience to know that this technology will come to the world step by step. Self-driving technology is so complex and we’ve gotten to where we are today because of advances in so many fields from sensing in hardware to machine learning. That’s why we’ve been taking a gradual approach to introduce this technology to the world. We believe it’s the safest and most responsible way to go, and we’ve also heard from our riders and partners that they appreciate this thoughtful and measured approach we’re taking to safely deploy this technology in their communities.

Thank you for the great interview. Readers who wish to learn more should visit Waymo Via.



New Software Increases Safety of Autonomous Vehicles in Traffic Situations



The Technical University of Munich (TUM) has developed new software that improves the safety of autonomous vehicles in road traffic. The software makes predictions about the surrounding traffic situation extremely fast, once every millisecond. 

This software will be useful when, for example, an autonomous vehicle encounters another vehicle and pedestrians simultaneously. Such a scenario can seem unpredictable, and even experienced human drivers must pay attention to many different factors at once. 

The research was published in Nature Machine Intelligence, titled “Using online verification to prevent autonomous vehicles from causing accidents.” 

Ensuring Safe Software

Matthias Althoff is Professor of Cyber-Physical Systems at TUM.

“These kinds of situations present an enormous challenge for autonomous vehicles controlled by computer programs,” Althoff says. “But autonomous driving will only gain acceptance of the general public if you can ensure that the vehicles will not endanger other road users — no matter how confusing the traffic situation.”

One of the main challenges surrounding the development of autonomous vehicle software is making sure it will not cause accidents. 

The software, developed by a team including Althoff at the Munich School of Robotics and Machine Intelligence at TUM, continuously analyzes and predicts traffic events while the vehicle is on the road. It records and evaluates vehicle sensor data every millisecond and calculates all possible future movements for every traffic participant. Provided the participants follow road traffic regulations, the system can predict three to six seconds into the future.

Based on those predicted seconds, the system determines possible movements for the autonomous vehicle while simultaneously calculating emergency maneuvers for dangerous situations. Because of this emergency aspect of the software, the vehicle only follows routes along which no collision is foreseeable and a safe emergency maneuver remains available. 
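A drastically simplified, one-dimensional sketch of this kind of online verification is shown below. The interval-based reachable sets, the half-second time step, and the safety margin are illustrative assumptions, not the actual set-based reachability analysis used in the paper.

```python
def reachable_interval(pos, v_min, v_max, t):
    """Interval of positions another road user could occupy after t
    seconds, assuming its speed stays within [v_min, v_max] m/s."""
    return (pos + v_min * t, pos + v_max * t)

def plan_is_safe(ego_positions, other_pos, v_min, v_max, dt=0.5, margin=2.0):
    """Check an ego plan (one position per time step) against the other
    road user's reachable intervals. The plan counts as safe only if
    every step keeps at least `margin` metres of clearance; otherwise
    the vehicle would fall back to an emergency maneuver."""
    for step, ego in enumerate(ego_positions):
        lo, hi = reachable_interval(other_pos, v_min, v_max, step * dt)
        if lo - margin <= ego <= hi + margin:
            return False  # possible collision: reject this plan
    return True

# A cautious plan stays behind a vehicle 50 m ahead (which may drive
# forward at 0 to 5 m/s); an aggressive plan drives into the region
# the other vehicle could reach.
cautious = [0, 5, 10, 15, 20, 25]
aggressive = [0, 10, 20, 30, 40, 50]
```

The real system performs set-based reachability in two dimensions for every traffic participant at once, but the principle is the same: any plan whose positions could intersect a reachable set is rejected before the vehicle commits to it.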

Once Seen as Non-Practical

The reason it took so long to develop a system like this one is that such an approach was traditionally seen as time-consuming and less practical than other solutions. However, the team of researchers has now proven its effectiveness and shown how to implement it. 

The calculations rely on simplified dynamic models, while reachability analysis helps calculate the future traffic movements. Because calculating all road users and their characteristics simultaneously takes too long, the team used simplified models to speed up the process. These models are mathematically sound yet allow a greater range of motion than the real road users, which lets a large number of possible combinations be explored safely.

The team then developed a virtual model based on real traffic data that was collected during test drives with an autonomous vehicle, which provided a real-life traffic environment to test the system on. 

“Using the simulations, we were able to establish that the safety module does not lead to any loss of performance in terms of driving behavior, the predictive calculations are correct, accidents are prevented, and in emergency situations the vehicle is demonstrably brought to a safe stop,” Althoff says. 

The new software is just the latest example of advancements taking place within the field of autonomous vehicles, and it is further proof of the possible effectiveness of what were once seen as non-practical solutions.




AI-Controlled Jet Fighter Defeats Human Pilot In Simulated Combat




An event pitting an AI-controlled fighter plane against a human pilot in a virtual dogfight was recently held, and the AI managed to defeat its human opponent, adding another example of AIs outclassing humans at even extraordinarily complex tasks.

As reported by DefenseOne, the recent virtual dogfight was orchestrated by the US military as part of an ongoing effort to demonstrate the capability of autonomous agents to defeat aircraft in dogfights, a project called the AlphaDogfight Trials. The Defense Advanced Research Projects Agency (DARPA) chose eight AI teams developed by various defense contractors and pitted them against each other in virtual dogfights. The winner of the tournament was an AI developed by Heron Systems; afterward, it was pitted against a human pilot who wore a VR helmet and sat in a flight simulator. The AI reportedly won all five rounds it played.

The AI developed by Heron Systems was a deep reinforcement learning system. Deep reinforcement learning allows an AI agent to experiment in an environment again and again, learning from trial and error. Lockheed Martin’s AI was the runner-up in the competition, and it also used deep reinforcement learning. Lockheed Martin engineers and directors explained that developing algorithms that perform well in air combat is a much different task from simply designing an algorithm that can fly and maintain a particular orientation and altitude. The algorithms must come to understand not only that certain actions carry penalties, but that not all penalties are equally weighted: some actions, such as crashing, have far more severe consequences than others. This is done by assigning a weight to every possible action and then adjusting those weights based on the experiences that the agent has.
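To illustrate the unequal weighting the engineers describe, here is a hypothetical per-step reward function; the event names and weight values are invented for illustration and are not Heron Systems’ or Lockheed Martin’s actual reward design.

```python
# Hypothetical per-step reward shaping for a dogfighting agent. The
# point is that penalties are weighted very unevenly: crashing is
# catastrophic, while briefly losing a firing angle is only mildly bad.
WEIGHTS = {
    "crashed": -1000.0,        # severe, episode-ending penalty
    "was_hit": -100.0,
    "lost_firing_angle": -1.0,
    "on_target": +5.0,         # a reward rather than a penalty
}

def step_reward(events):
    """Sum the weighted contributions of the events that occurred
    during one simulation step."""
    return sum(WEIGHTS[e] for e in events)
```

During training, a reinforcement learning agent maximizing the sum of these rewards over billions of simulated engagements learns to treat a risk of crashing as far worse than temporarily conceding position.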

Heron Systems said that they trained their model by putting it through over 4 billion simulations, and that the model had acquired around 12 years of experience as a result. However, the AI was not permitted to learn from its experiences during the combat trials themselves. It’s unclear how the results would have changed if the model had been allowed to learn from the contest rounds, or if the contest had gone on longer: the human pilot adapted to the AI’s tactics after a few rounds and was lasting much longer against it by the end, but by that time it was just a little too late.

This is actually the second time that an AI has beaten a human in a simulated dogfight. In 2016, an AI system defeated a fighter jet instructor. The recent DARPA simulation was more robust than the 2016 trial, due to the fact that numerous AIs were pitted against each other to find the best one before it took on the human pilot.

The director of DARPA’s Strategic Technology Office, Timothy Grayson, said the trial aims to better understand how machines and humans interact and to build better human-machine teams. As Grayson put it:

“I think what we’re seeing today is the beginning of something I’m going to call human-machine symbiosis… Let’s think about the human sitting in the cockpit, being flown by one of these AI algorithms as truly being one weapon system, where the human is focusing on what the human does best [like higher-order strategic thinking] and the AI is doing what the AI does best.”
