
Waymo’s Self-Driving Technology Gets Smarter, Recognizes Billions of Objects Thanks To Content Search


The autonomous vehicles developed by Waymo use computer vision techniques and artificial intelligence to perceive the surrounding environment and make real-time decisions about how the vehicle should react and move. When objects are perceived by the vehicle's cameras and sensors, they are matched against a large database compiled by Alphabet in order to be recognized.

Massive datasets are of great importance to the training of autonomous vehicles, as they enable the AI within the vehicles to improve its performance. However, engineers need some way of efficiently matching items within the dataset to queries so that they can investigate how the AI performs on specific types of images. To solve this problem, as VentureBeat reports, Waymo recently developed a tool dubbed “Content Search”, which functions similarly to Google Image Search and Google Photos. These systems match queries with the semantic content of images, generating representations of objects that make image retrieval based on natural language queries easier.

Before the advent of Content Search, if Waymo’s researchers wanted to retrieve certain samples from the logs, they had to describe the object using heuristics. Waymo’s logs had to be searched using commands that looked for objects based on rules, which meant running searches for objects that were “under X height” or objects that “traveled at Y miles per hour”. The results of these rules-based searches could often be quite broad, and researchers would then have needed to comb through the returned results manually.
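To make the contrast concrete, here is a minimal Python sketch of what such a rules-based log filter might look like. The log format and field names (height_m, speed_mph) are illustrative assumptions, not Waymo's actual API.

```python
# Hypothetical sketch of the kind of rules-based log query described above.
# The log format and field names are assumptions made for illustration.

def rules_based_search(log_objects, max_height_m=2.0, max_speed_mph=5.0):
    """Return logged objects matching simple hand-written heuristics."""
    return [
        obj for obj in log_objects
        if obj["height_m"] < max_height_m and obj["speed_mph"] < max_speed_mph
    ]

# Such filters tend to return broad result sets that still require manual review.
```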

Content Search solves this problem by creating catalogs of data and conducting similarity searches across those catalogs in order to find the most similar categories when presented with an object. If Content Search is presented with a truck or a tree, it will return other trucks or trees that Waymo’s autonomous vehicles have encountered. As a Waymo vehicle drives around, it records images of the objects around it and stores them as embeddings, i.e. mathematical representations. This means the tool can compare object categories and rank responses by how similar the stored object images are to the provided object. This is similar to how the embedding similarity matching service operated by Google works.
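As a rough illustration of this kind of embedding similarity search, the following Python sketch ranks a catalog of stored embeddings by cosine similarity to a query embedding. The 128-dimensional random vectors stand in for learned image embeddings; neither the dimensionality nor the interface reflects Waymo's actual system.

```python
import numpy as np

def rank_by_similarity(query_embedding, catalog):
    """Rank stored object embeddings by cosine similarity to a query embedding.

    catalog: list of (object_id, embedding) pairs, embeddings as 1-D numpy arrays.
    Returns (object_id, score) pairs sorted from most to least similar.
    """
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    scored = [(object_id, cosine(query_embedding, emb)) for object_id, emb in catalog]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Toy usage with random vectors standing in for learned image embeddings.
rng = np.random.default_rng(0)
catalog = [(f"object_{i}", rng.normal(size=128)) for i in range(1000)]
query = rng.normal(size=128)
print(rank_by_similarity(query, catalog)[:5])  # five most similar stored objects
```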

The objects that Waymo’s vehicles encounter come in all shapes and sizes, but they all need to be distilled down into their essential components and categorized in order for Content Search to work. To make this possible, Waymo makes use of multiple AI models that are trained on a wide variety of objects. The various models learn to recognize different kinds of objects, and they are supported by Content Search, which enables the models to determine whether items belonging to a specific category appear within a given image. An additional optical character recognition model is used alongside the main models, allowing the Waymo vehicles to attach extra identifying information to objects in images based on any text found in the image. For example, a truck equipped with signage would have the text of the sign included in its Content Search description.
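A minimal sketch of how OCR output might be folded into a searchable object description is shown below. The function and its fields are hypothetical, intended only to illustrate the idea of attaching recognized text to a category label.

```python
def build_search_description(category, ocr_texts):
    """Combine a detected object's category with any text an OCR model read off it.

    category: label from the object-recognition models (e.g. "truck").
    ocr_texts: list of strings returned by the OCR model (assumed interface).
    """
    return {"category": category, "text": " ".join(ocr_texts)}

# A truck carrying signage would then be searchable by the sign's text as well:
print(build_search_description("truck", ["ACME Plumbing", "24h service"]))
```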

Thanks to the above models working in concert, Waymo’s researchers and engineers can search the image data logs for very specific objects, such as particular tree species or makes of car.

According to Waymo, as quoted by VentureBeat:

“With Content Search, we’re able to automatically annotate … objects in our driving history which in turn has exponentially increased the speed and quality of data we send for labeling. The ability to accelerate labeling has contributed to many improvements across our system, from detecting school buses with children about to step onto the sidewalk or people riding electric scooters to a cat or a dog crossing a street. As Waymo expands to more cities, we’ll continue to encounter new objects and scenarios.”

This isn’t the first time that Waymo has used multiple machine learning models to enhance the reliability and accuracy of its vehicles. Waymo has collaborated with Alphabet/Google in the past, helping develop an AI technique alongside DeepMind. The AI system takes inspiration from evolutionary biology: to begin with, a variety of machine learning models are created, and after they are trained, the models that underperformed are culled and replaced with offspring models. This technique reportedly managed to reduce false positives dramatically while also reducing the required computational resources and training time.
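A toy sketch of this train-cull-replace loop is shown below. Here the "models" are just hyperparameter dictionaries and the mutation rule is invented for illustration; this is not the actual Waymo/DeepMind population-based training code.

```python
import random

def mutate(parent):
    """Create an offspring by perturbing a parent's hyperparameters (toy example)."""
    child = dict(parent)
    child["learning_rate"] = parent["learning_rate"] * random.uniform(0.5, 2.0)
    return child

def evolve(population, train, evaluate, generations=5, survival_rate=0.5):
    """Train a population of models, cull the underperformers, and breed replacements."""
    for _ in range(generations):
        for model in population:
            train(model)
        population.sort(key=evaluate, reverse=True)  # best first
        keep = max(1, int(len(population) * survival_rate))
        survivors = population[:keep]
        offspring = [mutate(random.choice(survivors)) for _ in range(len(population) - keep)]
        population = survivors + offspring
    return population

# Toy usage: configurations scored by a dummy objective instead of real training.
pop = [{"learning_rate": random.uniform(1e-4, 1e-1)} for _ in range(8)]
best = evolve(pop, train=lambda m: None, evaluate=lambda m: -abs(m["learning_rate"] - 0.01))
print(best[0])  # best survivor from the final generation
```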



Dr. Leilei Shinohara, Vice President of R&D at RoboSense – Interview Series


Dr. Leilei Shinohara is Vice President of R&D at RoboSense. With more than a decade of experience developing LiDAR systems, Dr. Shinohara is one of the most accomplished experts in this field. Prior to joining RoboSense, Dr. Shinohara worked at Valeo as the Technical Lead for the world’s first automotive-grade LiDAR, SCALA®. He was responsible for multiple programs, including automotive LiDAR and sensor fusion projects. Dr. Shinohara managed an international sensor product development team covering systems, software, hardware, mechanics, testing, validation, and functional safety to build the first automotive-grade LiDAR product.

Prior to joining RoboSense as Vice President of R&D, you had more than a decade of experience developing LiDAR, including work on Valeo’s SCALA® LiDAR project. What was it that attracted you to joining RoboSense?

RoboSense is Asia’s No. 1 LiDAR company, with amazing development speed.

Prior to joining RoboSense, I was impressed by the company’s innovation capabilities and technical acumen. RoboSense aims to be the top smart LiDAR sensor provider for the automotive market, which involves not only the LiDAR hardware but also the AI perception algorithms. This goal fits well with my vision for the future smart sensor approach. At CES 2019, RoboSense exhibited its latest MEMS solid-state LiDAR, which has superior performance to its peer products. At CES 2020, RoboSense made huge progress and announced that the solid-state LiDAR RS-LiDAR-M1 is ready for sale at a price of $1,898.

“At CES 2019, RoboSense exhibited its latest MEMS solid-state LiDAR, which has superior performance to its peer products. At CES 2020, RoboSense made huge progress and announced that the solid-state LiDAR RS-LiDAR-M1 is ready for sale at a price of $1,898.”

With RoboSense’s leading technology and my previous experience in the automotive industry, I am confident that together, we can greatly accelerate the development of automotive-grade LiDAR products that can be mass-produced to make highly automated driving a reality.

 

It’s important to understand the benefits of LiDAR technology for autonomous vehicles versus regular camera systems. Could you walk us through some of these benefits?

Cameras and radar have their limitations. For example, cameras don’t work well in poor ambient light conditions, and radar has limitations detecting stationary obstacles. Compared to cameras, LiDAR’s biggest advantage lies in higher accuracy and precision. It is not affected by ambient light conditions, such as night, bright sunlight, or the oncoming headlights of other cars, and it is able to work in various complex traffic conditions.

Recently there has been quite a lot of news about Tesla Autopilot accidents. As we know, Tesla’s Autopilot system relies only on cameras and radar. Those accidents also prove that LiDAR is critical to guarantee safety and compensate for the weaknesses that currently occur with conventional sensors.

“Those accidents also prove that LiDAR is critical to guarantee safety and compensate for the weaknesses that currently occur with conventional sensors.”

Both Audi’s A8 (a Level 3 mass-produced autonomous vehicle) and Waymo One (an autonomous ride-hailing service) use LiDAR, which is an important industry indicator. Level 3 autonomous passenger vehicles using LiDAR will gradually become the industry standard.

 

One of the common complaints that we hear about LiDAR is that it’s too expensive for the bulk of consumer vehicles. Do you feel that the price will eventually drop to make it more competitive?

As we all know, high cost is one of the major barriers keeping traditional LiDAR systems from mass production, and it is an inevitable trend that LiDAR prices will eventually drop to meet consumer autonomous vehicles’ needs. Currently, our MEMS LiDAR, which uses a MEMS micromirror to steer the laser beam for scanning, can feasibly be made small and at a lower cost, which makes it more competitive against overpriced mechanical LiDAR.

“Currently, our MEMS LiDAR, which uses a MEMS micromirror to steer the laser beam for scanning, can feasibly be made small and at a lower cost, which makes it more competitive against overpriced mechanical LiDAR.”

The RoboSense MEMS-based LiDAR M1 uses 905 nm lasers that are low-cost, automotive grade, and compact. The number of parts has been reduced from hundreds to dozens in comparison with traditional mechanical LiDARs, greatly reducing cost and shortening production time, achieving a breakthrough in manufacturability. The coin-sized optical module processes the results of the optical-mechanical system to meet autonomous driving performance and mass-production requirements.

The M1 prototype, with a 200 m detection range and the equivalent of 125 layers, now sells for $1,898, in comparison with conventional 128-layer mechanical LiDARs, which cost around ten thousand dollars. Furthermore, when we move to volume mass production, the sensor cost can drop to around $200.

 

In December 2019, RoboSense announced the launch of a complete LiDAR perception solution for Robo-Taxis, the RS-Fusion-P5. What is this solution?

“The RS-Fusion-P5 is equipped with RoboSense’s flagship mechanical LiDAR, the RS-Ruby, and four short-range blind-spot LiDARs, the RS-Bpearl. This multi-LiDAR fusion perception solution was developed to further accelerate the development of Robo-Taxis.”

The RS-Fusion-P5 has excellent perception capabilities. It is able to reach a 200 m detection range on a 10% reflectivity target, with up to 0.1° high-precision resolution, full coverage, and zero blind spots in the sensing zone. In addition, through its advanced AI perception algorithms, multi-sensor fusion, and synchronization interfaces, vehicles are able to identify surrounding obstacles and position themselves easily and precisely, empowering Level 4 and above autonomous vehicles with full-stack perception capabilities.

The four embedded RS-BPearl units form hemispherical FOV coverage of 90° × 360° (or 180° × 180°). This not only precisely identifies objects around the vehicle body, such as pets, children, and roadbeds, as well as other details of the near-field ground area, but also detects actual height information in particular scenarios such as bridge tunnels and culverts, further supporting autonomous vehicles in driving decision making and greatly improving car safety.

 

The RS-Fusion-P5 has zero blind spots in the sensing zone. How is this achieved?

To cover the blind-spot zone, four RS-BPearl units are integrated on the four sides of the vehicle.

The BPearl is a mechanical-type LiDAR based on the same platform as the 16/32/Ruby LiDARs, but specially designed for blind-spot area detection.


 

 

RoboSense’s LiDAR production line recently obtained the IATF 16949 Letter of Conformity. This is a huge milestone for the company. Can you explain the importance of this letter and what it means for the company?

IATF 16949 is the most widely used global quality management standard for the automotive industry, and it emphasizes various product reliability metrics. RoboSense has obtained the IATF 16949 certificate in the automotive field, which now fully qualifies it to supply automotive customers. It has also accelerated partnerships for automotive-grade LiDAR serial production with major OEMs and Tier 1s. Moreover, it represents global industry experts’ recognition of RoboSense’s product design, development, and production processes, and indicates that RoboSense has reached a new milestone of complete readiness for serial mass production of automotive LiDARs, including the latest solid-state smart LiDAR, the RS-LiDAR-M1.

 

RoboSense won this year’s CES 2020 Innovation Award for the first MEMS-based smart LiDAR sensor, the RS-LiDAR-M1. What sets it apart from competing solutions?

Since opening to partners in 2017, the Smart LiDAR Sensor’s built-in perception algorithm, the RS-LiDAR-Algorithm, has held a leading position in the automotive LiDAR industry. The RoboSense RS-LiDAR-M1 Smart LiDAR is the world’s first and smallest MEMS-based smart LiDAR sensor incorporating LiDAR sensors, AI algorithms, and IC chipsets. It transforms overpriced traditional LiDAR systems, which act solely as information collectors, into full data analysis and comprehension systems, providing rich and reliable 3D point cloud data and structured, semantic environmental perception results in real time for faster autonomous vehicle decision-making than ever before. It fully supports Level 3 to Level 5 advanced automated driving with the highest level of ASIL-D-relevant perception safety, which strongly distinguishes us from other LiDAR companies.

“The RoboSense RS-LiDAR-M1 Smart LiDAR is the world’s first and smallest MEMS-based smart LiDAR sensor.”

In addition, the RS-LiDAR-M1, based on solid-state MEMS technology, meets automotive requirements. The RS-LiDAR-M1 has a field of view of 120° × 25°, the largest field of view among released MEMS solid-state LiDAR products worldwide. RoboSense uses 905 nm lasers, which are low-cost, automotive grade, and small, instead of expensive 1550 nm lasers.

 

How does the LiDAR industry in China compare to North America?

The LiDAR industry in China started later than in North America, but it has quickly become one of the fastest-growing markets in autonomous driving. In 2018, RoboSense won a strategic investment of over $45 million USD from Alibaba Cainiao Network, SAIC, and BAIC, setting the record for the largest single financing round in China’s LiDAR industry. Along with this strategic investment, and powered by RoboSense’s MEMS solid-state LiDAR M1, Alibaba Cainiao announced an unmanned logistics vehicle, which accelerates LiDAR adoption in the logistics market. Meanwhile, Robo-Taxi applications have also been speeding up the LiDAR market in China since last year.

In conclusion, the current market size in China is smaller than in the US, but I also see fast growth in autonomous driving, MaaS, logistics, and robotics applications.

 

When do you believe that we will see fully operational Level 5 autonomous vehicles on the road?

I think it will still take a long time before we see fully automated (L5) vehicles on the road. There will be step-by-step growth in autonomous vehicles. There are already vehicles equipped with L3 systems, and some of our partners and customers are developing L4 systems with a potential start of production within 5 years. But for fully automated L5 vehicles, the biggest concerns are always safety and public acceptance. If companies are not able to prove that fully automated vehicles are safer than human drivers, it will be difficult for these vehicles to become popular. Currently, I do see the industry moving in this direction step by step, but I don’t think we will have fully automated vehicles within 10 years.

 

Is there anything else that you would like to share about RoboSense?

RoboSense has received numerous awards, including the CES 2020 and 2019 Innovation Awards, the 2019 AutoSens Award, and the 2019 Stevie Gold Award. Our partners include the world’s major autonomous driving technology companies, OEMs, and Tier 1s, including the leading Chinese automaker FAW (First Automobile Works), which will use the RoboSense RS-LiDAR-M1 in its proprietary next-generation autonomous driving system.

RoboSense will focus on developing the solid-state M1 product into automotive-grade mass production as its first priority. We are developing not only the hardware but also the software, as a comprehensive smart sensor system. The delivery of our automotive-grade MEMS LiDAR in 2020 is one of our biggest milestones.

In addition, safety is the biggest challenge we will tackle. To ensure safety, fusion of different sensors is needed. Furthermore, AD-friendly infrastructure, such as an intelligent vehicle cooperative infrastructure system (IVICS), also supports autonomous driving. Therefore, the development of short-range Blind Spot Detection (BSD) LiDAR, multi-sensor fusion projects, and IVICS projects to provide high-precision perception systems is also our focus in 2020.

Thank you for this fascinating interview. Anyone who wishes to learn more should visit RoboSense.


Swarm Robots Help Self-Driving Cars Avoid Collisions


The top priority for companies developing self-driving vehicles is that they can safely navigate and avoid crashing or causing traffic jams. Northwestern University has brought that reality one step closer with the development of the first decentralized algorithm with a collision-free, deadlock-free guarantee. 

The researchers tested the algorithm in a simulation of 1,024 robots, as well as on a swarm of 100 real robots in the lab. The robots were able to reliably, safely, and efficiently converge to form a predetermined shape in less than a minute.

Northwestern’s Michael Rubenstein led the study. He is the Lisa Wissner-Slivka and Benjamin Slivka Professor in Computer Science in Northwestern’s McCormick School of Engineering. 

“If you have many autonomous vehicles on the road, you don’t want them to collide with one another or get stuck in a deadlock,” said Rubenstein. “By understanding how to control our swarm robots to form shapes, we can understand how to control fleets of autonomous vehicles as they interact with each other.”

The paper is set to be published in the journal IEEE Transactions on Robotics later this month. 

There is an advantage to using a swarm of small robots compared to one large robot or a swarm led by a single robot: the lack of centralized control. Centralized control can become a major point of failure, and Rubenstein’s decentralized algorithm acts as a fail-safe.

“If the system is centralized and a robot stops working, then the entire system fails,” Rubenstein said. “In a decentralized system, there is no leader telling all the other robots what to do. Each robot makes its own decisions. If one robot fails in a swarm, the swarm can still accomplish the task.”

In order to avoid collisions and jams, the robots coordinate with each other. The ground beneath the robots acts as a grid for the algorithm, and each robot is aware of its position on the grid due to technology similar to GPS. 

Before moving from one spot to another, each robot uses its sensors to communicate with the others. By doing this, it is able to determine whether other spots on the grid are vacant or occupied.

“The robots refuse to move to a spot until that spot is free and until they know that no other robots are moving to that same spot,” Rubenstein said. “They are careful and reserve a space ahead of time.”

The robots are able to communicate with each other in order to form a shape, and this is possible due to the near-sightedness of the robots. 

“Each robot can only sense three or four of its closest neighbors,” Rubenstein explained. “They can’t see across the whole swarm, which makes it easier to scale the system. The robots interact locally to make decisions without global information.”
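The reserve-then-move rule described above can be sketched in a few lines of Python. The shared occupied and reserved sets stand in for the local sensing and communication between neighbors; the real algorithm is fully decentralized, so this is only an illustration of the logic, not the published implementation.

```python
def try_step(robot_pos, target_pos, occupied, reserved):
    """Attempt one grid step using the reserve-then-move rule.

    occupied: set of cells currently holding a robot.
    reserved: set of cells already claimed by robots about to move.
    Returns the robot's new position (unchanged if it has to wait).
    """
    if target_pos in occupied or target_pos in reserved:
        return robot_pos             # wait until the cell is free and unclaimed
    reserved.add(target_pos)         # claim the cell before moving into it
    occupied.discard(robot_pos)
    occupied.add(target_pos)
    reserved.discard(target_pos)     # release the claim once the move completes
    return target_pos

# Two robots contending for the same cell: the second one is forced to wait.
occupied, reserved = {(0, 0), (0, 2)}, set()
print(try_step((0, 0), (0, 1), occupied, reserved))  # (0, 1): move succeeds
print(try_step((0, 2), (0, 1), occupied, reserved))  # (0, 2): cell now occupied, robot waits
```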

One hundred robots can coordinate to form a shape within a minute, compared to the hour it took in some previous approaches. Rubenstein wants his algorithm to be used in both driverless vehicles and automated warehouses.

“Large companies have warehouses with hundreds of robots doing tasks similar to what our robots do in the lab,” he said. “They need to make sure their robots don’t collide but do move as quickly as possible to reach the spot where they eventually give an object to a human.”

 


Scalable Autonomous Vehicle Safety Tools Developed By Researchers


As the speed of autonomous vehicle manufacturing and deployment increases, the safety of autonomous vehicles becomes even more important. For that reason, researchers are investing in the creation of metrics and tools to track the safety of autonomous vehicles. As reported by ScienceDaily, a research team from the University of Illinois at Urbana-Champaign has used machine learning algorithms to create a scalable autonomous vehicle safety analysis platform, utilizing both hardware and software improvements to do so.

Improving the safety of autonomous vehicles has remained one of the more difficult tasks in AI, because of the many variables involved in the task. Not only are the sensors and algorithms involved in the vehicle extremely complex, but there are many external conditions that are constantly in flux, such as road conditions, topography, weather, lighting and traffic.

The landscape and algorithms of autonomous vehicles are both constantly changing, and companies need a way to keep up with the changes and respond to new issues. The Illinois researchers are working on a platform that lets companies address recently identified safety concerns in a quick, cost-effective way. However, the sheer complexity of the systems that drive autonomous vehicles makes this a massive undertaking. The research team is designing a system that will be able to keep track of and update autonomous vehicle systems that contain dozens of processors and accelerators running millions of lines of code.

In general, autonomous vehicles drive quite safely. However, when a failure or unexpected event occurs, an autonomous vehicle is currently more likely than a human driver to get into an accident, as the vehicle often has trouble negotiating sudden emergencies. While it is admittedly difficult to quantify how safe autonomous vehicles are and what is to blame for accidents, it is obvious that a failure in a vehicle traveling at 70 mph could prove extremely dangerous, hence the need to improve how autonomous vehicles handle emergencies.

Saurabh Jha, a doctoral candidate and one of the researchers involved with the program, explained to ScienceDaily the need to improve failure handling in autonomous vehicles. Jha explained:

“If a driver of a typical car senses a problem such as vehicle drift or pull, the driver can adjust his/her behavior and guide the car to a safe stopping point. However, the behavior of the autonomous vehicle may be unpredictable in such a scenario unless the autonomous vehicle is explicitly trained for such problems. In the real world, there are an infinite number of such cases.”

The researchers are aiming to solve this problem by gathering and analyzing data from safety reports submitted by autonomous vehicle companies. Companies like Waymo and Uber are required to submit reports to the DMV in California at least annually. These reports contain data on statistics like how far cars have driven, how many accidents occurred, and what conditions the vehicles were operating under.

The University of Illinois research team analyzed reports covering the years 2014 to 2017. During this period, autonomous vehicles drove around 1,116,000 miles across 144 different vehicles. According to the research team’s findings, compared with the same distance driven by human drivers, accidents were 4,000 times more likely to occur. These accidents may imply that the vehicle’s AI failed to properly disengage and avoid the accident, relying instead on the human driver to take over.

It’s difficult to diagnose potential errors in the hardware or software of the autonomous vehicle because many errors will manifest only under the correct conditions. It also isn’t feasible to conduct tests under every possible condition that could occur on the road. Instead of collecting data on hundreds of thousands of real miles logged by autonomous vehicles, the research team is utilizing simulated environments to drastically reduce the amount of money and time spent in generating data for the training of AVs.
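The following Python sketch shows the general shape of such a simulation-based fault-injection campaign: inject faults into simulated driving scenarios and collect the cases that end in an accident. The fault types and the run_scenario interface are assumptions made for illustration; this is not the Illinois team's actual platform.

```python
import random

def fault_injection_campaign(run_scenario, scenarios, n_trials=100, seed=0):
    """Run randomized fault-injection trials and collect cases that end in an accident.

    run_scenario(scenario, fault) -> True if the simulated vehicle handled the fault safely.
    The fault list and interface are illustrative assumptions only.
    """
    rng = random.Random(seed)
    fault_types = ["sensor_dropout", "stuck_actuator", "delayed_perception"]
    failures = []
    for scenario in scenarios:
        for _ in range(n_trials):
            fault = rng.choice(fault_types)
            if not run_scenario(scenario, fault):
                failures.append((scenario, fault))
    return failures
```

In practice, a real campaign would target faults and scenarios more intelligently than random sampling, but even a simple loop like this illustrates why simulation is far cheaper than logging equivalent real-world miles.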

The research team uses the generated data to explore situations where AV failures can happen and safety issues can occur. It appears that utilizing the simulations can genuinely help companies find safety risks they wouldn’t be able to otherwise. For instance, when the team tested the Apollo AV, created by Baidu, they isolated over 500 instances where the AV failed to handle an emergency situation and an accident occurred as a result. The research team hopes that other companies will make use of their testing platform and improve the safety of their autonomous vehicles.
