
Startups

Tesla Acquires AI Startup DeepScale


First reported by CNBC and later picked up by various outlets, Tesla has acquired DeepScale, a four-year-old artificial intelligence startup whose technology will help push Tesla forward in developing autonomous vehicles. As of right now, Tesla's vehicles are not considered fully autonomous; they carry a Level 2 designation from the Society of Automotive Engineers. A Level 4 designation indicates high automation, defined as a vehicle being capable of operating in certain conditions without human intervention.

DeepScale is a California-based computer vision startup that has received $18 million in venture funding, from a round led by Steve Cohen's venture fund Point72 along with next47, a Siemens-backed venture fund. There was also a $3 million seed round from Andy Bechtolsheim, co-founder of Sun Microsystems; Ali Partovi, investor and co-founder of Code.org; and Jerry Yang's AME Cloud Ventures.

DeepScale focuses on neural networks and develops perception systems for semi-autonomous and autonomous vehicles. Its software is designed to run on the low-wattage processors used in mass-market automotive crash-avoidance systems, allowing a vehicle to interpret its surroundings more effectively.

The company has its own artificial intelligence perception software for vehicles called Carver21. Merging this technology with Tesla's vehicle hardware should help bring the cars closer to full autonomy. The acquisition will also help rebuild the Tesla Autopilot team, which reportedly lost as many as 11 executives and engineers over the past year.

Tesla leads the market in electric vehicles and is still working to bring its self-driving technology up to the same standard. The company holds a large share of the electric car market, which remains a small part of the total global automobile market. Tesla set a delivery record in the third quarter, delivering 97,000 vehicles worldwide, led by the Model 3 sedan.

One of Tesla's major goals is to develop an Uber-like ride-hailing platform built on fully autonomous, driverless vehicles. This acquisition brings Tesla and Elon Musk closer to that goal, but the company still faces challenges around the computational resources and constraints involved in building such systems into vehicles.

DeepScale CEO Forrest Iandola joined Tesla as a senior staff machine learning scientist after the acquisition, announcing the move on his LinkedIn page:

“I joined the Tesla #Autopilot team this week. I am looking forward to working with some of the brightest minds in #deeplearning and #autonomousdriving.” 

Iandola has a PhD in electrical engineering and computer science from UC Berkeley. Much of his work involves deep neural networks that can be implemented on mobile devices while using small amounts of memory.

The DeepScale purchase is the latest in a series of deals by the electric car company, which has made at least five other acquisitions, including Maxwell Technologies earlier this year and SolarCity in 2016.

Beyond the initial reporting, the deal has been kept quiet. Neither DeepScale nor Tesla has made an official announcement, and the acquisition price remains unknown.

According to Sabbir Rangwala, a specialist in perception for movement automation at Patience Consulting LLC and former President of Princeton Lightwave, Tesla “got a great bunch of talent to work on the AI for Tesla’s Driverless Car vision. Gaining people like Forrest who did spectacular work on ‘efficient AI’ at Berkeley and then took it further at DeepScale bodes well that they can integrate into Tesla with their aggressive plans for robotaxis.”

This deal will help Tesla integrate stronger AI into its electric vehicles and could take the company exactly where it is looking to go.

 


Autonomous Vehicles

Dr. Leilei Shinohara, Vice President of R&D at RoboSense – Interview Series


Dr. Leilei Shinohara is Vice President of R&D at RoboSense. With more than a decade of experience developing LiDAR systems, Dr. Shinohara is one of the most accomplished experts in this field. Prior to joining RoboSense, Dr. Shinohara worked at Valeo as the Technical Lead for the world’s first automotive-grade LiDAR, SCALA®. He was responsible for multiple programs, including automotive LiDAR and sensor fusion projects. Dr. Shinohara managed an international sensor product development team covering systems, software, hardware, mechanics, testing, validation, and functional safety to build the first automotive-grade LiDAR product.

Prior to joining RoboSense as Vice President of R&D, you had more than a decade of experience developing LiDAR, including work on Valeo’s SCALA® LiDAR project. What attracted you to joining RoboSense?

RoboSense is Asia’s No. 1 LiDAR company, with amazing development speed.

Prior to joining, I was impressed by RoboSense’s innovation capabilities and technical acumen. RoboSense aims to be the top smart LiDAR sensor provider for the automotive market, which is not only about the LiDAR hardware but also the AI perception algorithms. This goal fits well with my vision for the future of smart sensors. At CES 2019, RoboSense exhibited its latest MEMS solid-state LiDAR, which has superior performance to its peer products. At CES 2020, RoboSense made huge progress and announced that the solid-state LiDAR RS-LiDAR-M1 is ready for sale at a price of $1,898.


With RoboSense’s leading technology and my previous experience in the automotive industry, I am confident that together, we can greatly accelerate the development of automotive-grade LiDAR products that can be mass-produced to make highly automated driving a reality.

 

It’s important to understand the benefits of LiDAR technology for autonomous vehicles versus regular camera systems. Could you walk us through some of these benefits?

Cameras and radar have their limitations. For example, cameras don’t work well in poor ambient light, and radar has trouble detecting stationary obstacles. Compared to the camera, LiDAR’s biggest advantage lies in higher accuracy and precision. It is not affected by ambient light conditions, such as night, bright sunlight, or the oncoming headlights of other cars, and it is able to work in a wide variety of complex traffic conditions.

Recently there has been quite a bit of news about Tesla Autopilot accidents. As we know, Tesla’s Autopilot system relies only on cameras and radar. Those accidents also show that LiDAR is critical to guaranteeing safety and compensating for the weaknesses of conventional sensors.


Both Audi’s A8 (a Level 3 mass-produced autonomous vehicle) and the Waymo One autonomous ride-hailing service use LiDAR, which is an important industry indicator. Level 3 autonomous passenger vehicles using LiDAR will gradually become the industry standard.

 

One of the common complaints that we hear about LiDAR is that it’s too expensive for the bulk of consumer vehicles. Do you feel that the price will eventually drop to make it more competitive?

As we all know, high cost is one of the major barriers keeping traditional LiDAR systems from mass production, so it is inevitable that LiDAR prices will eventually drop to meet the needs of consumer autonomous vehicles. Our MEMS LiDAR, which uses a MEMS micromirror to steer the laser beam for scanning, can feasibly be made small and at a lower cost, making it more competitive than overpriced mechanical LiDAR.


The RoboSense MEMS-based LiDAR M1 uses low-cost, automotive-grade, compact 905 nm lasers. Its part count has been reduced from hundreds to dozens compared with traditional mechanical LiDARs, greatly reducing cost and shortening production time, a breakthrough in manufacturability. The coin-sized optical module integrates the opto-mechanical system to meet autonomous driving performance and mass-production requirements.

The M1 prototype, with a 200 m detection range and performance equivalent to 125 layers, now sells for $1,898, compared with conventional 128-layer mechanical LiDARs costing around ten thousand dollars. Furthermore, as we move to high-volume mass production, the sensor cost can drop to around $200.

 

In December 2019, RoboSense announced the launch of a complete LiDAR perception solution for Robo-Taxis, the RS-Fusion-P5. What is this solution?

The RS-Fusion-P5 is equipped with RoboSense’s flagship mechanical LiDAR, the RS-Ruby, and four short-range blind-spot LiDARs, the RS-Bpearl. This multi-LiDAR fusion perception solution was developed to further accelerate the development of Robo-Taxis.

The RS-Fusion-P5 has excellent perception capabilities. It can reach a 200 m detection range for a 10% reflectivity target, with high-precision resolution of up to 0.1°, full coverage, and zero blind spots in the sensing zone. In addition, through its advanced AI perception algorithms, multi-sensor fusion, and synchronization interfaces, vehicles can identify surrounding obstacles and position themselves easily and precisely, empowering Level 4 and above autonomous vehicles with full-stack perception capabilities.

The four embedded RS-BPearl units form hemispherical FOV coverage of 90° x 360° (or 180° x 180°). This not only precisely identifies objects around the vehicle body, such as pets, children, and roadbeds, as well as other details of the near-field ground area, but also detects actual height information in particular scenarios such as bridge tunnels and culverts, further supporting autonomous driving decision making and greatly improving safety.

 

The RS-Fusion-P5 has zero blind spots in the sensing zone. How is this achieved?

To cover the blind-spot zone, four RS-BPearl units are integrated on the four sides of the vehicle.

The BPearl is a mechanical-type LiDAR based on the same platform as the 16-layer, 32-layer, and Ruby LiDARs, but specially designed for blind-spot area detection.


RoboSense’s LiDAR production line recently obtained the IATF 16949 Letter of Conformity. This is a huge milestone for the company. Can you explain the importance of this letter and what it means for RoboSense?

IATF 16949 is the most widely used global quality management standard for the automotive industry, and it emphasizes various product reliability metrics. RoboSense has obtained the IATF 16949 certificate in the automotive field, which now fully qualifies it to supply automotive customers. It has also accelerated partnerships with major OEMs and Tier 1s for serial production of automotive-grade LiDAR. Moreover, it represents recognition by global industry experts of RoboSense’s product design, development, and production processes, and indicates that RoboSense has reached a new milestone: complete readiness for serial mass production of automotive LiDARs, including the latest solid-state smart LiDAR, the RS-LiDAR-M1.

 

RoboSense won a CES 2020 Innovation Award for the first MEMS-based smart LiDAR sensor, the RS-LiDAR-M1. What sets it apart from competing solutions?

Since opening to partners in 2017, the Smart LiDAR Sensor’s built-in perception algorithm, the RS-LiDAR-Algorithm, has held a leading position in the automotive LiDAR industry. The RoboSense RS-LiDAR-M1 Smart LiDAR is the world’s first and smallest MEMS-based smart LiDAR sensor, incorporating LiDAR sensors, AI algorithms, and IC chipsets. It transforms overpriced traditional LiDAR systems, which are essentially just information collectors, into a full data analysis and comprehension system, providing rich and reliable 3D point cloud data along with structured, semantic environmental perception results in real time, enabling faster autonomous vehicle decision-making than ever before. It fully supports Level 3 to Level 5 automated driving with ASIL-D, the highest level of perception safety, which greatly distinguishes us from other LiDAR companies.


In addition, the RS-LiDAR-M1, based on solid-state MEMS technology, meets automotive requirements. It has a field of view of 120° x 25°, the largest of any released MEMS solid-state LiDAR worldwide. RoboSense uses low-cost, automotive-grade, compact 905 nm lasers instead of expensive 1550 nm lasers.

 

How does the LiDAR industry in China compare to North America?

The LiDAR industry in China started later than in North America, but it has quickly become one of the fastest-growing markets for autonomous driving. In 2018, RoboSense won a strategic investment of over $45 million USD from Alibaba’s Cainiao Network, SAIC, and BAIC, setting the record for the largest single financing round in China’s LiDAR industry. Along with this strategic investment, Alibaba Cainiao announced an unmanned logistics vehicle powered by RoboSense’s MEMS solid-state LiDAR M1, accelerating LiDAR adoption in the logistics market. Meanwhile, RoboTaxi applications have also sped up the LiDAR market in China since last year.

In conclusion, the current market size in China is smaller than in the US, but I also see fast growth in autonomous driving, MaaS, logistics, and robotics applications.

 

When do you believe we will see fully operational Level 5 autonomous vehicles on the road?

I think it will still take a long time before we see fully automated (L5) vehicles on the road. Growth in autonomous vehicles will happen step by step. There are already vehicles equipped with L3 systems, and some of our partners and customers are developing L4 systems with a potential start of production within five years. But for fully automated L5 vehicles, the biggest concerns are always safety and public acceptance. If manufacturers cannot prove that fully automated vehicles are safer than human drivers, those vehicles will have difficulty becoming popular. I do see the industry moving in this direction step by step, but I don’t think we will have fully automated vehicles within 10 years.

 

Is there anything else that you would like to share about RoboSense?

RoboSense has received numerous awards, including CES 2020 and 2019 Innovation Awards, a 2019 AutoSens Award, and a 2019 Stevie Gold Award. Our partners include the world’s major autonomous driving technology companies, OEMs, and Tier 1s, among them China’s leading automaker FAW (First Automobile Works), which will use the RoboSense RS-LiDAR-M1 as the LiDAR for its proprietary next-generation autonomous driving system.

RoboSense’s first priority will be developing the solid-state M1 product for automotive-grade mass production. We are developing not only the hardware but also the software, as a comprehensive smart sensor system. Delivering our automotive-grade MEMS LiDAR in 2020 is one of our biggest milestones.

In addition, safety is the biggest challenge we will tackle. To ensure safety, fusion of different sensors is needed. Furthermore, AD-friendly infrastructure, such as an intelligent vehicle cooperative infrastructure system (IVICS), also supports autonomous driving. Therefore, developing short-range blind-spot detection (BSD) LiDAR, multi-sensor fusion projects, and IVICS projects to provide high-precision perception systems is also our focus in 2020.

Thank you for this fascinating interview. Anyone who wishes to learn more should visit RoboSense.


Autonomous Vehicles

Swarm Robots Help Self-Driving Cars Avoid Collisions


The top priority for companies developing self-driving vehicles is ensuring that the vehicles can navigate safely without crashing or causing traffic jams. Northwestern University has brought that reality one step closer with the development of the first decentralized algorithm with a collision-free, deadlock-free guarantee.

The researchers tested the algorithm in a simulation of 1,024 robots, as well as on a swarm of 100 real robots in the lab. The robots were able to reliably, safely, and efficiently converge to form a predetermined shape in less than a minute.

Northwestern’s Michael Rubenstein led the study. He is the Lisa Wissner-Slivka and Benjamin Slivka Professor in Computer Science in Northwestern’s McCormick School of Engineering. 

“If you have many autonomous vehicles on the road, you don’t want them to collide with one another or get stuck in a deadlock,” said Rubenstein. “By understanding how to control our swarm robots to form shapes, we can understand how to control fleets of autonomous vehicles as they interact with each other.”

The paper is set to be published in the journal IEEE Transactions on Robotics later this month. 

Using a swarm of small robots has an advantage over one large robot or a swarm led by a single robot: there is no centralized control. Centralized control can become a single point of failure, and Rubenstein’s decentralized algorithm acts as a fail-safe.

“If the system is centralized and a robot stops working, then the entire system fails,” Rubenstein said. “In a decentralized system, there is no leader telling all the other robots what to do. Each robot makes its own decisions. If one robot fails in a swarm, the swarm can still accomplish the task.”

In order to avoid collisions and jams, the robots coordinate with each other. The ground beneath the robots acts as a grid for the algorithm, and each robot is aware of its position on the grid due to technology similar to GPS. 

Before moving from one spot to another, each robot uses sensors to communicate with the others and determine whether nearby spaces on the grid are vacant or occupied.

“The robots refuse to move to a spot until that spot is free and until they know that no other robots are moving to that same spot,” Rubenstein said. “They are careful and reserve a space ahead of time.”

The robots are able to communicate with each other to form a shape, and this is made possible by the robots’ near-sightedness.

“Each robot can only sense three or four of its closest neighbors,” Rubenstein explained. “They can’t see across the whole swarm, which makes it easier to scale the system. The robots interact locally to make decisions without global information.”
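To make the reservation idea concrete, here is a minimal sketch in Python of how a grid-based robot might claim its next cell before moving. This is only an illustration of the concept described above, not the Northwestern team's published algorithm; every name in it is invented for the example, and the shared occupied/reserved sets stand in for the local occupancy information each robot would gather from its neighbors' sensors.

```python
# Hypothetical sketch of grid-cell reservation for collision-free movement.
# Not the published Northwestern algorithm; names and structure are invented.

class GridRobot:
    def __init__(self, robot_id, position):
        self.robot_id = robot_id
        self.position = position      # current (x, y) grid cell
        self.reservation = None       # cell claimed for the next move

    def next_cell_toward(self, target):
        """Pick an adjacent cell that moves one step toward the target."""
        x, y = self.position
        tx, ty = target
        step_x = (tx > x) - (tx < x)  # -1, 0, or +1
        step_y = (ty > y) - (ty < y)
        return (x + step_x, y) if step_x != 0 else (x, y + step_y)

    def try_reserve(self, target, occupied, reserved):
        """Claim the next cell only if it is neither occupied nor reserved."""
        cell = self.next_cell_toward(target)
        if cell == self.position or cell in occupied or cell in reserved:
            return False              # wait this round; no collision possible
        self.reservation = cell
        reserved.add(cell)
        return True

    def commit_move(self, occupied, reserved):
        """Complete the reserved move and release the old cell."""
        if self.reservation is not None:
            occupied.discard(self.position)
            occupied.add(self.reservation)
            reserved.discard(self.reservation)
            self.position = self.reservation
            self.reservation = None
```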

One hundred robots can coordinate to form a shape within a minute, compared to the hour it took with some previous approaches. Rubenstein hopes his algorithm will be used in both driverless vehicles and automated warehouses.

“Large companies have warehouses with hundreds of robots doing tasks similar to what our robots do in the lab,” he said. “They need to make sure their robots don’t collide but do move as quickly as possible to reach the spot where they eventually give an object to a human.”

 


Autonomous Vehicles

Waymo’s Self-Driving Technology Gets Smarter, Recognizes Billions of Objects Thanks To Content Search


The autonomous vehicles developed by Waymo utilize computer vision techniques and artificial intelligence to perceive the surrounding environment and make real-time decisions about how the vehicle should react and move. When objects are perceived by the camera and sensors inside of the vehicle, they are matched against a large database compiled by Alphabet in order to be recognized.

Massive datasets are of great importance to the training of autonomous vehicles, as they enable the AI within the vehicles to improve its performance. However, engineers need some way of efficiently matching items within the dataset to queries so that they can investigate how the AI performs on specific types of images. To solve this problem, as VentureBeat reports, Waymo recently developed a tool dubbed “Content Search”, which functions similarly to Google Image Search and Google Photos. These systems match queries against the semantic content within images, generating representations of the objects that make image retrieval based on natural language queries easier.

Before the advent of Content Search, if Waymo’s researchers wanted to retrieve certain samples from the logs, they had to describe the object using heuristics. Waymo’s logs had to be searched with commands that matched objects against rules, which meant running searches for objects that were “under X height” or that “traveled at y miles per hour”. The results of these rules-based searches could often be quite broad, and researchers would then need to comb through the returned results manually.

Content Search solves this problem by creating catalogs of data and conducting similarity searches across those catalogs to find the most similar categories when presented with an object. If Content Search is presented with a truck or a tree, it will return other trucks or trees that Waymo’s autonomous vehicles have encountered. As a Waymo vehicle drives around, it records images of the objects around it and stores them as embeddings, or mathematical representations. The tool can then compare object categories and rank responses by how similar the stored object images are to the provided object, similar to how the embedding similarity matching service operated by Google works.
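As a rough illustration of embedding-based retrieval (a generic sketch only; Waymo has not published Content Search's internals, and the class and function names below are hypothetical), a catalog can store one vector per logged object and rank stored objects by cosine similarity to a query vector:

```python
# Generic embedding-similarity search sketch; not Waymo's actual implementation.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

class EmbeddingCatalog:
    def __init__(self):
        self.embeddings = []   # one vector per logged object
        self.metadata = []     # e.g. drive log id, timestamp, label, sign text

    def add(self, embedding: np.ndarray, info: dict):
        self.embeddings.append(embedding)
        self.metadata.append(info)

    def search(self, query: np.ndarray, top_k: int = 5):
        """Rank stored objects by similarity to the query embedding."""
        scores = [cosine_similarity(query, e) for e in self.embeddings]
        ranked = sorted(zip(scores, self.metadata), key=lambda p: p[0], reverse=True)
        return ranked[:top_k]

# Toy usage: in practice the vectors would come from a trained vision model.
catalog = EmbeddingCatalog()
catalog.add(np.random.rand(128), {"log": "drive_001", "label": "truck"})
catalog.add(np.random.rand(128), {"log": "drive_002", "label": "tree"})
print(catalog.search(np.random.rand(128), top_k=1))
```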

The objects that Waymo’s vehicles encounter can come in all different shapes and sizes, but they all need to be distilled down into their essential components and categorized in order for Content Search to work. In order for this to happen, Waymo makes use of multiple AI models that are trained on a wide variety of objects. The various models learn to recognize a variety of objects and they are supported by Content Search, which enables the models to understand whether or not items belonging to a specific category are found within a given image. An additional optical character recognition model is utilized alongside the main model, allowing the Waymo vehicles to add extra identifying information to objects in images, based upon any text found in the image. For example, a truck equipped with signage would have the text of the sign included in its Content Search description.

Thanks to the above models working in concert, Waymo’s researchers and engineers can search the image data logs for very specific objects, such as particular species of trees or makes of car.

According to Waymo, as quoted by VentureBeat:

“With Content Search, we’re able to automatically annotate … objects in our driving history which in turn has exponentially increased the speed and quality of data we send for labeling. The ability to accelerate labeling has contributed to many improvements across our system, from detecting school buses with children about to step onto the sidewalk or people riding electric scooters to a cat or a dog crossing a street. As Waymo expands to more cities, we’ll continue to encounter new objects and scenarios.”

This isn’t the first time that Waymo has used multiple machine learning models to enhance the reliability and accuracy of its vehicles. Waymo has collaborated with Alphabet/Google in the past, developing an AI technique alongside DeepMind that takes inspiration from evolutionary biology. To begin with, a variety of machine learning models are created, and after training, the models that underperform are culled and replaced with offspring models. This technique reportedly managed to reduce false positives dramatically while also reducing the required computational resources and training time.
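The following is a minimal, hypothetical sketch of that cull-and-replace loop, in the spirit of population-based training. It is not the Waymo/DeepMind implementation; the function names and the toy scoring function are invented for illustration.

```python
# Hypothetical cull-and-replace training loop, inspired by population-based
# training; not the actual Waymo/DeepMind system.
import random

def evolve_population(models, evaluate, perturb, cull_fraction=0.2, generations=10):
    """models: list of configs; evaluate(m) -> score; perturb(m) -> mutated copy."""
    for _ in range(generations):
        scored = sorted(models, key=evaluate, reverse=True)        # best first
        n_cull = max(1, int(len(scored) * cull_fraction))
        survivors = scored[:-n_cull]                               # drop the underperformers
        offspring = [perturb(p) for p in scored[:n_cull]]          # mutate copies of the best
        models = survivors + offspring
    return max(models, key=evaluate)

# Toy usage: each "model" is just a learning-rate config; the score peaks at 0.1.
best = evolve_population(
    models=[{"lr": random.uniform(0.001, 1.0)} for _ in range(10)],
    evaluate=lambda m: -abs(m["lr"] - 0.1),
    perturb=lambda m: {"lr": m["lr"] * random.uniform(0.8, 1.2)},
)
print(best)
```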

