Waymo’s Self-Driving Technology Gets Smarter, Recognizes Billions of Objects Thanks To Content Search

The autonomous vehicles developed by Waymo use computer vision techniques and artificial intelligence to perceive the surrounding environment and make real-time decisions about how the vehicle should react and move. When the vehicle's cameras and sensors perceive objects, those objects are matched against a large database compiled by Alphabet in order to be recognized.

Massive datasets are of great importance to the training of autonomous vehicles, as they enable the AI within the vehicles to improve its performance. However, engineers need some way of efficiently matching items within the dataset to queries so that they can investigate how the AI performs on specific types of images. To solve this problem, as VentureBeat reports, Waymo recently developed a tool dubbed “Content Search”, which functions similarly to Google Image Search and Google Photos. Those systems match queries against the semantic content of images, generating representations of objects that make it easier to retrieve images with natural language queries.

Before the advent of Content Search, if Waymo’s researchers wanted to retrieve certain samples from the logs, they had to describe the objects using heuristics: rule-based commands that searched for objects “under X height” or objects that “traveled at Y miles per hour”. The results of these rule-based searches could often be quite broad, and researchers would then need to comb through the returned results manually.

Content Search solves this problem by creating catalogs of data and running similarity searches across them to find the entries most similar to a given object. If Content Search is presented with a truck or a tree, it will return other trucks or trees that Waymo’s autonomous vehicles have encountered. As a Waymo vehicle drives, it records images of the objects around it and stores them as embeddings, i.e. mathematical representations. The tool can then compare object categories and rank responses by how similar the stored object images are to the provided object. This is similar to how the embedding similarity matching service operated by Google works.
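The core idea can be sketched in a few lines. The following is a minimal illustration of embedding-based similarity search, not Waymo's actual implementation; the 4-dimensional embeddings are made-up stand-ins for what a trained model would produce:

```python
import numpy as np

def rank_by_similarity(query_emb, catalog_embs):
    """Rank catalog entries by cosine similarity to the query embedding."""
    q = query_emb / np.linalg.norm(query_emb)
    c = catalog_embs / np.linalg.norm(catalog_embs, axis=1, keepdims=True)
    sims = c @ q                       # cosine similarity per entry
    return np.argsort(-sims), sims     # indices ranked most-similar first

# Made-up embeddings for three objects a vehicle has already logged
catalog = np.array([
    [0.9, 0.1, 0.0, 0.1],   # truck A
    [0.8, 0.2, 0.1, 0.0],   # truck B
    [0.1, 0.1, 0.9, 0.8],   # tree
])
query = np.array([0.85, 0.15, 0.05, 0.05])  # a newly observed truck

order, scores = rank_by_similarity(query, catalog)  # trucks rank first
```

Because objects are compared in embedding space rather than by hand-written rules, a query for one truck surfaces other trucks without anyone having to specify heights or speeds.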

The objects that Waymo’s vehicles encounter come in all shapes and sizes, but they all need to be distilled into their essential components and categorized in order for Content Search to work. To make this happen, Waymo uses multiple AI models trained on a wide variety of objects. The various models learn to recognize different kinds of objects, and they are supported by Content Search, which lets them determine whether items belonging to a specific category appear in a given image. An additional optical character recognition (OCR) model is used alongside the main models, allowing Waymo vehicles to attach extra identifying information to objects in images, based on any text found in the image. For example, a truck equipped with signage would have the text of the sign included in its Content Search description.
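The OCR-augmentation step might look something like the sketch below. The record format and function are hypothetical, used only to show how recognized text can make an object more searchable:

```python
def content_search_entry(category, ocr_text=None):
    """Build a searchable record for a detected object, folding in any
    text read off the object by OCR (record format is hypothetical)."""
    entry = {"category": category, "tags": [category]}
    if ocr_text:
        # Each word from the signage becomes a searchable tag
        entry["tags"] += ocr_text.lower().split()
        entry["description"] = f'{category} with signage: "{ocr_text}"'
    else:
        entry["description"] = category
    return entry

# A delivery truck whose side panel reads "ACME Produce"
entry = content_search_entry("truck", "ACME Produce")
```

A later query for "ACME" would now match this truck even though the base detector only labeled it as a truck.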

Thanks to the above models working in concert, Waymo’s researchers and engineers can search the image data logs for very specific objects, such as particular species of trees or makes of car.

According to Waymo, as quoted by VentureBeat:

“With Content Search, we’re able to automatically annotate … objects in our driving history which in turn has exponentially increased the speed and quality of data we send for labeling. The ability to accelerate labeling has contributed to many improvements across our system, from detecting school buses with children about to step onto the sidewalk or people riding electric scooters to a cat or a dog crossing a street. As Waymo expands to more cities, we’ll continue to encounter new objects and scenarios.”

This isn’t the first time that Waymo has used multiple machine learning models to enhance the reliability and accuracy of its vehicles. Waymo has collaborated with Alphabet/Google in the past, helping develop an AI technique alongside DeepMind. The system takes inspiration from evolutionary biology: a variety of machine learning models are created, and after training, the models that underperform are culled and replaced with offspring models. This technique reportedly reduced false positives dramatically while also reducing the required computational resources and training time.
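That train-cull-replace loop can be sketched as follows. This is a toy illustration of the general evolutionary-selection idea, not the DeepMind/Waymo method; the models here are just dictionaries of hyperparameters and the fitness function is invented:

```python
import random

def mutate(model):
    """Create an offspring by perturbing a hyperparameter of a parent."""
    child = dict(model)
    child["lr"] *= random.choice([0.5, 2.0])
    return child

def evolve(population, fitness, generations=5, cull_frac=0.5):
    """Toy evolutionary search: score each model, cull the worst
    performers, and refill the pool with mutated survivors."""
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        keep = max(1, int(len(ranked) * (1 - cull_frac)))
        survivors = ranked[:keep]
        offspring = [mutate(random.choice(survivors))
                     for _ in range(len(population) - keep)]
        population = survivors + offspring
    return population

# Invented fitness: models with a learning rate near 0.01 score best
fitness = lambda m: -abs(m["lr"] - 0.01)
pool = [{"lr": random.uniform(0.001, 1.0)} for _ in range(8)]
best = max(evolve(pool, fitness, generations=20), key=fitness)
```

Because survivors are carried forward unchanged, the best score in the pool can only improve from one generation to the next.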

AI “Maths Robot” Helps Manage Microclimates and Increase Berry Yield Predictions

Costa Group is one of the biggest agriculture and horticulture companies in Australia, and it recently employed an AI system intended to improve crop quality and yield by helping the company analyze its berry crops. As reported by ZDNet, the system Costa Group employs was designed by The Yield, an AgTech company based in Sydney. The AI system analyzes 14 different features, including temperature, soil conditions, wind, light, and rain, in order to derive meaningful insights. The information is then combined with an existing dataset, and predictions about individual crops are returned.

Costa Group operates several berry farms located throughout Queensland, New South Wales, and Tasmania. The berry farms in these locations contain polytunnels, and these polytunnels have their own microclimates. Because the climate of these tunnels is controlled, they require their own “weather service”. Internet of Things (IoT) devices within the tunnels collect a wide variety of data that is fed into the AI model.  The process is one of continual model creation, production, feedback, and refinement. The creators of the system describe it as a “maths robot”.

Similar AI models have been used to predict crop yield for spinach, lettuce, and other crops, yet the founder of The Yield, Ros Harvey, explained that their system is important because berries are challenging to monitor as they grow. Unlike many other fruits and vegetables, berries often go through a variety of stages very quickly, and a single berry crop can have many growth stages at the same time. As Harvey explained to ZDNet:

“It’s been such a difficult problem for berry producers globally because unlike other crops, berries have many growth stages all at the same time… If you look at a berry plant, it’s fruiting, flowering, there are berries that are ready, and there are berries that are half produced because it continually fruits when it’s in season. Whereas other crops go through this linear growth stage where you harvest once at the end of the season.”

Currently, AI is typically used for just a few applications in the AgTech industry, among them precision farming, agriculture robots, livestock monitoring, and drone analytics. In 2018, precision farming accounted for around 35.6% of AI usage in the agricultural sector. Applications like the one developed by The Yield, which help farming operations increase yield and shield themselves from risk by providing insight into growing trends, seem poised to see much more use in the near future.

The data returned by the AI system allows Costa Group to gain a better understanding of its yield, which in turn helps the company manage its logistical costs and price point. Harvey predicts that in the future more and more companies will begin using AI-powered applications to quantify yield and reduce risk, noting that as climate change makes weather more unpredictable, more companies may choose to use polytunnels as well. The use of AI across the entire agricultural industry is predicted to grow rapidly in the near future, with machine learning, computer vision, and predictive analytics helping agricultural operations increase yield and do more with less.

As a recent report on the state of AI in agriculture found, AI AgTech is expected to grow dramatically over the course of the next five years. In 2018, the AI market in agriculture was valued at around 330 million USD, yet it is expected to reach a value of approximately 980 million USD by the end of 2024. Other recent applications of AI in the agriculture sector include small robots designed to weed fields and systems that track growing conditions in vertical farming operations.

Smartphone Data Combined With AI To Help Stop Vehicles From Hitting Pedestrians


Every year, hundreds of thousands of people die in accidents involving motor vehicles. Recently, an AI startup called VizibleZone devised a method to possibly prevent some of these deaths. As reported by VentureBeat, VizibleZone’s AI-powered system integrates data collected by both motor vehicles and smartphones in order to alert drivers to the potential location of a pedestrian, which can help drivers avoid tragic accidents.

According to the World Health Organization, in 2018 around 1.5 million people were killed in road accidents. More than half of the deaths associated with these accidents involved a collision between a pedestrian or cyclist and a motor vehicle. Over the past decade, consumer vehicles have become more high-tech and sophisticated, equipped with cameras, radar, and lidar capable of detecting people near or on a road. However, a major cause of many fatal accidents is the “hidden pedestrian” problem, named for instances where a pedestrian is obscured by an object until it’s too late.

VizibleZone devised a potential solution to this problem, making use of data from both smartphones and smart cars to create representations of city streets that pinpoint possible locations for both cars and pedestrians. If an AI model determines that there is a potential collision hazard, it will warn the driver of the vehicle, who can take the appropriate action to avoid a collision.

According to VizibleZone cofounder Shmulik Barel, as quoted by VentureBeat, applications based on the company's software development kit work by collecting large amounts of sensor and GPS data, which is anonymized before use. Hundreds of thousands of individuals contribute their data to a database used to train the main AI algorithms, which create behavioral profiles that take the surrounding environment into account. While the model’s assumptions about constant properties, like the size of objects and vehicles, might be generalizable, the model must be customized to fit the individual environment in which the applications operate. This is because drivers and pedestrians behave differently in different regions of the globe, and to make the model reliable, these regional differences must be accounted for.

Once the behavioral profiles are constructed and fine-tuned, users who opt in simply allow the app to broadcast their location. The broadcast is received by vehicles running VizibleZone’s software. An AI model then calculates the probability of an accident occurring based on variables like road conditions, the driver’s profile, and the pedestrian’s profile. If the risk of an accident exceeds a certain threshold, the driver is alerted to the potential collision approximately 3 seconds in advance.
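The threshold logic described above might look something like this sketch. The time-to-contact heuristic and the 3-second lead time (taken from the article) are used here for illustration; VizibleZone's actual risk model incorporates many more variables:

```python
def time_to_contact(distance_m, closing_speed_mps):
    """Seconds until vehicle and pedestrian paths would intersect."""
    if closing_speed_mps <= 0:        # moving apart: no contact expected
        return float("inf")
    return distance_m / closing_speed_mps

def should_alert(distance_m, closing_speed_mps, lead_time_s=3.0):
    """Warn the driver when projected contact falls within the lead
    time (the article cites roughly a 3-second warning)."""
    return time_to_contact(distance_m, closing_speed_mps) <= lead_time_s

# A pedestrian 40 m ahead with a closing speed of 15 m/s (~54 km/h)
alert = should_alert(40.0, 15.0)   # contact in under 3 s, so warn
```

Tuning the threshold trades off nuisance alerts against reaction time, which is presumably where the behavioral profiles come in.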

Barel explained that the system can also alert the pedestrian to a dangerously approaching vehicle, if the user wishes to receive those notifications. The AI system is reportedly capable of detecting pedestrians approximately 500 feet (150 meters) away, in any weather conditions and at any time of day. One concern is that the app appears to drain battery life by approximately 5% every 24 hours, although the startup is currently attempting to cut that energy usage in half.

According to Barel, as interviewed by VentureBeat, Uber has discussed the possibility of incorporating VizibleZone’s technology into its ride-hailing services. While collaborating with Uber might give VizibleZone a big break, the company’s current focus is improving the accuracy of the system by scaling up the number of devices that are networked together. VizibleZone would also like to integrate its technology with other smart devices and city infrastructure, such as traffic lights.

While devices like radar, lidar, and cameras have managed to cut down on many accidents, there still hasn’t been an application capable of tackling the “hidden pedestrian” problem. If VizibleZone can successfully adapt its current model and bring it to more places around the world, many lives could potentially be saved.

How the U.S.-China Tech War is Changing CES 2020

As CES 2020 continues to unfold in Las Vegas, so does the tech war between the United States and China. The ongoing conflict has led some Chinese companies to skip the event.

Major Chinese companies such as Alibaba, Tencent, and JD.com have skipped out on the world’s largest tech event. At the same time, China’s focus on major technologies such as artificial intelligence and 5G will be showcased.

CES 2020 has a total of 4,500 companies taking part, and around 1,000 of them are from China. That is down from the roughly one-third of companies that were Chinese in 2018 and the one-fourth in 2019.

This comes as the U.S.-China trade war continues to affect many aspects of the tech industry. However, the two nations are expected to sign a “Phase One” trade agreement on Jan. 15.

China’s trade delegation is expected to travel to Washington for a total of four days, beginning on January 14. Advocates are hoping that an agreement can bring an end to the trade conflict between the globe’s two biggest economies. 

The delegation will be led by Vice-Premier Liu He. U.S. President Donald Trump has called the deal a “major win” for the country and for himself, while the Chinese side has been quieter. According to Trump, he will visit Beijing at a later date.

Within the CES 2020 expo, there is a Chinese consulate and commerce ministry-backed station offering free legal help to Chinese attendees, due to current issues revolving around intellectual property rights. Those attendees have been told to carry documents certifying those rights in order to avoid trouble. This comes as IP theft is one of the major issues within the trade negotiations between the two nations. 

Since the shift in U.S. policy against Chinese tech companies in 2019, China has been seeking to establish technological independence from the U.S. According to a January 6 Eurasia Group report on top risks for 2020, this could cause serious issues within the international community. 

“The decision by China and the United States to decouple in the technology sphere is the single most impactful development for globalization since the collapse of the Soviet Union,” the report said.

One of the reasons for the decrease in Chinese participation at CES 2020 is that it is harder to obtain U.S. visas, due to the ongoing conflict. 

“Our company decided not to attend this year because we knew it would take forever to get our visa, if they don’t get rejected after all,” according to a Chinese A.I. chip startup founder.

Only OnePlus and Huawei, two of the top domestic smartphone makers in China, are taking part in CES. Xiaomi, Oppo, and Vivo have skipped the event. 

One of the major areas of interest at CES is artificial intelligence (AI), a field in which China is a global leader. The nation’s top AI startups, including Megvii, SenseTime, and Yitu, are absent. Those companies appear on a U.S. government trade-restriction “entity list”, placed there due to their alleged role in the ongoing persecution of ethnic minorities in Xinjiang province.

Two other companies on the list, voice recognition company iFlyTek and surveillance company Hikvision, are likewise absent from this year’s event.

Even with the ongoing issues and several Chinese companies absent from the event, many are still attending. Chinese participation at CES 2020 includes AI firms ForwardX Robotics and RaSpect Intelligence Inspection Limited, as well as Huawei, Baidu, Lenovo, Haier, Hisense, DJI, and ZTE USA.

 
