Startups

Anduril Industries Scores Defense Contract for a Surveillance System

Anduril Industries, the surveillance startup founded by Oculus Rift inventor Palmer Luckey, received a U.S. Marine Corps contract this month. The two-year-old defense technology company previously worked on Project Maven, the secretive Pentagon program that aimed to apply private-sector artificial intelligence to military purposes. 

Marine Corps Installations Command announced on July 15th that Anduril Industries had been awarded a $13.5 million sole-source contract. Additional information has been made public through documents obtained by the organization Mijente under the Freedom of Information Act. 

The new defense contract is for an Autonomous Surveillance Counter Intrusion Capability (ASCIC) that will use artificial intelligence to help secure installations against intrusions, operating without human involvement. The system is set to be deployed at four Marine Corps bases: two in Japan, one in Hawaii, and one in Yuma, Arizona, close to the U.S. border with Mexico. 

The ASCIC system uses Anduril’s existing perimeter-monitoring system, Lattice, which uses sensor towers, drones, and machine learning to automatically identify movement and intruders. 

Palmer Luckey spoke about the project all the way back in November 2018 at a summit in Lisbon, Portugal. 

“What we’re working on is taking data from lots of different sensors, putting it into an AI-powered sensor fusion platform so that you can build a perfect 3D model of everything that’s going on in a large area. Then we take that data and run predictive analytics on it, and tag everything with metadata, find what’s relevant, then push it to people who are out in the field.”

According to Anduril, the system can “detect, classify, and track any person, drone or other threat in a restricted area,” and it can “help identify terrorist threats faster and allow troops to instantly spot potential threats with confidence.” 

Anduril combines virtual reality technology developed at Oculus, Palmer Luckey’s previous company, with advanced military sensors. Together these form a simple, intelligent mobile platform that can monitor whatever an installation requires. 

In March, the MCICOM command was looking for a system that provided “24/7/365 autonomous situational awareness and actionable, real-time intelligence of surrounding air, land, and sea, through all-weather conditions.” 

“The system shall autonomously detect, identify, classify, and track humans on foot, wheeled and tracked vehicles on land, surface vessels and boats,” according to the original contract. “It must be a scalable federated network of sensors (EO/IR/RADAR) with capacity to expand into acoustic, seismic, and other sensors that operate across the electromagnetic spectrum.” 

Anduril was able to combine all of these requirements into a single system. MCICOM has said that Anduril is the only company on the market able to deliver such a system, which is why the contract was awarded so quickly. It is unusual for a defense contract to be awarded with so little competition from other defense firms or private companies. 

Despite the controversy surrounding military AI, it is becoming increasingly prominent in defense technology. Google, for example, stopped helping the U.S. military use artificial intelligence to analyze drone footage under the Pentagon’s Project Maven, after concerns from within the program and controversy in the media. Competition among private companies looking to win defense contracts is only likely to grow. 

With the increasing development of artificial intelligence in all areas of society, it was only a matter of time before the U.S. government began to use it in the defense sector. Just like in almost every other sector, AI can greatly increase the effectiveness of many aspects of military defense for the U.S.

 


Alex McFarland is a historian and journalist covering the newest developments in artificial intelligence.


Smartphone Data Combined With AI To Help Stop Vehicles From Hitting Pedestrians

Smartphone Data Combined With AI To Help Stop Vehicles From Hitting Pedestrians

Every year, hundreds of thousands of people die in accidents involving motor vehicles. Recently, an AI startup called VizibleZone devised a method to possibly prevent some of these deaths. As reported by VentureBeat, VizibleZone’s AI-powered system integrates data collected by both motor vehicles and smartphones in order to alert drivers to the potential location of a pedestrian, which can help drivers avoid tragic accidents.

According to the World Health Organization, around 1.5 million people were killed in road accidents in 2018. More than half of those deaths involved a collision between a pedestrian or cyclist and a motor vehicle. Over the past decade, consumer vehicles have become more high-tech and sophisticated, equipped with cameras, radar, and lidar capable of detecting people on or near a road. However, a major cause of many fatal accidents is the “hidden pedestrian” problem, named for instances where a pedestrian is obscured by an object until it’s too late.

VizibleZone devised a potential solution to this problem, making use of data from both smartphones and smart cars to create representations of city streets that pinpoint possible locations for both cars and pedestrians. If an AI model determines that there is a potential collision hazard, it will warn the driver of the vehicle, who can take the appropriate action to avoid a collision.

According to VizibleZone cofounder Shmulik Barel, as interviewed by VentureBeat, applications based on the company’s software development kit work by collecting large amounts of sensor and GPS data, which is anonymized before use. Hundreds of thousands of individuals contribute their data to a database used to train the main AI algorithms, which create behavioral profiles that take the surrounding environment into account. While the model’s assumptions about constant properties like the size of objects and vehicles may generalize, the model must be customized to the individual environment in which an application operates, because drivers and pedestrians behave differently in different regions of the globe. To make the model reliable, these regional differences must be accounted for.

Once the behavioral profiles are constructed and fine-tuned, users who opt in simply allow the app to broadcast their location. That broadcast is received by vehicles running VizibleZone’s software. An AI model then calculates the probability of an accident occurring based on variables like road conditions, the driver’s profile, and the pedestrian’s profile. If the risk of an accident exceeds a certain threshold, the driver is alerted approximately three seconds in advance.
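The alerting logic described above, a risk score computed from road, driver, and pedestrian factors and compared against a threshold roughly three seconds out, can be sketched as follows. VizibleZone has not published its model, so every name, weight, and formula here is a hypothetical illustration of threshold-based alerting, not the company's actual system.

```python
from dataclasses import dataclass

@dataclass
class Snapshot:
    distance_m: float        # estimated gap between vehicle and pedestrian
    closing_speed_ms: float  # how fast that gap is shrinking (m/s)
    road_risk: float         # 0..1 weight for road conditions
    driver_risk: float       # 0..1 weight from the driver's profile
    pedestrian_risk: float   # 0..1 weight from the pedestrian's profile

ALERT_THRESHOLD = 0.5    # illustrative cutoff, not a published value
WARNING_HORIZON_S = 3.0  # alert roughly 3 seconds before potential contact

def collision_risk(s: Snapshot) -> float:
    """Toy risk score: urgency from time-to-contact, scaled by profiles."""
    if s.closing_speed_ms <= 0:
        return 0.0  # gap is not shrinking, so no collision course
    time_to_contact = s.distance_m / s.closing_speed_ms
    if time_to_contact > WARNING_HORIZON_S:
        return 0.0  # outside the warning window
    urgency = 1.0 - time_to_contact / WARNING_HORIZON_S
    profile = (s.road_risk + s.driver_risk + s.pedestrian_risk) / 3.0
    return urgency * (0.5 + 0.5 * profile)

def should_alert(s: Snapshot) -> bool:
    return collision_risk(s) >= ALERT_THRESHOLD
```

In the real system the score would come from a trained model rather than a hand-written formula, but the pattern of continuous risk estimation against a fixed alerting threshold is the same.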

Barel explained that the system can also alert a pedestrian to a dangerous approaching vehicle, if the user wishes to receive those notifications. The AI system is reportedly capable of detecting pedestrians approximately 500 feet (150 meters) away, in any weather conditions and at any time of day. One concern is that the app appears to drain battery life by approximately 5% every 24 hours, although the startup is currently attempting to cut that energy usage in half.

According to Barel, as interviewed by VentureBeat, Uber has discussed the possibility of incorporating VizibleZone’s technology into its ride-hailing services. While collaborating with Uber might give VizibleZone a big break, the company’s current focus is improving the accuracy of the system by scaling up the number of devices that are networked together. VizibleZone would also like to integrate its technology with other smart devices and city infrastructure, such as traffic lights.

While devices like radar, lidar, and cameras have managed to cut down on many accidents, there still hasn’t been an application capable of tackling the “hidden pedestrian” problem. If VizibleZone can successfully adapt its current model and bring it to more places around the world, many lives could potentially be saved.



Waymo’s Self-Driving Technology Gets Smarter, Recognizes Billions of Objects Thanks To Content Search

Waymo’s Self-Driving Technology Gets Smarter, Recognizes Billions of Objects Thanks To Content Search

The autonomous vehicles developed by Waymo utilize computer vision techniques and artificial intelligence to perceive the surrounding environment and make real-time decisions about how the vehicle should react and move. When objects are perceived by the camera and sensors inside of the vehicle, they are matched against a large database compiled by Alphabet in order to be recognized.

Massive datasets are of great importance to the training of autonomous vehicles, as they enable the AI within the vehicles to get better and improve their performance. However, engineers need some way of efficiently matching items within the dataset to queries so that they can investigate how the AI performs on specific types of images. To solve this problem, as VentureBeat reports, Waymo recently developed a tool dubbed “Content Search”, which functions similarly to how Google Image Search and Google Photos operate. These systems match queries with the semantic content within images, generating representations of the objects that make image retrieval based on natural language queries easier.

Before the advent of Content Search, if Waymo’s researchers wanted to retrieve certain samples from the logs, they had to describe the object using heuristics. Waymo’s logs had to be searched with rules-based commands, running queries for objects “under X height” or objects that “traveled at Y miles per hour”. The results of these rules-based searches could be quite broad, and researchers would then need to comb through the returned results manually.

Content Search solves this problem by creating catalogs of data and conducting similarity searches across those catalogs to find the most similar categories when presented with an object. If Content Search is presented with a truck or a tree, it will return other trucks or trees that Waymo’s autonomous vehicles have encountered. As a Waymo vehicle drives, it records images of the objects around it and stores them as embeddings, or mathematical representations. The tool can then compare object categories and rank responses by how similar the stored object images are to the provided object, much as the embedding similarity matching service operated by Google works.
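The embedding-ranking idea can be illustrated with a minimal sketch: store each object as a vector and rank a catalog by cosine similarity to a query vector. The tiny three-dimensional vectors and labels below are invented for illustration; Waymo's real embeddings are learned by neural networks and have far more dimensions.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def search(query_vec, catalog, top_k=3):
    """Rank stored embeddings by similarity to the query embedding."""
    ranked = sorted(catalog.items(),
                    key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [label for label, _ in ranked[:top_k]]

# Hypothetical catalog: object labels mapped to toy embeddings.
catalog = {
    "truck_001": [0.9, 0.1, 0.0],
    "tree_017":  [0.0, 0.2, 0.9],
    "truck_042": [0.8, 0.3, 0.1],
}
```

Querying with a truck-like vector such as `[1.0, 0.0, 0.0]` returns the two trucks ahead of the tree, which is the behavior described above: present an object, get back the most similar stored objects.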

The objects that Waymo’s vehicles encounter can come in all different shapes and sizes, but they all need to be distilled down into their essential components and categorized in order for Content Search to work. In order for this to happen, Waymo makes use of multiple AI models that are trained on a wide variety of objects. The various models learn to recognize a variety of objects and they are supported by Content Search, which enables the models to understand whether or not items belonging to a specific category are found within a given image. An additional optical character recognition model is utilized alongside the main model, allowing the Waymo vehicles to add extra identifying information to objects in images, based upon any text found in the image. For example, a truck equipped with signage would have the text of the sign included in its Content Search description.

Thanks to the above models working in concert, Waymo’s researchers and engineers are capable of searching the image data logs for very specific objects like specific species of trees and makes of car.

According to Waymo, as quoted by VentureBeat:

“With Content Search, we’re able to automatically annotate … objects in our driving history which in turn has exponentially increased the speed and quality of data we send for labeling. The ability to accelerate labeling has contributed to many improvements across our system, from detecting school buses with children about to step onto the sidewalk or people riding electric scooters to a cat or a dog crossing a street. As Waymo expands to more cities, we’ll continue to encounter new objects and scenarios.”

This isn’t the first time Waymo has used multiple machine learning models to enhance the reliability and accuracy of its vehicles. Waymo has collaborated with Alphabet/Google in the past, helping develop an AI technique alongside DeepMind that takes inspiration from evolutionary biology. A variety of machine learning models are created; after training, the models that underperform are culled and replaced with offspring models. This technique reportedly reduced false positives dramatically while also cutting the required computational resources and training time.
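The cull-and-replace scheme can be sketched generically: score a population, keep the best fraction, and refill with mutated copies of the survivors. The sketch below evolves plain numbers toward a target as a stand-in for model hyperparameters; the function names, survival fraction, and Gaussian mutation are illustrative assumptions, not the actual Waymo/DeepMind method.

```python
import random

def evolve(population, fitness, generations=10, survive_frac=0.5, mutate=None):
    """Each generation: score individuals, cull the worst,
    refill with mutated offspring of the survivors."""
    mutate = mutate or (lambda m: m + random.gauss(0.0, 0.1))
    size = len(population)
    keep = max(1, int(size * survive_frac))
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)   # best first
        survivors = population[:keep]                # elitism: best survive
        offspring = [mutate(random.choice(survivors))
                     for _ in range(size - keep)]
        population = survivors + offspring
    return max(population, key=fitness)

# Toy usage: evolve numbers toward 3.0 with fitness -(x - 3)^2.
random.seed(0)
fit = lambda x: -(x - 3.0) ** 2
pop = [random.uniform(-10.0, 10.0) for _ in range(20)]
best = evolve(list(pop), fit, generations=30)
```

Because the best individuals always survive each round, the final answer can never score worse than the best of the initial population; in the real setting the "individuals" are trained models and fitness is validation performance.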




How the U.S.-China Tech War is Changing CES 2020


As CES 2020 continues to unfold in Las Vegas, so does the tech war between the United States and China. The ongoing conflict has led some Chinese companies to skip the event. 

Major Chinese companies such as Alibaba, Tencent, and JD.com have skipped out on the world’s largest tech event. At the same time, China’s focus on major technologies such as artificial intelligence and 5G will be showcased.

CES 2020 has a total of 4,500 companies taking part, and around 1,000 of them are from China. That is down from the roughly one-third of participants that were Chinese in 2018 and the one-fourth in 2019. 

This comes as the U.S.-China trade war continues to affect many aspects of the tech industry. However, the two nations are expected to sign a “Phase One” trade agreement on Jan. 15.

China’s trade delegation is expected to travel to Washington for a total of four days, beginning on January 14. Advocates are hoping that an agreement can bring an end to the trade conflict between the globe’s two biggest economies. 

The delegation will be led by Vice-Premier Liu He. U.S. President Donald Trump has called the agreement a “major win” for the country and for himself, while the Chinese side has been quieter. According to Trump, he will visit Beijing at a later date. 

Within the CES 2020 expo, a booth backed by the Chinese consulate and commerce ministry is offering free legal help to Chinese attendees, due to current issues revolving around intellectual property rights. Those attendees have been told to carry documents certifying those rights in order to avoid trouble, as IP theft is one of the major issues in the trade negotiations between the two nations. 

Since the shift in U.S. policy against Chinese tech companies in 2019, China has been seeking to establish technological independence from the U.S. According to a January 6 Eurasia Group report on top risks for 2020, this could cause serious issues within the international community. 

“The decision by China and the United States to decouple in the technology sphere is the single most impactful development for globalization since the collapse of the Soviet Union,” the report said.

One of the reasons for the decrease in Chinese participation at CES 2020 is that it is harder to obtain U.S. visas, due to the ongoing conflict. 

“Our company decided not to attend this year because we knew it would take forever to get our visa, if they don’t get rejected after all,” according to a Chinese AI chip startup founder.

Only OnePlus and Huawei, two of the top domestic smartphone makers in China, are taking part in CES. Xiaomi, Oppo, and Vivo have skipped the event. 

One of the major areas of interest at CES is artificial intelligence (AI), a field in which China is a global leader. Yet the nation’s top AI startups, including Megvii, SenseTime, and Yitu, are absent. Those companies are on a U.S. government trade-restriction “entity list,” placed there over their alleged role in the ongoing persecution of ethnic minorities in Xinjiang province. 

Another two companies that were put on the list are the voice recognition company iFlyTek and surveillance company Hikvision. They are not present at the event this year. 

Even with the ongoing issues and several Chinese companies absent from the event, many are still attending. Chinese participation at CES 2020 includes AI firms ForwardX Robotics and RaSpect Intelligence Inspection Limited, as well as Huawei, Baidu, Lenovo, Haier, Hisense, DJI, and ZTE USA.

 
