
Intel Acquires AI Startup Habana

Intel Corp has agreed to purchase the Israeli artificial intelligence (AI) chip startup Habana Labs for $2 billion. Intel announced the news on Monday, and the purchase takes the company further into the AI industry.

Habana Labs was founded in Israel in 2015 and focuses on AI chips. The startup has raised a total of $75 million, with Intel Capital among its investors.

“This acquisition advances our AI strategy, which is to provide customers with solutions to fit every performance need–from the intelligent edge to the data center,” Navin Shenoy, Intel’s executive vice president and general manager of the Data Platforms Group, said in the news release.  “More specifically, Habana turbo-charges our AI offerings for the data center with a high-performance training processor family and a standards-based programming environment to address evolving AI workloads.”

Intel predicts that the AI chip market will be worth more than $25 billion by 2024. And it is not just the market that is growing; the technology itself is becoming ever more important to the economy and society, which is why big companies like Intel are jumping in. Intel’s AI-driven revenue has risen 20% since 2018 and now exceeds $3.5 billion.

Intel’s growing interest in this area is also a result of PC sales stagnating, and the company now relies heavily on sales to data centers. 

Intel has made several other AI-related acquisitions in recent years, picking up Movidius, Nervana, Altera, and Mobileye.

While much of the focus in the AI industry is on software, chips are arguably just as important. Intel and other companies know this, which is why chip innovation has accelerated in recent years.

Mike Leone is a senior analyst at ESG.

“Satisfying AI workload requirements is a growing challenge for many organizations,” he said. “Traditional compute is simply unable to keep up with the orders of magnitude improvements organizations are looking for in their respective compute infrastructure. And it’s a losing proposition to just keep throwing more and more processing power at the problem. It’s too expensive. It’s too big of a footprint. And it’s too power hungry. We’re seeing an increase in the need for specialized compute to address the different workloads in the AI space, mainly training and inference. Training addresses the algorithm creation process, by feeding a model data so it can learn. Inference refers to the stage where the trained model gets leveraged to make predictions based on new incoming data. Of the two, training is far more resource intensive. And while GPUs, for example, can address both types of workloads, the emergence of specialized compute based on the AI workload—that is, training vs. inference—has emerged and amassed a surprising number of startups looking to add their IP and approach into the mix.”
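Leone’s training-versus-inference distinction maps neatly onto code. Below is a minimal, hypothetical sketch using scikit-learn with made-up data, not tied to any Habana product: training is the expensive fit on historical examples, while inference is the comparatively cheap prediction step on new inputs.

```python
# Hypothetical illustration of the two AI workloads Leone describes:
# training learns from labeled data, inference predicts on new data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Training: feed the model data so it can learn (the resource-hungry phase).
X_train = np.random.rand(10_000, 20)              # historical feature vectors
y_train = (X_train.sum(axis=1) > 10).astype(int)  # made-up labels
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Inference: the trained model makes predictions on new incoming data.
X_new = np.random.rand(5, 20)
print(model.predict(X_new))
```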

Mukesh Khare, the vice president of IBM’s AI Hardware Research Center, also believes in the importance of AI chips. 

“Today, AI applications are being executed on systems designed for other, non-AI purposes. The rapid escalation in AI deployments is straining the capabilities of these systems, and expected overall improvements in general-purpose computing systems cannot keep up with this escalation in demand. For example, the compute needed for AI training is doubling every 3.5 months. To address this AI compute demand growth and opportunity, heterogeneous systems and AI accelerator chips, designed specifically and from scratch for AI, are required.”
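Khare’s figure of training compute doubling every 3.5 months implies startling annual growth; a quick back-of-the-envelope calculation (ours, not IBM’s) makes the point:

```python
# If AI training compute doubles every 3.5 months, it grows by a factor of
# 2 ** (12 / 3.5) over a year.
doubling_period_months = 3.5
annual_growth = 2 ** (12 / doubling_period_months)
print(f"Implied growth per year: ~{annual_growth:.1f}x")  # roughly 11x
```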

With Intel behind it, Habana is positioned to become a major player in this space. The acquisition also follows a pattern of increasing consolidation in the AI chip market, and it should intensify competition to develop specialized chips for AI workloads.

 



AI “Maths Robot” Helps Manage Microclimates and Increase Berry Yield Predictions


Costa Group is one of the biggest agriculture and horticulture companies in Australia, and it has recently deployed an AI system intended to improve crop quality and yield by helping the company analyze its berry crops. As reported by ZDNet, the system was designed by The Yield, an AgTech company based in Sydney. The AI system analyzes 14 different features, including temperature, soil conditions, wind, light, and rain, in order to derive meaningful insights. That information is then combined with an existing dataset, and predictions about individual crops are returned.

Costa Group operates several berry farms located throughout Queensland, New South Wales, and Tasmania. The berry farms in these locations contain polytunnels, and these polytunnels have their own microclimates. Because the climate of these tunnels is controlled, they require their own “weather service”. Internet of Things (IoT) devices within the tunnels collect a wide variety of data that is fed into the AI model.  The process is one of continual model creation, production, feedback, and refinement. The creators of the system describe it as a “maths robot”.
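As a rough illustration of that loop, the sketch below fits a regressor on historical polytunnel sensor readings and predicts yield from the latest ones. The feature set, model choice, and data are placeholder assumptions of ours, not The Yield’s actual pipeline.

```python
# Toy version of the create/produce/feedback/refine cycle: retrain on the
# accumulated sensor history, then predict yield for the newest readings.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

FEATURES = ["temperature", "soil_moisture", "wind_speed", "light", "rainfall"]

# Placeholder history: sensor rows and the yields eventually observed.
history_X = np.random.rand(500, len(FEATURES))
history_y = history_X @ np.array([2.0, 3.0, -1.0, 1.5, 0.5])

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(history_X, history_y)

# Each cycle, the newest IoT readings come in and get scored per tunnel.
latest_readings = np.random.rand(1, len(FEATURES))
print("Predicted yield:", model.predict(latest_readings)[0])
```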

Similar AI models have been used to predict crop yields for spinach, lettuce, and other crops, yet The Yield’s founder, Ros Harvey, explained that their system fills a real need because berries are challenging to monitor as they grow. Unlike other fruits and vegetables, berries move through growth stages very quickly, and a single berry crop can be at many stages at the same time. As Harvey explained to ZDNet:

“It’s been such a difficult problem for berry producers globally because unlike other crops, berries have many growth stages all at the same time… If you look at a berry plant, it’s fruiting, flowering, there are berries that are ready, and there are berries that are half produced because it continually fruits when it’s in season. Whereas other crops go through this linear growth stage where you harvest once at the end of the season.”

Currently, AI is typically used for just a few different applications in the AgTech industry. Among these applications are precision farming, agriculture robots, livestock monitoring, and drone analytics. In 2018, precision farming accounted for around 35.6% of AI usage in the agricultural sector. Applications like the type developed by The Yield, which assist farming operations in increasing yield and shielding themselves from risk by gaining valuable insight into growing trends, seem poised to see much more use in the near future.

The data returned by the AI system gives Costa Group a better understanding of its yield, which in turn helps the company manage its logistical costs and price point. Harvey predicts that more and more companies will begin using AI-powered applications to quantify yield and reduce risk, noting that as climate change makes weather more unpredictable, more companies may choose to use polytunnels as well. The use of AI across the agricultural industry is predicted to grow rapidly in the near future, with machine learning, computer vision, and predictive analytics helping operations increase yield and do more with less.

A recent report on the state of AI in agriculture found that AI AgTech is expected to grow dramatically over the next five years. In 2018, the AI market in agriculture was valued at around 330 million USD, and it is expected to reach approximately 980 million USD by the end of 2024. Other recent applications of AI in the sector include small robots designed to weed fields and systems that track growing conditions in vertical farming operations.



Smartphone Data Combined With AI To Help Stop Vehicles From Hitting Pedestrians


Every year, more than a million people die in accidents involving motor vehicles. Recently, an AI startup called VizibleZone devised a method that could prevent some of these deaths. As reported by VentureBeat, VizibleZone’s AI-powered system integrates data collected by both motor vehicles and smartphones in order to alert drivers to the potential location of a pedestrian, helping them avoid tragic accidents.

According to the World Health Organization, around 1.5 million people were killed in road accidents in 2018, and more than half of those deaths involved a collision between a pedestrian or cyclist and a motor vehicle. Over the past decade, consumer vehicles have become more sophisticated, equipped with cameras, radar, and lidar capable of detecting people near or on a road. However, a major cause of fatal accidents remains the “hidden pedestrian” problem, named for instances where a pedestrian is obscured by an object until it is too late.

VizibleZone devised a potential solution to this problem, making use of data from both smartphones and smart cars to create representations of city streets that pinpoint possible locations for both cars and pedestrians. If an AI model determines that there is a potential collision hazard, it will warn the driver of the vehicle, who can take the appropriate action to avoid a collision.

According to VizibleZone cofounder Shmulik Barel, as reported by VentureBeat, applications built on the company’s software development kit work by collecting large amounts of sensor and GPS data, which is anonymized before use. Hundreds of thousands of individuals contribute their data to a database used to train the main AI algorithms, which build behavioral profiles that take the surrounding environment into account. While the model’s assumptions about constant properties, like the sizes of objects and vehicles, may generalize, the model must be customized to the individual environment in which an application operates, because drivers and pedestrians behave differently in different regions of the globe. To make the model reliable, these regional differences in behavior must be accounted for.

Once the behavioral profiles are constructed and fine-tuned, users who opt in simply allow the app to broadcast their location. That broadcast is received by vehicles running VizibleZone’s software, and an AI model then calculates the probability of an accident based on variables like road conditions, the driver’s profile, and the pedestrian’s profile. If the risk of an accident exceeds a certain threshold, the driver is alerted approximately three seconds in advance.
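A minimal sketch of that alerting logic might look like the following. The features, risk formula, and threshold here are illustrative assumptions of ours, not VizibleZone’s actual model.

```python
# Toy collision-risk scorer: warn the driver when a broadcast pedestrian
# position implies a possible conflict within the alert horizon.
from dataclasses import dataclass

ALERT_HORIZON_S = 3.0   # warn roughly three seconds ahead
RISK_THRESHOLD = 0.8

@dataclass
class Situation:
    distance_m: float         # gap between vehicle and broadcast pedestrian
    closing_speed_mps: float  # how quickly that gap is shrinking
    road_grip: float          # 0.0 (ice) .. 1.0 (dry asphalt)

def collision_risk(s: Situation) -> float:
    """Stand-in for a learned model: time-to-contact versus the horizon."""
    if s.closing_speed_mps <= 0:
        return 0.0                            # car and pedestrian separating
    time_to_contact = s.distance_m / s.closing_speed_mps
    risk = ALERT_HORIZON_S / time_to_contact  # >1 means inside the horizon
    risk /= max(s.road_grip, 0.1)             # poor grip inflates the risk
    return min(risk, 1.0)

if collision_risk(Situation(40.0, 15.0, 0.6)) > RISK_THRESHOLD:
    print("ALERT: possible pedestrian ahead")
```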

Barel explained that the system can also alert a pedestrian to a potentially dangerous approaching vehicle, if the user chooses to receive those notifications. The AI system is reportedly capable of detecting pedestrians approximately 500 feet (150 meters) away, in any weather conditions and at any time of day. One concern is that the app appears to drain battery life by approximately 5% every 24 hours, though the startup is currently working to cut that energy usage in half.

According to Barel, as interviewed by VentureBeat, Uber has discussed the possibility of incorporating VizibleZone’s technology into its ride-hailing services. While collaborating with Uber might give VizibleZone a big break, the company’s current focus is improving the accuracy of the system by scaling up the number of devices that are networked together. VizibleZone would also like to integrate its technology with other smart devices and city infrastructure, such as traffic lights.

While devices like radar, lidar, and cameras have managed to cut down on many accidents, there still hasn’t been an application capable of tackling the “hidden pedestrian” problem. If VizibleZone can successfully adapt its current model and bring it to more places around the world, many lives could potentially be saved.



Waymo’s Self-Driving Technology Gets Smarter, Recognizes Billions of Objects Thanks To Content Search


The autonomous vehicles developed by Waymo utilize computer vision techniques and artificial intelligence to perceive the surrounding environment and make real-time decisions about how the vehicle should react and move. When objects are perceived by the camera and sensors inside of the vehicle, they are matched against a large database compiled by Alphabet in order to be recognized.

Massive datasets are of great importance to the training of autonomous vehicles, as they enable the AI within the vehicles to improve its performance. However, engineers need a way to efficiently match items within a dataset to queries so they can investigate how the AI performs on specific types of images. To solve this problem, as VentureBeat reports, Waymo recently developed a tool dubbed “Content Search”, which functions much like Google Image Search and Google Photos. Those systems match queries against the semantic content of images, generating representations of objects that make it easier to retrieve images with natural language queries.

Before the advent of Content Search, if Waymo’s researchers wanted to retrieve certain samples from the logs, they had to describe the objects using heuristics: rule-based queries for objects “under X height” or objects that “traveled at X miles per hour”. The results of these rule-based searches were often quite broad, and researchers would then need to comb through them manually.

Content Search solves this problem by creating catalogs of data and running similarity searches across those catalogs to find the closest matches when presented with an object. If Content Search is given a truck or a tree, it will return other trucks or trees that Waymo’s autonomous vehicles have encountered. As a Waymo vehicle drives around, it records images of the objects around it and stores them as embeddings, mathematical representations of their content. The tool can then compare object categories and rank responses by how similar the stored object images are to the provided object, much as Google’s embedding similarity matching service works.
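Conceptually, this kind of search reduces to nearest-neighbor lookup over stored embeddings. The toy sketch below is our own illustration, with hand-written vectors standing in for the embeddings Waymo’s perception models would learn from driving data.

```python
# Rank a small catalog of stored object embeddings by cosine similarity
# to a query embedding; the top results are the "most similar" objects.
import numpy as np

catalog = {                     # object id -> stored embedding
    "truck_0412": np.array([0.90, 0.10, 0.20]),
    "tree_0033":  np.array([0.10, 0.95, 0.05]),
    "cyclist_17": np.array([0.20, 0.10, 0.90]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def content_search(query: np.ndarray, k: int = 2):
    """Return the k catalog entries most similar to the query embedding."""
    ranked = sorted(catalog.items(), key=lambda kv: cosine(query, kv[1]),
                    reverse=True)
    return ranked[:k]

# A truck-like query surfaces the stored truck first.
print(content_search(np.array([0.85, 0.15, 0.25])))
```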

The objects that Waymo’s vehicles encounter come in all shapes and sizes, but they all need to be distilled into their essential components and categorized in order for Content Search to work. To accomplish this, Waymo uses multiple AI models trained on a wide variety of objects. The models learn to recognize many kinds of objects, supported by Content Search, which lets them determine whether items belonging to a specific category appear in a given image. An additional optical character recognition model works alongside the main models, allowing Waymo’s vehicles to attach extra identifying information to objects based on any text found in an image. For example, a truck bearing signage would have the text of the sign included in its Content Search description.

Thanks to these models working in concert, Waymo’s researchers and engineers can search the image logs for very specific objects, such as a particular species of tree or make of car.

According to Waymo, as quoted by VentureBeat:

“With Content Search, we’re able to automatically annotate … objects in our driving history which in turn has exponentially increased the speed and quality of data we send for labeling. The ability to accelerate labeling has contributed to many improvements across our system, from detecting school buses with children about to step onto the sidewalk or people riding electric scooters to a cat or a dog crossing a street. As Waymo expands to more cities, we’ll continue to encounter new objects and scenarios.”

This isn’t the first time Waymo has used multiple machine learning models to enhance the reliability and accuracy of its vehicles. In the past, Waymo worked with Alphabet’s DeepMind to develop an AI training technique inspired by evolutionary biology: a variety of machine learning models are created and trained, and the models that underperform are culled and replaced with offspring models. The technique reportedly reduced false positives dramatically while also cutting the required computational resources and training time.
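The idea resembles what the research literature calls population-based training. A toy cull-and-replace loop, with a stand-in fitness function instead of real model training, might look like this:

```python
# Evolutionary search sketch: score a population, keep the top half, and
# refill with perturbed copies ("offspring") of the survivors.
import random

def fitness(candidate: dict) -> float:
    """Placeholder for 'train the model, then measure validation score'."""
    return -abs(candidate["lr"] - 0.01)   # pretend lr = 0.01 is optimal

population = [{"lr": random.uniform(1e-4, 1e-1)} for _ in range(8)]
for generation in range(10):
    ranked = sorted(population, key=fitness, reverse=True)
    survivors = ranked[: len(ranked) // 2]            # cull underperformers
    offspring = [{"lr": p["lr"] * random.uniform(0.8, 1.25)}
                 for p in survivors]                  # perturbed copies
    population = survivors + offspring

print("Best learning rate found:", max(population, key=fitness)["lr"])
```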
