Heliogen, a secretive startup backed by Bill Gates and AOL founder Steve Case, has announced that it is using artificial intelligence (AI) to tackle what many consider society’s greatest threat.
The company came out of the shadows on Tuesday to reveal that it has discovered how to use AI, along with a field of mirrors, to concentrate enough sunlight to generate extreme heat above 1,000 degrees Celsius.
According to the founders, this breakthrough could replace fossil fuels in industrial plants, which are responsible for over 20 percent of the world’s carbon emissions. The concentrated solar heat can stand in for those fuels in critical industrial processes, such as the production of cement, steel, and petrochemicals.
The huge breakthrough happened at Heliogen’s commercial facility in Lancaster, California. The firm’s founder and CEO is Bill Gross, who is also the founder of Idealab. The team consists of scientists and engineers from Caltech, MIT, and other institutions.
According to the press release, Heliogen’s main mission is to create the world’s first technology capable of commercially replacing fossil fuels with carbon-free, ultra-high temperature heat from the sun. They aim to transform sunlight into fuel in order to help solve climate change.
“Today, industrial processes like those used to make cement, steel, and other materials are responsible for more than a fifth of all emissions,” Gates said. “These materials are everywhere in our lives, but we don’t have any proven breakthroughs that will give us affordable, zero-carbon versions of them. If we’re going to get to zero-carbon emissions overall, we have a lot of inventing to do. I’m pleased to have been an early backer of [Heliogen CEO] Bill Gross’s novel solar concentration technology.”
Heliogen uses advanced computer vision software to precisely align a large array of mirrors, which reflect sunlight onto a single target. According to the company, the technology will eventually make it possible to reach temperatures of 1,500 degrees Celsius, at which point it could be used to produce completely clean hydrogen.
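Heliogen has not published its alignment math, but the underlying geometry is standard optics: by the law of reflection, each mirror’s surface normal must bisect the direction to the sun and the direction to the target. A minimal sketch of that calculation, with made-up direction vectors:

```python
import math

def normalize(v):
    """Scale a 3D vector to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def heliostat_normal(to_sun, to_target):
    """Mirror normal that reflects sunlight onto the target:
    by the law of reflection, it bisects the two unit directions."""
    s = normalize(to_sun)
    t = normalize(to_target)
    return normalize(tuple(a + b for a, b in zip(s, t)))

# Sun directly overhead, target due east at the same height:
n = heliostat_normal((0, 0, 1), (1, 0, 0))
```

In a real heliostat field, the computer vision system would refine these angles in a closed loop as the sun moves across the sky.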
Heliogen is currently working with several partners, including Parsons Corporation, a global leader in the defense, intelligence, and critical infrastructure markets. Parsons has been developing and implementing innovative solar thermal projects for over 10 years.
“As a company, we deliver sustainable solutions to our customers and we look forward to bringing Heliogen’s breakthrough technology to scale with our industry partners,” said Michael Chung, Vice President of Energy Solutions, Parsons Corporation.
“The world has a limited window to dramatically reduce greenhouse gas emissions,” said Bill Gross. “We’ve made great strides in deploying clean energy in our electricity system. But electricity accounts for less than a quarter of global energy demand. Heliogen represents a technological leap forward in addressing the other 75 percent of energy demand: the use of fossil fuels for industrial processes and transportation. With low-cost, ultra-high temperature process heat, we have an opportunity to make meaningful contributions to solving the climate crisis.”
The project has other investors, including the venture capital firm Neotribe and Dr. Patrick Soon-Shiong, a Los Angeles-based investor and entrepreneur who owns the investment firm Nant Capital. Neotribe’s founder and managing director, Swaroop ‘Kittu’ Kolluri, and Dr. Soon-Shiong sit on Heliogen’s board of directors.
“For the sake of our future generations we must address the existential danger of climate change with an extreme sense of urgency,” said Dr. Patrick Soon-Shiong. “I am committed to using my resources to invest in innovative technologies that harness the power of nature and the sun. By significantly reducing greenhouse gas emissions and generating a pure source of energy, Heliogen’s brilliant technology will help us achieve this mission and also meaningfully improve the world we leave our children.”
AI “Maths Robot” Helps Manage Microclimates and Increase Berry Yield Predictions
Costa Group, one of the largest agriculture and horticulture companies in Australia, has recently employed an AI system intended to improve crop quality and yield by helping the company analyze its berry crops. As reported by ZDNet, the system was designed by The Yield, an AgTech company based in Sydney. The AI system analyzes 14 different features, including temperature, soil conditions, wind, light, and rain, in order to derive meaningful insights. The information is then combined with an existing dataset, and predictions about individual crops are returned.
Costa Group operates several berry farms located throughout Queensland, New South Wales, and Tasmania. The berry farms in these locations contain polytunnels, and these polytunnels have their own microclimates. Because the climate of these tunnels is controlled, they require their own “weather service”. Internet of Things (IoT) devices within the tunnels collect a wide variety of data that is fed into the AI model. The process is one of continual model creation, production, feedback, and refinement. The creators of the system describe it as a “maths robot”.
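The Yield’s actual model is proprietary, but the loop described above (sensor readings in, prediction out, refined by feedback) can be illustrated with a toy online-learning sketch; the feature names, weights, and learning rate below are all invented for illustration:

```python
# Hypothetical sketch of the "maths robot" loop: IoT sensors in a
# polytunnel feed features into a model whose prediction is refined
# against the observed yield (feature names are illustrative).
FEATURES = ["temperature", "soil_moisture", "wind", "light", "rain"]

def predict_yield(readings, weights, bias):
    """Linear estimate of crop yield from current sensor readings."""
    return bias + sum(weights[f] * readings[f] for f in FEATURES)

def refine(readings, observed, weights, bias, lr=0.001):
    """One feedback step: nudge the weights toward the observed yield."""
    error = predict_yield(readings, weights, bias) - observed
    for f in FEATURES:
        weights[f] -= lr * error * readings[f]
    return bias - lr * error

weights = {f: 0.0 for f in FEATURES}
bias = 0.0
readings = {"temperature": 22.0, "soil_moisture": 0.4,
            "wind": 3.0, "light": 0.8, "rain": 0.0}
# One cycle of the create/produce/feedback/refine loop:
bias = refine(readings, 10.0, weights, bias)
```

In production, this cycle would repeat continuously as new sensor data arrives, which is the “continual model creation, production, feedback, and refinement” the creators describe.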
Similar AI models have been used to predict crop yield for spinach, lettuce, and other crops, yet the founder of The Yield, Ros Harvey, explained that their system fills a critical gap because berries are challenging to monitor as they grow. Unlike many other fruits and vegetables, berries go through a variety of stages very quickly, and a single berry crop can have many growth stages at the same time. As Harvey explained to ZDNet:
“It’s been such a difficult problem for berry producers globally because unlike other crops, berries have many growth stages all at the same time… If you look at a berry plant, it’s fruiting, flowering, there are berries that are ready, and there are berries that are half produced because it continually fruits when it’s in season. Whereas other crops go through this linear growth stage where you harvest once at the end of the season.”
Currently, AI is typically used for just a few different applications in the AgTech industry. Among these applications are precision farming, agriculture robots, livestock monitoring, and drone analytics. In 2018, precision farming accounted for around 35.6% of AI usage in the agricultural sector. Applications like the type developed by The Yield, which assist farming operations in increasing yield and shielding themselves from risk by gaining valuable insight into growing trends, seem poised to see much more use in the near future.
The data returned by the AI system allows Costa Group to gain a better understanding of its yield, which in turn helps the company manage its logistical costs and price point. Harvey predicts that in the future more and more companies will begin using AI-powered applications to quantify yield and reduce risk, noting that as climate change makes weather more unpredictable, more companies may choose to use polytunnels as well. The use of AI across the entire agricultural industry is predicted to grow rapidly in the near future. Machine learning, computer vision, and predictive analytics are helping agricultural operations increase yield and do more with less.
As a recent report released on the state of AI in agriculture found, AI AgTech is expected to grow dramatically over the course of the next five years. In 2018, the AI market in agriculture was valued at around 330 million USD, yet it is expected to reach a value of approximately 980 million USD by the end of 2024. Other recent applications of AI in the agriculture sector include small robots designed to weed fields and keeping track of growing conditions in vertical farming operations.
Smartphone Data Combined With AI To Help Stop Vehicles From Hitting Pedestrians
Every year, more than a million people die in accidents involving motor vehicles. Recently, an AI startup called VizibleZone devised a method that could prevent some of these deaths. As reported by VentureBeat, VizibleZone’s AI-powered system integrates data collected by both motor vehicles and smartphones in order to alert drivers to the potential location of a pedestrian, helping them avoid tragic accidents.
According to the World Health Organization, in 2018 around 1.5 million people were killed in road accidents. More than half of those deaths involved a collision between a motor vehicle and a pedestrian or cyclist. Over the past decade, consumer vehicles have become more high-tech and sophisticated, equipped with cameras, radar, and lidar capable of detecting people near or on a road. However, a major cause of many fatal accidents is the “hidden pedestrian” problem, named for instances where a pedestrian is obscured by an object until it’s too late.
VizibleZone devised a potential solution to this problem, making use of data from both smartphones and smart cars to create representations of city streets that pinpoint possible locations for both cars and pedestrians. If an AI model determines that there is a potential collision hazard, it will warn the driver of the vehicle, who can take the appropriate action to avoid a collision.
According to VizibleZone cofounder Shmulik Barel, as interviewed by VentureBeat, applications based on the company’s software development kit work by collecting large amounts of sensor and GPS data, which is anonymized before use. Hundreds of thousands of individuals contribute their data to a database used to train the main AI algorithms, which create behavioral profiles that take the environment surrounding these individuals into account. While the model’s assumptions about constant properties like the size of objects and vehicles might be generalizable, the model must be customized to fit the individual environment in which the applications operate, because drivers and pedestrians behave differently in different regions of the globe. To make the model reliable, these regional differences in behavior must be accounted for.
Once the behavioral profiles are constructed and fine-tuned, users who opt in simply allow the app to broadcast their location. The broadcast information is received by vehicles making use of VizibleZone’s software. An AI model then calculates the probability of an accident occurring based on variables like road conditions, the driver’s profile, and the pedestrian profile. If the risk of an accident exceeds a certain threshold, the driver is alerted approximately 3 seconds in advance.
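VizibleZone has not published its model, but the thresholded risk check described above might be sketched roughly as follows; the weights, factor scores, and alert threshold are all invented for illustration:

```python
# Illustrative sketch of the collision-risk check described above:
# combine road, driver, and pedestrian factors into a risk score and
# warn the driver when it crosses a threshold (all numbers hypothetical).
ALERT_THRESHOLD = 0.6
WARNING_LEAD_SECONDS = 3

def collision_risk(road_condition, driver_profile, pedestrian_profile):
    """Weighted combination of per-factor risk scores in [0, 1]."""
    weights = {"road": 0.3, "driver": 0.3, "pedestrian": 0.4}
    score = (weights["road"] * road_condition
             + weights["driver"] * driver_profile
             + weights["pedestrian"] * pedestrian_profile)
    return min(1.0, score)

def maybe_alert(risk):
    """Return an alert message when the risk exceeds the threshold."""
    if risk >= ALERT_THRESHOLD:
        return f"Pedestrian hazard in ~{WARNING_LEAD_SECONDS}s"
    return None

# Wet road, average driver, pedestrian likely obscured nearby:
risk = collision_risk(road_condition=0.8, driver_profile=0.5,
                      pedestrian_profile=0.9)
```

A production system would compute these factor scores from the learned behavioral profiles rather than from hand-set constants.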
Barel explained that the system can also alert a pedestrian to a dangerous approaching vehicle, if the user wishes to receive those notifications. The AI system is reportedly capable of detecting pedestrians approximately 500 feet (150 meters) away, in any weather conditions and at any time of day. One concern is that the app seems to drain battery life by approximately 5% every 24 hours, although the startup is currently attempting to cut that energy usage in half.
According to Barel, as interviewed by VentureBeat, Uber has discussed the possibility of incorporating VizibleZone’s technology into its ride-hailing services. While collaborating with Uber might give VizibleZone a big break, the company’s current focus is improving the accuracy of the system by scaling up the number of devices that are networked together. VizibleZone would also like to integrate its technology with other smart devices and city infrastructure, such as traffic lights.
While devices like radar, lidar, and cameras have managed to cut down on many accidents, there still hasn’t been an application capable of tackling the “hidden pedestrian” problem. If VizibleZone can successfully adapt its current model and bring it to more places around the world, many lives could potentially be saved.
Waymo’s Self-Driving Technology Gets Smarter, Recognizes Billions of Objects Thanks To Content Search
The autonomous vehicles developed by Waymo utilize computer vision techniques and artificial intelligence to perceive the surrounding environment and make real-time decisions about how the vehicle should react and move. When objects are perceived by the camera and sensors inside of the vehicle, they are matched against a large database compiled by Alphabet in order to be recognized.
Massive datasets are of great importance to the training of autonomous vehicles, as they enable the AI within the vehicles to get better and improve their performance. However, engineers need some way of efficiently matching items within the dataset to queries so that they can investigate how the AI performs on specific types of images. To solve this problem, as VentureBeat reports, Waymo recently developed a tool dubbed “Content Search”, which functions similarly to how Google Image Search and Google Photos operate. These systems match queries with the semantic content within images, generating representations of the objects that make image retrieval based on natural language queries easier.
Before the advent of Content Search, if Waymo’s researchers wanted to retrieve certain samples from the logs, they had to describe the objects using heuristics. Waymo’s logs had to be searched with commands that looked for objects based on rules, such as objects “under X height” or objects that “traveled at Y miles per hour”. The results of these rules-based searches could often be quite broad, and researchers would then need to comb through the returned results manually.
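A toy illustration of that kind of heuristic search, with invented log fields, shows why the results were broad: anything satisfying the rules matches, regardless of what the object actually is:

```python
# Hypothetical sketch of the older rules-based log search: objects are
# filtered by hand-written thresholds rather than semantic similarity,
# so results need manual review (log fields are invented).
logs = [
    {"id": 1, "height_m": 1.1, "speed_mph": 3.0},
    {"id": 2, "height_m": 2.4, "speed_mph": 0.0},
    {"id": 3, "height_m": 0.9, "speed_mph": 12.0},
]

def rule_search(logs, max_height=None, max_speed=None):
    """Return the id of every logged object satisfying all given rules."""
    hits = []
    for obj in logs:
        if max_height is not None and obj["height_m"] >= max_height:
            continue
        if max_speed is not None and obj["speed_mph"] >= max_speed:
            continue
        hits.append(obj["id"])
    return hits

# "Under 2 m tall and moving slower than 5 mph" could match a pedestrian,
# a shopping cart, or a dog equally well:
matches = rule_search(logs, max_height=2.0, max_speed=5.0)
```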
Content Search solves this problem by creating catalogs of data and conducting similarity searches across those catalogs to find the most similar categories when presented with an object. If Content Search is presented with a truck or a tree, it will return other trucks or trees that Waymo’s autonomous vehicles have encountered. As a Waymo vehicle drives around, it records images of the objects around it and stores them as embeddings, or mathematical representations. This means the tool can compare object categories and rank responses by how similar the stored object images are to the provided object. This is similar to how Google’s embedding similarity matching service works.
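The details of Waymo’s embeddings are not public, but the ranking step can be sketched with cosine similarity over made-up, three-dimensional vectors (real embeddings would have hundreds of dimensions):

```python
import math

# Hedged sketch of embedding-based retrieval like Content Search:
# stored objects are vectors, and a query vector is ranked against
# the catalog by cosine similarity (all vectors here are invented).
def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def search(query, catalog):
    """Return catalog labels ranked by similarity to the query embedding."""
    ranked = sorted(catalog.items(),
                    key=lambda kv: cosine(query, kv[1]),
                    reverse=True)
    return [label for label, _ in ranked]

catalog = {
    "truck": (0.9, 0.1, 0.0),
    "tree":  (0.1, 0.9, 0.2),
    "sedan": (0.8, 0.2, 0.1),
}
# A truck-like query embedding ranks the truck first, the tree last:
results = search((0.85, 0.15, 0.05), catalog)
```

At Waymo’s scale the linear scan would be replaced by an approximate nearest-neighbor index, but the ranking principle is the same.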
The objects that Waymo’s vehicles encounter can come in all different shapes and sizes, but they all need to be distilled down into their essential components and categorized in order for Content Search to work. In order for this to happen, Waymo makes use of multiple AI models that are trained on a wide variety of objects. The various models learn to recognize a variety of objects and they are supported by Content Search, which enables the models to understand whether or not items belonging to a specific category are found within a given image. An additional optical character recognition model is utilized alongside the main model, allowing the Waymo vehicles to add extra identifying information to objects in images, based upon any text found in the image. For example, a truck equipped with signage would have the text of the sign included in its Content Search description.
Thanks to the above models working in concert, Waymo’s researchers and engineers are capable of searching the image data logs for very specific objects like specific species of trees and makes of car.
According to Waymo, as quoted by VentureBeat:
“With Content Search, we’re able to automatically annotate … objects in our driving history which in turn has exponentially increased the speed and quality of data we send for labeling. The ability to accelerate labeling has contributed to many improvements across our system, from detecting school buses with children about to step onto the sidewalk or people riding electric scooters to a cat or a dog crossing a street. As Waymo expands to more cities, we’ll continue to encounter new objects and scenarios.”
This isn’t the first time that Waymo has used multiple machine learning models to enhance the reliability and accuracy of its vehicles. Waymo has collaborated with Alphabet/Google in the past, developing an AI technique alongside DeepMind that takes inspiration from evolutionary biology. To begin with, a variety of machine learning models are created; after training, the models that underperform are culled and replaced with offspring models. This technique reportedly reduced false positives dramatically while also cutting the required computational resources and training time.
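The create-cull-replace procedure described above resembles population-based training. A toy sketch over a single numeric “parameter”, with an invented fitness function standing in for model evaluation:

```python
import random

# Rough sketch of the evolutionary idea described above: keep a
# population of candidate models, cull the worst half each generation,
# and replace them with perturbed copies of the survivors.
random.seed(0)

def fitness(params):
    """Stand-in evaluation: higher is better, peaking at params == 5.0."""
    return -abs(params - 5.0)

def evolve(population, generations=20):
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: len(population) // 2]
        # Replace the culled half with mutated copies of the survivors.
        offspring = [p + random.uniform(-0.5, 0.5) for p in survivors]
        population = survivors + offspring
    return max(population, key=fitness)

best = evolve([random.uniform(0.0, 10.0) for _ in range(10)])
```

Because the best performer always survives each cull, the population’s top fitness never degrades, which is what makes this style of search stable.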