Heliogen, a secretive startup backed by Bill Gates and AOL founder Steve Case, has announced that it is using artificial intelligence (AI) to tackle what many consider society’s greatest threat.
The company came out of the shadows on Tuesday to reveal that it has discovered how to use AI, along with a field of mirrors, to reflect enough sunlight to generate extreme heat above 1,000 degrees Celsius.
According to the founders, this heat could replace the fossil fuels burned in industrial plants, which are responsible for over 20 percent of the world’s carbon emissions, and be used in critical industrial processes like the production of cement, steel, and petrochemicals.
The huge breakthrough happened at Heliogen’s commercial facility in Lancaster, California. The firm’s founder and CEO is Bill Gross, who is also the founder of Idealab. The team consists of scientists and engineers from Caltech, MIT, and other institutions.
According to the press release, Heliogen’s main mission is to create the world’s first technology capable of commercially replacing fossil fuels with carbon-free, ultra-high temperature heat from the sun. They aim to transform sunlight into fuel in order to help solve climate change.
“Today, industrial processes like those used to make cement, steel, and other materials are responsible for more than a fifth of all emissions,” Gates said. “These materials are everywhere in our lives, but we don’t have any proven breakthroughs that will give us affordable, zero-carbon versions of them. If we’re going to get to zero-carbon emissions overall, we have a lot of inventing to do. I’m pleased to have been an early backer of [Heliogen CEO] Bill Gross’s novel solar concentration technology.”
Heliogen uses advanced computer vision software to precisely align a large array of mirrors, which then reflect sunlight onto a single target. According to the company, the technology will eventually be capable of reaching temperatures of 1,500 degrees Celsius, which would make it possible to produce completely clean hydrogen.
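The geometry behind aiming each mirror is simple to state, even though the real-time vision system that enforces it is the hard part: a heliostat’s surface normal must bisect the angle between the direction to the sun and the direction to the target. A minimal sketch of that calculation (an illustration of the underlying geometry only, not Heliogen’s actual software):

```python
import numpy as np

def heliostat_normal(sun_dir, mirror_pos, target_pos):
    """Unit normal a flat mirror needs so that sunlight arriving
    from sun_dir reflects toward target_pos.
    sun_dir points from the mirror toward the sun."""
    sun = sun_dir / np.linalg.norm(sun_dir)
    to_target = target_pos - mirror_pos
    to_target = to_target / np.linalg.norm(to_target)
    normal = sun + to_target  # bisector of the two unit vectors
    return normal / np.linalg.norm(normal)

# Example: sun directly overhead, target 10 m east at mirror height.
n = heliostat_normal(np.array([0.0, 0.0, 1.0]),
                     np.array([0.0, 0.0, 0.0]),
                     np.array([10.0, 0.0, 0.0]))
```

In a real field, software like Heliogen’s must solve this for thousands of mirrors continuously as the sun moves, and use camera feedback to correct for mechanical error, which is where the computer vision comes in.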
Heliogen is currently working with a few different partners, including Parsons Corporation, a global leader in the defense, intelligence, and critical infrastructure markets. Parsons has been developing and implementing innovative solar thermal projects for over 10 years.
“As a company, we deliver sustainable solutions to our customers and we look forward to bringing Heliogen’s breakthrough technology to scale with our industry partners,” said Michael Chung, Vice President of Energy Solutions, Parsons Corporation.
“The world has a limited window to dramatically reduce greenhouse gas emissions,” said Bill Gross. “We’ve made great strides in deploying clean energy in our electricity system. But electricity accounts for less than a quarter of global energy demand. Heliogen represents a technological leap forward in addressing the other 75 percent of energy demand: the use of fossil fuels for industrial processes and transportation. With low-cost, ultra-high temperature process heat, we have an opportunity to make meaningful contributions to solving the climate crisis.”
The project has other investors, including venture capital firm Neotribe and Dr. Patrick Soon-Shiong, a Los Angeles-based investor and entrepreneur who owns the investment firm Nant Capital. Neotribe’s founder and managing director, Swaroop ‘Kittu’ Kolluri, and Dr. Soon-Shiong sit on Heliogen’s board of directors.
“For the sake of our future generations we must address the existential danger of climate change with an extreme sense of urgency,” said Dr. Patrick Soon-Shiong. “I am committed to using my resources to invest in innovative technologies that harness the power of nature and the sun. By significantly reducing greenhouse gas emissions and generating a pure source of energy, Heliogen’s brilliant technology will help us achieve this mission and also meaningfully improve the world we leave our children.”
William Santana Li, CEO of Knightscope – Interview Series
Knightscope is a leader in developing autonomous security capabilities, with a vision to one day predict and prevent crime, disrupting the $500 billion security industry. The technology is a profound combination of self-driving technology, robotics, and artificial intelligence.
William Santana Li is the Chairman and CEO of Knightscope. He is a seasoned entrepreneur, intrapreneur, and former corporate executive at Ford Motor Company, as well as the founder and COO of GreenLeaf, which became the world’s second-largest automotive recycler.
Knightscope was launched in 2013, which was very forward-thinking for the time. What was the inspiration behind launching this company?
A professional and a personal motivation. The professional answer: as a former automotive executive, I believe deeply that autonomous self-driving technology is going to turn the world upside down – I just don’t agree with how the industry has tried to commercialize the technology. Over $80 billion has been invested in autonomous technology, with something like 200 companies working on it – for years. Yet no one has shipped anything commercially viable. I believe Knightscope is literally the only company in the world operating fully autonomously 24/7/365 across an entire country, without human intervention, generating real revenue, with real clients, in the real world. Our crawl, walk, run approach is likely more suitable for this extremely complicated and execution-intensive technology. My personal motivation: someone hit my town on 9/11 and I’m still furious – and I am dedicating the rest of my life to better securing our country. You can learn more about why we built Knightscope here.
Knightscope offers clients a Machine-as-a-Service (MaaS) subscription which aggregates data from the robots, analyzes it for anything out of the ordinary and serves that information to clients. What type of data is being collected?
Today we can read 1,200 license plates per minute, detect a person, run a thermal scan, and check for rogue mobile devices… it adds up to over 90 terabytes of data a year that no human could ever process. So our clients utilize our state-of-the-art browser-based user interface to interact with the machines. You can get a glimpse of it here – we call it the KSOC (Knightscope Security Operations Center). In the future, our desire is to have the machines be able to ‘see, feel, hear and smell’ and do 100 times more than a human could ever do – giving law enforcement and security professionals ‘superpowers’ – so they can do their jobs much more effectively.
K1 is a stationary machine which is ideal for entry and exit points. What are the capabilities that are offered with this machine?
Yes, the K1 operates primarily at ingress/egress points for either humans and/or vehicles. All our machines have the same suite of technologies – but at this time the K1 does have facial recognition capabilities, which have proven to be quite useful in securing a location.
The K3 is an indoor autonomous robot, and the K5 is an outdoor autonomous robot, both capable of autonomous recharging and of having conversations with humans. What else can you tell us about these robots, and is there anything else that differentiates the two robots from each other?
The K3 is the smaller version capable of handling much smaller and dynamic indoor environments.
Obviously the K5 is weatherproof and can even go up ramps for vehicles – one of our clients is a 9-story parking structure – and the robot patrols autonomously on multiple levels on its own, which is a bit of a technical feat.
Your robots have been tested in multiple settings including shopping malls and parking lots. What are some other settings or use cases which are ideal for these robots?
Basically, anywhere outdoors or indoors you may see a security guard. Commercial real estate, corporate campuses, retail, warehouses, manufacturing plants, healthcare, stadiums, airports, rail stations, parks, data centers – the list is massive. Usually we do well when the client has a genuine crime problem and/or budget challenges.
Could you share with us some of the noteworthy clients which are currently using the robots in a commercial setting?
Ten of the Fortune 1000 corporations are clients – Samsung, Westfield Malls, the Sacramento Kings, the City of Hayward, the City of Huntington Park, Citizens Bank, XPO Logistics, Faurecia, Dignity Health, and Houston Methodist Hospital are just a few that come to mind. We operate across 4 time zones, in the U.S. only. You can check them out on our homepage at www.knightscope.com
The K7 is a multi-terrain autonomous robot which is currently under development. The pictures of this robot look very impressive. What can you tell us about the future capabilities of the K7?
The K7 is technically challenging but is intended to handle much more difficult terrain and much larger environments – with gravel, dirt, sand, grass, etc. It is the size of a small car.
Knightscope is currently fundraising on StartEngine. What are the investment terms for investors?
We are celebrating our 7th anniversary and have raised over $40 million since inception to build all this technology from scratch. We design, engineer, build, deploy and support it. Made in the USA – and we are backed by over 7,000 investors and 4 major corporations, and you can learn about our investor base here. We are now raising $50 million in growth capital to scale the Company up to profitability – we can accept accredited and unaccredited investors, both domestic and international, from $1,000 to $10M, completely online. You can learn more about the terms and buy shares here: www.startengine.com/knightscope
Is there anything else that you would like to share about Knightscope?
As I write this response, we are in complete lockdown in Silicon Valley due to the global pandemic. The crazy thing is that our clients are ‘essential services’ (law enforcement agencies, hospitals, security teams) so we must continue to operate 24/7/365. You can read more about why I think you should consider investing in Knightscope here – but these days the important thing to remember is that robots are immune!
Marc Sloan, Co-Founder & CEO of Scout – Interview Series
Marc Sloan is the Co-Founder & CEO of Scout, the world’s first web browser chatbot, a digital assistant for getting anything done online. Scout suggests useful things it can do for you based on what you’re doing online.
What initially attracted you to AI?
My first experience of working on AI was during a gap year I spent working in the natural language processing research team at GCHQ during my Bachelor’s degree. I got to see first-hand the impact machine learning could have on real world problems and the difference it makes.
It flipped a switch in my mind about how computers can be used to solve problems: software engineering teaches you to create programs that take data and produce results, but machine learning lets you take data and describe the results you want in order to produce a program. This means you can use the same framework to solve thousands of different problems. To me, this felt far more impactful than having to write a program for each problem.
I was already studying optimisation problems in mathematics alongside computer science, so once I got back to university I focused on AI and completed my dissertation on speech processing before applying for a PhD in Information Retrieval at UCL.
You researched reinforcement learning in web search under the supervision of David Silver, who led the AlphaGo project. Could you discuss some of this research?
My PhD was on the topic of applying reinforcement learning to learning to rank problems in information retrieval, a field I helped create called Dynamic Information Retrieval. I was supervised by Prof Jun Wang and Prof David Silver, both experts in agent-based reinforcement learning.
Our research looked at how search engines could learn from user behaviour to improve search results autonomously over time. Using a Multi-Armed Bandit approach, our system would attempt different search rankings and collect click behaviour to determine if they were effective or not. It could also adapt to individual users over time and was particularly effective in handling ambiguous search queries. At the time, David was focusing deeply on the Go problem and he helped me determine the appropriate reinforcement learning setup of states and value function for this particular problem.
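The bandit approach described here can be illustrated with a small sketch (hypothetical code, not the actual research system): an epsilon-greedy agent chooses among candidate rankings, usually exploiting the one with the best observed click-through rate but occasionally exploring, and updates a running estimate of each ranking’s value from click feedback.

```python
import random

class EpsilonGreedyRanker:
    """Choose among candidate result rankings, treating clicks as reward.
    A minimal epsilon-greedy multi-armed bandit; illustrative only."""
    def __init__(self, n_rankings, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = [0] * n_rankings    # times each ranking was shown
        self.values = [0.0] * n_rankings  # running mean click-through rate

    def select(self):
        if random.random() < self.epsilon:           # explore
            return random.randrange(len(self.counts))
        return max(range(len(self.counts)),          # exploit best CTR so far
                   key=lambda i: self.values[i])

    def update(self, arm, clicked):
        self.counts[arm] += 1
        # incremental mean: new_mean = old_mean + (reward - old_mean) / n
        self.values[arm] += (float(clicked) - self.values[arm]) / self.counts[arm]
```

Ambiguous queries suit this framework well because exploration surfaces click evidence for each plausible interpretation rather than committing to one ranking up front.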
What are some of the entrepreneur lessons that you learned from working with David Silver?
Research at UCL is often entrepreneurial. David had previously founded Elixir Studios with Demis Hassabis and then, of course, joined DeepMind to work on AlphaGo. But other members of our Media Futures research group also ended up spinning out a range of different startups: Jun founded Mediagamma (applying RL to online ad spend), Simon Chan started PredictionIO (acquired by Salesforce) and Jagadeesh Gorla started Jaggu (a recommendation service for e-commerce). Our team often discussed the commercial impact our research could have – I think perhaps because UCL’s base in London makes it a natural starting point for creating a business.
You recently launched Scout, the world’s first web browser chatbot. What was the inspiration behind launching Scout?
The idea naturally evolved from my PhD research. I went straight from finishing my PhD to joining Entrepreneur First where I started to think about how I could turn my research into a product.
Before I started this, I completed an internship at Microsoft Research where I applied my research to Bing. At the time, the main thing I learned from my research was that information finding could be predicted based on online user behaviour. But I became frustrated that the only real way to surface these predictions in a search engine was by making auto-suggest better. So I started to think about how the user’s entire online experience could be improved using these predictions, not just the search experience.
It was this thinking that led me and my new co-founder on Entrepreneur First to create a browser add-on that observes user behaviour, predicts what information the user is likely to need next online, and fetches it for them. After a few years of experiments and prototypes, this evolved into a chatbot interface where the browser ‘chats’ to you about what you’re up to online and tries to help you along the way.
Which web browsers will Scout be compatible with?
We’re focusing on Chrome at the moment due to it being the most popular web browser and having a mature add-on architecture, but we have prototypes working on Firefox and Safari and even a mobile app.
The Scout shopping assistant functionality sounds like it could save users both time and money. Assuming someone is researching a product on Amazon, what happens in the backend, and how does Scout interact with the user?
The idea is that once you have Scout installed, you just continue using the web as normal. If you’re shopping, you may visit Amazon to look at products. At this point, Scout recognises that you’re shopping on Amazon, and the product you’re looking at, and it will say “Hello”. It pops up as a chat widget on the webpage, kind of like how Intercom works, except Scout can appear on potentially any webpage. You can see what it looks like on my website.
Because you’re shopping, it’ll start to suggest ways it can help. It’ll ask you if you want to see reviews online, other prices, YouTube videos of the product and more. You interact by pressing buttons and the chatbot tailors the experience to what you want it to do. Whenever it finds information (like a YouTube video), it will embed it within the chat thread, just like how a friend might share media with you on WhatsApp. Over time, you end up having a dialogue with the browser about what you are doing online, with the browser helping you along the way.
The webpage processing happens within the browser itself. The only information our backend sees is the chat thread, meaning that the privacy implications are minimal.
We have a bespoke architecture for understanding online browsing behaviour and managing dialogues with the user. We use machine learning to identify what tasks we can help with online and how we should help. Originally, we used reinforcement learning to adapt to user preferences over time. However, one of the biggest lessons I’ve learned from running an AI startup is to keep processes simple and to try to only use machine learning to optimise an existing process. So instead, we now have a sophisticated rules engine for handling tasks over time that can be managed by reinforcement learning once we need to scale.
What are some examples of how Scout can assist with event planning?
We realised that event planning (and travel booking) are not so different from shopping online. You’re still looking at products, reading reviews and committing to purchase/attend. So a lot of what we’ve built for shopping also applies here.
The biggest difference is that time and location are now important. So for instance, if you’re looking at concert tickets on Ticketmaster, Scout can identify the address of the venue and suggest finding you directions from your current location to it, or find the price of an Uber, or suggest what time you should leave. If you’ve connected Scout into your calendar, then Scout can check to see if you’re available at the time of the event and add it to your calendar for you.
In the future, we foresee Scout users being able to communicate to their friends through the platform to discuss the things they’re doing online such as event planning, shopping, work etc.
Dialogue triggers will be used for Scout to initiate communications. What are some of these triggers?
By default, Scout won’t disturb you unless it encounters a trigger that tells it you may need help. There are several types of trigger:
- Visiting a specific website.
- Visiting a type of website (such as news, shopping, etc.).
- Visiting a website containing a certain type of information (e.g. an address, a video, etc.).
- Clicking links or buttons on webpages.
- Interacting with Scout by pressing buttons.
- Scout retrieving certain types of media, such as videos, music, tweets, etc.
We plan to allow users to fine-tune what type of triggers they want Scout to respond to, and eventually, learn their preference automatically.
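A trigger system like the one described above can be sketched as a set of named predicates evaluated against each browsing event. The rules below are illustrative stand-ins, not Scout’s real trigger definitions:

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class PageEvent:
    url: str
    category: str          # e.g. "shopping", "news" (hypothetical labels)
    has_address: bool = False

# Each trigger pairs a name with a predicate over the current page event.
TRIGGERS: List[Tuple[str, Callable[[PageEvent], bool]]] = [
    ("specific_site", lambda e: "amazon." in e.url),
    ("site_category", lambda e: e.category == "shopping"),
    ("page_content",  lambda e: e.has_address),
]

def fired_triggers(event: PageEvent) -> List[str]:
    """Return the names of all triggers matching the event; the assistant
    stays silent when the list is empty."""
    return [name for name, pred in TRIGGERS if pred(event)]
```

User fine-tuning then amounts to enabling or disabling entries in the trigger table, and learned preferences could weight how readily each trigger opens a conversation.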
Can you discuss some of the difficulties behind ensuring that Scout is genuinely helpful when it decides to interact with a user without becoming annoying?
We take user engagement very seriously and try to measure whether interactions led to positive or negative outcomes. We try to maintain a good ratio for how often Scout tries to start a conversation and how often it’s used. However, it’s a tricky balance to get right and we’re always trying to improve.
Because of the intrusive nature of this product, getting the interface and UX right is critical. We’ve spent a lot of time trying completely different interfaces and user interaction methods. This work has led us to the current, chatbot style interface, which we find gives us the greatest flexibility in the help we can provide, coupled with user familiarity and minimal user effort for interactions.
Can you provide other scenarios of how Scout can assist end users?
Our focus at the moment is in market-testing specific applications for Scout. Shopping and event planning have already been mentioned, but we’re also looking at how Scout can help academics (with finding research papers, author details and reference networks) and even guitarists (finding guitar sheet music, playing music and videos alongside sheet music online and helping to tune a guitar). We’ve also spent some time exploring professional scenarios such as online recruitment, financial analysis and law.
Ultimately, Scout can potentially work on any website and help in any scenario, which is what makes the technology incredibly exciting, but also makes it difficult to get started.
Is there anything else that you would like to share about Scout?
If you’d like to see what it’s like if your browser could talk to you, you can read more on Scout’s blog.
Marcio Macedo, Co-Founder of Ava Robotics – Interview Series
Marcio Macedo is Co-Founder and VP of Product and Marketing at Ava Robotics, a recent spin-off of iRobot that focuses on autonomous navigating robots for enterprise, commercial and industrial environments.
Having previously worked at iRobot, what were some of the interesting projects that you worked on?
At iRobot we were fortunate to be designing and pioneering applications of telepresence, including an FDA-certified telemedicine robot for intensive care environments and the Ava telepresence product in partnership with Cisco.
Ava Robotics is a spinoff of iRobot, what was the inspiration behind launching a new company instead of keeping it in the iRobot family?
With iRobot’s strategic focus shifting to home products, Ava Robotics spun off to operate independently and better address the needs of our nascent markets. As an independent company we gain more flexibility in meeting our customers’ needs while enjoying the support of technology developed originally at iRobot.
The Ava Telepresence robot can be remotely controlled by users and features autonomous technology to have the robot simply move itself to a designated area. Could you walk us through the machine learning that is used to have the robot navigate through an environment without bumping into new objects?
When an Ava is installed at a location it learns its operating environment and creates a realistic topology map of the site. This map can be further annotated to force specific behaviors, such as speed zones, keep-out zones, etc.
Ava has built-in obstacle detection and obstacle avoidance (ODOA) capabilities, which leverage multiple sensors in the robot body so that Ava will not bump into people or objects in its path. Furthermore, if the most direct path to its destination is blocked, the Ava will search for and navigate through an alternative path if one is available.
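The rerouting behaviour described here is classic path planning on an occupancy map. A minimal sketch of the idea, using breadth-first search on a grid (Ava’s actual planner is proprietary and works on a far richer map):

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search on an occupancy grid (0 = free, 1 = blocked).
    Returns a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}          # visited set doubling as parent pointers
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []             # walk parent pointers back to start
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None                   # no route: wait or report blocked

# A wall blocks the direct route; the search finds a detour around it.
grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
route = shortest_path(grid, (0, 0), (0, 2))
```

When a sensor reports a newly blocked cell, replanning is just marking that cell occupied and searching again from the robot’s current position.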
What are the navigation sensors that are used, is it reliant on LiDAR or regular cameras?
Ava’s robotic navigation technologies use a variety of sensors (3-D cameras, LiDAR, and an IMU), which are combined for all actions, such as localization, planning, collision avoidance, cliff detection, etc. We operate in medium- and large-size spaces, so we think LiDAR is a very valuable part of a sensing package for real-world commercial spaces.
The telepresence robot looks like it would be extremely useful in the hospitality sector. Could you walk us through some of these potential use-cases?
Visits to Executive Briefing Centers provide access to senior-level executives and deliver value in the form of hands-on briefings, strategy reviews, product demonstrations and opportunities for relationship building. Customer Experience Centers offer organizations the opportunity to wow customers and show off their latest products and services. But with so many busy schedules, getting the right people to attend is not always easy.
For meeting planners, Ava provides the ability to “walk” the hotel and visit the meeting spaces, conference rooms and ballrooms that are available for their conference or event. In this application, the property’s sales and marketing team gain a unique tool to accelerate their sales cycles.
When invitees and guests can’t get to the conference or event, Ava allows them to attend and move around as if they were there. Whether it’s a business meeting, conference exhibit hall, or social event, Ava provides an immersive experience with freedom to move around.
What are some of the use-cases that are being targeted in the corporate sector?
Businesses benefit from Ava in many ways. The robot allows freedom of movement and access to meetings, corporate training, factory inspections, manufacturing sites, labs and customer experience settings.
Natural, face-to-face, ad-hoc conversations are critical to moving a business forward. Yet today’s globally distributed businesses have employees telecommuting from home or from across the world, who miss these vital interactions. With Ava, you unlock the ability to bring everyone back together as if they’re sitting together in the office and can walk up and interact naturally.
Use Case examples include:
- Agile Product Development: Agile product development teams come together for scheduled and unscheduled meetings, looking to achieve high levels of collaboration and communication. When remote workers are part of the team, existing collaboration tools are challenged to meet the need. With Ava, remote team members can actively participate in stand-up meetings, sprint planning and demos, and project reviews as if they were co-located with the team.
- Manufacturing: In manufacturing, remote visits by management, collaboration between experts at headquarters and staff at the plant, and remote tours by customers or suppliers are frequent – and necessary – events. Ava increases collaboration between those on the design team or in engineering and those building and delivering the final product on the plant floor. Also, imagine that the manufacturing plant is experiencing a production-line problem, but the person who knows how to fix it is thousands of miles away. In such a case, the technician needs to freely walk to different parts of the manufacturing floor to meet with someone or see something. Ava can help by delivering that critical physical presence right to the factory floor. Ava allows the remote person to immediately connect via the robot as if she were physically present, put eyes on the problem, and communicate with the local team on the floor. As a result, she can deliver immediate insight into the problem and quickly resolve the issue.
- Laboratories and Clean Rooms: Those who work in laboratories and clean rooms work hard to ensure they are kept sterile and clean. While necessary, this can be a time-consuming process for employees entering and leaving these spaces repeatedly during the day. Due to the risks of potential contamination, companies often limit tours by customers and other visitors. Ava brings people right into a laboratory or a clean room without compromising the space. With Ava, remote visitors can easily move around as if they were there in person, observing the work being done and speaking with employees.
Ava Robotics recently partnered with Qatar Airways to introduce smart airport technologies at QITCOM 2019. Could you share with us some details regarding this event and how those in attendance reacted?
We have been fortunate to work with Hamad International Airport in Qatar and Qatar Airways, via our strategic partner Cisco, building applications for robots in airports for a variety of use cases. Showing our work at QITCOM 2019 was a good opportunity to expose the IT community to the applications that are now possible across different verticals and industries.
Is there anything else that you would like to share about Ava Robotics?
In these times of challenges to global travel, we have seen increased demand for solutions like telepresence robotics. Customers are just beginning to realize the potential of applications that truly empower remote workers to collaborate as if they were physically present at a distant location.