In states like California, the wildfire season has become longer and more intense, driven largely by climate change. In response to this growing threat, according to CNN, various startups have created AI tools intended to assist in the detection of wildfires.
It may seem obvious, but early detection is critical for wildfires: the earlier a blaze is detected, the faster it can be contained and the less damage it does. Thankfully, the AI tools designed by companies like Descartes Labs, based in Santa Fe, seem to be more effective at detecting wildfires than either firefighters or civilians.
The fire-detecting tool from Descartes Labs samples images from government weather satellites every two minutes and compares consecutive images for differences. Any change in the thermal signal over a region could potentially indicate the presence of a wildfire.
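As a rough illustration of this kind of frame-differencing approach, here is a minimal sketch: two thermal frames sampled minutes apart are subtracted, and pixels that warmed sharply are flagged for review. The threshold value, grid size, and function name are hypothetical, not Descartes' actual implementation.

```python
import numpy as np

def detect_thermal_anomalies(prev_frame, curr_frame, threshold=5.0):
    """Flag pixels whose brightness temperature jumped between two
    satellite frames sampled a few minutes apart.

    prev_frame, curr_frame: 2-D arrays of brightness temperature (K).
    threshold: minimum temperature increase (K) to treat as anomalous
               (an illustrative value, not a calibrated one).
    Returns an array of (row, col) pixel coordinates to investigate.
    """
    diff = curr_frame.astype(float) - prev_frame.astype(float)
    hot = diff > threshold          # only warming pixels matter
    return np.argwhere(hot)

# Example: a 4x4 region where one pixel suddenly heats up.
prev = np.full((4, 4), 290.0)       # ~17 °C background
curr = prev.copy()
curr[2, 1] += 12.0                  # candidate fire pixel
print(detect_thermal_anomalies(prev, curr))   # [[2 1]]
```

A production system would of course also have to filter out benign sources of thermal change, such as sun glint or cloud edges, which is where the risk of false alarms discussed below comes in.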
Current methods of detecting wildfires rely primarily on spotting fires from planes or lookout towers, but a system that makes use of AI and satellites can detect wildfires much more quickly. The New Mexico State Forestry Bureau has stated that the AI tool has helped the state locate wildfires much faster than before. The tool also provides first responders with descriptions that help narrow down a fire's location, which can be difficult when there is heavy smoke or when a fire is behind a mountain range at night.
Descartes isn’t the only company trying to use AI to detect forest fires. Northrop Grumman recently started a contract with the state of California to design wildfire analysis tools, and the startup Technosylva has also invested in the creation of wildfire prediction methods.
It isn’t yet clear whether the technologies designed by these companies will increase the risk of false alarms as a result of increased sensitivity to possible fires. What is clear is that the AI tools designed by Descartes can genuinely detect forest fires much earlier than some of the best currently existing detection methods. For example, Descartes states that its detection systems alerted the Los Angeles Times to the coordinates of the Kincade fire very shortly after the fire started, and that its quickest detection time so far is nine minutes after ignition. As reported by CNN, according to Ernesto Alvarado, a wildfire expert and researcher at the University of Washington, any system that can detect a fire within 30 minutes of ignition is pretty impressive.
Descartes is beginning to explore other ways of using AI and data to help detect and track fires. For instance, the company is in the process of designing digital elevation models that can describe steep slopes that could hinder firefighting efforts. To pinpoint fires, Descartes uses a variety of algorithms that each vote on the position of a fire on a map and then come to a consensus.
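A simple way to picture this voting scheme: each detection algorithm nominates a grid cell, and a cell is only reported if enough algorithms agree on it, which also suppresses lone false positives. This is a minimal sketch under that assumption; the function name, grid coordinates, and vote threshold are hypothetical, not Descartes' actual method.

```python
from collections import Counter

def consensus_position(votes, min_votes=2):
    """Combine fire-position estimates from several detection algorithms.

    votes: list of (row, col) grid cells, one vote per algorithm.
    Returns the cell with the most votes, or None if no cell reaches
    min_votes agreement (treated as a likely false alarm).
    """
    counts = Counter(votes)
    cell, n = counts.most_common(1)[0]
    return cell if n >= min_votes else None

# Three of four hypothetical detectors agree on grid cell (14, 7).
votes = [(14, 7), (14, 7), (13, 7), (14, 7)]
print(consensus_position(votes))   # (14, 7)
```

If every detector disagrees, no cell reaches the threshold and the sketch returns None rather than raising an alert.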
While the tools developed by Descartes and others may enable quicker detection of fires, getting fire response teams into position is a challenge all its own, and unless that problem is solved, fire detection algorithms may not be as effective as theoretically possible. For example, even after a potential fire is flagged by Descartes’ tools, the alert has to be forwarded to the correct authorities, such as a field office that can verify the existence of the fire. After that, the notification must go out to fire departments in the area, which must assess the best way to respond. These logistical challenges may limit just how effective fire-detection systems can be, but even so, when it comes to detecting fires, earlier is always better.
How the U.S.-China Tech War is Changing CES 2020
As CES 2020 continues to unfold in Las Vegas, so does the tech war between the United States and China. The ongoing conflict has led some Chinese companies to skip the event.
Major Chinese companies such as Alibaba, Tencent, and JD.com have skipped the world’s largest tech event. At the same time, China’s strength in major technologies such as artificial intelligence and 5G will still be on display.
CES 2020 has a total of 4,500 companies taking part, and around 1,000 of them are from China. That is a smaller share than in previous years, when Chinese companies made up roughly one-third of exhibitors in 2018 and one-fourth in 2019.
This comes as the U.S.-China trade war continues to affect many aspects of the tech industry. However, the two nations are expected to sign a “Phase One” trade agreement on Jan. 15.
China’s trade delegation is expected to travel to Washington for a total of four days, beginning on January 14. Advocates are hoping that an agreement can bring an end to the trade conflict between the globe’s two biggest economies.
The delegation will be led by Vice-Premier Liu He. U.S. President Donald Trump has said that the agreement is a “major win” for the country and himself, while the Chinese side has been quieter. According to Trump, he will visit Beijing at a later date.
Within the CES 2020 expo, there is a Chinese consulate and commerce ministry-backed station offering free legal help to Chinese attendees, due to current issues revolving around intellectual property rights. Those attendees have been told to carry documents certifying those rights in order to avoid trouble. This comes as IP theft is one of the major issues within the trade negotiations between the two nations.
Since the shift in U.S. policy against Chinese tech companies in 2019, China has been seeking to establish technological independence from the U.S. According to a January 6 Eurasia Group report on top risks for 2020, this could cause serious issues within the international community.
“The decision by China and the United States to decouple in the technology sphere is the single most impactful development for globalization since the collapse of the Soviet Union,” the report said.
One of the reasons for the decrease in Chinese participation at CES 2020 is that it is harder to obtain U.S. visas, due to the ongoing conflict.
“Our company decided not to attend this year because we knew it would take forever to get our visa, if they don’t get rejected after all,” according to a Chinese A.I. chip startup founder.
Only OnePlus and Huawei, two of the top domestic smartphone makers in China, are taking part in CES. Xiaomi, Oppo, and Vivo have skipped the event.
One of the major areas of interest at CES is artificial intelligence (AI), and China is the global leader. The nation’s top AI startups, including Megvii, SenseTime, and Yitu, are absent. Those companies appear on a U.S. government trade-restriction “entity list,” where they were placed for their alleged role in the ongoing persecution of ethnic minorities in Xinjiang province.
Another two companies that were put on the list are the voice recognition company iFlyTek and surveillance company Hikvision. They are not present at the event this year.
Even with the ongoing issues and several Chinese companies being absent from the event, there are many that are attending. Some Chinese participation at CES 2020 comes from A.I. firms ForwardX Robotics and RaSpect Intelligence Inspection Limited, Huawei, Baidu, Lenovo, Haier, Hisense, DJI, and ZTE USA.
CES 2020 Kicks Off, Samsung Announces “Artificial Human”
The annual Consumer Electronics Show (CES) in Las Vegas has kicked off at what is arguably the most advanced technological point in history. CES is the world’s largest tech show, has been running for more than 50 years, and will draw over 170,000 attendees. The show features some of the biggest companies and their recent technological innovations and devices, including artificial intelligence (AI).
One of the most anticipated announcements came from Samsung’s Neon, a venture from the Samsung Technology and Advanced Research Labs (STAR). They introduced the Neon “artificial human” during the show. According to the company, the technology is “a computationally created virtual being that looks and behaves like a real human, with the ability to show emotions and intelligence.”
Neon develops avatars that resemble and act like real humans. However, they are not smart assistants, androids, surrogates or copies of real humans, according to the company’s FAQ.
“Neons are not AI assistants,” the company said. “Neons are more like us, an independent but virtual living being, who can show emotions and learn from experiences. Unlike AI assistants, Neons do not know it all, and they are not an interface to the internet to ask for weather updates or to play your favorite music.”
Neons are capable of holding conversations and mimicking the behavior of real humans. The company claims that Neons can also form memories and learn new skills, but they don’t have a physical body as of yet.
The technology can assist with “goal-oriented tasks or can be personalized to assist in tasks that require human touch.”
For example, the avatars can borrow traits from and take on professional roles such as teachers, financial advisers, health care providers, concierges, actors, spokespeople, and TV anchors. Samsung hopes that companies and individuals will be able to license Neons for use in these roles.
“There are millions of species on our planet, and we hope to add one more,” Pranav Mistry, Neon CEO and head of STAR Labs, said in a press release. “Neons will be our friends, collaborators and companions, continually learning, evolving and forming memories from their interactions.”
According to another spokesperson for STAR, the avatars will “help enhance interactions people have with certain jobs, such as friendly customer service; a worker that will be able to remember your name if you do yoga a certain amount of times during the week.”
Each Neon will be able to have a different look and attitude.
With the development of this technology, one might wonder what it means for the economy. Will Neons be able to replace human jobs? According to the company, that is not the goal. Even with the company’s denial, many will still worry.
“We are not looking to replace human jobs, but rather enhance the customer service interactions, have customers feel as if they have a friend with Neons,” a spokesperson told the news outlet CNBC.
Some of the company’s claims, including those surrounding memory and emotion, are seen by some as extreme. If true, they would mark a huge moment for computer science. The more likely case, however, is that Neons simulate emotions and store data.
This technology might never leave CES, but the company said they have plans to release it.
“We plan to make Neon available to business partners as well to consumers all around the world,” according to the FAQ sheet. “It is too early for us to comment on the business model or pricing for Neon, but we plan to beta launch Neon in the real world with selected partners later this year.”
Deniz Kalaslioglu, Co-Founder & CTO of Soar Robotics – Interview Series
Deniz Kalaslioglu is the Co-Founder & CTO of Soar Robotics, a cloud-connected Robotic Intelligence platform for drones.
You have over 7 years of experience in operating AI-backed autonomous drones. Could you share with us some of the highlights throughout your career?
Back in 2012, drones were mostly perceived as military tools by the majority. On the other hand, the improvements in mobile processors, sensors and battery technology had already started creating opportunities for consumer drones to become mainstream. A handful of companies were trying to make this happen, and it became obvious to me that if correct research and development steps were taken, these toys could soon become irreplaceable tools that help many industries thrive.
I participated exclusively in R&D teams throughout my career, in automotive and RF design. I founded a drone service provider startup in 2013, where I had the chance to observe many of the shortcomings of human-operated drones, as well as their potential benefits for industries. I’ve led two research efforts in a timespan of 1.5 years, where we addressed the problem of autonomous outdoor and indoor flight.
Precision landing and autonomous charging were other issues that I tackled later on. Solving these issues meant fully-autonomous operation with minimal human intervention throughout the operation cycle. At the time, solving the problem of fully-autonomous operation was huge, and it enabled us to create intelligent systems that don’t need any human operator to execute flights, resulting in safer, more cost-effective and more efficient operations. The “AI” part came into play later, in 2015, when deep learning algorithms could be effectively used to solve problems that were previously handled with classical computer vision and/or learning methods. We leveraged robotics to enable fully-autonomous flights and deep learning to transform raw data into actionable intelligence.
What inspired you to launch Soar Robotics?
Drones lack sufficient autonomy and intelligence features to become the next revolutionary tools for humans. They become inefficient and primitive tools in the hands of a human operator, both in terms of flight and post-operation data handling. Besides, these robots have very little access to real-time and long-term robotic intelligence that they can consume to become smarter.
As a result of my experience in this field, I have come to the understanding that the current commercial robotics paradigm is inefficient, which is limiting the growth of many industries. I co-founded Soar Robotics to tackle some very difficult engineering challenges to make intelligent aerial operations a reality, which in turn will provide high-quality and cost-efficient solutions for many industries.
Soar Robotics provides a fully autonomous cloud connected robotics intelligence platform for drones. What are the types of applications that are best served by these drones?
Our cloud-connected robotics intelligence platform is designed as a modular system that can serve almost any application by utilizing the specific functionalities implemented within the cloud. Some industries such as security, solar energy, construction, and agriculture are currently in immediate need of this technology.
- Surveillance of a perimeter for security
- Inspection and analysis of thermal and visible faults in solar energy
- Progress tracking and management in construction and agriculture

These are the main applications with the highest beneficial impact that we focus on.
For a farmer who wishes to use this technology, what are some of the use cases that will benefit them versus traditional human-operated drones?
As with all our applications, we also provide end-to-end service for precision agriculture. Currently, the drone workflow in almost any industry is as follows:
- the operator carries the drone and its accessories to the field,
- the operator creates a flight plan,
- the operator turns on the drone, uploads the flight plan for the specific task in hand,
- the drone arms, executes the planned mission, returns to its takeoff coordinates, and lands,
- the operator turns off the drone,
- the operator shares the data with the client (or the related department if hired in-house),
- the data is processed accurately to become actionable insights for the specific industry.
It is crucial to point out that this workflow has proven to be very inefficient, especially in sectors such as solar energy, agriculture and construction, where collecting periodic and objective aerial data over vast lands is essential. A farmer who uses our technology is able to get measurable, actionable and accurate insights on:
- plant health and vigor,
- nitrogen intake of the soil,
- optimization and effectiveness of irrigation methods,
- early detection of disease and pests.
All of this without having to go through the hassle mentioned above, or even clicking a button each time. I firmly believe that enabling drones with autonomous features and cloud intelligence will provide considerable savings in terms of time, labor and money.
How will the drones be used for solar farm operators?
We handle almost everything that needs counting and measuring in all stages of a solar project. In the pre-construction and planning period, we generate topographic models, hydrologic analyses and obstacle analyses with high geographic precision and accuracy. During the construction period, we generate daily maps and videos of the site. After processing the collected media, we measure the installation progress of the piling structures, mounting racks and photovoltaic panels; take position, area and volume measurements of trenches and inverter foundations; and count the construction machinery, vehicles and personnel on the site.
When construction is over and the solar site is fully operational, Soar’s autonomous system continues its daily flights, but this time generating thermal maps and videos along with visible-spectrum maps and videos. From thermal data, Soar’s algorithms detect cell-, multi-cell-, diode-, string-, combiner- and inverter-level defects. From visible-spectrum data, Soar’s algorithms detect shattering, soiling, shadowing, vegetation and missing panels. Soar’s software then generates a detailed report of the detected faults, marks them on the as-built and RGB maps of the site down to the cell level, and lists all detected errors in a table indicating string, row and module numbers with geolocations. It also calculates the client’s total loss due to the inefficiencies caused by these faults and prioritizes each fault by importance and urgency.
In July 2019, Soar Robotics joined NVIDIA’s Inception Program, an exclusive program for AI startups. How has this experience influenced you personally, and how has it shaped how Soar Robotics is managed?
Over the months, this has proven to be an extremely beneficial program for us. We had already been using NVIDIA products both for onboard computation and on the cloud side. The program has a lot of perks that have streamlined our research, development and test processes.
Soar Robotics will be generating recurring revenue with a Robotics-as-a-Service (RaaS) model. What is this model exactly, and how does it differ from SaaS?
It possesses many similarities with SaaS in terms of its application and its effects on our business model. The RaaS model is especially critical since hardware is involved; most of our clients don’t want to own the hardware and are only interested in the results. Cloud software and the new generations of robotics hardware blend together more and more each day.
This results in some fundamental changes in industrial robotics, which used to be about stationary robots performing repetitive tasks that didn’t need much intelligence. Operating under this mindset, we provide our clients with robot connectivity and cloud robotics services to augment what their hardware would normally be capable of achieving.
Therefore, Robotics-as-a-Service encapsulates all the hardware and software tools that we utilize to create domain-specific robots for our clients’ purposes, in the form of drones, communications hardware and cloud intelligence.
What are your predictions for drone technology in the coming decade?
Drones have clearly proven their value for enterprises, and usage will only continue to increase. We have witnessed many businesses trying to integrate drones into their workflows, with only a few of them achieving great ROIs and most of them failing due to the inefficient nature of current commercial drone applications. Since the drone industry hype began to fade, we have seen a rapid consolidation in the market, especially in the last couple of years. I believe that this was a necessary step for the industry, which opened the path to real productivity and better opportunities for products and services that are actually beneficial for enterprises. The addressable market that commercial drones will create by 2025 is expected to exceed $100B, which in my opinion is a fairly modest estimate.
- We will see an exponential rise in “Beyond Visual Line of Sight” flights, which will be the enabling factor for many use cases of commercial UAVs.
- The advancements in battery technology such as hydrogen fuel cells will extend the flight times by at least an order of magnitude, which will also be a driving factor for many novel use cases.
- Drone-in-a-box systems are still perceived as somewhat experimental, but we will definitely see this technology become ubiquitous in the next decade.
- Companies of various sizes have been conducting tests in the urban air mobility market, which can be broken down into roughly three segments: last-mile delivery, aerial public transport and aerial personal transport. The commercialization of these segments will definitely happen in the coming decade.
Is there anything else that you would like to share about Soar Robotics?
We believe that the feasibility and commercialization of autonomous aerial operations mainly depend on solving the problem of aerial vehicle connectivity. For drones to be able to operate Beyond Visual Line of Sight (BVLOS) they need seamless coverage, real-time high throughput data transmission, command and control, identification, and regulation. Although there have been some successful attempts to leverage current mobile networks as a communications method, these networks have many shortcomings and are far from becoming the go-to solution for aerial vehicles.
We have been developing a connectivity hardware and software stack that has the capability of forming ad hoc drone networks. We expect that these networking capabilities will enable seamless, safe and intelligent operations for any type of autonomous aerial vehicle. We are rolling out the alpha and beta releases of the hardware in the coming months to test our products with larger user bases under various usage conditions and to start forming these ad hoc networks to serve many industries.