As the virus causing COVID-19 spreads across the planet, people are questioning if governments and officials had the right plans prepared to handle a threat like this. Populous areas such as New York City are being hit the hardest, as the population density makes transference seemingly unavoidable.
While we still have a long way to go to eliminate the threat, social distancing appears to be working by flattening the curve. Other countries such as China successfully implemented this strategy months earlier and have touted smart technology as a way to easily monitor city ordinances and get residents the supplies and goods they need while they keep their distance.
Smart cities allow us to service a large number of people in a very short time by using data sensors to gather and process information. City leaders and scientists can then use this data to track diseases, deliver autonomously, survey infected citizens, and make predictions about human behavior.
By the year 2050, it is estimated that more than two-thirds of the planet’s population will live in cities. This will make urban planning even more crucial for future generations as they prepare to protect themselves from future pandemics.
Read on to discover how smart cities can defend against pandemics.
Disease-tracking software allows city officials and scientists to track the spread of viruses in real-time through artificial intelligence. So how does it work?
Artificial intelligence can be used to gather large sets of public data and information from thousands of databases and other sources. This data is used to track the spread of viruses and significantly improves response time.
With information like this, countries can not only monitor themselves but also other countries around the world. For example, a Canadian company that specializes in disease tracking was the first to speak up about the spike in pneumonia cases specific to Wuhan, China. This was over a week before the World Health Organization (WHO) released anything to the public.
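A highly simplified sketch of how such spike detection over incoming case reports might work (the data, window size, and threshold ratio are invented purely for illustration and are not any company's actual method):

```python
# Toy spike detector over daily case counts; all numbers are illustrative.

def flag_spikes(daily_counts, window=7, ratio=3.0):
    """Return indices of days whose count exceeds `ratio` times
    the average of the preceding `window` days."""
    spikes = []
    for i in range(window, len(daily_counts)):
        baseline = sum(daily_counts[i - window:i]) / window
        if baseline > 0 and daily_counts[i] > ratio * baseline:
            spikes.append(i)
    return spikes

counts = [2, 3, 2, 4, 3, 2, 3, 30]  # sudden jump on the last day
print(flag_spikes(counts))  # -> [7]
```

Real systems ingest far messier signals (news reports, airline data, clinical records), but the core idea of comparing current activity against a recent baseline is the same.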
Due to the danger of the virus and its ability to quickly spread, many countries ordered their citizens to practice social distancing. This means staying at least six feet apart, not gathering in groups and avoiding any crowded places. Cities in China, Italy and other countries have issued shelter in place orders, which require citizens to stay in their homes.
Many delivery drivers, food chains and grocery stores have been working overtime in dangerous conditions to fulfill orders and keep up with demand. This will decrease significantly in smart cities as robot delivery services become more available.
Smart cities can prioritize the distribution of medicine, food, and other goods through autonomous delivery. Cities that have already adopted robot and autonomous delivery services like drones and driverless cars and trucks will have a much easier time moving food and supplies during a pandemic.
Geolocation data can predict human behavior almost instantly. Your phone is likely tracking your movements right now, so why not use this data-tracking feature to help slow the spread of coronavirus and future pandemics?
Data like popular dining and shopping times can give city officials insight into how they should create their plans and structure ordinances for future pandemics.
After many questionable incidents, people are right to have privacy concerns about drones. However, they have been a great help to cities like Wuhan, where drones replaced the need to have police officers on the ground to enforce shelter in place orders. Even Italy used drone supervision to alert citizens on the streets to go home and practice social distancing.
Granted, their reliability is still up for debate, but thermal cameras are designed to measure individuals' body temperatures as they pass by, checking for anyone running a fever. Some of these thermal cameras are also outfitted with facial recognition software.
China has thermal cameras installed on street corners so that if a citizen has a fever, city leaders or law enforcement can be dialed in and respond quickly.
One of the first and most common symptoms of COVID-19 is a fever. Thermal cameras are smart city technology that can be implemented even in cities today to help officials monitor the spread of disease.
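As an illustration of the screening logic described above (the threshold, function name, and data shape are hypothetical assumptions, not drawn from any specific deployment):

```python
# Hypothetical sketch of fever screening from thermal-camera readings.
# The 38.0 C threshold and all names are illustrative assumptions.

FEVER_THRESHOLD_C = 38.0

def flag_fevers(readings):
    """Return IDs of passers-by whose skin temperature suggests a fever.

    `readings` is a list of (person_id, temperature_celsius) tuples,
    as a thermal camera with person tracking might report them.
    """
    return [pid for pid, temp_c in readings if temp_c >= FEVER_THRESHOLD_C]

readings = [("p1", 36.7), ("p2", 38.4), ("p3", 37.1)]
print(flag_fevers(readings))  # -> ['p2']
```

In practice, skin temperature varies with ambient conditions, so deployed systems calibrate per camera rather than using a single fixed cutoff.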
During a pandemic, one of people's biggest worries is access to power. Finding ways to power cities and provide energy in a time of crisis will be key for smart cities during future pandemics.
The future of energy is going to go in a completely different direction than it has in the past. Rather than have a small number of large plants distribute energy, future plants will be scaled smaller and in higher numbers. Energy will run at the local level, allowing every person to generate energy.
Energy is becoming something that we can control through the internet. Appliances will soon be designed with interconnectivity. This means the appliance itself uses digital systems so that we have full control over how energy is stored and used in each appliance. Buildings will also become more power efficient.
The way we communicate and receive information has transformed because of our evolving digital world. For example, keeping residents up to date during a time of crisis has completely changed over time. Television was a huge change, as people could get news within hours. Now you can get updates and news just seconds after they occur by using social media like Twitter and Facebook.
This is something that wasn’t possible just 15 years ago. When people can get information that quickly, and in the palm of their hand, actions like a stay-at-home order can be issued very fast. Many cities have had wi-fi in buildings and restaurants for almost a decade, but be ready to find it almost anywhere you go in smart cities.
For older people and those who can’t purchase a smart device, many cities have smart kiosks scattered throughout. These can be updated as quickly as a smartphone so that information reaches the people who need it.
The recent COVID-19 pandemic has shocked citizens and leaders all over the world. One of the most challenging obstacles for scientists and researchers has been the lack of information about the virus. New technology, however, can track data in real-time and could be the deciding factor for the outcome of this pandemic. Nations that are divided have come to work together to improve pandemic responses, and smart cities will be a big part of that improvement.
Netanel Eliav, CEO of Sightbit – Interview Series
Netanel Eliav is the CEO of Sightbit, a global development project that harnesses advances in AI and image recognition technology to prevent drowning and save lives.
How did the concept for Sightbit originate?
Friends Netanel Eliav and Adam Bismut were interested in using tech to improve the world. On a visit to the beach, their mission became clear. Adam noticed the lack of tech support for lifeguards, who monitored hard-to-see swimmers with binoculars.
The system uses standard cameras that cover a defined area and transmits that information in real-time to lifeguards. What type of range are the cameras capable of? Also, how much of the accuracy becomes reduced with greater range?
Sightbit’s innovation is in the software. We work with various off-the-shelf cameras of different ranges, customizing camera setup to meet the needs of each customer and to ensure that the desired area is protected.
At Israel’s Palmahim Beach, where we are conducting a pilot, we built a dedicated cement platform that holds three cameras. Each camera covers 300 meters out to sea in normal conditions, the range required at Palmahim Beach.
A monitor displays a panoramic view of the water and beach, like a security camera display. A dashboard is superimposed over the video feed. Sightbit alerts appear as flashing boxes around individuals and hazards. Multiple views from different camera vantage points are available on a single screen. When a lifeguard clicks on an alert, the program zooms in, allowing the lifeguard to see the swimmer much more clearly than is possible with the naked eye. Four additional cameras will be installed shortly.
Can you discuss some of the computer vision challenges behind being able to differentiate between a human swimming and a human struggling to stay afloat?
We can detect some of the signs of distress based on the following: the location of a person who might be caught in a rip current, located far from shore or in a dangerous area, and movement or lack of movement. Our system can distinguish swimmers bobbing up and down in the water, floating face down, or waving for help as signs of distress.
Sightbit has developed software that incorporates AI based on convolutional neural networks, image detection, and other proprietary algorithms to detect swimmers in distress and avoid false positives.
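The distress signals described above can be caricatured as a rule-based check (all thresholds and field names here are my own illustrative assumptions; Sightbit's actual system relies on trained neural networks, not fixed rules):

```python
# Illustrative rule-based distress check; thresholds are invented for the sketch.

def looks_distressed(track, danger_zones, max_safe_distance_m=150.0,
                     min_movement_m=2.0):
    """Flag a swimmer track as possibly in distress.

    `track` is a dict with:
      - "distance_from_shore_m": current distance from shore
      - "zone": label of the area the swimmer is in
      - "displacement_m": movement over the last observation window
    """
    too_far = track["distance_from_shore_m"] > max_safe_distance_m
    in_danger_zone = track["zone"] in danger_zones      # e.g. a rip current
    not_moving = track["displacement_m"] < min_movement_m  # floating face down
    return too_far or in_danger_zone or not_moving

track = {"distance_from_shore_m": 40.0, "zone": "rip_current", "displacement_m": 5.0}
print(looks_distressed(track, danger_zones={"rip_current"}))  # -> True
```

A learned model replaces these hand-set thresholds with patterns extracted from labeled video, which is what lets it distinguish normal bobbing from waving for help.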
What are the risk factors for false positives such as misidentifying someone as drowning, or false negatives such as misidentifying a potential drowning?
The drowning detection feature sometimes generates a low-level warning when a swimmer has remained underwater for long stretches of time.
Like lifeguards, Sightbit primarily detects swimmers in distress. A drowning alert is an alert that has come too late. We focus on dangerous situations that can lead to drowning, allowing for de-escalation before they get out of control. For example, we warn when swimmers are caught in rip currents so that lifeguards or other rescue personnel can reach the individual in time.
Our real-time alerts include:
- Swimmers in distress
- Rip currents
- Children alone in or by the water
- Water vessels entering the swim area
- Swimmers entering dangerous areas. This may be choppy water, deep water, or hazardous areas alongside breakwater structures or rocks.
- Drowning incidents – soon to be deployed at Palmahim
- And other situations
What type of training is needed to use the Sightbit system?
No special training is needed. Sightbit’s user interface takes five minutes to learn. We designed the system with lifeguards to ensure that it is easy for them to use and master.
Can you discuss what happens in the backend once an alert is triggered for a potential drowning?
The beach cameras feed into a GPU for video analysis and a CPU for analytics. When the CPU detects a threat, it generates an alert. This alert is customized to customer needs. At Palmahim, we sound alarms and generate visual alerts on the screen. Sightbit can also be configured to call emergency rescue.
Could you discuss some of your current pilot programs and the types of results that have been achieved?
Sightbit is conducting a pilot at Palmahim Beach in partnership with the Israel Nature and Parks Authority. The system is installed at the Palmahim lifeguard tower and is in use by lifeguards (see above for details about camera placement, warnings, and the Sightbit monitor). The pilot went live at the end of May.
At Palmahim, three lifeguards, all stationed at one central tower, guard the one-kilometer beach. Sightbit provides instantaneous alerts when swimmers are in danger and camera views of swimmers far from the tower.
Prior to the pilot partnership at Palmahim Beach, we conducted proof-of-concept testing at beaches throughout Israel at the invitation of local authorities.
How have government officials reacted so far when introduced to the technology?
Extreme enthusiasm! Cities and major government-run beaches as well as private beaches in Israel, the United States, the Balkans, and Scandinavia have invited Sightbit to conduct pilots. We have been granted permission by all relevant government bodies.
Is there anything else that you would like to share about Sightbit?
- We are currently raising funds as part of a seed round. Investors around the world have reached out to us, and we have already received funding offers. We previously received pre-seed funding from the Cactus Capital VC fund in Israel.
- Long-Term Potential: People are not optimized for tracking dozens, and certainly not hundreds, of swimmers from a watchtower. Looking long term, Sightbit can enable agencies to guard more shoreline at lower costs by using Sightbit systems for front-line monitoring. Lifeguards can be assigned to headquarters or patrol duty, allowing teams to respond faster to incidents anywhere along the beach. This is lifesaving. Currently, even during peak summer months, lifeguards monitor less than half of the shoreline at designated public swimming beaches.
- Sightbit can safeguard sites 24/7, all year round. Where there is no lifeguard service, Sightbit alerts emergency dispatch or local rescue services when a swimmer is in danger (for example, a swimmer swept out to sea in a rip current). Sightbit software can also pinpoint and track a swimmer’s location and deliver rescue tubes via small drones.
- Sightbit can bring monitoring to many different aquatic sites that do not currently employ lifeguards. With Sightbit, aquatic work sites, marinas, reservoirs, and other sites can benefit from water safety alerts.
Sightbit also provides risk analytics and management insights, which allow customers to anticipate hazards in advance and improve operations. Customers can track water and weather conditions, crowding, and more.
Thank you for the interview regarding this important project. Readers who wish to learn more should visit Sightbit.
Mike Lahiff, CEO at ZeroEyes – Interview Series
Mike is the CEO of ZeroEyes, a security company powered by AI. Led by former Navy SEALs, the company offers software that monitors camera systems and detects weapons. The system notifies authorities of possible active shooters and reduces response time, with the goal of keeping schools and other public spaces safe.
Can you explain what ZeroEyes is, and how implementing this system can save lives?
ZeroEyes is an AI weapons detection platform that helps identify threats at first sight. Founded by a team of Navy SEALs and military veterans dedicated to ending mass shootings, our platform integrates with an organization’s existing IP security cameras to serve as one component of its overall security process, and provides security personnel and first responders with the real-time information needed to keep people safe. ZeroEyes focuses only on the essential information needed to stop a threat, closing the critical seconds between when a gun is first spotted and when it is fired in order to save lives.
Can you discuss the process for integrating ZeroEyes into an existing video camera infrastructure?
ZeroEyes’ AI weapons detection platform is one component of an organization’s multi-tiered security approach. Our software integrates with an organization’s existing camera systems and video analytics to detect weapons in real time. If ZeroEyes detects a gun, an alert with the image of the weapon goes to the ZeroEyes monitoring team. Once positively identified, an alert is sent to a local emergency dispatch (such as a 911 call center), onsite security staff, police and school administrators (via mobile and desktop). This process takes three to five seconds and bypasses the traditional dispatch process.
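The detect, human-verify, and dispatch flow described above could be sketched roughly as follows (function and recipient names are hypothetical placeholders for illustration, not ZeroEyes' actual API):

```python
# Hypothetical sketch of a detect -> human-verify -> dispatch alert flow.

def route_weapon_alert(detection, verify, recipients):
    """Send a verified weapon detection to all configured recipients.

    `detection` carries the camera frame and metadata; `verify` is a
    callable standing in for the human monitoring team's review; and
    `recipients` maps a name (e.g. "911_dispatch") to a notify function.
    """
    if not verify(detection):  # monitoring team screens out false positives
        return []
    notified = []
    for name, notify in recipients.items():
        notify(detection)      # e.g. push to dispatch, security, admins
        notified.append(name)
    return notified

detection = {"camera": "cam_3", "weapon": "pistol"}
recipients = {
    "911_dispatch": lambda d: print("dispatch notified:", d["weapon"]),
    "onsite_security": lambda d: print("security notified:", d["weapon"]),
}
route_weapon_alert(detection, verify=lambda d: True, recipients=recipients)
```

The key design point is the human verification step between detection and dispatch: alerts fan out to every recipient at once only after a person confirms the threat, which is how the traditional serial dispatch process gets bypassed.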
ZeroEyes’ software uses AI and computer vision, integrating with existing 3D satellite maps of a building so that as a visible weapon passes a camera, the map lights up. This allows first responders to know the precise location of a threat. By seeing exactly where a shooter(s) is in real time, security personnel can lock doors, move people to safety and enact other aspects of their security process, while first responders can go towards the shooter much faster with the knowledge of how many and what kinds of weapons the person has.
How much of a weapon needs to be visible for the system to correctly identify it as a weapon?
This can be dependent on multiple variables such as type of camera, height of camera, angle, lens, field of view, lighting, distance, and type of weapon. If a human eye can detect the gun on a camera feed, our system will detect the same gun.
How much of an issue are false positives and how is this minimized?
We are always looking to minimize false positives and are constantly improving our deep learning models based on data collected. In customer installations, we incorporate time upfront to collect data and custom tune the parameters for each camera, which allows us to more effectively filter out false positives. If a false positive happens, an alert gets sent to our team and we vet the threat in real-time. We then respond accordingly and let the customer know that it isn’t a serious threat.
Your initial focus was in installing this in schools, what are some other markets that ZeroEyes is targeting?
We sell to a broad list of decision makers, including school resource officers, school district administration, corporate security directors, chief security officers and chief risk officers. Our technology can be used in education (including K-12 schools, universities and training facilities), commercial settings (including office buildings, malls and stadiums), and military/government installations (force protection); it is already in use at Rancocas Valley Regional High School (NJ) and South Pittsburg High School (TN). We partner closely with both our customers and local first responders to ensure that they have the additional layer of security to identify and stop threats.
Can you discuss the ZeroEyes app and how threat notifications work?
If a true weapon is detected, an alert is sent to ZeroEyes’ security monitoring team. Once positively identified, it is then sent to a local emergency dispatch (such as a 911 call center), onsite security staff, police and school administrators. This process takes three to five seconds and bypasses the traditional dispatch process. We include details such as the location of the camera, bounding box identifying the detected object and detection label.
The image lets first responders know the type of weapon (i.e. pistol or machine gun). This then dictates response tactics and amount of damage a shooter can cause. It also lets us know the total number of shooters and weapons so those responding to the alert are properly informed of the situation.
What type of relationship does ZeroEyes have with different law enforcement agencies and how are they set-up to receive dispatch alerts?
ZeroEyes works with local law enforcement to help decrease critical response time to serious threats to public safety like active shooter situations. If a threat is detected and verified, the alert is sent to a local emergency dispatch.
ZeroEyes provides real-time information to help first responders understand the situation at hand, allows security to quickly enact security protocols, and dramatically reduces response time which can mean the difference in saving lives.
Facial recognition capabilities are built into the system, but facial redaction is used to protect patrons’ privacy. Can you discuss these capabilities? For example, is ZeroEyes able to identify specific individuals such as teachers and principals in a school?
We do not use facial recognition; we solely focus on weapons detection. Our technology sits on top of existing IP security cameras, which could also have facial recognition software installed by the organization. We pursued weapons detection because we want to reduce mass shootings and active shooter threats, and security personnel should know where and when weapons are present regardless of who is carrying them.
Is there anything else that you would like to share about ZeroEyes?
Our mission is to detect a threat before it happens. We firmly believe that if this happens, we can reduce the amount of mass shootings and save lives.
Thank you for the interview, readers who wish to learn more should visit ZeroEyes.
Dr. Don Widener, Technical Director of BAE Systems’ Advanced Analytics Lab – Interview Series
Don Widener is the Technical Director of BAE Systems’ Advanced Analytics Lab and Intelligence, Surveillance & Reconnaissance (ISR) Analysis Portfolio.
BAE Systems is a global defense, aerospace and security company employing around 83,000 people worldwide. Their wide-ranging products and services cover air, land and naval forces, as well as advanced electronics, security, information technology, and support services.
What was it that initially attracted you personally to AI and robotics?
I’ve always been interested in augmenting the ability of intelligence analysts to be more effective in their mission, whether that is through trade-craft development or technology. With an intelligence analysis background myself, I’ve focused my career on closing the gap between intelligence data collection and decision making.
In August 2019, BAE Systems announced a partnership with UiPath to launch the Robotic Operations Center, which will bring automation and machine learning capabilities to U.S. defense and intelligence communities. Could you describe this partnership?
Democratizing AI for our 2,000+ intelligence analysts is a prime driver for BAE Systems Intelligence & Security sector’s Advanced Analytics Lab. By using Robotic Process Automation (RPA) tools like UiPath we could rapidly augment our analysts with tailored training courses and communities of practice (like the Robotic Operations Center), driving gains in efficiency and effectiveness. Analysts with no programming foundation can build automation models or “bots” to address repetitive tasks.
How will the bots from the Robotic Operations Center be used to combat cybercrime?
There is a major need for applying AI to external threat data collection for Cyber Threat analysis. At RSA 2020, we partnered with Dell to showcase their AI Ready Bundle for Machine Learning, which includes NVIDIA GPUs, libraries and frameworks, and management software in a complete solution stack. We showcased human-machine teaming by walking conference goers through an object detection model creation used to filter publicly available data to identify physical threat hot spots, which may trigger cybercrime.
Vast seas of big data will be collected to train the neural networks used by the bots. What are some of the datasets that will be collected?
BAE Systems was recently awarded the Army’s Open Source Intelligence (OSINT) contract responsible for integrating big data capabilities into our secure cloud hosting environment.
Could you describe some of the current deep learning methodologies being worked on at BAE Systems?
Some of the areas where we are applying deep learning methodologies are motion imagery, humanitarian disaster relief, and COVID-19.
Do you believe that object detection, and classification, is still an issue when it comes to objects which are only partially visible or obscured by other objects?
Computer vision models are less effective when objects are partially obscured, but for national mission initiatives like Foundational Military Intelligence, even high false positive rates could still support decision advantage.
What are some of the other challenges facing computer vision?
Data labeling is a challenge. We’ve partnered with several data labeling companies for labeling unclassified data, but for classified data we are using our intelligence analyst workforce to support these CV training initiatives and this workforce is a finite resource.
Thank you for this interview. Anyone who wishes to learn more may visit BAE Systems.