
Joy Mustafi, Chief Data Scientist of Aviso, Inc – Interview Series


Ranked as one of India’s top 10 data scientists by Analytics India Magazine, Joy Mustafi has led data science research at tech giants including Salesforce, Microsoft, and IBM, earning 50 patents and authoring over 25 publications on AI.

He was associated with IBM for a decade as a Data Scientist, working on a variety of business intelligence solutions, including IBM Watson. He then worked as a Principal Applied Scientist at Microsoft, responsible for AI research. Most recently, Mustafi was the Principal Researcher for Salesforce’s Einstein platform.

Mustafi is also the Founder and President of MUST Research, a non-profit organization promoting excellence in the fields of data science, cognitive computing, artificial intelligence, machine learning, and advanced analytics for the benefit of society.

Recently, Mustafi joined Redwood City-based Aviso, Inc. as Chief Data Scientist, where he will leverage his decades of experience to help Aviso customers accelerate deal-closing and expand revenue opportunities.

What initially attracted you to AI?

I love mathematics, and the same goes for programming. I did my graduate degree in statistics and post-graduate work in computer applications. When I started my AI research journey back in 2002 at the Indian Statistical Institute in Kolkata, I used the C programming language to develop an artificial neural network system for handwritten numeral recognition. That was 2,500+ lines of code, all written from scratch without any built-in libraries apart from standard input/output. It consisted of data cleansing and pre-processing, feature engineering, and a backpropagation algorithm with a multilayer perceptron. The entire process was a combination of all the subjects I had studied. At that time AI was not so popular in the corporate world, and few academic organisations were doing advanced research in the field. And, by the way, AI wasn’t new at the time! The field of AI research dates all the way back to 1956, when Prof. John McCarthy and others inaugurated the field at a now-legendary workshop at Dartmouth College.
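For readers curious what such a from-scratch system involves, here is a minimal, illustrative sketch of a multilayer perceptron trained with backpropagation on toy digit-like data. It is written in Python with NumPy for brevity and is not Mustafi’s original C implementation, which worked on real pre-processed handwriting features.

```python
# Minimal sketch of a multilayer perceptron trained with backpropagation
# for digit classification. Illustrative only -- the system described in
# the interview was written from scratch in C on real handwriting data.
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for pre-processed digit features: 64 pixel features, 10 classes.
X = rng.random((500, 64))
y = rng.integers(0, 10, 500)
Y = np.eye(10)[y]                      # one-hot targets

# One hidden layer of 32 units.
W1 = rng.normal(0, 0.1, (64, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.1, (32, 10)); b2 = np.zeros(10)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

lr = 0.5
for epoch in range(200):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    p = softmax(h @ W2 + b2)

    # Backward pass (gradients of mean cross-entropy loss)
    d2 = (p - Y) / len(X)              # gradient at the output layer
    d1 = (d2 @ W2.T) * (1 - h ** 2)    # gradient through the tanh hidden layer

    W2 -= lr * h.T @ d2; b2 -= lr * d2.sum(axis=0)
    W1 -= lr * X.T @ d1; b1 -= lr * d1.sum(axis=0)

print("training accuracy on the toy data:", (p.argmax(axis=1) == y).mean())
```

A real recognizer would of course train on actual pre-processed handwritten numerals rather than random features, but the forward/backward structure is the same.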

 

You have worked with some of the most advanced companies in AI, such as IBM (with Watson) and Microsoft. What has been the most interesting project that you have worked on?

I want to mention the first patent I was awarded while working at IBM: a method for solving word problems in natural language, which was an open problem with IBM Watson. The system I developed can understand an arithmetic or algebraic problem stated in natural language and provide a solution in real time as a natural language answer. To do that, the system had to handle the following key steps: get the input problem statements and the question to be answered; convert the input sentences into a sequence of sentences that are well-formed from a mathematical perspective; convert the well-formed sentences into mathematical equations; solve the set of equations; and narrate the mathematical result in natural language.
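As a rough, hypothetical illustration of that pipeline (not the patented IBM Watson system), a toy solver might extract the numbers from a simple arithmetic problem, form an equation with SymPy, solve it, and narrate the result:

```python
# Toy sketch of the word-problem pipeline described above: extract numbers
# and an operation from a simple problem statement, form an equation,
# solve it, and narrate the answer. The real patented system handled far
# richer natural language than this keyword-based illustration.
import re
from sympy import symbols, Eq, solve

def solve_word_problem(text):
    numbers = [int(n) for n in re.findall(r"\d+", text)]
    x = symbols("x")
    # Crude operation detection from keywords (illustrative only).
    if any(w in text.lower() for w in ("more", "buys", "gets", "total")):
        equation = Eq(x, numbers[0] + numbers[1])
    else:
        equation = Eq(x, numbers[0] - numbers[1])
    answer = solve(equation, x)[0]
    return f"The answer is {answer}."

print(solve_word_problem("John has 3 apples and buys 2 more. How many apples does John have?"))
# -> The answer is 5.
```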

There’s also my best project for Microsoft — Softie! I invented and built a physical robot equipped with various types of interchangeable input devices and sensors to allow it to receive information from humans.  A standardized method of communication with the computer allowed the user to make practical adjustments, enabling richer interactions depending on the context. We were able to implement a robust system with features including a keyboard, pointing device, touchscreen, computer vision, speech recognition, and so forth. We formed a team from various business units, and encouraged them to explore research applications on artificial intelligence and related fields.

 

You’re also the Founder and President of MUST Research, a non-profit organization registered under the Society and Trust Act of India. Could you tell us about this non-profit?

MUST Research is dedicated to promoting excellence and competence in the fields of data science, cognitive computing, artificial intelligence, machine learning, and advanced analytics for the benefit of society. MUST aims to build an ecosystem that enables interaction between academia and enterprise, helping them resolve problems, making them aware of the latest developments in the cognitive era, providing solutions, offering guidance and training, organizing lectures, seminars, and workshops, and collaborating on scientific programs and societal missions. The most exciting feature of MUST is its fundamental research on cutting-edge technologies such as artificial intelligence, machine learning, natural language processing, text analytics, image processing, computer vision, audio signal processing, speech technology, embedded systems, and robotics.

 

What was it that inspired you to launch MUST Research?

My love of sci-fi movies and mathematics means I’m often thinking about how technology can change the world, and I’d been thinking about forming a group of like-minded experts on advanced technologies since 1993, when I was in 9th grade. Once I got my first job, it took 10 years to call for a meeting — and another 10 years to identify a group of suitable experts and form a non-profit society. Now, though, we have around 500 data scientists in MUST across India who are passionately contributing to research on emerging technologies.

 

Over the past several years the industry has seen significant advances in deep learning, reinforcement learning, natural language processing, and more. Which area of machine learning do you currently view as the most exciting?

All machine-learning algorithms are exciting once they are implemented as a product or service that can be used by businesses or individuals in the real world. The Deep Learning era has pros and cons, though — sometimes it helps with automatic feature engineering, but at the same time it can work like a black box and end up in a garbage-in-garbage-out scenario if proper datasets or algorithms aren’t used. Some of the latest technologies are also resource-hungry and require huge amounts of processing power, time, and data. The key thing to remember is that Deep Learning is a subset of Machine Learning (ML), which in turn is a subset of Artificial Intelligence (AI), and AI is a subset of Data Science — so it’s all connected. And it’s not about Python, R or Scala — I started my AI journey in C, and one can even write AI programs in assembly language. Building successful AI systems depends first and foremost on understanding the business or research environment, and then connecting the dots between actions and data to build a system which genuinely helps people across different domains. Whether you’re working with Natural Language Processing, Computer Vision, Video Analytics, Speech Technology, or Robotics, the best way forward is to start with the simplest possible approach, and then adopt more complex methods iteratively as you experiment with and refine your system.

 

You are a frequent guest speaker at leading universities in India. What is one question that you often hear from students, and how do you best answer it?

The single question I hear most often is: “How can I become a data scientist?” I always tell young people that it’s definitely possible, and try to guide them towards using their love of mathematics, statistics, or computer science to solve real-world business problems. People also ask how they can join MUST, and again, the answer is simple: “Build your profile with multiple projects and focus on thinking outside of the box.” If you want to become a data scientist, you have to also prove that you can innovate. Without innovation, we can’t call ourselves scientists. Of course, being awarded patents or publishing your research in reputed journals and conferences also helps!

 

You recently joined Redwood City-based Aviso as Chief Data Scientist, in order to apply your AI/ML expertise. Could you tell us a bit about Aviso and your role with this company?

Aviso uses AI and machine learning to guide sales executives and take the guesswork out of the deal-making process. That’s a fascinating challenge, and my primary responsibility is to help the organization grow in a positive direction, using deep research to set the stage for the customers’ success. I’m using my knowledge and experience in artificial intelligence and innovation to help make our core products and research projects more:

Adaptive: They must learn as information changes, and as goals and requirements evolve. They must resolve ambiguity and tolerate unpredictability. They must be engineered to feed on dynamic data in real time.

Interactive: They must interact easily with users so that those users can define their needs comfortably. They must also interact with other processors, devices, and services, as well as with people.

Iterative and Stateful: They must aid in defining a problem by asking questions or finding additional source input if a problem statement is ambiguous or incomplete. They must remember previous interactions in a process and return information that is suitable for the specific application at that point in time.

Contextual: They must understand, identify, and extract contextual elements such as meaning, syntax, time, location, appropriate domain, regulation, user profile, process, task and goal. They must draw on multiple sources of information, including both structured and unstructured digital information.
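As a loose, hypothetical sketch of the “iterative and stateful” and “contextual” properties above (illustrative only, not Aviso’s implementation), a session object might remember prior turns and ask a clarifying question when a request is ambiguous:

```python
# Hedged sketch of a stateful, contextual assistant session: it remembers
# prior turns and asks for clarification when the request is incomplete.
# Names and logic are hypothetical, not Aviso's actual code.
from dataclasses import dataclass, field

@dataclass
class AssistantSession:
    history: list = field(default_factory=list)   # stateful: remembers prior turns
    context: dict = field(default_factory=dict)   # contextual: user, domain, goal

    def handle(self, request: str) -> str:
        self.history.append(request)
        if "deal" in request and "deal_id" not in self.context:
            # Iterative: ask a clarifying question instead of guessing.
            return "Which deal are you asking about?"
        return f"Processing '{request}' with context {self.context}"

session = AssistantSession(context={"user": "rep_42"})
print(session.handle("What is the risk on this deal?"))   # asks to clarify
session.context["deal_id"] = "D-1001"
print(session.handle("What is the risk on this deal?"))   # proceeds with context
```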

 

What was it that attracted you to this position with Aviso?

Aviso is working to replace bloated legacy CRM systems with frictionless, AI-enabled tools that can deliver actionable insights and unlock sales teams’ full potential. Our product is a smart system which understands the pain points of salespeople, does away with time-consuming data entry, and gives executives the suggestions and guidance they need to close deals effectively. I was attracted to the strong leadership team and customer base, but also to Aviso’s commitment to using sophisticated AI tools to solve real-world challenges. Selling is a vital part of any business, and Aviso helps with that by leveraging the power of artificial intelligence. Bulls-eye! What more could you want?

 

Lastly, is there anything else that you would like to share about AI?

Artificial intelligence makes a new class of problems computable. To respond to the fluid nature of users’ understanding of their problems, the cognitive computing system offers a synthesis not just of information sources but of influences, contexts, and insights. These systems differ from current computing applications in that they move beyond tabulating and calculating based on pre-configured rules and programs. They can infer and even reason based on broad objectives. In this sense, cognitive computing is a new type of computing with the goal of developing more accurate models of how the human brain or mind senses, reasons, and responds to stimulus. It is a field of study focused on creating computers and computer software capable of intelligent behavior. This field is interdisciplinary: artificial intelligence is a place where a number of sciences and professions converge, including computer science, electronics, mathematics, statistics, psychology, linguistics, philosophy, neuroscience, and biology. That’s what makes it so exciting!


Antoine Tardif is a futurist who is passionate about the future of AI and robotics. He is the CEO of BlockVentures.com and has invested in over 50 AI & blockchain projects. He is also the co-founder of Securities.io, a news website focusing on digital securities, and a founding partner of unite.ai.


Phil Duffy, VP of Product, Program & UX Design at Brain Corp – Interview Series


Phil Duffy is the VP of Product, Program & UX Design at Brain Corp, a San Diego-based technology company specializing in the development of intelligent, autonomous navigation systems for everyday machines.

The company was co-founded in 2009 by world-renowned computational neuroscientist, Dr. Eugene Izhikevich, and serial tech entrepreneur, Dr. Allen Gruber. Brain Corp’s initial work involved advanced R&D for Qualcomm Inc. and DARPA. The company is now focused on developing advanced machine learning and computer vision systems for the next generation of self-driving robots.

Brain Corp powers the largest fleet of autonomous mobile robots (AMRs), with over 10,000 robots deployed or enabled worldwide, and works with several Fortune 500 customers such as Walmart and Kroger.

What attracted you initially to the field of robotics?

My personal interest in developing robots over the last two decades stems from the fact that intelligent robots are one of the two major unfulfilled dreams of the last century—the other dream being flying cars.

Scientists, science-fiction writers, and filmmakers all predicted we would have intelligent robots doing our bidding and helping us in our daily lives a long time ago. As part of fulfilling that vision, I am passionate about developing robots that tackle the repetitive, dull, dirty, and dangerous tasks that robots excel at, but also building solutions that highlight the unique advantages of humans performing creative, complex tasks that robots struggle with. Developing robots that work alongside humans, both empowering each other, ensures we build advanced tools that help us become more efficient and productive.

I am also driven by being part of a fledgling industry that is building the initial stages of the robotics ecosystem. The robotics industry of the future, like the PC or smartphone industry today, will include a wide array of technical and non-technical staff, developing, selling, deploying, monitoring, servicing, and operating robots. I’m excited to see how that industry grows and how decisions we make today impact the industry’s future direction.

 

In 2014, Brain Corp pivoted from performing research and development for Qualcomm, to the development of machine learning and computer-vision systems for autonomous robots. What caused this change?

It was really about seeing a need and opportunity in the robotics space and seizing it. Brain Corp’s founder, Dr. Eugene Izhikevich, was approached by Qualcomm in 2008 to build a computer based on the human nervous system to investigate how mammalian brains process information and how biological architecture could potentially form the building blocks to a new wave of neuromorphic computing. After completing the project, Eugene and a close-knit team of scientists and engineers decided to apply their computational neuroscience and machine learning approaches to autonomy for robots.

While exploring different product directions, the team realized that the robotics industry of the day looked just like the computer industry before Microsoft—dozens of small companies all adding custom software to a recipe of parts from the same hardware manufacturer. Back then, lots of different types of computers existed, but they were all very expensive and did not work well with each other. Two leaders in operating systems emerged, Microsoft and Apple, with two different approaches: while Apple focused on building a self-contained ecosystem of products and services, Microsoft built an operating system that could work with almost any type of computer.

The Brain Corp team saw the value in creating a “Microsoft of robotics” that would unite all of the disparate robot solutions under one cloud-based software platform. Their goal became to help build out the emerging category of autonomous mobile robots (AMRs) by providing autonomy software that others could use to build their robots. The Brain Corp team decided to focus on making a hardware-agnostic operating system for AMRs. The idea was simple: to enable builders of robots, not build the robot intelligence themselves.

 

What was the inspiration for designing an autonomous scrubber versus other autonomous technologies?

Industrial robotic cleaners were the perfect way to enter the market with our technology. The commercial floor cleaning industry was in the midst of a labor shortage when we started out—constant turnover meant many jobs were simply not getting done. Autonomous mobile cleaning robots would not only help fill the labor gap in an essential industry, they would also be scalable—every environment has a floor and that floor probably needs cleaning. Floorcare was therefore a good opportunity for a first application.

Beyond that, retail companies spend about $13B on floorcare labor annually. Most employ cleaning staff who use large machines to scrub store floors, which is rote, boring work. Workers drive around bulky machines for hours when their time could be better spent on tasks that require acuity. An automated floor cleaning solution would fill in for missing workers while optimizing the efficiency and flow of store operations. By automating the mundane, boring task of scrubbing store floors, retail employees would be able to spend more time with customers and have a greater impact on business, ultimately leading to greater job satisfaction.

 

Can you discuss the challenge of designing robots in an environment that often involves tight spaces and humans who may not be paying attention to their surroundings?

It’s an exciting challenge! Retail was the perfect first implementation environment for Brain Corp’s system because stores are such complex environments that they pose a genuine autonomy challenge, and they are rife with edge cases that allow Brain Corp to collect data that refines the BrainOS navigation platform.

We addressed these challenges of busy and crowded retail environments by building an intelligent system, BrainOS, that uses cameras and advanced LIDAR sensors to map the robot’s environment and navigate routes. The same technology combination also allows the robots to avoid people and obstacles, and find alternate routes if needed. If the robot encounters a problem it cannot resolve, it will call its human operator for help via text message.

The robots learn how to navigate their surroundings through Brain Corp’s proprietary “teach and repeat” methodology. A human first drives the robot along the route manually to teach it the right path, and then the robot is able to repeat that route autonomously moving forward. This means BrainOS-powered robots can navigate complex environments without major infrastructure modifications or relying on GPS.
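As a hedged illustration of the general “teach and repeat” idea (a generic sketch, not BrainOS itself), a robot could record waypoints during the manually driven teach run and then steer toward each recorded waypoint during the repeat run:

```python
# Illustrative sketch of a generic "teach and repeat" navigation loop,
# not Brain Corp's BrainOS. During the teach phase the robot records
# waypoints while a human drives it; during the repeat phase it steers
# toward each recorded waypoint in turn.
import math

def teach(drive_log):
    """Record (x, y) waypoints sampled from a manually driven route."""
    return [(x, y) for (x, y, _timestamp) in drive_log]

def repeat(waypoints, pose, step=0.5, tolerance=0.2):
    """Replay the taught route by heading toward each waypoint in order."""
    x, y = pose
    path = [(x, y)]
    for wx, wy in waypoints:
        while math.hypot(wx - x, wy - y) > tolerance:
            dist = math.hypot(wx - x, wy - y)
            move = min(step, dist)                 # don't overshoot the waypoint
            heading = math.atan2(wy - y, wx - x)   # steer toward the waypoint
            x += move * math.cos(heading)
            y += move * math.sin(heading)
            path.append((x, y))
    return path

# Teach run: a human drives the robot along the aisle.
log = [(0, 0, 0.0), (2, 0, 1.0), (4, 1, 2.0), (6, 1, 3.0)]
route = teach(log)

# Repeat run: the robot follows the same route autonomously from its start pose.
print(repeat(route, pose=(0.0, 0.0))[-1])   # ends near the last waypoint (6, 1)
```

A real system would close the loop with LIDAR and camera-based localization and obstacle avoidance rather than blindly replaying coordinates, but the record-then-replay structure is the essence of the approach described above.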

 

How has the COVID-19 pandemic accelerated the adoption of Autonomous Mobile Robots (AMRs) in public spaces?

We have seen a significant uptick in autonomous usage across the BrainOS-powered fleet as grocers and retailers look to enhance cleaning efficiency and support workers during the health crisis.

During the first four months of the year, usage of BrainOS-powered robotic floor scrubbers in U.S. retail locations rose 18% compared to the same period last year, including a 24% y-o-y increase in April. Of that 18% increase, more than two-thirds (68%) occurred during the daytime, between 6 a.m. and 5:59 p.m. This means we’re seeing retailers expand usage of the robots to daytime hours when customers are in the stores, in addition to evening or night shifts. We expect this increase to continue as the value of automation comes sharply into focus.

 

What are some of the businesses or government entities that are using Brain Corp robots?

Our customers include top Fortune 500 retail companies including Walmart, Kroger, and Simon Property Group. BrainOS-powered robots are also used at several airports, malls, commercial buildings, and other public indoor environments.

 

Do you feel that this will increase the overall comfort of the public around robots in general?

Yes, people’s perception of robots and automation in general is changing as a result of the pandemic. More people (and businesses) realize how robots can support human workers in meaningful ways. As more businesses reopen, cleanliness will need to be an integral part of their brand and image. As people start to leave their homes to shop, work, or travel, they will look to see how businesses maintain cleanliness. Exceptionally good or poor cleanliness may have the power to sway consumer behavior and attitudes.

As we’ve seen in recent months, retailers are already using BrainOS-powered cleaning robots more often during daytime hours, showing their commitment and investment in cleaning to consumers. Now more than ever, businesses need to prove that they’re providing a safe and clean environment for customers and workers. Robots can help them deliver that next level of clean—a consistent, measurable clean that people can count on and trust.

 

Another application by Brain Corp is the autonomous delivery tug. Could you tell us more about what this is and the use cases for it?

The autonomous delivery tug, powered by BrainOS, enables autonomous delivery of stock carts and loose-pack inventory for any indoor point-to-point delivery needs, enhancing efficiency and productivity. The autonomous delivery tug eliminates inefficient back and forth material delivery and works seamlessly alongside human workers while safely navigating complex, dynamic environments such as retail stores, airports, warehouses, and factories.

A major ongoing challenge for retailers—one that has been exacerbated by the COVID-19 health crisis—is maintaining adequate stock levels in the face of soaring demand from consumers, particularly in grocery. Additionally, the process of moving inventory and goods from the back of a truck, to the stockroom, and then out to store shelves, is a laborious and time-consuming process requiring employees to haul heavy, stock-laden carts back and forth multiple times. The autonomous delivery tug aims to help retailers address these restocking challenges, taking the burden off store workers and providing safe and efficient point-to-point delivery of stock without the need for costly or complicated facility retrofitting.

The autonomous delivery application combines sophisticated AI technology with proven manufacturing equipment to create intelligent machines that can support workers by moving up to 1,000 pounds of stock at a time. Based on an in-field pilot program, the autonomous delivery tug will save retail employees 33 miles of back-and-forth travel per week, potentially increasing their productivity by 67%.

 

Is there anything else that you would like to share about Brain Corp?

Brain Corp powers the largest fleet of AMRs operating in dynamic public indoor spaces with over 10,000 floor care robots deployed or enabled worldwide. According to internal network data, AMRs powered by BrainOS are currently collectively providing over 10,000 hours of daily work, freeing up workers so they can focus on other high value tasks during this health crisis, such as disinfecting high-contact surfaces, re-stocking, or supporting customers.

In the long term, robots give businesses the flexibility to address labor challenges, absenteeism, rising costs, and more. From a societal standpoint, we believe robots will gain consumer favor as they’re seen more frequently operating in stores, hospitals, and health care facilities, or in warehouses providing essential support for workers.

We’re also excited about what the future holds for Brain Corp. Because BrainOS is a cloud-based platform that can essentially turn any mobile vehicle built by any manufacturer into an autonomous mobile robot, there are countless other applications for the technology beyond commercial floor cleaning, shelf scanning, and material delivery. Brain Corp is committed to continuously improving and building out our AI platform for powering advanced robotic equipment. We look forward to further exploring new markets and applications.

Thank you for the amazing interview. Readers who wish to learn more should visit Brain Corp.


Adi Singh, Product Manager in Robotics at Canonical – Interview Series


Adi Singh is the Product Manager in Robotics at Canonical. Canonical specializes in open source software, including Ubuntu, the world’s most popular enterprise Linux from cloud to edge, and has a global community of 200,000 contributors.

Ubuntu is the most popular Linux distribution for large embedded systems, and as autonomous robots mature, innovative tech companies are turning to it. We discuss the advantages of building a robot using open source software, along with other key considerations.

What sparked your initial interest in robotics?

A few years into software programming, I was dissatisfied with seeing my work only running on a screen. I had an urge to see some physical action, some tangible response, some real-world result of my engineering. Robotics was a natural answer to this urge.

Can you describe your day to day role with Canonical?

I define and lead the product strategy for the Robotics and Automotive verticals at Canonical. I am responsible for coordinating product development, executing go-to-market strategies, and managing engagements with external organizations related to my domain.

Why is building a robot on open source software so important?

Building anything on open source software is usually a wise idea, as it allows you to stand on the shoulders of giants. Individuals and companies alike benefit from the volunteer contributions of some of the brightest minds in the world when they decide to build on a foundation of open source software. As a result, popular FOSS repositories are very robustly engineered and very actively maintained, allowing users to focus on their innovation rather than the nuts and bolts of every library going into their product.

Can you describe what the Ubuntu open source platform offers to IoT and robotics developers?

Ubuntu is the platform of choice for developers around the world for frictionless IoT and robotics development. A number of popular frameworks that help with device engineering are built on Ubuntu, so the OS is able to provide several tools for building and deploying products in this area right out of the box. For instance, the most widely used middleware for robotics development – ROS – is almost entirely run on Ubuntu distros (More than 99.5% according to official metrics here: https://metrics.ros.org/packages_linux.html).

What are some of the key considerations that should be analyzed when choosing a robot’s operating system?

Choosing the right operating system is one of the most important decisions to be made when building a new robot, and it involves several development factors. Hardware and software stack compatibility is key, as ample time will be spent ensuring components work well together so as not to hinder progress on developing the robot itself.

Also, the dev team’s prior familiarity with the operating system is a huge factor affecting economics, as previous experience will no doubt help to accelerate the overall robot development process and thereby cut down on the time to market. Ease of system integration and third-party add-ons should also be heavily considered. A robot is rarely a standalone device and often needs to seamlessly interact with other devices. These companion devices may be as simple as a digital twin for hardware-in-the-loop testing, but in general, off-device computation is getting more popular in robotics. Cloud robotics, speech processing, and machine learning are all use cases that can benefit from processing information in a server farm instead of on a resource-constrained robot.

Additionally, robustness and a level of security engineered into the kernel is imperative. Availability of long-term support for the operating system, especially from the community, is another factor. Something to keep in mind is that operating systems are typically only supported for a set amount of time. For example, long-term support (LTS) releases of Android Things are supported for three years, whereas Ubuntu and Ubuntu Core are supported for five years (or for 10 years with Extended Security Maintenance). If the supported lifespan of the operating system is shorter than the anticipated lifespan of the robot in the field, it will eventually stop getting updates and die early.

Thank you for the interview. Readers who wish to learn more should visit Ubuntu Robotics.


Mike Lahiff, CEO at ZeroEyes – Interview Series


Mike is the CEO of ZeroEyes, a security company powered by AI. Led by former Navy SEALs, the company offers software that monitors camera systems and detects weapons. The system notifies authorities of possible active shooter threats and reduces response time, with the goal of keeping schools and other public spaces safe.

Can you explain what ZeroEyes is, and how implementing this system can save lives?

ZeroEyes is an AI weapons detection platform that helps identify threats at first sight. Founded by a team of Navy SEALs and military veterans dedicated to ending mass shootings, our platform integrates with an organization’s existing IP security cameras to serve as one component of its overall security process, and to provide security personnel and first responders with the real-time information needed to keep people safe. ZeroEyes focuses only on the essential information needed to stop a threat, closing the critical seconds between when a gun is first spotted and when it is fired in order to save lives.

 

Can you discuss the process for integrating ZeroEyes into an existing video camera infrastructure?

ZeroEyes’ AI weapons detection platform is one component of an organization’s multi-tiered security approach. Our software integrates with an organization’s existing camera systems and video analytics to detect weapons in real time. If ZeroEyes detects a gun, an alert with the image of the weapon goes to the ZeroEyes monitoring team. Once positively identified, an alert is sent to a local emergency dispatch (such as a 911 call center), onsite security staff, police and school administrators (via mobile and desktop). This process takes three to five seconds and bypasses the traditional dispatch process.
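As a heavily simplified, hypothetical sketch of that alert flow (function names, thresholds, and payload fields are invented for illustration and are not ZeroEyes’ actual code), the detect-verify-dispatch chain might look like this:

```python
# Simplified, hypothetical sketch of the alert flow described above:
# a detector flags a possible weapon in a frame, a human monitoring step
# verifies it, and only verified detections are dispatched to responders.
# Function names, thresholds, and payload fields are illustrative only.
from dataclasses import dataclass

@dataclass
class Detection:
    camera_id: str
    label: str            # e.g. "pistol" or "rifle"
    confidence: float
    image_ref: str        # reference to the frame that triggered the alert

def detect_weapons(frame_metadata):
    """Stand-in for the computer-vision model; returns candidate detections."""
    return [Detection(**frame_metadata)] if frame_metadata["confidence"] > 0.8 else []

def human_verify(detection):
    """Stand-in for the monitoring team confirming the detection is real."""
    return detection.label in {"pistol", "rifle"}

def dispatch(detection):
    """Notify emergency dispatch, onsite security, and administrators."""
    print(f"ALERT: {detection.label} on camera {detection.camera_id} "
          f"(image {detection.image_ref}) sent to dispatch and onsite security")

frame = {"camera_id": "entrance-03", "label": "pistol",
         "confidence": 0.93, "image_ref": "frame_20841.jpg"}
for d in detect_weapons(frame):
    if human_verify(d):
        dispatch(d)
```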

ZeroEyes’ software uses AI and computer vision, integrating with existing 3D satellite maps of a building so that as a visible weapon passes a camera, the map lights up. This allows first responders to know the precise location of a threat. By seeing exactly where a shooter(s) is in real time, security personnel can lock doors, move people to safety and enact other aspects of their security process, while first responders can go towards the shooter much faster with the knowledge of how many and what kinds of weapons the person has.

 

How much of a weapon needs to be visible for the system to correctly identify it as a weapon?

This can be dependent on multiple variables such as type of camera, height of camera, angle, lens, field of view, lighting, distance, and type of weapon. If a human eye can detect the gun on a camera feed, our system will detect the same gun.

 

How much of an issue are false positives and how is this minimized?

We are always looking to minimize false positives and are constantly improving our deep learning models based on data collected. In customer installations, we incorporate time upfront to collect data and custom tune the parameters for each camera, which allows us to more effectively filter out false positives. If a false positive happens, an alert gets sent to our team and we vet the threat in real-time. We then respond accordingly and let the customer know that it isn’t a serious threat.

 

Your initial focus was on installing this in schools; what are some other markets that ZeroEyes is targeting?

We sell to a broad list of decision makers, including school resource officers, school district administration, corporate security directors, chief security officers, and chief risk officers. Our technology can be used in education (including K-12 schools, universities, and training facilities – it is already in use at Rancocas Valley Regional High School (NJ) and South Pittsburg High School (TN)), commercial settings (including office buildings, malls, and stadiums), and military/government installations (force protection). We partner closely with both our customers and local first responders to ensure that they have the additional layer of security needed to identify and stop threats.

 

Can you discuss the ZeroEyes app and how threat notifications work?

If a true weapon is detected, an alert is sent to ZeroEyes’ security monitoring team. Once positively identified, it is then sent to a local emergency dispatch (such as a 911 call center), onsite security staff, police, and school administrators. This process takes three to five seconds and bypasses the traditional dispatch process. We include details such as the location of the camera, a bounding box identifying the detected object, and the detection label.

The image lets first responders know the type of weapon (e.g. a pistol or machine gun), which then dictates response tactics and indicates the amount of damage a shooter could cause. It also lets us know the total number of shooters and weapons, so those responding to the alert are properly informed of the situation.

 

What type of relationship does ZeroEyes have with different law enforcement agencies, and how are they set up to receive dispatch alerts?

ZeroEyes works with local law enforcement to help decrease critical response time to serious threats to public safety like active shooter situations. If a threat is detected and verified, the alert is sent to a local emergency dispatch.

ZeroEyes provides real-time information to help first responders understand the situation at hand, allows security to quickly enact security protocols, and dramatically reduces response time which can mean the difference in saving lives.

 

Facial recognition capabilities are built into the system, but facial redaction is used to protect patrons’ privacy. Can you discuss these capabilities? For example, is ZeroEyes able to identify specific individuals, such as teachers and principals, in a school?

We do not use facial recognition; we solely focus on weapons detection. Our technology sits on top of existing IP security cameras, which could also have facial recognition software installed by the organization. We pursued weapons detection because we want to reduce mass shootings and active shooter threats, and security personnel should know where and when weapons are present, regardless of who is carrying them.

 

Is there anything else that you would like to share about ZeroEyes?

Our mission is to detect a threat before it happens. We firmly believe that if we do this, we can reduce the number of mass shootings and save lives.

Thank you for the interview. Readers who wish to learn more should visit ZeroEyes.
