Emrah is the co-founder and CEO of Chooch, an end-to-end visual AI solution. Chooch provides fast, accurate facial authentication and object recognition for the media, advertising, banking, medical and security industries. Chooch offers an easy-to-use and deployable API, a dashboard and mobile app SDK.
What was your inspiration for launching Chooch AI?
In our previous entrepreneurial experiences, my co-founder and I saw a multitude of data-driven challenges that needed to be solved across a wide variety of verticals, so I decided to dive in and solve the ones that I could. I had started companies before, but this was my first true “deep tech” company.
With our broader team, we’ve worked to develop a visual AI product that is sustainable, scalable, robust, and usable for an array of enterprises. The product is now being utilized by companies in the healthcare, public safety, industrial, media and geospatial industries, with uses that range from fraud prevention and decreasing medical errors to deepening the understanding of our world.
Can you share with us what Chooch AI does?
Chooch copies human visual intelligence into machines. We train and deploy visual AI for customers in the cloud and on the edge and deliver fast and accurate computer vision for any visual process.
We can do that because Chooch AI is a platform for every step of the visual AI process, from data collection, annotation, and labeling, to AI training, model deployment, and integration. Because of the broad range of problems we’ve solved, our team now has deep expertise in scoping and developing computer vision projects that are ready for global scale. These projects range from cell identification and geospatial image analysis to public safety.
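To make the API step concrete, here is a minimal sketch of packaging an image for a cloud visual-AI prediction call. The endpoint URL, field names, and parameters below are illustrative assumptions, not taken from Chooch's actual API documentation:

```python
import base64
import json

# Placeholder endpoint: an assumption for illustration, not Chooch's real URL.
PREDICT_URL = "https://api.example.com/v1/predict"

def build_predict_request(image_bytes: bytes, api_key: str, model_id: str) -> dict:
    """Package an image as a JSON payload for an image-recognition call.

    Images are commonly base64-encoded for JSON transport; the "model_id"
    field is a hypothetical way to select a trained model.
    """
    return {
        "url": PREDICT_URL,
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model_id": model_id,
            "image": base64.b64encode(image_bytes).decode("ascii"),
        }),
    }

# Build (but do not send) a request for some image bytes.
req = build_predict_request(b"\x89PNG...", api_key="KEY", model_id="object-detect")
```

A real integration would POST this payload with an HTTP client and parse the returned predictions; the shape above only shows the general pattern of such APIs.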
What type of imagery can be processed by the computer vision system?
What the human eye can do, Chooch can do better and at scale. The human eye cannot perceive anything beyond the visible spectrum, but Chooch can detect fevers with IR sensors and process X-rays to detect lung damage. We can do this for video or still imagery, faster and more accurately than the human eye, and we have deployed over 2,400 models for a variety of applications.
Chooch AI connects to the cloud but is also able to run on a local machine, can you elaborate on how this works?
Yes, this is one of our breakthroughs. We launched with the Chooch AI API, which allows companies to use our cloud servers to process their images, but our customers wanted to deploy AIoT on the edge in places with no connectivity. So we created Chooch Edge AI, which is essentially a standalone AI container generated by our Chooch Cloud AI. For instance, we are able to remotely deploy that AI software on NVIDIA Jetson devices, which are amazing by the way, and we can then remotely update the edge AI as needed from the Chooch Dashboard. Technically, that AI software on the edge is called an inference engine.

Chooch can connect up to four cameras to an edge device, and the AI can recognize thousands of classes on the edge. We are able to iterate on models, remove models, and train new models on the edge. This is always improving: as chip and hardware providers release more powerful devices, we are generating more and more powerful AIoT deployments. We can now run multiple models on the edge with multiple layers of dense classification at very low latency.
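The edge setup described above can be sketched as a simple loop that fans the latest frame from each connected camera out to the on-device inference engine. The camera representation, the four-camera check, and the stub engine are illustrative assumptions, not Chooch's actual implementation:

```python
from typing import Callable, Dict, List

# The interview mentions up to four cameras per edge device.
MAX_CAMERAS = 4

def run_inference_round(frames_by_camera: Dict[str, bytes],
                        engine: Callable[[bytes], List[str]]) -> Dict[str, List[str]]:
    """Run one pass: send the latest frame from each camera to the engine."""
    if len(frames_by_camera) > MAX_CAMERAS:
        raise ValueError("edge device supports at most four cameras")
    return {cam: engine(frame) for cam, frame in frames_by_camera.items()}

# Stub engine standing in for the real inference container; an actual
# deployment would run an optimized model on hardware such as a Jetson.
def stub_engine(frame: bytes) -> List[str]:
    return ["person"] if b"p" in frame else []

results = run_inference_round({"cam0": b"p-frame", "cam1": b"blank"}, stub_engine)
```

Because the engine is just a callable here, swapping in an updated model (as the dashboard-driven updates describe) amounts to passing a different function, which mirrors how a container image can be replaced remotely.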
Is facial recognition technology used?
We don’t do facial recognition as a company policy. We only do facial authentication with liveness detection, with the caveat that it will always be consent-based, like providing permission to check in to a location or for a flight with your face instead of a ticket. Chooch AI can be trained with as few as a couple of images. Facial authentication files are not stored as pictures of faces. And we do liveness detection to make sure people are not able to spoof the system.
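One common way to authenticate without storing face pictures is to keep only a numeric embedding and compare embeddings at check-in time. The sketch below assumes cosine similarity, an arbitrary threshold, and a boolean liveness flag; these are illustrative choices, not Chooch's actual parameters:

```python
import math
from typing import List

# Assumed similarity cutoff for a match; real systems tune this carefully.
MATCH_THRESHOLD = 0.8

def cosine_similarity(a: List[float], b: List[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def authenticate(enrolled: List[float], probe: List[float], is_live: bool) -> bool:
    """Match only live probes: liveness detection blocks photo/replay spoofs."""
    return is_live and cosine_similarity(enrolled, probe) >= MATCH_THRESHOLD

# A live probe close to the enrolled embedding passes; the same probe
# flagged as not live (a suspected spoof) is rejected.
enrolled = [0.1, 0.9, 0.4]
ok = authenticate(enrolled, [0.12, 0.88, 0.41], is_live=True)
spoof = authenticate(enrolled, [0.12, 0.88, 0.41], is_live=False)
```

Storing only the embedding vector, rather than the image, is what makes the "not stored as pictures of faces" property possible: the vector cannot be trivially reversed into a photograph.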
Training AI models can be a steep learning curve for the uninitiated, what assistance do you provide for data labelling and annotating?
For the uninitiated, we offer end-to-end training assistance. When companies come to Chooch with a visual problem to solve, our team works in partnership with them to train and deploy AI models. It’s as simple as that. We do labelling and annotation as a service; generally speaking, users supply the data, but we help them organize it. Our training platform can use still images, but with videos we can generate over 1,000 annotated images per minute; that’s another breakthrough, by the way. We take on the whole process, from planning and consulting on data collection to model creation, testing, and support. Our customer relationships become ongoing partnerships.
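One way video can multiply annotation throughput like this is keyframe interpolation: hand-label an object's bounding box on two keyframes and generate boxes for every frame in between. This is a common technique offered here as an assumption about the general approach, not a description of Chooch's actual tooling:

```python
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x, y, width, height)

def interpolate_boxes(start: Box, end: Box, n_between: int) -> List[Box]:
    """Linearly interpolate a bounding box across the frames between
    two hand-annotated keyframes, yielding one auto-labeled box per frame."""
    boxes = []
    for i in range(1, n_between + 1):
        t = i / (n_between + 1)  # fraction of the way from start to end
        boxes.append(tuple(s + (e - s) * t for s, e in zip(start, end)))
    return boxes

# Annotate frames 0 and 29 of a 30 fps clip by hand; the 28 frames in
# between get boxes automatically, so two clicks label a full second.
auto = interpolate_boxes((0, 0, 10, 10), (29, 0, 10, 10), 28)
```

At 30 frames per second, even sparse keyframes propagate to well over 1,000 labeled frames per minute of footage, which is consistent with the throughput figure quoted above.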
Chooch AI can assist enterprises with COVID-19. Can you detail how it can be of assistance?
Essentially, Chooch AI is supporting public safety with several visual AI models, all while working with partners to deploy complete solutions. One such model detects the presence or absence of masks, and another detects fevers with IR cameras; the two can be deployed together as a complete solution. Of note, these AI models do not include any facial recognition features. Additionally, we have a research model, provided to researchers, that examines X-rays to detect lung injury consistent with COVID-19-related pneumonia.
Is there anything else that you would like to share about Chooch AI?
As a proof point for our technology, our system is live and is being utilized by numerous clients. Our customers are driving real ROI because we can automate literally any visual process at scale, reducing costs and human error.
Thank you for the interview. Readers who wish to learn more should visit Chooch AI.