David is a Senior Lecturer (assistant professor) in the Department of Mechanical Engineering at Ben-Gurion University of the Negev, and director of the Bioinspired and Medical Robotics Laboratory. His interests are in the fields of biomimetics, millisystems, miniature robotics, flexible and slippery interactions, space robotics, underactuated and minimally actuated mechanisms, and theoretical kinematics.
What is it that initially attracted you to the field of robotics?
Since childhood, I have been fascinated with machines. I always tried to build them, and after graduating with my BSc in Mechanical Engineering, I was thrilled to be able to focus on developing robots at Ben-Gurion University of the Negev that can crawl inside the body.
You have a Ph.D. in medical robotics. What are some of the types of medical robotic applications that have you most excited?
Any application that involves programmable precision is a possible candidate for a robotic solution. Two robots I worked on in the past were designed to crawl inside the body and to perform brain surgery using needles.
One robot you created is called the Flying STAR, a hybrid crawling and flying robot. What was the inspiration behind this robot?
The sprawl mechanism of the STAR robots is inspired by insects, but it includes wheels, combining the advantages of bio-inspired creatures and wheeled vehicles.
What were some of the challenges behind building the Flying STAR?
The Flying STAR is not a regular quadcopter: it changes the orientation of its wings, which influences its overall control dynamics. The different design variables were challenging at the beginning, and the transition between flying and driving modes required unique parts that we had to develop ourselves.
I was impressed with how versatile the Flying STAR is: it can dodge obstacles, crawl beneath them, fly over them, and more. Can you discuss how the Flying STAR decides which mode of transport to use? How does it choose whether to crawl underneath an object or fly over it?
The Flying STAR was initially designed for search-and-rescue purposes and for last-mile package delivery. We are developing algorithms to determine when to fly and when to drive, based on distances and energy requirements but also on the shape of the obstacle. The decision algorithm, which is still being developed, will be based on camera mapping of the surroundings. If an opening is high enough to crawl underneath, the FSTAR will simply drive through it. Otherwise, it will fly. A human operator may still be needed in challenging confined spaces (such as rubble).
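The interview doesn't spell the algorithm out, but the logic described above — check the opening's clearance against the crawling height first, then fall back to an energy comparison — is easy to sketch. The following Python toy is purely illustrative; the thresholds, energy constants, and the `Obstacle` fields are hypothetical, not FSTAR's actual code.

```python
# Hypothetical fly-or-drive chooser; all constants and fields are
# illustrative assumptions, not taken from the FSTAR project.
from dataclasses import dataclass

DRIVE_J_PER_M = 5.0    # assumed energy cost of driving (J/m)
FLY_J_PER_M = 40.0     # assumed energy cost of flying (J/m)
ROBOT_HEIGHT_M = 0.10  # assumed sprawled crawling height (m)

@dataclass
class Obstacle:
    clearance_m: float  # height of the opening under the obstacle
    top_m: float        # height of the obstacle itself
    detour_m: float     # extra driving distance to go around it

def choose_mode(path_m: float, obs: Obstacle) -> str:
    """Pick the cheaper feasible mode for a path with one obstacle."""
    if obs.clearance_m > ROBOT_HEIGHT_M:
        return "drive"  # opening is high enough to crawl underneath
    drive_cost = (path_m + obs.detour_m) * DRIVE_J_PER_M
    fly_cost = (path_m + 2 * obs.top_m) * FLY_J_PER_M  # rough climb cost
    return "drive" if drive_cost < fly_cost else "fly"

print(choose_mode(12.0, Obstacle(clearance_m=0.05, top_m=1.5, detour_m=30.0)))
```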
My first impression when I saw the video of the minimally actuated reconfigurable continuous track robot was that, with a camera at its helm, it would be perfect for search and rescue. What are some use cases that you envision for such a robot?
The reconfigurable continuous track robot is primarily developed for search-and-rescue purposes in difficult terrain, such as rubble. But it can also be used for other applications, such as excavation, agriculture, and crawling inside pipes for industrial maintenance.
One of your previous projects is SAW, a single actuator wave-like robot. What was the inspiration behind this robot?
The SAW (single actuator wave) robot was originally inspired by miniature biological organisms that swim by undulating their tails. Creating this robot was very challenging. Although the equations showed that a single motor is enough to produce the wave motion, realizing this motion mechanically was not simple. I found the solution while teaching the Mechanical Design course, when I realized that the side projection of a spring is a sine function that advances as the spring rotates.
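To see why this works, model the spring as a helix of radius r and pitch p; the derivation below is a standard geometric argument rather than one taken from the SAW papers.

```latex
% Helix of radius r and pitch p, rotated by angle \phi about its axis z:
%   (x, y, z) = ( r\cos(2\pi z/p + \phi),\; r\sin(2\pi z/p + \phi),\; z )
\begin{aligned}
\text{side view (along } x\text{):}\quad & y(z) = r\,\sin\!\left(\frac{2\pi z}{p} + \phi\right),\\[4pt]
\text{spinning at rate } \omega\ (\phi = \omega t):\quad & y(z,t) = r\,\sin\!\left(\frac{2\pi z}{p} + \omega t\right).
\end{aligned}
```

Each full rotation of the spring therefore advances the projected wave by exactly one pitch p: the advancing sine function described above.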
How small could you ultimately make SAW? Is it possible to have a similar sized robot in the future that could be used to travel inside the human body?
The main purpose of the SAW robot is to crawl inside the body. Our latest design is less than 1.5 cm wide and is capable of crawling inside a pig's intestine (ex vivo). Currently, we are seeking funding to develop smaller robots to crawl inside the digestive system. We believe this is very possible.
One of the observations that I made from your robots is that many of them are based on simplicity. Do you intentionally try to be minimalist when it comes to the number of working components in any robot?
We do follow the logic of simplicity. A saying attributed to Albert Einstein goes, “Everything should be as simple as possible, but no simpler.” A smaller number of components means better reliability, longer working life, and higher power density, and it makes it much easier to reduce the size of the robots.
What are you currently working on?
In my Ben-Gurion University lab, we are currently working on multiple projects, including modeling of a robot that can crawl inside the body, serial robots for agricultural applications, and some small search-and-rescue robots.
Anything else that you would like to share with our readers?
I strongly encourage parents and children to engage in mechatronics/robotics. With today’s technology, it is possible to buy user-friendly components (3D printers, Arduino controllers, motors, sensors, etc.) at low cost and program them with resources available at home. It can be a fun activity for the whole family (especially in this period when we are mostly at home). I also encourage kids to engage in the sciences and to use computers for educational purposes (not just gaming).
Thank you for the interview. I really enjoyed learning about your unique approach to designing truly innovative robots. Readers who wish to learn more should visit the Bioinspired and Medical Robotics Laboratory.
Akilesh Bapu, Founder & CEO of DeepScribe – Interview Series
Akilesh Bapu is the Founder & CEO of DeepScribe, which uses natural language processing (NLP) and advanced deep learning to generate accurate, compliant, and secure notes of doctor-patient conversations.
What was it that introduced and attracted you to AI and natural language processing?
If I remember correctly, Jarvis from “Iron Man” was the first thing that really attracted me to the world of natural language processing and AI. In particular, I found it fascinating how much faster a human could not only get through tasks but also go into an incredible level of depth on certain tasks and unveil information they wouldn’t have even known about if it weren’t for this AI.
It was this concept of “AI by itself won’t be as good as humans at most tasks but put a human and AI together and that combination will dominate.” Natural language processing is the most efficient way for this human/AI combination to happen.
From then on, I was obsessed with Siri, Google Now, Alexa, and the others. While they didn’t work as seamlessly as Jarvis, I so badly wanted to make them work as Jarvis did. In particular, it became apparent that commands such as “Alexa, do this” and “Alexa, do that” were pretty easy and accurate to handle with the current state of technology. But something like Jarvis, which can actually learn and understand, filter, and pick up on important topics during a conversational exchange—that hadn’t really been done before. This directly relates to one of my core motivations in founding DeepScribe. While we are solving the issue of documentation for physicians, we’re attempting a whole new wave of intelligence while doing it: ambient intelligence. AI that can dig through your day-to-day utterances, find useful information, and use that information to help you out.
You previously did some research using deep learning and NLP at UC Berkeley College of Engineering. What was your research on?
Back at the Berkeley AI Research Lab, I was working on a gene ontology annotator project where we were summarizing PubMed articles with specific output parameters.
The high-level overview: take a task like CNN news article summarization. In that task, you’re taking news articles and summarizing them into roughly a few sentences. In your favor, you have data and the ability to train these models on over a million articles. However, the problem space is enormous, since the summaries have limited structure. In addition, there is hardly any structure to the actual articles. While there have been quite a few improvements since I worked on this project 2.5 years ago, this is still an unsolved problem.
In our research project, however, we were developing structured summaries of articles. A structured summary in this case is similar to a typical summary except we know the exact structure of the output summary. This is helpful since it dramatically reduces the output options for our machine learning model—the challenge was that there was not enough annotated training data to run a data-hungry deep learning model and get usable results.
The core of the work I did on this project was to leverage the knowledge we have around the input data and develop an ensemble of shallow ML models to support it—a technique we invented called the 2-step annotator. The 2-step annotator benchmarked at roughly 15x the accuracy of the previous best (54 percent vs. 3.6 percent).
While side by side this project and DeepScribe may sound entirely different, they are highly similar in how they use the 2-step annotation method to vastly improve results on a limited dataset.
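The interview doesn't detail the 2-step annotator itself, but the general pattern it names — a pipeline of shallow models in which one stage filters and the next assigns structure — can be sketched briefly. Everything below (the data, features, and both stages) is a hypothetical illustration, not the actual research code.

```python
# Hypothetical sketch of a two-step annotation pipeline: stage 1 filters
# relevant sentences; stage 2 assigns a structured field to the survivors.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data (hypothetical).
sentences = [
    "BRCA1 regulates DNA repair.",
    "The weather was discussed at the conference.",
    "TP53 is involved in apoptosis.",
    "Lunch was served at noon.",
]
is_relevant = [1, 0, 1, 0]

relevant_sentences = ["BRCA1 regulates DNA repair.", "TP53 is involved in apoptosis."]
fields = ["dna_repair", "apoptosis"]  # structured output slots (hypothetical)

# Stage 1: shallow relevance filter.
stage1 = make_pipeline(TfidfVectorizer(), LogisticRegression())
stage1.fit(sentences, is_relevant)

# Stage 2: shallow structured-field classifier, trained only on relevant text.
stage2 = make_pipeline(TfidfVectorizer(), LogisticRegression())
stage2.fit(relevant_sentences, fields)

def annotate(sentence: str):
    """Return a structured field for a sentence, or None if irrelevant."""
    if stage1.predict([sentence])[0] == 0:
        return None
    return stage2.predict([sentence])[0]

print(annotate("MDM2 is involved in apoptosis."))
```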
What was the inspiration behind launching DeepScribe?
It all started with my father, who was a medical oncologist. Before electronic health record systems took over health care, physicians would jot down things on paper and spend very little time on notes. However, once EHRs started becoming popular as part of the HITECH Act of 2009, I started noticing that my dad spent more and more time at the computer. He’d start coming home later. On the weekends, he’d be sitting on the couch dictating notes. Simple things like him picking me up from school or basketball practice became a thing of the past as he’d be spending most of his evening hours catching up on documentation.
As a nerdy kid growing up, I would try to find solutions for him by searching the web and having him try them out. Sadly, nothing worked well enough to save him from the long hours of documentation.
Fast forward several years to the summer of 2017—I’m a researcher at the Berkeley AI Research Lab, working on projects in document summarization. One summer when I’m back home, I notice that my dad is still spending copious amounts of time documenting. I ask, “What’s new in the world of documentation? Alexa is everywhere, Google Assistant is so good now. Tell me, what’s the latest in the medical space?” And his answer was, “Nothing has changed.” I thought it was just him, but when I surveyed several of his colleagues, it was the same issue. Their top concern was not the latest in cancer treatment or the novel problems their patients were having—it was documentation. “How can I get rid of documentation? How can I save time on documentation? It’s taking so much of my time.”
I also noticed several companies that had emerged to try to solve documentation. However, they were either too expensive (thousands of dollars per month) or too minimal in terms of technology. Physicians at the time had very few options. That was when the opportunity became clear: if we could create an artificially intelligent medical scribe—a technology that could follow physicians’ patient visits and summarize them—and offer it at a cost accessible to everyone, we could truly bring the joy of care back to medicine.
You were only 22 years old when you launched DeepScribe. Can you describe your journey as an entrepreneur?
At Berkeley, I continued to delve into the world of entrepreneurship as much as possible, primarily with their wide array of classes. My favorites were:
- The Newton Lecture Series—people like Jessica Mah from inDinero or Diane Greene from VMware, Cal alums who gave highly relatable talks about their time at Berkeley and how they started their own companies.
- Challenge Lab—I actually met my co-founder Matt Ko through this class. We were placed in groups and went through a semester-long journey of creating a product and being mentored on what it takes during the early stages to get an idea going.
- Lean Launchpad—by far my favorite of the three; this was a grueling and rigorous process where we were guided by Steve Blank (the acclaimed entrepreneur behind the lean startup movement) to take an idea, validate it through 100 customer interviews, build a financial model, and more. This was the type of class where we pitched our “startup” only to get stopped on slide 1 or 2 and get grilled. If that wasn’t hard enough, we were also expected to interview 10 customers a week. Our idea at the time was to create a patent search that would give results similar to an expensive prior art search, which meant we were pitching to 10 enterprise customers a week. It was great because it taught us to think fast on our feet and be extra resourceful.
DeepScribe started when an investor group called The House Fund was writing checks for students who would turn down their summer internships and spend the summer building their company. We had just shut down Delphi (the patent search engine), and Matt and I had been constantly talking about medical documentation, so everything fell into place—it was the perfect time to give it a shot.
With DeepScribe, we were lucky to have just come fresh out of Lean Launchpad since one of the most important factors in building a product for physicians was to iterate and refine the product around customer feedback. A historical issue with the medical industry has been that software has rarely had physicians in the design loop, therefore resulting in software that wasn’t optimized for the end user.
Since DeepScribe was happening at the same time as my final year at Berkeley, it was a heavy balancing act. I’d show up to class in a suit so I could be on time for a customer demo right after. I’d use all the EE facilities and professors not for anything to do with class but 100 percent for DeepScribe. My meetings with my research mentor even turned into DeepScribe brainstorming sessions.
Looking back, if I had to change one thing about my journey, it would’ve been to put college on hold so I could spend 150 percent of my time on DeepScribe.
Can you describe for a medical professional what the advantages of using DeepScribe are versus the more traditional method of voice dictation or even taking notes?
Using DeepScribe is meant to be very similar to using an actual human scribe. As you talk naturally to your patient, DeepScribe will listen in, pick up on the medically relevant speech that usually goes into your notes, and put it in there for you, using the same medical language that you yourself use. We like to think of it as a new AI-powered member of your medical staff that you can train as you’d like to help with documentation in your electronic health record system. It’s very different from using a voice dictation service, as it eliminates the entire step of having to go back and document. While typical dictation services turn 10 minutes of documentation into 7–8 minutes, DeepScribe turns it into a few seconds. Our physicians report anywhere from 1.5 to 3 hours of time saved per day, depending on how many patients they see.
DeepScribe is device-agnostic, operable from an iPhone, Apple Watch, browser (for telemedicine), or hardware device.
What are some of the speech recognition or NLP challenges that DeepScribe may encounter due to complex medical terminology?
Contrary to popular opinion, complex medical terminology is actually the easiest part for DeepScribe to pick up. The trickiest part for DeepScribe is to pick up on unique contextual statements a patient may give a physician. The more they stray from a typical conversation, the more we see the AI stumble. But as we collect more conversational data, we see it improve on this dramatically every day.
What are the other machine learning technologies that are used with DeepScribe?
The large umbrellas of speech recognition and NLP tend to cover most of the machine learning we’re doing at DeepScribe.
Can you name some of the hospitals, nonprofits, or academic institutions that are using DeepScribe?
DeepScribe started out through a pilot program with the UC Berkeley Health Center. Hartford Healthcare, Texas Medical Center, and Cedar Valley Medical Specialists are a handful of the larger systems DeepScribe is working with.
However, the larger percentage of DeepScribe users comes from the 50 private practices we serve, from Alaska to Florida. Our most popular specialties are primary care, orthopedics, gastroenterology, cardiology, psychiatry, and oncology, but we do support a handful of other specialties.
DeepScribe has recently launched a program to assist with COVID-19. Could you walk us through this program?
COVID-19 has hit our doctors hard. Practices are only seeing 30-40 percent of their patient load, scribe staffing is being cut, and providers are being forced to rapidly switch all their patients on to telemedicine. All this ends up leading to more clerical work for providers—we at DeepScribe firmly believe that in order for this pandemic to come to a halt, physicians must devote 100 percent of their attention and time to taking care of their patients.
To help aid this cause, we are proud to launch a free telemedicine solution to health care professionals fighting this pandemic. Our telemedicine solution is fully integrated with our AI-powered medical scribe solution, eliminating the need for clinical documentation for encounters made on our platform.
We’re also offering our scribe service for free during the pandemic. This means that any physician can get access to a scribe for free to handle their documentation. Our hopes are that by doing this, physicians will be able to focus more of their attention on their patients and spend less time thinking about documentation, leading to a faster halting of the COVID-19 outbreak.
Thank you for the great interview, I really enjoyed learning about DeepScribe and your entrepreneurial journey. Anyone who wishes to learn more should visit DeepScribe.
Stefano Pacifico, and David Heeger, Co-Founders of Epistemic AI – Interview Series
Epistemic AI employs state-of-the-art natural language processing (NLP), machine learning, and deep learning algorithms to map relations among a growing body of biomedical knowledge drawn from multiple public and private sources, including text documents and databases. Through a process of Knowledge Mapping, users work interactively with the platform to map and understand subsets of biomedical knowledge, revealing concepts and relationships that are otherwise missed by traditional search.
We interviewed both Co-Founders of Epistemic AI to discuss these latest advances.
Stefano Pacifico brings 10+ years of experience in applied AI and NLP development. He spent 7 years at Bloomberg and was at Elemental Cognition before starting Epistemic AI.
David Heeger is a Silver Professor of data science and neuroscience at NYU and has spent his career bridging computer science, AI, and bioscience. He is a member of the National Academy of Sciences. As founders, they combine expertise in building applied large-scale AI and NLP systems for understanding large collections of knowledge with expertise in computational biology and biomedical science from years of research in the area.
What is it that introduced and attracted you to AI and Natural Language Processing (NLP)?
Stefano Pacifico: When I was in college in Rome, AI was not popular at all (in fact, it was very fringe). I asked my then-advisor which specialization I should choose among those available. He said: “If you want to make money, Software Engineering and Databases; but if you want to be weird but very advanced, then choose Artificial Intelligence.” I was sold at “weird.” I then started working on knowledge representation and reasoning to study how autonomous agents could play soccer or rescue people. Then two realizations made me fall in love with NLP: first, autonomous agents might have to communicate with natural language among themselves! Second, building formal knowledge bases by hand is hard, while natural language (in text) already provides the largest knowledge base of all. I know these might seem obvious observations today, but they were not as mainstream back then.
What was the inspiration behind launching Epistemic AI?
Stefano Pacifico: I am going to make a bold claim. Nobody today has adequate tooling to understand and connect the knowledge present in large, ever-growing collections of documents and data. I had previously worked on that problem in the world of finance. Think of news, financial statements, pricing data, corporate actions, filings, etc. I found that problem intoxicating. And of course, it’s a difficult problem; and an important one! When I met my co-founder, Dr. David Heeger, we spent quite a bit of time evaluating startup opportunities in the biomedical industry. When we realized the sheer volume of information generated in this field, everything fell into place. Biomedical researchers struggle with information overload while attempting to grapple with the vast and rapidly expanding base of biomedical knowledge, including documents (e.g., papers, patents, clinical trials) and databases (e.g., genes, proteins, pathways, drugs, diseases, medical terms). This is a major pain point for researchers and, with no appropriate solution available, they are forced to use basic search tools (PubMed and Google Scholar) and explore manually curated databases. These tools are suitable for finding documents matching keywords (e.g., a single gene or a published journal paper), but not for acquiring comprehensive knowledge about a topic area or subdomain (e.g., COVID-19), or for interpreting the results of high-throughput biology experiments, such as gene sequencing, protein expression, or screening chemical compounds. We started Epistemic AI with the idea to address this problem with a platform that allows researchers to iteratively:
- Shorten the time to gather information and build comprehensive knowledge maps;
- Surface cross-disciplinary information that can otherwise be difficult to find (real discoveries often come from looking into the white space between disciplines);
- Identify causal hypotheses by finding paths and missing links in their knowledge map (a toy sketch of this follows below).
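As a toy illustration of that last point, here is what path finding and missing-link scoring can look like on a small knowledge graph. The graph contents and the shared-neighbor heuristic are illustrative choices, not Epistemic AI's actual method.

```python
# Toy knowledge map: path finding and missing-link scoring (illustrative
# only; the entities and the heuristic are not Epistemic AI's method).
import networkx as nx

g = nx.Graph()
g.add_edges_from([
    ("drug_A", "protein_X"),      # hypothetical relations
    ("protein_X", "pathway_P"),
    ("pathway_P", "disease_D"),
    ("drug_A", "pathway_P"),
    ("protein_Y", "pathway_P"),
    ("protein_Y", "disease_D"),
])

# Causal hypotheses: every simple path from the drug to the disease.
for path in nx.all_simple_paths(g, "drug_A", "disease_D", cutoff=4):
    print(" -> ".join(path))

# Missing links: unconnected pairs ranked by how many neighbors they share.
candidates = [
    (u, v, len(list(nx.common_neighbors(g, u, v))))
    for u in g for v in g
    if u < v and not g.has_edge(u, v)
]
for u, v, score in sorted(candidates, key=lambda t: -t[2]):
    if score > 0:
        print(f"possible link: {u} -- {v} (shared neighbors: {score})")
```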
What are some of both the public and private sources that are used to map these relations?
Stefano Pacifico: At this time, we are ingesting all the publicly available sources that we can get our hands on, including PubMed and clinicaltrials.gov. We ingest databases of genes, drugs, diseases, and their interactions. We also include private data sources for select clients, but we are not at liberty to disclose any details yet.
What type of machine learning technologies are used for the knowledge mapping?
Stefano Pacifico: One of the deeply held beliefs at Epistemic AI is that zealotry is not helpful for building products. Building an architecture that integrates several machine learning techniques was a decision made early on; those techniques range from knowledge representation to Transformer models, through graph embeddings, but also include simpler models like regressions and random forests. Each component is as simple as it needs to be, but no simpler. While we believe we have already built NLP components that are state-of-the-art for certain tasks, we don’t shy away from simpler baseline models when possible.
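One common way to honor that philosophy — offered purely as an illustration, not as Epistemic AI's actual stack — is a cascade in which a cheap baseline handles confident cases and defers the rest to a heavier model:

```python
# Illustrative baseline-first cascade (not Epistemic AI's actual stack):
# a TF-IDF + logistic regression model answers when confident and defers
# the rest to a heavier (here stubbed) transformer classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [  # toy training data (hypothetical)
    "Gene X activates pathway Y.",
    "No interaction was observed between the compounds.",
]
labels = ["relation", "no_relation"]

baseline = make_pipeline(TfidfVectorizer(), LogisticRegression())
baseline.fit(texts, labels)

def transformer_classify(text: str) -> str:
    """Placeholder for a heavier transformer model (hypothetical stub)."""
    return "relation"

def classify(text: str, threshold: float = 0.8) -> str:
    proba = baseline.predict_proba([text])[0]
    if proba.max() >= threshold:
        return baseline.classes_[proba.argmax()]
    return transformer_classify(text)  # escalate only when unsure

print(classify("Gene Z activates pathway Y."))
```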
Can you name some of the companies, non-profits, or academic institutions that are using the Epistemic platform?
Stefano Pacifico: While I’d love to, we have not agreed with our users to do so. I can say that we had people signing up from very high-profile institutions in all three segments (companies, non-profits, and academic institutions). Additionally, we intend to keep the platform free for academic/non-profit purposes.
How does Epistemic assist researchers in Identifying central nervous system (CNS) and other disease-specific biomarkers?
Dr. David Heeger: Neuroscience is a very highly interdisciplinary field including molecular and cellular biology and genomics, but also psychology, chemistry, and principles of physics, engineering, and mathematics. It’s so broad that nobody can be an expert at all of it. Researchers at academic institutions and pharma/biotech companies are forced to specialize. But we know that the important insights are interdisciplinary, combining knowledge from the sub-specialties. The AI-powered software platform that we’re building enables everyone to be much more interdisciplinary, to see the connections between their individual subarea of expertise and other topics, and to identify new hypotheses. This is especially important in neuroscience because it is such a highly interdisciplinary field to begin with. The function and dysfunction of the human brain is the most difficult problem that science has ever faced. We are on a mission to change the way that biomedical scientists work and even how they think.
Epistemic also enables the discovery of genetic mechanisms of CNS disorders. Can you walk us through how this works?
Dr. David Heeger: Most neurological diseases, psychiatric illnesses, and developmental disorders do not have a simple explanation in terms of genetic differences. There are a handful of syndromic disorders for which a specific mutation is known to cause the disorder. But that’s not typically the case. There are hundreds of genetic differences, for example, that have been associated with autism spectrum disorders (ASD). There is some understanding for some of these genes about the functions they serve in terms of basic biology. For example, some of the genes associated with ASD hold synapses together in the brain (note, however, that the same genes typically perform different functions in other organ systems in the body). But there’s very little understanding about how these genetic differences can explain the complex suite of behavioral differences exhibited by individuals with ASD. To make matters worse, two individuals with the same genetic difference may have completely different outcomes, one diagnosed with ASD and the other not. And two individuals with completely different genetic profiles may have the same outcome with very similar behavioral deficits.

To understand all this requires making the connection from genomics and molecular biology to cellular neuroscience (how do the genetic differences cause individual neurons to function differently), then to systems neuroscience (how do those differences in cellular function cause networks of large numbers of interconnected neurons to function differently), and then to psychology (how do those differences in neural network function cause differences in cognition, emotion, and behavior). And all of this needs to be understood from a developmental perspective.

A genetic difference may cause a deficit in a particular aspect of neural function. But the brain doesn’t just sit there and take it. Brains are highly adaptive. If there’s a missing or broken mechanism, the brain will develop differently to compensate as much as possible. This compensation might be molecular, for example, upregulating another synaptic receptor to replace the function of a broken synaptic receptor. Or the compensation might be behavioral. The end result depends not only on the initial genetic difference but also on the various attempts to compensate relying on other molecular, cellular, circuit, systems, and behavioral mechanisms.
No individual has the knowledge to understand all this. We all need help. The AI-powered software platform that we’re building enables everyone to collect and link all the relevant biomedical knowledge, to see the connections and to identify new hypotheses.
How are biopharma and academic institutions using Epistemic to tackle the COVID-19 challenge?
Stefano Pacifico: We have released a public version of our platform that includes COVID-specific datasets and is freely accessible to anyone doing research on COVID-19. It is available at https://covid.epistemic.ai
What are some of the other diseases or genetic issues that Epistemic has been used for?
Stefano Pacifico: We have collaborated with autism researchers and are most recently putting together a new research effort for cystic fibrosis. But we are happy to collaborate with any other researchers or institutions that might need help with their research.
Is there anything else that you would like to share about Epistemic?
Stefano Pacifico: We are building a movement of people that want to change the way biomedical researchers work and think. We sincerely hope that many of your readers will want to join us!
Thank you both for taking the time to answer our questions. Readers who wish to learn more should visit Epistemic AI.
Emrah Gultekin, CEO and Co-founder of Chooch AI – Interview Series
Emrah is the co-founder and CEO of Chooch, an end-to-end visual AI solution. Chooch provides fast, accurate facial authentication and object recognition for the media, advertising, banking, medical and security industries. Chooch offers an easy-to-use and deployable API, a dashboard and mobile app SDK.
What was your inspiration for launching Chooch AI?
In our previous entrepreneurial experiences, my co-founder and I saw a multitude of data-driven challenges that needed to be solved across a wide variety of verticals, so I decided to dive in and solve the ones that I could. I had started companies before, but this was my first true “deep tech” company.
With our broader team, we’ve worked to develop a visual AI product that is sustainable, scalable, robust, and usable for an array of enterprises. The product is now being utilized by companies in the healthcare, public safety, industrial, media and geospatial industries, with uses that range from fraud prevention and decreasing medical errors to deepening the understanding of our world.
Can you share with us what Chooch AI does?
Chooch copies human visual intelligence into machines. We train and deploy visual AI for customers in the cloud and on the edge and deliver fast and accurate computer vision for any visual process.
We can do that because Chooch AI is a platform for every step of the visual AI process, from data collection, annotation, and labeling to AI training, model deployment, and integration. Because of the broad range of problems we’ve solved, our team now has deep expertise in scoping and developing computer vision projects that are ready for global scale. This can be anything from cell identification to geospatial image analysis to public safety.
What type of imagery can be processed by the computer vision system?
What the human eye can do, Chooch can do better and at scale. For example, the human eye cannot process spectra beyond the visible, but Chooch can detect fevers with IR sensors and process X-rays to detect lung damage. We can do this for video or still imagery, both faster and more accurately than the human eye, and we have deployed over 2,400 models for a variety of applications.
Chooch AI connects to the cloud but is also able to run on a local machine, can you elaborate on how this works?
Yes, this is one of our breakthroughs. We launched with the Chooch AI API, which allows companies to use our cloud server to process their images, but our customers wanted to deploy AIoT on the edge, in places with no connectivity. So we created Chooch Edge AI, which is essentially a standalone AI container generated by our Chooch Cloud AI. For instance, we are able to remotely deploy that AI software on NVIDIA Jetson devices (which are amazing, by the way) and then remotely update the edge AI as needed from the Chooch Dashboard. Technically, the AI software on the edge is called an inference engine. Chooch is able to connect up to four cameras to the edge devices, and the AI can recognize thousands of classes on the edge. We are able to iterate on models, remove models, and train new models on the edge. This is always improving: as chip and hardware providers release more powerful devices, we generate more and more powerful AIoT deployments. We can now run multiple models on the edge with multiple layers of dense classification at very low latency.
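For a rough picture of what such an edge inference engine's main loop can look like, here is a generic multi-camera sketch; the model call is a placeholder, and none of this is Chooch's actual container code.

```python
# Generic multi-camera edge inference loop (illustrative only; not
# Chooch's inference engine). Requires OpenCV: pip install opencv-python
import cv2

def run_model(frame):
    """Placeholder for the on-device model; returns (label, confidence)."""
    return "person", 0.97

# Up to four local cameras, as in the setup described above.
cams = [cv2.VideoCapture(i) for i in range(4)]
cams = [c for c in cams if c.isOpened()]  # keep only connected cameras
if not cams:
    raise SystemExit("no cameras found")

try:
    while True:
        for idx, cam in enumerate(cams):
            ok, frame = cam.read()
            if not ok:
                continue  # camera hiccup; skip this frame
            label, conf = run_model(frame)
            if conf > 0.9:
                print(f"camera {idx}: {label} ({conf:.2f})")
finally:
    for cam in cams:
        cam.release()
```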
Is facial recognition technology used?
We don’t do facial recognition, as a company policy. We only do facial authentication with liveness detection, with the caveat that it is always consent-based—like giving permission to check in to a location, or to board a flight with your face instead of a ticket. Chooch AI can be trained with as few as a couple of images. Facial authentication files are not stored as pictures of faces. And we do liveness detection to make sure people are not able to spoof the system.
Training AI models can be a steep learning curve for the uninitiated, what assistance do you provide for data labelling and annotating?
For the uninitiated, we offer end-to-end training assistance. When companies come to Chooch with a visual problem to solve, our team works in partnership with them to train and deploy AI models. It’s as simple as that. We do labelling and annotation as a service, and generally speaking users supply the data, but we help them organize it. Our training platform can use still images, but with videos we can generate over 1,000 annotated images per minute—that’s another breakthrough, by the way. We take on the whole process, from planning and consulting on data collection to model creation, testing, and support. Our customer relationships become ongoing partnerships.
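The throughput claim is easy to ground in arithmetic: a 30 fps video carries 1,800 frames per minute, so even sampling every second frame yields 900 candidate images, and denser sampling clears 1,000 easily. Below is a generic frame-sampling sketch in OpenCV; it is not Chooch's pipeline, and the function and file names are hypothetical.

```python
# Generic video frame sampling for annotation (not Chooch's pipeline).
# A 30 fps clip carries 1,800 frames per minute of footage, so sampling
# every 2nd frame already yields 900 candidate images per minute.
import cv2

def sample_frames(video_path, every_n=2, out_prefix="frame"):
    """Write every n-th frame of a video to disk; return the count saved."""
    cap = cv2.VideoCapture(video_path)
    saved, index = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break  # end of video
        if index % every_n == 0:
            cv2.imwrite(f"{out_prefix}_{saved:06d}.jpg", frame)
            saved += 1
        index += 1
    cap.release()
    return saved

# e.g. sample_frames("inspection.mp4") -> number of images written
```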
Chooch AI can assist enterprises with COVID-19. Can you detail how it can be of assistance?
Essentially, Chooch AI is supporting public safety with several visual AI models, all while working with partners to deploy complete solutions. One such solution detects the presence or absence of masks, and another detects fevers with IR cameras; the two can be deployed together as a complete solution. Of note, these AI models do not include any facial recognition features. Additionally, we have a research model, provided to researchers, that looks at X-rays and detects lung injury in order to spot signs of COVID-19-related pneumonia.
Is there anything else that you would like to share about Chooch AI?
As a proof point for our technology, our system is live and is being utilized by numerous clients. Our customers are driving real ROI because we can automate literally any visual process at scale, reducing costs and human error.
Thank you for the interview. Readers who wish to learn more should visit Chooch AI.