Tomer Aharoni, CEO and Co-Founder of Nagish – Interview Series

Tomer Aharoni, CEO and Co-Founder of Nagish, brings a strong technical foundation built from his work as a software engineer at Bloomberg, his NLP and IoT research at Columbia University, and earlier technology intelligence roles in the Israel Defense Forces. That background is channeled into his passion for accessibility and the intersection of technology and communication.
Nagish is an AI-powered communication platform designed to make phone calls fully accessible for people who are deaf or hard of hearing. The app provides real-time captioning and text-to-speech capabilities while allowing users to keep their existing phone number, maintain complete privacy, and manage conversations through features like personalized dictionaries, saved transcripts, and seamless device integration.
You’ve worked at Bloomberg and conducted NLP research at Columbia. What moment or insight led you to channel that experience into creating Nagish?
During my undergraduate studies at Columbia, I was sitting in class one day when I got a phone call. I couldn’t pick it up because that would have interrupted the entire class, and it got me thinking: how do you conduct a phone call if you can’t hear or speak? That thought led to a bigger question: how do Deaf and Hard-of-Hearing people communicate on the phone?
That was 2019, and we (Alon Ezer, my co-founder, and I) discovered that the Deaf community was heavily reliant on interpreters and captioning assistants. We thought it was crazy, so we started reaching out to folks from the local Deaf community, and what we heard really surprised us. “I just hang up when someone calls me,” “I don’t use the phone,” or “I ask my brother to call for me” were just some of the answers we received when we asked people how they use the phone.
Later that summer, I interned as a software engineer at Bloomberg. On my team, we had another intern who was deaf. Every time I wanted to meet with her, I had to align schedules with her and two interpreters. The casual “let’s jump on a quick call to figure this out” was simply impossible. After talking with HR about it, I learned that finding interpreters familiar with technical jargon was nearly impossible; we used the two we had whenever they were available, but they were not available full-time.
The more we learned, the clearer it became that these weren’t isolated inconveniences but part of a much larger pattern. Even today, with advances that have improved accessibility, there are still many challenges and areas that need to be addressed. At Nagish, we recently conducted a survey and released a report, The Impact of Communication Technology in Empowering the Deaf and Hard-of-Hearing, which found that 65% of Deaf individuals said they need assistance from a hearing person at least once a week to communicate effectively. That reliance creates real barriers in professional settings, reflected in the fact that 62% of Deaf respondents said communication challenges shaped their career decisions and limited their ability to pursue or advance in certain roles.
These experiences, and my growing connections with deaf individuals, led me to build the first iteration of Nagish. We have a single belief that hasn’t changed – communication should be accessible and private.
Alon and I built a prototype, and the response was incredible. We realized how life-changing Nagish could be. Then COVID hit, the world went remote, and the need exploded; the lack of accessibility in how people communicate became glaringly apparent.
Can you share what the early days of Nagish were like, and what challenges you faced in merging accessibility goals with cutting-edge AI technology?
The early days of Nagish were during the pandemic, so there wasn’t a lot happening in our lives beyond work. Alon and I lived around the block from each other and had a lot of time to brainstorm, prototype, and implement the latest technologies. We worked out of our apartments for 12+ hours a day for months.
Having this amount of time on our hands let us spend a lot of it talking to our users and understanding their needs. We didn’t want to make assumptions. At this point, we still had no intention of making it a company. What gave us the drive was hearing from users about their struggles and knowing that we had a chance to solve them with technology.
How does Nagish’s AI technology bridge communication between Deaf or hard-of-hearing individuals and the hearing world in ways that existing tools cannot?
Nagish uses AI to bridge communication gaps. Our engines turn speech into text, text back into speech, and sign language into text (and vice versa) in real time. That means a Deaf or Hard-of-Hearing person can simply see what’s being said on a call and reply by typing or speaking, while the hearing person on the other end just experiences a standard phone call. Before this kind of AI existed, people had to rely on human-operated relay services, where a third person sat on the line and did all the transcribing.
With Nagish, there’s no relay operator, no interpreter to schedule, and no waiting around for someone else to be available. The app puts immediacy, privacy, and independence back into phone calls, something traditional relay services just can’t offer.
Since Nagish is AI-powered, it can scale to every type of call: work meetings, family check-ins, emergencies, and customer-service calls. The app is designed to easily integrate into regular life: users can keep their own number, get real-time captions, and use the same app across phone calls and in-person conversations. The whole experience is designed to reduce friction and make communication feel as natural and seamless as possible.
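For technically minded readers, the flow described above can be pictured as two concurrent loops: one streams the caller’s audio into a speech-to-text engine and surfaces captions, while the other turns the user’s typed replies into speech for the other side. The sketch below is a simplified illustration of that idea, not Nagish’s actual code; `stt_engine` and `tts_engine` are hypothetical stand-ins for whatever streaming speech provider is used.

```python
# Hedged sketch of a two-way call-accessibility loop (not Nagish's code):
# incoming audio is streamed to speech-to-text and shown as captions, while
# typed replies go through text-to-speech back to the caller.
import asyncio


async def caption_incoming_audio(audio_frames: asyncio.Queue,
                                 captions: asyncio.Queue, stt_engine) -> None:
    """Stream caller audio to STT and push captions as they arrive."""
    while True:
        frame = await audio_frames.get()
        if frame is None:                      # call ended
            await captions.put(None)
            break
        text = await stt_engine.transcribe(frame)   # hypothetical streaming call
        if text:
            await captions.put(text)


async def speak_typed_replies(replies: asyncio.Queue,
                              outgoing_audio: asyncio.Queue, tts_engine) -> None:
    """Convert the user's typed replies to audio and send them to the caller."""
    while True:
        reply = await replies.get()
        if reply is None:                      # user left the call
            break
        audio = await tts_engine.synthesize(reply)  # hypothetical TTS call
        await outgoing_audio.put(audio)


async def run_call(stt_engine, tts_engine) -> None:
    """Run both directions of the call concurrently until each sees a sentinel."""
    audio_frames, captions = asyncio.Queue(), asyncio.Queue()
    replies, outgoing_audio = asyncio.Queue(), asyncio.Queue()
    await asyncio.gather(
        caption_incoming_audio(audio_frames, captions, stt_engine),
        speak_typed_replies(replies, outgoing_audio, tts_engine),
    )
```

The key point of the structure is that captioning and speech synthesis run independently, so neither side of the conversation has to wait on the other.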
In what ways does your platform go beyond standard transcription or captioning to make interactions more natural and inclusive?
We know that language isn’t just words, it’s also culture, identity, and nuance. That’s especially true for sign languages, which rely on facial expression, emotion, and regional variation. To make interactions feel natural instead of mechanical, we collaborate directly with Deaf linguists and sign-language experts. They help shape how our AI learns and behaves, so the technology is built with the community, not just trained on their data.
Standard transcription tools often stop at “here are the words that were said.” Our goal is to support an actual conversation. We’re implementing AI agents that can provide context and manage the flow of the call, beyond just providing captions or reading typed text aloud. In addition, Nagish offers real-time captions optimized for conversational flow, with features like adjustable fonts, spam filtering, voicemail transcription, and the ability to save and review transcripts on your own device when you choose to. All of that creates an experience equivalent to the one hearing people have on phone calls.
What role does natural language processing play in ensuring your platform captures not just words but intent and tone?
Natural language processing and natural language understanding are at the core of how Nagish captures not just what someone says, but what they mean. Speech is full of cues that add context, like tone, emphasis, and more, and our NLP models are designed to pick up on those layers so users get more than a basic transcript. The goal is to make the captions feel as close to a natural conversation as possible.
Because Nagish is built for real-world situations, such as medical calls, work meetings, and even emergencies, our models are trained to handle fast speech, overlapping voices, and emotional nuance. Context awareness is a big reason we often outperform both human transcribers and other AI tools. The system doesn’t just guess at words; it uses the flow of the conversation to understand intent.
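To make “context awareness” concrete, one common pattern (a hedged sketch of the general technique, not a description of Nagish’s models) is to feed the recognizer a rolling window of recent turns plus the user’s personal dictionary as phrase hints, so in-context names and jargon win over acoustically similar words. The `recognize` callable here is a hypothetical function, not a specific vendor API.

```python
# Hedged sketch: biasing a speech recognizer with conversational context and a
# personal dictionary, so captions prefer the words the user actually expects.
from collections import deque


class ContextAwareCaptioner:
    def __init__(self, recognize, personal_dictionary, context_turns: int = 5):
        self.recognize = recognize                   # hypothetical STT function
        self.personal_dictionary = set(personal_dictionary)
        self.context = deque(maxlen=context_turns)   # rolling window of recent turns

    def caption(self, audio_chunk: bytes) -> str:
        # Combine user-specific vocabulary with words from recent turns so the
        # recognizer can favor in-context terms (names, jargon) over lookalikes.
        hints = self.personal_dictionary | {
            word for turn in self.context for word in turn.split()
        }
        text = self.recognize(audio_chunk, phrase_hints=sorted(hints))
        self.context.append(text)
        return text
```

The same idea generalizes: the more of the conversation the system is allowed to remember (within privacy constraints), the better it can resolve ambiguous words by intent rather than by sound alone.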
How is Nagish helping employers build more inclusive workplaces while addressing the financial and logistical barriers that have long limited accessibility?
At Nagish, we are helping employers build more inclusive workplaces by removing the financial and logistical barriers that have made accessibility difficult to scale. Traditionally, creating an accessible workplace has meant relying on scheduled interpreters, who are essential but not always practical for everyday communication, such as quick calls, impromptu meetings, or time-sensitive tasks. These limitations create delays, add cost, and can unintentionally exclude Deaf and Hard-of-Hearing employees from the flow of work.
Nagish is working to change that dynamic, giving employees the ability to communicate independently and on demand. When companies remove those barriers, people can participate fully, leading to stronger teams, better retention, and a more equitable workplace.
According to a recent survey we conducted, more than 60% of Deaf and Hard-of-Hearing respondents said communication barriers had impacted their career decisions and professional growth. It’s a serious challenge, and even with all the progress made in the past few years, it shows there is still a lot of work to be done.
We enable employers to move from reactive accommodations to proactive inclusion, creating workplaces where every employee can contribute independently and confidently.
What kind of feedback have you received from Deaf and hard-of-hearing users, and how has it influenced the product’s evolution?
We built Nagish with the Deaf community from day one, and since then we’ve received a mix of excitement, curiosity, and, in rare cases, some hesitation, which is exactly how it should be. The Deaf community is very mindful and inquisitive about new technology, and with good reason: they’ve heard so many over-promises in the past, and we’re trying to avoid adding to them. We are prioritizing progress over perfection, which takes time, but our end goal is perfection.
This community-first mindset is reinforced by what we learned in our recent report. After adopting assistive technology, users showed a major increase in daily independence: the number of people who could communicate independently rose from 37% to 60% for Deaf users, and from 32.9% to 63% for Hard-of-Hearing users. That shift mirrors the feedback we hear every day: people want tools that make communication easier, more consistent, and available in moments when interpreters aren’t accessible or when they prefer privacy.
When it comes to our research into creating better sign language interpretation technologies, our goal isn’t to replace human interpreters or existing communication methods, but to add another option, a tool that makes accessibility more consistent and available anywhere, anytime. Feedback from users has reinforced how important an “additional option” is, especially in moments when an interpreter isn’t available or when someone simply wants privacy and independence. For many, it makes communication possible in situations where it would otherwise have felt inconvenient, delayed, or out of reach.
We’re taking a community-first approach to make sure the technology feels authentic, accurate, and respectful. As long as we keep building with sign language users, we believe this will be received as an empowering step forward.
Privacy is a key concern in accessibility tech — how does Nagish handle sensitive conversations and maintain user trust?
Privacy is critical to Nagish’s mission to empower Deaf and Hard-of-Hearing users. The first thing to mention is that Nagish eliminates the need for a live transcriber, so right away there is a level of privacy that wasn’t possible before.
On the technical side, Nagish is private by design. We don’t record calls, and never store call transcripts on our servers beyond the duration of a call. We also don’t use any call data for training purposes. When users choose to save transcripts, they’re stored locally on their device rather than in a shared cloud. Features like end-to-end secure captioning and local storage of transcripts are there specifically to protect highly sensitive conversations—whether they’re about health, employment, or personal relationships.
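As a rough illustration of what “private by design” can mean in code, here is a minimal sketch, built on my own assumptions rather than Nagish’s implementation, of a transcript policy where captions live only in memory for the duration of a call and are written to local device storage only when the user explicitly opts in; nothing is uploaded to a server.

```python
# Hedged sketch of a local-only transcript policy (assumptions, not Nagish's code):
# captions exist in memory while a call is live, are saved to the device only on
# explicit opt-in, and are discarded afterward in all cases.
import json
import time
from pathlib import Path


class CallTranscript:
    def __init__(self) -> None:
        self._lines: list[dict] = []          # in-memory only while the call is live

    def add_caption(self, speaker: str, text: str) -> None:
        """Append one caption line with a timestamp."""
        self._lines.append({"t": time.time(), "speaker": speaker, "text": text})

    def finish(self, save_locally: bool, storage_dir: Path | None = None) -> None:
        """End the call: optionally persist to local storage, then wipe memory."""
        if save_locally and storage_dir is not None:
            storage_dir.mkdir(parents=True, exist_ok=True)
            out_file = storage_dir / f"call-{int(time.time())}.json"
            out_file.write_text(json.dumps(self._lines, indent=2))  # stays on the device
        self._lines.clear()                   # discard the in-memory copy either way
```

The design choice being illustrated is simply that persistence is the user’s decision and happens on the user’s device, rather than being a default behavior of the service.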
How do you see AI reshaping accessibility in the coming decade, and what gaps still remain for technology to fill?
One of the biggest issues with digital accessibility is the lack of education and observability: engineers don’t implement alt-text, designers pick inaccessible colors because they look good, and product managers make product decisions based on KPIs.
As AI gets more involved in every aspect of product development, from engineering to design to copywriting, we’re seeing a more proactive approach to accessibility. AI could change accessibility from something reactive and “patched on” into something proactive and ambient. We’ll also see a new wave of tools that augment communication in various settings, not just calls but workplaces, classrooms, transportation, and public services, so that people with disabilities, and Deaf and Hard-of-Hearing people in particular, don’t have to constantly request accommodations; they’ll just be there by default.
How do you envision the collaboration between human interpreters and AI evolving — will one eventually replace the other, or do they strengthen each other?
Sign language interpreters do incredible work. They’re essential for the community, accessibility, and communication. But the reality is, there simply aren’t enough of them. In the U.S., for example, there are over 500,000 people who use American Sign Language as their primary language, and only about 10,000 certified interpreters. That means a huge number of situations, from doctor visits and parent-teacher meetings to job interviews, often lack accessible communication.
Even when interpreters are available, there are challenges around scheduling, cost, and geography. Someone living in a rural area would have a much harder time getting an interpreter, and that delay can have real-world consequences, especially in healthcare or emergency settings.
AI can help bridge that gap. What we’re building isn’t meant to replace interpreters, but to complement their work and make accessibility more scalable. Think of it as a tool that steps in when a human interpreter isn’t available.
Google Translate didn’t replace professional translators, but it made it possible to bridge communication gaps on a day-to-day basis.
With advances in computer vision and natural language processing, AI holds the promise of interpreting sign language in real time. That means more people can communicate instantly, whether through a video call, a public kiosk, or an emergency service.
Thank you for the great interview. Readers who wish to learn more should visit Nagish.












