

Donny White, CEO & Co-Founder of Satisfi Labs – Interview Series



Founded in 2016, Satisfi Labs is a leading conversational AI company. Early success came from its work with the New York Mets, Macy’s, and the US Open, enabling easy access to information often unavailable on websites.

Donny spent 15 years at Bloomberg before entering the world of start-ups and holds an MBA from Cornell University and a BA from Baruch College. Under Donny’s leadership, Satisfi Labs has seen significant growth in the sports, entertainment, and tourism sectors, receiving investments from Google, MLB, and Red Light Management.

You were at Bloomberg for 14 years when you first felt the entrepreneurial itch. Why was being an entrepreneur suddenly on your radar?

During my junior year of college, I applied for a job as a receptionist at Bloomberg. Once I got my foot in the door, I told my colleagues that if they were willing to teach me, I could learn fast. By my senior year, I was a full-time employee and had shifted all of my classes to night classes so I could do both. Instead of going to my college graduation at age 21, I spent that time managing my first team. From that point on, I was fortunate to work in a meritocracy and was elevated multiple times. By 25, I was running my own department. From there, I moved into regional management and then product development, until eventually, I was running sales across the Americas. By 2013, I began wondering if I could do something bigger. I went on a few interviews at young tech companies, and one founder said to me, “We don’t know if you’re good or Bloomberg is good.” It was then that I knew something had to change, and six months later I was the VP of sales at my first startup, Datahug. Shortly after, I was recruited by a group of investors who wanted to disrupt Yelp. While Yelp is still alive and well, in 2016 we aligned on a new vision and I co-founded Satisfi Labs with the same investors.

Could you share the genesis story behind Satisfi Labs?

I was at a baseball game at Citi Field with Randy, Satisfi’s current CTO and Co-founder, when I heard about one of their specialties, bacon on a stick. We walked around the concourse and asked the staff about it, but couldn’t find it anywhere. It turns out it was tucked away at one end of the stadium, which prompted the realization that it would have been much more convenient to ask the team directly through chat. This is where our first idea was born. Randy and I both come from finance and algorithmic trading backgrounds, which led us to take the concept of matching requests with answers and build our own NLP for the hyper-specific inquiries that get asked at locations. The original idea was to build individual bots that would each be experts in a particular field of knowledge, especially knowledge that isn’t easily accessible on a website. From there, our system would have a “conductor” that could tap each bot when needed. This is the original system architecture that is still being used today.

Satisfi Labs had designed its own NLP engine and was on the cusp of publishing a press release when OpenAI disrupted your tech stack with the release of ChatGPT. Can you discuss this time period and how this forced Satisfi Labs to pivot its business?

We had a scheduled press release to announce our patent-pending Context-based NLP upgrade for December 6, 2022. On November 30, 2022, OpenAI announced ChatGPT. The announcement of ChatGPT changed not only our roadmap but also the world. Initially, we, like everyone else, were racing to understand the power and limits of ChatGPT and what that meant for us. We soon realized that our contextual NLP system did not compete with ChatGPT, but could actually enhance the LLM experience. This led to a quick decision to become an OpenAI enterprise partner. Since our system started with the idea of understanding and answering questions at a granular level, we were able to combine the “bot conductor” system design and seven years of intent data to upgrade the system to incorporate LLMs.

Satisfi Labs recently filed a patent for a Context LLM Response System. What is this, specifically?

This July, we unveiled our patent-pending Context LLM Response System. The new system combines the power of our patent-pending contextual response system with large language model capabilities to strengthen the entire Answer Engine. The new Context LLM technology integrates large language model capabilities throughout the platform, from intent routing to answer generation and intent indexing, which also drives its unique reporting capabilities. The platform takes conversational AI beyond the traditional chatbot by harnessing the power of LLMs such as GPT-4. Our platform allows brands to respond with either generative AI answers or pre-written answers, depending on how much control they need over the response.
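For readers curious how such a control split might look in practice, here is a minimal, hypothetical sketch of routing between pre-written and generative answers. The intent names, answers, and the stubbed LLM call are illustrative assumptions, not Satisfi Labs' actual implementation.

```python
# Hypothetical sketch of the "answer control" idea described above: route each
# recognized intent either to a curated, pre-written answer (where the brand
# wants full control) or to a generative LLM answer. Names and data are
# illustrative, not Satisfi Labs' actual implementation.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Intent:
    name: str
    controlled: bool           # True -> always use the brand-approved answer
    prewritten: Optional[str]  # curated copy approved by the brand


# Tiny illustrative intent catalog for a venue.
INTENTS = {
    "bag_policy": Intent("bag_policy", True,
                         "Bags larger than 16 x 16 x 8 inches are not permitted."),
    "food_recommendation": Intent("food_recommendation", False, None),
}


def generate_with_llm(question: str) -> str:
    """Stand-in for a call to an LLM such as GPT-4 (omitted here)."""
    return f"[generated answer for: {question}]"


def answer(question: str, intent_name: str) -> str:
    intent = INTENTS[intent_name]
    if intent.controlled and intent.prewritten:
        return intent.prewritten           # brand-controlled response
    return generate_with_llm(question)     # generative response


if __name__ == "__main__":
    print(answer("Can I bring a backpack?", "bag_policy"))
    print(answer("What should I eat at the game?", "food_recommendation"))
```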

Can you discuss the current disconnect between most company websites and LLM platforms in delivering on-brand answers?

ChatGPT is trained on a wide range of information and therefore does not have the granular training needed to answer industry-specific questions with the specificity that most brands expect. Additionally, the accuracy of the answers LLMs provide is only as good as the data they are given. When you use ChatGPT, it is sourcing data from across the internet, which can be inaccurate, and it does not prioritize a brand’s data over other data. We have been serving various industries over the past seven years, gaining valuable insight into the millions of questions asked by customers every day. This has enabled us to understand how to tune the system with greater context per industry and provide robust and granular intent reporting capabilities, which are crucial given the rise of large language models. While LLMs are effective in understanding intent and generating answers, they cannot report on the questions asked. Using years of extensive intent data, we have efficiently created standardized reporting through our Intent Indexing System.
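As a rough illustration of intent-level reporting, the sketch below tags each question with an intent and aggregates the counts. The classifier and intent names are invented placeholders, not the patent-pending Intent Indexing System.

```python
# Illustrative sketch of intent reporting: tag every incoming question with an
# intent, then aggregate the tags so a brand can see what customers actually ask.
# The classifier and intent names are invented placeholders.
from collections import Counter


def classify_intent(question: str) -> str:
    """Stand-in for an NLP/LLM intent classifier."""
    q = question.lower()
    if "park" in q:
        return "parking"
    if "ticket" in q:
        return "tickets"
    return "other"


questions = [
    "Where do I park?",
    "Can I upgrade my ticket?",
    "Is parking free for season members?",
]

report = Counter(classify_intent(q) for q in questions)
print(report.most_common())  # e.g. [('parking', 2), ('tickets', 1)]
```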

What role do linguists play in enhancing the abilities of LLM technologies?

The role of prompt engineer has emerged with this new technology, which requires a person to design and refine prompts that elicit a specific response from the AI. Linguists have a great understanding of language structure, such as syntax and semantics, among other things. One of our most successful AI engineers has a linguistics background, which allows her to be very effective in finding new and nuanced ways to prompt the AI. Subtle changes in the prompt can have profound effects on how accurately and efficiently an answer is generated, which makes all the difference when we are handling millions of questions across multiple clients.
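To make the point about prompt sensitivity concrete, here is a small sketch that sends the same question with two different prompt templates and compares the responses. It assumes the OpenAI Python SDK, an API key in the environment, and a GPT-4-class model; the prompt wording is invented for illustration, not Satisfi Labs' prompts.

```python
# Sketch of prompt refinement: the same question is sent with two prompt
# variants so their answers can be compared side by side. Assumes the OpenAI
# Python SDK with an API key in OPENAI_API_KEY; model name and prompt wording
# are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

QUESTION = "Where can I find gluten-free food at the stadium?"

PROMPTS = {
    "baseline": "Answer the fan's question: {q}",
    "refined": (
        "You are a stadium guest-services assistant. Answer in two sentences, "
        "name specific concession stands only if you are certain of them, and "
        "say you are unsure otherwise.\nQuestion: {q}"
    ),
}

for label, template in PROMPTS.items():
    reply = client.chat.completions.create(
        model="gpt-4o",  # assumed GPT-4-class model
        messages=[{"role": "user", "content": template.format(q=QUESTION)}],
    )
    print(f"--- {label} ---")
    print(reply.choices[0].message.content)
```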

What does fine-tuning look like on the backend?

We have our own proprietary data model that we use to keep the LLM in line. This allows us to build our own fences to keep the LLM under control, as opposed to having to search for fences. Second, we can leverage tools and features that other platforms utilize, which allows us to support them on our platform.

Fine-tuning training data and using Reinforcement Learning (RL) in our platform can help mitigate the risk of misinformation. Fine-tuning, as opposed to querying the knowledge base for specific facts to add, creates a new version of the LLM that is trained on this additional knowledge. On the other hand, RL trains an agent with human feedback and learns a policy for how to answer questions. This has proven to be successful in building smaller-footprint models that become experts in specific tasks.
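As a hedged illustration of what preparing fine-tuning data can look like, the snippet below writes a couple of invented Q&A examples in the OpenAI-style chat JSONL format (one training example per line); it is not Satisfi Labs' actual training data or pipeline.

```python
# Minimal sketch of preparing fine-tuning data in the OpenAI-style chat JSONL
# format (one training example per line). The Q&A pairs and file name are
# invented for illustration.
import json

examples = [
    {"messages": [
        {"role": "system", "content": "You answer venue questions concisely."},
        {"role": "user", "content": "What time do gates open?"},
        {"role": "assistant", "content": "Gates open 90 minutes before first pitch."},
    ]},
    {"messages": [
        {"role": "system", "content": "You answer venue questions concisely."},
        {"role": "user", "content": "Is re-entry allowed?"},
        {"role": "assistant", "content": "No, re-entry is not permitted once you exit."},
    ]},
]

with open("venue_finetune.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```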

Can you discuss the process for onboarding a new client and integrating conversational AI solutions?

Since we focus on destinations and experiences such as sports, entertainment, and tourism, new clients benefit from those already in the community, making onboarding very simple. New clients identify where their most current data sources live, such as a website, employee handbooks, blogs, etc. We ingest the data and train the system in real time. Since we work with hundreds of clients in the same industry, our team can quickly provide recommendations on which answers are best suited for pre-written responses versus generated answers. Additionally, we set up guided flows such as our dynamic Food & Beverage Finder so clients never need to deal with a bot-builder.
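For a sense of what ingesting a client's existing sources might involve, here is a toy sketch that chunks text from a few stand-in sources and retrieves the most relevant passages with a simple keyword-overlap score. It is purely illustrative and not Satisfi Labs' pipeline.

```python
# Toy sketch of data ingestion: pull text from a client's existing sources
# (website pages, handbooks, blog posts), split it into chunks, and index the
# chunks so questions can be matched to relevant passages. The sources and the
# keyword-overlap score are illustrative, not Satisfi Labs' pipeline.
import re
from collections import Counter


def chunk(text: str, size: int = 60) -> list:
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]


def tokens(text: str) -> Counter:
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))


# Stand-in "data sources" representing scraped website/handbook content.
SOURCES = {
    "website/faq": "Gates open 90 minutes before the event. Outside food and drink are not allowed.",
    "handbook/policies": "Guests may bring one sealed water bottle. Service animals are welcome at all times.",
}

INDEX = [(name, passage, tokens(passage))
         for name, text in SOURCES.items()
         for passage in chunk(text)]


def retrieve(question: str, k: int = 2):
    q = tokens(question)
    scored = sorted(INDEX, key=lambda item: sum((q & item[2]).values()), reverse=True)
    return [(name, passage) for name, passage, _ in scored[:k]]


if __name__ == "__main__":
    for name, passage in retrieve("Can I bring a water bottle into the venue?"):
        print(name, "->", passage)
```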

Satisfi Labs is currently working closely with sports teams and companies. What is your vision for the future of the company?

We see a future where more brands will want to control more aspects of their chat experience. This will result in an increased need for our system to provide more developer-level access. It does not make sense for brands to hire developers to build their own conversational AI systems, as the expertise needed will be scarce and expensive. However, with our system feeding the backend, their developers can focus more on the customer experience and journey by having greater control of the prompts, connecting proprietary data to allow for more personalization, and managing the chat UI for specific user needs. Satisfi Labs will be the technical backbone of brands’ conversational experiences.

Thank you for the great interview; readers who wish to learn more should visit Satisfi Labs.

A founding partner of unite.AI & a member of the Forbes Technology Council, Antoine is a futurist who is passionate about the future of AI & robotics.

He is also the Founder of Securities.io, a website that focuses on investing in disruptive technology.