
Trey Doig, CTO & Co-Founder at Pathlight – Interview Series


Trey Doig is the Co-Founder & CTO at Pathlight. Trey has over ten years of experience in the tech industry, having worked as an engineer for IBM, Creative Commons, and Yelp. Trey was the lead engineer for Yelp Reservations and was responsible for the integration of SeatMe functionality onto Yelp.com. Trey also led the development of the SeatMe web application as the company scaled to support 10x customer growth.

Pathlight helps customer-facing teams boost performance and drive efficiency with real-time insights into customer conversations and team performance. The Pathlight platform autonomously analyzes millions of data points to empower every layer of the organization to understand what's happening at the front lines of their business, and determine the best actions for creating repeatable success.

What initially attracted you to computer science?

I’ve been toying with computers as far back as I can remember. When I turned 12, I picked up programming and taught myself Scheme and Lisp, and soon thereafter started building all sorts of things for me and my friends, primarily in web development.

Much later, when applying to college, I had actually grown bored with computers and set my sights on getting into design school. After being rejected and waitlisted by a few of those schools, I decided to enroll in a CS program and never looked back. Being denied acceptance to design school ended up proving to be one of the most rewarding rejections of my life!

You’ve held roles at IBM, Yelp and other companies. At Yelp specifically, what were some of the most interesting projects that you worked on and what were your key takeaways from this experience?

I joined Yelp through the acquisition of SeatMe, our previous company, and from day one, I was entrusted with the responsibility of integrating our reservation search engine into the front page of Yelp.com.

After just a few short months, we were able to successfully power that search engine at Yelp’s scale, largely thanks to the robust infrastructure Yelp had built internally for Elasticsearch. It was also due to the great engineering leadership there, which allowed us to move freely and do what we did best: ship quickly.

As the CTO & Co-Founder of conversational intelligence company Pathlight, you are helping build an LLMOps infrastructure from scratch. Can you discuss some of the different elements that need to be assembled when deploying an LLMOps infrastructure? For example, how do you manage the prompt management layer, the memory stream layer, and the model management layer?

At the close of 2022, we dedicated ourselves to the serious undertaking of developing and experimenting with Large Language Models (LLMs), a venture that swiftly led to the successful launch of our GenAI-native Conversation Intelligence product merely four months later. This product consolidates customer interactions from diverse channels—be it text, audio, or video—into a single, comprehensive platform, enabling an unparalleled depth of analysis and understanding of customer sentiment.

In navigating this intricate process, we meticulously transcribe, purify, and optimize the data to be ideally suited for LLM processing. A critical facet of this workflow is the generation of embeddings from the transcripts, a step fundamental to the efficacy of our RAG-based tagging, classification models, and intricate summarizations.
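To make that embedding step concrete, here is a minimal sketch of embedding transcript chunks and retrieving the most relevant ones for a downstream RAG prompt. It uses the open-source sentence-transformers library; the model choice, chunking, and transcript are illustrative assumptions, not Pathlight's actual pipeline.

```python
# Minimal sketch: embed cleaned transcript chunks and retrieve the most
# relevant ones for a RAG prompt. Model and chunking are assumptions.
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical cleaned transcript, pre-split into chunks.
chunks = [
    "Customer reports the billing page times out after login.",
    "Agent confirms the issue and escalates to tier two.",
    "Customer asks about a refund for the affected month.",
]
chunk_vecs = model.encode(chunks, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k chunks most similar to the query (cosine similarity)."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = chunk_vecs @ q  # normalized vectors: dot product == cosine
    top = np.argsort(-scores)[:k]
    return [chunks[i] for i in top]

# Retrieved chunks would then be injected into the LLM prompt for
# RAG-based tagging, classification, or summarization.
print(retrieve("Was there a billing problem?"))
```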

What truly sets this venture apart is the novelty and uncharted nature of the field. We find ourselves in a unique position, pioneering and uncovering best practices concurrently with the broader community. A prominent example of this exploration is in prompt engineering—monitoring, debugging, and ensuring quality control of the prompts generated by our application. Remarkably, we are witnessing a surge of startups that are now providing commercial tools tailored for these higher-level needs, including collaborative features, and advanced logging and indexing capabilities.
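As one illustration of the prompt monitoring and quality control described above, a minimal logging layer might look like the sketch below. The record schema and JSONL-on-disk storage are assumptions for illustration, not how Pathlight or any particular vendor tool implements it.

```python
# Minimal sketch of prompt logging for monitoring, debugging, and QC.
# The record schema and JSONL storage are illustrative assumptions.
import hashlib
import json
import time
import uuid

LOG_PATH = "prompt_log.jsonl"

def log_prompt(template_id: str, rendered_prompt: str, response: str,
               model: str, latency_s: float) -> None:
    """Append one prompt/response record for later inspection."""
    record = {
        "id": str(uuid.uuid4()),
        "ts": time.time(),
        "template_id": template_id,
        # Hashing lets identical prompts be grouped and diffed later.
        "prompt_hash": hashlib.sha256(rendered_prompt.encode()).hexdigest(),
        "prompt": rendered_prompt,
        "response": response,
        "model": model,
        "latency_s": latency_s,
    }
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(record) + "\n")
```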

However, for us, the emphasis remains unwaveringly on fortifying the foundational layers of our LLMOps infrastructure. From fine-tuning orchestration and model hosting to establishing robust inference APIs, these lower-level components are critical to our mission. By channeling our resources and engineering prowess here, we ensure that our product not only hits the market swiftly but also stands on a solid, reliable foundation.
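A robust inference API of the kind described typically hides the specific model behind a common interface. The sketch below shows that general pattern; the class names and stubbed providers are hypothetical, not Pathlight's internal design.

```python
# Sketch of a model-agnostic inference interface, so application code is
# not tied to any one provider. Provider classes are hypothetical stubs.
from abc import ABC, abstractmethod

class LLMBackend(ABC):
    @abstractmethod
    def complete(self, prompt: str, max_tokens: int = 512) -> str: ...

class HostedOSSBackend(LLMBackend):
    """Would call a self-hosted open-source model's HTTP endpoint."""
    def complete(self, prompt: str, max_tokens: int = 512) -> str:
        raise NotImplementedError("call your own inference server here")

class ManagedAPIBackend(LLMBackend):
    """Would call a proprietary managed service via its vendor SDK."""
    def complete(self, prompt: str, max_tokens: int = 512) -> str:
        raise NotImplementedError("call the vendor SDK here")

def summarize(conversation: str, backend: LLMBackend) -> str:
    # Product code depends only on the interface, so models can be
    # swapped or upgraded without touching application logic.
    return backend.complete(f"Summarize this conversation:\n{conversation}")
```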

As the landscape evolves and more commercial tools become available to address the higher-level complexities, our strategy positions us to seamlessly integrate these solutions, further enhancing our product and accelerating our journey in redefining Conversation Intelligence.

The foundation of Pathlight CI is powered by a multi-LLM backend. What are some of the challenges of using more than one LLM and dealing with their different rate limits?

LLMs and GenAI are moving at breakneck speed, which makes it absolutely critical that any business application relying heavily on these technologies be capable of staying in lockstep with the latest and greatest trained models, whether those are proprietary managed services or FOSS models deployed in your own infrastructure. That becomes especially true as the demands on your service increase and rate limits cap the throughput you need.
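A common pattern for the rate-limit problem described here is exponential backoff on the primary model, with fallback to a secondary model once retries are exhausted. This is a general-purpose sketch, assuming backend objects like the hypothetical LLMBackend interface above; the RateLimitError type is a stand-in for whatever HTTP 429 error a real provider raises.

```python
# One common multi-LLM rate-limit pattern: exponential backoff on the
# primary model, then fall back to a secondary. Exceptions and backends
# are hypothetical stand-ins, not Pathlight's actual code.
import random
import time

class RateLimitError(Exception):
    """Raised by a backend when the provider returns HTTP 429."""

def complete_with_fallback(prompt: str, primary, secondary,
                           max_retries: int = 4) -> str:
    for attempt in range(max_retries):
        try:
            return primary.complete(prompt)
        except RateLimitError:
            # Exponential backoff with jitter: ~1s, 2s, 4s, 8s.
            time.sleep((2 ** attempt) + random.random())
    # Primary is still throttled; route the request to the secondary model.
    return secondary.complete(prompt)
```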

Hallucinations are a common problem for any company that is building and deploying LLMs, how does Pathlight tackle this issue?

Hallucinations, in the sense I think people generally mean, are a huge challenge in working with LLMs in a serious capacity. There is certainly a level of uncertainty and unpredictability in what comes back from even an identical prompt. There are lots of ways of approaching this problem, including fine-tuning, where you maximize use of the highest-quality models available to you for the purpose of generating tuning data.
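One simple technique for the unpredictability mentioned here (a general approach, not necessarily Pathlight's) is self-consistency sampling: run the same prompt several times and only accept an answer the model converges on. A minimal sketch, reusing a hypothetical backend object like the one above:

```python
# Self-consistency sketch: sample the same prompt N times and accept the
# answer only if a clear majority agrees. The backend object and the
# agreement threshold are illustrative assumptions.
from collections import Counter

def consistent_answer(prompt: str, backend, n: int = 5,
                      threshold: float = 0.6) -> str | None:
    answers = [backend.complete(prompt).strip() for _ in range(n)]
    best, count = Counter(answers).most_common(1)[0]
    # Below the agreement threshold, return None so the caller can flag
    # the output for review instead of trusting a possible hallucination.
    return best if count / n >= threshold else None
```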

Pathlight offers various solutions that cater to different market segments such as travel & hospitality, finance, gaming, retail & ecommerce, contact centers, etc. Can you discuss how the Generative AI that is used differs behind the scenes for each of these markets?

The instant ability to address such a broad range of segments is one of the most uniquely valuable aspects of Generative AI. Having access to models trained on the entirety of the internet, with such an expansive range of knowledge across all sorts of domains, is a unique quality of the breakthrough we’re going through now. Ultimately, this pervasiveness is how AI will prove itself over time, and it is certainly poised to do so soon given its current path.

Can you discuss how Pathlight uses machine learning to automate data analysis and discover hidden insights?

Yes, definitely! We have a deep history of building and shipping machine learning projects over many years. The generative model behind our latest feature, Insight Streams, is a great example of how we’ve leveraged ML to create a product directly positioned to uncover what you don’t know about your customers. It makes use of the AI Agent concept and is capable of producing a steadily evolving set of insights, with a recency and depth that manual analysis can’t match. Over time, these streams naturally learn from themselves.
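Pathlight has not published the internals of Insight Streams, but purely as a sketch of the general AI-agent pattern referenced here, an agent step that feeds its prior insights back into the next pass might look like this:

```python
# Purely illustrative sketch of an agent loop that maintains an evolving
# set of insights, feeding prior insights back into each new pass.
# This shows the general pattern only, not Pathlight's implementation.

def insight_agent_step(backend, new_conversations: list[str],
                       prior_insights: list[str]) -> list[str]:
    prompt = (
        "You analyze customer conversations.\n"
        f"Existing insights:\n{chr(10).join(prior_insights)}\n\n"
        f"New conversations:\n{chr(10).join(new_conversations)}\n\n"
        "Update the insight list: refine, merge, or add insights, "
        "one per line."
    )
    # Returning prior insights alongside new data is what lets the
    # stream build on itself over time.
    return backend.complete(prompt).splitlines()
```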

Data scientists, business analysts, sales or customer ops, or whoever a company designates as the people responsible for analyzing customer support data, are completely inundated with important requests all the time. The deep kind of analysis, the kind that normally requires layers and layers of complex systems and data, rarely gets done.

What is your personal view for the type of breakthroughs that we should expect in the wave of LLMs and AI in general?

My personal view is incredibly optimistic: I expect LLM training and tuning methodologies to continue advancing very quickly, with gains in broader domains and multimodal models becoming the norm. I believe FOSS models are already “just as good as” GPT-4 in many ways, but the cost of hosting those models will continue to be a concern for most companies.

A founding partner of unite.AI & a member of the Forbes Technology Council, Antoine is a futurist who is passionate about the future of AI & robotics.

He is also the Founder of Securities.io, a website that focuses on investing in disruptive technology.