
Deon Nicholas, Co-Founder & CEO of Forethought – Interview Series


Deon Nicholas is the Co-Founder & CEO of Forethought. Previously, Deon built products and infrastructure at Facebook, Palantir, Dropbox, and Pure Storage. He has ML publications and infrastructure patents, was a World Finalist at the ACM International Collegiate Programming Contest, and was named to Forbes 30 under 30. Originally from Canada, Deon enjoys spending time with his wife and son, playing basketball, and reading as many books as he can get his hands on.

Forethought, the AI and Machine Learning platform for the enterprise, began with a focus on customer support. The company’s AI can learn from internal documents, email, chat, and even old support tickets to automatically resolve tickets, route them correctly, and quickly surface the most relevant institutional knowledge.

We sat down with Deon at the 2023 Upper Bound conference on AI, held annually in Edmonton, AB and hosted by Amii (Alberta Machine Intelligence Institute).

This is our second interview with Deon. In our first interview, we focused on his past and how he got his start in AI; in this one, we focus on his vision for the future.

Could you summarize what Forethought is?

At Forethought, we are the generative AI for customer support company. We launched in 2018 at TechCrunch Disrupt, and since then we've grown to power support for large-scale companies like Instacart and others. Basically, we provide an intelligence layer on top of their customer support ticket data, which then translates into things like chatbots, but hopefully smarter, all the way through to agent assistance, all of it nowadays leveraging generative AI and things like that.

How long ago did you make the switch to generative AI?

What’s interesting is that we have been leveraging some form of generative AI since the early days, even when we were founded. Nothing, I would argue, as powerful as what's available today. But for example, GPT-2 was launched, I believe, in 2018 or 2019 and open-sourced, and there were other models like T5. So, we had been leveraging various large language models, tinkering with those as well as some of our own that we'd been building internally. But what's changed, I think, is that it used to be a feature that gave you some advantages when you used it sparingly. Over the last six months, what's really changed is that it's become this seminal shift in how business models work and how the engines work. And for us, we actually had to rethink our engine, I would say, in the last few months. We launched SupportGPT in March of this year, 2023. That was leveraging large language models like OpenAI's GPT and really rethinking how we do the whole thing. It's not necessarily a new product, but a new engine that powers all our products, which has led to a ton of improvements across the stack.

What’s the process for a company that wants to begin using SupportGPT?

At the end of the day, it's embedded into one of our products. So what I'll say is, no matter what, it always starts with your data, and that's our differentiator. The unique thing about Forethought is that with most companies, you start with a blank canvas and you have to hard-code rules. For us, you start by integrating. So, if you're leveraging a popular help desk or CRM like Zendesk or Salesforce Service Cloud and you want to work with us, you sign up and then install our integration into your help center. That kicks off indexing, training, fine-tuning, and all those things, and builds the model and builds the engine. And then from there, you can configure, you can edit, and you can deploy one of our products. Our most popular product is Solve, which is an AI agent that can sit on a website, almost like a chatbot, or can exist in email or really any form, and start automatically conversing with and responding to your customers, leveraging the SupportGPT engine, like ChatGPT for your website, so to speak.

But then that automation only handles 50% of issues, so to speak. So, what about the issues that still need to go to a human? Well, we also have Triage, which is the name of our second product: it routes issues, tags them, and makes sure they get to the right agent in the right channel at the right time, so you can deploy that. Then Assist is an agent co-pilot, an internal-facing GPT for customer support agents. And ultimately, Discover, which is our most recent but in many ways most powerful product, looks for insights and, also using generative AI, makes recommendations to the business on what should be updated and what should be changed.

With so much reliance on generative AI, are hallucinations an issue at all?

Yeah. I think hallucination is one of the big problems in leveraging generative AI for most practical use cases, and I recorded a video on LinkedIn about this, which went a little bit viral, on hallucination being one of the big problems with generative AI. There are many cases where it's not an issue, like if you have a human in the loop, or it's truly just a creative use case and you want something new, like marketing, where you're coming up with ad copy or blogs and you're going to have somebody edit at the end of the day. It's actually good that they hallucinate a little bit; it's a form of creativity. But in cases like finance, healthcare, or customer support, where your health, your wealth, or your livelihood is at stake, or you want a correct answer, hallucination is huge. It's a huge problem, and it's a big limiter. So, in many ways, one of the cool things we realized, and why we can do this and nobody else can, is because we've been leveraging some form of generative AI for the past, call it, five years.

All of our models have been focused on correctness from the get-go: understanding the policies, understanding the workflows, that when somebody asks for a refund, if it's within 30 days, you can issue the refund, and if it's not, you can't. All of that stuff we'd already built. And when it came time to leverage these more modern large language models, we found that the humanization of the large language models, plus the actual information and correctness that we could provide from the Forethought models, was a perfect match. Then you can leverage all of that correctness through prompt engineering and through fine-tuning, which actually brings the hallucination problem down. It's not eliminated completely, but it gets minimized to the point where it's very effective. And our eventual goal, obviously, is to have fewer hallucinations than humans would have errors. As long as your accuracy rate is on par with human accuracy, then you're in a really strong spot.
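For readers curious how this kind of correctness layer can be paired with a language model, here is a minimal, hypothetical sketch of retrieval-grounded prompting with a deterministic policy check. Everything in it, the function names, the 30-day refund window, and the toy keyword retrieval, is an illustrative assumption for this article, not Forethought's actual implementation; the resulting prompt can be handed to whichever LLM you use.

```python
# Minimal sketch: ground the model in retrieved context and decide policy in code,
# so the generation step can only phrase an answer, not invent one.
# All names here (search_knowledge_base, REFUND_WINDOW_DAYS, etc.) are hypothetical.

from datetime import datetime, timedelta

REFUND_WINDOW_DAYS = 30  # hypothetical policy: refunds allowed within 30 days


def search_knowledge_base(query: str, articles: list[dict], top_k: int = 3) -> list[dict]:
    """Naive keyword-overlap retrieval standing in for a real semantic search index."""
    query_terms = set(query.lower().split())
    scored = []
    for article in articles:
        overlap = len(query_terms & set(article["text"].lower().split()))
        scored.append((overlap, article))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [article for _, article in scored[:top_k]]


def refund_allowed(purchase_date: datetime) -> bool:
    """Deterministic policy check applied before the model drafts a reply."""
    return datetime.utcnow() - purchase_date <= timedelta(days=REFUND_WINDOW_DAYS)


def build_grounded_prompt(ticket: str, articles: list[dict], policy_note: str) -> str:
    """Constrain the model to retrieved context so it cannot invent policies."""
    context = "\n\n".join(a["text"] for a in articles)
    return (
        "Answer the customer using ONLY the context below. "
        "If the answer is not in the context, say you will escalate to a human.\n\n"
        f"Context:\n{context}\n\n"
        f"Policy decision: {policy_note}\n\n"
        f"Customer: {ticket}\nAgent:"
    )


# Usage: retrieve, run the policy check, then hand the constrained prompt to any LLM.
kb = [{"text": "Refunds are issued to the original payment method within 5 business days."}]
ticket = "I just want my money back for my order from last week."
note = "Refund allowed" if refund_allowed(datetime.utcnow() - timedelta(days=7)) else "Refund denied"
prompt = build_grounded_prompt(ticket, search_knowledge_base(ticket, kb), note)
print(prompt)  # pass this string to the LLM of your choice
```

The point of the sketch is that the refund decision is made in ordinary code before the model writes anything, which is one common way teams reduce hallucinations in answer-critical settings.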

We last interviewed you in 2021. What have you learned since about being an entrepreneur?

Oh, my goodness. The whole world has changed. From an entrepreneurship perspective, a few things have happened. In late 2021, shortly after our interview, we raised our Series C, a $65 million round from Steadfast Capital and NEA, as well as luminaries like Gwyneth Paltrow, Baron Davis, and Robert Downey Jr. In many ways, we saw this ton of excitement around our vision, and I guess these were people who saw generative AI before it was cool, in many ways. That was an exciting and crazy time for us to just grow and pour fuel on the fire of something that was working. But what was interesting is that shortly after, in mid-2022, so six months later, recession. The whole world blows up, so to speak, right? And so that taught me a lot, because where some businesses were spending, they cut spending, and the whole business model of even being an entrepreneur changed from grow at all costs.

Hey, you have this giant war chest of capital, burn it and grow to get to your next round. To, wait, suddenly everyone, including our VCs, everybody's VCs and boards, woke up and said, well, hey, money's not free, and you need to be building either a profitable business, or, it doesn't necessarily need to be profitable, but it needs to be efficient. It's not about growing at all costs; it's about growing at reasonable efficiency. And you still obviously want to grow as much as possible given that reasonable efficiency, but the model had almost been flipped overnight, because there's no guarantee you're going to have access to capital for another one to two years. If we slip into a recession, maybe three. Buyers are tightening up their purse strings, so to speak. And so, you really have to focus on efficiency, and also on driving efficiency for your customers.

In many ways, that forced us to get a lot more focused. I mean, we were always delivering value in many ways, because it's just the industry we're in. If we can help you help your agents produce more and solve more questions in a given time, you're going to save a lot of costs, you're going to have a better customer experience, and you're going to have more retention. All these things have to happen, but we had to get tighter on our pitches, tighter on our messaging, and tighter on our product focus in order to share that. And then internally, we ourselves had to focus on efficiency. It wasn't just, hey, you have this big war chest, and you can always raise more money, and yada, yada, yada. No, it was like, let's figure out how we're going to build a business. What if we can never fundraise again?

What if it's going to take us years to fundraise before the VC market opens up? Well, that's fine. We have to be building a very efficient business so that when it comes time for a Series D, we have all the metrics of what success looks like, at least in this new world, in terms of what's being measured. I think that was big, along with layoffs and everything; the whole world shifted. It's been a very tough stretch. 2022 was a tough year, and then in 2023, generative AI is hot again. So, there are ups and downs in this.

So, anyway, I'm talking a lot there. But yeah, I think all of that teaches you to be focused and to be resilient. That's the other thing that's important: it's a 10-year journey if you're successful, and if you're not, it's still early. So remember that through the ups and downs, you've got to take it in stride, and that you're always building towards that eventual vision.

What are the biggest challenges that you face trying to build chatbots for customer service and other use cases?

Yeah, biggest challenges. I'll go in chronological order. The first was technical: do the models work? That's why we started, even back in 2017, leveraging a lot of these modern, what we call, natural language understanding and natural language generation engines, and just making sure that you're always staying at the forefront of research, but then, at the same time, making sure you're delivering value. Because one of the things we realized when we started asking our customers, like, hey, why? In the early days, they told us that this problem is huge: if you can build this for us, we'll pay for it. The problem is chatbots have never worked.

Even in the early days, like 2017 for us, before we had officially launched, we were like, oh, well then should we even be in this market? What's going on? But then you dig deeper. You know there's a problem, a clear-cut problem, but the solutions haven't worked. And when I kept poking and asking, I'd be like, why? Why not? And then I realized that, either by accident or because of our backgrounds, we were approaching it in a very different way. And though people had heard the word AI, had heard the word chatbot, had heard these things, what we were doing was not what they had been doing. They had been approaching everything from a decision-tree perspective. You hard-code rules: if I see the word refund, go and issue a refund. But if somebody says the phrase, I just want my money back, it has no idea how to handle it, because they didn't say the magic keyword.

It was all decision trees, it was all rules, and these were just no-code builders that made it conversational, but they weren't AI. And I was like, oh, that's what you've been doing? I didn't even realize that's how all chatbots were built. And I was like, but that's not what we want to do. We want to do this. And they're like, really? You can do that? I think the first and foremost thing was just recognizing the nuance that, though AI as a buzzword has existed for probably 10 years, this modern form of AI, what we're now seeing is possible with generative AI, and if you backtrack a few years, what was starting to become possible, was not actually being applied to this space. I think that was big. And then there are a lot of follow-on effects from that.

There's a lot of noise in the market, because we go and we're like, guys, we're doing something different, and then they're like, hey, are you sure? This other company said they had AI too. And we had to bang our heads against the wall a lot to just prove our value. Once we got into a POC, once we got into their system and showed how it was working, how it can learn, then it was game over. And we've grown, as you can see from all of these logos and all the customers, but it kind of felt like a slow slog to prove our value every step of the way. I think that's probably been the biggest, most interesting thing in this space in particular, one that's been noisy, but with bad solutions.

It's kind of obvious when you think about it that decision trees won't work. It just wasn't really obvious at the time.

It wasn't really obvious at the time. And I think it's also that was the best you could do, right?

That's true too.

People conflate conversational customer support with AI, and there's a large overlap between the two. ChatGPT is a good conversational example, but GPT-3, the engine powering it, existed before, and that was the AI. You needed that to make the conversational part better. In the past, what happened was we wanted conversational, we wanted AI, and we could do conversational, but the only way to do that was to script the conversations. It's a big IVR (interactive voice response), or a phone tree, in chat form. But then people started slapping the word AI on it, and so we were getting conversational, but we weren't getting AI. I think that's where the breakdown happened: not that it wasn't obvious, but that it wasn't even possible, or people didn't realize it was possible, or that these are two different things.
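To make the contrast between scripted keyword rules and modern intent matching concrete, here is a small, illustrative sketch comparing the two. The "I just want my money back" example comes from Deon's answer above; the model checkpoint, example intents, and similarity threshold are assumptions chosen for the example, not anything Forethought has described using.

```python
# A keyword rule (decision-tree style) versus semantic intent matching with
# sentence embeddings. Illustrative only; threshold and intents are made up.

from sentence_transformers import SentenceTransformer, util  # pip install sentence-transformers


def keyword_rule(message: str) -> str | None:
    """Decision-tree style: only fires when the magic keyword appears."""
    if "refund" in message.lower():
        return "issue_refund"
    return None


model = SentenceTransformer("all-MiniLM-L6-v2")  # a common public checkpoint
intents = {
    "issue_refund": "I would like a refund for my purchase",
    "track_order": "Where is my package",
    "cancel_subscription": "Please cancel my subscription",
}
intent_names = list(intents)
intent_embeddings = model.encode(list(intents.values()), convert_to_tensor=True)


def semantic_intent(message: str, threshold: float = 0.4) -> str | None:
    """Embedding similarity: matches paraphrases the keyword rule would miss."""
    query = model.encode(message, convert_to_tensor=True)
    scores = util.cos_sim(query, intent_embeddings)[0]
    best = int(scores.argmax())
    return intent_names[best] if float(scores[best]) >= threshold else None


message = "I just want my money back"
print(keyword_rule(message))     # None: the rule never sees the word "refund"
print(semantic_intent(message))  # "issue_refund": the paraphrase still matches
```

The keyword rule fails exactly the way described in the interview, while the embedding approach treats "I just want my money back" as a paraphrase of the refund intent.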

You’ve described yourself as an AI optimist in a recent podcast. Why are you so optimistic about AI?

Well, a few things. First, just broadly, I think a new business model or platform shift is happening, the same way computing in and of itself became a thing: computers going from giant mainframes to personal computing, then the internet being a platform shift, then cloud, going from desktop to cloud being that new business model, then mobile, and all these things. Every decade or so, there's this new shift, and these shifts bring about new business models, and in fact, trillions of dollars' worth of new businesses in and of themselves. Salesforce, for example, is a $200 billion juggernaut, and all they did was recognize this shift to the cloud. They probably would claim they brought about the shift to the cloud, but they recognized the shift to the cloud, and they took databases, your system of record, from on-premise, Oracle, Siebel, whatever it was, to the cloud.

And then they made a killing off of that, building a system of record for salespeople: IPO, billion-dollar business. And, by the way, then they did it for service, Service Cloud, Marketing Cloud, boom, boom, boom, $200 billion business. And now I ask myself, what would've happened if that Salesforce were built in our time, or 10 years later, whatever it is, I don't know how old they are now, but in 2030? The same thing is going to happen. And I believe, which is why I'm working on this company, that it's these AI-first companies, like a Forethought, that will become the next Salesforces, that will create all this opportunity, trillions of dollars' worth of opportunity. I think that's going to mean great business, but it's also going to be amazing for customers, for consumers. When phones came around, you suddenly had a supercomputer in your pocket.

Now you're going to have a super brain in your pocket with GPT. The next time you need a haircut, you just have your thing, your AI, go and talk to the AI of the barbershop, and you can book a haircut. And hopefully, those are all powered by Forethought, fine. But you know what I mean? That's just a completely different way of doing things. I think economic models are going to shift. I also think this is an equalizer, because it's the kind of AI innovation, or the kind of innovation, that is easy to adopt. Before, you had to have a PhD in AI to even know how to do this stuff. Now, my mom is using ChatGPT, and people are starting to get exposed to AI. It's opening things up; more people are probably going to want to learn how to code, more people are going to be prompt engineers, whatever it is. I think it's actually an equalizer, whether you are an expert or not. And what I'm most optimistic about is just that economic opportunity.

What are your predictions for AGI? Do you think we'll see it in our lifetime?

Yeah. First, I'll start by saying nobody knows. This is probably true.

Anyone who claims to know is lying. But yeah, I think we're seeing enough advances. There could be AGI that exists today. Who's to say that GPT and that class of models aren't roughly as intelligent as the human brain? I don't know, maybe. It's probably not, the answer's probably not, but a human brain has, what, a trillion neurons? GPT's getting up there. So, what happens when this thing has a trillion neurons and it's trained with the right data? Our brains evolved due to evolution, but in some senses, reinforcement learning is evolution on steroids for machines.

And all of these techniques exist right now, and it just doesn't seem that far-fetched to imagine. We might need new techniques; the transformer and the recurrent neural network were all advances that, in hindsight, seemed obvious, but at the time, or just before that, were impossible. We couldn't even do language before we had RNNs, and then transformers completely killed the RNN, and now we have generative pre-trained transformers and GPT. So there might be new research to get there, but it's no longer far-fetched, because the stuff is that good. I think we're either already there in some senses, we already have all the ingredients and we just need to find the right configuration, or we'll probably be there in the next 30 to 50 years.

And I don't know if you asked me this or not, but what do I think about it? I think there are some risks. The best analogy for this was from somebody, I think it was Andrej Karpathy, Tesla's AI person, who described it as being like nuclear energy. It's about as powerful as nuclear energy: unlimited capacity with fusion, whatever, to create unlimited energy and allow us to do things we never thought possible, travel through space, because we'll have enough energy to make the trips. There are going to be so many things that are possible with nuclear energy, but you also have the dark side of those things, which is nuclear bombs, which I don't know if I'm allowed to talk about in these kinds of interviews, but you also have this stuff, or nuclear weapons, I'll say, right? And that massive capacity for destruction.

And the best we've figured out as a society on how to cope with that is that everyone has it, and everyone mutually put in regulations to make sure that we all use this stuff correctly. I think there's an equivalent amount of power there, and it starts in the digital sphere, so you almost have less fear originally, unless something like that gets access to nuclear weapons. But my point, though, is if you can create the guardrails and the safeguards as AI is advancing, which I think we're doing as a society, then you're actually in a pretty good spot. Having AI that can detect fake news, AI that is as powerful as GPT, to detect GPT-generated fake news, is super important. I'll be, again, an optimist about it, but that's not to say we shouldn't be simultaneously developing whatever the technology is to make it safe.

You love to read, what are some good books that you'd recommend?

The most recent book I read was ‘The Advantage’ by Patrick Lencioni, which teaches how to build a healthy organization, everything from how you do meetings to executive team alignment, to communicating mission and values, and truly building a healthy organization. It's a culmination of all his other books, like Death by Meeting and so on. I would say, though, ‘The Advantage’ is probably the most recent one I've read. On AI, ironically, I haven't read any AI books recently. I read some of the AI papers, but I think there's a book called ‘Superhuman’, which was highly recommended to me, that I need to go and read. And then lastly, one more non-fiction: ‘Good to Great’, my favorite management book of all time. Fiction: ‘The Expanse’. It's now a TV series.

I've seen that TV series.

You've seen it, right? I think it's on Amazon Prime. I love it because it does such a good job of, not really science fiction, but science realism. I mean, there's a lot of science fiction in it, but they take a lot of really subtle concepts, like what would happen in a world where we had space travel, and how would politics evolve? And that's how humans would probably behave. If you notice, it's very, very, very subtle, but the phones have an advanced form of AGI built in, and it's not highlighted. It's not even the central point. And that's why I like it; that's why I'm mentioning it here. But you'll notice they can just say something to their phone, and it goes off and does it, and then a couple of hours later or whatever, in movie time, it'll have done the task. And it's very subtle. You just see people randomly talking on their phone as if they're talking to a person, and then they just carry on, and then it comes back with some more and talks to them.

And I think that's how AGI is going to operate in our world. Everyone's going to have their own personal Einstein or C-3PO or R2-D2, whether it's on a phone or eventually in robotics. And they're just going to do tasks and answer questions for you and figure things out. And they're going to be as smart as an Einstein. That's it. If every human on the planet had a friend named Albert Einstein, an AGI with intelligence equal to, or better than, humans, then you'd just be able to do more stuff. And everyone always defaults to, okay, well, what if all those guys became malevolent? Well, we have human-level intelligent creatures, eight billion of them. And I mean, humans suck in so many ways, but it's also beautiful that we've been able to create a society and civilization, and that we carry on.

It's interesting how opposite that is from Star Trek, where they make a show of using the technology.

We don't talk about phones, but 100 years ago, if you told somebody you had a phone and could talk to someone in Qatar instantaneously... There are so many things now that would be literal magic to people from 100 years ago.

Like even how a Kindle would have been magic 100 years ago. It's a book that's always different depending on what you want.

Depending on what you want. It's literally like it's straight out of Harry Potter, right? Anyway, I think that's how I am when I think about these technologies. You take the analogy of anything else that rapidly changed, and there are lots. Electricity was powerful, the printing press, and then it's crazy and big for a while, and then the next generation just takes it for granted and it's just built in. Imagine a world where AGI is just taken for granted and just built into everything we do.

You recently tweeted, “Measure success, not by how many things you start, but how many things you finish.” What does the finish line look like for your company?

What is the end game? Going back to the concept of new economic business models, one of the things that I find super invigorating is, it's like when people say Google has no moat. In fact, Google said that, it leaked or something, and everyone had a panic. I mean, it's true. I think they have no moat. But also, OpenAI probably doesn't have a moat. Nobody has a moat. But more importantly, for the existing business models of today, there are going to be many more stories in the coming months around business X has no moat; it's being disrupted by AI. And that's going to happen in customer service. It's going to happen to help desks, it's going to happen to CRMs, it's going to happen to search engines, it's going to happen to everything.

I think the most exciting part about that is there are a lot of companies who are just jumping on the bandwagon right now, but this technology is as much a sustaining technology as it is a disruptive one. What I mean by that is there are going to be people who adapt faster, but also people who have a reasonable amount of scale can actually accelerate at a higher derivative than the people who are not yet at scale.

And sometimes that won't bear out. There might be some smaller companies, and it's going to be all over the place, but it's not necessarily a given. And so, you have these different strata of companies: there might be the oldest ones who are going to be too slow to adapt and get toppled over, you might have the smallest companies who move faster, and you might have some in between. So the thought is that we could be the kind of company that makes every single touchpoint between people and businesses faster and more intelligent. Today, that's in customer service, because that's the most frequent interaction we have: you have a problem, you ask a question, and we're helping power that. Today, we process over 100 million support tickets a year, and we're just getting started. Eventually, that's going to be a billion, 10 billion, a trillion. And that's in customer support alone.

I think with AI, and hopefully through Forethought, you can apply this same technology to marketing. Why do you only ever reach out to a company when you have a question? What about when you want to learn about a new product? What about when there's a service you're interested in? What about sales? What about when you reach out to the business you work for, with a question about IT or a question about HR? Every single touchpoint can be transformed through AI. And I think we genuinely have the opportunity to be a part of that story, to be that company, or one of those companies, that brings about this future, and that's the end game; that's our mission. Ultimately, it's about unlocking human potential. This is going to make everything faster, everything more efficient, and give people time, energy, and attention back so they can focus on spending time with their loved ones, whatever it is. Humanity is leveling up.

And I think we have that opportunity. I think we can do that. And I think, from a business perspective, that could be a multi-billion-dollar company, if not tens or hundreds of billions. It could be the same as the Salesforce story. What if Salesforce were built in 2023?

Thank you for the great interview. Readers who wish to learn more should read our first interview with Deon, or visit Forethought.

A founding partner of unite.AI & a member of the Forbes Technology Council, Antoine is a futurist who is passionate about the future of AI & robotics.

He is also the Founder of Securities.io, a website that focuses on investing in disruptive technology.