Artificial Intelligence
South by Southwest — um, I’ve been coming for the last decade, and we’re always talking about what’s the next big thing in tech, and I would say artificial intelligence and ChatGPT couldn’t be more relevant. So glad to be sitting here with you. Um, how many folks in the audience have used ChatGPT?

Okay, so it feels like this is an audience that — that’s good, I can be very specific on this stuff. Um, and remember, you guys ask questions — I’m gonna leave 15 minutes at the end to get to it. So I want to get to OpenAI, and I want to talk about the company behind ChatGPT, but I would love to start with ChatGPT itself. So let’s go: it’s November 2022, and you guys release ChatGPT. This is an AI chatbot developed by OpenAI, built on top of a large language model called GPT-3. You release it, and in two months you have over 100 million users — this becomes the fastest-growing application in history. Um, just for some perspective: it took Facebook — Meta — four and a half years to reach 100 million users; it took TikTok nine months. Like, why was ChatGPT the killer app?

Yeah, I actually think about this
question a lot, because for us, we actually had the technology behind it — the model behind it — created almost a year prior, so it wasn’t new technology. But the thing that we really did differently is that we did a little bit of extra work to make it more aligned, so you really could talk to it and it would do what you wanted. But secondly, we made it accessible: we built an interface that was super simple — kind of the simplest interface we could think of — and we made it available for free to anyone.

And I think the thing that was very interesting was, as this app really took off and people started using it, we could see the gap between what people thought was possible and what had actually been possible for quite some time. And to me, this is maybe the biggest takeaway: I really want us, as a company and as a field, to be informing people — to make sure they know what’s possible, what’s not, what the forefront is going to look like, and where things are going. Because I think that’s actually really important to figure out how to absorb this into society: how do we actually get all the positives, and how do we mitigate the negatives?

Like, in the past — I mean, should we talk about Tay? We won’t talk too much about Tay, but chatbots are dangerous — are hard to put out there. But there was something about what you put out there — you talk about that gap — that it didn’t implode, right? It learned a lot, and all of a sudden it almost opened this whole new era of everyone saying: could we do this, could we do this, could we do this. Why now?

Yes. So as we
were preparing ChatGPT for release, the thing I kept telling the team was: the most important thing is — we can be overly conservative in terms of refusing to do anything that seems even a little bit sketchy, that’s fine — the most important thing is that we don’t have to turn it off in three days.

Yeah, because you were worried when you kind of pressed publish on this?

You’re worried — how could you not be? We’d been doing lots of testing: we have our own internal red teams, and we’d had beta testers on it — hundreds of beta testers for many, many months. But it’s very different from exposing it to the full diversity and adversarial and sort of beautiful force of the world, and where people are going to apply it. And for us — we have been doing iterative deployment for a very long time. Ever since June 2020 or so, when we first released a product — an API, so people could use these language models — we’ve been making them more capable and getting them into more people’s hands. But we kind of knew this was going to be a different dimension. It was our first time building a consumer-facing app, so we definitely were nervous, but I think the team really rose to the occasion.

Yeah. Well, I definitely want to talk about the future
of ChatGPT — because I know a lot of folks, and especially a lot of users in the audience, are curious about it — but let’s go to the past first, right? Because the company behind ChatGPT and DALL-E is OpenAI, and it’s interesting: in the Silicon Valley world, you have, like, a sexy company — it comes out, everyone’s talking about it. OpenAI was just kind of the opposite. It was just kind of hanging out in the background until this thing came out — until you put out these products that could shift culture and start all these questions. And so let’s go back. It’s July 2015, and you’re in Menlo Park at a fancy hotel called the Rosewood — I don’t know if anyone here has been to the Rosewood; it’s certainly a scene. You’re sitting there. Who’s there? What are we eating? Why are we there? What’s the topic of conversation? I promise I’m going somewhere with this.

Well, I couldn’t tell you what was on the menu that night, but —

Yeah, we just want to know what Elon Musk was eating.

Okay, sorry, I got ahead of it — go ahead.

So we were having a dinner to discuss AI and the future, kind of just what might be possible, and whether we could do something positive to affect it.
And my co-founders at OpenAI — that’s Elon, Sam, Ilya — and other people were all there, and kind of the question was: is it too late to start a lab with a bunch of the best people at it? We all kind of saw that AI feels like it’s going to happen; it feels like AGI — really building human-level machines — will be achievable. And what can we do, as technologists, as just people who care about this problem, to try to steer it in a positive direction? And kind of the conclusion from the dinner was: it’s not obviously impossible to do something here.

And you felt a sense of urgency?

I did.

Why?

Sure. Um, the thing that I think is easy to miss here is: I
think now people see ChatGPT and they say: wow, suddenly you feel the possibilities, right? You see what’s possible — it’s not science fiction anymore; it’s actually usable today. But it’s still hard to extrapolate — to really follow the exponential, to think about what might be possible tomorrow. And the mode that I have been in for a long time has been really thinking about that exponential. I remember reading Alan Turing’s 1950 paper on the Turing test, and the thing that really stuck out to me — this was right after high school — was he said: look, you’re never going to program a machine to solve this problem; instead, you need a machine that can actually learn how to do it. And that, for me, was the aha moment: the idea that you could have a machine that could solve problems that I could not — that no human could figure out how to solve. That so clearly could be so transformational. There are all these challenges — global warming, medicine for everyone — all these things that are kind of out of reach. I don’t know how we’re going to do it, but if we can use machines to aid in that process, we want to. And so I think we all kind of felt like: okay, the technology is starting to happen. You know, deep learning is an overnight success that took 70 years. In 2012 there was a big breakthrough on image recognition, but it really took another decade to get to the point that we’re at now. But we could all see that exponential, and I think we really wanted to push it along and really steer it.

And, I mean, you at the time — so before, you were
the CTO of Stripe — this little company called Stripe — and you really felt — Elon... Elon, we can get into all this later — but you really felt that you guys could build something better, and that you guys could build something that was pro-humanity and not anti-humanity, which is always that fine line in technology, which I think the last decade has kind of taught us.

Yeah, and I would quibble a little bit with that. I don’t know that — at least for me personally — I viewed it as us building something better, in the sense that there are lots of other people in this field doing great work too. But I wanted to contribute. And I think one thing that’s actually very important about AI — and something that’s very core to our values and our mission — is that we think this really should be an endeavor of humanity. If we’re all thinking about, well, what’s my part of it, what do I get to own — I think that is actually one place where the danger really lies.

And so tell me about how the company was, and is, structured — because that was seven years ago now. Take us behind the curtain. I saw something Sam Altman wrote; he said: we’ve attempted to set up our structure in a way that aligns our incentives with a good outcome. What does that even mean?

Yeah, so we are a weird-looking
company.

In what sense?

So we started as a non-profit, because we had this grand mission but did not know how to operationalize it. We knew that we wanted to have AGI benefit all of humanity, but what does that mean? What are you supposed to do? And so we started as a research lab: we hired some PhDs, we did some research, we open-sourced some code. And our original plan was to open-source everything. You think about how you can have a good impact — maybe if you just make everything available to anyone, so they can make any changes they want, then if there’s one bad actor, well, you’ve got seven billion good actors who can keep them in check. And I think this plan was a good place to start, but — Ilya and I were really the ones running the company in the early days — we spent a lot of time really thinking about how you turn this into the kind of impact that we think is possible, into something that really can make a difference in terms of just how beneficial AGI ends up being. And I think we found two important pieces. One was simply a question of scale.
All the results that we were getting that were impressive and really pushing things forward were requiring bigger and bigger computers, and we kind of realized: okay, you’re just going to need to raise billions of dollars to build these supercomputers. And we actually tried really hard to raise that money as a non-profit. I remember sitting in a room during one of these fundraises, looking into the eyes of a well-known Silicon Valley investor —

Who is that?

Well, I wouldn’t share the name. But we were trying to raise $100 million, and he was like: that’s a staggering amount for a non-profit. And we looked at each other, and we were like: it is.

Yeah.

And we actually succeeded — we actually raised the money. But we realized that 10x that was not going to happen. I mean, if anyone in this audience knows how to do that as a non-profit, we will hire you in an instant. But we realized that if we wanted to actually achieve the mission, we needed a vehicle that could get us there. And we’re not anti-capitalists — that’s not why we started OpenAI as a non-profit. Actually, capitalism is a very good
mechanism within the bounds it’s designed for. But if you do build sort of the most powerful technology ever in a single company, and that thing becomes just way more valuable or powerful than any company we have today — a lot of those mechanisms are not really designed for that. So we ended up designing this custom, bespoke structure. It’s super weird: we have this limited partnership with all custom docs — if you’re a legal nerd, it’s the kind of thing that’s actually really, really fun to dig into. But the way we designed things is that the non-profit is the governing body. There’s a board of a non-profit that kind of owns everything; it owns this limited partnership that actually has profit interests, but they’re capped — there’s only a fixed amount that investors and shareholders are able to get. And there’s a very careful balance in a lot of these details — like having the board have a majority of people who don’t have a profit interest — all these things, in order to really try to change the incentives and make it so that the way we operate the company comports with the mission. And so I think this kind of approach — really trying to figure out how you balance, how you approach the mission, but how you make it practical, how you operationalize it — is something that has come up again and again in our history.

I get the history of — I mean, artificial
intelligence is nothing new, obviously. So what is it about now that feels like a watershed moment? Why now are all companies putting money into this? Why now is this the thing we’re all talking about? What is it about the technology now?

Yeah, well, I think the fundamental thing here is really about exponentials. No matter how many times you hear it, it is still hard — to impossible — to internalize. And when I look back: we’ve done these studies on the growth of compute power in the field, and we see this nice exponential with a doubling period of about every 3.5 months, as opposed to 18 months for Moore’s law.
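A quick back-of-envelope sketch of what those two doubling periods imply over a decade — assuming clean exponentials, which real compute growth only approximates:

```python
# Compare the ~3.5-month doubling period cited for AI training compute
# with the ~18-month doubling period of Moore's law, over ten years.

def growth_factor(months: float, doubling_period: float) -> float:
    """Total growth multiplier after `months` at the given doubling period."""
    return 2 ** (months / doubling_period)

months = 10 * 12  # one decade

ai_compute = growth_factor(months, 3.5)
moores_law = growth_factor(months, 18.0)

print(f"AI training compute: ~{ai_compute:.2e}x")  # tens of billions
print(f"Moore's law:         ~{moores_law:.0f}x")  # roughly a hundred
```

That is the point of the exponential talk: the same decade that gives Moore’s law a ~100x improvement gives the 3.5-month doubling a factor in the tens of billions.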
It’s been going on for the past 10 years or so, but we actually extrapolated back even further, and you can see that this exponential continues all the way — at a slightly smaller slope. It used to be Moore’s law, but over the past 10 years, basically, people have been saying: well, you can go faster than Moore’s law by just spending more money. And I think what’s been happening is we’ve been building this accumulated value with a slow roll, rather than trying to do a flash-in-the-pan, get-rich-quick kind of thing that maybe other fields have been accused of. AI, I think, has been a much more steady, incremental build of value. And the thing that’s so interesting is: normally, if you have a technology in search of a problem, adoption is hard — it’s a new technology, everyone has to change their business, they don’t know where it fits in. For AI — for language in particular — every business is already a language business; every flow is a language flow. So if you can add a little bit of value, then everyone wants it. And I think that is the fundamental thing that has really driven the adoption and the excitement: it just fits into what everyone already wants to do.

Well, and also in 2017 — a model
architecture called the Transformer, right? These large language models, and this idea that you could treat everything as a language — music and code and speech and images; the entire world almost looks like a sequence of tokens. If you could put a language behind it — that was really an accelerant for a lot of what you’re building too.

Yeah. I think the way to think about the progress — the technological driver behind this — is that it’s very easy to latch onto any one piece of it. The Transformer is definitely a really important thing, but where the Transformer came from was really trying to figure out how you get good utilization out of the compute hardware that we use — these GPUs. The GPUs themselves are a really impressive feat of engineering that has required just huge amounts of investment, and the software stack on top of them too. So it’s kind of each of these pieces, and each one kind of has its time. One thing that’s super interesting to me, looking from the inside, was that we were working on language models that look very similar to what we do today starting in 2016. We had one person, Alec Radford, who was really excited about language, and he was kind of working on building these little chatbots. We really liked Alec, so we were just very supportive of him doing whatever he wanted — meanwhile, we were off investing in serious projects and stuff, and we were just like: whatever Alec needs, we’ll make sure he gets it. And in 2017, we had a first
really interesting result: we had a model that was trained on Amazon reviews, and it was just predicting the next character — just what letter comes next — and it actually learned a state-of-the-art sentiment analysis classifier. You could give it a sentence and it would say whether it was positive or negative. That may not sound very impressive, but this was the moment where we kind of knew it was going to work: it was so clear that you had transcended just syntax — where the commas go — and moved to semantics. And so we just knew we had to push and push and push.
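The objective he’s describing — nothing but “predict the next character” — can be sketched with a toy bigram counter. To be clear, this is a stand-in illustration: the actual Amazon-reviews experiment trained a neural model, and the surprise was that a unit inside it ended up tracking sentiment; this snippet only shows what the training signal looks like.

```python
from collections import Counter, defaultdict

# A two-review stand-in corpus; the real experiment used millions of Amazon reviews.
reviews = [
    "this product is great and works perfectly",
    "terrible quality broke after one day",
]

# Count character bigrams to estimate P(next character | current character).
counts = defaultdict(Counter)
for text in reviews:
    for cur, nxt in zip(text, text[1:]):
        counts[cur][nxt] += 1

def predict_next(ch):
    """Most likely character to follow `ch`, or None if `ch` was never seen."""
    return counts[ch].most_common(1)[0][0] if counts[ch] else None

print(predict_next("q"))  # 'u' -- the only character this corpus ever puts after 'q'
```

A model that only ever optimizes this next-character objective has no labels for “positive” or “negative”; the finding was that, at scale, semantics emerge from it anyway.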
Amazon reviews! Who knew that was the real story behind it.

Exactly, exactly — you always start small.

Every day there’s a new headline on how this technology is being adopted. I was literally Googling it yesterday, and the latest headlines are: companies are harnessing the power of a chatbot to write and automate emails with a little bit of personalization; another headline, how ChatGPT can help abuse survivors represent themselves in court if they can’t afford it otherwise. We obviously know about Microsoft’s Bing and the disruption in search. From the seat that you’re sitting in — and if you could be as specific as possible — what do you think are the most interesting and disruptive use cases for generative AI?

Yeah, well, I
actually first want to tell a personal anecdote about the kind of thing that I am very hopeful for. So, medicine is definitely a very high-stakes area — we’re very cautious about how people should use this kind of technology there — but even today, I want to talk about a place where I have really wanted it for my own use. My wife, a number of years ago, had a mysterious ailment: she had this pulsating pain right here, on the bottom right side of her abdomen, and it wasn’t appendicitis. We went to a first doctor, and the doctor was like, oh, I know what this is, and prescribed some antibiotic — nothing happened. We went to a second doctor, who said, oh, it’s a super rare bacterial infection, you need this other super powerful antibiotic. Took that, and over the course of three months we went to four different doctors, until finally someone just did an ultrasound and found what it was. And I kid you not: I typed a couple of sentences of the description I just gave here into ChatGPT, and it said, number one, make sure it’s not appendicitis; number two, ruptured ovarian cyst. And that is in fact what it was.

Wow.

And so the kind of thing that I want — personally, in the medical field — I don’t want it to replace a doctor. I don’t want it to tell me, oh, go take this super rare antibiotic. I don’t want a doctor to tell me that either.

ChatGPT also sometimes confidently says the exact wrong thing. It’s kind of like a drunk guy.

Exactly. So you’ve got to be careful.

Something we’re working on.

Yeah, right. It’s
actually interesting — just a quick aside — we’re actually finding that our models are much more calibrated than we realize, and can say when they’re right or wrong, but we currently destroy that information in some of the training processes we do.
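A sketch of how that calibration signal could be used, if a model exposed per-token log-probabilities — `token_logprobs` below is a hypothetical stand-in for such an interface, not a specific OpenAI API field: the probability the model assigns to its own sampled answer is a rough confidence score.

```python
import math

def answer_confidence(token_logprobs):
    """Joint probability the model assigns to its full sampled answer."""
    return math.exp(sum(token_logprobs))

# Hypothetical numbers: an answer the model was sure of, token by token...
confident = answer_confidence([-0.05, -0.10, -0.02])
# ...versus one it was effectively guessing at.
guessing = answer_confidence([-1.2, -2.0, -1.5])

print(f"confident: {confident:.2f}  guessing: {guessing:.3f}")
```

A system that kept this signal around could flag low-probability answers as “not sure” rather than stating them flatly — roughly the direction his “more to say there” gestures at.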
So, more to say there. But yeah — giving you suggestions, giving you ideas. In writing, it’s like the blank-page problem. This, for me, is where generative AI can really shine: it’s really about unblocking you, giving you ideas, giving you an assistant that is willing to do whatever you want, 24/7.

So ChatGPT has now been deployed to millions. Has there been anything that’s really shocked you or surprised you in how people have been utilizing it?

I mean, of course. I do think that, for me, the overall most interesting thing has just been seeing how many people engage with it for so many surprising aspects of life.

Like what?

Well, you know, I think that knowledge work is
maybe the area that I see as most important for us to really focus on. We see people within OpenAI who aren’t native English speakers use it to improve their writing. There was someone at OpenAI where suddenly — you could just tell — the writing style of everything changed: it was way more fluid, and honestly just way more understandable. At first you’re like, what just happened? He had literally, at one point, hired someone to do the writing for him, but that was actually really hard — it was a lot of overhead, and he wasn’t able to get his points across. But with ChatGPT he really was able to. And I think that, for me, it’s just so interesting to see people use it as this aid — a cognitive aid — to think more clearly and to communicate with others.

Well, you
always know you have disruptive technology when you put it out there and people misuse it. I remember, a decade ago, doing a story on pimps recruiting women on Facebook, right? Which is like — okay, if someone’s using your technology in a bad way, you have something that’s hitting mainstream. So can you tell us how people are using it in ways it’s not designed for? What have you learned from putting this out there, and what have you learned from how people are misusing it?

Yep. Well, misuse is definitely also very core to what we think about. Part of why we wanted to put this out there was to get feedback — to see how people use it for good and for bad, and to continually tune. And honestly, one of the biggest things we’ve seen: we always try to anticipate all the different things that might go wrong. For GPT-3 we really focused on misinformation, and actually the most common abuse vector was generating spam for drugs — for various medicines. So you don’t necessarily see the problems around the corner. For ChatGPT, one thing we’ve seen is people creating thousands or hundreds of thousands of accounts just to be able to use it much more; some people generating lots of spam. It’s clear that people are using it for all sorts of different things. For individuals, I think there’s actually an interesting category — to your point — where it says something that is confidently wrong.

My drunk-guy point.

Exactly, yeah. That’s right — people thinking: oh, because it said that, it must be true. And that’s not true for humans, and it’s not quite true for AIs. I think we will get there at some point, but it’s going to be a process, and something we all need to participate in.

Right. And so, I mean, I would love to get into what we can predict in the future with AI, but
before we leave ChatGPT — this isn’t really ChatGPT, but I feel like we have to talk about Sydney for a moment. People in the audience — who’s heard of, or read, Kevin Roose’s article in The New York Times? So, just a little background: you guys put ChatGPT out there; Microsoft and Google are racing to get search products out there; Microsoft releases its own AI-powered search, the Bing chatbot; and all of a sudden Kevin Roose — great writer at The New York Times — is playing with it, and the Bing chatbot reveals that its shadow name is Sydney, and, when prompted a certain way, tells Kevin “I want to be alive” and tries to persuade him to leave his wife. So obviously that’s, like, an awkward conversation. So what are the guardrails — and to be clear, Microsoft is an investor and partner; this isn’t something that OpenAI specifically put out there — but I do think it’s an interesting point: you put this stuff out there, and the next thing you know, Sydney’s trying to make you leave your wife. So what are the guardrails that need to be put in? What have you learned over the last couple of months, where you’ve seen the misuse, and what can you put in to make sure that we’re not all trying to leave our significant others because bots are telling us to?

I mean, look — I think this is actually a great
question. And the most high-order bit — the most important thing in my mind — is this question of when you want to release. My point earlier was that there was this overhang — this gap between people’s expectations, what they were prepared for, and what was actually possible — and I think that’s actually where a lot of the danger lies. We can kind of joke about or laugh about this article, because it wasn’t very convincing — you know, just a chatbot saying, leave your wife —

Sydney was pretty spicy.

Very spicy — but it did not actually have an impact. And I think that is, in my mind, the most important thing: trying to surface these things as early in the process as possible, before you have some system that is much more persuasive, or capable, or able to operate in more subtle ways. Because we want to build trust — and figure out where we can’t trust yet, figure out where we put the guardrails in. So to me, this is the process; this is the pain of the learning. And we’ve seen this across the board: we’ve seen places where people try really hard to get the model to do something and it says, sorry, nope, can’t do that; we’ve seen places where people use it for positive things; and we’ve seen cases where people have outcomes like this. And so I think
my answer is: we have a team that works really hard on these problems; we have people who build on top of us, who customize the technology in different ways; but fundamentally, I think we’re all very aligned in trying to make this technology more trustworthy and usable. We do a lot of red teaming internally — we hire experts in different domains, we hire just lots of people to try to break the models. When we actually released it, we knew we’d kind of cleared a bar, we felt, in terms of just how hard it was to get it to go off the rails. But we knew it wasn’t perfect; we knew that we had come up with some ways to get around it with sufficient effort, and we knew that other people would find more too. But we’ve been feeding all that back in — we’ve been learning from what we see in practice. And I think this sort of loop of there being failures is important, because if not, it means you’re kind of holding it too long — you’re being too conservative — and then when you do release it, you’re actually taking on much more risk and much more danger. It’s not 100% true in all cases, but that heuristic, I think, is important.

Well, I think it’s also — we’ll get to it a little bit later — an important segue to talk about the future of misinformation, and how we can prep now for what’s coming with this innovation. But before we get to it — I think
one of the most interesting things to me is the ability of this technology to synthesize information, make predictions, and identify patterns. So can you tell me what you think the most interesting future use cases will be — what will artificial intelligence be able to predict? Like, predict disease, predict the stock market, predict if you’re going to — not you — if someone’s going to get a divorce? What could this predict? Paint the image of the future.

Well, I
think that the real story here, in my mind, is amplification of what humans can do. And I think that will be true in knowledge work. It’s kind of like if you hired six assistants who are all — you know, they’re not perfect, they need to be trained up a little bit, they don’t always know exactly what you want — but they’re so eager, they never sleep, they’re there to help you, they’re willing to do the drudge work, and you get to be the director. I think that is what writing will look like; I think that’s what coding will look like; I think that’s what business communication will look like. But I also think that’s what entertainment will look like. You think about today, where everyone watches the same TV show — and maybe people are still upset about the last season of Game of Thrones — but imagine if you could ask your AI to make a new ending that goes a different way, and maybe even put yourself in there as a main character or something, having interactive experiences. So I think every aspect of life is going to be sort of amplified by this technology. And I’m sure there are some aspects where people or companies will say, I don’t want that — and that’s okay. I think it’s really going to be a tool, just like the cell phone in your pocket, that is going to be available when it makes sense.

We think a lot at my company — we’re knee-deep in
exploring how artificial intelligence can personalize content and develop closer relationships with the audience, which is a wide-open space and an interesting space — but there are also so many ethics questions that come up with that, so we’re developing a lot of these ethical frameworks around it. I’m curious — when you talk about Game of Thrones and personalized media and being able to put yourself in it — when we look at the future of media and entertainment, would you say this is a new frontier for personalized media?

Yeah, I think for sure. I mean, I kind of think it’s a new frontier for most areas. It may not be great yet at some domains, but I think we are just going to see way more creative action happening. And to me, the thing that’s most encouraging is that the barriers to entry decrease — this is, by the way, how we thought about things at Stripe: decrease the barrier to people making payments online and integrating them into their services, and way more activity happens, things you would never think of. And I think we’ll see this in content: individuals who have a creative idea that they want to see realized now have a whole creative studio at their disposal. But also the pros — the people who really want to make something good — can make something way better than any of the amateurs could. We’ve seen this with DALL-E: there are literally these hundred-page books that people write on how to prompt DALL-E. So I think that skill doesn’t go away; I think it’s this multiplicative effect.

But there will also be all
these murky questions around identity and attribute attribution as these models go mainstream so it’s not
perfectly clear what the data sets are used to train so when we take a step back and this is a more fundamental
question should an artist’s Style with models trained on their work should it be available to folks um to anyone
without use of attribution what are you guys thinking about when it comes to these ethical yeah so we’re so we engage
very closely with policymakers, and I think this is a really important conversation to have. You know, fundamentally, we as a company want to provide information and to show kind of what's possible, and let there be a public conversation about these topics. I don't think that we have all the answers, but we think it's
really important to be talking about, right? So take me, for example. I like to put myself in the driver's seat; I'm like the beta test. So let's say someone took all the footage of me interviewing folks like you, Zuckerberg, whatever, throughout the years, and they included my voice, my body, and they trained this as, like, an AI Laurie model. I've already named it, I don't know, please don't do it, guys. And I don't know why I'm inviting this, but then they launched a podcast using my likeness, my style, my voice. Hopefully it'll have fabulous style, that
would be all I’d ask but like could they do it should they get a cut like should
I get a say in it like these are the as a content creator as someone who said at the the center of these ethical
questions about the future like what does that look like yeah no again I think I think this is a great question
and I think it would be kind of hubristic of me to say that I have all the answers, but I can tell you a little bit of how we think about it.
Yeah. Um, you know, as a company, our mission is to build AGI that benefits all of humanity, right? We've kind of built this capped-profit structure. And I really think that an answer to this question, but more broadly, how do you make sure that all of humanity are kind of stakeholders in what gets built and everyone benefits, whether it's access to these services, whether it's that you're able to have your AI personality, or this AI that you build up that represents you, and sort of build a business with that,
um, I think all this is on the table. And I do think that society as a whole needs to adapt here. There's no question that something is changing, and I think that we need to lean into that. Question: do you think, and I'm getting a little Black Mirror, but why not, um, do
you see a future where we verify our own AI identities and we can license them out so like I could license out my
likeness to some degree? Yeah, you know, I think again, kind of everything is on the table. I think actually this goes to your earlier question too, of like, why now, what's happening now: I think everyone kind of senses it, right, that we're building almost this new kind of internet or something like that. And in what sense? Well, I think where content comes from, you know, in good and bad ways, right, how it's created, what an application is,
you know there’s web 1.0 and 2.0 or something and you know I’m not going to talk about web three uh but is it too
soon there you go yeah I know I’ve never never uh yeah uh more to say there uh
But I think that where we're going is: what an application is will be very different. Right now you think of content that was written by someone; it's very static, you can't really interact with it. But we're clearly moving to a world where it's alive, right? You can talk to it, and it understands you and helps you. Like, honestly, every time I go through some menu and I keep trying to find where I'm supposed to click, I'm like, why
is this still here? Yeah, and I think in the future it will not be. Um, going back to the next iteration of ChatGPT: it was built on GPT-3, correct? 3.5. Okay, 3.5. How much more powerful is the current technology you're building? Uh, well, you know, we are continuing to make significant progress. Um, but, like, blink twice if it's 10 times
more powerful, or... okay, uh-huh, three times, there we go. Uh, I guess all I can say is that, you know, I can't comment on unreleased work, but I can say that we work really hard both on the capability side and on the safety side. And, you know, there's been a lot of rumors swirling around about what we're going to be releasing and what's coming out, and what I can definitely say is that we do not release until we feel good about the safety and the risk mitigations. And, I mean, you guys have the ability to
turn up the dial, turn down the dial. And we've seen, I joke about ChatGPT being confidently wrong: it does so many fascinating things, and it sometimes confidently says the wrong thing. Like, I was asking it my bio, and it confidently said three out of four things that were correct, right? Um, so can you give any insight, maybe speaking, I don't know, we could speak around it, about what future versions are going to look like? Will it be more cautious, more creative,
like? Yeah, and let me give you a mental model for how we build these systems. So the first step in the process is training what we call the base model, and the base model is just trained to predict the next word. You just give it a bunch of text, you give it all the good stuff and all the bad stuff. It sees true facts, it sees math problems with good answers and with incorrect answers that no one tells it are incorrect. It sees everything, and it learns to predict: given some document, it's supposed to predict what comes next, and it has to think through everything, like, okay, I see some math problem, but is this maybe written by a student who doesn't really know that much, or was this written by Terence Tao? It has to infer all these contextual things to figure out just what's the next word. So that model has every bias, it has every ideology, it has every idea
that has been expressed, compressed into this system and learned in a real way. And then we do a second step of reinforcement learning from human preferences, what we call post-training, and here you move from this giant sea of data of everything to really trying to hint to the model: okay, you kind of know all this stuff, but here's what you really should do, right? And here I think there's something that's very important and very fraught, right, this question of, well, what should the AI do, who should pick that. And that I think is also a whole different conversation, and something that we're really trying to get some legitimacy around.
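The two-step recipe described here (a base model trained purely on next-word prediction, then a post-training step that nudges it toward preferred behavior) can be illustrated with a toy, count-based stand-in. This is a sketch only: the corpus, the `preferred` set, and the multiplicative reweighting are invented for the example, and real GPT-style training uses neural networks and reinforcement learning from human feedback rather than anything this simple.

```python
from collections import Counter, defaultdict

# Step 1: a "base model" that just learns to predict the next word by
# counting what follows each word in a corpus (all data is made up).
corpus = (
    "the sky is blue . the sky is green . the sky is blue . "
    "two plus two is four . two plus two is five ."
).split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def base_model(prev):
    """Probability distribution over the next word, straight from the data."""
    c = counts[prev]
    total = sum(c.values())
    return {w: n / total for w, n in c.items()}

# The base model reproduces whatever the data contains, true or false:
# after "is" it assigns mass to "blue", "green", "four", AND "five".
print(base_model("is"))

# Step 2 ("post-training", here vastly simplified): reweight toward
# continuations that human raters preferred, then renormalize.
preferred = {"blue", "four"}  # stand-in for human preference labels

def tuned_model(prev, boost=5.0):
    dist = base_model(prev)
    scored = {w: p * (boost if w in preferred else 1.0) for w, p in dist.items()}
    z = sum(scored.values())
    return {w: s / z for w, s in scored.items()}

print(tuned_model("is"))  # mass shifts toward the preferred answers
```

Run on this corpus, the base model spreads probability over every continuation it has seen, right and wrong alike, while the tuned model shifts most of its mass onto the human-preferred answers. That is the qualitative shape of the pretraining/post-training split, nothing more.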
Um, but that second step is where these sorts of behaviors come from. And I alluded to earlier that we found that the base model itself is actually very calibrated on its uncertainty: if it spits out, yeah, there's like a 10% chance this is right, then 10% of the time that thing will be right, with quite good precision. But our current post-training process, this sort of next step that we do to really say, no, no, this is what you're supposed to do, we don't really include any of that calibration in there. The model really learns, like, you know what, just go for it. And that I think is sort of an engineering challenge for us to address. And so you should expect that
even with the current ChatGPT, we've released like four or five different versions since December.
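The calibration claim here (a stated 10% chance being right about 10% of the time) is directly measurable: bucket predictions by their stated confidence and compare each bucket against how often those answers were actually correct. A minimal Python sketch; the confidence/correctness pairs below are invented for illustration, not real model outputs:

```python
# Measuring calibration: group predictions by stated confidence and
# compare with how often they were actually correct. Synthetic data
# standing in for (model confidence, was the answer right?) pairs.
predictions = [
    (0.9, True), (0.9, True), (0.9, True), (0.9, False),
    (0.6, True), (0.6, False), (0.6, True), (0.6, False),
    (0.1, False), (0.1, False), (0.1, True), (0.1, False),
]

def calibration_table(preds):
    buckets = {}
    for conf, correct in preds:
        buckets.setdefault(conf, []).append(correct)
    # A well-calibrated model has accuracy ~= confidence in each bucket.
    return {conf: sum(v) / len(v) for conf, v in sorted(buckets.items())}

print(calibration_table(predictions))
# prints {0.1: 0.25, 0.6: 0.5, 0.9: 0.75}
```

With these synthetic numbers the model is overconfident at the top (says 90%, right 75% of the time) and underconfident at the bottom; a well-calibrated model would produce a table where accuracy roughly matches confidence in every bucket.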
um and they’ve gotten a lot better if actuality improves you know that hallucinations are a problem people talk about those have improved a lot of the
jailbreaks that used to work don’t don’t work anymore and that is because of the post-training process and so I would
expect that we will have systems that are much more calibrated, that are able to check their own work, that are able to be much more calibrated on when they should refuse, when they should help you, but also that are able to help you solve more ambitious tasks. Like what? Um, well, you know, I think that the kinds of things that I want as a programmer are
that, you know, right now we started with a program called Copilot, which can do sort of autocomplete inline, and it was very useful if you don't really know the programming language that you're in, or you don't know specific library functions, that kind of stuff. So it's basically like being able to skip the dictionary lookup, and it just does it for you right there in your text editor. With ChatGPT you can start being more ambitious: you can start asking it to write whole functions for you, or, like, you write the skeleton of the bot in this way. And I think that
where we’re going to go is towards systems that could help you be much more like a manager right where you can
really be like okay I want a software system that’s architected in this way and the the system goes and it writes a
lot of the pieces, and it actually tests them and runs them. And I think this is kind of like giving everyone a promotion, right, making you into more of the, you know, bumping up a couple pay grades, I think literally and figuratively. I think that's the kind of thing that they will do. So the future of ChatGPT is we're all getting a promotion? I think so. And then, I think, so it's not too bad. I think there's obviously a lot
of fear around the future of artificial intelligence, right? People say AI's coming for our jobs. Be honest with all of our friends here: what jobs are most at risk? Yeah,
well, the funny thing is, the way I think everyone used to think about this, certainly the way I did, was: it's very clear the AI is coming for the jobs, it's just a question of what order, and clearly the ones that are menial or just require physical work or something like that, oh, the robots will come for those first. And in reality it's been very different: we've actually made great strides on cognitive labor, you know, think about writing poems or anything like that, and we have not made very much progress on physical things. And I think this automation is showing a very different character from what was expected. But it's also the case that we haven't really automated a whole job, right? I think the
lesson from that is that humans I think are much more capable than we give ourselves credit for right to actually
you know, do your job, to do what you're doing right now. These aren't ChatGPT's questions; I had to follow up and say, can you be more hard-hitting. There you go. Oh, thank you. Yeah, are these the hard-hitting ones, or no? They're coming. Here we go, we're about to go into the future of truth right after. There we go, perfect. Yeah, um, but ChatGPT, it's not up
here on stage with me you know there’s a personal relationship aspect there’s this judgment aspect there’s so many
details that are what you want from the person in charge. But the, like, writing up the actual copy, I mean, you know, who cares about the specific question. ChatGPT cannot replace me because it won't do the follow-up. Well, it probably will do the follow-up. My follow-up question is: so give us a
couple jobs most at risk. Yeah, well, I'll tell you the one that I think is, um, actually content moderator. So the jobs, what I've really seen, is jobs where you kind of didn't want human judgment there in the first place, right? You really just wanted a set of rules that could be followed, and you kind of wanted a computer to do it. And content moderation I think is just a difficult thing; I think we've all read about people having to read these pretty horrible posts and decide: is this thing sufficiently horrible, or just slightly not sufficiently horrible, to be disallowed? And that's something I already see this technology impacting. Um, so that might be a good segue into the
future of truth, right? Because I think we're entering this really fascinating, exciting, and scary era where you have the rise of deepfakes, of automated chatbots that could have the ability to persuade someone one way or the other. Um, what happens to truth in an era where
AI just makes fiction so believable? Well, I have a slightly spicy take here, which is that, you know, I think technology has not been kind in a lot of ways to journalism, and I think that AI and this particular problem might actually be something that is quite kind, and actually really reinforces the need for
authoritative sources that can tell you: this is real. We actually went out and had humans investigate this, we looked at all the different sides of this thing, and these are authenticated videos, or whatever it is, that can tell
you what happened and what the facts are. And so I think where we're going to go is away from a world where, just because you saw some text somewhere, you can trust it's true. It's never really been the case; humans have always been very good at writing fake text.
Images, doctored images, those have existed since the invention of photography. But this gives us the ability to do this at viral scale, right? All the bad things that happened over the last decade, if we're not careful, this will amplify. Yes, and I
think, I agree with this, right? I think this is kind of the crux: the fact of being able to do these things at all is not new; the fact of being able to do it with a much lower barrier to entry, that's new, and that will, I think, spark the need for new solutions. We've never had real answers for chain of custody of information online; we've never really had verified identities. All these things people have talked about since the beginning of the internet, but I think there was never really a need for it, and I think the need will come. Yeah, I
um, I was at an event for the folks at the Center for Humane Technology; they're the folks who also did The Social Dilemma, which in my opinion was great, but it's like, we'd been having these conversations for 10 years before Netflix put out a doc and asked these questions, right? So we're at the beginning of an interesting era, and we should ask these questions, you know, before we have to do a sexy doc on
it in 10 years. Um, there was something said there that I thought was really important: they said that 2024 will be the last human election, meaning by 2028 we will see synthesized ads, viral information powered by artificial intelligence. Someone releases a Biden or Trump filter, tens of millions of videos are going out there, we don't know who's saying what.
So what can be built now? Like, what has to happen now, in your opinion, to get ahead of what will be the inevitable downside of this? Yeah, so I think this is a great question, and I think this is maybe also going to be a tip-of-the-iceberg kind of problem, where it's like it's the most visible one, it's clearly extremely impactful, it's one that has been very topical for a
long time. But I think that we're going to see the same questions appearing across all sorts of human endeavor: as there's more access to creation, how do you sift through for good creation, how do you actually find what is true, or find what is high quality, or how do you make sense of it?
I think some of this is really going to be about people building good tools. We've seen this within the social media space; for example, people building tools against cyber harassment, to make it so that people can easily block various efforts and things like that.
um and I think that we need lots of tools to be built here that are really tackling this problem and so that’s one
reason that we don't just build ChatGPT the app; actually, our main focus is building a platform. So we release an API, anyone can use it to build applications, and I think you have an opportunity, some using traditional technology, some using the AI technology itself, in order to actually sift through and figure out what is high quality, curated, and
people want to put their stamp of approval on it, right. Um, you remember the move fast and break things era of Meta, Facebook; remember, they used to have the signs that said move fast and break things. I know OpenAI puts these things out there in an iterative way and has a philosophy about limiting growth to some degree and getting feedback, but
now I would say, because of what's launched, there's this AI race with the biggest companies throwing in money, investing, and we both know that the economic incentives don't always align with what's best for society. What do you think we've learned from the last decade of tech innovation that we must use as we enter into this new era, where the stakes, you could argue, are even higher? Yeah, we think about this a lot. Like, I have spent a lot of
time really trying to understand for each of the big tech companies you know what did they do wrong
and right, you know, but to the extent that mistakes were made, like, what are they, what can we learn? And actually, one thing I found very interesting is that there's not really consensus on that answer. Like, I wish there was a clean narrative that everyone knew, and it's just like, just don't do this thing. Well, I could give an opinion. I love it. Um, having sat across from some of those folks many times, I would say the biggest mistake is not understanding humans, in a nutshell, right. We've got the stamp of approval on that from the audience. Okay, so it sounds like you guys have done a lot of thinking into how you put this out there and how you build out these APIs that other people
can build on. Who are the people that need to build these solutions? Now that you have a seat in Silicon Valley and you're at this really powerful place, who do you guys bring in that's different, diverse, and interesting? Yeah, so we do quite a lot of outreach, and I actually think this is one of the things that's going to be most important, for example on how we make decisions on the limits of what the AI should say. We've written a blog post about this.
But we think that this is something that really needs legitimacy. It can't just be a company in Silicon Valley, it can't just be us who's making these decisions; it has to be collective. And so, and we'll have more to share soon in terms of exactly what we're doing, but we're really trying to scale up efforts to get input, to actually be able to help make collective decisions. And this question of global governance is something that has been really core to our goals from the beginning, and so it's just so clear that you do need everyone to have a seat at the table here, and that's something we're very committed to. And then, talking
regulation I think it’s open AI talks about moving at a bit of a slower Pace but these tools are being deployed to
Millions so the FDA doesn’t allow a drug to go out to the market unless it’s safe so what is the right regulation looks
like for artificial intelligence and what’s happening so yeah this is again something we’ve been we’ve been engaging with policy makers since day one really
I did a couple of congressional testimonies back in, like, 2016, 2017, and it was so interesting to see: the policymakers were already quite smart on these issues and already starting to engage. And I think that one thing we think is really important is focusing regulation on regulating harms, right? It's very tempting to
regulate the means, and we're actually seeing this right now with the EU AI Act; that's kind of a question of exactly how to operationalize some of these issues. And the thing you really want is to say: let's think about the stakes, and really parse apart what are high-stakes areas, what are low-stakes areas, what does it mean to do a good job, how do you know? Yeah, and these
sorts of measurements and evaluations, those are really critical. And so we think the government is a key part of the issue, right? Like, this question of how do you get everyone involved, the answer is we have institutions that are meant for that, right. Um, and so should there be a new regulatory body for artificial intelligence? Because, remember when Zuckerberg went to Congress and they asked how Facebook made, sorry, how Facebook made money, and the answer was like, we sell ads. You know, so really understanding, because it certainly seems like there's going to be all these new issues, should there be a new regulatory body for this? Again, I think it's on the table. I think more likely what I see happening is: AI is just going to be so baked into so many different pieces, and honestly so helpful in so many different areas, that you kind of can't have the FDA not know about AI, right? You can't have any of these institutions be like, ah, someone else has got it, it's all good. And so I think that you do need some cohesive strategy, but I think that every organization, government or otherwise, is
going to have to understand AI and really figure it out. Um, well, I know we have to wrap soon because I want to get to questions, but I
thought we could do a little lightning round. I love a good lightning round. Okay: AI will be sentient when? Uh, a long time from now. Like how long? Uh, this kind of question I prefer not to comment on. Okay, hard to answer. Most interesting future use cases for DALL·E? I think it's going to be just making your dreams come to life. Huh, in what sense? Dream rendering: like, you'll get great visions of your dreams. Spiciest take on the future of AI that you're generally not allowed to say publicly? [Laughter]
Oh man. Uh, I think we're gonna figure it out. I think it's going to go well. You're optimistic? I'm optimistic. I consider myself an optimistic realist: I think it's not going to go well by default, but I think humanity can rise to this challenge. Elon Musk, no longer really involved with OpenAI, called its bias a failure? Well, I think a failure on our part, for sure. In what sense? Well, I think we were not fast enough to address biases in ChatGPT, and we did not intend them to be there. Our goal really was to
have a system that would be sort of egalitarian, treat all the mainstream sides equally. And we actually made a lot of improvements on this over the past month, and we'll have more to share soon. But yeah, I think that people were right to criticize us, and I think that we really, you
know, responded to that. It's one of the pieces of feedback that I think is most valuable. Fill in the blank: a world powered by AI in 2050 is... unimaginable. Okay, I like that. Single most important ethical issue we're facing when it comes to the future of AI and humans? This one's hard. I think it's the whole package, honestly. I think it's this question of how the values get in there, who's in control, how the benefits get distributed, how you make sure that the technology itself is safe and used in the right ways, and that the emergent risks that are going to appear at some point with very capable systems don't end up overwhelming the positives that we're going to get. And so, yeah, I think it's the whole thing. And
at some point, to your first question, you know, the sentience question: at what point do the systems have moral value? And the answer today is definitely not. But, you know, I don't know, we
need to engage the moral philosophers to help answer some of these questions. Are you guys going to hire philosophers? Uh, we're going to hire, I think, kind of everyone across the board. Like, this is one key thing to get across: within AI I've definitely seen this fallacy of people thinking this is a technology problem, or just saying, look, there's the alignment problem of how do you make the AI not go off the rails, but the society thing, that's the hard part, I'm not going to worry about that. And I think you can't do that. I think it really has to be that you engage with the whole package, and that I think is going to require everyone. I like the idea of understanding the people behind the code that transforms society. And so, I've just met you in
person today but we’ve spoken a little bit and about some of the ethical stuff too you’re at the helm of one of the
most important technical technological advances of our time what do you you want people here to know about you
um well I love my wife I’m not going to listen to the chatbot it is fabulous
he’s not being replaced Sydney cannot break up that package I and you know she actually we were
talking about this last night. She was asking, like, why do I do it? Because I work a lot, and we give up a lot of time together as a result of just how much I really try to focus on the work and trying to move the company forward. And I hadn't really thought about that question for a while, and I thought about it, and my true answer was: because it's the right thing to do. I just think that this technology really can help everyone, can help the world. I think it's
you know these problems that we just see coming down the pipe you know climate change again being one of them
I think we have a way out and if I can move the needle on that
and you know I’m grateful to be in the position that I am but honestly when we started the company what I cared about
most was, I was just like, I'm happy to do anything. You know, like, the first day, two people were arguing about something, they didn't have a whiteboard, and I was like, great, I'll go get the whiteboard. And I think that this problem is just so important,
it transcends each of us individually it transcends our own position in it and I think it is really about trying to get
to that good future thank you I’m gonna get to some questions because people have some great
questions um do you believe that there’s a risk of a decline in human intelligence as we
start to outsource our cognition to AI? Yeah, this is definitely something that keeps me up at night, although it's interesting to see this trend across all previous technologies,
you know, radio, television. Um, I've talked to some esteemed statespeople who have said, like, the politicians these days are nothing compared to Teddy Roosevelt; like, read all of Teddy Roosevelt's great thoughts, and people just don't read enough anymore, and so they just don't think as well. It's so unclear to me, like, you know, I
feel like is this true or is it not um but I think that what is definitely important as we see this new technology
coming is figuring out how to have it be an intelligence multiplier right so that you know sometimes yeah you do need to
solve the problem yourself, but what you really want is a great tutor, someone who breaks down the problem for you, really understands what motivates you, and adapts if you have a different learning style. And so I think there's an opportunity here: if you're just blindly not thinking anymore, yeah, you're probably not going to learn to think; but if you have something that is actually figuring out how to help you learn to fish, I think you could go way further.
What is your opinion on... this one was upvoted a lot, so I'm being true to the audience, they have a good question. All right: what is your opinion on intellectual property rights for AI-generated content trained on the work of a particular artist? We talked a
little bit about this, but the people want more. The people want more! Um, I mean, honestly, I think this is an important question. This is like asking exactly how copyright should work right at the creation of the Gutenberg press, right, where it's like, we are going to need to
have an answer we’re engaging with the copyright office we’re engaging with lots of different areas and I don’t I
don’t personally know what exactly the answer should be but I do think that like one thing that I I do want to say
you know not to not to kind of hedge everything here is that I do think that the content creators should be sort of
you know it should be a more prestigious a more compensated a more just like just like good thing for people to pursue now
than ever and I think if we don’t achieve that in some way then I think that something has gone wrong will there be new laws that didn’t exist oh for
sure, I mean, there should be. What do you think they will be like? Well, again, I don't want to speak out of turn. I don't want to be too... yeah, I just don't want to speak out of turn on these issues. But I think that, to me, the
process that’s happening right now is really important you know there’s a lot of just like conversation about these
things; people really care, and they should. And we are trying to figure out mechanisms just within our own, you know, slice of how we implement things and how we work with different partners.
Um, you know, for DALL·E, for example, the very first people that we invited to use it were artists, right, because we really wanted to figure out how do we make this be a tool that you are excited about, that you feel like, yes, I want this, I want there to be more of this in the world. Um, someone had the question:
what should I teach my one-year-old daughter so she can have a job 20 years from now
I think that the most important thing is really going to be these higher-level skills, right: judgment, really figuring out, is this good, is this bad, do I like this, do I not, knowing when to dig more into the details. And really, I think, today, just even playing with these systems.
Um, like, I think it will be the case that we're going to make the next generations of the DALL·Es and these other systems be such that you don't even have to know language, right; they should become much more child-accessible. And I think that with children being sort of AI-native users, you're going to find that they figure out how to use these in totally unimaginable ways. Um, let's see,
sorry this one’s not working I’m going to this one okay how can we maintain the Integrity of AI models like chat GPT
when capital from corporates has entered the space monetizing a tool run by a
non-profit and you’ve I mean a lot of folks this is actually this is what chat gbt also asked me to ask you which is
interesting it’s very topical and so if you could give us a little
more insight, because obviously we're very far from when you guys sat at that dinner and said, we want to change things, and now there's money, there's profit, there's all these other things. So how do you guys maintain that? Yep. Well, I think that our answer to this question, and you should hold us accountable by the way, is really about structure, right, that
we’ve really set up our structure in a very specific way which by the way has turned off a lot of investors we have
this big purple box at the top of all of our investment docs that say the mission comes first that we may have to you know
if there’s a conflict with with achieving the mission cancel all of your profit interests which you yeah you know
sends many traditional investors running for the Hills uh and I think that you
know like there’s part there’s a part of the frame of the question that I you know sort of don’t agree with which is that I don’t think that the existence of
capital is itself a problem like I think that you know we’re all using iPhones we’re using TVs created by companies
there’s a lot of good things but I do think it comes with great incentives right it comes with this pressure to you
know sort of do what’s good for you specifically but not necessarily for the rest of society not to internalize those
externalities and so I think that the important thing for us has been to really figure out how do you set up the
incentives that are on yourself so that you do as much as possible get people to
you know the best people to join you can build the massive super computers you can actually build these tools and get
them out there but at the same time if you do succeed massively and mildly beyond anything that’s happened how do
you make sure that you don’t you know once you’ve kind of gotten to everything you don’t have to then 2x everything you know and I think that these kinds of
very subtle choices make a huge difference in terms of outcome um I want to end with a quote from your
co-founder, Sam Altman. He wrote: "A misaligned superintelligent AGI could cause grievous harm
to the world; an autocratic regime with a decisive superintelligence
lead could do that too. Successfully transitioning to a world with superintelligence is perhaps the most
important and hopeful and scary project in human..." It is perhaps the most... sorry, I'm really messing this up.
"...is perhaps the most important and hopeful and scary project in human history. Success is far from
guaranteed, and the stakes, boundless downside and boundless upside, will hopefully unite us all." So the last
question, Greg, is: do you think we're heading towards boundless upside, and if so, what is the one
thing that we can do right now to make sure we tip the scales in that favor? Yeah, so I think that what
we're seeing looks very consistent with a slow takeoff. People kind of
have thought that maybe what's going to happen is nothing, and then boom, you get either the
great outcome or the terrible outcome. And I think what we're seeing is much more of a gradual integration, and
it's scarier, because it's much harder to solve that problem in the lab and in your head, right? It's not a math
problem, it's not a code problem, it's a human problem. And I think that this is the key.
And so I think that really engaging with these technologies, all these questions that we don't know the
answers to yet, that's the responsibility not just of us, right? That's the responsibility of
everyone, not just in this room, but really in this world. And I think it's going to be a project of decades, right, to
really go from where we are to the kinds of systems that we're talking about. And all along the way there are going to be surprising things. There are
going to be great things that happen; there are going to be causes for joy, causes for grief. And, you know, I think they all
happen in small ways now, and in the future maybe they'll happen in bigger and bigger ways. And I think that just really
engaging with this process, everyone educating themselves as much as possible, trying the tools to
understand what is possible and figuring out how this can fit in, right? I love the question about what should I teach my
one-year-old, because that is a hope-for-the-future kind of question, right? And I am very optimistic.
Again, I consider myself a realistic optimist: you really have to be calibrated,
you can't just blindly think it's all going to work out, you have to engage with all the problems. But I
think it is possible that we will end up in this world of abundance and, you know, the real good
future. I think it won't be perfect, I think there will be problems, and there will certainly be many problems along the way,
but I think we can rise to the occasion. Do you have children? Uh, not yet, not yet.
Yeah, we're working on convincing my wife, though. Okay, so, I was gonna say: do you believe that
kids of your friends, or your own kids if you end up having children, will grow up in a better world? I do think so. I
think we have a shot at it, right? And again, it's not guaranteed. I do not consider myself to think that any of
this is for certain, and I think the moment you think that it is, that's when things go wrong. And so I think we all
have to be constantly asking: what can go wrong, and what can we do to
prevent that, right? Greg Brockman, thank you so much. Thank you so much, thank you, appreciate it.
