IBM's Dario Gil on AI Evolution
Key Points
- The conversation introduces Dario Gil, IBM’s chief AI executive, highlighting IBM’s decades‑long role in AI milestones such as Deep Blue and Watson.
- Gil notes that although AI research dates back to the 1950s, the term “AI” was once disfavored in academia and only regained credibility with the deep‑learning breakthroughs of the last decade.
- He emphasizes the need to demystify AI by stripping away jargon so people can better understand its real capabilities and limitations.
- Using the rapid transcription of interview recordings as an example, Gil illustrates how AI is already transforming ordinary workflows and dramatically increasing efficiency.
Sections
- [00:00:00] AI Evolution: IBM's Dario Gil Interview - The host introduces IBM's AI chief Dario Gil, referencing IBM’s historic projects like Deep Blue and Watson, and frames a conversation aimed at demystifying modern AI and its limits.
- [00:03:04] AI’s Resurgence and IBM Milestones - The speaker discusses how deep‑learning’s revival restored AI’s legitimacy, recalls IBM’s historic involvement from the Dartmouth conference to the Jeopardy breakthrough of the late 2000s, and highlights the promise of leveraging vast digital knowledge for productive collaboration.
- [00:06:11] AI Democratization and Value Concentration - The speaker argues that although AI tools will become universally accessible, the power to create AI and embed proprietary data for lasting competitive advantage will stay concentrated, shaping equity between the haves and have-nots.
- [00:09:22] AI as Catalyst for Institutional Collaboration - The speaker critiques institutions that overstate their AI capabilities, highlights the massive resources required for modern foundation models, and proposes that AI can serve as a tool to dismantle siloed structures and promote cross‑disciplinary collaboration.
- [00:12:30] AI Ethics in Hollywood Contracts - A speaker outlines how AI’s capabilities, such as voice cloning, should shape writer protections and contract negotiations during the recent Hollywood writers strike.
- [00:15:40] AI Redefining Doctor-Patient Roles - A speaker explains that AI will take over tedious tasks, allowing doctors to spend more time on meaningful patient interaction, and stresses the importance of preparing upcoming physicians for a fundamentally altered, AI‑augmented profession rather than dissuading them from it.
- [00:18:54] Rethinking Assessment in Medical Education - The speakers debate how to embed data‑driven quality tools and benchmarking into medical training, questioning traditional essay assignments and proposing a new instructional lens for better patient outcomes.
- [00:22:04] Banning Calculators for Deeper Learning - The speaker recounts a reform that removed calculators from exams, compelling students to think conceptually—a move that sparked resentment yet ultimately enhanced learning—and speculates on similar future pushes against digital tools to spur creative pedagogy.
- [00:25:13] Beyond Tech: Human Dynamics of Revolution - The speaker contends that true societal transformation hinges less on technical capability—such as data‑science teams building applications—and more on contested, non‑technical discussions about values, credit, and new human arrangements, rejecting technological determinism.
- [00:28:19] AI's Invisible Power and Adoption - The speaker argues that AI’s value lies in quietly enhancing everyday systems, warns against misattributing human actions to AI, and emphasizes responsible, unobtrusive integration.
Full Transcript
Source: https://www.youtube.com/watch?v=Gxc7pDDbn7U
Duration: 00:29:05
Seems like AI has kind of bubbled into our consciousness in the last year or so.
And the question is, who do you talk to?
Who can give you the best possible perspective?
Today I'm talking to Dario Gil, who's the big AI honcho at IBM, and we're going to be talking about the future of AI.
And IBM has been at the center of AI research for decades now.
I mean, I'm sure we'll talk about that today.
But going back to Deep Blue and Watson and all these kinds of -
So, you know, there's sort of no better place to start
than with someone who's been at the center of that work for a long time.
So that's one of the reasons I'm so excited about this conversation.
I'm just excited about demystifying the technology
and removing a lot of the lingo that is associated with this topic
and try to bring it to a point where we can have a better understanding of what it is, what it can and cannot do.
I wanted to say before we get started, there's something I said backstage that ...
I feel very guilty today because you're the one, you are arguably one of the most important figures in AI research in the world,
and we have taken you away from your job for a morning.
It's like if, you know, Oppenheimer's wife in 1944 said, "Let's go and have a little getaway in the Bahamas".
It's that kind of thing.
You know, what do you say to your wife?
I can't.
We have got to work on this thing I can't tell you about.
I do interviews for a living.
It's like, you know, I generate hours and hours and hours and hours and hours of transcripts
of interviews, tape of interviews.
It used to be we would send the tapes out to be transcribed.
Now they're transcribed in literally five seconds.
That's like day one trivial case.
But multiply that out and extended into the future
and you start to see how, wow, this makes a lot of ordinary operations a lot more efficient.
Well, I think the first thing is that even though AI as a field has been with us for a long time, since the mid-1950s,
at that time AI was not a very polite word to say.
Meaning within the scientific community, people didn't use that term.
They would have said things like, you know, maybe I do things related to machine learning, right?
Or statistical techniques in terms of classifiers and so on.
But AI had a mixed reputation, right?
It had gone through different cycles of hype.
And also moments of, you know, a lot of negativity towards it because of lack of success.
And so I think that that will be the first thing we probably say, like AI, like what is that like, you know?
Respectable scientists are not working on AI defined as such.
And that really changed over the last 15 years only.
I would say with the advent of deep learning over the last decade is when that reentered the lexicon,
of saying AI and that it was a legitimate thing to work on.
So I would say that that's the first thing I think we would have noticed that contrast 20 years ago.
For AI, you know, at the heart of it
is the ability to build machines and systems that are able to learn and to learn by example.
So on the positive side,
there's just so much digital knowledge that we have accumulated over the last number of decades
that we have this tremendous potential to train these machines,
to learn from all the past knowledge that humans have accumulated,
and then to use those machines to help us with productivity,
to in some way collaborate with us or automate things that we don't want to do, etc.
So at what point in your 20 year tenure at IBM would you say you kind of snapped into present kind of "wow" mode?
I would say in the late 2000s
when IBM was working on the Jeopardy project
and just seeing the demonstrations of what could be done in question answering.
It literally, Jeopardy is this crucial moment in the history of AI.
Yeah, you know, there had been a long and wonderful history in inside IBM on AI.
So for example, you know, in terms of like these grand challenges,
at the very beginning of the field founding, which is this famous Dartmouth conference that actually IBM sponsored to create,
there was an IBMer there called Nathaniel Rochester,
and there were a few others who right after that they started thinking about demonstrations of this field.
And they, for example, they created the first game to play checkers
and to demonstrate that you could do machine learning on that.
Obviously we saw later in the '90s, like chess, that was a very famous example of that.
That was Deep Blue.
With Deep Blue, yeah, right, and playing with Kasparov.
And then, but I think the moment that was really -
those are the ones that felt like, you know, kind of like brute force, anticipating sort of like moves ahead.
But this aspect of dealing with language and question answering felt different.
And I think for us internally and many others it was a moment of saying like,
"Wow, you know, what are the possibilities here?".
And then soon after that, connected to the sort of advancements in computing
and with deep learning the last decade, it's just been an all out, you know, sort of like front of advancements
and that and I just continue to be more and more impressed.
And the last few years have been remarkable, too.
My hope is that the good outweighs the bad.
And my real hope is that the benefits are distributed.
So if all it does is make the wealthiest nations wealthier,
that's a good thing but it doesn't solve the fundamental problem we have as a world,
which is that there is a big gap between the haves and the have-nots.
If AI ends up helping the have-nots more than the haves, then it becomes really interesting.
That's actually one thing I really want to talk to Dario about.
What is the kind of, what's the shape of the impact?
You know, is it widely distributed or is it concentrated near the top?
The use of AI will be highly democratized,
meaning the number of people that have access to its power
to make improvements in terms of efficiency and so on will be fairly universal.
And that the ones who are able to create AI may be quite concentrated.
So if you look at it from the lens of who creates wealth and value
over sustained periods of time, particularly, say in a context like business,
I think just being a user of a technology is an insufficient strategy.
And the reason for that is like, yes, you will get the immediate productivity boost of like just making API calls
and you know, that will be a new baseline for everybody,
but you're not accruing value in terms of representing your data
inside the AI in a way that gives you a sustainable competitive advantage.
So what I always try to tell people is don't just be an AI user, be an AI value creator
and I think that that will have a lot of consequences
in terms of the haves and have-nots as an example, and that will apply both to institutions and regions and countries, etc.
So I think it would be kind of a mistake, right, to just develop strategies that are just about usage.
So there's a lot of considerations in terms of equity
about the data, the datasets that we accrue and what problems are we trying to solve.
I mean, you mentioned agriculture or healthcare and so on.
If we only solve problems that are related to marketing as an example,
that will be a less rich world in terms of opportunity than if we incorporate many, many other, broader sets of problems.
Yeah.
Who do you think, what do you think are the biggest impediments to the adoption of AI,
as you would like, as you think it ought to be adopted?
I mean, what are the sticking points that you would ...
Look, in the end I'm going to give a non-technological answer; the first one has to do with workflow, right?
So even if the technology's very capable,
the organizational change inside a company to incorporate it into the natural workflow of people and how we work
is hugely important, a lesson we have learned over the last decade.
So there's a lot of design considerations.
There's a lot of how do people want to work, right?
How do they work today and what is the natural entry point for AI?
So that's like number one.
And then the second one is for the broad value creation aspect of it is the understanding inside the companies
of how you have to curate and create data
to combine it with external data such that you can have powerful AI models that actually fit your need
and that aspect of what it takes to actually curate and create data for these modern AI, it's still a work in progress, right?
I think part of the problem that happens very often when I talk to institutions is that they say, "Yeah, yeah, yeah, I'm doing it.
I've been doing it for a long time".
And the reality is that that answer can sometimes be a little bit of a cop-out, right?
It's like, I know you were doing machine learning, you were doing some of these things,
but actually the latest version of AI, what's happening with foundation models, not only is it very new, it's very hard to do.
And honestly, if you haven't been, you know, assembling very large teams
and spending hundreds of millions of dollars of compute and such, you're probably not doing it.
Right, you're doing something else that is in the broad category.
And I think the lessons about what it means to make this transition
to this new wave is still in the early phases of understanding.
Now one of the most persistent critiques of academia, but also of many corporate institutions
in recent years has been siloing, right?
Different parts of the organization are going off on their own and not speaking to each other.
Is a real potential benefit of AI the breaking down, a simple tool for breaking down those kinds of barriers?
Is that an elegant way of sort of saying what ...
I really think, and I was actually just having a conversation with our provost very much on this topic very recently.
Exactly on that, which is all these, you know, this appetite to collaborate across disciplines.
There's a lot of attempts towards that goal,
like creating interdisciplinary centers, creating dual degree programs or dual appointment programs.
But actually a lot of progress in academia happens by methodology too,
right, like a new, you know when some methodology gets adopted.
I mean, the most famous example of that is the scientific method.
But when you have a methodology that gets adopted,
it also provides a way to speak to your colleagues across different disciplines.
And I think what's happening in AI is linked to that.
That within the context of the scientific method, as an example,
the methodology about which we do discovery, the role of data,
the role of these neural networks, of how we actually find proximity to concepts to one another
is actually fundamentally different than how we've traditionally applied it.
So as we see across more professions,
people applying this methodology is also going to give some element of common language to each other, right?
And in fact, you know, in this very high dimensional representation of information that is present in neural networks,
we may find amazing adjacencies or connections of things and topics in ways that the individual practitioners cannot describe,
but yet will be latent in these large neural networks.
We are going to suffer a little bit from causality, from the problem of like, "Hey, what's the root cause of that?"
Because I think one of the unsatisfying aspects that these methodologies will provide is
they may give you answers for which they don't give you good reasons for where the answers came from.
And then there will be the traditional process of discovery of saying, "If that is the answer, what are the reasons?"
So we're going to have to do this sort of hybrid way of understanding the world.
But I do think that common layer of AI is a powerful new thing.
I would say my favorite movie for AI is Space Odyssey,
because it really has shaped so profoundly in this case, kind of like the bad side of AI.
But it has shaped the way we talk about the topic sometimes.
In the writers strike that just ended in Hollywood,
one of the sticking points was how the studios and writers would treat AI-generated content.
Would writers get credit if their material was somehow the source for a ...
but, more broadly, did the writers need protections against the use of ...
I could go on, you know what, we're all familiar with all of this.
Had you been, I don't know whether you were, but had either side called you in for advice during that -
the writers, had the writers called you and said, "Dario, what should we do about AI?"
"and how should should that be reflected in our contract negotiations?"
What would you have told them?
The way I think about that is that I would divide it into two pieces.
First is what's technically possible, right?
And anticipate scenarios like, you know, what can you do with voice cloning, for example.
You know, now, for example, it is possible, there's been dubbing, right?
Let's just take that topic right?
Around the world there was all these folks that would dub people in other languages.
Well, now you can do these incredible renderings.
I mean, I don't know if you've seen them where, you know, you match the lips -
it's your original voice, but speaking any language that you want as an example.
So obviously that has a set of implications around it.
I mean, just to give an example,
so I would say create a taxonomy that describes technical capabilities that we know of today
and applications to the industry and to examples
of like, "Hey, you know, I could film you for five minutes and I could generate two hours of content of you"
"and I don't have to, you know, then if you get paid by the hour, obviously I'm not paying you for that other thing".
So I would say technological capability and then map with their expertise, consequences of how it changes the way they work,
or the way they interact, or the way they negotiate and so on.
So that would be one element of it.
And then the other one is like a non-technology related matter,
which is an element almost of distributive justice, like who deserves what, right?
And who has the power to get what.
And then that's a completely different discussion.
That is to say, well, if this is the scenario of what's possible, you know, what do we want and what are we able to get?
And I think that that's a different discussion, which, for all of us, is life.
Which one do you do first?
I think it's very helpful to have an understanding of what's possible
and how it changes the landscape as part of a broader discussion, right, and a broader negotiation.
Because you also have to see the opportunities, because there will be a lot of ground to say,
"actually, you know, if we can do it in this way then we can all be that much more efficient
in getting this piece of work done or this filming done."
But we have a reasonable agreement about how both sides benefit from it, right?
Then that's a win-win for everybody.
This will remind us about how much we like real interaction
and it will improve the nature of our person-to-person interactions
by removing the onerous tasks that human beings are not very good at doing and were never meant to do in the first place.
So when we're talking about doctors,
I think when you go to the doctor and the diagnosis is really quick and easy,
and the doctor can spend the rest of their time talking to you about what's really wrong with you,
that's a much better interaction.
And it's better not because AI is duplicating what the doctor does,
but because AI is doing something completely different.
But one of your daughters, you said, is thinking that she wants to be a doctor.
But being a doctor in a post-AI world is truly a very different proposition than being a doctor in a pre-AI world.
Do you think, have you tried to prepare her for that difference?
Have you explained to her what you think will happen to this profession she might enter?
Yeah.
I mean, not in like, you know, an incredible amount of detail.
But yes, at the level of understanding what is changing.
Like this lens of, the information lens with which you can look at the world and what is possible, and what it can do.
Like what is our role and what is the role of the technology
and how that shapes, at that level of abstraction, for sure,
but not at the level of like, "don't be a radiologist, you know, because this is what we ..." This is what we want for you.
I was going to say, if you're unhappy with your current job, you could do a podcast called "Parenting Tips with Dario",
which is just an AI person giving you advice on what your kids should do based on exactly that, like, "should I be a radiologist?"
Dario, tell me!
Like, it seems to be a really important question!
Let me ask this question in a more ...
I'm joking, but in a more serious way.
Surely it would if, I don't mean to use your daughter as an example,
but let's imagine we're giving advice to someone who wants to enter medicine.
A really useful conversation to have is what are the skills that will be most prized
in that profession 15 years from now, and are they different from the skills that are prized now?
How would you answer that question?
Yeah, I think, for example, this goes back to how is this scientific method,
and in this context, like the practice of medicine going to change?
I think we will see more changes in how we practice the scientific method and so on as a consequence
of what is happening with the world of computing and information, how we represent information,
how we represent knowledge, how we extract meaning from knowledge as a method than we have seen in the last 200 years.
So therefore, what I would strongly encourage is not about like, "hey, use these tools for doing this or doing that",
but in the curriculum itself, in understanding how we do problem solving
in the age of like data and data representation and so on, that needs to be embedded in the curriculum of everybody
that is, I would say actually quite horizontally, but certainly in the context of medicine and scientists and so on for sure.
And to the extent that that gets ingrained,
that will give us a lens that no matter what specialty they go with in medicine,
they will say, actually the way I want to be able to tackle improving the quality of care, the way to do that is
in addition to all the elements that we have practiced in the field of medicine is this new lens
and are we representing the data the right way?
Do we have the right tools to be able to represent that knowledge?
Am I incorporating that in my own, sort of with my own knowledge in a way that gives me better outcomes?
Do I have the rigor of benchmarking, too, and quality of the results?
So that is what needs to be incorporated.
I really can't assign an essay anymore, can I?
Can I assign an essay?
Yeah, can I say, "write me a research paper and come back to me in three weeks?" Can I do that anymore?
I think you can.
How do I do that?
I think you can do that. Look, so there's two questions around that.
I think that if one goes and explains in the context like, "what is it?
Why are we here?
Why in this class, what is the purpose of this?"
And one starts, well, assuming an element of decency, and people are people, they like to learn and so on,
and you just give a disclaimer,
"Look, I know that one option you have is like, just put the essay question and click go on like and give an answer, you know,
"but that is not why we're here and that is not the intent of what we're trying to do."
So first I would start with the, sort of like, the norms of intent and decency and appeal to those as step number one.
Then we all know that there will be a distribution of use cases,
that for some people it will come in one ear and out the other, and they'll do that.
And so for a subset of that, I think the technology is going to evolve in such a way that
we will have more and more of the ability to discern, right, when that has been AI-generated, right, and created.
It won't be perfect, right, but there's some element that, you can imagine inputting the essay
and you say, "hey, this is likely to be generated", right, around that.
And for example, one way you can do that, just to give you an intuition,
you could just have an essay that you write with pencil and paper at the beginning.
You get a baseline of what your writing is like.
And then later when you, you know, generate it,
there'll be obvious differences around what kind of writing has been generated.
Yeah but you've turned ...
everything you're describing makes sense, but it greatly, in this respect at least,
it seems to greatly complicate the life of the teacher,
whereas the other two use cases seem to kind of clarify and simplify the roll, right?
Suddenly, you know, reaching prospective students sounds like you can do that much more kind of efficiently.
Like, yeah, I can bring down administration costs, but the teaching thing is tricky.
Well, until we develop the new norms, right?
I mean, again, I mean, I know it's an abused analogy, but calculators, we dealt with that too, right?
And I said, well, calculators, what is the purpose of math, how are we going to do this?
Can I tell you my dad's calculator story?
Yes, please.
My father was a mathematician, taught mathematics at University of Waterloo in Canada.
In the '70s when people started to get pocket calculators,
his students demanded that they be able to use them and he said no.
And they took him to the administration and he lost.
So he then completely threw out all of his old exams and introduced new exams where there was no calculation.
It was all like deep thinking, you know, figure out the problem on a conceptual level and describe it to me.
And the students were all deeply unhappy that he had made their lives more complicated.
But to your point, probably, the result was probably a better education, right?
He just removed the element that they could game with their pocket calculators.
I suppose it's a version of ...
I think it's a version of that.
And so I think they will develop the equivalent of what your father did.
And I think people will say, you know what, it's like these kinds of things.
Everybody's doing it generically and none of it has any meaning because all you're doing is pressing buttons.
And like, the intent of this was something else, which was to teach you how to write or to think of something.
That may be a variant of how we do all of this.
I mean, obviously some version of that that has happened is like,
"OK, we're all going to sit down and do it with pencil and paper and no computers in the classroom",
but there'll be other variants of creativity that people will put forth to say, "You know what?
You know, that's a way to solve that problem, too." And I'm really interested in the pace.
How quickly does he think we go from here to something, you know, even more dramatic?
Are we talking about, you know, when people talk about the AI-driven future,
are they talking about five years, or ten years or 20 years? That's one question.
I'm curious to find out his level of optimism about AI.
I mean, there's a band of people who think that it could have really destructive effects
and bring all kinds of dangers, and others who point out the kind of positive aspects.
How does he balance those two sides of it?
That's the other big question I have for him.
I think we're in a significant inflection point
that it feels the equivalent of the first browsers when they appeared and people imagined the possibilities of the internet,
or more imagined the experience of the internet.
The Internet has been around for quite a few decades.
AI has been around for many decades, and the moment we find ourselves in is that people can touch it.
And they can - before there were AI systems that were like behind the scenes,
like your search results or our translation systems,
but they didn't have the experience of like, this is what it feels like to interact with this thing.
So, so that's why, I mean, I think maybe that analogy of the browser is appropriate, because all of a sudden it's like,
whoa, you know, there's this network of machines and content can be distributed and everybody can self-publish.
And there was a moment that we all remember that.
And I think that is what the world has experienced over the last nine months or so.
But fundamentally, what is also important is that
this is the moment where the number of people who can build and use AI has skyrocketed.
Over the last decade, you know, technology firms
that had large research teams could build AI that worked really well, honestly.
But when you got down to asking, "Hey, can everybody use it?
Can a data science team in a bank, you know, go and develop these applications?",
it was more complicated.
Some could do it, but the barrier to entry was high.
Now it's very different.
What struck me, Dario, throughout our conversation is how much of this revolution is non-technical.
That is to say, you guys are doing the technical thing here, but the real revolution is going to require
a whole range of people doing things that have nothing to do with software,
that have to do with working out new human arrangements.
Talking about that, I mean, I keep going back to the Hollywood strike,
where you have to have a conversation about our values as creators of movies.
How are we going to divide up the credit? Like, that's a conversation about philosophy.
It is and it's in the grand tradition of why, you know, a liberal education is so important in the broadest possible sense, right?
There's no common conception of the good, right?
That is always a contested dialog that happens within our society.
And technology is going to fit in that context, too, right?
So that's why I, personally, as a matter of philosophy, am not a technological determinist, right?
And I don't like when colleagues in my profession start saying like, well, this is the way the technology is going to be,
and by consequence, this is how society is going to be.
I'm like, that's a highly contested claim,
and if you want to enter into the realm of politics, or other realms, go and stand up on a stool
and discuss whether that's what society wants.
You will find there is a huge diversity of opinions and perspectives.
And that's what makes, you know, in a democracy, the richness of our society.
And in the end, that is going to be the centerpiece of the conversation.
What do we want?
You know, who gets what, and so on.
And that is actually, I don't think it's anything negative.
That's as it should be, because in the end it's anchored in who we want to be as humans, you know, as friends, family members, citizens.
And we have many overlapping sets of responsibilities, right?
And as a technology creator, my responsibility is not just as a scientist and a technology creator;
I'm also a member of a family, I'm a citizen, and I'm many other things that I care about.
And I think that sometimes, in the debate, the technological determinists
start butting into what is the realm of justice and, you know, society and philosophy and democracy.
And that's where they get the most uncomfortable, because it's like, "I'm just telling you, you know, what's possible."
And when there's pushback, it's like, yeah, but now we're talking about how we live,
and how we work, and how much I get paid or not paid.
So, technology is important.
Technology shapes that conversation,
but we're going to have the conversation in a different language, as it should be,
and technologists need to get accustomed to that: if they want to participate in that world, with its broad consequences,
hey, get accustomed to dealing with the complexity of that world.
Of politics, society, institutions, unions, all that stuff.
And you know, you can't be whiny about it, like, "They're not adopting my technology."
That's what it takes to bring technology into the world.
I think one of the challenges that we have in this conversation
is that there is a lot of attribution to AI of actions that are inherently about humans and about institutions.
It's energy; it's adding a huge jolt of electricity to a lot of things that we do.
You can put that electricity to use in a dangerous way, or you can use it to, you know, light homes and make cars go.
AI might be at its best when we don't even notice that it's there,
but it's just making whatever we're interacting with better.