
# IBM's Dario Gil on AI Evolution

**Source:** [https://www.youtube.com/watch?v=Gxc7pDDbn7U](https://www.youtube.com/watch?v=Gxc7pDDbn7U)
**Duration:** 00:29:05

## Summary

- The conversation introduces Dario Gil, IBM’s chief AI executive, highlighting IBM’s decades‑long role in AI milestones such as Deep Blue and Watson.
- Gil notes that although AI research dates back to the 1950s, the term “AI” was once disfavored in academia and only regained credibility with the deep‑learning breakthroughs of the last decade.
- He emphasizes the need to demystify AI by stripping away jargon so people can better understand its real capabilities and limitations.
- Using the rapid transcription of interview recordings as an example, Gil illustrates how AI is already transforming ordinary workflows and dramatically increasing efficiency.

## Sections

- [00:00:00](https://www.youtube.com/watch?v=Gxc7pDDbn7U&t=0s) **AI Evolution: IBM's Dario Gil Interview** - The host introduces IBM's AI chief Dario Gil, referencing IBM’s historic projects like Deep Blue and Watson, and frames a conversation aimed at demystifying modern AI and its limits.
- [00:03:04](https://www.youtube.com/watch?v=Gxc7pDDbn7U&t=184s) **AI’s Resurgence and IBM Milestones** - The speaker discusses how deep learning restored AI’s legitimacy, recalls IBM’s historic involvement from the Dartmouth conference to the Jeopardy breakthrough, and highlights the promise of leveraging vast digital knowledge for productive collaboration.
- [00:06:11](https://www.youtube.com/watch?v=Gxc7pDDbn7U&t=371s) **AI Democratization and Value Concentration** - The speaker argues that although AI tools will become universally accessible, the power to create AI and embed proprietary data for lasting competitive advantage will stay concentrated, shaping equity between the haves and have-nots.
- [00:09:22](https://www.youtube.com/watch?v=Gxc7pDDbn7U&t=562s) **AI as Catalyst for Institutional Collaboration** - The speaker critiques institutions that overstate their AI capabilities, highlights the massive resources required for modern foundation models, and proposes that AI can serve as a tool to dismantle siloed structures and promote cross‑disciplinary collaboration.
- [00:12:30](https://www.youtube.com/watch?v=Gxc7pDDbn7U&t=750s) **AI Ethics in Hollywood Contracts** - A speaker outlines how AI’s capabilities, such as voice cloning, should shape writer protections and contract negotiations during the recent Hollywood writers strike.
- [00:15:40](https://www.youtube.com/watch?v=Gxc7pDDbn7U&t=940s) **AI Redefining Doctor-Patient Roles** - A speaker explains that AI will take over tedious tasks, allowing doctors to spend more time on meaningful patient interaction, and stresses the importance of preparing upcoming physicians for a fundamentally altered, AI‑augmented profession rather than dissuading them from it.
- [00:18:54](https://www.youtube.com/watch?v=Gxc7pDDbn7U&t=1134s) **Rethinking Assessment in Medical Education** - The speakers debate how to embed data‑driven quality tools and benchmarking into medical training, questioning traditional essay assignments and proposing a new instructional lens for better patient outcomes.
- [00:22:04](https://www.youtube.com/watch?v=Gxc7pDDbn7U&t=1324s) **Banning Calculators for Deeper Learning** - The speaker recounts a reform that removed calculators from exams, compelling students to think conceptually—a move that sparked resentment yet ultimately enhanced learning—and speculates on similar future pushes against digital tools to spur creative pedagogy.
- [00:25:13](https://www.youtube.com/watch?v=Gxc7pDDbn7U&t=1513s) **Beyond Tech: Human Dynamics of Revolution** - The speaker contends that true societal transformation hinges less on technical capability—such as data‑science teams building applications—and more on contested, non‑technical discussions about values, credit, and new human arrangements, rejecting technological determinism.
- [00:28:19](https://www.youtube.com/watch?v=Gxc7pDDbn7U&t=1699s) **AI's Invisible Power and Adoption** - The speaker argues that AI’s value lies in quietly enhancing everyday systems, warns against misattributing human actions to AI, and emphasizes responsible, unobtrusive integration.

## Full Transcript
0:06 Seems like AI has kind of bubbled into our consciousness in the last year or so.
0:11 And the question is, who do you talk to?
0:13 Who can give you the best possible perspective?
0:15 Today I'm talking to Dario Gil, who's the big AI honcho at IBM, and we're going to be talking about the future of AI.
0:23 And IBM has been at the center of AI research for decades now.
0:30 I mean, I'm sure we'll talk about that today.
0:32 But going back to Deep Blue and Watson and all these kinds of -
0:37 So, you know, there's sort of no better place to start
0:40 than with someone who's been at the center of that work for a long time.
0:45 So that's one of the reasons I'm so excited about this conversation.
0:49 I'm just excited about demystifying the technology
0:52 and removing a lot of the lingo that is associated with this topic
0:56 and trying to bring it to a point where we can have a better understanding of what it is, what it can and cannot do.
1:09 I wanted to say before we get started, there's something I said backstage that ...
1:11 I feel very guilty today because you are arguably one of the most important figures in AI research in the world,
1:23 and we have taken you away from your job for a morning.
1:27 It's like if, you know, Oppenheimer's wife in 1944 said, "Let's go and have a little getaway in the Bahamas."
1:36 It's that kind of thing.
1:37 You know, what do you say to your wife?
1:39 I can't.
1:40 We have got to work on this thing I can't tell you about.
1:48 I do interviews for a living.
1:49 It's like, you know, I generate hours and hours and hours of transcripts
1:53 of interviews, tape of interviews.
1:55 It used to be we would send the tapes out to be transcribed.
1:58 Now they're transcribed in literally five seconds.
2:02 That's like a day-one trivial case.
2:05 But multiply that out and extend it into the future
2:09 and you start to see how, wow, this makes a lot of ordinary operations a lot more efficient.
2:15 Well, I think the first thing is that even though AI as a field has been with us for a long time, since the mid-1950s -
2:22 At that time, AI was not a very polite word to say.
2:27 Meaning within the scientific community, people didn't use that term.
2:31 They would have said things like, you know, maybe I do things related to machine learning, right?
2:36 Or statistical techniques in terms of classifiers and so on.
2:39 But AI had a mixed reputation, right?
2:42 It had gone through different cycles of hype.
2:44 And also moments of, you know, a lot of negativity towards it because of lack of success.
2:53 And so I think that would be the first thing we would probably say, like AI, like what is that, you know?
2:58 Respectable scientists are not working on AI defined as such.
3:02 And that really changed over the last 15 years only.
3:05 I would say with the advent of deep learning over the last decade is when that reentered the lexicon,
3:11 of saying AI, and that it was a legitimate thing to work on.
3:15 So I would say that's the first thing we would have noticed in contrast to 20 years ago.
3:18 For AI, you know, at the heart of it
3:21 is the ability to build machines and systems that are able to learn, and to learn by example.
3:27 So on the positive side,
3:29 there's just so much digital knowledge that we have accumulated over the last number of decades
3:35 that we have this tremendous potential to train these machines,
3:39 to learn from all the past knowledge that humans have accumulated,
3:44 and then to use those machines to help us with productivity,
3:46 to in some way collaborate with us, or automate things that we don't want to do, etc.
3:51 So at what point in your 20-year tenure at IBM would you say you kind of snapped into present, kind of "wow" mode?
4:02 I would say in the late 2000s,
4:08 when IBM was working on the Jeopardy project,
4:15 and just seeing the demonstrations of what could be done in question answering.
4:20 It literally, Jeopardy is this crucial moment in the history of AI.
4:24 Yeah, you know, there had been a long and wonderful history inside IBM on AI.
4:30 So for example, you know, in terms of these grand challenges,
4:34 at the very beginning of the field's founding, which is this famous Dartmouth conference that IBM actually sponsored to create,
4:41 there was an IBMer there called Nathaniel Rochester,
4:44 and there were a few others who, right after that, started thinking about demonstrations of this field.
4:51 And for example, they created the first game to play checkers
4:56 and to demonstrate that you could do machine learning on that.
5:00 Obviously we saw later in the '90s, like chess, that was a very famous example of that.
5:04 That was Deep Blue.
5:05 With Deep Blue, yeah, right, and playing with Kasparov.
5:08 And then, but I think the moment that was really -
5:10 those are the ones that felt like, you know, kind of like brute force, anticipating sort of like moves ahead.
5:15 But this aspect of dealing with language and question answering felt different.
5:19 And I think for us internally, and many others, it was a moment of saying like,
5:24 "Wow, you know, what are the possibilities here?"
5:27 And then soon after that, connected to the advancements in computing
5:31 and with deep learning the last decade, it's just been an all-out, you know, sort of front of advancements,
5:36 and I just continue to be more and more impressed.
5:39 And the last few years have been remarkable, too.
5:44 My hope is that the good outweighs the bad.
5:50 And my real hope is that the benefits are distributed.
5:55 So if all it does is make the wealthiest nations wealthier,
6:02 that's a good thing, but it doesn't solve the fundamental problem we have as a world,
6:08 which is that there is a big gap between the haves and the have-nots.
6:12 If AI ends up helping the have-nots more than the haves, then it becomes really interesting.
6:18 That's actually one thing I really want to talk to Dario about.
6:22 What is the kind of, what's the shape of the impact?
6:29 You know, is it widely distributed or is it concentrated near the top?
6:36 The use of AI will be highly democratized,
6:39 meaning the number of people that have access to its power
6:41 to make improvements in terms of efficiency and so on will be fairly universal.
6:46 And the ones who are able to create AI may be quite concentrated.
6:53 So if you look at it from the lens of who creates wealth and value
6:58 over sustained periods of time, particularly, say, in a context like business,
7:03 I think just being a user of a technology is an insufficient strategy.
7:09 And the reason for that is, yes, you will get the immediate productivity boost of just making API calls,
7:14 and you know, that will be a new baseline for everybody,
7:17 but you're not accruing value in terms of representing your data
7:22 inside the AI in a way that gives you a sustainable competitive advantage.
7:26 So what I always try to tell people is, don't just be an AI user, be an AI value creator,
7:31 and I think that will have a lot of consequences
7:35 in terms of the haves and have-nots as an example, and that will apply both to institutions and regions and countries, etc.
7:42 So I think it would be kind of a mistake, right, to just develop strategies that are just about usage.
7:49 So there's a lot of considerations in terms of equity
7:52 about the data, the datasets that we accrue, and what problems we are trying to solve.
7:57 I mean, you mentioned agriculture or healthcare and so on.
8:00 If we only solve problems that are related to marketing, as an example,
8:03 that will be a less rich world in terms of opportunity than if we incorporate a much broader set of problems.
8:09 Yeah.
8:10 Who do you think, what do you think are the biggest impediments to the adoption of AI,
8:17 as you would like, as you think it ought to be adopted?
8:20 I mean, what are the sticking points that you would -
8:23 Look, in the end I'm going to give a non-technological answer. The first one has to do with workflow, right?
8:29 So even if the technology is very capable,
8:32 the organizational change inside a company to incorporate it into the natural workflow of people and how we work
8:38 is, it's a lesson we have learned over the last decade, hugely important.
8:43 So there's a lot of design considerations.
8:47 There's a lot of how do people want to work, right?
8:50 How does it work today, and what is the natural entry point for AI?
8:53 So that's like number one.
8:55 And then the second one, for the broad value creation aspect of it, is the understanding inside companies
9:02 of how you have to curate and create data
9:07 to combine it with external data such that you can have powerful AI models that actually fit your need,
9:13 and that aspect of what it takes to actually curate and create data for these modern AI models is still a work in progress, right?
9:23 I think part of the problem that happens very often when I talk to institutions is that they say, "Yeah, yeah, yeah, I'm doing it.
9:29 I've been doing it for a long time."
9:32 And the reality is that that answer can sometimes be a little bit of a cop-out, right?
9:35 It's like, I know you were doing machine learning, you were doing some of these things,
9:40 but actually the latest version of AI, what's happening with foundation models, not only is it very new, it's very hard to do.
9:47 And honestly, if you haven't been, you know, assembling very large teams
9:51 and spending hundreds of millions of dollars on compute and such, you're probably not doing it.
9:55 Right, you're doing something else that is in the broad category.
9:58 And I think the lessons about what it means to make this transition
10:02 to this new wave are still in the early phases of understanding.
10:05 Now, one of the most persistent critiques of academia, but also of many corporate institutions
10:12 in recent years, has been siloing, right?
10:16 Different parts of the organization are going off on their own and not speaking to each other.
10:22 Is a real potential benefit of AI the kind of breaking down, a simple tool for breaking down those kinds of barriers?
10:33 Is that an elegant way of sort of saying what...
10:36 I really think so, and I was actually just having a conversation with our provost very much on this topic very recently.
10:42 Exactly on that, which is all this, you know, this appetite to collaborate across disciplines.
10:48 There's a lot of attempts towards that goal,
10:50 like creating interdisciplinary centers, creating dual-degree programs or dual-appointment programs.
10:56 But actually a lot of progress in academia happens by methodology too,
11:02 right, like when some new methodology gets adopted.
11:05 I mean, the most famous example of that is the scientific method.
11:10 But when you have a methodology that gets adopted,
11:12 it also provides a way to speak to your colleagues across different disciplines.
11:17 And I think what's happening in AI is linked to that.
11:20 That within the context of the scientific method, as an example,
11:24 the methodology with which we do discovery, the role of data,
11:30 the role of these neural networks, of how we actually find proximity of concepts to one another,
11:35 is actually fundamentally different than how we've traditionally applied it.
11:40 So as we see, across more professions,
11:42 people applying this methodology is also going to give some element of a common language to each other, right?
11:49 And in fact, you know, in this very high-dimensional representation of information that is present in neural networks,
11:55 we may find amazing adjacencies or connections of things and topics in ways that the individual practitioners cannot describe,
12:04 but that will yet be latent in these large neural networks.
12:08 We are going to suffer a little bit from causality, from the problem of like, "Hey, what's the root cause of that?"
12:13 Because I think one of the unsatisfying aspects of these methodologies is
12:19 they may give you answers for which they don't give you good reasons for where the answers came from.
12:24 And then there will be the traditional process of discovery of saying, "If that is the answer, what are the reasons?"
12:30 So we're going to have to do this sort of hybrid way of understanding the world.
12:35 But I do think that common layer of AI is a powerful new thing.
12:43 I would say my favorite movie for AI is Space Odyssey,
12:47 because it has so profoundly shaped, in this case, kind of the bad side of AI.
12:54 But it has shaped the way we talk about the topic sometimes.
12:57 In the writers' strike that just ended in Hollywood,
13:00 one of the sticking points was how the studios and writers would treat AI-generated content.
13:06 Would writers get credit if their material was somehow the source for a ...
13:12 but, more broadly, did the writers need protections against the use of ...
13:16 I could go on, you know what, we're all familiar with all of this.
13:19 Had you been, I don't know whether you were, but had either side called you in for advice during that -
13:25 the writers, had the writers called you and said, "Dario, what should we do about AI,
13:30 and how should that be reflected in our contract negotiations?"
13:34 What would you have told them?
13:38 The way I think about that is that I would divide it into two pieces.
13:42 First is what's technically possible, right?
13:44 And anticipate scenarios like, you know, what can you do with voice cloning, for example.
13:50 You know, now, for example, it is possible, there's been dubbing, right?
13:55 Let's just take that topic, right?
13:56 Around the world there were all these folks that would dub people in other languages.
14:00 Well, now you can do these incredible renderings.
14:03 I mean, I don't know if you've seen them, where, you know, you match the lips -
14:07 it's your original voice, but speaking any language that you want, as an example.
14:11 So obviously that has a set of implications around it.
14:13 I mean, just to give an example,
14:14 so I would say create a taxonomy that describes technical capabilities that we know of today
14:19 and applications to the industry, and to examples
14:23 of like, "Hey, you know, I could film you for five minutes and I could generate two hours of content of you,
14:27 and I don't have to, you know, then if you get paid by the hour, obviously I'm not paying you for that other thing."
14:32 So I would say technological capability, and then map, with their expertise, the consequences of how it changes the way they work,
14:39 or the way they interact, or the way they negotiate, and so on.
14:42 So that would be one element of it.
14:43 And then the other one is a non-technology-related matter,
14:46 which is an element almost of distributive justice: who deserves what, right?
14:51 And who has the power to get what.
14:53 And then that's a completely different discussion.
14:56 That is to say, well, if this is the scenario of what's possible, you know, what do we want and what are we able to get?
15:03 And I think that's a different discussion, which is, for all of us, life.
15:06 Which one do you do first?
15:08 I think it's very helpful to have an understanding of what's possible
15:13 and how it changes the landscape as part of a broader discussion, right, and a broader negotiation.
15:21 Because you also have to see the opportunities, because there will be a lot of ground to say,
15:26 "Actually, you know, if we can do it in this way then we can all be that much more efficient
15:32 in getting this piece of work done or this filming done."
15:35 But if we have a reasonable agreement about how both sides benefit from it, right?
15:41 Then that's a win-win for everybody.
15:48 This will remind us about how much we like real interaction,
15:52 and it will improve the nature of our person-to-person interactions
15:57 by removing the onerous tasks that human beings are not very good at doing and were never meant to do in the first place.
16:05 So when we're talking about doctors,
16:07 I think when you go to the doctor and the diagnosis is really quick and easy,
16:14 and the doctor can spend the rest of their time talking to you about what's really wrong with you,
16:18 that's a much better interaction.
16:21 And it's better not because AI is duplicating what the doctor does,
16:28 but because AI is doing something completely different.
16:31 But one of your daughters, you said, is thinking that she wants to be a doctor.
16:36 But being a doctor in a post-AI world is truly a very different proposition than being a doctor in a pre-AI world.
16:43 Do you think, have you tried to prepare her for that difference?
16:48 Have you explained to her what you think will happen to this profession she might enter?
16:51 Yeah.
16:52 I mean, not in, like, you know, an incredible amount of detail.
16:55 But yes, at the level of understanding what is changing.
16:59 Like this lens of, the information lens with which you can look at the world, and what is possible, and what it can do.
17:07 Like what is our role and what is the role of the technology,
17:10 and how that shapes, at that level of abstraction, for sure,
17:13 but not at the level of like, "Don't be a radiologist, you know, because this is what we ..." This is what we want for you.
17:17 I was going to say, if you're unhappy with your current job, you could do a podcast called "Parenting Tips with Dario,"
17:22 which is just an AI person giving you advice on what your kids should do, based on exactly that, like, "Should I be a radiologist?"
17:30 Dario, tell me!
17:31 Like, it seems to be a really important question!
17:35 Let me ask this question in a more ...
17:37 I'm joking, but in a more serious way.
17:40 Surely it would, if, I don't mean to use your daughter as an example,
17:43 but let's imagine we're giving advice to someone who wants to enter medicine.
17:47 A really useful conversation to have is, what are the skills that will be most prized
17:54 in that profession 15 years from now, and are they different from the skills that are prized now?
18:00 How would you answer that question?
18:02 Yeah, I think, for example, this goes back to how is the scientific method,
18:08 and in this context, the practice of medicine, going to change?
18:11 I think we will see more changes in how we practice the scientific method and so on, as a consequence
18:16 of what is happening with the world of computing and information, how we represent information,
18:22 how we represent knowledge, how we extract meaning from knowledge as a method, than we have seen in the last 200 years.
18:30 So therefore, what I would strongly encourage is not about like, "Hey, use these tools for doing this or doing that,"
18:36 but in the curriculum itself, in understanding how we do problem solving
18:41 in the age of data and data representation and so on, that needs to be embedded in the curriculum of everybody,
18:48 I would say actually quite horizontally, but certainly in the context of medicine and scientists and so on, for sure.
18:55 And to the extent that that gets ingrained,
18:57 that will give us a lens such that no matter what specialty they go with in medicine,
19:02 they will say, actually, the way I want to be able to tackle improving the quality of care, the way to do that is,
19:08 in addition to all the elements that we have practiced in the field of medicine, this new lens:
19:14 are we representing the data the right way?
19:16 Do we have the right tools to be able to represent that knowledge?
19:20 Am I incorporating that, sort of, with my own knowledge in a way that gives me better outcomes?
19:25 Do I have the rigor of benchmarking, too, and the quality of the results?
19:30 So that is what needs to be incorporated.
19:32 I really can't assign an essay anymore, can I?
19:36 Can I assign an essay?
19:37 Yeah, can I say, "Write me a research paper and come back to me in three weeks"? Can I do that anymore?
19:42 I think you can.
19:43 How do I do that?
19:44 I think you can do that. Look, so there's two questions around that.
19:48 I think that if one goes and explains in the context, like, "What is it?
19:53 Why are we here?
19:54 Why in this class, what is the purpose of this?"
19:56 And one starts, well, assuming an element of decency, and people are people, they like to learn and so on,
20:03 and you just give a disclaimer,
20:04 "Look, I know that one option you have is to just put in the essay question and click go and get an answer, you know,
20:11 but that is not why we're here and that is not the intent of what we're trying to do."
20:15 So first I would start with, sort of, the norms of intent and decency, and appeal to those as step number one.
20:23 Then we all know that there will be a distribution of use cases,
20:26 that for some people that will come in one ear and go out the other, and they'll do that.
20:30 And so for a subset of that, I think the technology is going to evolve in such a way that
20:35 we will have more and more of the ability to discern when something has been AI-generated, right, and created.
20:43 It won't be perfect, right, but there's some element that, you can imagine inputting the essay
20:47 and it says, "Hey, this is likely to be generated," right, around that.
20:51 And for example, one way you can do that, just to give you an intuition:
20:54 you could just have an essay that you write with pencil and paper at the beginning.
20:58 You get a baseline of what your writing is like.
21:01 And then later, when something is, you know, generated,
21:04 there'll be obvious differences in what kind of writing has been generated.
21:08 Yeah, but you've turned ...
21:09 everything you're describing makes sense, but it greatly, in this respect at least,
21:15 it seems to greatly complicate the life of the teacher,
21:18 whereas the other two use cases seem to kind of clarify and simplify the role, right?
21:25 Suddenly, you know, reaching prospective students sounds like something you can do that much more efficiently.
21:31 Like, yeah, I can bring down administration costs, but the teaching thing is tricky.
21:36 Well, until we develop the new norms, right?
21:39 I mean, again, I know it's an abused analogy, but calculators, we dealt with that too, right?
21:45 And we said, well, calculators, what is the purpose of math, how are we going to do this?
21:50 Can I tell you my dad's calculator story?
21:51 Yes, please.
21:52 My father was a mathematician; he taught mathematics at the University of Waterloo in Canada.
21:57 In the '70s, when people started to get pocket calculators,
22:00 his students demanded that they be able to use them, and he said no.
22:04 And they took him to the administration, and he lost.
22:07 So he then completely changed all of his old exams, introduced new exams where there was no calculation.
22:16 It was all deep thinking, you know, figure out the problem on a conceptual level and describe it to me.
22:22 And the students were all deeply unhappy that he had made their lives more complicated.
22:32 But to your point, the result was probably a better education, right?
22:34 He just removed the element that they could game with their pocket calculators.
22:39 I suppose it's a version of -
22:40 I think it's a version of that.
22:41 And so I think people will develop the equivalent of what your father did.
22:44 And I think people will say, you know what, it's like these kinds of things.
22:46 Everybody's doing it generically, and none of it has any meaning, because all you're doing is pressing buttons.
22:51 And like, the intent of this was something else, which was to teach you how to write or to think about something.
22:55 That may be a variant of how we do all of this.
22:57 I mean, obviously some version of that has happened, like,
23:00 "OK, we're all going to sit down and do it with pencil and paper and no computers in the classroom,"
23:04 but there'll be other variants of creativity that people will put forth to say, "You know what?
23:08 You know, that's a way to solve that problem, too." And I'm really interested in the pace.
23:19 How quickly does he think we go from here to something, you know, even more dramatic?
23:28 Are we talking about, you know, when people talk about the AI-driven future,
23:32 are they talking about five years, or ten years, or 20 years? That's one question.
23:37 I'm curious to find out his level of optimism about AI.
23:41 I mean, there's a band of people who think that it could have really destructive effects
23:47 and bring all kinds of dangers, and others who point out the positive aspects.
23:53 How does he balance those two sides of it?
23:55 That's the other big question I have for him.
23:57 I think we're at a significant inflection point,
24:00 that it feels the equivalent of the first browsers, when they appeared and people imagined the possibilities of the internet,
24:09 or more, imagined the experience of the internet.
24:12 The internet had been around for quite a few decades.
24:15 AI has been around for many decades, and the moment we find ourselves in is that people can touch it.
24:21 And they can - before, there were AI systems that were behind the scenes,
24:24 like your search results or our translation systems,
24:27 but people didn't have the experience of, this is what it feels like to interact with this thing.
24:32 So that's why I think maybe that analogy of the browser is appropriate, because all of a sudden it's like,
24:37 whoa, you know, there's this network of machines, and content can be distributed, and everybody can self-publish.
24:43 And there was a moment that we all remember that.
24:45 And I think that is what the world has experienced over the last nine months or so.
24:50 But fundamentally, what is also important is that
24:53 this moment is where the number of people that can build and use AI has skyrocketed.
25:00 So over the last decade, you know, technology firms
25:04 that had large research teams could build AI that worked really well, honestly.
25:10 But when you went down to say, "Hey, can everybody use it?
25:13 Can a data science team in a bank, you know, go and develop these applications?"
25:17 It was more complicated.
25:19 Some could do it, but the barrier to entry was high.
25:22 Now it's very different.
25:23 What struck me, Dario, throughout our conversation is how much of this revolution is non-technical.
25:31 That is to say, you guys are doing the technical thing here, but the real, the revolution is going to require
25:36 a whole range of people doing things that have nothing to do with software,
25:41 that have to do with working out new human arrangements.
25:45 Talking about that, I mean, I keep going back to the Hollywood strike thing,
25:49 that you have to have a conversation about our values as creators of movies.
25:57 How are we going to divide up the credit and the - like, that's a conversation about philosophy.
26:06 It is, and it's in the grand tradition of why, you know, a liberal education is so important in the broadest possible sense, right?
26:15 There's no common conception of the good, right?
26:19 That is always a contested dialog that happens within our society.
26:23 And technology is going to fit in that context, too, right?
26:26 So that's why I personally, as a philosophy, I'm not a technological determinist, right?
26:30 And I don't like it when colleagues in my profession start saying, well, this is the way the technology is going to be,
26:36 and by consequence, this is how society is going to be.
26:39 I'm like, that's a highly contested goal,
26:42 and if you want to enter into the realm of politics or the realm of other ones, go and stand up on a stool
26:47 and discuss whether that's what society wants.
26:49 You will find there is a huge diversity of opinions and perspectives.
26:53 And that's what makes, you know, in a democracy, the richness of our society.
26:58 And in the end, that is going to be the centerpiece of the conversation.
27:01 What do we want?
27:03 You know, who gets what, and so on.
27:05 And that is actually, I don't think it's anything negative.
27:07 That's as it should be, because in the end it's anchored in who we want to be as humans, you know, as friends, families, citizens.
27:15 And we have many overlapping sets of responsibilities, right?
27:18 And as a technology creator, my responsibility is not just as a scientist and a technology creator;
27:23 I'm also a member of a family, I'm a citizen, and I'm many other things that I care about.
27:27 And I think that sometimes, in the debate, the technological determinists
27:32 start butting into what is the realm of justice and, you know, society and philosophy and democracy.
27:42 And that's where they get the most uncomfortable, because it's like, I'm just telling you, like, you know, what's possible.
27:48 And when there's pushback, it's like, yeah, but now we're talking about how we live,
27:53 and how we work, and how much I get paid or not paid.
27:58 So, that technology is important.
28:01 Technology shapes that conversation,
28:02 but we're going to have the conversation with a different language, as it should be,
28:07 and technologists need to get accustomed to, if they want to participate in that world with its broad consequences,
28:12 hey, get accustomed to dealing with the complexity of that world.
28:16 Of politics, society, institutions, unions, all that stuff.
28:20 And you know, you can't be whiney about it, like, they're not adopting my technology.
28:24 That's what it takes to bring technology into the world.
28:33 I think one of the challenges that we have in this conversation
28:37 is that there is a lot of attribution to AI of actions that are inherently about humans and about institutions.
28:45 It's energy; it's adding a huge jolt of electricity to a lot of things that we do.
28:50 You can put that electricity to use in a dangerous way, or you can use it to, you know, light homes and make cars go.
28:58 AI might be at its best when we don't even notice that it's there,
29:02 but it's just making whatever we're interacting with better.
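Around 20:51 Gil sketches a detection intuition: collect a baseline of a student's handwritten, known-authentic prose, then flag later submissions whose writing diverges from it. A toy sketch of that idea is below; the two stylometric features (average sentence length and type-token ratio) and the tolerance values are illustrative assumptions, not a real detector, and real stylometry would use many more signals.

```python
# Toy sketch of a writing-baseline check: compare a new essay against
# known-authentic writing using two simple stylometric features.
# Feature choice and tolerances are illustrative, not a real detector.
import re

def features(text):
    """Return (average sentence length in words, type-token ratio)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-zA-Z']+", text.lower())
    avg_len = len(words) / max(len(sentences), 1)
    ttr = len(set(words)) / max(len(words), 1)  # vocabulary richness
    return avg_len, ttr

def looks_inconsistent(baseline_text, new_text, len_tol=0.5, ttr_tol=0.25):
    """Flag new_text if either feature deviates from the baseline
    by more than the given relative tolerance."""
    b_len, b_ttr = features(baseline_text)
    n_len, n_ttr = features(new_text)
    return (abs(n_len - b_len) / b_len > len_tol or
            abs(n_ttr - b_ttr) / b_ttr > ttr_tol)
```

As Gil notes, any such check "won't be perfect": it only surfaces a likelihood of inconsistency, which is why he frames it as one signal inside a broader norms-first approach rather than a verdict.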