Learning Library

From Turing to Chatbots: AI History

Key Points

  • AI’s roots stretch back over 70 years, evolving from simple mathematical puzzles to today’s deep neural networks.
  • In 1950 Alan Turing introduced the Turing Test, a benchmark where a machine is deemed intelligent if a human cannot distinguish its responses from another person’s.
  • The term “artificial intelligence” was officially coined in 1956, marking the start of more focused research and development.
  • Early AI work relied heavily on hand‑coded programs such as Lisp (introduced in the late 1950s), which used recursion to create powerful but complex logic that required constant manual updates.
  • The 1960s saw the creation of ELIZA, the first chatbot‑style program mimicking a psychotherapist, foreshadowing today’s conversational AI while still being far less sophisticated.

Full Transcript

# From Turing to Chatbots: AI History

**Source:** [https://www.youtube.com/watch?v=ZHCB09O6zUk](https://www.youtube.com/watch?v=ZHCB09O6zUk)
**Duration:** 00:12:43

## Sections

- [00:00:00](https://www.youtube.com/watch?v=ZHCB09O6zUk&t=0s) **Untitled Section**
- [00:03:45](https://www.youtube.com/watch?v=ZHCB09O6zUk&t=225s) **Early AI Languages and Expert Systems** - The speaker recounts the 1970s-80s shift from Lisp to Prolog for rule-based programming, the rise of expert systems, and their limited learning capabilities compared to modern AI.
- [00:07:15](https://www.youtube.com/watch?v=ZHCB09O6zUk&t=435s) **Watson's Jeopardy Triumph and Language Challenges** - The 2011 breakthrough where IBM's Watson leveraged deep learning to parse and answer complex, idiom-filled Jeopardy! clues demonstrated the power and difficulty of scaling machine learning to understand nuanced human language.
- [00:10:33](https://www.youtube.com/watch?v=ZHCB09O6zUk&t=633s) **AI Agents, Deepfakes, and Future Intelligence** - The speaker discusses current AI capabilities such as image, sound, and deepfake generation, the emergence of autonomous "agentic" AI in 2025, and charts the progression from narrow AI toward artificial general intelligence and superintelligence.

## Full Transcript
Artificial intelligence may feel like some brand-new tech trend, but the truth is AI has been evolving for over 70 years. From simple math puzzles to today's powerful neural networks, each generation built on the previous one. Let's take a look at where we've been and where we might be going with this two-part AI series, beginning with A Brief History of AI.

Let's start our tour of AI with a guy named Alan Turing, who back in 1950 proposed what became known as the Turing Test. Now, Turing is known as the father of computer science, so the guy did a lot. One of his contributions was this, as a way to measure whether a computer was really intelligent or not. Here's how the Turing Test works. You have a human subject, and they're separated by a wall; they can't see who is on the other side. They're typing on a keyboard, and they're going to communicate with either a computer or another person on the other side. If they're typing messages and getting responses back, and they cannot tell whether they're talking to another person or a computer, then we judge the machine to be intelligent. That's what he proposed, and that was the gold standard taught to me back when I was in undergrad, riding my dinosaur to class. This is how we measured things, and this is where all of it started.

The term AI was actually coined a little bit later, in 1956, and then we started really progressing along this timeline. Back in the late '50s, a programming language came out called Lisp, which was short for list processing. In my early days of AI programming, this is what we used. Even back in the early '80s, it was still considered the predominant way you did things with AI. Now remember, I said programming.
Our modern stuff isn't so much programmed as it is learning, and we'll come to that in a few minutes. But Lisp, interestingly enough, was first implemented on an IBM 704 system, so IBM was there in those very early days. Lisp relied very heavily on the notion of recursion, which is something that doubles back on itself, and it was very complicated to program in. If you don't know what recursion is, think about it this way: I once saw a definition that said the definition of recursion is "see recursion." So again, the thing doubles back on itself. It gets very complicated really quickly, but it can also be very powerful and very elegant if you do it right. But if you wanted to change your system and make it smarter, you had to go back in and write more code. This was programming.

Later, in the '60s, we got something called ELIZA. ELIZA was really the first chatbot, if you want to think of it that way, well before the chatbots of today and not nearly as sophisticated. It was designed to be conversational, and it talked to you very much like a psychologist would. It would ask you, "How are you doing today?" You would respond, and whatever you responded with, it would come back with the standard "And how do you feel about that?" and similar responses. But it gave us the first sense of a system that felt like it was understanding us. It also did a crude version of natural language processing, so you could put your words not just into specific commands; you could actually express yourself. And people started getting the sense that they were talking to an intelligent being.

In the '70s, a different programming language came along that people started to glom on to for AI programming, and I really began to use it in the '80s.
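The recursion the speaker describes, a definition that "doubles back on itself," is easy to sketch. The example below is illustrative Python rather than actual Lisp, but it mimics Lisp-style list processing: flattening a nested list is defined in terms of flattening its own smaller sublists.

```python
def flatten(items):
    """Recursively flatten a nested list, Lisp-style list processing.

    The function doubles back on itself: flattening a nested list
    is defined in terms of flattening each of its sublists.
    """
    result = []
    for item in items:
        if isinstance(item, list):
            result.extend(flatten(item))  # the recursive call
        else:
            result.append(item)
    return result

print(flatten([1, [2, [3, 4]], 5]))  # prints [1, 2, 3, 4, 5]
```

Done right, as the speaker says, this style is compact and elegant, but every new behavior still means going back in and writing more code by hand.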
The name of that language is Prolog, short for programming in logic. The idea was that instead of the recursive systems we had with Lisp, with Prolog we had a bunch of rules. You would set down a whole bunch of rules, maybe relationships and things like that, and then have it run inferences against them. But again, with both of these systems, one of the major hallmarks was that if you wanted to make your system more intelligent, to add more capability to it, you had to go back and add more code. You were programming these systems; they were not really learning in the sense that we think of it today.

Then in the '80s, we started having a boom in the area of expert systems. The idea was that we could have systems that would learn a certain amount of things. We could put certain kinds of constraints into them, and they would be able to figure out certain advice to give us in particular contexts. Businesses were really big on the potential, and there was a lot of hype and a lot of expectation, but it never really delivered on that expectation, not in the big way everybody was thinking. People who had been getting interested started getting less interested when they saw that expert systems were kind of brittle; they were not able to be malleable and learn as quickly as we'd like.

Then there was a big milestone in 1997. IBM built an AI system called Deep Blue, and with Deep Blue, for the first time in history, we had a computer that beat the best chess player in the world, Garry Kasparov. Now, it had been thought that you could write a computer program that would be able to beat an average chess player, maybe even a very good chess player.
But to overcome the intelligence, the expertise, the planning skills, the strategy, the creativity, the sheer genius of what it takes to be a chess grandmaster, it was thought no computer would ever be able to do that. Well, that happened in 1997, actually a good while back. And when it happened, it signaled a resurgence in the thinking around AI and what this thing might be able to do.

Then we moved on to the 2000s. Now, this technology had actually been around in research for a while, but this is when it really started to catch people's imagination, and we started to see the growth of machine learning and deep learning algorithms, where machine learning was doing pattern matching and deep learning was simulating human intelligence through neural networks. This started to spread, and in fact we're still using that technology today as the basis for how we do AI. But this was a big departure from the Prologs and the Lisps, where we were programming a system. In this case, the system was learning. We would show it a lot of different things and ask it to predict what the next thing was, or show it a bunch of things and ask it which one doesn't belong in the group. It was pattern matching on steroids. That was machine learning: it learned by seeing these patterns and recognizing them, and it could do it on a massive scale that would be very hard for humans to accomplish.

Then we took machine learning and deep learning capabilities, and there was another huge milestone in 2011, when IBM used a computer called Watson to play the TV game show Jeopardy! And Jeopardy!, if you're not familiar with it, is a game that asks a lot of trivia questions in a lot of different areas.
This was actually a very difficult problem to solve, for a number of reasons. One, the questions come in natural language form, and the way we express ourselves with language can vary to a great degree. There are things we use like puns, idioms, and figures of speech. If I say it's raining cats and dogs outside, you know I don't mean that small animals are falling out of the sky. But those are the kinds of things that go into the clues in Jeopardy! We had to have a computer that would understand those vagaries of human language and know what to take literally and what not to. You couldn't just program rules or some sort of list processing that would anticipate all of those; you can't even list all of those idioms. So this was a really hard problem. IBM used our Watson computer to play against two of the all-time Jeopardy! champions. That was in 2011, and we beat them both, three nights in a row. This was another big milestone in AI. It's interesting to me that this came along much later than winning at chess, because there's so much variability here and the subject matter is so broad. The system had to be an expert itself; it couldn't just be going out to the internet and querying all these things. It had to come up with answers very quickly, because, if you've ever seen Jeopardy!, if you don't answer quickly, someone else will answer for you. And if you're the first one to answer and you're wrong, you lose points. So it had to calculate: how confident am I in my answer?
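That confidence calculation can be caricatured in a few lines. The Python sketch below, with entirely made-up candidate answers and scores, only illustrates the buzz-only-if-confident idea; it is not how Watson's actual scoring pipeline worked, and the threshold value is an arbitrary assumption.

```python
def choose_answer(candidates, threshold=0.7):
    """Pick the highest-confidence candidate answer, but only "buzz in"
    if it clears the threshold; otherwise stay silent, because answering
    first and being wrong loses points.

    candidates: list of (answer, confidence) pairs, confidence in [0, 1].
    """
    if not candidates:
        return None
    answer, confidence = max(candidates, key=lambda pair: pair[1])
    return answer if confidence >= threshold else None

# Hypothetical scores for a clue about "raining cats and dogs":
print(choose_answer([("heavy rain", 0.92), ("falling animals", 0.05)]))  # prints heavy rain
print(choose_answer([("heavy rain", 0.40), ("a storm", 0.35)]))          # prints None
```

The design point is the trade-off the speaker describes: the system weighs the expected gain of answering against the penalty for a wrong answer, rather than always guessing its best candidate.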
So this was a lot of really important work that showed the possibility of AI again, after a period of disappointment when people hadn't really seen much come out of all of this.

Around 2022, we had another major inflection point, where AI suddenly got real for a lot of people. That was when we introduced this idea of generative AI based on foundation models, and here is where we started to see the rise of the chatbots. That's what captured everyone's imagination, because now we were seeing not a fairly stiff natural language processor like ELIZA, which was very limited in what it could talk about, but something that acted like an expert. It would do all kinds of amazing things, seemed to know the answer to everything, and was very conversational. This is when, for a lot of people, it felt like AI finally got real. And it generates more than just text. We could have it write a report for us. We could have it summarize emails or documents, things like that. We could also use it to generate images or sounds, and from those capabilities we could also generate deepfakes, impersonations of real people that look realistic enough to fool someone. So, a lot of good, a lot of bad, a lot of all of this happening, but a lot of excitement. And as I said, for a lot of people, this is when AI suddenly got real, even though it had been happening for a long time.

Then where are we going with this? Well, we're already seeing that 2025, I think, has been the year of the agents. This is when we start seeing agentic AI coming in, where we take an AI and give it more autonomy so that it's able to operate on its own. We give it certain goals and things it's supposed to accomplish, and then it uses different services in order to accomplish those things for us. So we're going to see a lot more of this happening as well.

And where does the future head for us? Well, the short version is: if all of this is a sort of artificial narrow intelligence, where the intelligence is specific to particular areas and tasks, then the next step would be artificial general intelligence, where we have something that is as smart as or smarter than a person in essentially every area we could imagine. And the step after that would be artificial superintelligence, where we have something that far exceeds human capabilities in terms of intelligence across a wide variety of things. So you can see that, basically, it felt like a snail's pace of progress as we moved from those early days until we started adding more and more capabilities with machine learning. Then we introduced generative AI, and now we're off to the moon. For decades, it felt like AI was just a pipe dream. Then suddenly it seems like AI can do everything. But can it really? Well, in the next video in this two-part series, we'll take a look at the limits of AI, both in terms of what it can do and what it can't do, at least not yet.