ChatGPT vs Google: Finding Reliable Answers

Key Points

  • Not all information on the Internet is reliable, and distinguishing trustworthy sources from misinformation can be difficult.
  • Traditional search engines like Google present a mix of reputable links, ads, and potentially false content, often requiring users to sift through conflicting information (e.g., the debate over who invented the airplane).
  • AI chatbots such as ChatGPT deliver concise, authoritative answers without visible source citations, making them appealing for quick queries but raising concerns about transparency and reliability.
  • While chatbots may increasingly become a primary information source, they lack the observability and citation standards that help users assess credibility, creating a trade‑off between convenience and trustworthiness.
  • Nevertheless, for deeper research some users will still need the breadth of sources provided by search engines to fully evaluate and verify information.

Full Transcript

**Source:** [https://www.youtube.com/watch?v=RTCaGwxD2uU](https://www.youtube.com/watch?v=RTCaGwxD2uU)
**Duration:** 00:08:28

## Sections

- [00:00:00](https://www.youtube.com/watch?v=RTCaGwxD2uU&t=0s) **Navigating Reliability in Google Searches** - The speaker highlights that online information isn't always truthful, illustrating with a Google query on "who invented the airplane" to show how search results blend credible sources, ads, and contested claims, complicating the task of identifying reliable answers.
- [00:03:02](https://www.youtube.com/watch?v=RTCaGwxD2uU&t=182s) **Risks of Subtle Data Poisoning** - The speaker explains that chatbots draw from a training corpus, and even small, intentional corruptions to that data can stealthily produce inaccurate answers, posing an insidious threat to decisions based on AI outputs.
- [00:06:06](https://www.youtube.com/watch?v=RTCaGwxD2uU&t=366s) **Trust, Transparency, and AI Reliability** - The speaker warns that over‑reliance on seemingly trustworthy chatbots can lead to dangerous decisions, arguing for observable, source‑citing AI that "shows its work" to ensure verification, especially in high‑stakes contexts like code generation.

## Full Transcript
0:00 I'm going to let you in on a secret-- not everything you read on the Internet is true. 0:04 I know it's shocking, but it's the case. 0:08 In fact, it turns out there's a lot of good information and then there's some other that's not. 0:12 And trying to figure out which sources are reliable and which ones aren't is actually not always that easy. 0:17 But where do we go if we're looking for answers? 0:19 Well, it tends to be, anymore these days, that we go to a search engine on the Internet. 0:24 And Google happens to be one of the best of these, so that's the one we go to, probably most often.

0:30 So let's take a look. 0:32 What would happen if we did a search on Google and we asked it to tell us "Who invented the airplane?" 0:38 Okay, you're going to see results that look like this-- you see a number of names there. 0:44 And apologies to my friends in Brazil, but since I'm in North Carolina, 0:48 I'm going to go with the Wright brothers as the inventors, not Dumont. 0:52 But these are the different sources-- so there's some debate. 0:56 And if you look through the list of Google results, you're going to see links that could be reliable. 1:01 You're going to see some links that may be fake news, and you're going to see a bunch of ads. 1:06 And trying to figure out which one is which, again, is not always obvious. 1:10 Also, you're going to see a discussion about this controversy as to who was actually the inventor. 1:15 So if you're just looking for a quick answer, we really didn't get it there, but we did get a lot of information.

1:21 Now, it turns out that we have newer technology, AI-based technology-- chatbots --that are giving us information in a much different sort of way. 1:31 One of the new ones that's really taking the Internet by storm these days is called ChatGPT. 1:35 Let's ask it the same question. 1:38 Okay, "Who invented the airplane?" 1:40 And we see a very simple, succinct, authoritative-sounding answer.
1:45 We don't see a controversy. 1:47 We don't see lots of links. 1:48 We don't see ads. 1:49 We don't see fake news. 1:51 There's nothing to really sort through. 1:54 It's just the answer that we're looking for. 1:57 Well, that's really attractive. 1:58 If I can just ask a question and get "the" answer and not have to sort through it all? Well, look, who isn't going to go for that? 2:04 We're going to, as these chatbots get better and better, rely on them more and more. 2:10 Now, this may not be a complete replacement for search engines, 2:13 because sometimes I really do want to look at all the different sources and sort through them. 2:17 But the point is, we're going to start relying-- probably more and more --on these kinds of sources.

2:21 But what makes something reliable? 2:24 Well, it tends to be that we look for observability, we look for the citation of sources 2:30 and things like that, because some sources we trust more than we do others. 2:34 But when we're looking at something like a chatbot's answer, we don't get that. 2:40 So there's a tradeoff here. 2:42 Bottom line, though: if you look at these two results, one is really long and I have to sort through and pore over all the details and do my own thinking, and the other one just tells me the answer. 2:52 What do you think most people are going to go for? 2:54 The simple answer. 2:56 Now, that sounds great. 2:58 What could possibly go wrong? 3:00 Let me tell you what could possibly go wrong.

3:03 Well, so how does this stuff work? 3:05 Well, it turns out that the AI that is behind the chatbot has to get its information from somewhere. 3:15 So we use a knowledge base, sometimes many different sources, and this knowledge base we call a corpus. 3:23 And that corpus is what we use to train the AI, so that a user can then come along, 3:30 ask a question of the AI, and get an answer back.
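The corpus-then-model pipeline described above can be sketched as a toy simulation. To be clear, this is not how ChatGPT or any real LLM is trained; the "model" here just tallies answers in a made-up corpus and samples in proportion to them, and the recipe strings are illustrative. But it shows the speaker's point: poisoning a small fraction of the data quietly corrupts a small fraction of the answers.

```python
import random
from collections import Counter

def train(corpus):
    # "Training" here is just tallying how often each answer appears
    # in the corpus -- a crude stand-in for a model absorbing its data.
    return Counter(corpus)

def ask(model, rng):
    # Answer by sampling in proportion to support in the corpus,
    # so the model mostly repeats what its training data said.
    answers = list(model.keys())
    weights = list(model.values())
    return rng.choices(answers, weights=weights)[0]

rng = random.Random(0)

# A toy corpus: 95 documents give the safe recipe, and an attacker
# slips in 5 poisoned ones -- too few to notice by inspection.
corpus = ["baking soda and vinegar"] * 95 + ["ammonia and bleach"] * 5

model = train(corpus)
poisoned = sum(ask(model, rng) == "ammonia and bleach" for _ in range(1000))
print(f"poisoned answers: {poisoned} out of 1000")
```

With 5% of the corpus corrupted, roughly 5% of answers come back poisoned: rare enough to escape casual notice, common enough to do damage, which is exactly the "insidious" quality described next.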
3:34 So this is the model, and that's all fine and good until somebody decides to mess it up. 3:41 Somebody like this guy comes along and says, 3:45 "Wouldn't it be fun if I corrupted some of the information in this corpus?" 3:50 "What if I introduced a little bit of wrong information and had it mixed in with the good?" 3:54 Because if he came in with really, really blatantly wrong information, this guy is going to detect that. 4:01 But if we come in with a little bit and just corrupt a little bit of the results, 4:04 then we corrupt a little bit of the answers, and this guy ends up with the wrong information. 4:11 And as we become more and more reliant on these kinds of sources, something like this can be insidious. 4:17 It slips in, and before we know it, we've made decisions based upon bad information.

4:23 Now, one of the popular ones of these, as I said before, is called ChatGPT. 4:28 It's a great resource. 4:30 And you saw what kind of answer it gave. 4:32 I'm not saying that this has ever happened with ChatGPT. 4:35 I'm saying let's take a look at this as an example and see what could happen. 4:40 What if this happened in this case? 4:42 Well, in the example I gave before, we might get the wrong answer about who invented the airplane. 4:47 Not the end of the world. 4:49 Nobody is harmed by this. 4:51 But what could happen if the corpus poisoning led to something more important, something more insidious?

4:59 Let's say I go to ChatGPT and say, "I want to come up with a household cleaner just using stuff around the house." 5:07 Well, we could do that kind of search. 5:09 And if I do that, I'm going to get back a very simple answer that says, 5:13 you know what, you could use baking soda and vinegar and water and mix them in exactly this way. 5:18 It's a nice formula. 5:19 Again, I didn't have to search a thousand links and figure out which ones to use and which ones not to. 5:23 I got a simple, authoritative answer right there.
5:27 So let's say I mix it up and I clean my house. 5:30 Great! 5:31 However, let's look at the case where a bad guy snuck a little bit of bad information into the corpus. 5:38 Again, I'm not saying this happened with ChatGPT, but it's possible with any AI to potentially poison the corpus. 5:45 And if that were to happen, what if the formula that came out said, 5:49 instead of using those ingredients, let's mix ammonia and bleach? 5:53 Okay, those are two things you have around the house. 5:55 Well, it turns out that combination is quite toxic. 5:58 That's a bad result for you. 6:00 And it could result in health problems for the individual who ends up mixing these two things together.

6:05 So this is just an example. 6:06 I'm not saying that the world is going to come to an end because somebody mixed up bleach and ammonia. 6:12 But imagine that example where this person is making decisions-- important decisions 6:17 --based upon information in their AI, and they become so reliant, because this chatbot 6:23 has been so trustworthy for so long, that we end up with a problem. 6:27 Well, this is not without precedent. 6:29 In fact, we've had chatbots go rogue before. 6:32 There was a case where a chatbot went on the internet, started learning the language of the internet and the way people interacted, 6:37 and within a day it was spouting all kinds of offensive things to people and had to be shut down. 6:44 So, again, not everything that we see on the internet is in fact true, and not everything that's true is worth talking about. 6:52 So there's a filter that has to go on.

6:54 So what should we expect of our AI? 6:58 Well, we want some sort of observability in order to create this level of trustworthiness. 7:03 We want to be able to verify. 7:05 In an ideal world, I'd like for the chatbot to cite its sources so that I can then go to those sources and verify. 7:11 I'd even like it, almost in a math way, to say "show me your work." Don't just give me the answer.
7:17 Sometimes we just want the answer, but in some cases we really need it to show its work. 7:22 And a lot of times, these systems don't do that. 7:24 But we're going to need to rely on that. 7:26 Another example where this could come along is with code samples. 7:30 You can go into ChatGPT, for instance-- and it's very good --tell it to write you a particular routine, tell it the language, and it will give you source code that you can copy and paste into yours. 7:41 Again, great stuff. 7:43 But what if the corpus was poisoned, and in fact it inserts malware or a backdoor into your code? 7:50 If you start relying on that as your source and all you do is say, "write me a code snippet," copy and paste, 7:56 and you don't verify what's happening, you could end up with a program that's a disaster and not even know it. 8:02 So this is what we have to do: it's the old lesson that we've always had when it comes to sources-- trust, but verify. 8:10 And don't stop doing it just because it's a computer or just because it's AI; insist on verification there too.

8:17 Thanks for watching! Please remember to like this video and 8:21 subscribe to this channel so we can continue to bring you content that matters to you.