Your Brain on ChatGPT

Key Points

  • The differing driving styles of robotaxi companies (Zoox, Waymo, etc.) raise questions about how humans should be trained to respond to a heterogeneous autonomous‑vehicle ecosystem.
  • “Mixture of Experts” introduces its weekly AI deep‑dive format, featuring guests Gabe Goodhart, Kaoutar El Maghraoui, and Ann Funai.
  • A recent MIT study titled “Your Brain on ChatGPT” used brain‑scanning techniques to explore how large language models affect cognition.
  • Panelists report mixed personal effects: LLMs make some tasks feel easier and boost confidence (e.g., coding), while others feel they diminish understanding and make writing feel “dumber.”
  • The consensus is that large language models themselves are neutral; their impact on intelligence depends on how users choose to engage with them.

**Source:** [https://www.youtube.com/watch?v=brrJRoqjVY4](https://www.youtube.com/watch?v=brrJRoqjVY4)
**Duration:** 00:36:04

## Sections

- [00:00:00](https://www.youtube.com/watch?v=brrJRoqjVY4&t=0s) **Debating Human Baseline for Robotaxis** - The hosts examine the difficulty of using a human driving baseline to train autonomous taxis amid varied company behaviors, then introduce the Mixture of Experts podcast episode that will explore the MIT paper "Your Brain on ChatGPT."
- [00:03:06](https://www.youtube.com/watch?v=brrJRoqjVY4&t=186s) **AI Assistance Reduces Brain Activity** - The study found that participants who had previously written essays with an LLM showed weaker neural connectivity and under-engagement of alpha and beta networks when later writing without AI help, prompting debate about whether relying on AI parallels past fears that new technologies diminish human cognition.
- [00:06:08](https://www.youtube.com/watch?v=brrJRoqjVY4&t=368s) **LLMs Boost Cognitive Engagement** - The speaker explains how using large language models as coding assistants keeps their brain actively in the "hot zone," accelerates exploration of unfamiliar problems, and creates a heightened sense of intelligence and engagement.
- [00:09:17](https://www.youtube.com/watch?v=brrJRoqjVY4&t=557s) **Cognitive Atrophy from AI Automation** - The speaker warns that, similar to how industrial machines reduced physical strength, reliance on AI tools may erode deep thinking unless they are used to augment rather than replace human cognition.
- [00:12:21](https://www.youtube.com/watch?v=brrJRoqjVY4&t=741s) **Cynical Optimist on AI** - The speaker shares a "cynical optimist" stance, viewing AI as a means to offload uninteresting tasks so they can dive deeper into personal passions, and then cues a forthcoming story from the San Francisco Chronicle.
- [00:15:29](https://www.youtube.com/watch?v=brrJRoqjVY4&t=929s) **Balancing Perfection and Human Compatibility** - The speaker argues that autonomous vehicle AI must intentionally forego strict algorithmic perfection to mimic human driving habits and social norms, creating a paradox where less-perfect behavior can actually enhance safety and trustworthiness.
- [00:18:37](https://www.youtube.com/watch?v=brrJRoqjVY4&t=1117s) **Geo-Specific Prompting for Autonomous Vehicles** - The speaker proposes using dynamic system prompts, similar to chatbots' zero-shot adaptation, to tailor autonomous vehicle behavior to local driving cultures, thereby enhancing driver comfort and overcoming the "uncanny valley."
- [00:21:44](https://www.youtube.com/watch?v=brrJRoqjVY4&t=1304s) **Adapting Autonomous Driving Models** - The speaker discusses Waymo's rapid ride-volume growth, questions whether human driving should remain the training benchmark amid diverse proprietary robotaxi behaviors, and argues for on-device, continuously updating models.
- [00:24:51](https://www.youtube.com/watch?v=brrJRoqjVY4&t=1491s) **High-Profile GenAI NBA Ad** - The speakers discuss Kalshi's groundbreaking generative-AI commercial aired during the NBA Finals, highlighting its rarity in premium brand advertising and its broader market implications.
- [00:27:54](https://www.youtube.com/watch?v=brrJRoqjVY4&t=1674s) **AI Ads Threaten Likeness Rights** - The speakers warn that as generative AI proliferates in advertising, unintended use of real individuals' faces could spark legal battles over likeness ownership and compensation.
- [00:30:57](https://www.youtube.com/watch?v=brrJRoqjVY4&t=1857s) **AI-Generated Ads: Personalization vs Shared Culture** - A participant asks whether generative AI will push advertising toward ultra-targeted, individual experiences or preserve widely shared, culturally iconic ads like Super Bowl spots.
- [00:34:06](https://www.youtube.com/watch?v=brrJRoqjVY4&t=2046s) **Balancing AI Ads with Creativity** - The speakers debate the flood of hyper-personalized AI advertising, argue that human creativity is essential to keep ads effective, and use the moment to promote their new "Transformers" podcast.

## Full Transcript
0:00Is the human baseline for driving 0:02what they should be trained on going forward? 0:05Because, I mean, if the Robotaxi acts one way, Zoox acts 0:10another way, and Waymo is acting a third way, I mean, are we like 0:14they're expecting a human response from the every other vehicle. 0:17They have a known response from other vehicles in their network. 0:21but then now you've got this whole other set of variables like how 0:23how do you even train against that? 0:26All that and more on today's Mixture of Experts. 0:34I'm Tim Hwang, and welcome to Mixture of Experts. 0:37Each week, MoE brings together the loveliest team of researchers, product 0:40leaders, and deep thinkers to distill down and navigate the high speed 0:44and evermore complex landscape of artificial intelligence. 0:47Today, I'm joined by Gabe Goodhart, Chief Architect, AI Open Innovation 0:51Kaoutar El Maghraoui, Principal Research Scientist and Manager 0:54for Hybrid Cloud Platform. 0:56And joining us for the very first time is Ann Funai, CIO 0:59and VP for Business Platform Transformation. 1:03We have an action packed episode today. 1:05But first, let's talk about "Your brain on ChatGPT". 1:13So, I really want to cover 1:14this really interesting paper that came out of MIT. 1:17A number of researchers published a paper that's literally 1:20called "Your Brain on ChatGPT". 1:22And it's a pretty fun paper. 1:24But first, I kind of want to start with around the horn question, which is simply, 1:27do you feel smarter or dumber in the age of LLMs? 1:31Gabe, maybe I'll start with you. 1:33How do you feel about this? Sure. 1:35If I'm doing something I already feel smart at, like, writing code. 1:39I feel smarter. It's awesome. 1:41If I'm doing something I feel really dumb at, like, 1:44writing for other people to read. 1:46I feel actually a lot dumber 1:47because I don't actually comprehend what I'm getting. 1:50That's a great answer. 1:51Ann, what do you think? 
1:52Actually, I generally, 1:55I almost feel like 1:56it's a neutral, like it's a validation of maybe some of my insecurities. 1:59So I know I'm terrible, like words are hard at that. 2:03And the LLMs like, the AIs, like they write the emails. 2:06I am terrible at writing. 2:08I almost feel like, So I have this validation of things. 2:11I'm like, not smart at, but also it frees up 2:14brain space for the stuff that I am intrigued by. 2:17And it yeah, that I do enjoy pursuing. So. 2:19That's great. We have such a nuanced panel. 2:21It's just like people are going to be, like, smarter, dumber. 2:23But, Kaoutar. What do you think? 2:25How do you feel about all this? 2:26Yeah. 2:26I don't think I feel anything here, but I think maybe the question here is 2:31isn't whether LLM make smarter or dumber about whether we choose to engage 2:35with them in ways that sharpen or soften our minds. 2:38So it's like, really how you engage with these LLMs. 2:41Well, we'll get into all of this in this discussion. 2:43So let me just kind of set up the paper a little bit. 2:46Would love kind of your all's responses. 2:47So this is a fun paper. 2:49They're basically using brain scanning technology. 2:52And what they did is they kind of divided 2:53their research participants into a couple of cohorts. 2:56And they said, okay, we're going to have you all do 2:58a series of tasks where you write an essay. 3:01And then for the people who basically like used LLMs to do this, they have a cohort. 3:06They call LLM-to-brain, where they say, okay, and on this next task, 3:09what you're going to have to do is write the essay just by hand, write 3:13with no AI assistance, 3:15and kind of what they claim is, and I'll just quote them directly. 3:18"LLM-to-brain participants showed weaker neural connectivity 3:22and under engagement of alpha and beta networks." 
3:25So, to put that kind of in more human readable language, the idea is 3:29their brains were actually less active, in while 3:33they were accomplishing this task, shifting from an LLM-based, 3:36assistive kind of scenario to one where they had to do it all by themselves. 3:40And so I guess, Ann, I know you're you're on the show for the first time, 3:43maybe I'll turn it to you for your hot take is like, 3:45how much do we take from this? Is like, 3:46I mean, I know there are a lot of hot takes on the internet that were like, 3:49"it's killing our minds", but do you read it that way? 3:51So, you know, I. It's actually. 3:53I'll just say I'm not surprised by the take, because I. 3:56I would say, I think everyone I think the world is trying to figure out 3:59how to use AI in the best and most advantageous way possible. 4:04But what I actually it reminded me of is I almost feel like 4:08it's like cycles of human- 4:11computer evolution, like like 4:14when tablets and phones 4:16became ubiquitous, it's like, oh, "it's killing our mind. 4:19I can look everything up instantaneously." 4:21And I mean, if you even go way back, like if you think, you know, 4:24historians with books, it's like, "I rule the world for the next generation. 4:28They're going to ruin it." 4:29And like, you know, it's like, I mean, you can go back through like, 4:32you know, you know, Renaissance authors like and reading that 4:36and it's like I kind of almost put that in the context of like, yeah. 4:39So yes, real science behind it with the brain activity. 4:42But is this just another like we "I 4:45rule the world for AI is going to make us all dumber." 4:48And it's and at the end of the day, it is what we make of it. 4:51Like we can take the 4:53I don't know, maybe maybe this is too much of a Gen-X reference. 4:55We can take the idiocracy, take I get that reference. 4:59Yeah. 5:01We're just going to get stupider and just let it become our brains. 5:04But, I'm. 
5:06I'm stealing an analogy someone else use like. 5:08But if we become like Tony Stark with the Iron Man 5:12suit on and let it, you know, be an amplification 5:15of our brain power and an education tool. 5:18That's goodness. Right? 5:20That is I mean, that is real, real goodness 5:23and actually should be pushing our brain power further. 5:26I think if we use it properly. 5:27Yeah. For sure. 5:28Yeah. I was talking with a friend when I read this paper. 5:30I was like you imagine the first person to invent a book 5:32and they're basically like oh well we people have to memorize anything anymore. 5:36You know, it's like so bad for us to have all these books. 5:38It's funny. 5:39We just, under the CFO, a leadership team of us 5:44went through the IBM archives, and they were showing the original, like, 5:48accounting books are on at the IBM archives. 5:50And it's like, well, our, our, our accountants, it's a hard set of words. 5:55Say, are our accountants dumber 5:58because they have spreadsheets now or technology? 6:01And the answer is no. 6:02I think it's detail, nuance. 6:05You know, you can really dig into problems in a different way. 6:07Yeah. For sure. 6:08Gabe, I want to bring you into this conversation 6:09because I think you had such an interesting response, to the kind of hot 6:12take around the horn question where I was like, I'm hoping every with someone like, 6:16my hope was that people would be like, oh, I feel dumber, I feel smarter. 6:19But I think if you were like, it depends on what tasks like for things 6:22I'm good at, I feel more engaged. 6:25I think what you said was if for the 6:26I feel less good at, then I feel differently. 6:28How do you think that kind of applies to some of the results here? 6:31Yeah. So. 6:33Yeah, I definitely teased it in that. 6:34But that was really my read of this. 
6:36Is that, the one thing you didn't mention in the intro 6:40is they kind of did the inverse as well as the LLM to brain, 6:43they did the brain to LLM, 6:44and that the brain to LLM group actually showed really good engagement. 6:48And I think the, the way I have found myself 6:52using LLMs is primarily as a coding assistant, 6:55but where I am completely in control of the code, 6:58and what I use for them is to accelerate my ability 7:01to explore an area that I don't have prepped and ready to go. 7:06In that context, I am still very actively engaged in the act of creation, 7:10and that's a brain space in which my intelligence is 7:14moving faster. - The brain's like firing. 7:16Yeah. Exactly, so if the LLM can 7:18like, remove a time 7:20that my brain had to swap out and go figure out the right Google Search 7:23like that keeps my brain in the hot zone longer and better and it builds faster. 7:27So in that case, I feel way smarter. 7:30Where I feel like it makes me dumber is when I'm trying to get it 7:32to replace something that I don't like to do, 7:34and I'm not very good at doing to begin with. 7:36So I occasionally write blog articles, and if I get in the right Zen, 7:40I can actually sit down and write, you know, expository writing. 7:42But it's not my sweet spot. 7:44And so I could try to come up with a prompt, slam it, on an LLM, 7:47get some text out and skim it more in consumer 7:50mode and critic mode rather than creator mode. 7:53My brain never hits that hot zone. 7:55I never hit that place where I'm actually really thinking 7:58and framing and coming up with the right connections for it. 8:01And in that case, 8:02the thing I get at the end, yes, it took me a fraction of the time 8:05it would have taken me to get it in the first place, but I don't feel 8:08the same sort of ownership and the same level of deep 8:11engagement with what I just created, and I think in that context. 
8:14So I think that's one thing that 8:15I found really interesting about this study was sort of that difference between, 8:19these two different ways of stimulating, either you're already deep in 8:22with your brain and you're using the LLM to boost it, 8:25or you're just starting with the other LLM doing it for you, 8:28and then you're trying to apply your brain to what the LLM already did. 8:31And I think those are a really different way of using LLMs. 8:34Yeah. I'd love to bring these two comments together. 8:36And Kaoutar kind of bring you into this conversation. 8:38You know, I'm old enough to remember, 8:41like the discourse around like, graphing calculators. 8:43Right. 8:44And it was basically like I remember 8:45the basically the, the teacher always being like, "well, it's important 8:48to understand how you do, I don't know, like a graph, a function. 8:52Before you, before you do it automatically on your calculator." 8:55And I think, Gabe, what you're pointing out is the that's that right 8:58is basically it's the brain to LLM versus LLM to brain. 9:01And so I guess Kaoutar, I know you said like you kind of don't feel any way 9:04about this, but, wondering like, you know, how new is this in some sense? 9:08Like, do you think this is just like, now LLMs are kind of repeating, 9:12I guess what we've kind of already gone through with stuff 9:14like, say, like a graphing calculator or something like that. 9:17Yeah. 9:17Actually, I like to think it also, as you know this a 9:21parallel, like, kind of 9:23mirroring the historical effects of the industrial automation, 9:26you know, as machines relieved humans of physical labor, 9:30physical strength and endurance kind of declined. 9:33for, for the majority of us. 
9:35And unless you really, you know, work really hard, 9:38you know, those muscles and exercise and things like that, you know, 9:41if you look at the majority of the people back then, most of us, you know, 9:45most of the people were stronger because they had to do a lot of physical labor. 9:48But as we had not, we rely on more cars 9:51and on, you know, these machineries to clean our houses, to do these things. 9:55Our muscle evolved to be weaker. 9:57And I worry a little bit, you know, are we, you know, 10:00getting into these cognitive automation risks or a similar atrophy here. 10:05Not for our muscles but for our minds. 10:07You know, just as cars made us walk less, AI systems 10:12could make us think less deeply. 10:14So we're not just outsourcing task, we're externalizing cognition. 10:18And I think that's what this paper is kind of a crucial wake up call here 10:23regarding the uncritical adoption 10:25of these AI tools for complex cognitive tasks. 10:28So I think it depends how you engage. 10:30You know, when I said, you know, 10:31how you engage with these tools, so are you going to really, you know, over 10:35rely on them for these deep thinking without really engaging your brains 10:40or you want to use them, like Gabe mentioned, 10:42as you know, to augment you for tasks that you really good. 10:46So I think it depends. 10:47And I think here, you know, if you're looking 10:50at the concept of this cognitive depth also that, you know, mentioned here 10:53is, you know, particularly compelling, suggesting that, you know, this subtle 10:57but profound long term impact on how our brain functions. 11:01So I think for individuals, you know, especially in educational 11:04professional settings, I find the takeaway isn't, 11:08you know, to abandon AI, but to cultivate, you know, this cognitive resilience. 
11:13So meaning, you know, using AI strategically for brainstorming, 11:17in fact-checking, summarizing, boosting your performance, but consciously 11:22engaging in, you know, deep thinking 11:24analysis or original sentences ourselves. 11:27So it's more about, you know, how do you treat these AI tools 11:31to augment, not to replace our fundamental cognitive process. 11:36So it's like, how do we find that critical balance? 11:38Yeah. That's a great point. 11:39And I think, 11:40and maybe I'll kick it to you because we could go much longer 11:42and I need to move on to our other stories. 11:44But I mean, in responding to what Kaoutar is saying is, 11:47is there a view here, like, I just to play skeptic for a moment, 11:49it's like, well, it's all well and good to tell people that they need 11:52to, you know, use their critical faculties with this technology. 11:55But like, people are lazy, right? 11:58Like we can't expect people to do that. Yeah. 12:00And so like, I don't know I think like is it, 12:01is it hoping against hope that people are going to kind of like 12:05use this technology in a way that looks a little bit more like brain 12:08to LLM versus LLM to brain, you know, 12:09- to use the language of the paper. No, and I, 12:11exactly, and my hope would be the 12:13brain to LLM, you know the comment I made about. 12:15You know, it's how we learn to use it. 12:17My hope is that we shifted to that. 12:18And I absolutely agree with the humans are lazy. 12:21You know, again, myself include I use examples like read my email 12:24like I put the words make it usable. 12:27But you know, like you, I, I joke that I'm an optimist, 12:30but I'm a cynical optimist because I could see every way 12:33it could go wrong before you actually get to the most optimistic outcome. 12:37And I mean, where I would put my 12:39my hope and optimism in this case is, you know, at the end of the day, 12:44we're still human beings that have things that interest us and drive us. 
12:48So you know, I, I, I love 12:51I will go read technology papers, I'll play with things, toys, whatever, 12:55you know, and that's always going to be what drives me. 12:58And an AI is actually going to help me go further and deeper, I think in that. 13:02And, you know, it could be the same with someone who's a doctor, a lawyer, like, 13:07I don't know, maybe retail shopping 13:10changes and marketing like my hope 13:12and optimism would be that makes you lazy in the tasks 13:16that don't drive the things that interest Right. 13:20It's. It's optimally lazy. Yeah. Yeah. 13:21It's optimally, yeah. I love that! 13:22That's perf. It's optimally lazy. Yeah. 13:25Yeah. 13:25That's great. 13:26Well, much more to talk about. 13:28We'll be paying attention to this story. 13:30I'm sure there's gonna be a lot more to come on this kind of research. 13:32But I want to move us to our next topic. 13:39Super interesting 13:40story came out of the SF, Chronicle. 13:43It's kind of like the local metro paper in San Francisco. 13:46And in San Francisco. 13:47I don't know if this is some of our listeners will be in cities where, 13:50you know, these robot taxis are rolling out. 13:52Autonomous driving is a thing where you can just call a robotaxi. 13:55It will take you to your location. 13:56And these are all run by, Waymo right now, which is, 14:00part of the kind of Alphabet Google kind of network of companies. 14:03And the article is really fascinating because it focuses on the idea that 14:07now that they've seen such great success with the Waymo's, 14:10the Waymo's are now actively driving 14:12a little bit more 'aggressively.' 14:15And one of the great examples they give is that now, you know, 14:18the Waymo's will do this like little rolling start, you know, where 14:21it's about to go through an intersection. 
14:23And, like, much like a human, would you kind of, 14:24like, loosen up on the brake and it's kind of a signal 14:26to the rest of the road, like I'm getting in here. 14:29And I think this is, like, so fascinating. 14:32And at least what the Waymo folks say in the article is that, like, 14:36it turns out that having a robotaxi that's like a lot more brisk 14:40and a lot more decisive and like, dare I say it kind of like a jerk 14:43a little bit on the road actually makes things safer. 14:47Which I think is just like such a funny sort of outcome. 14:50And so, Kaoutar, I would love to bring you in on this is like, 14:53how should we think about this? 14:54Because normally, I think in the chat bot world, we've tended to make our AIs 14:57like very, you know, like very, catering. 15:01But this is almost like a, an example, I think, where it turns out 15:05that we're getting better results from having AIs that are, like, 15:07much more assertive when they interact with humans. 15:09And so, just curious, you have any hot takes about that? 15:12Yeah. It's very interesting. 15:13I found really fascinating how Waymo, is now prioritizing human-like, 15:17driving behavior to better integrate into real world or urban environments. 15:23So I think what this is saying is safety 15:26doesn't just mean rule following and being very strict with these rules. 15:29It also means fitting into a human-centered system 15:33and overly cautious, you know, automatic, you know, driving 15:37machines or AVs can be disruptive, as, you know, what they've seen on the roads. 15:41So this shift is kind of reflects the this delicate trade off, 15:46between, you know, algorithmic perfection but also social compatibility. 15:50So it seems like we're entering this 15:53"uncanny valley" of behaviors, cars that are smart enough 15:57to mimic our bad habits. 15:59And what does it say here when AI becomes 16:01more trustworthy by being less perfect. 16:04So it's very interesting kind of paradox here. 
16:07The we need less perfection here to really fit, you know, these social norms and, 16:12and that kind of translate into safer assistant because, 16:15you know, these cars have to act in human environments. 16:19So they have kind of to adopt how we're being. 16:21So and this is kind of also critical, you know, highlighting here 16:25a critical challenge in designing these AI systems, especially for the future 16:29that operate in complex and predictable real world environments. 16:34So how human-like should should they be. 16:36And that's the key question here. 16:38So and I think what's Waymo's approach here is suggesting 16:42that really strict adherence to rules might not be the safest. 16:46And that's that is very interesting. 16:47Gabe. I, I lived in San Francisco for many years. 16:50And then spent about few, a few years in LA. 16:53And I remember when I moved to LA, I was like, these drivers are unhinged. 16:57Like, basically like the, the culture of driving is just, like, 17:00aggressive in a way that is, like, not very familiar. 17:03Having driven around San Francisco for like, you know, close to a decade. 17:07I guess one of the interesting questions and building off of what Kaoutar just said 17:10is that it kind of suggests that for some of these systems, 17:13we're almost going to have to, like, localize them to cultural practices. 17:17Like, is that the right way of thinking about. 17:18It's like very different from how we think about rolling out these systems. 17:21Typically, I think. 17:22Well, the analogy that immediately jumped to my mind reading this article 17:25was the shift from pre-GenAI chatbots to GenAI chatbots. 17:29If you think about how we built chatbots before Transformers, 17:33we built them by crafting a very deep decision tree 17:36and trying to figure out at every point in that tree 17:39where the person was trying to go down that tree 17:42and then taking them down the right to right, leg of it. 
17:45And if any of you, all of us, I'm sure, have all walked through one of those trees 17:49on the phone or on a chat on somebody, customer assistance. 17:53And it's really clunky. 17:54It's like I'm trying to reverse engineer the tree in my head. 17:56I'm trying to figure out, okay, I know I'm on the wrong branch. 17:59I want to move up a node and get back down this other one. 18:01And it's maddening. Right. 18:03And I think, a rules-based vehicle is very analogous. 18:08Right. 18:08It's really trying to make sure it's following exactly the right structured 18:12path in its trajectory of possible 18:14actions at every given point in time. 18:17And I would love to unbox what Waymo is doing here, 18:20because my guess, frankly, is that they're starting to apply 18:24a much more free-form, decision making space, akin to generate me 18:29the next token, generate me the next thing that needs to happen. 18:31So it's I wouldn't be surprised if they've got a reinforcement 18:34learning transformer sitting on top of whatever their rule system is. 18:37Now, that's 18:38actually got a much wider space of possible next actions, 18:42and they're generating the stuff on the fly. 18:43So to me, 18:45when you say localize to different geos, that's a different system prompt, right. 18:48Like it's basically I need to basically, 18:51you know, zero shot, learn my car 18:54with a whole bunch of examples of the crazy LA drivers. 18:57Right? 18:58So in some ways too, if we start applying this more flexible way 19:03of adapting the behavior to the environment, it it may actually, 19:07you know, just as the article suggested, 19:09make the vehicle fit in a whole lot better. 
19:11And I think, honestly, that's one of the things going back 19:14to that analogy to chatbots that really powered the AI explosion 19:18is that all of a sudden, the consumption experience 19:22jumped over that "uncanny valley" that you mentioned, Kaoutar, 19:25that said, you know, now I feel like I'm talking to a real entity on the other end. 19:30I'm not reverse engineering in my head. 19:32I feel comfortable in this space in exactly the same way that drivers 19:36would now feel comfortable in a space where they're mixed 19:39other humans and autonomous vehicles, 19:41because those autonomous vehicles fit their mental perception. 19:44Yeah. 19:44I love the idea that somewhere hidden in, like, Waymo's 19:47cloud, there's a prompt that's like you're a San Francisco driver, 19:50you know, 25 to 35, and, like, you live in the Mission District is there. 19:54In San Francisco, like, and I think, do you want to take us on, like, 19:59a little bit of a journey into, like, where this all goes in some sense? 20:03Because I think, like, 20:03what's really fun about this is like, this is a multi-round game now, right? 20:08Imagine a future where there's multiple car companies operating 20:11autonomous vehicles and like one of them is like I can actually get my consumer 20:15location faster if my car is like a little bit of a jerk. 20:18And so Ann, I'm really interested in kind of thinking about 20:20like how this evolves, but I'll just kind of give you that prompt. 20:22I'm curious about how you think about that. Yeah. 20:23And it's actually 20:24It is actually funny, this was not planned at all, we can attest that; that's actually exactly where my head went to 20:29Great 20:30And as an aside, by the way, I'm in Austin, Texas, 20:32and the San Francisco LA analogy is Austin and Houston, like, 20:36Houston is like Houston is a whole other whole other game. 20:40Entire social media feeds associated the ridiculous. 20:42Houston driving. 20:44But it is funny. 
20:45You you, you know, targeted me that way because what we have going in 20:49Austin is interesting. 20:49We have Waymo, we have Zoox. 20:52And we now as of this week, have the Tesla robotaxis. 20:56So I read the irony is right before 20:59we, you know, got this article to look at to discuss. 21:03It's coming back from a trip to the airport. 21:04My partner and I are in the car and he's like, we're both like commenting and 21:07looking "that Waymo is driving like a maniac." 21:11Like it was like it was actually going above the speed limit. 21:14It was doing a little bit of zoom -Oh, no 21:16And it was actually a little bit 21:19like "Uh, maybe I won't drive" - Yeah. 21:21Maybe, it's not there yet... 21:22but it kind of and he's a tech person too. 21:25So it kind of led us into this 21:26weird conversation of like, okay, where is this going? 21:29What's doing that? 21:30How is that learning? 21:31And then saw this article and I thought like, gosh, what happens? 21:34Because as there are more of the autonomous vehicles on the road, 21:40they are trained on human behavior, not the behavior of each other. 21:45And, there was actually another piece I had seen right around the same time, 21:48I think it was, a New York Times article talking about how the like, 21:52the evolution of Waymo, 21:53that in the first six months of 2025, they've already done double the rides. 21:56They did in all of 2024 and think 2024 was five 22:00x 2023, which obviously due to expansion in part. 22:03But it's like I know really got me thinking is like, 22:07Is the human baseline for driving what they should be trained on going forward? 22:11Because, I mean, if the Robotaxi acts one way, Zoox acts 22:15another way, and Waymo is acting a third way, I mean, are we like 22:20they're expecting a human response from the every other vehicle. 22:23They have a known response from other vehicles in their network if you will, 22:27Right. 
22:28But then now you've got this whole other set of variables, 22:30like, how do you even train against that? 22:32Because let's be honest, they're all going to have proprietary approaches. 22:35They're all going to learn, they're all going to have 22:36a proprietary way of doing it. 22:38So I actually think in five years, 22:41or five years or less maybe, I guess, looking at how fast -Yeah. 22:45it's doubling. -Yeah. 22:47I think it's going to be very, very important to adapt. 22:50You know, like, how do these, you know, trained 22:52models adapt, you know, on the fly? 22:55And I see this as, maybe we need more tiny models, 22:58or more capable models, you know, local on device, 23:02you know, that can, you know, make decisions and, you know, retrain, 23:06fine-tune, and things, you know, on the fly, in real time. 23:09Especially, you know, the driving will change depending on, 23:13like you said, you know, in San Francisco; if I go to Morocco, for example, 23:16the driving is way different, much more aggressive. 23:19You know, yeah. 23:19You would win "aggressive" in Morocco, for sure. 23:22Oh my God, you know, I can't drive there myself. 23:24So I can imagine, like, a car trained, you know, here in the US, 23:29you know, putting that in Morocco, you know, it needs to adapt completely, 23:32you know, different, much more aggressive behavior. 23:35So I think we need more of that going forward, 23:37you know, 23:37so we just don't rely on these statically trained models, 23:40but these models have to adapt constantly. 23:43I could even see, 23:44what could be interesting is, like, a lot of open source consortiums 23:47have started because of similar problems.
23:49Like, this is, like, we want to have, 23:51you know, your proprietary piece as a company, 23:53but you recognize there's an area where you have to have common 23:57understanding, common knowledge, common engagement. Like, maybe it is, 24:00okay, we admit we're going to, like, go use an open source piece, 24:04that is how we train in the same way, 24:06so we're not all crashing into each other, but then they put 24:09proprietary pieces on top of that for their business model. 24:12Yeah. 24:13I think the handshake will be very interesting, because, yeah, 24:15think about different brands of autonomous vehicle. 24:17You know, it's like, your car's computer vision model is like, 24:20oh, but that's a Tesla robotaxi, you know, like, we got to, 24:23you know, navigate around it in a way that's different from a Waymo. 24:26And I feel like the easier way is if there's just some technical handshake 24:29that says, hey, you know, I'm just signaling to everybody on the road 24:32that I'm from this company and have these attributes. 24:35So that'll be very, very interesting to see. 24:41Well, great. 24:41I'm going to move us on to our next topic. 24:44I am, by admission, not really a sports guy. 24:48But I was roped into watching the NBA finals, which were great. 24:51I think I'm now a basketball guy. 24:53And I caught this really interesting ad that, 24:56it turns out, was, like, widely talked about in the ad industry. 24:59There's a prediction market company called Kalshi. 25:02And they did this completely surreal, 25:05mind-bending ad that played during game three of the NBA finals. 25:09A lot of crazy scenes. 25:10And I remember looking at it being like, this really looks like GenAI. 25:14And lo and behold, it 25:15came out later that, of course, this is, like, a GenAI ad, and 25:19I think it's one of the most high profile end-to-end 25:22GenAI ads that we've really kind of seen happen in the media.
25:26And I wanted to bring it up because in the past, 25:28I think we've talked often about generative 25:31AI for ads as kind of something that we see for, like, 25:34you know, kind of, like, more bargain bin ad inventory, right? 25:37Like the kinds of things you encounter online. 25:39But this is, like, high prestige, 25:41often what marketing people call, like, brand advertising. This is, like, 25:44you know, an ad that you'd see in the New York Times, right? 25:47It's a little bit like that. 25:49And so, I guess, Gabe, I'll bring you in. 25:52I'm kind of curious about, like, how we should sort of read this, that, like, 25:55basically, almost like, the use of these technologies 25:59is so good now that, you know, a big company like Kalshi will say, 26:03we're going to spend a huge amount of money, and then, like, 26:05use this technology to generate an ad for this really high profile event. 26:10It's a signal of some kind, I think. Right. 26:11Yeah. 26:12I mean, I think I have three different reactions to it. 26:17Sure. We need all the takes. Bring them. 26:19One on the technical front, 26:21Sure. 26:22one on the consumer front, and then one on the skeptic front. 26:24So, on the technical front, 26:26one thing that I thought was really compelling was that 26:29I watched the ad and 26:32there were very few of the GenAI 26:36sort of blemishes you might expect now. 26:38They did a good job of making it a fast-paced ad, 26:40so your eye isn't going to pick up on the fact that one random person 26:43in the crowd has six fingers or something like that. 26:45But, you know, 26:48the actual quality of what was generated was really good. 26:51And I think, you know, that merged with some clever expertise of how 26:54to cut the ad together, you know, really produced a good looking ad. 26:58It didn't smack of, you know, 27:02some duct tape here to hide all the gorp. 27:04Right.
27:06From a consumer standpoint, you know, like, 27:08it's a good ad, and if it, you know, lowered the denominator 27:13for the cost of creating it, for the company making it, cool. 27:16That sounds like, you know, a good optimization 27:19for, you know, the industry, for the world. 27:22I think my biggest take, though, is on the 27:24skeptic and the worrier front, which is: 27:27who were those humans in the video? 27:30Now, obviously they were not recorded humans, but 27:33we all know that GenAI models are based on a whole boatload of training data. 27:37So as this becomes more ubiquitous, 27:41what are the odds that somebody's face, 27:43who did not give their permission to be in an advertisement for 27:47Company X, shows up on screen, 27:50with absolutely no way of validating whether that's happening or not? 27:54Right. 27:54It's a huge gulf between whatever training data went into the model 27:58and the actual faces and images and body 28:00representations and all of that, that pops out on the screen. 28:03And, you know, right now it's a needle in a haystack, right? 28:07Like, do you think anyone in the background 28:08scenes of any of those are going to happen to be people that watch it and say, 28:12"hey, wait a minute, that's me. 28:13I'm going to sue your pants off?" No. 28:16But as the number of ads that are created with GenAI balloons, 28:19it's going to happen, right? 28:21The odds are going to shake out that somebody is going to suddenly realize 28:24that their face is popping up in ads that they have nothing to do with, 28:28and they're getting no compensation for. 28:29So it's a different but related element to 28:34the copyright issues around authors' books, 28:37you know, snippets popping up if they're sufficiently popular.
28:40It's, I think, going to go down that same rabbit hole of ownership 28:44of likeness, ownership of content, where the content in this case 28:49is your actual, you know, persona in visual space. 28:53Yeah. Ann, maybe I'll turn to you. 28:55Like, I think Gabe is raising a really good point. 28:57And I think one of the things 28:58I really want to investigate is, like, how mainstream this becomes, right? 29:02Like, how much of this is kind of a one-off novelty? 29:04Everybody's, like, surprised that AI can do this, but, like, 29:07I have a friend who's in the ad industry who's like, 29:09I just don't think it's a very good ad, you know? 29:11But then I think on top of that, like, you layer on 29:12everything that you're talking about, which is, well, there's also 29:15all these other risks that come with using this technology. 29:18Do people want to take on that risk when they do these kinds of ads? 29:21I guess, Ann, to kind of, like, put it in, like, a sharper term: 29:24it's like, you know, in 3 or 4 years, do we feel like every ad for game 29:28three of the NBA finals is going to be AI generated? 29:31Like, how far do you think this is going to go? 29:33Yeah. Now. Well. 29:34Before I answer, plus one to everything Gabe said. I mean, there's 29:37so many things that can go in any direction 29:41there. 29:41You know, what I kind of went back to when looking through that article was, 29:45you know, at the end of the day, marketing is still a data-driven exercise, right? 29:49Like, to be a marketer, it's data-driven, and 29:51it's not so much about, 29:53are we going to have more AI ads, but what are the outcomes 29:56businesses are trying to drive through an advertisement? 30:00Right. 30:00And is it just awareness? Like, you feel like, 30:03hey, people haven't been paying attention to us, or, 30:07you know, our awareness is going down, our revenues are dropping.
30:10We need to do something flashy that just gets our name out there 30:13and gets people looking at us again. 30:14Or, like, are we trying to, you know, sell a specific product? 30:17But again, it goes back to, like, what is your goal with the ad, 30:21and that outcome you're kind of trying to drive. 30:23So I would say that's where there's a little, you know, TBD on there. 30:27But in the same vein, like, let's go back to maybe that first conversation 30:30about brain to LLM and LLM to brain. You may have a clear outcome 30:35you're trying to drive, a clear vision of what you're trying to do, 30:38but the AI may be able to create the advertisement faster 30:41and better than a human could, which is, you know, I would say, the brain to 30:46LLM versus the LLM to brain diminishment we were talking about before. 30:50So again, I would lean on, 30:53you know, marketing is always going to be outcome driven. 30:55It's going to be a flashy thing. 30:57But I think, again, the direction of the flashy thing, 31:00used for the right purpose, I think, could get really interesting. 31:04Yeah, I think that's right. 31:05Kaoutar, one final bit 31:07I think you'd be well positioned to talk a little bit about is, you know, 31:11I think in the past when I've heard this discussion 31:12about AI-generated ads, it's been very much like, "oh, in the future, 31:17everybody is going to have their own custom ad," right? 31:20We can use generative AI to basically create, like, your favorite movie star 31:23telling you you should use Kalshi or whatever as a service.
31:27This is kind of an interesting place where actually generative AI is being used 31:30for everybody to see the same ad, and I'm curious 31:34if you want to kind of talk a little bit about that. Like, do you think, you know, 31:37it's more likely that people will want the ultra-targeted stuff, 31:40which is a little bit building on the theme 31:41that Ann was talking about, or, you know, is there something 31:44really fundamental to advertising, which is, no matter how it's created, 31:47we still kind of want it to be, like, a shared culture in some ways? 31:51Like, I think about those Super Bowl ads that, like, you know, 31:53became kind of cultural movements in their own right. 31:56It sounds like, 31:57you know, maybe this is actually kind 31:58of even preserved in a world of generative AI. 32:00Yeah. That's a very interesting point. 32:02Of course, I think personalization is an important aspect here. 32:07Some people would like that. 32:08Some people don't, because they want, like, the shared 32:11kind of advertising, to see what everybody's seeing. 32:14So it's interesting to see. 32:16And I think in the world of generative AI, it's really possible, 32:18because there is so much data that they're collecting on each one of us. 32:21So, you know, if they can generate, you know, this generic ad, they 32:25might as well generate these personalized ads based on, you know, your, 32:29you know, your historical preferences and data, what you've purchased, 32:33and things like that. 32:34So I think we will see both of those. 32:36And it's just interesting, you know, 32:39when I was looking at this new ad and the statistics behind this: 32:44so they had, like, 300 to 400 32:46generated results and around 15 usable clips. 32:50You know, the cost was $2,000, which is about 32:5395% cheaper than traditional production.
32:56And, you know, it took 2 to 4 days, 33:00using one creator, for the full ad, 33:03and an estimate of, like, 18 million views. 33:06That is really huge, you know, in about 48 hours. So 33:10what is this 33:11telling us? You know, of course, more of these marketers 33:14and companies will use these tools to create these ads. 33:17But what are the implications of this? 33:20I think if we really think about this more deeply, 33:23AI here isn't replacing the creatives. 33:26It's also fragmenting this creative task stack. 33:30And so the bottleneck is no longer on the production side, 33:35but more in the ideation and the originality of the ad. 33:38So, I mean, yes, we can generate all of these things, like, 33:41you know, like Gabe mentioned, maybe faces that, 33:44you know, randomly will be picked up, 33:46that could be any one of us. 33:48And so, how creative are these ads? 33:50So I think what Kalshi here is highlighting is both the promise 33:54and also the peril. 33:56So, democratizing content creation, you know, 33:59here at an industrial speed, but also the risk 34:02of having this homogenized, hyper-targeted media. 34:06And we could soon be, like, flooded with, you know, 34:09highly personalized ads, but are they going to move us? 34:12Are we going to find them creative? 34:14Or, like, the wow factor? 34:16Is it going to be there? 34:17So that is the key question here. 34:19Again, can generative AI do that? 34:21Or maybe we need, you know, some additional things 34:24that we have to bring to the table, with human creativity, 34:27that's really going to make it or break it for the viewers. 34:31Yeah. I hope they get it right. 34:32I mean, I think otherwise it's a pretty dark future of, like, 34:34just being flooded with, like, very sloppy ads that you just don't like. So. 34:40Yeah, exactly. 34:41Well, that's all the time that we have for today. 34:43I want to end with two special notes.
34:45Ann, I know this is the first time on the show. 34:48If people want to find you, keep up with your work, 34:50where should they go? 34:51And so, funnily enough, 34:53if you enjoy podcasts, 34:55we started a podcast called Transformers. 34:58And, you know, our goal really is to show people across industries, 35:02spanning technical to non-technical roles, 35:06really what it takes to transform, 35:08you know, a company, a business, you know, open source, closed source, 35:13fintech, you know, tech. 35:17So, you know, come find me over there. 35:19We have a lot of fun 35:20with a lot of really interesting guests from a lot of really fun places. 35:24And hopefully the conversation is entertaining. 35:27It is. 35:28Yeah. For sure. It's really good. 35:29You should subscribe, listeners. 35:31And then finally, I want to take a personal moment to thank our producers, 35:35Hans Buetow, Mike Rugnetta, and Michael Simonelli. 35:38They've basically been fearlessly working behind the scenes 35:42essentially ever since MoE got started, like, a year ago. 35:44And we owe essentially a huge amount of the success of this show to them. 35:49This is their last show that they are working on with us here at MoE, 35:53so we will miss you guys. 35:55Thanks to all you listeners. 35:56If you enjoyed what you heard, you can get us on Apple Podcasts, Spotify, 36:00and podcast platforms everywhere, and we will see you next week on Mixture 36:03of Experts.