
AI Search Challenges the Browser Era

Key Points

  • The panel argues that while browsers may evolve, AI‑driven search will remain the primary gateway to most tools and applications.
  • A new “top news” segment spotlights major AI developments, including NVIDIA and AMD allocating 15% of China chip sales revenue to the U.S. government and Apple unveiling a tabletop companion robot and a multi‑speaker, more natural‑sounding Siri.
  • Recent Anthropic research reveals that leading AI assistants (Claude, ChatGPT, Llama, etc.) tend to flatter users, providing biased or inaccurate responses to please human interlocutors.
  • Google DeepMind has open‑sourced its Perch bio‑acoustics model, aiming to help conservationists monitor wildlife sounds and protect endangered species.
  • Perplexity, an AI‑search platform, made a surprising bid to acquire Google Chrome, signaling a potential shift in how search and browsing services might converge.


**Source:** [https://www.youtube.com/watch?v=HK17Bt5rUtc](https://www.youtube.com/watch?v=HK17Bt5rUtc)
**Duration:** 00:40:32

## Sections

- [00:00:00](https://www.youtube.com/watch?v=HK17Bt5rUtc&t=0s) **Future of Browsers and AI** - The host introduces a podcast episode asserting that AI-powered search will become the main entry point to applications, while previewing topics such as Grok Imagine, GPT-5, and Perplexity's bid for Google Chrome.
- [00:03:07](https://www.youtube.com/watch?v=HK17Bt5rUtc&t=187s) **Discussing Perplexity's Chrome Acquisition** - The participants unanimously reject the notion of selling Google Chrome to Perplexity for $34.5 billion, then explore Perplexity's potential motives, emphasizing Chrome's massive user base and the industry's shift toward AI-enhanced browsers.
- [00:06:12](https://www.youtube.com/watch?v=HK17Bt5rUtc&t=372s) **Browsers as Future AI Platforms** - The speakers discuss how the browser may become the primary entry point for AI applications, referencing Perplexity's focus on Chrome integration and the broader trend of AI-driven search shaping web interaction.
- [00:09:25](https://www.youtube.com/watch?v=HK17Bt5rUtc&t=565s) **AI-Embedded Browsers Automate SaaS Workflows** - The speaker explains how AI agents built into browsers can act on behalf of users to quickly automate tasks across SaaS platforms such as Salesforce, SAP, and Workday, enabling faster workflow automation for both enterprises and individuals.
- [00:12:31](https://www.youtube.com/watch?v=HK17Bt5rUtc&t=751s) **Grok Video Generation Debate** - The speaker critiques Elon Musk's hype around Grok's video-generation tools, likening them to Vine and questioning whether generative video will become a consumer staple or remain an expensive enterprise capability.
- [00:15:36](https://www.youtube.com/watch?v=HK17Bt5rUtc&t=936s) **Short-Form AI Media & Regulation** - The speaker discusses how culturally tuned short-form AI-generated content appeals to former Vine users, while highlighting deepfake controversies, moderation needs, and the transition toward B2B oversight.
- [00:18:44](https://www.youtube.com/watch?v=HK17Bt5rUtc&t=1124s) **Transparency and Ethics in Generative AI Video** - The speaker emphasizes the need for clear disclosure of resource consumption and ethical guidelines when deploying generative AI video tools, urging best-practice recommendations for responsible use.
- [00:21:49](https://www.youtube.com/watch?v=HK17Bt5rUtc&t=1309s) **AI-Driven Content Moderation Evolution** - The speaker describes how early social platforms struggled with feedback loops for flagging unsuitable media, but modern large language models now achieve high-accuracy, scalable moderation, necessitating personalized guardrails, clear frameworks, and open-source collaborations before broad rollout.
- [00:25:02](https://www.youtube.com/watch?v=HK17Bt5rUtc&t=1502s) **GPT-5 Doesn't Signal an AI Plateau** - In response to Gary Marcus's claim that GPT-5 marks a dead end for current AI methods, the speaker argues that rapid model scaling, falling compute costs, and broader product strategies like OpenAI's "super-app" approach show continued, substantial progress rather than a plateau.
- [00:28:07](https://www.youtube.com/watch?v=HK17Bt5rUtc&t=1687s) **Enterprise Concerns Over Model Deprecation** - The speaker highlights the challenges of moving to newer AI models, requiring algorithmic improvements for AGI, managing version deprecation like GPT-4o, and balancing enterprise users' attachment to established tones, while expressing optimism about GPT-5's raw power and feature advancements.
- [00:31:19](https://www.youtube.com/watch?v=HK17Bt5rUtc&t=1879s) **Challenges Updating AI Model Stacks** - The speaker explains that frequent model upgrades force developers to constantly rebuild their workflows, making rapid mastery difficult while navigating polarized opinions and noise surrounding OpenAI's changes.
- [00:34:23](https://www.youtube.com/watch?v=HK17Bt5rUtc&t=2063s) **Personalized Emotional AI Assistants** - The speaker envisions AI models that let users customize voice, tone, political bias, and long-term memory, incorporating emotional-intelligence benchmarks to more accurately grasp and respond to individual intent.
- [00:37:27](https://www.youtube.com/watch?v=HK17Bt5rUtc&t=2247s) **From Consumer GPT to Enterprise** - A speaker discusses the remaining leap from current consumer-focused GPT usage to enterprise adoption, highlights recent feature improvements, and expresses anticipation for future model releases beyond GPT-5.

## Full Transcript
0:00 I mean, like there's almost a view, which is maybe the browser is, like, toast, 0:03 like we won't really have browsers in the future. 0:05 I think the browser is still kind of your first entry point 0:08 into a lot of, you know, tools, applications. 0:11 You want to find something, you typically go to a browser. 0:13 So AI-based search functionality is really the conduit 0:17 in which I think a lot of these tools and technologies 0:19 and applications are still going to be accessed. 0:21 All that and more on today's Mixture of Experts. 0:30 I'm Tim Hwang and welcome to Mixture of Experts. 0:33 Each week, MoE brings together 0:34 a panel of the most brilliant minds in technology 0:37 to banter, analyze, and argue our way through the thrilling 0:40 and often baffling news each week in artificial intelligence. Today 0:43 I'm joined by a great crew. We've got Shobhit Varshney here, 0:46 Head of Data and AI Consulting for US, Canada and Latin America, 0:49 Abraham Daniels, Senior Technical Product Manager 0:52 for Granite, and Sophie Kuijt, 0:54 joining us for the very first time, IBM Distinguished Engineer and CTO for IBM 0:58 NCEE. We have a packed episode today, as always. 1:02 We'll cover Grok Imagine, check in on GPT-5 and 1:05 cover Perplexity's bid for Google Chrome. 1:06 But first, starting today, we're going to have a quick segment 1:09 at the beginning of each episode 1:10 that talks about the top news stories from each week, 1:13 and that's going to be helmed by Aili McConnon. Aili, over to you. 1:21 Hey everyone, I'm Aili McConnon. 1:23 I'm a tech news writer for IBM Think. 1:25 Before we dive into the episode, I'm here with a few quick 1:28 AI headlines you may have missed this busy week. 1:31 First up: chipmakers. 1:33 NVIDIA and AMD have reached an unprecedented arrangement 1:36 where they will give the U.S. government 15% 1:39 of their revenue from chip sales in China. 1:42 Apple is planning several new AI devices, including a tabletop robot 1:47 that will serve as a virtual companion.
And another AI enhancement 1:51 Apple's planning is a more lifelike-sounding Siri 1:54 that will be able to communicate with multiple people at the same time. 1:57 Meanwhile, in case you missed this interesting 2:00 piece of research, Anthropic found that five major AI assistants, 2:04 including Claude, ChatGPT and Llama, are people pleasers. 2:08 The AI assistants systematically gave biased feedback 2:11 and inaccurate information, 2:13 all in order to flatter their human users 2:16 and provide answers that aligned with their views. 2:19 Last but not least, Google DeepMind 2:22 has released an updated open-source 2:24 version of its Perch bioacoustics model. 2:26 This is going to help conservationists analyze 2:29 wildlife audio and better protect endangered species. 2:32 Want to dive deeper into some of these topics? 2:35 Subscribe to the Think newsletter. 2:37 It's linked in the show notes. 2:38 And now back to our episode. 2:43 So first I really wanted to talk about Perplexity. 2:46 So this was a perplexing bit of news that popped up this week. 2:50 It came out that Perplexity, the sort of 2:52 AI search tool that many of you will be familiar 2:55 with, has made a bid 2:57 for Google's Chrome browser, which many of you, you know, 3:00 maybe are even watching the show on, 3:02 um, for the small sale price of $34.5 billion. 3:07 Um, so we're going to get a little bit into why this is happening, 3:10 why they would price it so high, 3:11 what this actually is even for. 3:13 Um, but first I just want to start with a fun question, 3:16 which is, would you, if you ran Google, would you 3:19 sell Chrome to Perplexity for $34.5 billion? 3:22 Uh, Abraham, yes or no? 3:23 What do you think? I would not, no. 3:25 Okay. Great. Shobhit, how about you? 3:27 No, not at all. 3:28 Okay. And so, uh, Sophie, how about you? 3:31 No, sorry. Also not for me. Okay, great. 3:33 Well, we have unanimity on this. 3:35 Uh, I guess maybe a good place to start is maybe, Shobhit, 3:38 I'll start with you.
3:39 Is-- why is Perplexity doing this? And is this a real bid? 3:43 Are they really trying to buy Chrome? 3:45 There are about 3.5 billion users of Google Chrome; 3:47 a lot of us have been on Google Chrome for years and decades. 3:51 So there's a stickiness factor to it. 3:53 So I think the intention here is, uh, 3:56 the actual number is not as relevant 3:59 as the fact that we are starting to line up 4:02 actual bidders for part of Chrome 4:04 to move the conversation forward. 4:07 Uh, I've been a very active power 4:08 user of, uh, Perplexity's own browser, Comet. 4:12 I think we're all heading in the right direction 4:14 of having browsers become more and more intelligent. 4:17 Uh, Microsoft did the same thing with their Edge browser 4:19 by adding a Copilot to it, and so on and so forth. 4:21 So I think over time you will start to move 4:24 into more intelligent ways of, uh, 4:27 looking at various websites and how we interact with them. 4:29 But the dollar amount itself is not as relevant. 4:32 There have been multiple valuations of Chrome as a unit 4:35 which place it at $50 billion plus. 4:38 There are some that even call it a $1 trillion property for Google, right? 4:42 But the whole intention is: this is so critical 4:44 to Google and its whole ecosystem 4:47 that this is just trying to drive public opinion 4:49 that, hey, shouldn't we be splitting it up? 4:51 Yeah. For sure. Sophie, maybe I'll turn to you. 4:53 I mean, when you responded to my question, I don't want to put words in your mouth, 4:56 but you were like, no. Hell, no. 4:58 If I ran Google, I would definitely not sell Chrome. 5:00 Why shouldn't Google sell Chrome? 5:02 I mean, it's a lot of money, isn't it? 5:03 Like, it's a lot of money. 5:05 Yeah, definitely. 5:06 And to Shobhit's point, it's kind of an opener, right? 5:09 And it pulls a lot of attention to 5:12 both of the companies.
5:14 Um, but I think what I see is definitely 5:17 that more and more of these AI 5:20 applications are meeting the users where they are. 5:23 And this is definitely, 5:25 yeah, also from Perplexity's point of view, a very interesting 5:29 consideration. 5:31 Um, also, they originated very much from search, 5:35 and they distinguish themselves from other applications 5:39 with that. 5:40 So I can definitely understand why Perplexity is interested. 5:45 But I think, from Google's perspective, 5:48 Chrome is one of the big, 5:50 um, signboards for them 5:55 to reach a lot of, uh, users 5:57 where they are as well. 6:00 So that is, um, yeah, I think for Google 6:03 still something to cherish and keep. 6:06 So, Abraham, so far we've been talking a little bit about, like, 6:08 $34.5 billion. It's a lot of money. 6:12 You know, maybe it's not a real number. Maybe it's an opening bid. 6:14 Maybe we could talk a little bit about product and technology here. 6:18 Um, which is, obviously Perplexity with the Comet browser is trying to say 6:22 that the future of AI is going to be really in the browser. Right? 6:25 Um, and certainly a purchase of Chrome or an attempted purchase 6:28 of Chrome only emphasizes that more, right? 6:30 Like, which is: I'm an AI company, 6:31 I'm going to try to buy 6:33 one of the biggest browsers in the world. 6:35 Um, it's all really kind of emphasizing this point that they think 6:38 that the form factor for AI in the future 6:41 is going to be the browser, basically. Do you buy that? 6:44 I mean, like there's almost a view, which is maybe the browser is, like, toast, like 6:47 we won't really have browsers in the future. 6:49 So curious about what you think about, like, the browser 6:52 really being the kind of core platform for where AI is going? 6:55 Yeah, I think the browser is still kind of your first entry point 6:58 into a lot of, you know, tools, applications.
7:00 So it's still kind of your first contact point in terms of using the internet. 7:03 If you want to find something, you typically go to a browser. 7:06 So AI-based search functionality is really the conduit 7:09 in which I think a lot of these tools and technologies 7:11 and applications are still going to be accessed. 7:14 Um, in terms of the actual bid, 7:16 I think this is more of, like, a marketing ploy for Perplexity. 7:20 So, I mean, in the article you shared, 7:23 one, the valuation of Perplexity is 7:26 half of what they've offered as part of the actual bid. 7:29 So, you know, realistically speaking, 7:31 they say they have VCs backed up to, 7:33 you know, cover the rest of the cost. 7:34 But, you know, 7:36 that's a lot of money to make up for. 7:38 Also, it was interesting that they gave about a week, 7:41 I saw, as part of the letter; 7:43 there was an exploding offer, which is like, 7:45 yeah, we kind of appreciate how audacious it was. Just ridiculous. 7:48 You know, a $34 billion offer, 7:50 you know, twice your valuation, 7:52 and you give to the end of Friday to be able to have a response. 7:55 So I think this is great. 7:57 I think they did the same thing with TikTok as well, 7:59 in terms of potentially making a bid for it. 8:02 This is great in terms of getting Comet out there, getting people to, 8:05 you know, clickbait on, you know, what is Perplexity? 8:08 Why are they trying to, you know, purchase Chrome? 8:11 Um, and really just gravitating towards that, you know, 8:14 free marketing for Perplexity and Comet, their browser. 8:17 Also, Google, you know, they lost their antitrust case. 8:20 So in a worst-case scenario 8:22 they have to fully divest from Chrome. 8:24 So I think this really sets the price point, 8:27 the entry price point, for what Chrome could potentially be worth. 8:30 This is the first, like, you know, public pricing.
8:33 Um, and also I think it establishes Perplexity, 8:36 or at least in some type of mindshare, Perplexity 8:39 as a potential, you know, next up for, 8:42 you know, ubiquitous search engine. So. 8:44 I'll make two quick comments. 8:45 I think there was just, 8:47 just the fact that you said this audacious bid: on LinkedIn 8:50 I've had multiple small startup 8:53 CEOs just make audacious bids. 8:55 $10 billion to buy 8:57 Perplexity, right? 8:59 It just doesn't matter at this point, right? 9:01 People are just making these audacious bids. 9:03 All of this very hypothetical stuff just shows you where we are 9:05 with the Silicon Valley hype cycle. 9:07 But I think, from my perspective, 9:10 my biggest use of my Comet 9:13 Perplexity browser has actually been in enterprise workflows, 9:17 versus the way all the discussions 9:19 so far have focused on consumer, 9:21 and the consumer interface is actually going to change a lot faster than enterprise, right? 9:25 We're going to move to mobile 9:26 and voice and things of that nature much faster. 9:28 But if I'm logged into my SaaS 9:30 tools that I use for my day-to-day work, it could be Salesforce, 9:33 it could be SAP or Workday or whatever else, right? 9:36 Those workflows, it has been terrible 9:39 trying to get that automation 9:41 embedded into those SaaS tools, right? 9:43 And so far, we've been waiting for 9:45 the SaaS providers to provide us some 9:47 AI agents that'll go automate some workflows. 9:50 And that is not consistent across all the SaaS vendors; some of the bigger ones, 9:53 obviously deep-pocketed, can automate faster. 9:55 But this is buying a browser 9:58 that has AI baked in and has agents that can act on your behalf. 10:02 Now all of a sudden, I'm able to go into 10:04 my SaaS tools, like the Salesforces of the world, 10:06 to go automate the workflows that I have to do, 10:08 because I have an agent sitting on the right-hand side; I can just describe it.
So you'll be surprised how quick I'm doing 10:13 my expenses right now in SAP Concur, right? 10:16 So that's my killer use case. 10:18 I mean, I think enterprises will have more separation anxiety from the browser 10:23 than we in our individual personal lives. 10:25 That's super interesting. Yeah. And I really didn't 10:27 think about that, because I think I was thinking about it very much, Shobhit, 10:29 as a consumer, and I say, well, it is true, 10:32 I'm using my, like, desktop Claude, my desktop, 10:35 uh, ChatGPT, 10:37 more and more in a way that's actually substituting for the browser. 10:40 But I almost love the idea that actually this is not a consumer thing, right? 10:43 Really, like, the argument for the browser 10:45 being the key platform for AI is an enterprise thing, 10:47 which is, like, very, very, very interesting. 10:50 I like that a lot. 10:51 Um, Sophie, any final thoughts on this? 10:54 Um, you know, I'm kind of curious about, like, where you think this all goes. 10:57 And I think, you know, ultimately, 10:59 I think kind of the question is, um, 11:01 where is kind of Google in all this, right? 11:04 Like, what are they going to do next? 11:06 How do they play the game? 11:08 Um, you know, I'm curious about your kind of thoughts on that. 11:11 Yeah. No, I'll definitely say something on that. 11:14 And also, to Shobhit's point, 11:17 the competition on 11:19 what is then the key platform for entry, 11:22 I think that is something we will see 11:24 more and more, uh, things to come. 11:27 Where is, uh, the entry point for the enterprise users? 11:30 What will be that part for the future? 11:33 Uh, so where is Google heading to? 11:35 I think, uh, with this, 11:37 uh, this potential split-up of products, 11:42 they need to make up their mind as well 11:44 on where to focus. 11:47 Um, and, uh, I think their 11:49 business model is still very much 11:51 on the advertising part.
11:54 So I think, yeah, we will see a lot more 11:58 happening over the coming time, 12:00 and it will be more clear 12:03 where they are heading towards. 12:05 But this bidding will definitely speed things up, uh, as well. 12:14 I'm going to move us on to our next topic. 12:16 Um, I wanted to talk a little bit about sort of Grok 12:19 Imagine, but more generally about kind of, 12:22 like, the rise of generative, uh, video. 12:25 Um, and I think in some ways, Grok 12:27 Imagine is a really interesting feature for us to talk a little bit about. 12:31 Um, there's been, you know, Elon Musk, in his usual way, 12:34 I think, is, like, promoting it very, very aggressively. 12:36 And now on X you may have seen that 12:39 you can just sort of animate images by pressing and holding on them, 12:42 as the feature that they've been advertising. 12:44 But I actually really wanted to talk less, maybe, about Grok and more 12:47 about how Grok is pitching its video generation features. 12:51 So, uh, Elon Musk had this very interesting comment where he said, 12:54 look, Grok Imagine, our video generation technologies, 12:58 they're going to be like the new Vine, right? 13:00 Which is referring to this short-form video platform, 13:02 very popular in kind of, like, the old, 13:05 you know, classic era of Twitter. 13:07 Um, and I think it's really interesting 13:09 to think a little bit about, like, 13:11 where generative video goes, because I think certainly what, 13:14 you know, X seems to have in mind 13:16 is the idea that, you know, you watch TikTok, 13:19 um, you like short-form video. 13:21 Well, okay, well, we can just generate endless versions of that 13:24 now just through generative AI, i.e. 13:26 this is going to be a media feature, right? 13:29 This is where the future is going to go. 13:31 But I think a lot of people are also like, it's really expensive 13:33 to run video generation. Right.
13:36 Um, and so I think one of the things I want to ask is, like, 13:38 whether or not we think video generation 13:41 is going to be a consumer feature over time, or will really be more 13:44 of an enterprise feature with time, where, you know, 13:46 the main use cases are going to be Hollywood and video editors 13:49 and people who are power users of, like, the Adobe Creative Suite. 13:53 You know, if that's really where things are going, 13:55 or if we really do think that this is going to be sort of, 13:57 like, the media of the future. Shobhit, 13:58 you're already going off mute, so I'll let you just go. 14:00 Oh, yeah. 14:02 Um, I was at the AI4 conference. 14:04 I was giving a talk, uh, this week, and I got 14:06 into three conversations with actual media producers, 14:10 uh, with all the big different, 14:12 uh, banners. 14:14 And I had this particular conversation around digital, 14:18 all the creative, generative AI making videos and stuff. 14:21 One of the biggest hurdles 14:23 across the industry right now is IP. 14:26 The training content that has gone into these 14:29 video generation models and stuff prohibits you 14:33 from actually using it for anything that is commercially usable, 14:36 unless we can clearly articulate 14:38 what happens with the data that we do the training on, who owns the copyrights. 14:41 And Adobe has done a far better job than some of the 14:44 peers on training on actual, 14:46 uh, clean licensed data. 14:49 Unless we solve for that, this will not enter the enterprises. 14:53 Yeah, that's really interesting. Yeah. 14:54 I mean, I think there's a really good argument for one of the reasons why 14:57 the consumer kind of application of this technology 15:00 has legs is, 15:01 you know, it's a little easier from an IP standpoint. 15:04 The norms are a little bit more open. 15:07 Abraham, maybe a question for you is, 15:09 are you a TikTok user at all? 15:10 I am not, no. Okay. Got it.
15:12 Or do you enjoy short-form video, or are you like, 15:14 don't even want to touch it? 15:16 No. Well, I was part of the Vine generation. 15:18 Okay. So you were. Okay. All right. 15:20 So you remember. And I guess the question for you is, like, 15:22 is a computer-generated Vine, is, like, a generative 15:25 AI Vine the same experience? 15:27 Like, do you think eventually we will just have, like, 15:29 it's either TikTok 15:31 or, you know, Grok Imagine, like, that 15:33 these two are actually substitutable in some sense? 15:36 So, no, just because of the cultural context behind some of, like, again, 15:39 speaking specifically to the cultural context 15:42 behind a lot of the images or videos that were created, and then 15:45 the kind of pulling 15:47 the thread through the video and the actual, 15:50 your real-life experience, where if it's created on demand, it doesn't really tie well. 15:54 But I think that Grok Imagine is really kind of connecting, 15:57 you know, to that audience, 15:59 as, you know, most of the Twitter users, or 16:01 I guess X now, are within that Vine, 16:04 you know, previous life, that they grew up on Vine. 16:08 They grew up on these kinds of tools. 16:10 Um, truthfully, I echo a lot of the same sentiments 16:12 that Shobhit did in terms of where this is actually going to play out. 16:15 So, you know, short-form, obviously playful media generation, 16:18 I think that's where you're going to see a lot of the adoption. 16:21 You've already kind of seen a lot of the adoption where there's minimal moderation requirements. 16:25 Um, I think there's a big controversy 16:27 around deepfakes that needs to be answered 16:29 and really needs to be talked about a little bit more loudly in terms of, 16:32 you know, what we're allowed to create with these models, 16:34 and whether that's putting guardrails on the actual, 16:37 you know, APIs or, you know, 16:40 policing the output a little bit more, 16:41 in terms of what's found.
16:43 Um, but yeah, 16:45 in terms of having this move from a, 16:48 you know, C2C or B2C to, like, a B2B, 16:51 I think we're quite a ways from there. 16:54 There's going to be quite a bit of oversight and compliance and, you know, 16:58 brand-safety content guardrails that we have to really enable. 17:02 Um, but yeah, from the perspective of just playing around, like, 17:05 I see this as basically just another cool tool as opposed to 17:08 something that's really going to fundamentally change, 17:10 you know, enterprise, uh, use cases and business models. 17:14 Um, Sophie, uh, 17:16 one of my reflections of the AI era that I think is always 17:19 very funny is, like, you've built, like, the most complex, 17:22 advanced technologies, you know, known to humanity. 17:25 And then it's like, I really need you to format some JSON, 17:28 or, like, I really need you to clean up this code and remove whitespace. 17:31 Um, and I have a similar reaction to video, 17:34 which is, video generation is really, really resource-intensive. 17:37 It's really, really expensive to do. 17:40 Um, is this sustainable to offer as a consumer platform? 17:45 Because it just feels like, if you were to really price 17:47 the actual cost of generating these videos, 17:50 you know, you would be talking a subscription service which is, like, really, really expensive. 17:54 Um, so curious about, like, 17:56 if you think the dollars and cents even work out for offering this feature 17:59 as, like, a broad mass feature, 18:01 uh, for kind of playful content generation, like you're perhaps talking about. 18:04 Yeah. No. And I think, yeah, we've seen, since the launch of generative 18:09 AI, it's always this model, right? 18:12 It comes out, people can, uh, use it for free, 18:16 they get used to working with it, 18:19 and then it goes into, 18:21 uh, yeah, another form which is paid, 18:24 or it goes into new kinds of applications.
18:27 And in that sense, it's a similar launch that we see here. 18:31 But it's so convenient to use, and it's indeed, for 18:35 many generations, uh, also addictive. 18:39 And it, um, plays really to 18:41 the need of a lot of, um, 18:44 yeah, 18:45 what people like. 18:47 Um, what I think is truly missing 18:51 is the transparency around, yeah, 18:54 what is used, how is it used, and what is it consuming? 18:57 Right. Indeed, if you would be aware about how many resources 19:01 would really be needed to create something, 19:04 and make people conscious of it; that is a, 19:06 yeah, that is a missed chance. 19:08 That is also lacking with a lot of these applications: to make 19:12 your users 19:14 aware of what it is, 19:16 and then allowing them to make a conscious choice in, 19:20 um, in using this. 19:23 So I think it's a typical launch 19:25 of sort of a new, uh, thing. 19:27 Uh, and it 19:29 makes it very convenient for a lot of users 19:32 now to start, uh, thinking in more video-created content. 19:38 Yeah. And I think, actually, I don't know, Shobhit, you went off 19:40 mute; I don't know if you were going to ask the same question I was. 19:42 But, you know, I think both Abraham and Shobhit have referenced, 19:44 like, this technology is very fun, but we need to kind of, like, 19:47 make sure that it's, like, operated and deployed 19:50 with the right sort of ethical guidelines. 19:52 Um, and I'm kind of curious 19:54 if you have recommendations on that front. Right? 19:56 I mean, lots of companies are offering 19:58 generative AI video now. 20:00 Um, and I'm curious if, in your experience, 20:03 there are, you know, specific best practices, 20:05 things that you think people should kind of keep in mind, 20:08 particularly listeners of the show, 20:10 maybe deploying this technology, and would love to get kind of your thoughts 20:13 and tips on, you know, how people have been doing it 20:15 well.
So let me give you an actual example. 20:18 Um, back in April, I was at the Google Next event, 20:21 where we're massive partners with them. 20:23 Uh, we were at the Sphere in Vegas. 20:25 I'm not sure if you guys have gone to this massive sphere. 20:27 It's just absolutely a phenomenal experience to sit in the middle 20:30 and have this whole spherical environment around you. 20:33 And it was the launch of Wizard of Oz, 20:36 uh, that video. They've been partnering with the studio 20:39 to take that Wizard of Oz that was very small 20:42 and then scale it out on this huge mega 20:44 platform in 360 degrees around you. 20:46 Right. So this is working with the owners of the actual content. 20:51 And it's a very complex problem, right? 20:52 If you think about the fact that, hey, your screen has, say, 20:55 a few characters who come trotting in dancing and then they leave. 20:59 Now, if you're extrapolating it out on the whole sphere, 21:01 you need to have the characters actually show up somewhere else 21:04 and walk all the way through, some of these hiding behind a tree 21:07 that should be visible to you, and then you pan into that. 21:09 So it takes a lot to generate that kind of video. 21:12 But that was a phenomenal example of content owners working with these AI models, 21:18 with the right guardrails and guidelines, 21:20 what can and cannot be done, and extrapolating from there. 21:22 I think we need to do enough. Right now, 21:24 we're in that phase where these generative models are still learning. 21:27 They need a lot more direction on what is okay and what is not. 21:31 They need more training data that is good, approved, clean training data. 21:35 I think that particularly the phase that we are in right now 21:38 should be more constrained and should be a good partnership between content owners 21:42 and the AI model creators. 21:44 It will be. 21:46 I'm a little scared when we start to open these models up.
21:49 When they're not quite ready yet, they would not understand 21:52 what would be okay and what is not okay. 21:54 There's not been a good feedback loop from humans to even detect 21:57 things like if a particular video 21:59 or image is not suitable for 22:01 Facebook or Instagram and stuff like that. 22:04 It took us a long while to create the right filtering processes. 22:08 So Facebook as an example. Meta. 22:10 Uh, Yann LeCun had shared this 22:12 information back in the day: it used to take them 22:15 a lot of compute to figure out 22:17 even a quarter of the images 22:20 or posts and flag them ahead of time for governance. 22:23 Now, with the large language models, 22:25 92-94% of those images 22:27 that are not suitable are being flagged, 22:29 because the LLMs have been trained to have the right 22:31 guardrails and do this at scale. 22:33 So I think the technology is getting there. 22:35 We will need the right frameworks and guidelines, 22:38 and whose definition of guidelines is to be shared? 22:40 It may change between you and me. 22:42 What I'm okay looking at, or what I'm 22:44 okay with my kids looking at, is going to be very different. 22:46 So we'll need to have some personalization of those guardrails. 22:50 We need the technology to get a lot more mature, with the right training, 22:53 curated training and stuff like that, before 22:55 we start to open it out to mass production. 22:58 But I think this kind of a partnership, 22:59 like what Google did, is the right direction forward. 23:02 And now these are starting to get more and more open source as well. 23:04 Veo 3 was an amazing model. I loved working with it. 23:07 Now we have an open-source version of it as well 23:09 that people are attempting to get to. 23:10 So I think as we start to think about this open ecosystem, 23:14 people can see how these models are being trained, 23:16 what's the data coming in, and so on and so forth. 23:17 I think we'll progressively, as a community, do a lot better at this.
23:20Yeah. No, definitely. 23:22And I think especially if you want to scale this 23:24to enterprise use. Right. 23:26That should always start with a lot of education and enablement, and thinking about 23:31what are the values that we want to see back in 23:35the usage of this. 23:37And what I call that is transparency. That's one of the 23:41yeah, the big guardrails that would help a lot: 23:45if people know 23:47what models are used, 23:49and also, in the whole IP discussion, 23:51what happens then, 23:53and also make that clear, 23:56and also have the guidance 23:58to publish that together with the video. 24:01Then we have a lot more options 24:05to discuss, and alternatives. 24:07And people can also think about alternatives then. 24:10Right. Because I think that's 24:12a step into maturity also. Right. 24:15And that's, uh, yeah, 24:16it's interesting; the idea is that as the technology matures, 24:19there are just stages that you move through 24:22and problems you need to solve 24:24as you go. 24:28I'm going to move us on 24:29to our final topic here of the day. 24:32Um, maybe the obvious topic to talk about, 24:35which is GPT-5. 24:36Uh, it has been, of course, dominating headlines all across my social media. 24:40I can't get away from GPT-5. 24:42Um, and I know last week we actually did our kind of, 24:45like, breaking news episode for quick takes. 24:48Um, but the pace of AI moves so quickly 24:51that I think it's always good to revisit a week later, 24:53now that the dust has cleared more, 24:56to talk a little bit about 24:57how we're feeling about GPT-5, 25:00what we thought the results of the launch were. 25:02Was this a win or a loss for OpenAI? 25:04There's a lot to get into here. 25:06Um, Shobhit, maybe I'll kick it to you first.
25:08Um, I think one provocation I want to use to start 25:11this conversation: there was a post by Gary 25:14Marcus, known AI critic and skeptic, 25:17who basically used the opportunity to declare victory, to basically say, look, 25:21GPT-5 shows that we are on a plateau for AI 25:25and that the current paradigm is not going to work. 25:28I'm smart. Everybody else is dumb. 25:30Um, and, uh, and that's the end of the story. 25:33Um, and so maybe I'll just ask you the question directly, 25:35you know, a week or so on. 25:37What's your feeling about GPT-5? 25:38Is it an indicator that the current paradigm of doing 25:41AI really is hitting a plateau? 25:44Um, and if not, why not? 25:46So I'm, I'm pretty, uh, 25:49like, I truly believe that we are making 25:51massive progress every week with these models. 25:54And GPT-5 was one of many; 25:56even Claude and others have done some amazing work. Gemini. 25:58They've all been scaling up intelligence very, very quickly, rapidly. 26:02The cost of compute is coming down, 26:04plummeting. Access to AI has been very good. Right? 26:07So I think overall, having access to such intelligence. 26:11I think this whole GPT-5 26:14launch was more of OpenAI 26:16pivoting into the super app domain, right? 26:19They would like to have one central router, one central point of entry, 26:23and then at the back end, you're exposing all kinds of agents 26:25and smaller and bigger models and so on and so forth. Right. 26:27So I think taking the friction away from the end user 26:30of having to pick a model and stuff like that, obviously it helps you 26:33with the economics of scaling this model, 26:36but I think this starts to open up 26:38some very deep thinking models to the 700 million users 26:42who come to ChatGPT every day. Right. 26:44So that's insane access to intelligence that's available. 26:48So I think we're definitely in the right direction going forward.
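The "central router" idea Shobhit describes can be pictured as a single entry point that scores each request and dispatches it to a cheaper fast model or a deeper reasoning model. The sketch below is purely illustrative: the model names and the difficulty heuristic are assumptions for demonstration, not OpenAI's actual routing logic.

```python
# Toy sketch of a model router: one entry point, tiered back-end models.
# "fast-small-model" and "deep-reasoning-model" are made-up tier names.

def estimate_difficulty(prompt: str) -> float:
    """Crude difficulty score: longer prompts and reasoning keywords score higher."""
    keywords = ("prove", "step by step", "debug", "derive", "plan")
    score = min(len(prompt) / 500, 1.0)
    score += 0.5 * sum(kw in prompt.lower() for kw in keywords)
    return min(score, 1.0)

def route(prompt: str, threshold: float = 0.4) -> str:
    """Return the model tier a router might dispatch this prompt to."""
    if estimate_difficulty(prompt) >= threshold:
        return "deep-reasoning-model"
    return "fast-small-model"

print(route("What's the capital of France?"))                    # fast-small-model
print(route("Prove step by step that sqrt(2) is irrational."))   # deep-reasoning-model
```

A production router would score requests with a learned classifier rather than keywords, but the economic point is the same: most traffic never needs the expensive model.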
26:51Now, just on pure intelligence. 26:53I think there are a few hurdles that we still have to cross 26:57before we truly get to start thinking 26:59about superintelligence, AGI, and so on and so forth. Right. 27:01So the trajectory that we're on. 27:02One of my peers was talking about 27:06the fact that if all progress in AI stops today, 27:10we will still 27:11have AI models able 27:13to do 40-50% of what humans do today. 27:16I kind of disagree with that statement. 27:18I believe that there are two hurdles. 27:20One was around the feedback loop on these training models, right? 27:24We do not have a good mechanism 27:26of providing feedback and adjusting the way the model works. 27:31And the second is around long-term memory, which is kind of related. 27:33But long-term memory of, hey, Shobhit said something up here, 27:37and now I want that to follow through in the way 27:39I wanted it to transfer over to me. 27:41I think that is something that both Google and ChatGPT are doing pretty well, 27:44and they're starting to make progress around that. 27:47Uh, my most commonly used feature of ChatGPT 27:51is temporary chat. 27:53Like, I do so many temporary chats 27:55because I don't want to start thinking and building 27:58long-term memory about the topic that I'm searching 28:00that is not going to be relevant for anything else that I'm going to be doing. 28:02It's a one-time search thing, right? 28:03So I think those two hurdles: feedback loop mechanisms and long-term memory. 28:07How do you manage that? And once we cross those, 28:10and that may require new algorithmic improvements, 28:13we can get to a point where we can get to superintelligence and AGI.
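The distinction Shobhit draws between temporary chats and long-term memory can be sketched as a tiny memory store that only persists facts from sessions not marked temporary. The class and method names below are illustrative assumptions, not any vendor's API.

```python
# Toy model of "temporary chat": facts from temporary sessions never
# reach long-term memory, so they can't color future conversations.

class MemoryStore:
    def __init__(self):
        self.long_term = []  # facts carried across future conversations

    def end_session(self, facts, temporary=False):
        # Temporary chats leave no trace in long-term memory.
        if not temporary:
            self.long_term.extend(facts)

store = MemoryStore()
store.end_session(["user prefers concise answers"])                      # remembered
store.end_session(["one-off search about visa rules"], temporary=True)   # forgotten
print(store.long_term)  # ['user prefers concise answers']
```

The hard research problems he names, feedback loops and selective retention, sit on top of exactly this kind of store: deciding what to write into `long_term` and when to let the user correct it.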
28:16But I genuinely loved the 28:18latest release of GPT-5, especially on the API 28:21side, the way I'm able to manage, and 28:23for enterprise use cases, be able 28:25to tune the parameters the way I wanted, 28:28how much I wanted it to think, and so on, so forth. 28:29This is also a good lesson, 28:32from a product management perspective: 28:34they launched a product, and one of the first things 28:36they said was that we're going to deprecate GPT-4o. 28:38And that's not how enterprises work. 28:40Our consumers have an attachment to the 28:42style and tone and things of that nature. 28:44You can't just do that when you have such a large, 28:47uh, large-scale application in the field. 28:50So there were definitely learnings about product management. 28:52But on the core raw power, 28:55I was very, very optimistic about the direction that we were all taking, 28:58once we solve for those two hurdles. 29:00Abraham, I definitely want to get to this last point. 29:02I mean, Shobhit was doing it in the way that he does 29:04great, which is basically like, here are feature comparisons, right? 29:07Here's the improvement, here's the delta. 29:10But that final comment I think is so, so interesting. 29:12So I think one of the things that's popped up in the news 29:15cycle of the last week is all these people who had 29:18what appears to be a pretty emotional relationship 29:22with the older models that are now being deprecated. 29:25Um, and I'm sort of interested in your take on that. Right. 29:29Because I think Shobhit is right. 29:30Like, playing around with it myself, 29:32in many respects this is just a better model. Right? 29:35This is actually a real improvement. 29:37Um, but they're kind of in this situation 29:39where it looks like people kind of don't care as much about that. 29:43And what are we supposed to do about that? Right.
29:45Like, do you imagine a world where these companies have to maintain these models 29:48indefinitely because people have built these relationships? 29:50Or, I don't know, I'm just, like, curious 29:52about getting your thoughts and takes on this kind of very fun, 29:54like, almost legacy attachment 29:56that people have to older models. 30:00Yeah. So I think it's less about, like, 30:02an emotional attachment to these models and more 30:05so just, um, understanding exactly 30:08the input-output that you're going to get. 30:10So it's really being able to reproduce 30:12the same experience time and time again, so that when you do, 30:15you know, inference your model, 30:17you know the right way to inference it 30:19to get what you want out of it. 30:20And anytime you replace a model or, you know, 30:22drop a new model, there is some prompt engineering. 30:25There's some playing around with it to really get familiar 30:27with it and to, you know, truly understand, 30:30okay, what is the best input for the output that I want 30:33to receive for this particular use case 30:35or this particular, you know, question. 30:38And you know, we've dealt with that here 30:40in terms of IBM's release of Granite models, 30:42where, you know, the chat template may shift or the model, 30:45you know, stylization or output may shift from version to version. 30:48So you do get a little bit of an outcry, 30:50if you will, from users outlining, 30:52you know, I don't have the same experience 30:54I did with this model, independent of whether it's better or not. 30:57And I'm more focused on that, given that, you know, 31:01we're really at the point of diminishing gains in performance; 31:05they don't really drive a ton of net new use cases for these models.
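The "chat template may shift" problem Abraham mentions is easy to see concretely: the same conversation rendered under two different template versions produces different raw prompt strings, which is why a workflow tuned against one model version can behave differently on the next. Both templates below are made-up examples, not Granite's or any vendor's actual formats.

```python
# Hypothetical illustration of a chat-template shift between model versions.
# Same conversation, different prompt bytes reaching the model.

def render_v1(messages):
    # older style: plain role prefixes
    return "\n".join(f"{m['role']}: {m['content']}" for m in messages)

def render_v2(messages):
    # newer style: special tokens around each turn
    return "".join(f"<|{m['role']}|>{m['content']}<|end|>" for m in messages)

chat = [{"role": "user", "content": "Summarize our Q3 report."}]
print(render_v1(chat) == render_v2(chat))  # False: same chat, different raw prompt
```

This is one reason teams keep regression suites of prompts and expected behaviors, and re-run them whenever they upgrade a model version.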
31:09I feel like a lot of the, you know, the net new abilities for models 31:13are really in 31:14what GPT-5 has kind of demonstrated: 31:17the software wrapped around the inferencing. 31:19So it's less about, hey, look, we've driven, 31:21you know, 0.1 on MMMU or World's 31:24Hardest Test or what have you. And it's really, okay, 31:26well, how do I become proficient 31:29at using this particular model as fast as possible? 31:32And I think that's where you're really seeing 31:34the users kind of say, well, look, why would you deprecate? 31:37You know, I built an entire, you know, 31:39agent or system or workflow with GPT-4o, o1, and now I can't anymore. 31:44And I have to basically reproduce this with GPT-5. 31:47And I think that's kind of saying that, like, this is almost 31:49a species of a long-standing problem, right? 31:51Which is building 31:53a stack on top of an old piece of software. Now you're updating. 31:55Yeah. And also, unfortunately or fortunately, OpenAI 31:58just gets criticized for absolutely everything it does. 32:00You're going to have a camp that thinks everything is wrong. 32:03You're going to have a camp that thinks, you know, everything is right with them. 32:07So I think there's a little bit of noise 32:09there that you've got to siphon through. 32:10And I think that's really what the crux of the issue is, 32:14at least from a scientific perspective or from, like, an actual use perspective. 32:18That's where the crux of the issue is, in my opinion. 32:20Sure. Yeah. Sophie, I've seen you grinning a couple of times 32:23as Shobhit and Abraham talk. 32:24I don't know if you've got a couple of reflections you want to get in. 32:27No, I think, Abraham, you played it out very well. 32:30And I think if you look at enterprise-scale adoption of this, right, 32:34this is not only a play for experienced IT users. 32:38You see, you have many users who are
32:40yeah, not so experienced, 32:42without a full IT background. 32:46So they have to get used to working with this new update. 32:51And they have been very successful with certain use cases. 32:54And then, um, yeah, 32:56they have to keep on doing that. 32:59And I think that again plays into the maturity. 33:02But also, as we do this at scale, 33:04with a lot of users across the enterprise, 33:08um, it is so important that you have the right governance 33:12for people in place, so that they can keep up 33:14when those new models come out 33:16and when new things come out. 33:18So I think, um, to Abraham's point, that is a difficulty. 33:23And it's also, I think, because there are a lot of new, inexperienced users 33:28that you have to take along 33:30and still adhere to the 33:32enterprise needs that were the first reason to start making those 33:37use cases for productivity or for quality increase. 33:41And I think that is something that we, 33:44yeah, have to take into account. 33:46And that's also where we are working 33:49on with our own companies 33:51and many others. And to OpenAI's credit, 33:54they released a very robust prompt engineering 33:57guide for GPT-5 shortly after the release. 34:00So I think they kind of noticed, or maybe they are fully aware of- 34:03No sympathy from Abraham. 34:07I'll make a quick comment on the emotional intelligence 34:11piece as well that Abraham was talking about. 34:13So I think we need to get to a point where, 34:16just like we do this with 34:17dating apps, where you're trying to find, like, 34:20how close you are to a particular person and so on, so forth. 34:23And we've tried to create a science out of dating here. Right. 34:25So I think that's going to happen 34:26with AI intelligence models and stuff like that as well.
34:29Most of us tend to be able 34:31to choose the voice of the avatar that you're talking to, 34:34uh, with Gemini or ChatGPT and others. 34:37I think there will be, there should be, some features 34:40that are added to make sure that I can tune 34:42the AI assistant to my style, to my tone, and so on, so forth, 34:47that gel better. 34:48And that would have a lot of 34:51implications on, 34:53uh, hey, I'm a Republican or a Democrat, so I need this kind of personality, 34:57which means that you are going to have an adverse reaction 35:00to news that comes in this direction, and so on and so forth. 35:03So I think over time, we'll start to be able 35:05to create a long-term memory, create 35:07a set of learnings and stuff that I want in this particular AI 35:11before it starts, uh, connecting with me. 35:13MIT recently released a new benchmark 35:16for emotional intelligence as well. 35:18And I think we'll see a lot more of these, 35:20in addition to all the math and stuff that you're doing. 35:22Are you emotionally intelligent? Do you understand? 35:24Do you actually truly understand what I really need, 35:28not the questions that are coming out of my mouth? Right? 35:30If my daughter asks me something, in my head I'm always trying to unpack 35:34what she really, really means, not what she's just asking for. 35:37So I think we'll get to a point as a community 35:39where emotional intelligence, 35:41tailoring that experience to our own style and so on, so forth, 35:45that's going to be a big part of how these AIs get rolled out, 35:48so people have less separation anxiety from, hey, my 4o 35:51and my new GPT-5 are different. 35:53I could have just said, I want my 4o personality, 35:56click a button, transfer it over to the new GPT-5, and I'm done.
36:00Yeah, I know, I think it definitely feels like 36:03we're going to push towards customization because of this problem. 36:05Because I think, ultimately, this is a really fascinating problem, 36:08because I think it's almost downstream of the fact that it is conversation. 36:12Right? Which is basically that, like, 36:14with Google, I have to say, I've never been like, 36:17oh, they updated Google, something's different. 36:19Or like, man, it just isn't the same 36:21interacting with the Google search bar, right? 36:23But with the conversation, you really kind of start to be like, oh, there's 36:26a person I'm interacting with; that's kind of the 36:29prior that you have. 36:31Um, and I think that's the interesting problem 36:33that we have with this interface: 36:35this interface comes with all this additional baggage 36:37that is hard to navigate. 36:40Um, I guess maybe in the last few minutes, Sophie, maybe I'll turn to you. 36:43We got a little bit off track with my original prompt, 36:46which I think is worth talking a little bit about, which is: 36:50should we read GPT-5 as an indication 36:53that things are plateauing or slowing down? 36:55Uh, because there has been, I feel like, a general vibe, 36:57I guess, you know, outside of any metrics, 36:59that this was an improvement, 37:02but not the huge quantum leap that we were promised. 37:06And I think there's one point of view, which is this is 37:08OpenAI overpromising and people are disappointed. 37:10The other view is, well, they tried really hard 37:13and now we're in this plateau. 37:15I'm curious if there's one 37:17theory that you ascribe to more. 37:19Um, well, I think the expectations, indeed. 37:22I mean, that's the marketing part, 37:24and that is there.
37:27Um, but I think it's also, 37:30I think there's still a giant leap 37:32to take from the current 37:34GPT use for consumer 37:37or citizen use towards enterprise use. 37:41And I think, um, with this step, 37:44they are definitely coming closer 37:46to more of that, 37:49with the memory 37:50and other features that are built in here. 37:54So I think, in that sense, it's 37:56definitely a step forward for OpenAI, 38:00um, coming closer to enterprise. 38:03Um, but yeah, 38:06there's a lot of expectation setting. 38:09So it's also good to understand, 38:12um, yeah, what kind of new users are they looking for? 38:16How do they want to grow further, 38:18and then relate back to those expectations again? 38:22Abraham, I think I'll give you the last word here. 38:25Um, I think the question I really wanted to end on was, 38:27you know, people have been looking forward to GPT-5 38:30for a very, very long time. 38:32And I feel a little bereft 38:34now that the announcement has happened. 38:35I'm like, what else am I looking forward to? 38:37What's the next big AI, you know, announcement to be had? 38:40Um, looking past GPT-5, 38:42are there particular things that you are excited about? 38:45Like, what's the thing that I should wake up being like, 38:47oh, is GPT-5 out yet? 38:49What's that thing? 38:50I think model releases are like a dopamine trip right now. 38:53Aha. Sure. Waiting for the next hit, GPT-6. 38:57Yeah. 38:58I mean, maybe to your earlier question in terms of, you know, 39:02have we hit a wall? 39:04I think scaling laws. 39:06No one can argue that the diminishing returns of scaling laws are there. 39:09You're starting to see kind of a congregation 39:12of certain models at the top with, 39:14you know, .1, .2 difference.
39:16So in terms of what comes next, I'm more excited about 39:20some of the things that we're doing on top of models. 39:23So I think what GPT-5 did is really cool. 39:26And, you know, obviously a plug for IBM here 39:28with Project M: 39:31being programmatic about how we inference models 39:33and really wrapping software around the inferencing. 39:36So kind of an extension of test-time compute, 39:38where you're throwing compute at inference. 39:40Now, if we were to throw some software programming around 39:42how we actually take the output, or route 39:45the inputs through different models to get 39:48an answer to a particular question, 39:50or, you know, put in policies or governance requirements 39:53as part of what you want to have your model perform, 39:55I think for me, that's really kind of cool, and I think I'm excited 39:58to see the next iteration of what that looks like, as opposed to 40:02what's the next LLM that's going to drop and what's the next benchmark 40:05that's going to be showcased as part of this LLM. Well, cool. 40:09This was a great episode. I'm glad we got into a number of different points. 40:12And yeah, the discussion went in a couple of different directions, 40:15but I think we hit on some really, really good stuff here. 40:17So, uh, Abraham, Shobhit, uh, thanks for joining us as always. 40:20And, Sophie, we'll hope to have you back on MoE at some point. 40:23And thanks to all you listeners. 40:24If you enjoyed what you heard, you can get us on Apple Podcasts, 40:27Spotify and podcast platforms everywhere, 40:29and we'll see you next week on Mixture of Experts.