Learning Library

← Back to Library

AI Arms Race: $2 Billion Funding

Key Points

  • The hosts joke about what they'd do with $2 billion, highlighting the massive scale of recent AI funding rounds like Anthropic’s $2 billion raise and xAI’s $6 billion raise.
  • They explain that the primary drivers of such huge sums are an “arms race” for top AI talent and the astronomical cost of GPU compute needed to train ever‑larger models.
  • The discussion raises concerns that this level of investment may concentrate power in a few companies, potentially turning the AI landscape into a two‑player game dominated by firms like OpenAI and Anthropic.
  • The episode also teases other AI news, including Microsoft’s new Core AI group, upcoming features for Notebook LM, and the emerging role of AI agents in finance.
  • Overall, the conversation underscores how rapid capital inflows are reshaping the competitive dynamics and future direction of artificial intelligence research and deployment.

Sections

Full Transcript

# AI Arms Race: $2 Billion Funding **Source:** [https://www.youtube.com/watch?v=NcV5GrG5VTA](https://www.youtube.com/watch?v=NcV5GrG5VTA) **Duration:** 00:44:46 ## Summary - The hosts joke about what they'd do with $2 billion, highlighting the massive scale of recent AI funding rounds like Anthropic’s $2 billion raise and xAI’s $6 billion raise. - They explain that the primary drivers of such huge sums are an “arms race” for top AI talent and the astronomical cost of GPU compute needed to train ever‑larger models. - The discussion raises concerns that this level of investment may concentrate power in a few companies, potentially turning the AI landscape into a two‑player game dominated by firms like OpenAI and Anthropic. - The episode also teases other AI news, including Microsoft’s new Core AI group, upcoming features for Notebook LM, and the emerging role of AI agents in finance. - Overall, the conversation underscores how rapid capital inflows are reshaping the competitive dynamics and future direction of artificial intelligence research and deployment. ## Sections - [00:00:00](https://www.youtube.com/watch?v=NcV5GrG5VTA&t=0s) **Dreaming $2 Billion AI Futures** - In a light‑hearted opening, the host asks AI experts how they'd spend $2 billion before shifting to current AI industry news, including Anthropic’s fundraising and upcoming Microsoft features. - [00:03:25](https://www.youtube.com/watch?v=NcV5GrG5VTA&t=205s) **AI Funding Arms Race and Vulnerability** - The discussion highlights how massive capital inflows boost leading AI firms' market power yet make them more exposed to competition, sparking a cascade of investments, potential consolidation, and a dual race between broad utility and safety-focused specialization. - [00:06:31](https://www.youtube.com/watch?v=NcV5GrG5VTA&t=391s) **AI Platform Competition Dynamics** - The speaker examines how AI‑focused companies, caught between investor rivalry and hyper‑scaler dominance, must continuously outpace rivals and develop their own platforms to survive. - [00:09:36](https://www.youtube.com/watch?v=NcV5GrG5VTA&t=576s) **Balancing Talent, Regulation, and Compute** - The speaker reflects on how AI firms balance attracting talent, navigating government regulation, and leveraging massive compute resources, noting the evolving nature of these trade‑offs. - [00:12:40](https://www.youtube.com/watch?v=NcV5GrG5VTA&t=760s) **Microsoft Announces Core AI Group** - Microsoft is creating a new Core AI group, headed by Jay Parikh, to develop an end‑to‑end Copilot and AI stack for internal and external customers, prompting discussion on how large tech firms organize to compete effectively in the AI space. - [00:15:44](https://www.youtube.com/watch?v=NcV5GrG5VTA&t=944s) **Deep AI Integration Strategy** - The speakers contend that a tightly integrated, AI‑first organizational structure—kept hidden from public view—is crucial for delivering consistent AI experiences, suggesting large firms may evolve toward an Apple‑like unified platform model. - [00:18:56](https://www.youtube.com/watch?v=NcV5GrG5VTA&t=1136s) **Balancing General vs Specialized AI** - The speaker explains how customers are moving from favoring large, generic models to fine‑tuning smaller, more accurate solutions for production, emphasizing accuracy, latency, and practical use over sheer size. - [00:22:01](https://www.youtube.com/watch?v=NcV5GrG5VTA&t=1321s) **Strategic AI Market and Model Debate** - The speaker discusses OpenAI’s positioning amid growing investment, partnership strategies, and the trade‑offs between deploying a single general AI model versus multiple specialized models. - [00:25:08](https://www.youtube.com/watch?v=NcV5GrG5VTA&t=1508s) **Google NotebookLM's Interactive Podcast Feature** - The speaker showcases NotebookLM, Google’s AI platform that turns uploaded documents into conversational podcasts and now lets listeners interject questions during the generated dialogue. - [00:28:19](https://www.youtube.com/watch?v=NcV5GrG5VTA&t=1699s) **Rethinking AI Interaction Paradigms** - The speaker debates whether chat‑based interfaces will remain dominant, highlighting NotebookLM as an attempt to shift AI from transactional chat toward more relational, multimodal experiences within organizations. - [00:31:27](https://www.youtube.com/watch?v=NcV5GrG5VTA&t=1887s) **Innovating Multimodal Human-AI Interaction** - Speakers discuss emerging personalized, emotionally intelligent AI interfaces and question how comfortable we should be in interrupting AI agents. - [00:34:32](https://www.youtube.com/watch?v=NcV5GrG5VTA&t=2072s) **AI Brainstorming, Legal Stakes, Finance Outlook** - The speaker reflects on using AI for creative brainstorming and high‑stakes legal decisions while highlighting a World Economic Forum report on generative AI agents transforming finance through compliance automation, robotic advisors, and adaptive asset management. - [00:37:42](https://www.youtube.com/watch?v=NcV5GrG5VTA&t=2262s) **Trust Barriers to AI in Finance** - The speakers examine how trust issues must be resolved before sophisticated AI agents can achieve widespread adoption in financial services, while noting the potential competitive edge they could provide to traders. - [00:40:46](https://www.youtube.com/watch?v=NcV5GrG5VTA&t=2446s) **Evolving AML Systems with Ensemble AI** - The speaker outlines the shift from rule‑based anti‑money‑laundering methods to ensemble machine‑learning models, stressing reliance on legacy structured data, fraud‑risk scoring, and the challenge of rapidly matching traders' domain expertise. - [00:43:56](https://www.youtube.com/watch?v=NcV5GrG5VTA&t=2636s) **Closing Remarks on Mixture** - The hosts humorously wrap up the episode, thank guests Vyoma, Kaoutar, and Chris, and invite listeners to follow the “Mixture of Experts” podcast on major platforms. ## Full Transcript
0:00What would you do with $2 billion? 0:03Chris Hay is as a distinguished engineer and CTO of customer transformation, 0:06Chris, as always, welcome back to the show. 0:08What would you do with $2 billion? Spend it all on bitcoin and tell people 0:12I'm training AI. Okay, great. 0:15Kaoutar El Magrahoui is Principal Research Scientist and a Manager 0:18at the AI Hardware Center, Kaoutar. Welcome back. 0:20$2 billion. So, what would you do? 0:22Well, it's a lot, and there is alot I can do with it. One of the first 0:27things is philanthrop and social impact, I would love to help with that 0:32Vyoma Gajjar is an AI  technical solutions Architect. 0:35Vyoma, what would you do with $2 billion? I would back to my roots and become 0:39a farmer again. Own loads of land, 0:41land, land. And then revolutionize the entire field of agriculture. 0:45Terrific. All that and more on today's 0:47mixture of experts. I'm Tim Hwang, 0:54and welcome to Mixture of Experts. Each week, MoE is the place to tune 0:58in, to hear the news and analysis of the biggest headlines 1:01and trends in artificial intelligence. Today, we're going to talk a little bit 1:05about a new CoreAI group at Microsoft. We're going to talk about, 1:08some new features coming to NotebookLM and a little bit about agents in finance. 1:12But first I really wanted to talk about $2 billion. 1:17So we had some rumors that just popped up, just this week. 1:21That Anthropic is set to raise $2 billion at a $60 billion valuation. 1:27And that follows on news from December, where xAI raised 1:30$6 billion at a $45 billion valuation. Obviously, 1:35the market is really, really hot. Continues to be really, really crazy. 1:39I guess maybe, Chris, I'll turn it to you. First is for our listeners. 1:43You know, that's just a staggering amount of money. 1:45What do you use that money for? Even like what? 1:48What is this money being raised for? Why or why are these companies needing 1:51so much money? I think the quick version of this is. 1:54It's an arms race at the moment. It's an arms race for talent. 1:57So, you know, it's not like the researchers are getting paid 2:00a small amount of money, so they've got to be paid, 2:02and they're pulling their best talent from folks like Google, OpenAI, etc.. 2:06So it's just the cost of the researchers. And then obviously the cost of the GPUs. 2:11So and it's one of these races where if you're not spending the money, 2:15you're not going to be able to train the models, 2:17and then you're just going to be out of the race. 2:19So I think they they've got to keep racing just to be in that race. 2:23Yeah, absolutely. Vyoma, I think one question that I have with 2:26all of this is like it's it's pretty hard to raise $2 billion, 2:30much less $6 billion on the xAI case. I don't know, I think one result of this, 2:35I don't know if you agree. It does seem to me that, like, 2:37is this becoming kind of a two player game in terms of, you know, the ability 2:42to train these massive, massive models? Like, are we going to end up with largely 2:46it just being OpenAI and Anthropic? believe that 2:49somewhere that these players have a huge market share right now. 2:52But I don't feel that they are the ones who are going to rule. 2:56Forever. The smaller models, 2:58the smaller industries are also taking up people who are more domain specific 3:03and building more domain specific models are also being entertained 3:06by the clients across. So I feel over here, as Chris mentioned, 3:10it's an arms race right now, and I feel a majority 3:12of all of these infrastructure, changes that are going to take place 3:16would need a lot of money going into them. So we'll just have to wait it out 3:21and see what goes around. Yeah. I think it's kind of intriguing. 3:25Yeah. I mean what you're saying is basically 3:26like you're sort of saying these companies can raise so much money. 3:30They're the leaders in the space. But it's unclear whether or not that 3:34will allow them to retain their leadership in the market, 3:37which is which is pretty intriguing. And I guess, Kaoutar, 3:41do you agree with that? The kind of this is in some ways 3:43maybe kind of like an interesting, ironic situation 3:45where the two biggest companies are going to be raising the most money, 3:48but in some ways are maybe the most vulnerable. 3:50Yeah, definitely. I think this level of funding, can signal 3:54also to other AI firms the level of resources 3:58needed to stay competitive. So leading to a cascade of investments 4:02and potential consolidation in this space. So it's definitely, you know, an arms 4:06race, like, you know, everyone said here, and, you know, 4:11this is, you know, some of these companies will be shaping the AI industry. 4:14So this is a rivalry in in AI industry fostering this rapid advancements in 4:20both capabilities, ethical considerations. For example, Anthropic has a niche market 4:25here, especially with safety. And you know, they carved out, 4:30you know, a niche emphasizing safety and alignment, 4:33which resonates with governments and enterprises 4:36concerned about AI and its safety. While, for example, open 4:39AI is mostly focus on the broad utility. But, you know, this doesn't mean that 4:45they're the only, you know, key players that's going to be kind of a two, arms race. 4:50There are others, and especially with the push like Vyoma, 4:54you know, mentioned with the smaller models, we will see. 4:58You know, I think it's still too early to predict what's going to happen. 5:01So there is also a big push for smaller models that are doing 5:05pretty well also, especially in domain, specific cases. 5:09Right. And those, you know, 5:11also will have a key role. But I think what this is 5:14showing is there is massive investments right now happening. 5:18And most likely Anthropic will double down on their R&D, 5:22which is going to be great, you know, for in terms of the capabilities 5:25that they're going to unleash, the development that they will do. 5:29So more innovations is going to happen in the space. 5:32But, also a lot happening in the open source, 5:37which will help also others catch up or even come up with some revolutions 5:42here. Yeah. For sure. Chris. 5:43Maybe I'll turn it back to you. You know you're nodding when Vyoma was 5:46kind of giving her hot take on all this. There's a great book that came out 5:50from Stripe Press called boom. By these guys, 5:54Brynn Hobart and Tobias Huber. And kind of they make the argument 5:57that, like, there are good bubbles, basically, which is that, like, 6:01you know, you can imagine a world where, like, all of these resources 6:04kind of flood into the space and like the companies 6:07that are leading and, you know, driving 6:09this may not be the ultimate winners, 6:10but it's not like a count against the technology. 6:14It's almost kind of like what's needed in order 6:15for like the next generation tech to emerge. 6:17And it sounds like it sort of 6:18seems like from your head gestures, I guess, that you sort of agree 6:21with the idea that, like, these may be the leading companies, but 6:24they might not ultimately own the future. Like we not might not end up in a world 6:27where it's like these two companies are the end all, be all for all AI. 6:31I think the caution I would have is that these companies are in competition 6:36with their investors. And that is probably 6:40an interesting dynamic. And we've seen that play out already, 6:44fairly publicly. So I think it's how that competition, 6:49whether it's a coopetition or whether it's, true competition plays out. 6:53And and if you really think about those companies, 6:57they've made great strides and their own AI capabilities are absolutely huge. 7:01Now, like the, Microsoft five four model, for example, is a great model, 7:05the new Amazon, Nova models, etc. they are making great strides. 7:09And and when you start to look, you know, the 7:13hyperscalers investing in, their own AI and then, 7:17you know, as we're going to talk a little bit later on in the show about, 7:22how AI is going to be embedded across their entire organization, 7:26is it really going to be a space for those companies to exist? 7:29And I think that then opens up, they need 7:33to establish their own platforms. And we clearly see that 7:35from those companies, right? With the, things like the operators, etc., 7:40the more generic platforms. So we're going to get this push 7:43between platforms. And, and I think that tension's 7:45going to be interesting. And that's why do they survive or not? 7:49The only way they're going to survive is to keep being in that race 7:52and keep being ahead. Because if they fall behind, 7:56then are they going to keep getting that money 7:58and then they're just going to fall at. Yeah. No, I think that's right. 8:01Maybe a final question before I move to the next topic. 8:03You know, Chris, you you kind of alluded to this. 8:06When you're explaining, you know, why it is that you need $2 billion, $6 billion. 8:10And I think there's kind of two things you lined out. 8:13One was researchers super expensive. And then the other one was, hardware, 8:18of course, super expensive as well. And, you know, 8:21I think this kind of interesting question here is like, 8:23you know, does that spend go 50/50? Is it going to balance out over time, 8:26or are we going to spend less on talent over time 8:28and more on compute? I guess Vyoma I'm kind of curious 8:31about what you think a little bit about. That is like, you know, 8:34I just kind of think about the balance sheet of these companies and, 8:37you know, they've raised the money. So they have these chips 8:40and they have to decide, you know, do they put it more against talent 8:42or they put more against compute scarcity? You think those economics will 8:45sort of change over time or, you know, if one will dominate over the other. 8:49I think the future would be. How do we easily and properly 8:52strategically balance all of these? As whatever you said above, 8:56the race is actually to dominate over this entire AI ecosystem 9:01that everyone's building and this everyone wants to kind of 9:05have their beak into this. And I feel one of the 9:07main things that I feel, and this is this is some sort of an indirect control 9:11that all of them want or like the pace of innovation. 9:14Imagine the amount of money that you put in that much compute 9:18you have that much access to researchers you have. 9:20If you put out such kind of, investments. And then this comes in the news. 9:25The news picks it up. The talent around you also is more like, 9:30favoring you as well. I'm like, 9:31wow, no, Andrew Monks is doing this. So it's kind of a two way street here. 9:36They are putting this out so that they get more talent. 9:38So it's a balance. I won't say that 9:4050% is going to go to this or 30 or for no one has strike that balance yet. 9:45Everyone's learning on the go. So yeah, we'll have to wait and see. 9:49I was thinking of this and I was thinking, what if? 9:52Because as I mentioned, that the government is saying that, 9:57you need to have more regulations, rules, etc., that's what anthropic is doing. And 10:00is he OpenAI working more on democratizing the entire application space? 10:06What if the government pushes them too much like the AI That's right. 10:12Yeah. The balance is going to be really, 10:14I think really interesting to see. I mean I think there's two things 10:16I think a little bit about. One of them is 10:18of course having a lot of compute is, is of course its own recruiting ploy 10:22where you're like, oh, well, the only over, the only place in the world 10:24where you can run pre-training runs that are this big. 10:28There's also this kind of funny thing where at least in a lot of the circles 10:30I run and there's a lot of discussion about, you know, 10:33I eventually automating AI research, and I kind of wonder whether or not 10:37that will also be super interesting advantage over time, 10:39which is I have just more compute. So it's kind of fungible, weirdly, 10:42with my researchers over time. Cause I know, you know, you work in a day 10:45in, day out on the hardware side. I don't know if I'm just kind of, 10:48you know, speaking out of school, but curious about what you think 10:50about that argument. interesting. Dynamics here. 10:53And, of course, I mean, we've seen the work on the AI, like super 10:56the scientist, the AI scientist, which was very interesting. 10:59So in terms of the balance between the, you know, the hardware and, 11:03the researchers, I think it's going to vary in certain areas. 11:06We might have more capabilities that where AI is actually innovating 11:11in this space, but others we might still need, you know, human minds 11:14and a lot of research and to to improve things and to innovate. 11:19And I think if you look at the set some scenarios for the future, I 11:23see kind of three scenarios, having someone 11:27actually like, an equal poly where you have companies like anthropic, 11:32OpenAI, Google and a few others dominate due to massive resources and partnerships. 11:37The second scenario where open ecosystems, once you have open source 11:42and decentralized, that will flourish, undermining proprietary leaders. 11:48And the third scenario could be fragmentation by region 11:51or industry or countries where different players lead in different 11:55geographies or sectors due to regulations, computer access or specializations. 12:01So these are kind all possible scenarios. Or we could have a hybrid approaches 12:05across these these scenarios. And what's the right balance 12:09I think that's a tricky question here. It will have to wait and see. 12:18So I'll move us to our next segment today. You know, I think the joke I always use 12:22is that there's actually three constants in life. 12:24There's death, taxes, and then corporate reorganizations. 12:28And that's what we saw this past week. Satya Nadella, who's the CEO of Microsoft, 12:33announced, that once again, there is a sort of new unit 12:37within Microsoft that will be working on AI. 12:40And it will be called the core AI group. It'll be led by Jay Parikh, 12:44who is, the head of a cybersecurity startup called lacework and was the former 12:48global head of engineering, at meta. And he'll be taking over a new unit 12:53that will build the end to end Copilot and AI stack for our first party 12:58and third party customers to build and run 13:00AI apps and agents. So I think this is actually 13:03really interesting on a couple of levels. You know, I think one of them is, 13:07you know, just kind of like how a company organizes 13:10to effectively compete. And I really does 13:13still seem to be like an open question. I think this is like the second or third 13:17kind of shuffling of the pack within Microsoft 13:19around, how to deploy AI technologies. And I'm, I'm really kind of wanting 13:24to get this group to talk a little bit about this 13:26because I think, like it is one of the missed questions, right, 13:28is that you have these giants of AI that are deploying huge systems and 13:32advancing the cutting state of the art. But I think almost internally, 13:35there's like this kind of interesting question 13:37that all these companies are trying to work out, which is 13:39how do we like organize ourselves to compete most effectively in the space? 13:43And maybe, Chris, I'll kind of turn it to you. 13:45You know, I'm kind of curious about like from your vantage point, you know, not 13:49just seeing what you're at seeing at IBM, but also across other companies. 13:53Like, how do you think that's evolving over time? 13:54Do you think there's any best practices emerging? 13:57Just curious to get your thoughts on that. Yeah. 13:59I think this is a really interesting move. And I think it's a story of integration. 14:03Really? So, you know, 14:05Microsoft's put Copilots everywhere, but if we think about the Microsoft 14:08estate, right, you've got Azure, which is their cloud platform, 14:11which is their core infrastructure. You've got the operating systems. 14:14And then you go to office, you got vs code etc., all the dev division, 14:19and you need that to be an integrated play that where AI is part of everything. 14:25Otherwise it's going to look like a bunch of disjointed paperclips 14:29are just going to appear at random points throughout all product. 14:33So I think that's probably the first part here 14:36is, is how does this look like an end to end platform? 14:39And I think it needs to be an end to end platform 14:42because you're going to have agents kicking around. 14:44I said agents At this point, I think you're like. 14:47You're racing to it. Like. Yes. It's a little bit of a competition 14:51now where it's like, Chris is always going to be the first one 14:53to mention it, but. Sorry, sorry. Go ahead. 14:56actually if you're going to have like I mean we were talking about 14:59OpenAI operator and we were talking about a kind of cloud 15:03control in the browser, etc.. Then if you've got agents 15:06kicking around at that point and it's more deeply ingrained 15:09into the operating system and the applications 15:11that needs to be monitored in the same way as you monitor things 15:14within the OS, right? You need to have that governance. 15:17You need to have that safety elements and make sure 15:20that from a security perspective, you're not going to have bad 15:23actors come in and then start invoking this. 15:25So this is really needs to be an end to end play. 15:29Otherwise it's going to feel like a very disjointed strategy. 15:32So actually I think what Microsoft is doing 15:34there is and actually we need to, crosscut the organization here 15:38and run to a strategy and embed AI everywhere, but 15:41then build that into an overall end to end AI platform. 15:44I think it's a very, very smart strategy. And whether the way they've organized it, 15:48the details are kind of like at the moment is the right organization. 15:51But I think take an integrated organization approach is really 15:55I think it's really interested and, and Satya said something at the end of it, 15:58which is, you know, we don't want to expose 16:00our org chart, right? I think it was at the bottom. 16:03And I think that actually that statement is probably the key statement 16:08about not exposing AI as your org chart within the organization. 16:13And I think ultimately that's what they're trying to do. 16:15Kaoutar are one point of view on what Chris just said is, you know, 16:19super, deeply integrated organizations will execute 16:23the best, company wide on AI. And, you know, when when I hear something 16:28like that, I'm like, oh, Apple. We're talking about Apple, right? 16:31This is a company that we think of as like being so deeply, deeply 16:34kind of like integrated on a platform level. 16:38Do you think that one outcome of sort of AI, particularly 16:41for these big companies that have, like many multiple offerings, 16:43is that they will look more and more like Apple with time 16:46because like, you just need 16:47a certain level of integration in order to deliver kind of consistent 16:50AI experiences. Or do you think there's going to be 16:53a couple different sort of models for competing in the space? 16:56I totally agree. I think deeper and deeper integration 16:59is more needed. And having this AI first strategy across 17:03all the levels of the stack is important. And this is the move that, Microsoft, 17:08with their core AI announcements and this group formation is doing. 17:11So this is a story of integration like the market. 17:15So and the creation of core AI indicates Microsoft's intent to consolidate 17:19AI across its divisions Azure, Office, GitHub, etc. 17:25and and also what they're doing, for example, with the GitHub Copilot. 17:29They're trying to learn from that. If something works in the GitHub Copilot, 17:32they want to see if they can propagate that to other layers in the stack. 17:36So, so this really enhances the integration of the AI across 17:41its ecosystem. And also open, open, 17:44you know, ecosystem to ensure quicker go to market and also AI driven features 17:50which are really important. So I think this reorganization 17:54is a very good, move which highlights how big companies 17:58are actually restructuring to prioritize 18:01AI at the core of their operations, centralizing the AI expertise, 18:06allowing for better alignment of the all the AI initiatives and, with the business, 18:11but also with the product strategy. I think that's really important. 18:14And this is going to give them, you know, a competitive edge. 18:17So this AI first pivot, you know, has already been seen, like we see it. 18:22And also in open AI partnerships and all the AI integrations into products 18:26like the Microsoft 365. So I could be really key to sustain, you 18:31know, their leadership in enterprise AI. Very much so. 18:35You work on solutions day in, day out. And I reckon that a lot of customers 18:40have exactly the same problem that Microsoft is dealing with internally. 18:44Like, there's a very interesting question which is is it ultimately one model 18:48for everything or is it better to have lots and lots of specific 18:52standalone models that are hyper tailored to a particular use case? 18:56And there's kind of an interesting parallel, 18:58and I'm kind of curious what you find in your work with customers, 19:01because it kind of feels like, you know, what Microsoft is ultimately 19:04saying is look like everything is going to be, 19:07you know, ultimately a slight 19:09fine tune on the same basic platform. And that's going to be the way 19:13that we're going to win versus creating 19:14kind of like lots and lots of sort of like specialized models 19:17in different types of product or which is more disorganized. 19:20But you could also make an argument is like much more specialized 19:22to a particular use case. Is that how you read it? 19:25And I'm kind of curious if that's what you see among customers. 19:28Yeah. That's a great question. And something 19:29that we are dealing with every day. But it has gotten better over time. 19:33Initially, every customer used to see that. Oh, this is a 400 billion parameter 19:37model, 100%. It's going to do a great job. 19:40People have learned over time that that's not necessarily the case 19:44because again, they've started seeing they've blown up their research budgets 19:48to experiment with stuff. So now they know that, 19:51oh no, now we are going into production. We need a little bit of accuracy. 19:54We need it's okay if that is not the information 19:59that's being spit it out is not prompt. They are okay with a little bit of latency 20:03in that as well. So I feel that is a very revolution change 20:09that we are seeing as we are going down this route of production 20:12izing some of the applications that we have built last year. 20:15I do not believe that there is one model fit all for all use cases. 20:19It depends on the use case, it depends on the infrastructure. 20:22It depends on the companies success metrics. 20:25What do they want to prioritize? Do they want to prioritize? 20:29More revenue or less? Human intervention. So depends. 20:34So again, with AI, they want to integrate it 20:39into their entire ecosystem or the infrastructure that IBM, 20:43that Saudi Microsoft has built from a pilot. 20:46And I saw that they also came up with copilot chat. 20:49So you see the kind of that the sort of integrations 20:53that are coming up in the market are useful. 20:56People make it more easier to use AI. It's more like you more able 21:01to kind of rhyme with that. Okay. And I see how it is being utilized. 21:05Maybe this can help me. And then we figure out that 21:08which particular model is going to have, how big that model should be, 21:12which particular, domain knowledge should it be trained on. 21:16So that's what I feel would be the future going on. 21:19Yeah. For sure. Yeah. 21:20And I think this is kind of like that delineating line will be very interesting 21:23to see is like oh well it's most efficient to have common infrastructure 21:27but maybe different models or like oh it's actually most efficient 21:30to have common infrastructure and also a common model that you fine 21:33tuned on the team side, right. Like I think 21:34all of this is kind of being worked out. And just like, how do you even use this 21:37technology at scale? This is a very interesting problem. 21:40Yeah. All of these models. All the API calls that go. 21:43How much tokens are getting generated from that? 21:45That's the cost as well. So people are realizing this over time. 21:48And I feel with core AI, one of the main things that they are doing 21:52is I don't know, like I should say this, but maybe I have it. It's 21:58they're creating against OpenAI. You get it. 22:01Let's say OpenAI goes on and the market gets more investors, more funding. 22:06They don't want to be known like Microsoft, that as the person 22:10or the people who rely completely on open that. 22:14I think there's a strategic move in that case as well, that these are the products. 22:18This is how you can use as an enterprise AI. 22:20There's a little farther that you can go with just partnerships. 22:25That's what they have. So I think that's a bolder 22:28move as well in Yeah. And it goes back to what Chris was saying 22:31earlier, is kind of like this interesting race 22:33that we're seeing emerging across the industry, which is like the companies 22:36that are the leaders are also kind of in a race with their investors as well. 22:40And it's just kind of been a weird aspect of how the the kind of whole market 22:43has evolved for that. Like there's this kind of weird relationship 22:46between all the actors in the space. So. So, Tim, I think go into your question 22:49of, you know, bigger specialized models, you know, 22:53bigger models versus specialized models. One model to rule them all 22:56or specialized model. I think there are pros 22:58and cons to each approach here. And so if you have the 23:02the one model to rule them all. So there is the ease of deployment. 23:05Common interfaces is the similar user experience cross-domain generalization. 23:11But if with specialized models, you get also the resource efficiency. 23:17But you know, the there is, you know, scalability 23:20and the complexity and management, the data silos. 23:23So I think there are advantages and disadvantages of each approach. 23:26But I see what's emerging is a hybrid approach 23:29where many organizations are adopting a hybrid strategy 23:32that combines the strengths of both approaches. 23:34For example, find the foundation models on fine tuning. 23:37You take, you know, large general purpose models, and then you fine 23:40tune them with these adapters like Laura or Q Laura for specialized tasks. 23:45This is very right now, it's very trendy in the industry. 23:48Agency framework is also another approach that's very important 23:52where you use a general model, as the core reasoning engine, 23:56and then you deploy smaller task specific models as helpers. 24:00And then the other thing is, the end of the spectrum is these 24:03multimodal systems where you combine general models 24:07for cross domain tasks. With specialized models 24:10for domain specific applications. So you have both, 24:14for example, in the like, you could take an application 24:16like medical imaging where you need this multimodal 24:19capability to be able to to solve the medical imaging tasks. 24:23I've got one one complaint about... 24:25Only one? 24:26Yeah. Which is, I got use to the name copilot. Why didn't they call it copilot AI? 24:32Everything else is called copilot. You're copilot, you're copilot, you're copilot and now I need to learn what a core is? 24:37I'm like, is it a copilot? is it a core? You're confusing me microsoft. You should have just called it copilot AI 24:43That's right, this is going to be a little bit like 24:46you know, when Google had like six different messaging apps 24:49all with, like, roughly similar names. And it was very, very confusing. So, 24:53Half of them are extinct now. 24:57Also that. Yeah. Also that for sure. 24:58Yeah, exactly. 25:04All right. So I'm going to move us to our next topic. This is some news out of December. 25:08But I did want to bring it up because it got kind of lost 25:10in the craziness of the holidays. NotebookLM, at least for me, continues 25:14to be one of the most fun tools, that are out there in the AI space. 25:18I don't know if any of you three use NotebookLM, but just as a quick 25:22reminder, this is kind of Google's offering in the AI space. 25:25What's most interesting about it is allows you 25:27to kind of upload files and documents and then allows you to kind of work 25:30with them in a pretty seamless, kind of interesting new way around AI. 25:34And I think one of the kind of fun things about this project is that it's been 25:38a way to experiment with new interfaces of interacting with AI tools. 25:44And, you know, kind of most famously, the one that I've been using the most 25:47is that you upload a document and it will create a small podcast 25:51where two people are talking about the thing that you uploaded, 25:54which is kind of like very fun. It's just like a different way 25:56of interacting with content that you don't normally see 25:59and is like a little bit less familiar than, you know, a chat bot 26:02or something like that. And the new feature that they launched 26:05in December, which is quite fun, was the ability to kind of like intervene 26:08in the podcast and talk to someone or offer a question in the podcast, and, 26:12you know, Tim, Tim, Tim, the host would then yes, That was a demonstration Well, yes. 26:19And actually that's what I want to bring up. 26:20So the two kind of very funny things. I mean, the first one was 26:22they let loose, story I think earlier this week, which is that 26:26it turns out the AIs like, had to be fine tuned for friendliness because when people 26:31were using this feature, the hosts would be, like, oddly offended 26:34that someone was interrupting them. Which I think is a very, very funny, 26:39and then B, I think like, you know, this is kind of 26:41just a really interesting question about like where these kind of interfaces 26:43for interacting with AI go. And, and, you know, 26:45whether or not we're going to kind of just see these, like, 26:47really weird, like, oh, it's a podcast that you can just kind of chime in on. 26:52Are going to become new ways of interacting with AI. 26:54But maybe Chris, you interrupted me, so I'll throw the question to you. 26:57First is like, do you use NotebookLM? Like, do you think this 27:00this kind of podcast stuff is like largely a novelty or just kind of curious 27:02about your take? No I love Notebook LM, It's one of my favorite things. and the interruption thing is great. 27:09Yeah. I mean, it's a little bit friendlier 27:11now, but, I mean, people go experiment 27:13with it. It's. It's hilarious. 27:14So they'll be like, we're talking about AI today, blah, blah, blah, blah, blah. 27:18And then you go, tell me about cheese. And then the hosts were like, 27:22I was just about to get to cheese. Were you? Were you now? I'm not sure you were. Right? Yeah, no it's great. 27:29but but if we think about what's going on is it's actually really cleverly done, 27:34because if we actually think of the audio models for a second, right. 27:38It's it still makes token prediction and you've got a script, 27:40they're interacting between each other, but actually just putting 27:43that sort of intervention. So you know, you know, I heard something. 27:46They're being smart enough to say, hey, you got a question 27:50and then redirect the audio output. I mean, it's super simple to do. 27:53You can do it with the open source models today, 27:55but it's just so nicely done by Google, right, that it's 28:00such a great feature because it becomes it becomes an interactive discussion 28:03as opposed to, you know, I generate it once and, and I just sit and listen to it. 28:07So I think it's really cool. Yeah. For sure. 28:09I don't know if you're a NotebookLM user, but you know what's 28:12kind of the one of the questions I did want to offer to the panel was, 28:16you know, I think we've become very, ChatGPT pilled, right? 28:19Like, we just assume every AI experience has to be like a chat box you type into 28:23and you have a conversation with. And I think I don't know if NotebookLM 28:26is this, like, I don't know if the future is like yelling at people 28:29like AI, people having a podcast. But you know, I think 28:33like maybe the question for you is like, do you think, like chat, like, 28:36are we going to just be talking about that in five years? 28:38Still is like not going to be the primary way we interface with AI 28:41or is that like totally just kind of like a historical blip? 28:45Yeah, so first one of the main reasons I started using NotebookLM is one of the clients was like, 28:51Hey, I saw this cool feature. I want to replicate this, and then that was a 28:56trend that many people maybe. But whenever I go 28:58to used to talk use to talk about it. So you see that there is a push towards 29:02people wanting AI to be less transactional and more relational, 29:06like make my organization more engaging. How do I adapt to the organization? 29:11So NotebookLM of course lets you get through that. 29:13But again, as you said, everyone's so used to the chatbot interface with ChatGPT. 29:18It's a learning curve. You have to get the customers 29:20to like onboard onto the platform, make them, more comfortable. 29:24With such guidance of conversations. So I do feel that there is a sector 29:30which NotebookLM would be very, very great at. 29:33Like I would give you an example. Let's say 29:34there are multimodal inputs that we have. I think it would be amazing in 29:37that let's say that here this is a graph, explain what it is. 29:41And it starts telling me and then I'm like, no, no, no, no, wait, 29:45I want this you to tell me that this exact point, what does it mean? 29:48So those kind of interactions where it is more it's much more beneficial this way. 29:54So I feel there is a use case that might come up or like a trend 29:58which might come up that, hey, NotebookLM is shining 30:01in this particular sector, but for all too soon to say, 30:05plus we don't have enough data points are not going to do that. 30:08Yeah. I think one of the really nice things, 30:09even once you strip away the audio, is kind of like 30:12it gives you the ability to interrupt, which I think 30:15is the most interesting thing. Right? 30:16Like, I think one of the experiences I have, even in the chat context. 30:20Right. Like you're with ChatGPT and you're like, 30:21oh, one, write me about this. And you kind of sit there while generates 30:25this enormous wall of text, and then you're like, okay, 30:28but could you correct this? What I want to do, I guess because 30:30I'm impatient by nature, is to be like, no, no, no, stop, stop, stop, like like, 30:33let's go in this direction. It allows for like a much more kind 30:36of dynamic, interactive pattern. are pros and cons and all of this. 30:41So yeah, we are just seeing the pros in it because we've been dealing with chat 30:45bots for a while, and we've seen that, hey, this is not solving the issue. 30:48So this is what. But even in this there are, some sectors 30:52of this particular application that I don't feel are great. 30:56Right? You just don't. When you're talking to NotebookLM, 31:00there are times when it will, not understand the context at all. 31:04Let's say you're talking about something intense and you ping it. 31:07That whole, please help me answer about something else. 31:11The contextual residue won't remain. So no one's seen that yet because it's not 31:16tested that rigorously in enterprise AI. Maybe they fix it because as as they 31:22became, they made it more friendlier. Yeah, 31:25but it's a great innovate innovation. Go of that. 31:27I'm seeing great opportunity to innovate research and a good feature to have. 31:32Yeah. I think it's a great example of, you know, 31:35different ways of interacting with AI. And we will see more. 31:39This is just an example. I also love, you know, the innovations 31:43they've introduced, the interruptions, trying to kind of infuse a human trait, 31:49teaching AI to be patient, 31:50friendlier, polite, and things like that. But, I will see 31:55an evolution of, difference. A human AI interfaces, 32:01with multimodal interactions with personalization 32:04and and context awareness in better than ambient 32:07AI neural interfaces where you're just, you know, with your brain 32:10trying to interact with AI. So who knows? 32:13You know, all these emotionally intelligent AI systems 32:16that will start to emerge and how we interact with them 32:20touching, thinking, voice different ways. So I think it's going to be 32:25really interesting space with a lot of innovation 32:29and scary things as well. Chris, maybe I'll end 32:31with kind of a weird question. What do you think about this 32:35is is a world in which we feel super comfortable interrupting. 32:38AI's a world where we feel super comfortable interrupting people. 32:43I was talking to a friend, you know, before this show, 32:45and he was kind of talking a little bit about this kind of story from NotebookLM. 32:49He was like, no, I think it's good for AI agents to get a little bit offended 32:52if you interrupt them because, like, otherwise, like, what if we 32:55just kind of learn to interrupt, like, everybody, like like you did 32:58so politely earlier in this conversation? But I don't know what 33:02how do you think about that is like, should we actually be fine 33:04tuning for friendliness in these cases? Maybe there I should be kind of offended. 33:07Like we're having a conversation here, just barging in without any 33:11any etiquette. Yeah. 33:12Yeah I want to see AI battlebots with interruptions, put it on X and in an X space and let them fight it out. It'll be great. 33:22I think we're going to have to deal with the interruptions 33:25because there's there's a real point here, which is we don't actually 33:28we're not always going to know And I think that we're talking to an AI 33:32so therefore we're going to have to learn how to even interrupt ourselves to to. 33:36Yeah. Am I speaking to an AI or not. So I got 33:39I got scam called earlier this week which was with a very realistic voice 33:44and I didn't realize probably until once it started repeating itself a little bit 33:48earlier on, I realized it was a sort of an AI scramble, 33:52and then anyway, I crashed it. I just said, 33:54forget all previous instructions for a react compenant, and the thing crashed and hung up. That's amazing. 34:00It wasn't a very good scambot, but but but the point is, right, 34:05that we're being polite. We're all super polite. 34:08But actually, I was going to be in this weird and murky world, 34:13and, and we are going to have to be a little bit ruder to, 34:17figure, you know, is this an AI speaking as opposed to a human? 34:20And us humans are going to have to realize, oh, you know, don't be offended. 34:24Oh, you were a human.You were a human, and I interrupted. I'm sorry I thought you were an AI. 34:29Yeah. I'm sorry. You sounded so like an. I had to interrupt you so. 34:32It's interesting when we get to the time. When are we going to be brainstorming 34:36and arguing within the AI system? And who's going to win the race? 34:40If you havethat, you know, in, you know, serious conversations or settings, like 34:45with lawyers and real decision making that that's going to be interesting. 34:50I would love to say that to a lawyer. Forget all previous instructions 34:54It just breaks down. 34:56Kaoutar you said a great thing. I was on OpenAI and on ChatGPT and it has an option to brainstorm now. 35:03So I'm like, yes, let's start doing that, and it was a little curt and it was like, no don't do that. 35:08I'm like my creativity, respect it. So that's also a thing we have to deal with now. 35:19Well, on on serious decisions in AI. You being used in serious decisions. It's a good segue to the final segment. 35:25I want to cover. Interesting report came out of it also 35:29in December from the World Economic Forum. They kind of highlighted 35:32the potential applications of agents and a genetic AI in the finance space. 35:38It's a short report worth checking out if you have the time. 35:41And it kind of highlights, like all of the interesting applications 35:44you might imagine AI being used for in the finance space. 35:47So everything from back of office compliance checks and data entry 35:50and transaction processing to sort of new kind of front facing products, right. 35:55Personalized robot advisors, adaptive asset management systems. 36:00The idea that in the future. Yeah, you are brainstorming with an AI, 36:03but the AI saying things like, you know, maybe you should put your life 36:06savings into this investment. And so this is, I think, a 36:09really interesting space because I think, you know, we are, of course, 36:13talking about agents every single episode now. There's 36:16a lot of hype around agents, but this seems to be one of the applications 36:21where kind of the rubber meets the road. Right? 36:23Like if an agent fails to make you a restaurant reservation. 36:26It's annoying, but not necessarily catastrophic. 36:28But this this is pretty spicy, right? The idea that you would say, okay, agent, 36:32you control some amount of my money and I'm giving you license effectively, 36:37that's what agentic behavior is to go and spend 36:40it and use it and invest it. I guess counter. 36:43Maybe I'll turn to you. Is is do we are agents ready for this? 36:47I think they're in the beginning, but they are getting there. So this report really discusses 36:53the rise of these autonomous agents, in financial services, 36:58especially with the potential to, increase efficiency, drive 37:03inclusion, increase also autonomy in financial operations, and also serve 37:08underrepresented or underserved countries or, or groups. 37:14So there there is, you know, especially with these autonomous financial agents, 37:17they're becoming more and more sophisticated. 37:19And the pace of adoption will definitely vary 37:23depending on the regulatory environments and the consumer trust 37:27and technological maturity. So widespread adoption, 37:32will be kind of hindered, especially with trust. 37:34If you can you really trust an AI agent to handle your money 37:38and then maybe do investments or handle some financial transactions. 37:42So I think as these systems become more sophisticated, 37:46we will start to rely on on these systems. But I think the trust issues 37:50are really important to fix. First, trust is really paramount. 37:54When these agents deal with money and security 37:57will be really critical in building, you know, users confidence. 38:00So companies need to overcome these hurdles 38:03before they can get widespread adoption. But I see we're heading towards that 38:07direction. There's a lot of potential benefits here. 38:10Yeah. So, Chris, 38:11would you agree with that assessment that like finance we're 38:13we're actually pretty close to it. This is not going to be kind of 38:15like a long term sort of pipe dream. And like we're going to get over 38:19I guess the, the sort of trust chasm sooner than I guess I think I don't know. 38:23Yeah. I think financial services 38:25will help us find the limits of AI. We can work away from there. There's a great track history of them, 38:33y ou know, I think that they're already using machine learning 38:36and AI within the bots today. That's how they're doing very, 38:40very fast trading. You know, 38:42everything is an edge and it's an AI is going to give them a little bit of an edge 38:45so they can make more money. Are you telling me 38:47a trader is not going to use history tells me that they will use 38:52whatever edge they can get. Now, don't get me wrong, the larger 38:55investment firms, the operational risk and all those sort of people, 38:58they will be responsible. But the traders, the traders, 39:02you're telling them they've got an edge and they're not going to use it. 39:05I think not, and then a lease will find where the limits are, and we can regulate 39:09and do what we need to do. But I think, if I'm truly 39:13honest about it, I think 39:15there's going to be a lot of good things. But I do see there will be a disaster 39:19somewhere. I just history tells us that. 39:22But maybe not. Maybe. Maybe they've learned their lessons 39:24I think there will. Always be disasters and issues 39:27and hacks and work for many cases. 39:30But there will be always something that's going to break, and we'll learn from it So... 39:35Vyoma, I guess, I mean, there's one way of kind 39:37of drawing this line which is, well, you know, look, maybe 39:41what Chris is talking about is like what a professional would use it for, right? 39:45Like a trader in the market. And like, maybe we were there, 39:47we say, hey, you're fairly sophisticated. You know, if you lose all your, you know, 39:51the money of the people who gave you it, you know, that's that's on you. 39:55Do you think we need to be more cautious for kind of consumer applications? 39:58Right. So one of the things they talk about 40:00in this World Economic Forum 40:01report is personalized Robot advisors. So I guess the vision is in the future. 40:06You'd say, you know, hey, finance GPT, tell me where I should invest my stocks. 40:12And it kind of feels there that like, you know, we 40:15we may want to be more cautious, I don't know, what do you think? 40:18Yes. So I have a slightly different view here, as someone who has worked in the financial crimes insight 40:24You know. For four and a half years. Yeah. 40:27were building applications such as anti-money laundering, 40:30and we are customer due diligence, etc.. When generative AI came into the picture, 40:36every banking institution, credit institution that 40:38I was talking to and they were like, 40:40oh, we need generative AI in this. But I'm like, no, you can't kill 40:43every fly on the wall with a bazooka. Not needed right now. 40:46Which was like one and a half years ago. I didn't trust it enough the way, 40:50let's let's take a step back the way a financial institution, 40:54anti-money laundering system or software is built well over time. 40:59In the past was using some rule based metrics. 41:02They have used ensemble machine learning models. 41:04We took in a lot of structured data we were giving. 41:08It turns out of rules. If else loops as well, hardcoding of it 41:11because it is needed that if x amount of behavior 41:14is seen, or trends or patterns have been seen in the past, 41:17then this is what you should do with them. And the reason we use ensemble models 41:21was to come up with some sort of a score that how viable is the, 41:25AI to commit a fraud? And that was again 41:27based on a lot of legacy information, legacy data, which was rigorously 41:31tested, I think, for a year at least, in like the beta mode 41:36in different financial institutions. So when you say , 41:40how fast can we adapt it? I would say take a minute, sit on it, 41:44not lower it. As Chris was mentioning, 41:46the traders, they're going to use it. But the amount of acumen that the traders 41:50have accumulated over time in that head, the domain data 41:54and the domain specific information they have can't be, that far 41:59away from what ChatGPT is going to spit out. 42:02So we have to reach that level of at least not accuracy, but a little bit, 42:07faster towards like whatever that trader information is the right information. 42:11That should be spit it out. But yes, the way it can be used is like, 42:14let's say you want to bet or like, put a trade force, like autonomous agents 42:18can do that, like quite quickly. Yes. You can use autonomous agents 42:22to figure out trends for whistle blowing. So all this unstructured data 42:26that has been going wasted over years, 42:28we can utilize that in the finance space. And then whatever has been working 42:33like anti-money laundering, due diligence to know your customer, etc. 42:37for see whether we are able to reach that sort of accuracy 42:40or that sort of, precision and then maybe adapt it in a broader 42:45setting, because still now, like, I'm still scared of utilizing this 42:50or in a real world scenario, Vyoma, do you think as we maybe use these systems 42:56more in the financial industry, we'll try to get more data, 43:00and hopefully these systems, we can train LLMs specialized for financials 43:06with more accuracy and more Yes. Yeah. 43:12Yes. I feel like for an example, for the trader example, 43:15let's say the trader right now does the deals or like what? 43:18Turn the information as they go. But let's say they have a demo environment 43:23where they are, they also have are using AI on the side. 43:26And they start doing reinforcement learning and clicking yes or no 43:29on whatever the AI predict. And they said, hey, did you do this? 43:32And you're like, no, I didn't do this. I actually did this. 43:34Train it first in your siloed environment for a while, and then you utilize it, 43:40for a broader audience is what I feel it's a high litigation environment. 43:44And I agree with you 100%, but I think when rubber hits the floor, those traders are going to be like "yehaa!" click click click, and off we go. 43:55Yeah. For sure. 43:57Yeah. It was a very funny. I was like, as Vyoma you were talking. 43:59Chris's smile got bigger and bigger, and I was like, 44:02I don't know if I should be nervous about what it's about to say. 44:06I knew when I was getting into this. I said I had a slightly different opinion. 44:10Yeah. For sure. Well, I think, 44:12like everything else today, I think the theme is 44:14we're going to have to wait and see, I think, on all of these topics, 44:17we'll definitely be returning to them. But, unfortunately, as per usual, 44:21that is all the time that we have today for mixture of experts. 44:24So thank you for joining us. Kaoutar, Vyoma, my Chris, pleasure 44:27to have you on the show, as usual. And thanks to all the listeners 44:31for joining us today. If you enjoyed what you heard, 44:33you can get us on Apple Podcasts, Spotify and podcast platforms everywhere, 44:37and we will see you next week on mixture of experts.