
White House AI Plan Meets IMO Milestone

Key Points

  • The White House unveiled an AI action plan that serves as a national strategy for artificial intelligence and a “starter pistol” for future congressional legislation.
  • Tim Hwang’s “Mixture of Experts” podcast gathers leading AI thinkers, including Kate Soule, Gabe Goodhart, Mihai Criveti, and policy expert Ryan Hagemann, to unpack the week’s most important AI news.
  • DeepMind and OpenAI have demonstrated AI systems that achieve gold-medal-level performance on the International Math Olympiad, placing them among the top 8-10% of high-school mathematicians worldwide.
  • The panel likens this IMO breakthrough to AlphaGo’s historic impact, recognizing it as a major benchmark shift while noting it may not yet translate into immediate real-world applications.
  • Upcoming segments of the show dive into the ChatGPT agent, Mihai’s MCP gateway project, and a deeper discussion of the newly released AI action plan with Ryan Hagemann.

**Source:** [https://www.youtube.com/watch?v=6RYrUyxXsYU](https://www.youtube.com/watch?v=6RYrUyxXsYU)
**Duration:** 00:43:22

## Sections

- [00:00:00](https://www.youtube.com/watch?v=6RYrUyxXsYU&t=0s) **White House AI Action Plan Overview** - The segment introduces the administration’s AI action plan as a national strategy and a catalyst for future legislation, while previewing the Mixture of Experts podcast discussion with AI experts.
- [00:03:22](https://www.youtube.com/watch?v=6RYrUyxXsYU&t=202s) **AI Tooling, Agentic Techniques, and IMO Context** - The speakers explain how modern AI augments language models with external tools such as calculators, illustrate the growing precision of agentic AI methods, and then shift to describing the prestige and difficulty of the International Math Olympiad, noting the guest’s modest mathematics background.
- [00:06:33](https://www.youtube.com/watch?v=6RYrUyxXsYU&t=393s) **Evolving AI from Prompts to Toolchains** - The speaker explains how AI development has shifted from simple, token-cheap prompting toward pre-built toolkits that define, verify, and execute tasks in parallel, dramatically improving result quality.
- [00:10:01](https://www.youtube.com/watch?v=6RYrUyxXsYU&t=601s) **AI Solves Olympiad Math in Real Time** - The speaker highlights how the specialized AlphaProof model, once requiring days per problem, now completes all six Olympiad questions within the contest’s 4.5-hour limit, demonstrating AI’s shift toward practical, time-constrained problem solving.
- [00:13:07](https://www.youtube.com/watch?v=6RYrUyxXsYU&t=787s) **Evaluating Niche AI Performance** - The speaker argues that AI systems addressing extremely specialized problems cannot be reliably assessed with traditional statistical methods, likening this shift in evaluation to the gradual acceptance of Wikipedia as a trustworthy source despite earlier skepticism.
- [00:16:22](https://www.youtube.com/watch?v=6RYrUyxXsYU&t=982s) **OpenAI Unveils ChatGPT Agent, Cost Concerns** - The speaker introduces OpenAI’s new ChatGPT agent, a Lux-tier, browser-like, agentic tool, and asks a colleague for quick impressions while highlighting the growing expense of the service, especially in Europe.
- [00:20:08](https://www.youtube.com/watch?v=6RYrUyxXsYU&t=1208s) **From PC Recipes to AI Agents** - The speakers liken early PC experimentation to today’s AI agents, emphasizing the minimalist UX, public trust concerns, and security hurdles that keep such tools consumer-focused rather than enterprise-ready.
- [00:23:24](https://www.youtube.com/watch?v=6RYrUyxXsYU&t=1404s) **Cautious Adoption and UX Leap** - The speaker explains limited trust in AI without final control, favors slow, experimental adoption, and emphasizes that improved user experience, rather than new technology, transformed existing tools like GPT-3 into a major breakthrough, highlighting the need for simple, tool-agnostic entry points.
- [00:27:49](https://www.youtube.com/watch?v=6RYrUyxXsYU&t=1669s) **Unified MCP Gateway for Multi-Server Federation** - The speakers explain how the open-source MCP gateway and registry let you combine disparate servers into a single virtual endpoint with centralized authentication, authorization, observability, plugin hooks, and protocol conversion, providing a lever to manage the complexity of real-world MCP deployments.
- [00:33:04](https://www.youtube.com/watch?v=6RYrUyxXsYU&t=1984s) **Scaling Trust in AI Agents** - The speakers examine how UI/UX design must manage agent failures, stemming from model limits, noisy internet data, and poor implementations, and advocate for middleware solutions to enhance scalability, maintenance, and security of emerging AI agents.
- [00:36:18](https://www.youtube.com/watch?v=6RYrUyxXsYU&t=2178s) **Trump AI Policy Blueprint Overview** - The speaker summarizes the administration’s newly released AI agenda, including three executive orders, roughly 135 agency actions across three pillars, and a forthcoming legislative push, framed as a “policy Super Bowl” for AI-focused officials.
- [00:39:38](https://www.youtube.com/watch?v=6RYrUyxXsYU&t=2378s) **Executive Order Streamlines Data Center Expansion** - The participants describe how a recent executive order simplifies regulatory approvals to accelerate energy grid capacity and data-center construction, indirectly benefiting IBM by supporting the increased power needs of future large-language-model workloads.
- [00:42:51](https://www.youtube.com/watch?v=6RYrUyxXsYU&t=2571s) **Ryan Returns for Ongoing Coverage** - The host invites Ryan back to discuss unfolding DC developments, thanks him for his participation, and concludes the episode with the usual podcast sign-off.

## Full Transcript
[0:00] The White House has released its AI action plan, which is sort of a national strategy for artificial intelligence. The short version is that what this document basically does is lay out the Trump administration's policy agenda as it relates to artificial intelligence. And part of the reason I think you haven't seen so much action in Congress is that this is also the starter pistol for legislative action in the future.

[0:25] >> All that and more on today's Mixture of Experts.

[0:28] [Music]

[0:33] I'm Tim Hwang and welcome to Mixture of Experts. Each week, MoE brings together a sharp group of thinkers working at the very cutting edge of artificial intelligence to discuss, debate, and distill the week's news. Today I'm joined by Kate Soule, director of technical product management for Granite; Gabe Goodhart, chief architect, AI open innovation; and Mihai Criveti, distinguished engineer, agentic AI. Later in the show, we're going to be joined by Ryan Hagemann, who's the global AI policy issue lead. Per usual, we're going to talk about ChatGPT agent and Mihai's MCP gateway project, and we're going to have Ryan on to talk about this newly released AI action plan. But first, I really want to start by talking about IMO, which is the International Math Olympiad.

[1:15] [Music]

[1:19] First, I wanted to start today by talking about IMO. IMO stands for the International Math Olympiad. It's the world's foremost annual mathematics competition for high school students. And we're talking about it today because both DeepMind and OpenAI have claimed that their systems achieved a gold standard competing in that competition, which would basically put their technology at a performance comparable to the top 8 to 10% of high school mathematicians. Now, this is a big competition.
Over 110 countries send teams to it each year. And I think some mathematicians actually took a step back when this news broke and said, you know, we think that this is a Lee Sedol moment. This is as big as AlphaGo from a number of years back. And I think that's the first round-the-horn question I wanted to prompt you guys with: is this a Lee Sedol moment? Is this a really big deal, or is this kind of just another benchmark? Gabe, maybe I'll start with you.

[2:18] >> Well, as a former mathematician who discovered that computers can do math a lot better than I can, and then turned to become a computer scientist, this is both extremely comforting and not surprising. But I think, seeing the depth of the logic and reading some of the techniques they used, it's a really cool piece of technology change. I don't know that it's going to flip any tables over today.

[2:45] >> Cool. Got it. Kate, what do you think?

[2:47] >> I mean, I think it is a similar moment to AlphaGo in that we've cracked a new benchmark, just like AlphaGo cracked the game of Go. Similar to AlphaGo, though, I don't think it's going to have a tremendous real-world tangible impact in the next couple of years. It's not like we saw AlphaGo win at Go and then all of a sudden we were impacted by AI in our daily lives every day with all these different value drivers and applications. And I think this is kind of similar in that this is an impressive challenge that was beaten. I don't think this is like, oh, we've now unlocked AGI and all these other applications are going to fall into place.

[3:27] >> Got it. Mihai, what do you think?

[3:28] >> I think it's really cool.
I think it's cool because it demonstrates a lot of the techniques we use in agentic AI as well. Things like computer use, building calculators, and using those functions to solve the problems, not just relying on what the large language models had inherently been trained with. And I think from that perspective, it just demonstrates growth, where agentic techniques are becoming more and more fine-tuned and more specific to the type of task or workload they're being applied to.

[3:56] >> Yeah, that's great. And I definitely want to get into that, but first, Gabe, maybe I'll go back to you, because I hadn't realized you had kind of a mathematics background. Do you want to just give our listeners a flavor of how big a deal the International Math Olympiad is? Is this pretty significant? How difficult is this test? Could I do this test? What are we talking about?

[4:14] >> To be fair, I don't think I have a good answer to that one, because frankly I wasn't that kind of mathematician. I was a liberal arts student who was good at math and needed to pick a major. So I started with math, and then I discovered computer science and said, this is much better. But no, I mean, I went far enough in math to really get to the point where there were some hard problems. But frankly, you know, the beautiful thing about math is that it is a well-defined ecosystem with strict rules. That's kind of the whole point of it. And the higher you go in math, the more you're exploring the boundaries, the edge cases of those rules, and sort of the esoterica of rules that you might not have thought of when you're looking at a simple arithmetic-based or geometry-based space.
So when you get into the chaotic dynamics or, you know, multivariate calculus and the like, and things that I never even got to, you're really starting to take those rules and go as far as you can. And I think that's one thing that's really fascinating about this: it's basically showing that with these additional techniques that Mihai referenced, and with some inference-time compute that, Kate, I know you've talked a lot about, they were really able to push the model's reasoning capabilities a lot further along the boundaries of this very nicely defined space of mathematics, to explore portions of it where typically you have to go pretty deep and, as a human, you have to actually have some good intuition about where you're exploring within those boundaries. So I think that's the really interesting thing here, and I think, Mihai, you said it really well. It's basically getting to the point where we're no longer just throwing a guess-and-hope type of depth at it, where every step in the chain is going to have some errors that are going to compound, and you might get lucky and prove that it's possible to get to a good answer. The fact that it could do it consistently enough to get a good score on this many difficult problems shows that the techniques are really starting to reach a point where they have, well, maybe not real-world, but consistent applicability, such that you could imagine applying them to a more difficult challenge in the real world and actually relying on the results without needing to check every step of the way.

[6:32] >> Yeah, for sure. And that's something I really want to get to. I think, Mihai, I want to pick up on a comment that you had in your response to the opening question.
You know, I think one of the most interesting things about AI is we say, oh, AI does this, AI does that, but we don't often talk about the fact that AI itself is sort of changing as we go. And you highlighted that there's a bunch of different techniques here. So do you want to give us a little bit of flavor of what's new, versus how we were maybe attacking these problems a few years back, when I guess we were more in stochastic parrot land? It seems like you're almost suggesting that there are more tools being used to get these types of results.

[7:09] >> I think what's changed is how we approach these problems. To your point, Gabe, we're no longer just throwing a simple prompt or even a simple chain of thought at a problem. We're spending time beforehand to build tools, to define how those tools will be used, either to generate the answer to a mathematical equation or to verify those results. And we're executing these tools in parallel, massively. So think about the problems you used to try to solve with AI before. It cost you a nickel and a couple of tokens to go ask a question. You're going to get a result. That result was probably terrible, and you'd say, "I'm going to work with it and try again and try again and try again." The approach here is massively parallel. You're firing off millions, tens of millions of tokens at hundreds of different tools, verifying them every step of the way.
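The generate-and-verify pattern Mihai describes here, sampling many candidate solutions in parallel and keeping only the ones that pass a cheap check, can be sketched in miniature. This is purely illustrative, not how DeepMind or OpenAI actually did it: `propose_solution` is a hypothetical stand-in for one expensive sampled reasoning chain, and the problem (integer roots of a quadratic) is chosen only because verification by substitution is trivial:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def propose_solution(problem, seed):
    """Stand-in for one sampled model run: here, just a seeded random guess."""
    return random.Random(seed).randint(-10, 10)

def verify(problem, candidate):
    """Cheap deterministic check: substitute the candidate back in.

    `problem` is (a, b, c), representing a*x**2 + b*x + c == 0.
    """
    a, b, c = problem
    return a * candidate**2 + b * candidate + c == 0

def solve_in_parallel(problem, n_samples=10_000):
    """Fire off many candidates at once; keep only the verified ones."""
    with ThreadPoolExecutor(max_workers=32) as pool:
        candidates = pool.map(lambda s: propose_solution(problem, s),
                              range(n_samples))
        return {c for c in candidates if verify(problem, c)}

# x**2 - 5x + 6 = 0 has integer roots 2 and 3.
print(solve_in_parallel((1, -5, 6)))
```

The cost trade-off Mihai raises falls out of `n_samples`: every extra sample costs tokens, but as long as verification is much cheaper than generation, more parallel attempts buy reliability rather than just expense.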
It's similar to how deep research works, and you're going to get really good results. And the question, even as a business, that you have to ask yourself is: is it more cost-effective to ask a question 10 times and get the wrong answer, or to ask it a million times in parallel, which is going to be more expensive, but you're going to get the correct answer? So finding that balance is going to be key. I don't think we can use the same approach for every problem, because that approach requires you to first engineer your tools. It requires you to design a system and execute a very expensive query, because I don't suspect this costs a dollar, or $10, or $100. It probably costs tens of thousands to ask all of the questions to solve this problem. But if we can get to a reasonable level within software development use cases, for example, where for one or two dollars you can do massive refactoring of a codebase, of a class, of a function, with the same level of tools, I think we'll be in good shape.

[8:56] >> Yeah. And it leads quite nicely, actually, to, I guess, Kate, a good comment you had, which was: look, the Lee Sedol moment is actually a great parallel, because, okay, you cracked Go, a thing that we didn't think we could do, but it's also just like, okay, then what do we do? This is obviously an impressive technical achievement. Is it kind of what you were saying a little bit earlier, that you feel like the actual practical impact is going to be limited in the near term? That what we're kind of seeing here, to Mihai's point, is not actually a practical approach to most of the problems that people are trying to use AI for today?
[9:32] >> Yeah, I'm skeptical that this is going to all of a sudden just totally change the calculus in how we approach all math problems and unsolved problems. I think it's going to be a really helpful tool. I think it's going to be incremental change, though, and not step change, in terms of where we see things going. The thing that I think is most interesting is that this isn't the first time Gemini, or Google at least, has been in the news around the Math Olympiad. Maybe a little over a year ago, if we look at AlphaProof, which is their specialized model for math, it achieved silver performance on the Olympiad. But what was different, I mean, they used a specialized model, but their model took days sometimes to solve a single question. And what's really exciting, and what I think is a bit of a breakthrough, is that they are able to solve these problems now, all of them, within the time limit that all of the other competitors have to observe, which I think is something like four and a half hours total for six questions. So as we talk about these general tools and techniques advancing, to Mihai's point, I think this is a really great demonstration of how those tools are starting to enable much bigger changes. I don't think it's something where, okay, math-specific, all of a sudden we're going to have this crazy breakthrough in the field; we're not going to solve math or whatever. Math is still going to be really complicated, and there's going to be a lot of really thorny, cool research problems to explore, hopefully faster with the help of AI.
But I think it's really exciting as we talk about how these techniques and capabilities are evolving to be much more practical and able to run in much more real time.

[11:14] >> I'd really like to see one of these benchmarks have a time limit like this one, but also a budget, and a time limit on the prep work you do ahead of time. So what can you do with three engineers and four hours to prepare? Kind of like Robot Wars, right? You only have four hours to write your tools. You need to get it done. You have a budget of 10 million tokens, or however many dollars, with whatever platform. Now let's see who's the best.

[11:41] >> Yeah. I want to do a cooking reality show where it's like, the secret ingredient today is this data set, and you have four hours to create a math-solving robot. I don't know, my tastes are pretty specific, but I'd watch that. I think the last thing I want to address before we move on to the next topic is, you know, I was thinking about IMO, and one way I think about IMO is that it is an eval, right? And one thing that occurred to me is that a lot of people spend a lot of time getting the International Math Olympiad test together each year, and it's a really expensive eval to build, but it is, as a result, kind of a gold standard in some ways, right? I think the reason DeepMind and OpenAI are here is that it seems to be a really strong test of the technology. I was talking with a friend recently about this and just wanted to test this group on it: it feels like as capabilities expand, trying to get good evals is going to get more and more expensive, right? You want to know whether or not it can do expert graduate-level math.
Well, you know, that's a little bit different from getting a bunch of simple arithmetic problems together. And so I guess I'm curious whether anyone's seeing that in their work, seeing evals become more and more expensive for us to produce, in a way that I think introduces some new problems, right? How much time do you need in order to get an expert-level evaluation? The number of humans you'd need to put together that kind of eval gets more and more limited. I guess, Gabe, you're smiling. I don't know if you want to respond to that?

[13:08] >> Well, I mean, I think it's a really interesting point you're pointing at here, which is just that the more the AIs try to tackle problems that are already specialized to a very small subset of humans, the harder it is to actually have a rigorous evaluation, because statistics stop really applying in standard Gaussian distributions of people who would solve this thing, right? If there's a small handful of humans who can solve this, that's not a very well-formulated statistical distribution to say, hey, look, you've got this score with this variance. That math just doesn't apply. And so, you know, this is going to be a very nonscientific answer, but I feel like we are going to hit a point with these models in general, and I've made this analogy before, but there was a time when we all probably were in our learning phases when it was verboten to cite a Wikipedia article in a research paper, because you just could not trust its validity, and eventually that just kind of eroded, right?
Everyone just decided, well, yes, technically there are some error bars in my mind if I see a Wikipedia citation, but I have pretty darn high confidence that I can go over to Wikipedia, read the article, and then maybe click through to its citations and say, "Yep, the article is accurate. Okay, we're fine." I can just skip that second part; I can probably trust the Wikipedia article. And I think we're going to start hitting that the higher up we go. And in some ways, to your question, the more specific and narrow the population of humans who could solve this becomes, the more we're going to have to start relying on just choosing to trust the model, right? It's built enough cred on other things that I trust that, in my mind, that translates to probably believing it's good at this thing. Because it just becomes much, much harder to apply rigorous statistics and evaluations to a much smaller population of data.

[15:06] >> Yeah, for sure. There'll almost be a hypothetical math, or "big if true" math, where you're like, well, the model has produced all these proofs and no one's really verified whether or not it's right, but if it were, then this is the next step.
[15:18] >> And there are plenty of techniques in math, and again, it's been a long time since my mathematician days, so I'm not going to use the right words here, but there are a lot of techniques where you do validation against one another, where neither is a source of truth but you have a way of cross-validating. And I imagine we'll get to that point, right, where you've got a model that is trying to tackle problems that only a small handful of folks can do, and so rather than trying to produce a rigorous benchmark for those tasks, you instead say: hey, expert who could do this eval, do this on your own and evaluate what the AI did. Now we've got a small sample size, and in some ways that is what the Olympiad is doing here. You've got probably a small subsample of expert judges who are able to actually qualify what these mathematicians are doing. And this is already kind of an example of that, where you're not using a standard benchmarking approach, but instead you're using expert judges to evaluate what the model has done. Those judges are theoretically fallible as well. But especially as you push further into the frontier, you just kind of have to trust the humans, and then the humans have to work with the AIs to trust each other.

[16:23] >> I'm going to move us on to our next topic. A lot to discuss here. I'm sure now that they've gotten gold, they'll have to find a new eval to do, so we'll keep tracking this into the next year. The second topic I want to get to is maybe the big product announcement of the week, which is that after a long period of speculation and rumors, OpenAI has finally released ChatGPT agent.
So this is a feature whereby you can ask things of the model and it has kind of full-on agentic behavior with its own little browser. It can do all sorts of things for you. My understanding is it's only available at the Lux tier. And I guess, Mihai, maybe I'll toss it to you, because I know you work with agents day in, day out. Have you played with it? Any impressions, strengths, weaknesses? Just curious about your quick, off-the-cuff review.

[17:14] >> You probably also know I have the Lux tier for both ChatGPT and Claude and all these things, and now I'm probably in trouble, because when I see the bill at the end of the month, it's getting quite expensive, especially in Europe. That said, I don't think this is necessarily a new thing. I think ChatGPT has had these kinds of agents internally, and many of the tools it was using were agentic in nature. For example, it used the code execution sandbox. If you say, "build me a diagram," it internally generated some Python code inside an execution sandbox, it used matplotlib, that kind of thing, and it then gave you that diagram. This just increases the number of tools it makes available. It makes them a lot more customizable. It also gives the same kind of tooling support that deep research used to have, with internet browsing and web browsing and that virtual computer-use idea. So I think from that perspective it's building on the same concepts it had before. But the market is pushing towards agentic, especially with Anthropic releasing the Model Context Protocol and everybody building agents in the open source community.
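The code execution tool Mihai describes, where the model emits Python that runs in isolation and only the output comes back, can be sketched as a subprocess wrapper. This is a minimal sketch only: a production sandbox like the one behind ChatGPT adds container or VM isolation, resource limits, and network restrictions, and `run_in_sandbox` is a hypothetical name, not a real API:

```python
import subprocess
import sys
import tempfile
from pathlib import Path

def run_in_sandbox(generated_code: str, timeout_s: float = 10.0) -> str:
    """Execute model-generated code in a separate interpreter process.

    The snippet runs in its own temporary working directory, so any
    artifacts it writes (say, a chart PNG) land there rather than in
    the host's filesystem.
    """
    with tempfile.TemporaryDirectory() as workdir:
        script = Path(workdir) / "snippet.py"
        script.write_text(generated_code)
        result = subprocess.run(
            [sys.executable, str(script)],
            capture_output=True, text=True,
            timeout=timeout_s, cwd=workdir,
        )
        if result.returncode != 0:
            # Return the traceback so the agent can read it and retry.
            return "ERROR:\n" + result.stderr
        return result.stdout

# Pretend the model produced this in response to "build me a diagram":
print(run_in_sandbox('print("diagram written to chart.png")'))
```

The error branch matters as much as the happy path: feeding the traceback back to the model is what turns a one-shot code generator into the iterative agentic loop being discussed here.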
I think they're seeing the need to give their platform the same first-tier experience for an agentic system.

[18:25] >> Yeah. Kate, do you agree with that? I mean, I guess a cynical view of what Mihai just said is: well, this is kind of an incremental improvement; really, we should see this as marketing more than anything else. Is that the right way of thinking about this?

[18:39] >> I definitely agree with that from a technology perspective, like an algorithmic perspective of what's going on. But I do think this is a tremendous leap forward from a UI and user-interaction perspective, because, at least from what I can tell, this seems to be the first major asynchronous workflow enablement between agents and users for OpenAI. A lot of their marketing materials, and I haven't played with it, but just reading about it and watching some demos, really heavily focus on: start a task, then close your laptop and walk away; the agent's going to run and do different things for you, and you can come back whenever it's done, having made more productive use of your time elsewhere. So they really seem to be heavily indexing on this kind of asynchronous deal, which is the first I've seen, and I think it's long overdue. We've been waiting for this for a while, so I'm really excited to see some of that start to come to fruition. I do wonder a lot, though: they talk about all this value of being able to walk away, but then they also say, don't worry, if the agent's going to do anything, it's going to ask you for permission; you're going to have to enter your credentials yourself.
So it's kind of like, okay, well, how much can it do on its own if I'm also not trusting it to go and do a bunch of stuff without me giving approval every step of the way? So I'm curious to see how some of that plays out. But I do think, from a UX and interaction perspective, it is a very interesting leap forward.

[20:07] >> Yeah. It reminds me, actually, of those stories from the early days of PCs, where people had these PCs and wondered, what do we do with this? And so for a period of time it was, oh, you can use it to keep recipes. There was clearly an effort to teach people what these things were for.

>> That's right. Yeah.

>> And, Kate, you're almost describing a very similar situation, where it's like: agents, okay, that's really cool. Now what? How am I supposed to use this? And it is kind of funny to me that the big UX part of this is actually that you can just walk away from your computer. The UX is no UX, I guess, in some sense. How far do you think that's going to go? Do you think the public is ready to trust these models in this way?

[20:50] >> I think it's going to be a really engaging and interesting consumer-facing tool. I do not see this being ready for any sort of enterprise deployment. I mean, I mentioned already the security, and that's just for trusting the agent with my OpenTable login to be able to make reservations where there's a non-refundable deposit. Those are very small stakes, depending on the restaurant, I guess.
But I think we're still really far off, and there's so much to work out from a security perspective before this gets into enterprise use.

>> It feels a little bit delicate, because, and I don't know if you'd agree with this, Kate, the two worlds are connected, right? You can imagine that not just OpenAI but a number of these companies don't get the consumer experience right, and then everybody's general impression is, well, if it can't even book a restaurant reservation, I'm definitely not going to use it for this. Do you think there's some risk that if we don't do this agent thing well on the consumer side, it almost closes off the path on the enterprise side?

>> I think the pattern we've seen emerge so far, and that will continue to hold, is: iterate and get the workflow right for consumers, flush out all the bugs and all the kinks, then bring it to enterprise. And I think OpenAI is following that exact playbook here. They're going to figure out the kinks, iterate, and evolve while the stakes are relatively low: planning an itinerary, minor purchases at the grocery store, that type of thing. Hopefully that will give them some experience of what could go wrong and when, so they can address it for when the risks are far greater. I don't know if their incentives will perfectly align with where they're trying to go; they've always strayed a little more toward AGI at all costs, rather than focusing on making really great enterprise-specific tools.
So I think there's going to be a little friction between those two priorities at OpenAI that other companies might not have; they might have a clearer alignment toward getting to enterprise readiness faster, even if it means going a little slower on the more general-purpose intelligence frontier. So I think a big question is: can other providers get to enterprise-ready faster?

>> Gabe, maybe I'll end with you. What's your trust level with agents? I don't know if you've played with the ChatGPT agent, but for any agent, what's the most important thing you would trust it with?

>> Great question. I am very trusting as long as there are no stakes, which is to say I'm not very trusting at all. So no, I am very happy to experiment, try things out, and generally use an agentic system any time it could accelerate what I'm doing, as long as I am the final arbiter of the output. That's just my comfort level with using these tools for my own personal use. Eventually, if and when they keep getting better at these things, I'll gradually step it up. But I'm a fairly slow adopter of things that just have magic behind them, and I suspect that's true of a lot of people who want to know how things work. But I do want to second what both Mihi and Kate said: on the one hand this is an incremental technology change, and on the other hand it is a major step function in UX. To me it brings to mind the difference between GPT-3 and ChatGPT.
Fundamentally, the technology was all there in GPT-3, and it was literally just the UX of the instruction tuning that went into ChatGPT that made it explode. So even though these agentic patterns of tool usage, and even long-term inference scaling with deep research, have all existed, and in fact the individual tools, the building blocks, have all been there, I think two ideas matter here: a single entry point that doesn't require the user to know which tools are appropriate for the task, and something that interacts more the way you would with a colleague, where you delegate a task, hand something off, wait for feedback, and potentially have a mechanism for interactive updating. Kate, to your point, I can imagine my phone pinging me to say, hey, my agent needs permission to do this, do you want to grant it? I'm happy to be interrupted for a quick context switch to say, oh yeah, this looks good, go ahead, keep going. I think the UX pattern is really going to change with this, toward a central agent entry point that hopefully will make these systems much more accessible to folks and help build trust in them.

>> All right, I'm going to move us on to our third segment of today. Mihi, since we've got you on the show, we want to give you the opportunity to plug your project, but just to quickly set up the context: we've talked a lot about MCP over the course of many shows.
And I think one of the reasons I like the topic is that there's this really fascinating question of how new standards emerge in the space and how adoption occurs in open technologies. So, Mihi, you've been working on a specific project, I understand, called MCP Gateway. Do you want to give our listeners a quick overview of what it is and why you think it's important?

>> Yeah, sure. First let me give you a bit of an idea of how the project started. We like to treat AI agents as insider threats. Every time an AI agent interacts with a system through a tool, we believe that's a potential insider threat, because it is an input: you're giving it text, that text goes to your tool, and if that tool just happily executes the input from the user, it can drop your database, delete a database record, or delete bits of your code. So we wanted a way to provide observability, guardrails, monitoring, security, authentication, authorization, even things like user impersonation, where it asks, hey, do you want to access this, and then acts on your behalf. We had been building a similar system for the last year, year and a half, but we hadn't done something very important: we didn't go open source with it and say, hey, we believe this is the standard for how an agent should interact with tools. Anthropic did, and they did a great job with the Model Context Protocol, releasing it as a standard way to decouple your AI agent from your tools. However, there are a couple of interesting things in the mix here. The protocol came out; there's already a fourth version of it; there was a draft; there are multiple implementations.
There are something like 15,000 open source servers out in the community, all implementing different standards, different versions, incomplete implementations of the MCP protocol. Some of them don't have things like authorization, authentication, accounting, and all the rest. So we created the MCP Gateway and Registry as an open source project, which first gives you the ability to federate multiple servers behind the same gateway. If you have, say, resources, prompts, and tools from multiple servers, you can combine them into a virtual server with its own authentication and authorization, with retry mechanisms, observability, monitoring, rate limits, health checks, and a plug-in system. You can plug in, for example, pre- and post-hooks for every operation: before a user input you could trigger Open Policy Agent or a PII filter, and after a specific output you can do the same. It's meant to be a centralized point that gives you control over your context, whether that's tools, resources, or prompts, but also a mechanism that lets you convert between different protocols. So if your tools aren't already written as an MCP server, maybe you have a REST API, you can connect that REST API to the gateway; it will turn it into an MCP server and give you the same control over it.

>> And so is it right to say, and I don't know if this is putting it too simply, that MCP is a standard, but we kind of know the world is going to be really, really messy when we actually put MCP into action? And what you're attempting to do is give people a lever for controlling that craziness, right?
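To make the federation and plug-in ideas described above concrete, here is a toy sketch in Python. It is not the actual MCP Gateway API; every class, method, and hook name below is invented for illustration.

```python
import re

class VirtualServer:
    """Toy 'virtual server': federated tools plus pre/post hooks."""

    def __init__(self):
        self.tools = {}        # tool name -> callable
        self.pre_hooks = []    # transform the input before the tool runs
        self.post_hooks = []   # transform the output after the tool runs

    def federate(self, server_tools):
        """Merge another server's tools into this virtual server."""
        self.tools.update(server_tools)

    def call(self, tool, text):
        for hook in self.pre_hooks:
            text = hook(text)             # e.g. a policy check or PII filter
        result = self.tools[tool](text)
        for hook in self.post_hooks:
            result = hook(result)
        return result

def redact_emails(text):
    """Toy PII filter: mask anything shaped like an email address."""
    return re.sub(r"\S+@\S+", "[redacted]", text)

gateway = VirtualServer()
gateway.federate({"echo": lambda t: f"echo: {t}"})   # one federated "server"
gateway.pre_hooks.append(redact_emails)

print(gateway.call("echo", "contact alice@example.com"))
# prints: echo: contact [redacted]
```

A real gateway adds authentication, rate limits, and health checks around the same choke point; the sketch only shows why a single entry point makes those concerns pluggable.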
Like, there's going to be so much variance that you'll want checks at every step. Is that the right way of thinking about it?

>> I think to some extent, yes. We also want to give you a point where you can plug in your plugins. You can add your own spin to control, or even change, the input and the output going to these MCP servers.

>> So, Gabe, you think a lot about open protocols, is my understanding. Do you want to respond to Mihi's project? I'm interested in your take against the backdrop of everything we know about open source and open tech. Is it just the right time? Do we see MCP Gateway-type projects in other domains? How does this all look situated in history?

>> Yeah. I think this is an excellent time for a project like this, because we're at the blossoming of a new open-source standard for a novel interaction pattern, which in this case is fetching context for your AI models. And if you think about how other protocols have historically emerged, and the technology and implementations that have arisen around them: think of how much bad HTML is out there on the internet, right? That put the onus on the browsers to be wildly robust to all the things that could go wrong.
Think of how many HTTP servers out there occasionally just don't return, randomly spit out a 500, or even return a malformed stream of packets that everything on the client side has to be robust to. So, to your point, Mihi, we are very much in the early days, and I imagine what we'll see, just as with HTTP servers, is that for every favorite programming language a de facto standard community edition, and potentially enterprise edition, of the MCP server and client library will eventually emerge: probably one or two, a small handful, each with their passionate followers and their differentiators relative to one another. And there will start to be some coalescence, because people will stop being interested in implementing the server layer of MCP and start being interested in what sits behind it. But for a while right now we're going to be in the wild west of actually getting the bits to flow correctly and staying on top of the spec as it evolves, because with anything like this there's sort of an exponential decay in the volatility of the spec itself. I think, Mihi, you pointed out in the blog post you wrote that they've already deprecated one of the primary transport protocols for MCP: SSE, server-sent events, is going away, replaced by streamable HTTP. But so many people have already implemented their streaming MCP servers on top of SSE. So, huh, what do we do with that?
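The transport churn described here is, at bottom, a negotiation problem: prefer the newer transport, fall back to the legacy one. A minimal sketch follows; the function and the capability set are invented, and only the transport names come from the MCP spec.

```python
def negotiate_transport(server_supports):
    """Pick the newest transport both client and server understand."""
    preference = ["streamable-http", "sse"]  # newest first
    for transport in preference:
        if transport in server_supports:
            return transport
    raise RuntimeError("no common transport with server")

# An up-to-date server and a legacy SSE-only server:
print(negotiate_transport({"streamable-http", "sse"}))  # streamable-http
print(negotiate_transport({"sse"}))                     # sse
```

A gateway sitting in the middle can run this kind of fallback once, centrally, instead of every client re-implementing it.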
So I think having a piece of, for lack of a better word, middleware in the MCP domain, where you can coerce standards and basically manage the chaos in a single central place, is an excellent implementation tool for engineers trying to build in this ecosystem themselves. And it's a good way to help shake out some of these inconsistencies across implementations, where we can start to clearly identify: hey, look, every time somebody attaches an MCP server written with this random MCP library in Elixir, it turns out we have to enable all this glue in the gateway to make it work. So maybe we should not use that one, or get that community to step up their game, and so on. So I think there could be some really strong benefits, both at the leaves of this graph, where developers are trying to build things, and at the connective tissue of the graph, to start isolating patterns.

>> Okay, I'll give you the last word here. It strikes me that this topic, weirdly, goes back to what we were talking about a moment ago with the ChatGPT agent: sometimes the fault when an agent fails is that the model or the agent wasn't smart enough, and sometimes it's just that the internet is really messy and the technical implementation is really bad. I'm curious how you think about that from a UI/UX standpoint. We've talked a lot about trust, and it's almost like the agent is going to take the blame for all of this messiness if we don't find a way around it.

>> Yeah. No, absolutely.
I think this speaks to a much bigger trend: obviously 2025 is the year of the agents, and everyone's really excited about what agents can do, but we're finding that maintaining these agents and these systems is really difficult, especially given how quickly everything is moving. And there's so much performance that's tied up, as you said, Tim, in how this is all architected and built, beyond the pure LLM weights; what's behind the scenes is obviously going to dictate performance too. So I think, Mihi, this is a great example of the emerging classes of middleware that augment existing agent frameworks and protocols to improve how we scalably build agents, how we maintain them, and how we build them in a secure manner. And I think this is just the tip of the iceberg for the class of projects that's going to have to emerge if we're really going to deploy and maintain these for the future.

>> Well, Mihi, any final thoughts? And if people want to learn more about the project, where should they go?

>> Well, look, if you want to learn more about the project, go to github.com/IBM/mcp-context-forge and you'll be able to get started with our MCP Gateway implementation. If you want to contribute, we also have a detailed roadmap as well as an issues page where you can bring in new features, or say, I'm missing this feature, or I'm having this issue. And I think, Kate, to your point, decoupling this logic from the agent and letting a piece of AI middleware handle things like your retry logic and all of the filtering is going to simplify your agentic framework as well. And there's going to be a lot of duplication across all of these agentic frameworks: LangChain, LangGraph, AutoGen, IBM's.
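The point about pulling retry logic out of each agent framework and into shared middleware can be sketched as a generic wrapper. This is illustrative only, not the gateway's actual retry implementation.

```python
import time

def with_retries(tool_call, attempts=3, base_delay=0.01):
    """Call tool_call(); on failure, back off exponentially and retry."""
    for attempt in range(attempts):
        try:
            return tool_call()
        except Exception:
            if attempt == attempts - 1:
                raise                      # out of retries: surface the error
            time.sleep(base_delay * 2 ** attempt)

# A tool that fails twice before succeeding:
calls = {"n": 0}
def flaky_tool():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(with_retries(flaky_tool))  # succeeds on the third attempt: ok
```

Because the wrapper knows nothing about any particular framework, the same code could sit in front of tools called from any of the frameworks mentioned, which is exactly the consolidation argument being made here.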
All of these frameworks have their own specific way of doing things, but if you manage to decouple the logic, then you can consolidate that work into one library.

>> All right. Well, I think that's the time we have for this panel today. Kate, Gabe, always good to see both of you. And Mihi, we hope to have you back on the show sometime.

>> Anytime. Thank you all.

>> All right. Thanks, everybody. We're going to go ahead and move to Ryan.

>> So, today we've got Ryan Hagaman joining us. He's the global AI policy issue lead. Ryan, welcome to MoE.

>> Great to be here. Thanks for having me, Tim.

>> So, I know we wanted to scramble this segment in because this news just broke yesterday: the White House has released its AI Action Plan, which is sort of a national strategy for artificial intelligence. I know there's a lot going on; I was looking at the document this morning, and there are a lot of different recommendations. Do you want to walk us through what exactly this is, and whether it's important? Should we be paying attention to it?

>> Yeah, sure. I'll walk you backwards: the answer to the second question is absolutely, yes, very. This is something that not only IBM but pretty much everyone in industry, here in DC, and frankly around the world has been anticipating and looking forward to for basically the last six months, since it was originally announced. In a way, this is the policy Super Bowl for policy nerds and policy wonks. But the short version is that this document lays out the Trump administration's policy agenda as it relates to artificial intelligence.
And part of the reason I think you haven't seen so much action in Congress is that this is also the starter pistol for legislative action in the future, probably not by the end of this year, but moving into next year. But what is the plan? In short, the plan is basically how the administration wants this Congress and the existing agency apparatus in DC to approach thinking about AI. There are something on the order of 134 or 135 individual actions that agencies are recommended to take. The plan was also accompanied by three executive orders from the president, which he signed yesterday, and which provide a little more clarity on what exactly some of the agencies are supposed to do with respect to some of the more important features of the plan. But basically, the plan as outlined goes a little something like this. There are three pillars: accelerating AI innovation; building out American AI infrastructure; and leading in international AI diplomacy and security. What this boils down to for IBM is a lot of positive momentum. The big thing for us is that there was a specific call-out of the value and importance of open-source and open-weight model development and deployment, which is frankly something we've been asking for from the administration and from Congress. It's one of the major pillars we advocate for here in DC: the need for policy makers to make sure that the open space and the open community remain hands-off from policy makers.

>> That's a big shift, right? It feels like for a little while there was a debate over: we've got these powerful new technologies; is it unsafe for them to be open?
And it feels like here they're very much affirmatively saying, no, we want open; we actually think that's a really important thing to happen. So it feels like there's been a shift in that discussion, and it's landing very firmly on the side of open, which I'm personally very excited about.

>> Yeah, me too, and I think a lot of us at IBM would view that as a very positive development. I would say it moved us from a space of uncertainty, because no one really knew how the administration was going to come down on this. The last administration never took a super strong stand. There was a great report from the Department of Commerce that looked at open model-weight development; it didn't make any super strong statements, but it also didn't say open source bad, right? Which was kind of a win at the time, given the uncertainty. So it's a bit of a sea change, if only in moving from neutral to positive. But it's a big positive, and a big signal for us in industry that the direction we've been taking is now essentially getting some kudos from policy makers, which will be good for the next few years of administrative action.

>> And you mentioned the executive orders that were signed. I think one of them had to do with energy. At IBM we frequently talk about the compute buildout and infrastructure side of this. It sounds like there's now a fairly clear path to really build a lot more.
It seems like it.

>> Yeah, that was the gist of not only the section on energy and data center buildout in the plan but of the executive order as well. It doesn't touch IBM so much, because we're not really in the data center buildout game, but we definitely benefit from more data centers and more energy capacity on the grid. That executive order basically streamlines a lot of regulatory approval processes. It gets into the weeds, referencing a lot of different statutory authorities and existing legislation. But the short version is: to the extent that there are opportunities to build more capacity on the grid and more data centers, the federal government shouldn't be standing in the way; it should be finding opportunities to expedite that and make it happen quicker, because America is going to need a lot more energy if we're going to be doing a lot more LLMs in the future.

>> Yeah, for sure. Anything else you were pleased to see? It sounds like you focused a lot on the open side. Anything else in this action plan that folks should focus on, or even pick up the PDF and read?

>> Yeah, there's too much to mention in a couple of minutes, but the one other thing that really struck a chord with me was under the AI diplomacy bucket. There's essentially a dictate for the Department of Commerce to figure out, with partners in industry, how to create a larger export package of a full American AI tech stack, right?
So everything from data center buildout to model developers and model deployers: get everyone together in an industry consortium, essentially, package that all together, and then the Department of Commerce can use those export packages to push American tech out to the rest of the world. Something we've said a lot is that this administration can really promote the idea of exporting American AI and American technology. So that's an opportunity for IBM; we'll see what comes of it. But that, I would say, is the other really big call-out here that IBM, I think and hope, stands to benefit from over the next year or two.

>> That's great. Well, I know we just have a few minutes left. The action plan is out; the executive orders have been signed. What comes next? I assume you're not saying, well, we're all done with this AI thing here in DC.

>> Yeah, I kind of wish I could take vacation for the rest of the year, but the reality is there are a lot of requests for information coming. The AI Action Plan basically says, here's what we're going to do; now they actually have to do it, which means we've got a lot of responses to provide. We've got a lot more education and engagement to do on the Hill as they think about legislative packages to help make some of this a reality. So, like I said at the outset, this is really just the starter pistol for the start of the race. Now it's heads down, books open, pens and paper in hand; we've just got to get to work doing it.

>> Well, Ryan, look, I know there's a lot going on.
We'll have to have you back on the show as this all unfolds. It'd be really good to have your voice in here as we track what's happening in DC on all this.

>> Always happy to stop by for a chat.

>> Cool. Thanks, Ryan.

>> Yeah, thank you.

>> Thanks for joining us, listeners. If you enjoyed what you heard, you can get us on Apple Podcasts, Spotify, and podcast platforms everywhere. And we'll see you next week on Mixture of Experts.