# Robots, Rights, and Cloud AI Deals

**Source:** [https://www.youtube.com/watch?v=9GhT1mp5Edk](https://www.youtube.com/watch?v=9GhT1mp5Edk)
**Duration:** 00:36:17

## Summary

- The show kicks off with a discussion of the ultra-early market for the 1X Neo, a new $500-per-month (or $20,000 one-time) humanoid robot, highlighting how pricing is essentially a test of market appetite.
- Panelists examine the legal pushback from Japanese copyright holders against OpenAI's Sora 2, underscoring growing tensions between generative AI tools and existing IP law.
- A major partnership between AWS and OpenAI is announced, signaling deeper cloud-infrastructure support for OpenAI's models and services.
- The news roundup covers Perplexity's $400 million deal to embed AI search in Snapchat, Coinbase's AI agents with crypto wallets for autonomous purchases, Instacart's AI suite for real-time grocery inventory and meal planning, and Google's launch of solar-powered AI chips aboard satellites (project "Suncatcher").
- Throughout, the hosts stress that many of these developments are still experimental, with pricing, regulation, and deployment models evolving rapidly.

## Sections

- [00:00:00](https://www.youtube.com/watch?v=9GhT1mp5Edk&t=0s) **Early AI Wave: Robots, Laws, Deals** - The podcast episode examines the pricing uncertainty of fledgling AI tech while covering the 1X Neo humanoid robot, a Japanese copyright challenge to OpenAI's Sora 2, the new AWS-OpenAI partnership, and Perplexity's $400 million integration with Snapchat.
- [00:03:03](https://www.youtube.com/watch?v=9GhT1mp5Edk&t=183s) **Humanoid Home Robots: Hype vs Reality** - The speakers contend that, despite impressive marketing videos, current AI and robotics lack the reliability needed for autonomous household humanoids, meaning years of further development and teleoperation are still required.
- [00:06:33](https://www.youtube.com/watch?v=9GhT1mp5Edk&t=393s) **From Autonomous Cars to Home Robots** - The speaker reflects on how self-driving vehicles quickly become mundane, draws parallels to the slower timeline for fully automated household robots, and cites Asimov's Three Laws as a guiding safety framework.
- [00:11:06](https://www.youtube.com/watch?v=9GhT1mp5Edk&t=666s) **Pricing Strategy for Early-Stage Home Robotics** - The speaker analyzes why a nascent home-robot startup charges $20,000 for early access, linking the high price to the value of data, market testing against housekeeper costs, and current teleoperation staffing constraints.
- [00:14:21](https://www.youtube.com/watch?v=9GhT1mp5Edk&t=861s) **Risky AI Hardware Costs & Piracy Concerns** - The speakers discuss the uncertain pricing and complex physical infrastructure of advanced AI systems, their strategic use for mindshare, and note a recent anti-piracy complaint from Japan's CODA to OpenAI.
- [00:17:25](https://www.youtube.com/watch?v=9GhT1mp5Edk&t=1045s) **Synthetic Data, IP, and Revenue Sharing** - The speaker examines how intellectual-property rights, royalty structures, and data-permission marketplaces could operate for AI-generated content, questioning the practicality of revenue-sharing mechanisms and ownership when synthetic data is used.
- [00:20:56](https://www.youtube.com/watch?v=9GhT1mp5Edk&t=1256s) **Navigating Style Boundaries and Revenue Shares** - The speakers debate the ambiguous definition of artistic styles in synthetic data, model providers' cautious use of copyrighted material, and the potential for third-party platforms to broker revenue-share agreements between artists and AI model owners.
- [00:23:59](https://www.youtube.com/watch?v=9GhT1mp5Edk&t=1439s) **Balancing Prompt Censorship and Model Interpretability** - The speakers discuss the tension between restricting user prompts to prevent harmful content, the lack of clear governance in the AI race, and the need for societal frameworks and interpretability research to guide what is permissible.
- [00:27:28](https://www.youtube.com/watch?v=9GhT1mp5Edk&t=1648s) **AI Firms Forge Multi-Cloud Alliances** - The speakers discuss how companies like Anthropic and OpenAI are forming overlapping partnerships with AWS, GCP, and Azure, creating a complex multi-cloud ecosystem that adds technical complexity but offers diversification and strategic advantages.
- [00:30:44](https://www.youtube.com/watch?v=9GhT1mp5Edk&t=1844s) **OpenAI's Commitment to Nvidia GPUs** - The speaker argues that despite diversifying cloud providers, OpenAI remains tied to Nvidia hardware because its models are heavily optimized for those chips, making any switch costly and technically challenging.
- [00:33:56](https://www.youtube.com/watch?v=9GhT1mp5Edk&t=2036s) **OpenAI Inference Focus for Enterprise** - The speakers argue that OpenAI's new GPU resources will primarily serve inference, enabling agentic workloads for business customers, and debate whether this capability will be offered directly on cloud platforms like AWS or via a marketplace, noting the lack of clarity around proprietary model availability.

## Full Transcript
It's such an early, early product in such an early space that at this moment in time, I think anyone who's trying to price this, they really are just trying to see where the market's at. All that and more on today's Mixture of Experts.

I'm Tim Hwang, and welcome to Mixture of Experts. Each week MoE brings together a panel of the sharpest minds in technology to distill down what's important in the latest news in artificial intelligence. Joining us today are three incredible panelists, so a very warm welcome to Ash Minhas, lead AI advocate; Ambi Ganasan, partner, AI and analytics; and Sandy Besson, AI research engineer. Welcome to you all.

Really exciting and interesting episode today. We're going to cover all sorts of different aspects of what's been happening in the news. First, we're going to talk a little bit about 1X Neo, which is a newly announced humanoid robot, and the Wall Street Journal review of it. We'll talk a little bit about an interesting challenge to OpenAI's Sora 2 from Japanese copyright holders. And then finally we'll talk about a big partnership between AWS and OpenAI. But first, we've got Aili with the news.

Hey everyone, I'm Aili McConnon, a tech news writer for IBM Think. I'm here with a few AI headlines you might have missed this week. AI giant Perplexity will pay $400 million to integrate its AI-powered search engine directly into the Snapchat app, so the social media platform can be used for AI search in addition to sending snaps, those images and messages that disappear. Two trends collide: crypto platform Coinbase is giving AI agents their own crypto wallets to buy things on behalf of customers instead of simply recommending purchases. Instacart has launched a suite of AI enterprise tools for grocery stores so retailers have a real-time view of what's on their shelves at any moment. This also means that shoppers will have an AI assistant for personalized meal planning and budgeting. Google is launching AI chips into space on solar-powered satellites to test out solar-powered AI, a project aptly called Suncatcher. Want to dive deeper into some of these topics? Subscribe to the Think newsletter linked in the show notes. And now back to the episode.

The first thing I want to start with is this video that got passed around widely on social media this week covering the 1X Neo. Longtime listeners of the show will remember that we actually talked about Neo when it was first announced, but it is finally open for sale: you can go and buy it. It's being offered as a $500-a-month subscription or a $20,000 early access fee. And what 1X Neo is, is a humanoid robot. The idea is it's literally a life-size, sort of mannequin-style robot which will be in your home and help out with home tasks.

And so I guess, Ash, maybe I'll kick it over to you first. Does this have legs? In 2028, are we all going to have sort of a humanoid robot in our home? Or do you think this is going to end up being much more of a specific use case as we see this company launch its first products?

I think that they did a really, really good job of their video introducing their product to the world.
The Wall Street Journal video showed huge discrepancies in terms of what its capabilities are today versus what they showed as the art of the possible. I think that there's probably more than a year's worth of development work to go into putting something like that in my home, where I'm going to trust it with my glasses in my kitchen, and my casual, breakable things.

In your kitchen? Yeah, yeah, exactly. I think the fact that during the Wall Street Journal interview the founder was very open about the fact that there's teleoperation happening is an eye-opener. It's a moment for us as a society to realize that these robotics are having to deal with very, very complex, ambiguous environments, and we're just not there yet with the AI technology to power them to do that job accurately. And I do think that there's probably quite a number of years to go yet.

Yeah, for sure. And Sandy, I'm wondering if you can give us maybe an intuition for why it is so difficult. We've seen these huge explosions in AI capabilities, but as yet... in the Wall Street Journal review, which was widely passed around, the robot is depicted trying to basically open and close a dishwasher door, and it takes minutes. What's so difficult about this task from an AI standpoint?

If we look at where we've seen robotics prosper, it's in settings where the job is quite monotonous, right? It's quite routine. Like, Amazon factories are famous for having largely robotics and... what's the word? Roboticized. Is that a word? Maybe it will become a word, but they're famous for that. But when you're taking more general tasks, everyone's home is unique. It not just has a different layout, like your robo vacuum sees, but has different handles and different buttons and different makes of washing machines and gas range stoves versus electric stoves and things like that. There's so much variance that, just like Waymo experienced with driverless cars, which are still only allowed to go up to 25 miles an hour, and that's just starting to change, they're going through this evolution where they're going to need to collect a lot of training data. And that kind of poses the question of how they are going to do that, and whether people want that in their homes. Like, he had this really interesting construct of Big Sister in the Wall Street Journal review that I thought was a cool way to look at it. It's like, okay, we have this negative concept of Big Brother, but is Big Sister an okay concept because it's going to help you long term?

Yeah, absolutely. I think the Waymo comparison is a really good one. And Ambi, I wanted to get your thoughts on this. I mean, Waymo works now. If you've been in San Francisco recently, you can call a robot car, it rolls up, you can jump in it. And I think for me, what made me very confident about the future of it is that after the second or third ride, it's completely boring. You don't even think about it at all. But that took a really long time.
And I guess the question for you is whether you think we're going to see a similar timeline on this kind of thing. I think Google's been talking about autonomous vehicles for, I don't know, over a decade; I actually don't know when they started talking about it. Do you think there's a similar timeline here before we have really fully automated home robots?

Yeah. And that's the way that I feel about the Waymos as well. I've taken a bunch of those, and I always say that the most surprising thing about the Waymos is that they're so unsurprising. It's so normal that you just don't feel anything out of the ordinary about it. We've got a ways to go before we get to that stage on home robotics. And for me personally, and I think a lot of us would feel this way as well: growing up, I was a huge Isaac Asimov fan, and I always think about the three laws of robotics, right? Don't do any harm, or through inaction cause any harm; always safeguard humanity; don't do anything that's going to put yourself in danger; things of that nature. I think there are ways to go before we encapsulate all of those and then make home robotics approach that space.

So to Sandy's point, Waymo and the autonomous vehicles work to some extent because the search space is a little bit structured, right? You're always going to have street signs, you're always going to have roads marked with lanes. There are a bunch of these factors that are structured. Whereas when it comes to home robotics, the search space is so unstructured and vast and infinite, it's going to take some time. I think there are some tantalizing clues and aspects that we are seeing: hey, maybe we can use world models to simulate and synthesize data, and then maybe we'll do that. But there's a bunch of maybes here that we've got to figure out.

So look at it from multiple factors. Are you going to get enough data to do the training? Are you going to have enough compute? Just think about all the compute capacity; every time we hop on this podcast we complain about compute capacity getting constrained, infrastructure becoming the moat. Now just think about all the infrastructure that you'll need to run these robots at scale. There's so much that's pending to be built out from that infrastructure-layer perspective. And then the third piece is, like I mentioned, all the safety aspects. Whether you subscribe to the three laws of robotics or some fashion of them, we'll have to come and codify and say, okay, here's how we need to regulate and leverage home robotics. And there is a lot to be figured out there. So, yeah, this is going to be a multi-year journey. It's not a one-and-done deal in a couple of years.

Honestly, one thing I was thinking of when I watched the Wall Street Journal video was: I wonder if the folks at Neo have thought, maybe we should just send the robots to IKEA stores and just get them to walk around the whole IKEA store for a while and just get them to train.
Like, they could just walk around opening and closing all the drawers and sitting on the furniture and doing all that stuff, and probably collect a mountain of training data doing that.

And then they come to your house, which doesn't have IKEA, and then they'll fail.

And Ash, if I could stay with you, I do want to talk a little bit about the data aspects of this. Because as you mentioned, it came out in the review that a lot of it is still teleoperated, and ultimately I think the teleoperation is to collect data on how you might eventually navigate these complex spaces. I mean, one of the things I hear from Ambi's comments is basically that, if anything, homes are even more complex than trying to navigate the road. I guess, Ash, with all that in mind, why are they pricing it so expensively? $500 a month is like two and a half ChatGPT Pro subscriptions, right? And if you want to buy the main one, it's $20,000 for early access. But the data is so valuable; why aren't they just making this product 20 bucks a month? Because the data is really what they need in order to get this thing to work.

I really have no idea how the financing has worked to get the startup to where it is today, to know what they're looking at achieving by charging this price. I think that it's such an early, early product in such an early space that at this moment in time, anyone who's trying to price this really is just trying to see where the market's at. And I think they probably looked at this and thought: what does a housekeeper cost? What's that working out to for a household every month? They're looking at the point at which this becomes more cost-effective than a housekeeper, and they're trying to anchor the price there, and they'll get the product. I do think that there probably will be some volatility in that business model and pricing as this actually matures over the coming years. But as I said, it's so nascent right now. You're not going to have one of these in your home.
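*Editor's note: a minimal back-of-the-envelope sketch of the housekeeper anchor Ash describes above. Every number except the two price points from the episode (the housekeeper rate, the hours, the amortization period) is an assumption for illustration only.*

```python
# Hypothetical anchor: what does a housekeeper cost per month?
housekeeper_rate = 30        # assumed $/hour
hours_per_week = 5           # assumed
housekeeper_monthly = housekeeper_rate * hours_per_week * 52 / 12  # ~$650

neo_subscription = 500       # $/month, from the episode
neo_early_access = 20_000    # one-time fee, from the episode
amortize_months = 48         # assumed 4-year useful life
neo_amortized = neo_early_access / amortize_months  # ~$417/month

print(f"housekeeper ~ ${housekeeper_monthly:.0f}/mo")
print(f"Neo subscription = ${neo_subscription}/mo")
print(f"Neo early access ~ ${neo_amortized:.0f}/mo over {amortize_months} months")
# Under these assumptions, both price points sit just below the human
# alternative, consistent with anchoring against a housekeeper's cost.
```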
I was just going to say, I think they're constrained by their own constraints, right? It's like you said, the teleoperation. Right now you're scheduling this on an app, so if they make it too widely available, they'd have to get hundreds of teleoperators to do these things, and train those people, and scale it up. So for an early access product, I think it makes perfect sense, even if it means that it takes longer for the product to improve, that they're doing so in a really scoped way that they can control.

One interesting thing that I noticed was that they were already anchored on the fact that people should have this expectation that you're going to have a companion application on your phone [...] anticlimactic, as it were. Why would you have a robot in your house that you've got to schedule to do something, you know? And it comes down to that same constraint that you implied, Sandy. They've got to hire people to teleoperate these things, and they need shifts, I guess. There's a whole labor model.

A virtual housekeeper: right now, that's kind of what it is. You're outsourcing the body of a housekeeper, but you have someone behind the scenes that's still doing the work for now. But that will change overall. And to be honest, if I don't have to fold my clothes and wash them and put them away in two years' time, I would be happy to give it access to the viewings of my wardrobe.

Well, we'll have to have you back on in 24 months. That's a good prediction.

I mean, I think we probably shouldn't worry too much about the price piece here. A lot of this, I think, is also to capture mindshare. There is a reason we are all now talking about this: they're capturing the mindshare. In robotics, Optimus has been edging toward the top of that mindshare figure and pushing it; yes, they're operating in the industrial setting, so the home robotics space was a little bit open. I think this has gotten a lot of us talking about it and capturing mindshare. Pricing, to Ash's point, is a little bit of a dart on the board right now; no one really knows. Yes, granted, there is a lot of sophisticated instrumentation needed to make something of this nature work, so it's not just purely, hey, I'm collecting data, then I'm training on data, and I'm just inferencing it somewhere. There is a lot of physical equipment needed to make this work, like sophisticated actuators and gears operating in precise fashion. So I get that the cost of operations and the cost of manufacturing are probably factors in there. But bottom line, it is a little bit of a dart on the board. This whole push and declaration, I think, is a little bit to capture the mindshare and then declare to the world: okay, we are taking a step forward and we are going to proceed.

I'm going to move us on to our next topic. A super interesting story came out of Japan about a week ago. There's an industry organization in Japan called the Content Overseas Distribution Association, or CODA for short. It's an anti-piracy organization that represents basically Japanese IP holders. So Bandai Namco, Studio Ghibli, a lot of your kind of favorite brands out of Japan are represented by this organization. And what's interesting is that they sent a letter to OpenAI expressing concern about essentially the use of their intellectual property in Sora 2 and the generation of videos that implicitly rely on that IP. I think it's such an interesting case. And I guess, Sandy, the question for you is: how do you think OpenAI should navigate these types of discussions? It's a really hard and tricky thing, how the rights of these rights holders should be taken into account while also giving the freedom to innovate on the technology. And I think there's just a really interesting set of questions there.

Totally. And the more I was thinking about it, the more I was like, oh my God, this is a large task. There are a few ways, if I was OpenAI, I would think about it.
One is, do we actually have to alter, and I'm sure they're already thinking about this one, do we actually have to alter our pipeline of how we tag and add metadata to our training data, to be able to flag exactly where everything is coming from? To some extent they might do this already, but they might not have IP owners on there, or certain things like that. And then do I have to potentially, in order to in some ways make this go away, do some sort of revenue share? At the end of the day, that's what IP and royalties are about, right? They're about sharing revenue. So is it possible that if I say, hey, make me a cartoon version in the style of Studio Ghibli, that's tagged somewhere, and if it's using Studio Ghibli there's some sort of revenue share there? That's one way, if I were OpenAI, I would be thinking about it, to kind of escape all of these large lawsuits.

But I think that also opens up the other side, where it's like, okay, well, who's going to actually collect all this information? Is there an opportunity for a marketplace out there that essentially holds the rights of people who say, yes, I allow you to use this data, or no, you are not allowed to use this data? But the further I kept thinking about that, and I'm a little bit on a monologue here, the more I realized: is this just more of an argument to move into synthetic data? And then what happens to IP and ownership? It's almost like inception, right? IP-ownership inception. Is it still owned by that person if it's influenced by their work but created synthetically? And so you kind of move into this ownership inception. So truthfully, I don't know where this is going to end up.
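*Editor's note: a minimal sketch of the provenance tagging and revenue share Sandy describes above. All names, fields, and numbers are illustrative assumptions; nothing here reflects OpenAI's actual pipeline or any real royalty scheme.*

```python
from collections import Counter
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrainingSample:
    """One training item, tagged at ingestion with where it came from."""
    content_id: str
    ip_owner: Optional[str]  # None = public domain or unknown provenance
    licensed: bool = False

def influenced_owners(samples_used: list) -> Counter:
    """Count how often each rights holder's material influenced an output."""
    return Counter(s.ip_owner for s in samples_used if s.ip_owner)

def revenue_share(royalty_pool: float, owners: Counter) -> dict:
    """Naive pro-rata split of a royalty pool by influence counts."""
    total = sum(owners.values())
    return {owner: royalty_pool * n / total for owner, n in owners.items()}

# Hypothetical generation that drew on tagged samples:
used = [
    TrainingSample("clip-001", "Studio Ghibli"),
    TrainingSample("clip-002", "Studio Ghibli"),
    TrainingSample("clip-003", "Bandai Namco", licensed=True),
    TrainingSample("clip-004", None),
]
print(revenue_share(1000.0, influenced_owners(used)))
# {'Studio Ghibli': 666.67, 'Bandai Namco': 333.33} (approximately)
```

The hard part the panel keeps returning to is not this arithmetic but the tagging itself: deciding which samples actually "influenced" an output, which is exactly where the discussion turns next.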
Ambi, what do you think about all of this? I think, yeah, where I was going is that it seems really hard to figure out how you would set up some kind of payments infrastructure here. And so is one possibility that a lot of these companies just start investing a lot more in synthetic data? Like, the era of "we need to scoop up all this data to train our models" gives way to "we're going to do synthetic as much as possible"?

I don't think anyone has solved this. No one really has a clear understanding or a clear solution to any of this at this point in time. It's really, really murky waters. I'm going to put a little bit of an enterprise lens on it, because I talk to clients and deal with enterprises on a day-to-day basis, and when I look at it from their perspective, you can't afford to have any of these issues transmitted back to enterprises. And there are two forks in the road that I'm seeing. There's one set of model providers who very early on said: I'm only going to use fully copyright-cleared data, I'm not going to go and scrape anything. And that becomes a safe approach and a safe path for enterprises to consume. And then there is the other fork, which is the likes of Gemini and OpenAI. Clearly those are very consumer-facing, and therefore a lot of the imitation aspects creep in here.

But for enterprises, once you get beyond a text modality into image or video, something like this is still a landmine that they wouldn't want to touch, so that piece has to be solved. I don't think there is a clear answer to it yet. Maybe synthetic data, but even there, who draws the boundary? At what point do you say this looks like a Studio Ghibli style, or no, this doesn't look like a Studio Ghibli style? There is no quantitative way for you to actually delineate that. So it's a fuzzy aspect with no hard and clear answers. I think the safest approach some of the model providers like Adobe have taken is to say: I'm not even going to touch any of that before all of this gets sorted out; I'm just going to train on purely licensed data and deal with the rest later. So it's still those two forks in the road, and that hasn't fundamentally changed over the last year or year and a half, right?

I do think there is an opportunity to be made, like Sandy is saying. If the model providers are okay with some of these rev-share agreements, there could be an enterprising layer that creeps up between the artists and the model providers. It may not even be the model providers themselves; there may be a third party that says, okay, you know what, I'm going to help the artists form a consortium or a network, and then I will help transact all of this between the model providers and the artists. So I think there are some new spaces to be covered here.

Yeah, absolutely. And Ash, maybe I'll give you the last word on this. I think part of what's difficult about this discussion is, to Sandy's point: in principle it makes a lot of sense, right? You contribute some data and you should be compensated for the use of it. In practice, it's really, really complicated. And one of the questions is how we say something is close enough in style that you deserve some kind of compensation. Do you think at that point we're just going to have to draw an arbitrary line? It'll be like, oh, we're going to calculate a difference vector between this video and that video, and if you are within this threshold, then you have to pay, and if you're outside it, then you don't. Some of this might just ultimately be arbitrarily resolved. Do you think that's where we land with some of this?
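*Editor's note: a minimal sketch of Tim's "difference vector" idea, thresholding on embedding similarity. The embeddings, the threshold value, and the function names are all assumptions for illustration; no actual rights-enforcement system is being described.*

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two style-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Assumed cutoff: above it, the generated work is "close enough" in style
# to owe compensation. Where to set it is exactly the arbitrary line
# the panel is debating.
ROYALTY_THRESHOLD = 0.85

def owes_royalty(generated: np.ndarray, reference: np.ndarray) -> bool:
    return cosine_similarity(generated, reference) >= ROYALTY_THRESHOLD

rng = np.random.default_rng(0)
ref = rng.normal(size=512)               # a rights holder's style embedding
near = ref + 0.1 * rng.normal(size=512)  # derivative-looking output
far = rng.normal(size=512)               # unrelated output
print(owes_royalty(near, ref), owes_royalty(far, ref))  # True False
```

As Ash argues next, the objection is less that you can't compute such a score and more that no single threshold will satisfy both the rights holders and the model makers.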
I don't know that it's somewhere we're going to arbitrarily land. I think that there's too much at play here, both commercially and culturally, for either the technology companies or the people who own the rights to a lot of this art to let that happen. I think that ultimately, at this moment in time, we're still in a very nascent space. I remember when Stable Diffusion came out in the first instance. And, you know, as background, I'm a photographer, and over the last ten years or so I've shared very few pieces of my photography on the social networks, because I'm like, hey, I'm transferring my rights away from my work. And so we've now got to this place where these generative models, we need them to be good to be able to get value from them, we need them to be able to respond accurately to what people are asking for. And as Ambi already touched on, at what point does that become censorship? When someone's prompting the model and saying, hey, I want you to make me a picture of this, and we just keep putting in protections at the prompt level, saying you can't prompt for this, you can't prompt for that, what I think about is: who's making those decisions? And I guess at this moment in time, in this huge AI race that we've got going on, no one really wants to do that too much, because it makes their model less helpful. And so I think that this won't be arbitrary. I think this is just a lot of back and forth going on between so many different stakeholders right now. And we really need some sort of real frameworks in place, ideally at a government and societal level, to make some decisions here as to what is and isn't allowed, and people should abide by that.

I'd like to just add one little point on something that Ash and Ambi both touched on. I think a lot of us look at it from the output perspective, the user perspective: what are we prompting for, what does the model output show? But in reality, there's an entire field called model interpretability, where they're trying to understand, behind the scenes, the reasoning of the model. The famous example, I think, is asking what the capital of France is, and seeing the nodes light up throughout the model. So are we going to accelerate our interpretability of the reasoning of the models to understand what's actually being used to produce an output, or are we going to assess it from the perspective of the output? Right now we don't have a choice but to assess it from the output, because we don't understand the innards. But as we start to understand the innards, there's going to be even more of a case for these companies to say, hey, you're using my stuff, because some node got activated somewhere deep in the neural network. And then it will become even more gray.

Yeah, exactly. It's almost like the illusion of clarity: once we start digging, presumably some people will come up with prompts that get very similar styles without activating certain kinds of neurons, and that ends up becoming a new game, even as the field improves.
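*Editor's note: a minimal sketch of the "nodes lighting up" that Sandy mentions, assuming PyTorch and a toy model. Real interpretability work on production models is far more involved; this only illustrates the mechanic of recording which units activate on an input.*

```python
import torch
import torch.nn as nn

# Toy stand-in for a real model; the layer sizes are arbitrary assumptions.
model = nn.Sequential(
    nn.Linear(16, 32), nn.ReLU(),
    nn.Linear(32, 8),
)

activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        # Record this layer's output so we can see which units "lit up".
        activations[name] = output.detach()
    return hook

for name, layer in model.named_modules():
    if isinstance(layer, nn.ReLU):
        layer.register_forward_hook(make_hook(name))

with torch.no_grad():
    model(torch.randn(1, 16))

for name, act in activations.items():
    fired = (act > 0).sum().item()
    print(f"layer {name}: {fired}/{act.numel()} units active")
```

The attribution question the panel raises is whether patterns like these could ever be tied back to specific training material; that mapping, not the hook mechanics, is the open research problem.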
All right, I'm going to move us on to our last topic of the day. Super interesting story, particularly against the backdrop of the last MoE that we recorded for Halloween last week. There's news that OpenAI has announced a new partnership with AWS. And I will always quote the numbers here, because they continue to be mind-blowing to me: OpenAI is doing a deal with AWS that represents a $38 billion commitment, expanding its compute capacity and basically working with AWS to expand its infrastructure.

This is an interesting one. Amazon, of course, has been touting Trainium, its own chip, but this deal seems very much still focused on Nvidia GPUs, at least from the blog post. And I guess, Ambi, I'll throw it over to you first. One thing we keep talking about on MoE is how you're starting to see these alliances form: you've got Anthropic getting close with GCP, but also a little bit close with AWS; and then of course OpenAI is very close with Azure and Microsoft, and now they're also getting close with AWS. I think the most interesting thing, and maybe a good place to start, is that a lot of these companies seem to be going multi-cloud ultimately. Can you give us an intuition for why they're doing that? Because my sense is that it actually adds quite a lot of complexity.

Yeah. Well, I think alliances is the right word. It always reminds me of Game of Thrones, and it's so confusing: whose alliance is whose, who's in a royal marriage with whom. Exactly, right? It's so exciting: a new episode comes out and new things happen. It's the nerdiest possible Game of Thrones you could imagine. But yeah, the $38 billion, if you think about the supposed $1 trillion IPO, is like a drop in the bucket, right? But leaving that aside, it sort of makes sense. You want to diversify; you don't want to put all your eggs in one basket. Like we were talking about in a previous episode, the moat is shifting from the pure model layer into the infrastructure layer, and into some of the depths of the infrastructure layer. Knowing that, I think it makes sense to ensure that you don't go with just one infrastructure provider, and therefore you diversify into AWS.

What I found surprising was that there wasn't a reciprocal arrangement for AWS to host the proprietary models from OpenAI and then expose them. That is still an exclusive arrangement between OpenAI and Azure. So it's a little bit of a one-way, imbalanced relationship, I would say. What would have made it more interesting is if the models were also getting hosted on multiple different environments and it became truly diversified. When I talk to enterprises, they've all moved on from just going with one model to having a mixture of, or a choice of, their own models and cloud providers. Just as we went with the hybrid cloud approach, we are fully seeing the hybrid model approach, so I would have liked to see some of that manifest here. But bottom line, I think this is purely hedging your bets and making sure that you're not getting stuck behind the infrastructure moat and getting caught there. That's the simple way to look at it.
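*Editor's note: a minimal sketch of the multi-cloud hedging Ambi describes, as a simple ordered failover over providers. The provider names, capacities, and the scheduling function are purely hypothetical; real multi-cloud setups involve far more than a retry loop.*

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    capacity_gpus: int  # assumed free GPU count at this provider

# Hypothetical preference order: primary partner first, hedges after.
providers = [
    Provider("azure", capacity_gpus=0),   # e.g., currently exhausted
    Provider("aws", capacity_gpus=512),
    Provider("gcp", capacity_gpus=128),
]

def run_job(gpus_needed: int) -> str:
    """Place a job on the first provider with enough free GPUs."""
    for p in providers:
        if p.capacity_gpus >= gpus_needed:
            p.capacity_gpus -= gpus_needed
            return f"scheduled on {p.name}"
    raise RuntimeError("no provider has capacity")

print(run_job(256))  # scheduled on aws
```

The complexity Tim alludes to lives outside this loop: different provider APIs, data gravity, and, as the next exchange makes clear, hardware that the models are tuned for.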
Sandy, one interesting aspect of this, in contrast to the Anthropic deal, which very much touted "we're working with TPUs" (I think the language was "we will use up to a million TPUs", or some crazy number like that): this one really does still seem focused on Nvidia GPUs. It seems like one part of this deal is that OpenAI may be changing and diversifying its infrastructure providers, but at the end of the day, the chips are still the same.

That was 100% what I was going to hit on: they seem to have chosen their hardware provider. They can diversify in their cloud infrastructure, they can diversify in how they offer it to end users, but they seem to have clearly chosen their hardware provider. And the reason for that is, number one, not all chips are created equal, and number two, models are optimized specifically for the hardware that they run on. Clearly OpenAI has done a lot of work optimizing their models to run most efficiently on these Nvidia chips, and making that switch is no easy feat. That's teams and many hours, many weeks, potentially months of switching cost. And is it worth that if AWS also offers the thing they're already optimized on? It's a really hard call. And I guess part of it is just [...] optimize outside of it, I guess.

Ash, ultimately, how much of a difference do you think this makes going forward? Is this the direction you'd expect these companies to be moving in? I think a little bit about how Netflix used to run a lot of its infrastructure on AWS, and it seems possible to me that over time OpenAI could become much more of an AWS consumer. In the battle of the clouds, do you feel like one has an advantage over the other?

I don't know whether I would look at it through that lens of a battle of the clouds. The way I'm looking at it is, at a macro level, they're doing so much stuff with AI that they need all the GPUs they can get their hands on. And what I found really, really interesting in this specific announcement was the mention of using this to rapidly scale agentic workloads. So I'm wondering: what are these agents doing? I don't think it's a battle of the clouds per se, Tim, but more a case of using different infrastructure providers for different types of work. I honestly don't know what those agentic workloads are doing, but I think it comes down to this: there aren't enough GPUs available, they need to get something done, they're a rapidly scaling company, AWS has loads of GPUs that they're willing to sell them, and they're like, great, we can take this portion of work that we need to do and just go scale it over there quickly. It's almost like any GPU in a storm: whatever is available, we'll take it.

To echo what Ash said, I think you've hit the nail on the head, but for a different reason. I don't think they're doing training on these GPUs; I think they're doing inference. And that means they're moving more into their OpenAI for Business play, because what do businesses use? They use cloud.
And so as they scale more into their enterprise piece, if I had to take an educated guess, I would guess that they're using this compute power more for building agentic workloads with their clients, and that's where they're going to use that inference. Yeah, TBD.

I have a slightly different take on that. Like I said, they haven't given us a lot of details. So yes, enterprises will use Azure, will use AWS, all of that. But unless OpenAI is exposing the models in the AWS environment, it's not going to be consumable there. So the big question is: are you just using it to run your own workloads, or are you going to actually offer a marketplace, or some way to consume it on AWS? If it is the latter, then that would be fantastic for enterprise consumers. So I think they're still figuring out a bunch of things here as well.

Maybe it's as simple as... well, they do have OSS models. They do have OSS models, but not the proprietary versions, right? That's what I was hoping to find, and there is no mention of that. Yeah, mysteriously not mentioned. And this is what led me to make my point: I just think they're saying, hey, AWS has enough GPUs for these workloads that we want to run, let's do a deal and let's just use these GPUs. That's why they specifically called out these agentic workloads. But like I said, I really want to know what those agentic workloads are.

Yeah. We're all looking for this N-dimensional chess, but maybe "we just need more chips, we just need more cloud" is the primary driver. Yeah. What are they doing with them? That's what I want to know.

Well, we're always asking the hard questions here on MoE, and that's a great note to end on. That's all the time we have for today. Ash, Ambi, Sandy, always great to see you on the show, and hopefully we'll have you on MoE again very soon. And thanks to all you listeners. If you enjoyed what you heard, you can get us on Apple Podcasts, Spotify, and podcast platforms everywhere. And we'll see you next week on Mixture of Experts.