Learning Library

← Back to Library

Beyond the AI Cold War

Key Points

  • The U.S.–China AI “cold war” – with export bans and zero‑sum thinking – is making the world less safe and is based on outdated assumptions that don’t fit today’s internet‑driven technology.
  • The belief that only one super‑intelligent AI will emerge (a “singleton”) is increasingly rejected; multiple powerful AIs will proliferate because the software can be copied and spread instantly online.
  • Restricting AI exports paradoxically speeds up innovation, as shown by breakthroughs like DeepSeek achieving GPT‑4‑level performance with dramatically less compute and the explosion of open‑source models on platforms such as Hugging Face.
  • Cross‑border collaboration and open research are already eroding the performance gap between Chinese and American models, demonstrating that knowledge flows like water and outpaces any containment strategy.
  • A strategic shift toward cooperative governance rather than competitive rivalry is needed to harness this rapid diffusion for global safety.

Full Transcript

# Beyond the AI Cold War

**Source:** [https://www.youtube.com/watch?v=xoGei3nXPH8](https://www.youtube.com/watch?v=xoGei3nXPH8)
**Duration:** 00:11:37

## Sections

- [00:00:00](https://www.youtube.com/watch?v=xoGei3nXPH8&t=0s) **Rethinking AI Superpower Competition** - The speaker argues that the US–China AI arms race, driven by Cold War–style zero-sum thinking, is unsafe and calls for a new cooperative strategy that acknowledges multiple AIs will proliferate thanks to the internet's near-zero cost of collaboration.
- [00:03:16](https://www.youtube.com/watch?v=xoGei3nXPH8&t=196s) **AI Race Outpaces Cold War Paradigms** - The speaker argues that the unprecedented speed of global AI adoption makes traditional, decades-long strategic frameworks obsolete, creating new security and economic risks as the US and China cling to competitive, containment-style policies.
- [00:06:30](https://www.youtube.com/watch?v=xoGei3nXPH8&t=390s) **Cooperative Frameworks for AI Risk** - The speaker proposes practical bilateral steps (joint risk assessments, technical hotlines, and aligned safety standards) to manage shared AI threats while acknowledging competitive domains.
- [00:10:06](https://www.youtube.com/watch?v=xoGei3nXPH8&t=606s) **Cooperating on AI's Birth** - The speaker argues that, despite inevitable great-power competition, humanity must coordinate the early development of AI, treating it as a shared, fast-moving risk akin to nuclear weapons, to secure long-term flourishing and choose "smart rivalry" over destructive conflict.

## Full Transcript
[0:00] The world's two AI superpowers are locked in a competition that's making everybody less safe. And today, on July 4th, America's birthday, I want to talk about the strategy shift that we could choose to make that would keep everybody safer. The current AI race is not helping anybody, but I want to propose an alternative solution that could actually work.

[0:19] Let's start with how we got here. Every transformative technology has triggered a similar response to what we're seeing right now, so in some ways it's very understandable. Both Washington and Beijing are reaching for Cold War–era playbooks: export controls, technology denial, zero-sum thinking. There's an assumption that the other's AI dominance would mean defeat for the other power. There is a narrative that artificial superintelligence is right around the corner, that it will be what you would term a "singleton" world, which means only one superintelligence will develop. And if that's the case, and it's truly superintelligent, suddenly all of this Cold War thinking starts to make sense.

[1:01] The problem is this: we don't live in a singleton world. Even Sam Altman has admitted he no longer thinks we're going to have only one superintelligent AI, or only one generally intelligent AI. We're going to have multiple. Do you know why? Because this technology is extremely easy to proliferate, because it's built on the back of the internet. And what did the internet do? It took the cost of cooperation between people to zero.

[1:31] In the nuclear age, which was what the Cold War was built on, we had physical materials and clear boundaries. You had to move physical materials around in order to construct any kind of nuclear weapon. In the space age, we had massive infrastructure; you could track progress through rocket launches. In the AI age, everything spreads at internet speed. There are no borders. Yesterday's strategies fail at dealing with tomorrow's technology, and that is what we're looking at with AI. And that is why I think the Cold War frame is empirically incorrect for the technology that we have today.

[2:08] There is a paradox with containment. When we put export restrictions on another country, we intend to slow progress. But instead, because necessity is the mother of invention, we trigger efficiency breakthroughs. DeepSeek achieved GPT-4-level performance with 90% less compute. Innovation thrives under pressure, consistently. We have 450,000-plus open models on Hugging Face, open AI models that anybody can grab. Researchers from both nations routinely publish together. That is, by the way, a fantastic thing. That is a great thing. Knowledge flows like water across national borders. It flows like the internet, and performance gaps over time are narrowing, not widening. Mary Meeker made that point really brilliantly in her large deck that I summarized, where she talked about the fact that, effectively, over the last two years the competitive difference between Chinese models and American models has disappeared. There's like a one or two percentage point difference in performance. It's not that big.

[3:13] Meanwhile, the world continues to adopt AI at a terrifyingly fast speed. ChatGPT famously hit 100 million users in 60 days, but that's old news now. They're on track for a billion, ten times that number, this year. What we talked about in the Cold War was changes that took decades, things that took a long time to adjust. It took decades for nuclear weapons to proliferate. It took decades for great-power relationships to change. With instant global transmission, with half a million open models, with the speed of intelligence growth that we're seeing, none of those old ways of thinking work. They just don't.

[3:54] And I get it. Everybody has legitimate concerns. From an American perspective, AI could be used for authoritarian purposes. It could be used in military applications. There could be technology transfer to other countries that could be enemies of the state. Values alignment between AI systems is a real concern. From a Chinese perspective, technology embargoes feel a lot like containment. They feel like exclusion from global AI standards. Security vulnerabilities from foreign AI become a real concern, and economic competitiveness is something they don't feel they can trade down. So both nations, in their own worlds, have legitimate concerns.

[4:28] The question is: does the current approach address any of these concerns for anybody, or does it just create new risks? I would argue that it just creates new risks, because it locks us into a competitive mindset. Uncontrolled AI, if it transpires, will not recognize borders. Cyber incidents from a misaligned AI will cascade globally. And by the way, I am actually more concerned about things like large-scale cyberattacks that cascade globally than I am about something like Skynet. Bio-risks, if they were to transpire, would affect the entire human population. Economic AI shocks, if they were to transpire, would ripple worldwide. This is the same way that Chernobyl didn't stop at borders. If an accident happens with one of these technologies, it's up to everybody to cooperate to solve it. The 2008 financial crisis went global immediately; I remember where I was. Similarly, in 2020 with COVID, it went global right away. AI risks will move faster than biological risks, and even faster than financial market shocks in certain situations.

[5:30] What I want to see is a cooperative framework that will enable both AI superpowers to work together to converge around common standards that contain systemic risk. And I want to go further than just saying we should do that, and actually propose some principles that we can talk about. And I know, I have no illusions. I do not think people in government are watching this video, but it's still worth us talking about as a society, a global society, because everybody shares risk when AI is not well managed.

[6:05] So, core principle number one: graduated engagement. Compete where values and interests diverge, sure, but cooperate where existential risks converge. And we have existential risks with AI, even if we stop short of a Skynet scenario, that are still worth working on cooperation for. Build trust through small, tangible steps and verified technical cooperation. These are things that we can choose to do. Sure, there are areas where there's natural competition: economic applications, national security systems, governance models, domestic implementations. I get it. We don't have to try and fully align there. But there are also areas where we can reasonably cooperate. Preventing autonomous weapons proliferation: that seems like something everybody would have an incentive for. Biodefense and AI safety protocols: those seem reasonable. Financial system stability: everyone has an incentive to keep the financial system stable. And critical infrastructure protection. We can work on a common core of risks that we would want to contain and agree on a framework for cooperation to address those. We could choose to do that.

[7:08] So what are some practical steps that we could imagine? Can you tell I worked at the Model United Nations? I was such a nerd as a kid. Anyway: joint risk assessment. Both nations' AI scientists could identify shared risks. They could focus on technical issues, not politics, somewhat similar to the climate science panels. The focus would be building common understanding. Incident communication channels: technical hotlines for AI anomalies, preventing misunderstanding during a crisis. We had hotlines during the Cold War. We don't have an AI hotline. Why don't we have an AI hotline? What about parallel safety standards? They don't have to be identical. They don't even have to be fully interoperable. They just need to be interoperable enough that there's some sense of common safety measures. International aviation is a good example: we have different airlines but common safety standards, and each nation implements them in its own way. We need a similar sort of approach with AI.

[8:00] It would be helpful if we could also agree, and this is probably a bit of a stretch, on research transparency zones: places where everybody could come together to research, to learn about AI, to investigate AI safety. It benefits everybody, it's supposed to threaten nobody, and it turns competitive advantages into something that can be worked on together, which diffuses some of that great-power tension. Third-party verification: Switzerland, Singapore, someone known for being neutral could act as a validator. Technical verification could occur, and both nations' secrets could be respected.

[8:36] I get that I'm talking at a bit of a high level. I am not going to the level where I'm talking about specific systems because, one, if I knew about them and I talked about them, I'm sure I would get in trouble. I don't know about them. And two, they're evolving very quickly, and so it doesn't make sense to drop below the 10,000-foot level and talk about specific technical systems when they're all still being built. It is more important to talk about operative principles, because at the moment the operative principle seems to be competition. And in this case, I think it was more rational to be competitive when the technology had a different footprint. Nuclear proliferation, competitiveness, and mutually assured destruction: that was all the language of the Cold War, and it kind of worked. It held the world in tension, but it held it stable. I do not think this equilibrium is stable. If we have competition under a fast-moving technology footprint, it's not a stable situation, and that is dangerous for everybody, regardless of where you live.

[9:32] And so I think it's more productive to have a more cooperative stance. My ask is that we think less about how we can maintain a competitive advantage in a way that's zero-sum, and more about how we can start establishing practical frameworks that show we can build trust step by step. It's essentially an ask that we return to the idea of America as a place where we can establish a sense of human flourishing that survives the AI age. Not that I'm saying the founders or the framers anticipated the AI age. Heck, most of us didn't anticipate the AI age 30 or 40 years ago; there were only a few who were visionary. But now we're here, and now we need to think about how these long-term principles apply in this new world we find ourselves in. And in a sense, that's all our jobs, because as a species, it's our job to figure out how we establish human flourishing with AI for the next 500 years, for the next thousand years.

[10:34] And if we're going to do that, it means getting this part right, right now. It means getting the birth of AI right. And so my thinking on July 4th is: let's be cooperative about the birth of AI, within reason. I know we're going to be divergent as great powers on different things, but as much as we can be cooperative, I think everybody will benefit, because this baby AI is growing up really, really fast.

[10:55] So that's my July 4th reflection. Great powers have competed throughout history, but even the nuclear weapons story taught us that some risks require coordination. AI presents even greater shared dangers because it's moving faster. And I do believe that we can compete to some degree while cooperating to prevent real disaster. The choice is smart rivalry or destructive rivalry. We can be rivals like brothers, right? I have a brother. I like him a lot. We're rivals in a lot of fun ways, but we're also friends. We also have each other's backs. And even if that's not a perfect analogy, the idea of a smart rivalry is something I think you can take away from this. Happy July 4th. Cheers.