
AI Eats the World: Strategic Takeaways

Key Points

  • Benedict Evans, a two‑decade tech strategist at a16z, framed AI’s rise within the broader “platform cycle” that historically reshapes industries—from mainframes to PCs, the web, smartphones, and now AI—while emphasizing that new layers typically augment rather than replace existing ones.
  • He highlighted AI’s “moving‑target” nature: technologies once labeled AI (databases, search, classic ML) shed the label once they become routine, meaning today’s hype around LLMs and generative models obscures deeper, longer‑standing technical progress.
  • The surge of AI investment follows a predictable wave pattern that creates new winners and losers, yet it rarely eliminates prior toolsets, resulting in a fractal ecosystem where ChatGPT coexists with emerging 3D‑modeling and vision tools.
  • Massive capex from big tech—hundreds of billions (potentially trillions) in data‑center and GPU spending—signals that AI is transitioning from a speculative bubble to a fundamental infrastructure layer driving future profit margins.
  • For AI leaders, the takeaway is to treat AI as a strategic platform shift: balance hype with sustainable P&L impact, integrate new capabilities alongside legacy systems, and position teams to capitalize on the long‑term, layered growth of the AI economy.

Full Transcript

**Source:** [https://www.youtube.com/watch?v=iGvJpBWWGOU](https://www.youtube.com/watch?v=iGvJpBWWGOU)
**Duration:** 00:15:48

## Sections

- [00:00:00](https://www.youtube.com/watch?v=iGvJpBWWGOU&t=0s) **AI Eats the World – Strategic Takeaways** - A briefing that introduces veteran analyst Benedict Evans' "AI Eats the World" talk, outlines his credentials and the macro-level focus of his presentation, and previews key strategic implications for AI team leaders and executives.
- [00:04:46](https://www.youtube.com/watch?v=iGvJpBWWGOU&t=286s) **Shifting AI Leaders and Adoption Gap** - The speaker explains that Anthropic, Google, and OpenAI now dominate model production, while most companies lag in daily AI use, with adoption hampered by motivation, integration, governance, and the need to envision LLMs as high-fidelity "alien" intelligences.
- [00:08:17](https://www.youtube.com/watch?v=iGvJpBWWGOU&t=497s) **AI as Inevitable Infrastructure** - The speaker warns that firms must treat AI like spreadsheets: essential, transformative infrastructure, not an optional R&D experiment, emphasizing that adoption is lumpy, path-dependent, and reshapes value chains and workflows.
- [00:11:33](https://www.youtube.com/watch?v=iGvJpBWWGOU&t=693s) **Multi-Model Strategy Over Lock-In** - The speaker advises treating AI models as interchangeable components, routing workloads by cost, latency, data sensitivity, and jurisdiction, while emphasizing that AI, like cloud before it, will reshape organizational structures and power dynamics rather than merely replacing jobs.

## Full Transcript
This week, Benedict Evans, a 20-year veteran of a16z, gave a memorable presentation in Singapore called "AI Eats the World." My executive briefing this week is going to be focused on what he's talking about, why we need to pay attention, and what the implications are for all of us who are building and leading AI teams. Let's get right to it.

So, first, who's Benedict? He has been a tech strategist for the last 20 years, specifically focused on platform shifts, which makes him perfectly positioned to think about what AI means strategically. He's been involved in PCs, the web, smartphones, social, and of course now AI. His job is to think about how these shifts change power, margins, and industry structures. He's not selling you an AI product in this presentation; he's trying to be a macro translator between the hype and the P&L statement. So he's useful as a sanity anchor in a world that loves hype.

The setting is Super AI Singapore 2025, and he is talking to senior leaders: CTOs and investors who are asking themselves, is AI a bubble? Is AI just the next software cycle? Is this the moment when everything we know about software economics breaks?

So what did he talk about? This is 90 slides. I'm going to get it into just a few minutes for you, and then we're going to start to look at the strategic takeaways that I pulled out and what I think it means for all of us.

First, Ben talked about AI as a moving target. AI used to mean databases. Then it meant search. Then it meant classical machine learning. Once it works, we stop calling it AI. I love that insight. So today, large language models and generative models are wearing that label. But other stuff, people are forgetting that it's AI, because it works.
When you think about it that way, you start to realize how deep the roots of this technical transition are, and how much of our adoption curve is driven by novelty.

Ben also talked about the platform-cycle frame: the idea that we are moving through predictable wave patterns even as AI is a novel technology. We've moved from mainframes to PCs to the web to smartphones and now to AI. Every wave attracts massive investment at first. It reshapes who the winners and losers are. But this is the critical point: it rarely deletes previous layers. I loved that takeaway because it's fractal. It works for the larger insight (I have a smartphone now, and also a laptop), but also in the world of AI: the newest tools coming out in 2025 are rarely deleting the base tools. We are getting new tools for 3D models. We are getting new tools for vision. We are not deleting ChatGPT. So the idea that you can have massive investment and reshape winners without deleting previous layers seems very powerful to me.

On the capex side, Ben pointed out that yes, big tech is spending hundreds of billions, if not trillions, on data centers and GPUs. And at the same time, more and more labs are grabbing on to proliferating AI technology so that they can train good-enough models. The net effect is that the model itself is looking like a commodity input. We have talked about that a fair bit on this newsletter; you should not be surprised to hear that the model is not a moat.

I will add a caveat that Ben didn't talk about much. One of the other papers that came out this week was a deep study on Chinese open-source models.
And one of the things it concluded is that the flexible intelligence of these models, taken in aggregate across Qwen and many others, is less clear, less effective, less generally flexible than the intelligence of American-made models. That may be because they're not quantized effectively or distilled down effectively. But the general conclusion of the paper is that Chinese models are heavily reliant on US frontier models, distilling those down to get to open-source models that they can release to the world. In a sense, what the paper suggested is that the pace of innovation is still being driven by private models developed by frontier labs in the United States, and the rest of the world is following suit, pulling distillations out of those models that may be good for some use cases but are not as generally intelligent and are not appropriate for cutting-edge uses.

Within that context, Ben's statement needs some nuance, because I would argue that the methodologies used by the cutting-edge labs are defensible, and certainly their edge is defensible, so no one new is going to join the table of top model makers, which, frankly, has even lost members in the last year. Meta is not a top model maker anymore. Grok is trying to be but isn't leading anything right now. The top model makers are Anthropic, Google, and OpenAI. That's it. So in a sense, the model may become a commodity, intelligence may be in everything, and yet we still may have cutting-edge moats.

Let's move on to what else Ben talked about. One of the things he called out that we'll talk a fair bit about here is the adoption gap. Lots of people and companies have tried AI, but Ben made the point that far fewer use it daily in core workflows. I keep pounding this drum.
The difference between casual ChatGPT users and passionate professionals is night and day, 10x. And this is critical for teams, because one or two people on your team who are an eight or a nine or a ten out of ten in terms of their AI skill sets are going to run circles around everyone else. The blockers to adoption, the blockers to moving people that way, are really around motivation, the ability to understand what these models can do, and then, on the corporate side: how do you get them integrated? How do you handle governance and risk? And how do you roll them out?

One of the things that Andrej Karpathy talked about this week on X, which Ben didn't mention because it hadn't happened yet, is the idea that we need to be able to imagine LLMs as non-animal, alien intelligences at a high degree of fidelity so that we can understand how to work with them. Effectively, what he's saying is that we are, as a species, having our first contact with a new intelligence. And the better we can build a mental model of what that intelligence looks like and how it works, the more effectively we can partner together. This is not a scary doomsday first-contact movie. It's more that imagining how the intelligence works helps us to prompt better, work better, and collaborate better: all the boring stuff that's really important. This is something that Ben didn't get into, but I think it's really important. Having that imagination, that aha moment, in your teams is critical to enabling outsized leverage, outsized impact for the team.

So that was the heart of his message. That's what he talked about. That's 90 slides in just a few minutes. What are the deeper takeaways here? Number one: I think we've quietly crossed from miracle to inevitable utility.
This is much more subtle than a commoditization argument. I think Evans' talk marks a tipping point. AI is no longer being framed in most settings as "will this work, will we get there?" Instead, it's being framed as "obviously this works; where does the margin end up? Where do the winners end up?" That's especially true and top of mind this week, when we saw visual reasoning solved with Nano Banana Pro, and when we saw Meta's SAM 3 model drop and handle semantic search for video. We have these previously difficult spaces where we're seeing AI just work. And then we have confirmation from Google that Gemini 3 didn't have special tricks up its sleeve. It was classical pre-training and post-training of LLMs. There is no wall on training. You can just get bigger and better, train the same way you always have, and get a smarter model. That may sound like a banal observation, but knowing that it's true, and seeing the breakthroughs that we've had, we are now living in a world where this is inevitable. AI is going to be everywhere. AI has already solved enough problems to let us know that the scaling laws hold.

And if we assume it's everywhere, we need to ask a different set of questions. Where do we matter? Where do our companies matter? How do we set ourselves up as competitive players in this space? Those are becoming the relevant questions. And so the strategic risk isn't missing the AI moment. It's continuing to act as if this is a tunable or optional research-and-development play instead of inevitable infrastructure. If you don't go after it with every tool you've got, you're just not going to make it.
A smarter question to ask in that world is: if AI is as inevitable as spreadsheets have become, what parts of our value chain become just a feature in that world and are no longer competitive? That's a tight, interesting question to play with.

Deep takeaway number two: adoption isn't just slow, it is path-dependent, and it can trap you. Adoption is lumpy. Evans pointed that out. Lots of pilots, not a lot of deep usage. Some people use it a lot. Whether and where you choose to adopt shapes what becomes possible later. He didn't talk about that, but think about spreadsheets. The first teams that adopted them weren't just more efficient. They reorganized how information flowed through the business. They could model scenarios. They owned the numbers. They could self-serve. LLMs and agents are poised to do the same. So the pattern is going to be: you drop AI into one or two workflows. Those workflows shift how information is produced. They shift how it's consumed. And that in turn shifts which other workflows are now possible. So the non-obvious leadership problem for you is: if adoption is path-dependent, are we choosing the right beachheads?
I talk a lot about problem framing, about picking the right places to jump in with AI, and that's really the question in front of us as we confront an adoption challenge in our teams. Recent model evolution makes this an even sharper problem. Agent-native models (Gemini-class, etc.) aren't just better autocomplete. They're suited to many kinds of meaningful knowledge work: triage, coordination, follow-up, repetitive decision loops with clear constraints. If your first experiments are all "summarize this doc," you're never going to discover the compounding benefit of agent-assisted customer onboarding or agent-assisted engineering support. Essentially, the beachhead you picked constrains some of your paths forward. So "where should we try AI" is not a random sandbox question for a Friday afternoon. It is a path-design question. In other words, you will get compounding benefits or compounding costs depending on which workflows you choose. So look for important junctions in your organization's information-flow patterns and jump in there, because when you can create a change in that flow, you unlock a lot of downstream benefits. You unlock a lot of opportunity to use AI agents elsewhere.

Non-obvious takeaway number three: AI is going to turn you into a buyer with additional leverage, if you design for it. Evans' commoditization story has a second-order effect that most people aren't talking about. As models get closer to par in quality, as you get more model options, your power is going to increase as a purchaser of models, as long as you structure for that effectively. Enterprise AI conversations still turn too often on vendor lock-in. I have screamed about this a lot, and I'm going to say it again: don't say "we're an X-model shop."
Just be multi-model from the get-go. If you take Evans seriously, if you take me seriously, the long-term equilibrium is going to look like treating models as components and routing your workloads to different models based on cost, latency, data sensitivity, jurisdiction, and so on. That's not the reality in most of our orgs today. It is something we need to get to. So the non-obvious implication is: don't think about picking a winner model, or even a winner lab. Instead, think about building an architecture that lets you sit in the driver's seat in buyer conversations and lets you arbitrage models the way you want over time. Don't settle for lock-in.

Deeper takeaway number four: AI is eating the org chart, not just the tech stack. And it's not about layoffs. Evans focuses on tech cycles, but if you extend his logic, spreadsheets didn't just change software; they changed who needed to talk to whom, which roles became bottlenecks, and which functions gained political power, like finance and operations. Cloud didn't just move servers off premises. It shifted power from central IT to product and engineering. It accelerated the pace at which teams could experiment. AI will do the same for roles that are around coordination and synthesis versus roles that are mostly judgment and constraint-setting. Recent agent-style capabilities make this more concrete. A model that can read your emails, Slack, tickets, dashboards, you name it, and propose actions is effectively an informal chief of staff for every knowledge worker. And we should expect that by 2026. That doesn't just increase individual productivity.
It changes who needs an assistant, who needs a team, and where the bottlenecks in decision-making live. And so the non-obvious implication for you as a leader is: if you only think of AI as a tool rollout, you will miss that you are doing an org-design change at the same time. Some roles will shift from doing work to specifying, checking, and escalating that work. Other roles will shrink because the coordination overhead they manage gets automated away. So your span-of-control assumptions, your management layers, and your hiring plans are all going to need to adapt much faster than in previous cycles. Evans is giving you the technical story here, but I think we need to extend that out to the org story.

So where does this leave us? I want to suggest, especially at the end of one of the most jaw-dropping weeks I can remember in AI, that we need to be taking a step back regularly as leaders. When we have weeks like this, where I can barely count the number of significant developments (I've attempted to; it's half a dozen or so over the course of the week), we need to ask: does any of this change the strategic operating reality of the business that I am building? I think Evans' talk, "AI Eats the World," gives us a good framework for that, because it enables us to ask: is there something shifting the tech-adoption cycle here? Is there something shifting my org chart? Is there something changing about how information flows in my business? Is there something shifting in my vendor relationships and my power with vendors because of this unlock? The answer, if we ask, is often yes. But having the right questions to ask helps put us in the driver's seat
during times when the news cycle feels relentless on AI. And I've got to say, that's not going to stop. So my encouragement to you, if you're feeling overwhelmed and trying to think about how to sort all of this out, is to make a regular practice of stepping back and looking at the world like Evans does. Take a day, step back, get a whiteboard out, maybe get your senior team together, or just go for a walk in the woods, and figure out what this means for your business. Distill it down. Take your time, because that time to reflect is what is going to enable you to digest, synthesize, and form the core conviction you need to push your teams forward. A lot of what I'm talking about here is really the meat of where leadership and understanding of AI meet the road: where you need to be with your teams to drive them forward. And you can't do that if you don't have energy and conviction. That comes from having the ability to reset, digest, and synthesize all of these updates effectively, and then come back with fresh energy. So take that into the week, and I'll see you next week.
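To make takeaway three concrete, the routing idea from the transcript (treat models as interchangeable components and send each workload to whichever model satisfies its cost, latency, data-sensitivity, and jurisdiction constraints) can be sketched in a few lines. This is a minimal illustration, not anything from Evans' talk: the model names, prices, and latency figures below are invented for the example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelOption:
    name: str
    cost_per_1k_tokens: float      # USD per 1K tokens (illustrative numbers)
    p95_latency_ms: int            # typical tail latency
    handles_sensitive_data: bool   # e.g. deployable inside your own VPC
    jurisdictions: frozenset       # regions where it may process data

@dataclass(frozen=True)
class Workload:
    max_cost_per_1k: float
    max_latency_ms: int
    sensitive: bool
    jurisdiction: str

def route(workload: Workload, options: list) -> ModelOption:
    """Pick the cheapest model that satisfies every hard constraint."""
    eligible = [
        m for m in options
        if m.cost_per_1k_tokens <= workload.max_cost_per_1k
        and m.p95_latency_ms <= workload.max_latency_ms
        and (not workload.sensitive or m.handles_sensitive_data)
        and workload.jurisdiction in m.jurisdictions
    ]
    if not eligible:
        raise ValueError("no model satisfies the workload constraints")
    return min(eligible, key=lambda m: m.cost_per_1k_tokens)

# Hypothetical catalog: names, prices, and latencies are made up.
CATALOG = [
    ModelOption("frontier-large", 0.030, 900, False, frozenset({"US", "EU"})),
    ModelOption("mid-tier", 0.004, 400, True, frozenset({"US", "EU", "SG"})),
    ModelOption("small-local", 0.001, 120, True, frozenset({"US"})),
]
```

With this catalog, a sensitive Singapore-jurisdiction workload routes to `mid-tier`, while a low-latency US workload routes to `small-local`. The design point is the one in the transcript: the catalog, not any single vendor, is the unit you manage, so swapping or adding a model is a data change rather than an architecture change.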