
AI Surge: $100B Fund, Cost Debate, O1 Preview

Key Points

  • Microsoft and BlackRock announced a $100 billion AI fund, signaling confidence that the AI boom is far from peaking and betting on massive training infrastructure for the mid‑to‑late 2020s.
  • A Washington Post piece on AI energy use was challenged by a senior tech policy fellow who calculated the cost of a GPT‑3 call to be about 2 cents—roughly 370 times cheaper than the Post’s estimate—highlighting the need for accurate cost reporting.
  • Accurate cost metrics are crucial for meaningful debates about AI’s expense versus the opportunity cost of human labor, such as comparing an AI‑generated email to one written by a person.
  • OpenAI's newly released o1-preview model is already producing functional apps, exemplified by a sleek weather app that only required plugging in a public API, sparking interest in building similar tools.
  • The speaker teases an upcoming Maven course that will dive deeper into these topics and guide listeners on leveraging AI developments.

Full Transcript

**Source:** [https://www.youtube.com/watch?v=rnDbLdW5wMI](https://www.youtube.com/watch?v=rnDbLdW5wMI)
**Duration:** 00:09:06

Sections:

  • [00:00:00](https://www.youtube.com/watch?v=rnDbLdW5wMI&t=0s) **AI Billion‑Dollar Bet & Energy Controversy**: The speaker surveys recent AI news, highlighting Microsoft and BlackRock's $100 billion AI fund, a disputed Washington Post report on AI energy consumption, and a teaser for an upcoming Maven course, arguing that the AI boom shows no signs of peaking.
[0:00] You know, the last few days in AI have been absolutely wild. I just want to run through a few of the things that have happened, plus a little bit of broader economic news, and I want to talk to you about my Maven course that's coming, so stay tuned. There's a bunch to get through.

[0:13] First thing: Microsoft and BlackRock have launched a $100 billion fund for AI. So if you think the AI bubble is at its peak, if you think it's a bubble, if you think it's going to pop: it's not at its peak yet if they're launching hundred-billion-dollar funds. They're expecting to get to a supercluster, or some kind of massive training center, in the mid-to-late 2020s, maybe 2027 or 2028. It may be in time for the training run that gets to artificial general intelligence, depending on what you think about that and depending on your timelines. They don't know either, by the way; that's why this is all very uncertain. They are taking a bet, a hundred-billion-dollar bet, that it is worth getting into that market. So that's number one.

[0:56] Number two: the Washington Post published a big article on the energy consumption of AI, and it may not be as accurate as they say. The reason I call that out is that there was a very widely cited post on X, the site formerly known as Twitter, that basically took apart the components of an LLM call and came out with a very different number: 370 times less expensive than the number cited by the Washington Post. Now, you can say, "Hey, it's X, I don't buy it; this guy is a senior policy fellow in tech, and what would he know?" But even if you don't buy it, the WaPo article's cost actually came out to around 2 cents per call, and that's cheaper than a letter, and we don't complain about letters being expensive. We don't. Now, not a lot of us send letters anymore because emails are easier,
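The 2-cents-per-call figure can be sanity-checked with a quick back-of-the-envelope calculation. A minimal sketch: the per-1K-token prices and token counts below are illustrative assumptions on my part (roughly GPT-3-era pricing), not figures from the video.

```python
# Back-of-the-envelope cost of a single LLM API call.
# ASSUMPTIONS (not from the video): illustrative per-1K-token prices
# and a short-email-sized request.

def call_cost(prompt_tokens, completion_tokens,
              price_in_per_1k, price_out_per_1k):
    """Dollar cost of one call given per-1K-token prices."""
    return (prompt_tokens / 1000) * price_in_per_1k + \
           (completion_tokens / 1000) * price_out_per_1k

# Example: ~500 tokens in, ~300 tokens out, at a hypothetical
# $0.02 per 1K tokens for both input and output.
cost = call_cost(500, 300, 0.02, 0.02)
print(f"${cost:.4f} per call")  # 800 tokens * $0.02/1K = $0.016, about 2 cents
```

Under those assumptions a generated email lands right around the 2-cent mark, which is the scale the speaker argues the debate should be anchored to.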
but still, the point being: if we're going to talk about the costs of AI, we need to talk about them accurately, in terms of the real costs involved. We should not cite GPT-3, which is what the Washington Post article did, and we need to be really clear about the opportunity cost as well. What is the opportunity cost of a human writing an email versus an AI writing an email? That was one of the prominent examples in the Post. So, more to come there, but I wanted to flag it because I've seen a lot of comments coming in basically saying AI is really expensive, and I think articles like the Post's need to be accurate in their reporting of cost so that we can have a real conversation about this.

[2:34] Next, I wanted to call out o1-preview. o1-preview has been in the news recently; I've talked about it on this channel briefly. We have now had it for a few days, and we are now getting apps built by o1-preview. Frankly, I saw a weather app built by o1-preview that looks nicer than the weather app on my phone. All it needed was an API to be functional, and those APIs for weather are ubiquitous; you can just plug them in and go. Makes me want to build a weather app for my phone; maybe I should do that. But that's an example of how quickly it can put together a fully fledged application in just one or two prompts.

[3:13] One of the things I've noticed with o1 is that it is really, really helpful at debugging code, and I'm not the only one. I gave it some code written by Sonnet two nights ago; it took 30 seconds to think about it and came back with a much cleaner structure. It successfully pulled Sonnet out of a death spiral with the code, and I got the code working. I've heard other people I know talking about o1 debugging 3,000 lines of code and finding a single character out of place.
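Weather APIs of the kind the speaker describes really are plug-in-and-go. A minimal sketch using the free, keyless Open-Meteo forecast endpoint; the Berlin coordinates and the exact fields pulled out are just illustrative choices, and the parsing is split into its own function so it can be exercised without a live network call.

```python
import json
import urllib.request

# Free, keyless public weather API (Open-Meteo).
BASE = "https://api.open-meteo.com/v1/forecast"

def build_url(lat, lon):
    """Request URL for the current weather at a lat/lon pair."""
    return f"{BASE}?latitude={lat}&longitude={lon}&current_weather=true"

def parse_current(payload):
    """Pull temperature and wind speed out of an Open-Meteo response dict."""
    cur = payload["current_weather"]
    return {"temp_c": cur["temperature"], "wind_kmh": cur["windspeed"]}

if __name__ == "__main__":
    # Live call (requires network access):
    # with urllib.request.urlopen(build_url(52.52, 13.41)) as resp:
    #     print(parse_current(json.load(resp)))
    sample = {"current_weather": {"temperature": 18.3, "windspeed": 11.2}}
    print(build_url(52.52, 13.41))
    print(parse_current(sample))
```

From there, the "app" is mostly UI around these two functions, which is exactly the part the speaker says o1-preview generated in one or two prompts.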
[3:49] So that matches my anecdotal experience: it's really good at debugging. The other thing I want to call out is that it's not just code. People are identifying it as a good editor, and a good editor is different from a good composer. When you write with Sonnet, Sonnet tends to overwrite: it tends to write an entire text, and then if you tell it to change something, it just rewrites the whole text. Whereas o1 can be more nuanced about the changes; it can use a scalpel. It can precisely adjust things so that they sound better.

[4:20] We're still at the beginning of discovering what o1 can do, but the way I'll leave this, the challenge I have for you, is: if you don't think o1 can do something, if you don't think a large language model can do something, why haven't you tried it? If you say, "Oh, it's the inputs. o1 won't accept images, o1 won't accept whatever it is," there are ways around that. You can take the text and stick it into a prompt, and now you have the full text of the document in the prompt, just as an example. You can also switch models partway through the chat: you can be talking to GPT-4o and then partway through the chat switch over to o1 and see what happens.

[5:04] Another creative idea that I came across, and that I really love, is this: pick a task that you don't love, record yourself doing it in a Loom video, and talk your way through it. Then take the transcript, upload it to an LLM, and ask it what can be automated. Just see what can be automated. I would be really curious; I bet it has some ideas, and I bet it could help you automate. So we're at the stage where my bias is to ask the LLM, and I think that's a huge difference.

[5:41] You can actually see it in the Chatbot Arena scores. Chatbot Arena, for those who don't know, is a gigantic, well, that's what it sounds like: an arena where people basically rank various LLMs against each other to see what the quality of the answers is. It's crowdsourced, so no given company can really game it, and the problems are crowdsourced too, so again, nobody can really game it.

[6:08] And I will tell you, it is an absolutely jaw-dropping graph. When you look at o1-mini and o1-preview, they are like 100 points better than any other model. And by the way, 100 points is a lot. Every other model right now is within the same 50-to-75-point radius, somewhere around the 1,200-to-1,250 mark, roughly, in Elo ratings, which is basically a mathematical way of estimating relative strength versus another player. It's used in chess a lot, actually, but now it's used to estimate chatbot competency. This is for mathematics: o1-preview and o1-mini are over 1,350. They're much, much better, a step change better than other chatbots. I call that out because there's been a lot of talk about mathematical reasoning and the ability of a model to do reasoning, and this is just the preview. Sam Altman has called out that reasoning is about to get better; I think he described o1-preview as being at the GPT-2 stage, and he thinks reasoning can get immensely better. He says that the full o1 model is coming in just a few months. So if you think, "Well, there are weaknesses in o1," just wait a couple of months. You'll be surprised.

[7:30] Okay, so we've talked about o1, and we've talked about some of the AI news. Now I want to talk about economic news. The Fed cut rates by half a point. That is a big deal. It means there is more likely to be capital in the system for tech, and you thought the hundred-billion-dollar fund was capital. There's more likely to be capital available for startups as a whole, not just AI startups. There's more likely to be an appetite for hiring, so it's good for jobs. People are more likely to be confident in the economy, and housing prices are going to get a little boost because mortgages are cheaper. It's good for everybody. We've been waiting for this rate cut a long time, and there may be a couple more coming later this year; we will see. They cut the rates by more than expected, by the way: the expectation was just a quarter of a point, so they doubled that to half a point. We'll see where this lands, but it's very, very good news for those of us in tech who have been waiting a long, long time for rate cuts to loosen capital in the space.

[8:34] Last but not least, I want to call out that my Maven course is live. Folks have already enrolled, and I am excited to launch it. If you are interested in signing up, getting on the wait list, or enrolling, you can check out the link I'll post below, and I will post a special discount code for you in the chat underneath. There you go, for what it's worth. And that's what I've got. Please enjoy.
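A footnote on the Elo ratings mentioned in the Chatbot Arena discussion: under the standard Elo formula, a rating gap maps directly to an expected head-to-head score. The sketch below uses the classic chess Elo expectation (the Arena's exact rating methodology aside); the 100-point gap is the one the speaker cites.

```python
# Expected score of player A vs. player B under the standard Elo model
# (win probability, with draws counted as half a point).
def elo_expected(rating_a, rating_b):
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

# A ~100-point lead, like o1-preview over the 1,200-1,250 pack, means the
# higher-rated model is expected to win roughly 64% of pairwise matchups.
print(round(elo_expected(1350, 1250), 2))  # 0.64
```

That is why 100 points is "a lot": it is the difference between a coin flip and winning nearly two out of three head-to-head comparisons.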