Learning Library

← Back to Library

AI Roundup: Atlas, Anthropic Skills, Apple M5

Key Points

  • OpenAI released the Atlas browser as an MVP, using its massive ChatGPT user base to gather rapid feedback and personalize browsing through integrated chat memory, signalling a focus on quick iteration and personalization across its products.
  • Anthropic introduced “agent skills,” a reusable prompting layer that’s being quickly adopted and remixable across Claude’s API, UI, and even ChatGPT, marking a shift toward a three‑tier prompting architecture that other model makers are likely to emulate.
  • Apple’s new M5 laptop hit stores this week, boasting the highest‑performing GPU for AI workloads, underscoring the growing importance of consumer‑grade hardware optimized for machine‑learning tasks.
  • Both OpenAI and Anthropic are leveraging rapid shipping cycles and community feedback (e.g., GitHub stars for Anthropic skills) to accelerate feature development and establish new standards in AI interaction.
  • The overarching trend highlighted is the move toward more personalized, modular AI experiences—whether through memory‑aware browsers or skill‑based prompting—across both software and hardware ecosystems.

Full Transcript

# AI Roundup: Atlas, Anthropic Skills, Apple M5 **Source:** [https://www.youtube.com/watch?v=uCVjXKFyiEQ](https://www.youtube.com/watch?v=uCVjXKFyiEQ) **Duration:** 00:09:55 ## Summary - OpenAI released the Atlas browser as an MVP, using its massive ChatGPT user base to gather rapid feedback and personalize browsing through integrated chat memory, signalling a focus on quick iteration and personalization across its products. - Anthropic introduced “agent skills,” a reusable prompting layer that’s being quickly adopted and remixable across Claude’s API, UI, and even ChatGPT, marking a shift toward a three‑tier prompting architecture that other model makers are likely to emulate. - Apple’s new M5 laptop hit stores this week, boasting the highest‑performing GPU for AI workloads, underscoring the growing importance of consumer‑grade hardware optimized for machine‑learning tasks. - Both OpenAI and Anthropic are leveraging rapid shipping cycles and community feedback (e.g., GitHub stars for Anthropic skills) to accelerate feature development and establish new standards in AI interaction. - The overarching trend highlighted is the move toward more personalized, modular AI experiences—whether through memory‑aware browsers or skill‑based prompting—across both software and hardware ecosystems. ## Sections - [00:00:00](https://www.youtube.com/watch?v=uCVjXKFyiEQ&t=0s) **OpenAI Unveils Atlas Browser MVP** - The speaker outlines how OpenAI quickly rolled out its Atlas browser MVP, leveraging its massive user base for rapid feedback and unique ChatGPT memory integration to deliver a personalized browsing experience. - [00:03:21](https://www.youtube.com/watch?v=uCVjXKFyiEQ&t=201s) **Agentic AI Futures and Browser Risks** - The speaker outlines two hardware‑centric AI futures—a near‑term OpenAI agent tied to macOS on M5 chips and a longer‑term speculative native AI OS—while also highlighting a growing AI‑browser security crisis marked by prompt‑injection vulnerabilities and ineffective human‑oversight defenses. - [00:06:36](https://www.youtube.com/watch?v=uCVjXKFyiEQ&t=396s) **Invest to Unlock AI ROI** - The speaker argues that only firms willing to pour substantial resources into proper AI architecture and AI‑native teams achieve rapid productivity gains, while those seeking shortcuts end up blaming the technology. ## Full Transcript
0:00I spent more than 20 hours following AI 0:02news so that I could get you these six 0:03stories in less than 10 minutes. These 0:05are the ones that mattered. OpenAI 0:07launches the Atlas browser number one. 0:09This was an MVP in public. We see the 0:11OpenAI team aggressively collecting 0:13feedback and prioritizing features and 0:15going back and improving the product 0:17already. I expect a lot more of that in 0:20the future. What we're seeing is a team 0:22that does not yet have a browser that is 0:25as good as other AI powered browsers out 0:27there. Nevertheless, shipping, 0:29leveraging OpenAI's install base to get 0:31rapid feedback from lots and lots of 0:33people and then using their quick ship 0:36ability to improve and build on it. They 0:38say they're using codecs a lot to build. 0:40I expect rapid ships from the team to 0:42improve this browser based on the 0:43extensive feedback that they're getting. 0:45One thing to keep in mind, there is an 0:47advantage that OpenAI has here that they 0:49are leveraging for this product and 0:50others that nobody else has. They have 0:53your chat GPT memories and they are 0:55bringing that in for a personalized 0:57browser experience and they'll be 0:58bringing that into other relevant AI 1:01products as you go forward. Another 1:02example of that is the inapp experiences 1:05that they have with chat GPT where you 1:07can launch a app within an app that is 1:10going to carry Shad GPT memories too. So 1:12look for them to keep leaning into the 1:14personalization angle across the product 1:16surface going forward. Story number two 1:18is about the Anthropic agent skills 1:20launch. That was technically last week. 1:23What is the story this week is how 1:25quickly it's being adopted. Anthropic 1:27has a GitHub star rating, which take it 1:30for what it is, right? GitHub stars are 1:32are only one measure, but it is 1:35exploding faster at this stage than MCP 1:37was. It's basically a vertical line up 1:40because people are using and remixing 1:42skills. I think part of why is that 1:44Anthropic chose to launch skills 1:46simultaneously as a useful feature in 1:48the API in cloud code and also in the UI 1:52and so it's become something that very 1:54quickly you can see being useful across 1:57all of Claude's surfaces and indeed I 2:00tested it you can juryrig to be useful 2:03on chat GPT so skills are going to be a 2:06big thing and what matters here is that 2:08we are getting to an architecture of 2:10prompting that's different. Before we 2:12had the prompt and the context window, 2:15Anthropic is introducing a third layer 2:17where you have the prompt, you have the 2:18reusable skill pattern, whatever you 2:20want to call it, and you have the rest 2:22of the context window. I would expect 2:24other major model makers to launch 2:27competing products fairly soon or to 2:30say, you know what, Skills is the new 2:32default for this. We're just going to 2:33adopt Skills, which is exactly what 2:35other major model makers eventually did 2:37with model context protocol. Story 2:39number three, hardware. Apple launched 2:41the M5 laptop. It went on sale this 2:44week. It's specifically relevant because 2:47it has peak GPU compute performance for 2:50AI. They are building AI hardware 2:53capabilities into the Mac laptop. And 2:55you should pay attention to that because 2:58the related story is the acquisition of 3:00Sky by OpenAI. And that matters because 3:04Sky is the best team on the planet at 3:08figuring out the relationship between 3:10natural language queries and the Mac OS 3:12operating system. That is what they were 3:14good at. That is what they were building 3:16and that is why OpenAI acquired them. 3:18And there are two possible futures here 3:21that are both relevant from a hardware 3:22perspective. Future number one, probably 3:25earlier horizon is we see OpenAI launch 3:27something that is Agentic that is tied 3:29into the Mac operating system that 3:31enables longerterm agentic work across 3:33your local computer system and that 3:36would be Mac specific and it would be 3:37tied to M5 hardware probably. Future 3:40number two is longer term. You can use 3:43all your learning from that if you're 3:44open AI to build a native AIO OS and 3:47that's something that is speculative. 3:49We'd have to see what that looks like. 3:51But the more work that's being done on 3:53understanding how LLMs interact with the 3:56environment, the more you see that 3:58direction start to emerge from major 4:00model makers. Story number four is the 4:03AI browser security crisis. This calls 4:06back to the launch of Atlas. Security 4:08researchers continue to discover 4:10critical vulnerabilities across AI 4:12native browsers and there is no answer 4:15today. Simon Willis, a prominent 4:18engineer, observed that the current plan 4:21for protecting AI native browsers 4:23appears to be make sure the user is 4:25watching so the AI browser doesn't do 4:27anything it shouldn't. As he observed, 4:29that is not a plan. That does not work. 4:32And that gets at one of the current 4:33challenges with most of these browsers. 4:36They depend on some degree of human 4:38oversight, which begs the question, are 4:39they really saving us time? You might 4:41wonder what these vulnerabilities look 4:42like. The classic one is a prompt 4:44injection attack where a browser goes to 4:48a page that may have instructions that 4:51fit inside a context window that ask the 4:54AI to do malicious things. Tell me the 4:56details in the Gmail account. Give me 4:58the credentials for a bank account. You 5:00can ask for user personal information 5:03that you should not be able to get. You 5:05can write that command into a web page 5:08and the the LLM will just take it. And 5:10by the way, this is also something that 5:12is potentially a risk for apps and web 5:16interfaces for LLMs as well. Let's say 5:18you're not in uh a browser, you're not 5:20in Atlas. Let's say you're in the cloud 5:22browser, you're in the chat GPT browser, 5:24and you upload a doc or a file and that 5:27has a malicious prompt in it. It is 5:30possible. It has been shown and 5:32demonstrated that you can get the LLM to 5:34take that prompt seriously and to 5:36respond to it. So prompt injection 5:38remains something that we don't have as 5:40good a hedge against as we would like 5:42and I think the front lines for this are 5:44on the browser side. There is too much 5:46capital being poured into AI and AI 5:48powered browsers to think that we are 5:51not going to see a solution here. But we 5:53don't have a solution now and we're 5:55going to have to wait and see how people 5:56are going to build to get there. Story 5:58number five is all about and AI 6:00productivity gains. Cityrg CEO Jane 6:03Fraser revealed on the October 14th 6:05earning call that AI deployment frees up 6:08a 100,000 hours of developer time per 6:10week, which is equivalent to adding 50 6:13full-time devs annually for every week 6:15saved. Now, I always take some of these 6:17claims with a grain of salt, but they're 6:20usually based in a strong core of 6:25truthful fact because it's a public 6:27earnings call and you get sued 6:29otherwise. And so there is something 6:30here that is relevant. 6:34My my call to action for you, the thing 6:36that I have heard this week that makes 6:38this really relevant is take the idea 6:41that correctly framed AI deployments can 6:44save time very very seriously. But also 6:48note that these are companies that are 6:50investing huge amounts in getting AI 6:53deployments correct to begin with. And I 6:55think that one of the patterns I'm 6:57seeing is that you see a whole host of 6:59companies who claim to be committed to 7:02AI be unwilling to invest the 7:05considerable resources needed to get 7:07these deployments correct and then they 7:09tend to ring their hands and they're the 7:11ones in the MIT study that complain 7:13about not seeing ROI. While a few 7:15companies are willing to invest what it 7:17takes and are already reaping enormous 7:20benefits from AI because they were 7:22willing to invest at the top. There is 7:24no shortcut is what I'm saying. You have 7:27to invest in a gentic architecture and 7:29the teams that you need to have that. If 7:31you're a small startup, your team should 7:32be AI native from the get-go. And 7:34there's no shortcut to any of this. If 7:36you want to have the kind of wildly 7:40successful claims to AI productivity 7:42that we're seeing in the market, you 7:44have to be willing to aggressively 7:47invest in restructuring your company and 7:49your tech stack to do so. And what I 7:51notice is that the companies that do 7:53that are the ones that end up getting to 7:56ROI faster. And the ones that don't end 7:58up telling me, I don't think AI works or 8:00I think it's a model problem. Don't 8:02blame the models. If you're getting 8:03productivity like this, it's not a model 8:05issue. Last but not least, story number 8:07six. Meta has laid off approximately 600 8:10positions within its AI division to 8:12streamline operations to improve 8:14efficiency. The cuts affect Meta's AI 8:16infrastructure, including fundamental AI 8:19research and product related roles. At 8:21some point, you have to ask yourself, is 8:23Meta in trouble on Llama and AI because 8:28they don't have the talent because they 8:30have too much talent or because their 8:32strategy is incorrect? I am beginning to 8:35think it is the latter. And the reason 8:37why is they just got done hiring a 8:40tremendous number of people at very high 8:42ticket prices and now they're dumping a 8:43lot of people back out. 8:46It's probably not a talent problem. It 8:48is probably a strategy issue. And so the 8:51thing that I'm asking myself is given 8:53that Meta continues to fall behind on 8:56the AI race, is it possible to catch up? 8:58Meta is putting lots and lots of dollars 9:00on this, but I don't know given their 9:03current shipping pace if they're going 9:04to be able to catch up to where frontier 9:06models from Gemini, from Anthropic, 9:08OpenAI, maybe from Quen, from Grock are 9:12today because anything that they do now 9:15is not going to see the light of day for 9:17months. The other models will be farther 9:18ahead. It is a race that becomes more 9:21difficult to catch up the longer you 9:22wait. Last but not least, short bonus 9:25snippet to pay attention to. Both 9:27Anthropic and OpenAI launched major 9:30features focused around memory and 9:32company knowledge this week. I have 9:33tested them. They are fairly recency 9:36focused and fairly narrowly scoped in 9:38what they can search. They still 9:40represent a move in the direction that 9:42model makers want to go. I would expect 9:44to see much more significant releases 9:46here behind the scenes quietly extending 9:49the capabilities and the connections 9:50that they can make to data in coming 9:52months. S of luck.