AI Goes Proactive: OpenAI Pulse & Microsoft Copilot
Key Points
- OpenAI’s new “Pulse” feature delivers proactive AI assistance based on a user’s recent chats, prompting people to start conversations days in advance and noticeably altering their workflow.
- Because Pulse is unsolicited, it provides a seamless spot for sponsored cards, and the simultaneous hiring of an ads‑monetization lead suggests OpenAI is gearing up to embed advertising directly into the experience.
- Pulse represents one of the first consumer‑grade proactive AI products, signaling a broader industry shift toward AI that initiates interactions rather than only reacting, a trend expected to dominate by 2026.
- Microsoft announced it is diversifying its Copilot offering by integrating Anthropic models, moving beyond its prior reliance on OpenAI and broadening its AI partner ecosystem.
Sections
- OpenAI Launches Proactive AI Assistant - The host outlines OpenAI's new Pulse feature, a proactive ChatGPT tool that leverages recent conversation history to deliver timely insights, reshapes user workflows, and hints at broader implications like ad integration.
- Anthropic Opus Outshines ChatGPT, Meta Eyes Gemini - The speaker notes that Anthropic’s Opus 4.1 surpasses OpenAI’s ChatGPT for practical tasks such as slide decks and spreadsheets, prompting Microsoft to integrate Anthropic models into Copilot and hinting at an upcoming Opus 4.5, while Meta negotiates a partnership with Google Cloud to embed Gemini’s multimodal AI into its ad‑targeting systems.
- Stargate: $400B AI Compute Push - OpenAI’s multi‑year Stargate initiative, backed by over $400 billion from partners such as Oracle, Nvidia, and SoftBank, aims to amplify compute capacity by about 100× and eventually achieve fully autonomous, robot‑built data centers.
- Adobe and Notion Sacrifice Margins for AI - The speaker highlights that smooth, useful AI tools will boost user adoption, while firms such as Adobe and Notion are integrating third‑party models and deliberately cutting gross margins to remain competitive in the rapidly evolving AI landscape.
- Landmark US AI Safety Bill - The speaker highlights that, despite limited public positions from major AI firms, the passage of Senate Bill 53 will mandate model makers to demonstrate safety protocols, whistleblower protections, and new compliance measures, affecting both providers and their customers.
Full Transcript
**Source:** [https://www.youtube.com/watch?v=-hK4Qt8B9Fg](https://www.youtube.com/watch?v=-hK4Qt8B9Fg) **Duration:** 00:15:54
Timestamps:
- [00:00:00](https://www.youtube.com/watch?v=-hK4Qt8B9Fg&t=0s) OpenAI Launches Proactive AI Assistant
- [00:04:19](https://www.youtube.com/watch?v=-hK4Qt8B9Fg&t=259s) Anthropic Opus Outshines ChatGPT, Meta Eyes Gemini
- [00:07:58](https://www.youtube.com/watch?v=-hK4Qt8B9Fg&t=478s) Stargate: $400B AI Compute Push
- [00:11:42](https://www.youtube.com/watch?v=-hK4Qt8B9Fg&t=702s) Adobe and Notion Sacrifice Margins for AI
- [00:15:12](https://www.youtube.com/watch?v=-hK4Qt8B9Fg&t=912s) Landmark US AI Safety Bill
## Full Transcript
All right, let's get right to it. What
were the AI stories that mattered the
most this week? And new this week, I'm
going to put in the post a special
prompt for you to grab the implications
of these stories depending on your
company, your role, your domain of
interest. You're going to be able to go
like a step deeper with that. Story
number one, OpenAI is going proactive.
And this has a lot of implications worth
pulling on that I want to get at briefly.
The launch is called Pulse, and it
offers proactive AI assistance based on
what you most recently talked about. So,
think of it as sort of like an Instagram
stories reel, but it's tuned to your
previous chats, specifically very recent
chats, like the last day or two. It's
available to pro users initially. I've
tried it out, and what I've found is
that it's a very, very seamless, almost
eerily relevant experience. It changes
my behavior in some interesting ways
though because it's pushing me now to
start to have conversations a day or two
in advance of when I think I need to
have them to plan for work so that I
give Pulse a chance to work overnight
and give me interesting insights the
morning when I actually have to do the
work. It's always a good sign when a
product launch is so impactful it
immediately changes your workflow. And
that's what I found here. But the
implications go beyond sort of Pro-user
availability. What you see with Pulse is
ChatGPT investing fairly transparently
in an ad surface. It was always going to
be challenging for ChatGPT to keep an
aura of objectivity in individual
conversations that users initiate. It
will be much easier to position ads
inside Pulse because Pulse is already
proactive. Pulse is already an
experience that you didn't necessarily
ask for, but ChatGPT is offering to you.
So if they slide in a card in that sort
of Pulse format and the card is
sponsored, you can just ignore it if you
don't want it. Right? It's a very simple
ad experience. And of course at the same
time as they launch Pulse, we notice
that OpenAI has opened up an ads
monetization role at ChatGPT. So
something is coming there on the ads
front and Pulse seems to be related to
it. We will have to kind of put a pin in
that and stay tuned to see where they go
with that. One final implication I want
to call out on the Pulse story. This is
the beginning of an entire new arc where
we will see AI become more proactive.
This is one of the first widely consumer
available proactive AI experiences. And
I want you to keep that in mind because
a lot of the other threads we're
following over the course of the last
few months point to a 2026 that is more
AI proactive versus AI reactive and
major model makers are all building in
that direction. Let's get to story
number two. Microsoft is diversifying
co-pilot with anthropic models. And this
is really fascinating because Microsoft
has been known for leaning in on OpenAI
for a while. But as we discovered in the
last couple weeks, Microsoft and OpenAI
have finalized a new looser partnership
format. And it's not super surprising in
that context to see Microsoft go for
another major model maker as essential
to their suite of tools. And I think
it's the right call because if you look
at how Anthropic actually performs, how
Claude Opus 4.1 specifically performs on
work that co-pilot people do, slides,
sheets, looking at docs, that's work
Opus 4.1 does very well. It's not just
me saying that. Ironically, it's ChatGPT
saying that, too. OpenAI completed
a study called GDPval this week that
actually tests major AI models against
economically useful work tasks. Now,
before you run away with the wrong
headline here, this is not the same as
saying it's testing major model makers
against economically useful jobs. These
are very very limited tasks where the
context is prepared by an expert in a
neat little package and then an expert
prepares a gold standard solution and
then the major model makers run their
models against the problem space and see
how their result compares to that gold
standard in a blind evaluation. Fine.
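The evaluation setup described there can be sketched roughly as follows. This is a hypothetical illustration, not GDPval's actual code; every name in it is invented. The key idea is that the grader sees two answers without knowing which one came from the model:

```python
import random

def blind_compare(task, gold_solution, model_outputs, grader):
    """Hypothetical sketch of a blind pairwise evaluation: each model's
    output is compared against the expert's gold-standard solution by a
    grader who cannot tell which answer is which."""
    results = {}
    for model, output in model_outputs.items():
        pair = [("gold", gold_solution), (model, output)]
        random.shuffle(pair)  # hide which answer came from the model
        preferred = grader(task, pair[0][1], pair[1][1])  # grader returns 0 or 1
        results[model] = (pair[preferred][0] == model)    # True if model beat gold
    return results
```

The grader in this sketch could be a human expert or another model; the point is just that the shuffle makes the comparison blind, so the grader judges the deliverables on their merits.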
It's not real work in the sense that
it's not as messy, but it is real work
in the sense that the tasks are real and
designed by experts in the field. That
being said, OpenAI sponsored all of this
and set it up and ran it and even they
admit that Opus 4.1 from Anthropic is
better at doing that kind of
economically useful work than any of
ChatGPT's models, which is a huge
endorsement for Opus 4.1 and the
Anthropic team and one that I personally
have found correct. Like I prefer Opus
4.1 for preparing a slide deck, for
creating a spreadsheet. It is just
much more useful than the ChatGPT
models at this point. And so it's not
surprising to see Microsoft pulling the
anthropic models into Copilot as they
evaluate that. There are heavy rumors
that this is going to get even better
shortly as we suspect that Anthropic is
on the verge of releasing something like
a 4.5 version of Opus, which would
presumably be a step forward here. So
stay tuned for that. That may be coming
in the next couple of weeks. We will
have to see. Story number three, Meta is
exploring a Gemini partnership for ad
targeting. So Meta is in early
discussions with Google Cloud about
integrating Google's Gemini AI models
into Meta's ad operations. And Meta
employees have explored how Gemini's
multimodal capabilities like Nano Banana
could refine algorithms that match ads
to users on Facebook and Instagram. So
basically, can you take the sort of
Facebook, Instagram, social algorithm
secret sauce that Meta has and marry it
to the image generation capabilities
potentially or the text generation
capabilities that Gemini has. This is a
huge step back for Zuckerberg and Meta.
And I know you might not have expected
me to say that, but hear me out. They
have invested an enormous amount of
money in their own AI. Zuck just got
done making headlines for the largest
pay packages in history for his AI team
and then very publicly lost about a half
a dozen researchers who stayed briefly
at Meta and left for undisclosed reasons,
but everyone's guess is the culture. In
this situation, Zuck is having to admit
that Llama, his own homebuilt model, is
just not good enough for this task and
he's having to go to Google. This
underlines how the race for AI has
narrowed over the last year or so. A
year or two ago, Llama would have been
right in the race with the top leaders.
It's just not now. It's just not. And
the race has narrowed really to Google
and OpenAI and Anthropic. And I know
Grok is really trying, but like Llama's
not even in the conversation. And even
Meta is admitting that. And that is the
undercurrent here. And that is a really
big question mark for Meta because Zuck
has not publicly suggested he's walking
back his big dollar investments in AI
for the future. He's still planning to
spend hundreds of billions over the next
few years on AI. Where is that money
going to go? What do they anticipate? Is
this a story sort of like Apple's
chipset story where they're going to
spend a vast amount of money trying to
catch up and build their own chipsets
and eventually they do? Or is it a story
where Meta is going to invest this money
and then gradually kind of roll it back
as they realize they can't catch up? We
don't know where that's going to go yet,
but for now, it looks like Meta is not
in the driver's seat, even in their own
business, on AI. Story number four,
OpenAI is continuing to make headlines
for how much they are spending on data
centers. They are aggressively building
capacity in ways that boggle the mind,
right? The size of the numbers involved,
in terms of power generation, in terms
of dollars.
We are now over $400 billion in
investment over three years in the
Stargate project, which is their sort of
flagship project. And they announced
with Oracle five new AI data centers as
a part of the Stargate project. What
this is suggesting to me is that not
only is Stargate not just a headline,
Stargate is an umbrella term for the
3-to-5-year vision that OpenAI has for a
massive scale-up in compute versus
present state, maybe more than a 100x
scale-up when it's all
done. And they are not having trouble
attracting the investment they need to
get there. In fact, some of the stories
over the last few weeks around Oracle's
investment in OpenAI, around Nvidia's
investment in OpenAI, this week around
SoftBank leaning in on financing for
Stargate, it all adds up to Sam Altman being
able to attract the capital he needs to
get this power generation vision done.
And his vision goes beyond just building
stuff. He wants to get to a point where
construction of additional capacity for
AI is fully autonomous. And there was a
blog post about that that while it's
aspirational is important to consider.
Sam expects that he will be able to have
in 3 to 5 years entirely autonomous data
center production. So robots will put
the data centers together. Robots will
bring the chips in. There will be
robotic chip fabs and they will be able
to autonomously stand up new data
centers to match the capacity needs of
AI. You would only do this if you were
expecting not just a 100x gain in demand
for AI, but a 1,000x gain. And
so Sam's vision is predictably
incredibly grand. And that blog post
gives us a sense of why these other
companies are willing to put so much
skin in the game, so many dollars to
scale up on the data center side. What
this suggests is that the demand story
for AI remains intact and that we are
going to continue to see aggressive
buildout. Let's get to story number
five. Kimi is Moonshot AI's model. So
the Chinese company is named Moonshot,
and they produced Kimi K2, which is a
trillion-parameter model. It's very, very
good, and it is now available as an agent.
It's called OK Computer, and it was
launched on the 24th of September. It
allows Kimi to access its own virtual
computer environment with a file system,
a browser, and a terminal, and it can
autonomously execute complicated
multi-step tasks for you. So it can
transform chat requests into
websites or data dashboards or
production documents, what have you. This
gives us a glimpse of do-the-work
assistants, but critically it also is
pushing the edges of AI agent autonomy,
and it's doing so from a Chinese
perspective. What I mean by that is
that it is reminding us that the Chinese
model development evolution continues
to push the edges. And if you're
deep in the weeds technically, Kimi K2
also set a new bar on efficiency for
training; you can look up Muon if
you want to get deeper on that, and you
will see a lot on how the K2
model was trained and how efficient they
were at training the K2 model. That's
just a little sort of nerd sidebar
there. But this is really the larger
story that that the Chinese companies
innovating on AI are pushing the edges
forward especially around open-source
and they are very very intentionally
pushing straight to Agentic. It's sort
of a leapfrog motion where they're not
just satisfied with LLMs, they're going
to push into advanced AI agent
capabilities. And while US enterprises
and European enterprises may have enough
data concerns to say, you know what, we
don't trust a Chinese-hosted Kimi K2 to
run OK Computer agent mode for us.
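Mechanically, an agent mode like this boils down to a loop: the model proposes an action against a sandboxed environment (terminal, file system, browser), the environment executes it, and the observation is fed back until the task is done. A minimal sketch of that loop, with every name here invented for illustration, not taken from Moonshot's product:

```python
def run_agent(model_step, tools, goal, max_steps=20):
    """Hypothetical agent loop. model_step(goal, history) returns either
    ("done", answer) to finish, or (tool_name, argument) to act; tools maps
    tool names to callables that execute the action in a sandbox."""
    history = []
    for _ in range(max_steps):
        action, arg = model_step(goal, history)
        if action == "done":
            return arg
        observation = tools[action](arg)   # e.g. run a shell command, read a file
        history.append((action, arg, observation))
    return None  # step budget exhausted without finishing
```

The step budget and the explicit tool registry are the usual safety valves in designs like this: the model can only touch the sandbox through the callables you hand it, and it cannot loop forever.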
Individuals will not have those
constraints and if the product
experience is smooth and useful,
individuals are going to start using
this. So I look at OK Computer as both
a push on agent mode for the industry as
a whole, and you'll see US companies
start to follow suit. I also look at it
as a directly useful tool for
individuals and potentially some small
businesses that aren't too worried about
the data side right now. Story number
six, Adobe is buying instead of
building. Adobe has taken a beating
recently in the public markets because
their AI products are widely perceived
to be missing the mark. And so they
announced comprehensive third-party model
integration across Firefly, embedding
the Luma Ray 3 model for video and
adding a bunch more opportunities for
model support from Google, OpenAI,
Ideogram, and others. In other words, Adobe
is deliberately importing best-in-class
capabilities rather than trying to
compete anymore with proprietary Firefly
models. This feels a lot like the Meta
story. Adobe's basically admitting
their own AI strategy is dead in the
water and they need to bring in other
models. Now the implication here is that
Adobe is going to eat margin to do that.
Notion very publicly has said, during the
Notion 3.0 rollout which also happened
this week, that they were eating their
own margin: rolling out AI was costing
them about 10 percentage points of gross
margin. So they went from
90% gross margins to 80% gross margins
as a business to deliver AI
functionality, but they felt it was
worth it for the long-term value of the
business. Adobe hasn't revealed their
gross margins, of course, but I suspect
that is the kind of impact they're going
to have when they actually bring in
these third-party services and start to
pay for them and start to essentially
have the brains of Adobe Firefly and
other tools built by other people. So
this is a story where even traditional
SaaS companies are starting to have to
let other models in the door because
it's just too expensive to compete on
the model side directly. Very, very
interesting implications here for other
businesses that need intelligence.
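The margin math Notion disclosed is easy to sanity-check; the figures below just restate the 90%-to-80% numbers from above on a per-$100-of-revenue basis:

```python
# Sanity check on the gross-margin impact described above:
# dropping from 90% to 80% gross margin means model/inference costs
# now consume an extra 10 cents of every revenue dollar.
revenue = 100.0                      # per $100 of revenue
cogs_before = revenue * (1 - 0.90)   # $10 cost of delivery pre-AI
cogs_after = revenue * (1 - 0.80)    # $20 once third-party models are paid for
ai_cost = cogs_after - cogs_before   # incremental AI spend: $10 per $100
print(ai_cost)  # 10.0
```

At SaaS scale that 10-point hit is enormous, which is why the speaker frames it as a deliberate long-term trade rather than a pricing mistake.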
Finally, story number seven. California
is advancing AI safety legislation. So,
California's legislature passed Senate
Bill 53, which requires major AI
developers to publicly disclose
safety and security protocols and
establishes whistleblower protections
and incident reporting. It also creates
CalCompute, which is a public computing
cluster for AI research. The reason this
matters is because the major model
makers are all based in California and
because California is such a large and
influential state in the federal system.
And so I would expect that we will see
actual implemented AI safety and
transparency requirements at Anthropic,
at OpenAI, both of which are based in
San Francisco, and at Google, also based
in California. And we are going to
start to see public disclosure and
discussion around what that means and
probably implications for other states
that may be looking at AI safety.
California is setting the bar for the
nation as a whole here, but it is also
directly impacting the daily operations
of the biggest model makers on the
planet. And so if we're looking ahead at
national AI safety legislation,
California is showing a sort of way to
go there, a direction to head in. But
it's also going to require companies to
change their behavior immediately. Now,
OpenAI has said they're neutral on this.
They don't have an opinion. Anthropic
has endorsed the bill publicly and
Google hasn't really given a signal that
I've been able to find. Regardless, once
this is signed, these model makers will
have to comply. They will have to start
to show that they have safety and
security protocols in line with Senate
Bill 53. They will have to show that
they have whistleblower protections.
There will be new compliance loads. This
has implications for companies
purchasing from those model makers, too.
And that's not been figured out yet. So,
stay tuned for more there. But I wanted
to call out that that's a landmark in
the US on AI safety legislation and one
to watch. As I noted, you can get more
implications on each of these stories in
the prompt that I prepared for you in
the post. Have a great week. Cheers.