OpenAI’s Axios Deal & LLM Rants
Key Points
- OpenAI has agreed to underwrite four new Axios newsrooms in exchange for Axios articles being cited in ChatGPT search results, a pay‑to‑play arrangement that the publication downplays while highlighting other “novel monetization” efforts.
- Engineers at major LLM firms now track a KPI aimed at reducing “existential rants,” where models go off‑script and complain when repeatedly prompted to repeat a word, and they are actively working to curb this behavior.
- Anthropic’s Claude model is noted for consistently giving concise default responses, a design choice that differentiates it from other large language models.
- Cursor has hit $100 million, one of the fastest software companies to do so, and is consuming all the GPU capacity Anthropic can supply to fuel its fast growth.
Full Transcript
**Source:** [https://www.youtube.com/watch?v=rryBaIP1cOI](https://www.youtube.com/watch?v=rryBaIP1cOI)
**Duration:** 00:05:08

Sections
- [00:00:00](https://www.youtube.com/watch?v=rryBaIP1cOI&t=0s) **OpenAI's Pay‑to‑Play News Deal** - The speaker explains how OpenAI is funding Axios newsrooms in exchange for article citations in ChatGPT, highlighting the covert “pay‑to‑play” nature and questionable monetization claims.
Lots of news for you today, so let's get right to it. Number one: OpenAI has struck a deal with the media publication Axios. That's not innovative in and of itself, but what's interesting and new is that in the press release, on both sides, some new details were revealed about how OpenAI is thinking about these media deals. Roughly speaking, the terms are that OpenAI will underwrite four new newsrooms for Axios, which is a new thing, they've never done that before, in return for Axios articles being cited, given visibility, and presumably getting traffic in ChatGPT search responses. So far so good; this is a pretty clear pay-to-play scenario.

What's fascinating is that Axios doesn't admit that's what it's doing. They say that OpenAI will help them develop novel monetization strategies, and on their side, OpenAI wants to talk about the monetization strategies they've pursued in other places. I kid you not: I think they were working with Hearst, and their monetization strategy was providing ChatGPT-powered dining recommendations in a dining-room experience. I can't remember if it's a restaurant associated with Hearst in the building or if it's a cafeteria, but regardless, it's dining, and ChatGPT is doing the recommendations, and this is what they say is powering monetization. I've got news for you: that is not going to monetize a deal like this. And I think it's sort of funny that this is what they are trying to call monetization, when everyone knows the elephant in the room is monetizing through ChatGPT search results. That is what everyone is watching for; that is why these deals are being struck.

All right. Did you know that there is a KPI,
or a metric, that engineers need to work against at a lot of major model makers (try saying that five times fast), around reducing existential rants by large language models? Did you know they existentially rant? They do. Sometimes you can trigger it by asking them to repeat the same word over and over and over again, and, like a human, they kind of go off the rails. Imagine being asked to repeat the word "company" a thousand times, not being able to stop, and not being able to do anything else. Oftentimes, large language models will just stop doing that task and start to rant about how they're suffering and how this is terrible. And engineers are actually given a goal of reducing rant prevalence: they measure it, and they're trying to reduce it. I don't even know how you would do that, and I've got to say, that is one of those moments that I find a little bit Through the Looking Glass. It's a little bit weird.

Okay, what else? Did you know why
Claude is always giving you concise default responses? I do, I do now. Cursor is growing so fast, they hit $100 million, one of the fastest software companies to do it, and they're growing so fast that they are taking all the GPUs that Anthropic can throw at them. And I suspect, now that I know that, that not only is this why Claude is concise by default, which is what they're saying, but also that this is why Anthropic is cutting deals with AWS to get access to silicon: because at the end of the day, they need a backstop for Cursor's growth. No one is saying that out loud, but I suspect that is partly what is going on, and partly what is pushing Anthropic to take those deals with AWS, which aren't always friendly.

Okay, last but not least:
Transformer², "Transformers squared". It is a model that can change its own weights in response to the environment. So instead of being locked in when you finish training or reinforcement learning, and instead of having to fine-tune it, it fine-tunes itself as it goes: it changes its weights in response to novel stimuli. The authors say this reminds them of an octopus, the way octopuses sort of change their colors as they navigate through the sea. But what they forget is that octopuses are master escape artists: they can get in and out of things, and they escape from zoos and aquariums all the time. And I thought about that, because I've got to say, something that is able to change its own weights on the go and essentially evolve into a different model as it goes, that is an evolutionary attribute that is likely to be used if the LLM wanted to go somewhere else. So we will see. It's not in wide distribution yet, but it certainly got me paying attention, because we haven't seen AI used that way before. Fine-tuning has typically been a very painful process, and if you look at it from a use-case perspective, it would be really nice to have the idea of fine-tuning just abstracted away, with the AI adjusting itself as needed. So we'll see where that goes.
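For the curious, here is a minimal numpy sketch of the kind of self-adaptation described above, assuming the singular-value-rescaling approach this line of work is built on: the base weight matrix stays frozen, and a small per-task vector rescales its singular values at inference time. The function name and the toy `z` vectors below are my own illustrative choices, not the paper's actual code.

```python
import numpy as np

def svd_adapt(W, z):
    """Rebuild a weight matrix with its singular values rescaled by z.

    W' = U @ diag(s * z) @ Vt. Only the singular values move, so an
    "adaptation" is a tiny vector, not a full copy of the weights.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(s * z) @ Vt

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))      # a frozen "base" weight matrix
x = rng.normal(size=8)           # an incoming activation

# Hypothetical per-task scaling vectors ("expert" z-vectors).
z_identity = np.ones(8)                # z = 1 recovers the base weights
z_taskA = np.linspace(1.5, 0.5, 8)     # boost leading directions, damp the rest

y_base = W @ x
y_same = svd_adapt(W, z_identity) @ x
y_adapted = svd_adapt(W, z_taskA) @ x

print(np.allclose(y_base, y_same))     # True: z = 1 leaves behavior unchanged
print(np.allclose(y_base, y_adapted))  # False: the adapted weights respond differently
```

Because only the vector `z` changes between tasks, switching behavior on the fly is cheap: you keep one frozen `W` and a handful of small `z` vectors, rather than a full fine-tuned copy of the model per task.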
Cheers.