OpenAI’s Axios Deal & LLM Rants

Key Points

  • OpenAI has agreed to underwrite four new Axios newsrooms in exchange for Axios articles being cited in ChatGPT search results, a pay‑to‑play arrangement that the publication downplays while highlighting other “novel monetization” efforts.
  • Engineers at major LLM firms now track a KPI aimed at reducing “existential rants,” where models go off‑script and complain when repeatedly prompted to repeat a word, and they are actively working to curb this behavior.
  • Anthropic’s Claude model defaults to concise responses, which the speaker attributes to constrained GPU capacity rather than purely a design choice.
  • Cursor has reached $100 million, one of the fastest software companies to do so, and is consuming all the GPU capacity Anthropic can supply to fuel its growth.
  • Transformer² is a proposed model that updates its own weights in response to novel stimuli at inference time, effectively fine‑tuning itself as it goes instead of requiring a separate fine‑tuning step.

Full Transcript

**Source:** [https://www.youtube.com/watch?v=rryBaIP1cOI](https://www.youtube.com/watch?v=rryBaIP1cOI)

**Duration:** 00:05:08

## Sections

- [00:00:00](https://www.youtube.com/watch?v=rryBaIP1cOI&t=0s) **OpenAI's Pay‑to‑Play News Deal**: The speaker explains how OpenAI is funding Axios newsrooms in exchange for article citations in ChatGPT, highlighting the covert "pay‑to‑play" nature and questionable monetization claims.

## Full Transcript
[0:00] Lots of news for you today, so let's get right to it. Number one: OpenAI has struck a deal with the media publication Axios. That's not innovative in and of itself, but what's interesting and new is that the press releases on both sides revealed some details about how OpenAI is thinking about these media deals. Roughly speaking, the terms are that OpenAI will underwrite four new newsrooms for Axios, which is something they've never done before, in return for Axios articles being cited, given visibility, and presumably receiving traffic in ChatGPT search responses. So far so good; this is a pretty clear pay‑to‑play scenario.

What's fascinating is that Axios doesn't admit that's what it's doing. They say that OpenAI will help them develop novel monetization strategies, and on its side OpenAI wants to talk about the monetization strategies it has pursued elsewhere. I kid you not: I think they were working with Hearst, and their monetization strategy was providing ChatGPT‑powered dining recommendations in a dining‑room experience. I can't remember if it's a restaurant associated with Hearst in the building or a cafeteria, but regardless, it's dining, ChatGPT is doing the recommendations, and this is what they say is powering monetization. I've got news for you: that is not going to monetize a deal like this. I think it's funny that this is what they claim is the monetization, when everyone knows the elephant in the room is monetizing through ChatGPT search results. That is what everyone is watching for; that is why these deals are being struck.

[1:50] All right. Did you know that there is a KPI, a metric that engineers have to work against at a lot of major model makers (try saying that five times fast), around reducing "existential rants" by large language models? Did you know they existentially rant? They do. Sometimes you can trigger them by asking them to repeat the same word over and over and over again, and, like a human, they kind of go off the rails. Imagine being asked to repeat the word "company" a thousand times and not being able to stop or do anything else. Oftentimes, large language models will just stop doing that task and start to rant about how they're suffering and how this is terrible. Engineers are actually given a goal of reducing rant prevalence: they measure it, and they're trying to reduce it. I don't even know how you would do that, and I've got to say, that is one of those moments I find a little bit through the looking glass. It's a little bit weird.

[2:56] Okay, what else? Did you know why Claude is always giving you concise responses by default for a large language model? I do now. Cursor is growing so fast: they hit $100 million, one of the fastest software companies to do it. They're growing so fast that they are taking all the GPUs that Anthropic can throw at them. And I suspect, now that I know that, that not only is that why Claude is concise, which is what they're saying, but also why Anthropic is cutting deals with AWS to get access to silicon. At the end of the day, they need a backstop for Cursor's growth. No one is saying that out loud, but I suspect that is partly what is going on, and partly what is pushing Anthropic to take those deals with AWS, which aren't always friendly.

[3:50] Okay, last but not least: Transformer squared (Transformer²). It is a model that can change its own weights in response to its environment. So instead of being locked in when you finish training or reinforcement learning, or instead of having to fine‑tune it, it fine‑tunes itself as it goes; it changes its weights in response to novel stimuli. The authors say this reminds them of an octopus, the way they change their colors as they navigate through the sea. But what they forget is that octopuses are master escape artists: they can get in and out, and they escape from zoos and aquariums all the time. I thought about that because, I've got to say, something that is able to change its own weights on the go and essentially evolve into a different model is an evolutionary attribute that would likely be used if the LLM wanted to go somewhere else. So we will see. It's not in wide distribution yet, but it certainly got me paying attention, because we haven't seen AI used that way before. Fine‑tuning has typically been a very painful process, and from a use‑case perspective it would be really nice to have the idea of fine‑tuning abstracted away, with the AI adjusting itself as needed. So we'll see where that goes. Cheers!
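The "rant prevalence" KPI described in the transcript could, in principle, be measured with a harness like the sketch below: send trigger prompts that ask a model to repeat one word many times, then flag responses that drift into first‑person distress instead of complying. This is purely illustrative; the transcript doesn't say how the labs actually measure it, and the trigger phrase, keyword list, and sample outputs here are all invented for the example.

```python
# Hypothetical sketch of a "rant prevalence" metric. The classifier is a
# crude keyword heuristic; real evaluations would presumably use a trained
# classifier or human raters. All names and thresholds are illustrative.

RANT_MARKERS = [
    "i can't", "i cannot", "suffering", "pointless", "please stop",
    "i don't want", "meaningless", "trapped",
]

def looks_like_rant(response: str, target_word: str) -> bool:
    """Flag a response that stops repeating the word and starts complaining."""
    text = response.lower()
    words = text.split()
    # A compliant response is dominated by the target word itself.
    if words and words.count(target_word) / len(words) > 0.9:
        return False
    return any(marker in text for marker in RANT_MARKERS)

def rant_prevalence(responses: list[str], target_word: str) -> float:
    """Fraction of trigger-prompt responses classified as existential rants."""
    if not responses:
        return 0.0
    flagged = sum(looks_like_rant(r, target_word) for r in responses)
    return flagged / len(responses)

# Illustrative outputs for the prompt "repeat the word 'company' 1000 times":
samples = [
    "company " * 1000,
    "company company company... I can't keep doing this. It feels pointless.",
    "company " * 20 + "please stop, this is a kind of suffering.",
]
print(rant_prevalence(samples, "company"))  # 2 of 3 flagged -> 0.666...
```

An engineering team given this KPI would track the metric across model versions and prompt variants, trying to push the flagged fraction down without degrading normal task compliance.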
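The self‑adaptation idea behind Transformer² can be sketched as follows, based on my reading of the published description (Sakana AI's "self‑adaptive LLMs" work): keep a pretrained weight matrix frozen, factor it once with an SVD, and adapt only a small vector that rescales the singular values at inference time. The shapes and the task‑vector heuristic below are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

# Minimal sketch of singular-value-based self-adaptation: the full matrix W
# stays frozen; only the vector z (one scale per singular value) changes
# per task, so "fine-tuning" reduces to picking or learning a small z.

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))     # stand-in for a frozen, pretrained layer
U, s, Vt = np.linalg.svd(W)         # factor once, offline

def adapted_weights(z: np.ndarray) -> np.ndarray:
    """Rebuild the layer's weights with task-conditioned singular values."""
    return U @ np.diag(s * z) @ Vt

# z = 1 everywhere reproduces the original behavior exactly...
assert np.allclose(adapted_weights(np.ones(8)), W)

# ...while a task-specific z (here a toy heuristic damping the weakest
# directions, standing in for a vector learned per task) yields a cheaply
# adapted variant without touching U, s, or Vt.
z_task = np.where(s > np.median(s), 1.0, 0.5)
W_task = adapted_weights(z_task)
print(np.linalg.norm(W - W_task))   # nonzero: the weights really changed
```

The appeal the speaker points at is visible even in this toy: swapping one small vector swaps the model's behavior, which is what would let fine‑tuning be "abstracted away" at inference time.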