ChatGPT Usage, AI Economics, Expert Insights
Key Points
- The “Mixture of Experts” podcast, hosted by Tim Hwang, brings together AI innovators (including IBM Fellows and Master Inventors) to dissect the week’s most significant AI research and news.
- The episode’s agenda covers a range of cutting‑edge work: the NBER study on how people actually use ChatGPT, the latest Anthropic Economic Index, DeepMind’s research on agent economies, the AlterEgo demos, and Meta’s newest wearable technology.
- In the news segment, the hosts note that Alphabet’s market value recently crossed the $3 trillion mark, the WTO predicts AI could increase global trade value by nearly 40% by 2040, and researchers have taught dogs and parrots to interact with touch‑screen devices.
- The discussion then pivots to the NBER paper “How People Use ChatGPT,” highlighted as a rigorous, economics‑style analysis that systematically maps real‑world ChatGPT usage patterns.
- Lauren McHugh shares a personal connection to the study—her former professor David Deming is a co‑author—providing insider insight into the paper’s methodology and its implications for understanding AI adoption.
Sections
- [00:00:00] Is Humanity on AI Autopilot? - The hosts and expert panel debate whether society has ceded control to AI, while reviewing recent research papers, demos, and industry headlines on the Mixture of Experts podcast.
- [00:03:57] AI’s Economic Impact Through Embedded Search - The speakers contend that generative AI will generate far greater economic value by being integrated via APIs into everyday services as a search‑enhancing tool, rather than through standalone chat applications like ChatGPT.
- [00:07:48] Misaligned Expectations of LLM Use - The speakers highlight how real-world interactions with large language models far diverge from popular assumptions—coding accounts for only about 4% of chats, while therapy, relationship advice, and gaming each make up less than 2%, underscoring a gap between anticipated and actual user behavior.
- [00:14:17] From Search to Predictive Feeds - The speakers argue that emphasizing traditional keyword search is shortsighted, advocating for AI-driven agents that proactively surface relevant information, while warning that such automated feeds may replicate the engagement‑centric biases of current social media.
- [00:17:22] Anthropic AI Usage Density Insight - The speaker highlights the significance of a normalized “usage density” metric to gauge real-world AI utility, noting its correlation with high‑income economies while also emphasizing emerging adoption in lower‑income regions through remote learning and accessible tools.
- [00:20:34] Global Disparities in AI Chatbot Use - The speakers analyze how nations such as Singapore, Canada, and India vary in adoption rates and purposes—like coding versus general queries—for chatbots like Claude and ChatGPT, highlighting cost barriers that influence usage patterns.
- [00:25:12] Geography, AI Adoption, and Agent Economies - The speaker reflects on a hopeful study suggesting that location need not dictate AI opportunity and proposes grassroots initiatives to raise AI usage in underserved areas, then pivots to discuss DeepMind’s speculative “Agent Economies” paper which envisions future economies populated by interacting AI agents and highlights the novel risks this scenario may introduce.
- [00:28:49] Challenges of Emerging Agent Economies - The speakers examine how autonomous AI agents communicate and act across layers, the risk of intent loss and unintended outcomes, and the necessity of steering such agent‑driven markets—citing algorithmic trading in finance as a cautionary example.
- [00:32:46] Rise of AI Agent Companies - The speaker is optimistic about a forthcoming era of agent-driven firms, highlighting projects such as MetaGPT—a full‑stack software startup run by agents—and AI Scientist, a multi‑agent system that independently conducts scientific research and publishes results.
- [00:35:54] Invisible AI Prototype and Meta Glasses - The speaker describes a low‑profile AI demo that operates without visible wearables, likening it to an “invisible” AI companion, and then references Meta’s recent event unveiling next‑generation Ray‑Ban‑style smart glasses.
- [00:40:13] Challenges of Thought‑Controlled Messaging - The speakers explore how ostensibly simple AI chat‑bot interfaces become unexpectedly complex with near‑telepathic wearables, stressing the need for an approval step to ensure only intended thoughts are transmitted.
- [00:43:18] Wearable AI: Seeking Real Use - The speakers debate how to pinpoint everyday, valuable applications for AI‑powered wearables—beyond novelty—citing smartwatches’ fitness tracking success and proposing speech‑impediment assistance as a high‑impact, empathy‑driven use case.
- [00:46:25] Podcast Closing and Promotion - The hosts wrap up the episode, thank guests and listeners, and promote the show’s availability on major podcast platforms.
Full Transcript
**Source:** [https://www.youtube.com/watch?v=1PlyV-pf9_M](https://www.youtube.com/watch?v=1PlyV-pf9_M) **Duration:** 00:46:50
Whenever I hear people's hypothesis and
I read the paper, I ask myself this
question, right? Is the human race
officially on autopilot? Because first
we use ChatGPT, you know, for help, but
then we used it for everything. All that
and more on today's mixture of experts.
[Music]
I'm Tim Hwang, and welcome to Mixture of Experts. Each week, we bring together a panel of the innovators who are pushing the frontiers of technology to discuss, debate, and analyze our way through the week's news in artificial intelligence.
Today, I'm joined by a great crew. We've got Aaron Baughman, IBM Fellow and Master Inventor; Lauren McHugh, program director, AI open innovation; and joining us, I believe, for the very first time is Martin Keen, Master Inventor. This is going to be a great
episode today. We're going to be
covering a lot of interesting research.
We'll talk about a great paper out of NBER called How People Use ChatGPT, the latest edition of the Anthropic Economic Index, and a paper out of DeepMind on agent economies. We're also going to cover a pretty interesting set of demos called AlterEgo, and also talk about the recent Meta wearable. But first, as always, we've got the news and headlines from Ay. So Ay, over to you.
Hey everyone, I'm Mcconnen, a tech news
editor for IBM Think. I'm here with a
few AI headlines you might have missed
this week. Google's parent company,
Alphabet, joined the $3 trillion club.
Yes, that's trillion with a T. Only three other companies besides Alphabet could brag about having a market cap over $3 trillion. According to a new
report from the World Trade
Organization, AI could boost the value
of trade in goods and services by nearly
40% by 2040.
Are you ready for the animal internet?
Scientists from the University of
Glasgow have taught dogs and parrots to
interact with touchscreens. The parrots
learned how to use their tongues to play
music, and the dogs used their paws to
call their friends. Want to dive deeper
into some of these topics? Subscribe to
the Think newsletter linked in the show
notes. And now back to the episode.
>> So for the first segment I really wanted to talk about a really interesting paper that came out of NBER, which is in some ways kind of like the gold standard for working papers coming out of the field of economics. It's a paper entitled How People Use ChatGPT. In many ways it's a very straightforward paper that breaks down, I think for the very first time in a professional, academically grounded way, how people are using ChatGPT. And I guess, Lauren, we're very fortunate to have you on the show, because I believe you mentioned that David Deming, one of the authors on the paper, actually was your old professor.
>> That's right.
>> And so I guess, kicking off, maybe I'll give you the kind of opening remarks on this one. Uh,
curious about like what you thought
about the paper and if there's
particular things that stood out to you
in terms of the trends, anything
unexpected or um or did this kind of
really confirm your biases in terms of
how people are using this technology?
>> Yeah, I mean I think um knowing a little
bit about uh the research behind this,
what I appreciate is what it takes to
actually try to create a taxonomy around
how people are using ChatGPT. And they did that. They had classifications for different kinds of tasks that people are
using it for. And I think what stood out
to me was that the number one task was
gathering information
which is search, you know. ChatGPT is really "search GPT." And the point of this was to understand the economic impact: is it really that, at least for now, the main use case is more of a search engine 2.0 technology? Whereas I think the hypothesis going in was probably that this is a net new, category-making technology.
So I think it's um interesting to see
how that will evolve. But I also think
too that, you know, looking at the economic impact of gen AI through the consumer standalone tool, like ChatGPT itself, is really limiting what I think is probably the bigger economic impact, which is where that technology, via an API, gets embedded into other software that we use every day. So, like, when I can go to Amazon and search "what are great toys for a 5-year-old," or I can go to the New York Times website and search "what's the latest update on XYZ bill getting passed in the Senate." To me, those are probably, or I would argue are going to be, the bigger economic impacts than people using gen AI through a standalone chat interface like ChatGPT.
>> Yeah. And actually, Martin, we would
love to pull you into this discussion
because I think yeah, Lauren, similar to
you, I kind of like read it and I was
like, wait a minute. We've been sold
this like multi-purpose general use
technology. It's just it's just search,
you know, and I guess Martin, I don't
know, is that in some ways like kind of
a disappointing outcome? It sort of
feels like, you know, I guess Lauren, it
seems like the argument you're trying to
make is well, it's still early and all
these other types of use cases are
maturing. Is there a possibility here,
Martin, where just it turns out the main
use of AI is just search?
>> Well, that's certainly what this report indicates, is how many people are using it for search. What I found interesting as
somebody who really works in education
was that the main use case in education,
50% of the use for work in education was
for writing, which I think in some
respects is not terribly surprising
because we see the internet full of AI
slop everywhere, right? But but it kind
of is surprising because large language
models today generally are just not very
good writers. They have a certain writing style, and that writing style is fine and totally legitimate, but it's the same writing style, and no matter how much you prompt it, it always wants to go back to the same way of phrasing things. So initially I saw that and I
thought my goodness that is a scary
thought that educators
>> kind of a grim outcome I guess. Yeah,
just using this for writing. But when
you look closer, actually two-thirds of
the things that are classified with this
writing taxonomy are actually not
creating new content, but are working on
existing content. So editing or
critiquing or translating. Two-thirds of
the use were for those three categories.
which makes, I think, a lot more sense, because to me that is where the power of this comes in: if you give it your own sample of writing, if I'm trying to explain something to a particular audience and I need to put it in terms they will understand, a large language model is pretty good at taking that and acting kind of as a reviewer and as an editor. That's certainly how I use it, so I was comforted to see that a lot of people in education are doing the same thing.
>> Yeah, it's the same way I use it as well, largely as kind of an editor and a critiquer. Though it's interesting that pure text generation is the really big thing. And I guess
I don't know, how do you feel about, sort of, I think Lauren put out a very interesting hypothesis. And I've used this example before on the show, but I'm going to use it again just because I love it. It's basically like, you know, when they first invented
the PC, you look at all the ads from the
early days of the PC and they're like,
"Oh, well, I don't know, you could maybe
use it to store recipes, right?" And it
like took a while for us to eventually
come up with the idea that there's a
thing called the spreadsheet and oh,
okay, all right. Then everything changes
for the PC in terms of what you use it
for. Do you think there's a similar
thing here where basically like we
shouldn't be surprised because the chat
interface might kind of only be good for
like a couple things really and so like
we're still kind of waiting for almost
like different ways of interacting with
this technology to start to see
different results.
>> Yeah. And think about what some of those uses we might think would be, like coding, for example. But this report showed something like 4% of messages are about coding, which is, you know, much less than you would think. And when
you look at the benchmarks and so forth
they're very much coding focused. Other
things that surprised me in this report
were that people talk about using this as a therapist, but something like less than 2% of messages were actually about relationships and personal reflection. And then games and role playing and that kind of thing, that was a tiny percentage, less than half a percent.
So the use cases that people, I think, initially thought, "let's use large language models for this," are not necessarily playing out here in what people are actually doing with these models and these chatbots.
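The percentages Martin cites come from classifying each conversation into a taxonomy and then counting category shares. A minimal Python sketch of that aggregation step; the labels and counts below are invented stand-ins chosen only to echo the rough figures discussed, not the study's actual categories or data:

```python
from collections import Counter

# Hypothetical per-chat category labels, standing in for the kind of
# taxonomy such a study assigns to every conversation. Counts are
# invented so that "coding" lands near the ~4% figure mentioned.
chat_labels = (
    ["information gathering"] * 46
    + ["writing"] * 24
    + ["coding"] * 4
    + ["relationships and reflection"] * 2
    + ["games and role play"] * 1
    + ["other"] * 23
)

def usage_shares(labels):
    """Return each category's share of all chats, as a percentage."""
    counts = Counter(labels)
    total = len(labels)
    return {category: 100 * n / total for category, n in counts.items()}

shares = usage_shares(chat_labels)
# "coding" comes out as a small slice of overall usage.
```

The point of the sketch is only that the headline numbers are shares of a labeled corpus, so every surprising percentage depends on how the taxonomy buckets were drawn.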
>> That's right. And I think, you know,
Aaron, just maybe kind of turn and get
your voice into this conversation. I
think one of the greatest ironies that
I've always thought has happened with
LLMs is you have, you know, very left-brain technology generating very wordy, almost feelings-based tech on the other side. And you know, I had a similar
reaction to Martin reading the paper was
almost like, have I just been living in
a bubble for all this time? Because you
know, if you sampled the last 10 or 20 episodes, I think we talk about the codegen applications like every other week. It's constantly in our discussion. It's a lot of the high-profile use cases that people are excited about, you know, Claude Code, etc. That's what people want to talk about. But I
think what we're finding here is that
that's like a a total bubble. If you're
interested in mass adoption of this
technology it's like coding is not the
thing. Um, so I guess Aaron, maybe to
bring that to a question, not just rant
at you: do you think technology companies, foundation model providers, should be totally changing what they're focusing on? Because it sure seems like, if you talk to Anthropic, they're spending a lot of time on this code application, but it accounts for
like such a tiny slice of what people
are doing.
>> Yeah. Whenever I hear people's
hypothesis and I read the paper, I ask
myself this question, right? Is the
human race officially on autopilot?
Because first we use ChatGPT, you know,
for help but then we use it for
everything.
>> Uh-huh.
>> And in this paper, you know, it's gone from niche to everyday: it was a tool for tech-savvy users, but now it's moving to consumer tech, just like the internet and smartphones once did. And
what really stood out to me was the number of people that are using these tools: about 10% of the world's population, which, depending on what source you look at, could be about 800 million people. But the
early adopters were professionals.
That's where it started. But that's
flipped, where now 70% of usage comes from what we call non-workers, right? And
if you look in that paper, the worker
profile of what they do, you know, we
discussed some of that uh here already,
you know, where worker profiles use it mostly for writing, for example, computer programming. It's really about
helping knowledge workers find
information in these knowledge intensive
jobs. Whereas the non-work profile is really where AI is being embedded into everyday life: where you want to create images, art, video, multimodal content, to help with patterns of life, where you can rewrite content. And
what's neat when you put both of those
two groups together, you know, seeking
information, practical guidance, and
writing, um, according to this paper,
that's about 80% of all the tasks and
topics that are used within these, uh,
models, right? And I think in the future
where we're going, that's alluded to in this paper, is that these are not just assistants anymore, these are agents. Like in sports, you know, an assistant does your tasks, what you tell your assistant to do, whereas your agent is sort of constantly in the background working on your behalf. And we're seeing these solopreneurs pop up, where a singular individual has a lot of foresight with all these tools, and it gives these small teams access to expert-level tools. But the
last point is, even though the access gap is closing, it still seems as though those countries that have the highest GDP, the people in those countries, still have more accessibility to these tools. It still doesn't mean that there's an equal playing field in how to use these tools as an agent.
>> Yeah, definitely. I
do want to get to that because that'll
definitely be part of like I think the
Anthropic paper, which we'll talk about
in the second segment. Before I move on
to that though, I guess maybe a final
question maybe Lauren to you about kind
of business strategy here. Um, and it's
I think related to Aaron what you're
saying about like well the future is
going to be agents and the use cases are
going to look pretty different as we get
agents to come online. It feels like in
the near term if we don't figure out
agents does Google kind of end up
winning this game? Like, you know, one of the things we've been tracking of late is, if you asked me like a year
ago I would have said ah Google's out of
the game they're so far behind they
haven't released anything good but in
pretty quick succession they really seem
to be catching up very very quickly and
I think this kind of report made me
think a little bit about like well it
turns out the majority use case is still
search does that mean that the incumbent
search company ultimately kind of
triumphs in this game like that like
most people still go to Google for
search and so very naturally if you have
an AI product around search the two will
kind of go together right and so I guess
how much do you think this kind of
weighs in Google's favor in terms of
like winning ultimately these kind of
early innings of the the AI game
>> I mean, I really think the fact that
search is the number one task well
information gathering they call it is
because there's still a long way to go
in the imagination gap of what we could
use generative AI for it's not because
that is the singular best application
for it. So I really think that in you
know from a business strategy
perspective investing in the actual work
I mean this is really like product
strategy and product design work to
figure out what are the problems that
besides search that can actually be
solved with gen AI, you know, doing the
market research doing the user research
you know prototyping solutions that work
has actually been surprisingly
limited and um there's just now getting
to be this wave of like entrepreneurs
who are taking that forward. Like there
were 36 I think AI unicorns this year
alone. You know, unicorns have more than
a billion dollars valuation. That's
crazy. You know, I don't know what the number was for last year or for, you know, 2023, but I'm sure it was not that high.
>> Seems like a lot.
>> Yeah. So, I really I really think that
it's not that search will go forward as
the most dominant use case. I think well
it it might but I think that that isn't
where we should focus. We should really
focus on all the other things that it
could do which might look more like a
long tail, but if we invest in figuring
out and like using creative intelligence
to figure out what those things are and
then test them and eventually build and
scale them that makes more sense.
>> Yeah. Yeah. Yeah. To me, you know,
search implies that it's human-driven,
that a human has to go in and enter a
keyword and search. Whereas, um, I I
hope in the future data will find you,
you know, where we all become magnets,
you know, for the data that we don't
have to actively search, but we have
these agents going off already
predicting what we want to see and
searching for us and then providing us,
you know, what we're looking for. And
hopefully Google, you know, and I think they will, will be on the forefront of some of that.
>> Yeah,
it's a funny kind of future where you
like wake up and you're like, "Oh yeah,
this is everything I wanted to like
I didn't even realize that this is what
I wanted."
>> That sounds an awful lot like a social
media feed we have today. And I'm not
sure that it's necessarily delivering
the things that maybe we should be
getting, even though it thinks that
that's the thing that's going to get the
most engagement. So yeah, it will be
interesting to think of a world where uh
we're not just trying to send you stuff
to maximize the engagement of the stuff
that we send you, but it's more
personalized to the fact that we think
there is some utility in you receiving
this information and that you would
benefit from it.
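The contrast Martin draws, ranking what reaches you by predicted utility to the recipient rather than by predicted engagement, can be made concrete as a re-ranking of the same candidates. A toy Python sketch; the items, scores, and field names are all invented for illustration, and no real feed works this simply:

```python
# Each candidate item carries two hypothetical model scores: predicted
# engagement (clicks, watch time) and predicted utility to this user.
items = [
    {"title": "outrage thread", "engagement": 0.9, "utility": 0.2},
    {"title": "local tax deadline notice", "engagement": 0.3, "utility": 0.9},
    {"title": "friend's photo", "engagement": 0.6, "utility": 0.6},
]

def rank_feed(items, objective):
    """Return item titles ordered by the chosen score, highest first."""
    return [item["title"]
            for item in sorted(items, key=lambda i: i[objective], reverse=True)]

# Same candidates, different objective, very different feed: the
# engagement-ranked feed leads with the outrage thread, while the
# utility-ranked feed leads with the tax deadline notice.
engagement_feed = rank_feed(items, "engagement")
utility_feed = rank_feed(items, "utility")
```

The design question the panel raises is exactly which scoring function the agent optimizes, since the ranking machinery itself is indifferent.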
I'm going to move us on to our second
topic which uh I think luckily is like
pretty related to a lot of the things
that we're touching on. Um so you know
not to be, you know, outclassed, Anthropic
also has released a major review of how
people are using AI technologies. This
is, I believe, the second edition of their Anthropic Economic Index. The basic
intuition is that they have a lot of
people using Claude now and what they
want to do is basically get a better
sense of how people are adopting and
using AI in the field using the data
that they have um from operating a
platform like Claude. Um and it's super
interesting and I think there's a lot to
go through but I think the main thing I
want to focus on which is new for this
edition of the Anthropic Economic Index
is that they have started to expand
their analyses beyond just the United
States. Um and again I think in the
spirit of like getting out of our bubble
like not everybody uses AI for coding.
Uh I think it's also really useful to
for us to think about like the
international scene and how that's kind
of adopting AI. And so Aaron, I kind of
want to go back and maybe kick off with
the point that you had raised, which is
you were talking a little bit about the
fact that, yeah, what Anthropic finds is
that there's this kind of relationship
between like wealthier countries and
adoption of Claude and there's kind of
like this very kind of specific sort of
income distribution difference um in
what they're seeing in the data. Um, I
guess, Aaron, should we be worried about kind of like an AI gap, where it actually just turns out that
wealthy countries adopt this technology,
they get all the benefits and countries
that are maybe relatively poorer don't
adopt that technology and are left
behind.
>> Yeah, I think it's important to find the
signal in the noise. Um, because I don't
think it's all about, you know,
wealthiness or GDP. What I really liked was the Anthropic AI usage index that they introduced, which looked at what they called usage density. It's this normalized measure where, when they adjust for, let's say, working-age population, smaller tech-advanced countries lead in usage per person, right?
so going back to some of Martin's point
it's about the utility you know what are
people actually getting out of using
these tools right is it useful is it
actionable right and then how much are
they using it? And there certainly is a correlation in the paper, right, that high income means more usage density. But there were some corners where, you know, there are people who are maybe not in high-income countries, but they're learning to work with AI, so they're eventually getting there. What they called directive automation of tasks was one place where people are becoming much more familiar with the tools, and lots of this is working out because of these remote technologies. You know, you can go on
the internet and take a remote class.
You can even watch mixture of experts,
right, to learn how all of this
technology works and and then begin to
go pick up a tool and try to use it and
they're very accessible. Uh which is
really nice as well, right? So um you
know, businesses are tending to trust more of these tools, and that I think is beginning to spill over into individuals adopting this. But it doesn't mean AI adoption is uniform, because it's certainly not, and I do think that we need to be careful, right, about widening this AI gap.
>> Yeah, Martin, I guess on
this education point, you know, it's so
funny because I think, you know, when I
used to work at an AI startup, the thing
we always used to say is like, well, it
turns out that LLMs, you can have a
natural language conversation with them.
And so this is like the seamless, most easily adopted interface, because it's just
conversation. Uh, and like you don't
have to learn to program it. You don't
have to read some handbook to learn how
to use the interface. You just talk. Um,
but it seems here and I think what
Aaron's kind of pointing at is yeah that
there actually is still this like
learning curve even though it is
conversation and so has AI turned out to
be harder to learn how to use
effectively than we thought.
>> Yeah. When you see people selling 100-page prompting guides online, that makes you think maybe this
>> Wait, are we just back to where we started?
>> Right, that's right. Suddenly we need a big manual just to be able to talk to this chatbot. And it's certainly been shown that prompting is still a big part of this: understanding intent massively affects the outcome of the model. I thought it was really interesting that they had lined up all of the countries that were part of this study, and they did correlate so closely to GDP: the higher the GDP, the higher the percentage of people in that country actually using the model and maybe getting more utility out of it. But whenever they do that, it makes me interested to see what doesn't correlate with that straight line. And there are a couple of countries like that. Singapore and Canada significantly over-indexed on it. Something like four times as many people in Singapore were using ChatGPT, or in this case actually Claude, than you would expect considering the GDP of that country. So it kind of begs the question: what sort of utility are these people getting out of it that other countries are not? And then when you look at India, we mentioned earlier that there's so much talk about using these tools for codegen, and actually it wasn't a big part of ChatGPT usage, but in India half of all use of Claude was for coding tasks. So you can see again that people in different countries are getting different utility out of these different chatbots.
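The "over-indexing" idea is just a residual from a GDP trend line. A minimal sketch, with entirely invented data (these are not the study's numbers), would fit usage against GDP per capita on a log scale and flag countries sitting far above the line:

```python
# Hypothetical sketch of spotting countries that "over-index" on chatbot
# usage relative to GDP per capita. All data here is invented.
import math

data = {
    # name: (gdp_per_capita_usd, users_per_1000_people)
    "A": (70_000, 210),
    "B": (65_000, 800),   # over-indexes, like the Singapore example
    "C": (30_000, 95),
    "D": (10_000, 30),
    "E": (2_000, 6),
}

# Fit log(usage) = a + b * log(gdp) by ordinary least squares.
xs = [math.log(g) for g, _ in data.values()]
ys = [math.log(u) for _, u in data.values()]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx

# A country's residual is how far it sits above the GDP trend line;
# exp(residual) is the "times more than expected" multiplier.
for name, (gdp, usage) in data.items():
    predicted = math.exp(a + b * math.log(gdp))
    multiplier = usage / predicted
    if multiplier > 2:
        print(f"{name}: {multiplier:.1f}x expected usage for its GDP")
```

Country B is the only one flagged here, which mirrors the "four times what you'd expect" observation about Singapore.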
>> Lauren, I think there's another way to have the conversation we've been having for the last few minutes, which is: Tim, you're being a huge dummy, right? It's kind of no surprise that rich countries adopt Claude more, because you can spend $200 a month on Claude. It's actually something you have to pay for, and some of these rates to get the better versions of the model are indeed extremely expensive. And I guess I want to ask you, as someone who works in open innovation and thinks a lot about open source and how all those pieces fit together: do you think this map looks very different for open source? Is there this huge dark matter where it's like, yeah, no surprise they don't use Claude because they don't want to pay for it, they're using open source alternatives that are free and much better? Is that one way of reading this data? Do you buy that?
>> I think if you
look at developer populations in countries, I would guess that amongst developers only, GDP has much less of an impact, because if you are already trained as a developer, then you have access to all the tools in the world. You can get models, inference engines, evaluation frameworks, tuning frameworks, whatever you like. But I think the problem is that developers are a really small percentage of the population in some countries versus much bigger in others. And that's fundamentally an access issue and an economic issue. And I think this whole thing played out pretty much the same way with social media. Ten or fifteen years ago, we were having the same debate: social media is so widely adopted in higher-income countries, and there's such low adoption in lower-income countries. The stakes were a lot lower, because social media has more entertainment value than productivity and labor value. But I think what's most important now is what happens next, because when that happened with social media, I was actually living in East Africa at the time, and Facebook made a very bold move, which was to create something called internet.org and make these tools available completely for free, working together with the telcos and the internet providers. I could see how that was received on the ground, and it was a very mixed bag. There of course were people who were very excited, and even today in Kenya, if you run out of data on your cell phone plan, you can still access Facebook, because it's been negotiated to be free, and that's because the goal is to make sure that there is access and adoption. But the other cohort's view is that that essentially creates a gatekeeper mechanism, and the big problem with that particular project was that it was a selection of certain websites and tools, like Facebook, Wikipedia, and a few others. In fact, in India, within a year of it being released, it was banned by the government. They actually said it's better to have no free internet if it's going to be curated by someone else with no sense of net neutrality, potentially creating information monopolies that the population is not deciding on. So I think this report helps bring awareness to the issue that right now, adoption and access are not equally distributed. What happens next needs to account for how to do this with dignity for the populations it's meant to serve, not using charity as a cover for just creating market dominance in these emerging markets. I think that's the most important thing to keep in mind: now that this access gap has been identified, what do we actually do about it, and how do we not repeat history?
>> Yeah, this paper left me with some hope right at the end, where I was reading it and I interpreted it this way: geography isn't necessarily destiny, but it does help. Depending on where you live, there are things that you can do. If you do live in a low-usage region, you could learn remotely, you could create your own AI exposure, you could find niche industries that could adopt AI. There are different ways you can help to close that gap. It could be a grassroots movement to increase the density of usage in these geographies that look like there's just no hope, you know, but there is.
>> I'm going to move us on to our third topic of the day: an interesting paper, moving a little bit away from these economic studies, that DeepMind released on agent economies. It's a fun, a little bit speculative paper, though I think we can debate how speculative it is. The paper specifically looks at the idea that, look, in the future we're going to have all of these AI agents in the economy, and in many domains these AI agents may be interacting with one another. And what the paper does is simply point out that that'll be weird and new, that we will need to figure out what to do with it, and indeed that it introduces a number of new risks. This actually connects to something we talked about on the last episode. Just to recap for everybody here and our listeners: we talked a little bit about the phenomenon, particularly in job hiring and recruiting, where people are starting to use LLMs to submit applications, and then HR teams and people teams are using AI to try to filter through those applications, and there's a little bit of an algorithmic war going on there, which ultimately hasn't been great for anyone. There's an Atlantic article that was literally entitled something to the effect of "the job market is hell." And so I guess, Martin, I'll kick it over to you: should we be worried about agent economies? It does feel like, in many of the cases I could name, the minute we have automation on both sides of a market, things get a little bit out of control, and not always in the best way.
>> Yeah, we see this in education as well: somebody has written some sort of article to explain something in AI, then somebody has used a large language model to summarize the AI summary, and then they're using a large language model to create quiz questions based on the summary that was based on the LLM-generated article. It's such an interesting thought: how powerful a particular agent can be, and now what happens if you take that thing that has so much utility and connect it to another thing that's just as powerful, another agent? How does that communication work? Just from the plumbing point of view, I've been looking recently at how you integrate two agents together, using things like the agent-to-agent protocol, the A2A protocol, which is an open source project that's now part of the Linux Foundation but originally came from Google, and seeing how these agents can basically be wrapped so that they can talk to other agents and discover each other and so forth. The analogy I heard was, oh, it's kind of like making a Lego brick out of an agent. I mean, the number of times in my IT career that I've heard that we are going to take something, wrap it, and it's going to be like a Lego brick that we can plug into anything else: we've been there with SOA, with CORBA, with microservices. Yeah, here we are again. That helps with discovery, it helps with communication and so forth, but I think this is a whole new scale. This is not talking about a microservice that does one thing, that writes a field to a database. We're talking about agents communicating with other agents, asking them to do things, and then the amount of processing and complexity in that agent to perform the thing. How do we know that the thing it's going to do is really what we were asking from the original agent? And as we go down the line, the meaning could get lost as it goes. So yeah, it's really interesting to think how this agent economy would work, what the first use cases for it would be, and all of the potential unintended consequences from that.
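The "Lego brick" wrapping Martin describes boils down to each agent advertising what it can do and exposing one entry point, so peers can discover and delegate. Here's a toy sketch of that discovery-then-delegate pattern; this is an illustration only, not the actual A2A protocol's wire format, and all the names are invented:

```python
# Toy sketch of wrapping agents as interchangeable "Lego bricks":
# each agent publishes a small card of skills plus a single entry point,
# and a registry lets agents discover each other by skill.
# Not the real A2A protocol; names and shapes here are hypothetical.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Agent:
    name: str
    skills: list[str]             # advertised capabilities (the "card")
    handle: Callable[[str], str]  # the agent's task entry point

class Registry:
    """Where agents discover each other by advertised skill."""
    def __init__(self):
        self.agents: list[Agent] = []

    def register(self, agent: Agent) -> None:
        self.agents.append(agent)

    def find(self, skill: str) -> Optional[Agent]:
        return next((a for a in self.agents if skill in a.skills), None)

registry = Registry()
registry.register(Agent("summarizer", ["summarize"], lambda t: t[:20] + "..."))
registry.register(Agent("translator", ["translate"], lambda t: f"[fr] {t}"))

# One agent delegating to another: find a peer by skill, send it the task.
peer = registry.find("summarize")
result = peer.handle("A very long document about agent economies and risk.")
print(result)
```

Note the worry in the discussion lives exactly at `peer.handle`: the caller has no way to verify that what comes back is really what the original request meant, and each hop in a chain of such delegations can lose a little more of the intent.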
>> Yeah, for sure. Aaron, are you hopeful here? I mean, the paper ends on kind of a positive note, because I think the authors are trying to offer a way forward from a research standpoint, and they pitch this idea of, look, we need to figure out how these economies can be made steerable, and if we can steer these markets in the right direction, we can make sure that they behave properly. And I guess my skepticism on the paper is: arguably one of the most agentified markets is the financial markets, right? People do algorithmic trading all the time, and a huge amount of the volume of the stock market is algorithmic trading. The stock market has proven to be a really hard thing to try to steer. We're certainly maybe better at it than we used to be, but as far as I understand, when the market gets into crisis mode, the best solution we have is we literally hit what is called a circuit breaker: we stop the market for a period of time and start it again. And so I don't know, do we feel steerable markets is a promising frame, or are we going to just do what we do with financial markets, which is turn it off and turn it back on again and hope the system keeps working properly?
>> Well, I'm waiting for agents to unionize, right? And demand some sort of profit from this, and even demand nap breaks, right? Like
>> But with that, this reminds me of a field. Back in college, I was very interested in evolutionary computing, so I'd studied artificial life, which is really the study of how man-made systems can exhibit behaviors that are characteristic of living systems. These agents are becoming very similar to that, but the focus and the enabler around modern-day agents is AI, which is more of a top-down, logic-driven piece. And to your question about whether these agents will be able to solve the problems, for example, in the stock markets of today and tomorrow: well, I think that you don't have to be the most powerful agent, you just need to be the most needed agent, in order to help solve some of these problems. And it comes down to: can we set the right distribution of fairness, the right credit assignment, the right kind of incentive for an agent to self-evolve so that it can better solve some of these problems? But you're still going to see traditional machine learning embedded into these agents along with gen AI, and there's going to be some intertwining between the two. For example, LLMs can use the outputs of, let's say, decision trees or support vector machines, but it also goes in the opposite direction: traditional machine learning can use the outputs of LLMs, and they work together. So I think that combination is going to help solve and create this scalable coordination amongst all of these different agents.
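The two-way combination Aaron describes, classical ML output feeding an LLM prompt and the LLM's output feeding deterministic downstream logic, can be sketched in a few lines. Both models below are hypothetical stand-ins (a real system would call an actual classifier and a model API), purely to show the wiring:

```python
# Sketch of the "traditional ML + generative AI" intertwining described
# above. Both functions are invented stand-ins, not a real pipeline.

def risk_classifier(features: dict) -> str:
    """Stand-in for a traditional model (e.g., a decision tree)."""
    return "high" if features["volatility"] > 0.5 else "low"

def mock_llm(prompt: str) -> str:
    """Stand-in for an LLM call; a real system would call a model API."""
    return "HOLD" if "high" in prompt else "TRADE"

def agent_decide(features: dict) -> str:
    # Direction 1: the classical model's output goes into the LLM prompt.
    risk = risk_classifier(features)
    prompt = f"Market risk is {risk}. Recommend an action."
    action = mock_llm(prompt)
    # Direction 2: the LLM's output feeds deterministic downstream logic.
    return "pause agent" if action == "HOLD" else "proceed"

print(agent_decide({"volatility": 0.8}))  # high-risk path
print(agent_decide({"volatility": 0.1}))  # low-risk path
```

The point of the structure is that neither component works alone: the classical model grounds the prompt in a measurable signal, and a hard-coded rule, not the LLM, makes the final control decision.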
>> So ultimately, I think that is promising. I could see that as a way of approaching some of these problems. Maybe, Lauren, I'll give you the last word here before we move on to our final topic for the day. Agent economies: are you optimistic? Is this something that you're excited to see?
>> I think I'm optimistic about what I would see as the next phase, which would be agent companies. So we'd have companies of agents before we get to economies. And of course, I look to what's happening in open source communities to see if that's realistic. I'm generally the skeptical person on my team, but on this one there are two projects I've seen that were jaw-dropping. One is MetaGPT, and it claims to be a software startup of agents. So there are agents to do your product market research and competitive landscapes, agents to define the requirements to give to the engineering team, and then the engineering team is a team of agents that does the codegen, and then it gets deployed. So it's an end-to-end software company. That's super cool, and it's a super popular project worth checking out. The other, which is mind-boggling to me, is AI Scientist, which is a team of agents that can do its own scientific experiment, come up with a publication, and even try to get that published. With AI Scientist, again, it's a popular project; it has something like 10,000 stars. You give it a prompt like, what's going to be a more efficient way to use LLMs, and it will come up with a hypothesis, design the experiment, create or get the data, and run the experiments. These are usually experiments about LLMs themselves, so it's a little meta. They actually got one of these AI-generated experiments-plus-papers accepted into ICLR. They worked with the organizing committee and told them, we're going to submit some AI-generated papers, to keep this ethical; they gave them the heads-up, and one of the three papers they submitted got accepted. So agent economies: honestly, I can't quite wrap my mind around it, because first, let's make agents work better, and then I think the next step from there would be agent companies that create the agent economies. I'm just wondering, though: is this just a giant echo chamber? When it's coming up with a hypothesis, it's then coming up with its own solution, and then it's peer-reviewing itself. Is it just going to say yes to everything?
>> Yeah, I do like Aaron's hypothesis that we'll see a lot of other phenomena emerge here, where agents will try to unionize, or the AI scientists will argue over who gets credit on the paper, or the AI engineering team is always complaining about what the AI product team's putting together. I think we're about to get there. That's the future of the AI economy.
>> AI bickering.
>> Yeah, AI bickering. You've seen AI cooperation. Get ready for AI bickering.
>> All right, so I'm going to get us to our final topic of the day, and in some ways I want to tee this up as a tale of two wearables. There was a demo released a few weeks back from a startup by the name of AlterEgo. We'll put the link in the notes, I'm sure, and you should check it out online; you can search it up. It was a fascinating demo. Basically, it was just a guy sitting there, and he was able to do a bunch of stuff with AI, but there was no visible interface. There weren't glasses, there wasn't a wearable, there wasn't a pendant he had to wear. It was almost just a little behind-the-ear device, effectively. And it was really impressive as a demo. I think they caveated it in an appropriate way; they said, hey, this is still a prototype and this is what we're working on, but there's a company doing this right now. And it's very much almost invisible AI: the idea that in the future you'll carry an AI companion around, but there won't really be a device. It'll just be a small, unobtrusive thing that uses computer vision, language models, and generative AI as a way of assisting you throughout the day.
Split screen, I think, to yesterday: I believe Meta did a huge event where they demoed a bunch of Meta AI, and their big announcement was this, I believe, $700-to-$800 wearable with Ray-Ban, which is kind of the next generation of their glasses. They had a bunch of flops in their live demo, but overall the reviews have been very, very positive that this might be the glasses that finally get the AI integration to work. And they showed off some cool stuff, like, oh, in the future it'll display a translation, sort of captions while you're talking to someone, so you could do live translation. You can pull up notes while you're talking to someone. So anyway, these are two really, really interesting visions, I think, of what the wearable AI future might look like. And I guess, maybe Aaron, I'll kick it over to you: is one of these visions more compelling to you than the other? Do you believe you're going to have a transparent screen in your glasses and that's how you're going to interact with AI, or do you believe in this fully invisible version, where it's just an audio voice that you interact with?
>> Yeah, I
mean, there's no free lunch, you know; it always depends on what environment you're working in as to what solution works out best. And this reminds me: I worked in biometrics for quite some time, about 10 years, before I came to what I'm currently doing. There are these biometrics called passthoughts, where if you could measure what you're thinking, then you could get access and authenticate yourself. This was back in 2005, and they would use EEGs to measure brain waves. Whereas this is using what's called EMG, where it's looking at the neuromuscular facial and throat activity that's happening. It's trying to deduce and infer what you're going to say based on those activations. So you actually have to really think about it, of course, and then intrinsically your muscles have to respond without actually projecting sound out. Whereas there are these other non-invasive approaches where you don't have to do that. And if I look back at my biomedical engineering work, there are things like fMRI, and there's transcranial magnetic stimulation, where you can turn off portions of the brain, and those are somewhat remote. So there's this whole spectrum of how you do ubiquitous computing: do you wear it, what kind of devices are those, and so on. And as always, I think it's going to be a combination. What I did like about the Ray-Ban is that it used a consumer device that people already use and need, sunglasses, and then they attached some tech around that. So if you can pick an object that someone looks at, and it has an affordance so you know what to do with it, and then you add in AI, because AI has become invisible, then I think it becomes very powerful. So the less physically intrusive, the better. And I think this AlterEgo is getting there, and it's a very interesting approach. What I would like to see is more technical work and published research, because I did search, and I was only able to find a paper from 2018 about this work, but I was looking for more information: how big is the vocabulary, and so on. So there are just a lot of questions that I had.
>> Yeah, definitely. And Martin, this goes to something we were talking about a few minutes ago. Interestingly, I was joking a little bit earlier about how ChatGPT-style chatbots were supposed to be the easiest interface, and it turns out there's actually a lot to learn. And I guess a little bit of what Aaron is saying is: while theoretically it's better to just think, I'm sending a text message, and the AI does it for you, that's actually almost more difficult than glasses plus a little thing in my hand that's kind of like a mouse, right?
>> Yeah, I was watching that demo, and it looked like these guys were really having to concentrate very hard to make this near-telepathic wearable actually send a message. It's supposed to only pass intentional thought, but I'm thinking: is it really only going to pass the things that I want it to? I really hope there's an approval button before it sends the message, because I'm answering your question now, Tim, and I'm thinking about how to answer it, but I'm also looking at Lauren and thinking, what's that picture frame behind her? I wonder what's in there.
>> What am I having for dinner?
>> I didn't want that included in the message.
>> So yeah, it was a very impressive piece of tech, but I always like to look at the disconnect between the engineering of, hey, let's see if we can create a wearable that uses some kind of brainwave analysis or something, versus what the marketing department has to do, which is to create this video to sell the product. And you look at the use cases they had in the video. One of them is, well, I want to talk to my friend over here; my friend's in the room, but perhaps it's too noisy. So that is the use case they decided on: if it's too noisy to talk to your friend, you need a wearable that goes over your ear and uses telepathy. Could we not just use our phones and send a text message, or write it down on a piece of paper? And I think back to when the Apple Watch was released. The idea was, could you take all of the tech from a phone and basically put it into a tiny little watch, and then, great, if you can do that, how would we use it? One of the things from the Apple keynote when that came out was these heart tapbacks, where it would measure your pulse and then send it to somebody else, because that used the heart rate monitor that was in there and the haptic engine that was in there. That is not a problem that I'm sure anybody was trying to solve; nobody's going to go out and buy a watch for that, and it very quickly disappeared. So it's interesting to see the use cases now. The Meta Ray-Ban display that was announced, that does look, as Aaron said, like something you might already be wearing anyway. So it's not such a problem to have to bring it with you and put it on; you might have it regardless. But even then, when you look at the use cases, they had things like a person out and about town who looked at a building and asked, what sort of architecture is this? And then the AI gave them an answer. Well, yeah, that's kind of cool, but am I going to need to know every day what kind of architecture I'm looking at?
>> Four or five times a day,
>> right? So, trying to find the daily use for these technologies, I think, is the real thing. With the watch, it turned out the use was fitness tracking and notifications, and that's basically 90% of all uses. So it does make me wonder how some of these AI-powered devices, these things that you wear that are basically screens, how we'll actually end up using them.
>> Yeah, I thought there was a bit of a miss on the use case. One of the better use cases I could imagine is the ability to help people who have speech impediments or can't speak, because I looked, and that's about 5 to 10% of the global population. That's about as many people as use these gen AI tools, roughly 400 to 800 million people. That's a big market that they would have already had, and it shows a use case that's genuinely helpful and can really help create a sense of empathy between people and the product. I would like to see that come out of this: how can we help humanity, rather than this just being neat, interesting tech?
>> Lauren, I'll give you the final word of today's episode. Telepathy: would you pay for it? But more generally, I'd love your thoughts on this space. It feels like we're getting skepticism on both sides: regardless of whether it's telepathy or the glasses, what I've heard from Martin and Aaron is, well, we're still waiting on what the good use case is here. Do wearables have a future with AI for now, or is it still very speculative from your point of view?
>> I think these two have two very different purposes. I think the glasses are more about making a more convenient or usable interface: take the technology we have and just make it easier to bring up in the interface you use. What's interesting about AlterEgo is that it creates a new communication plane, really. We have vocalized language, we have body language, we have facial expressions. This kind of sits in between, where you want to say something to someone but not let the whole room hear it. Do we really need a new communication plane? I feel like only in pretty select circumstances, like everyone's been saying. I do think it would sometimes be convenient if you're in an in-person meeting or at a party and you want to, you know, sometimes it's, I want to be out of here.
>> I want to get out of here. Let's go. Or like, hey Aaron, would you be ready to talk about the XYZ case study in this meeting? And I don't want to ask that out loud because it is a distraction. So there are use cases; I'm not sure if they'll be worth the cost of these technologies just yet. But like Aaron, I think the most compelling use case is this: it seems like when they were researching this at MIT, one of the main use cases they focused on was people with MS and other dystrophies, to actually help them be able to communicate. That seems hands down truly invaluable, worth all of the research it takes. And then if it can be extended to other, more social conveniences, that would be cool, sure.
>> That's a great note to end on. That's all the time we have for today. Lauren, Aaron, always great to have you on the show, and Martin, hope to have you back sometime. And thanks to all you listeners: if you enjoyed what you heard, you can get us on Apple Podcasts, Spotify, and podcast platforms everywhere. And we'll see you next week on Mixture of Experts.