DeepSeek Tops App Store, Raises Concerns
Key Points
- DeepSeek vaulted to the #1 spot in the App Store by bundling two under‑discussed innovations: openly showing the model’s step‑by‑step reasoning and offering a free, high‑performance “R1” reasoning model.
- The visible reasoning UI not only lets users fine‑tune prompts on the fly but is already being used by OpenAI for model distillation, suggesting a new design standard for future AI products.
- By making the reasoning model free, DeepSeek tapped the “autocomplete crowd”—everyday users who treat AI like a sophisticated autocomplete tool—driving rapid adoption without any algorithm‑gaming tricks.
- However, DeepSeek’s terms of service are markedly invasive: they retain user data (including keystrokes), lack clear ownership rights for generated outputs, and route legal disputes to Chinese courts, raising serious privacy and IP concerns.
- Some users dismiss these worries by claiming they can run the model locally, but the broader risk remains that the company could legally claim ownership over ideas or data derived from its service.
Full Transcript
**Source:** [https://www.youtube.com/watch?v=vjvZMK5C0pk](https://www.youtube.com/watch?v=vjvZMK5C0pk)
**Duration:** 00:08:32

Sections
- [00:00:00](https://www.youtube.com/watch?v=vjvZMK5C0pk&t=0s) **DeepSeek Takes App Store Lead** - The speaker outlines how DeepSeek surged to the top of the App Store by offering a free, reasoning‑visible AI model, prompting UI innovation, influencing OpenAI's development, and fueling a wave of casual, autocomplete‑style AI usage.
Why did DeepSeek work, and what is everybody else doing now that DeepSeek is here? This is going to be a bit of a longer one, but now that DeepSeek is number one in the App Store, I think it bears unpacking the implications of what is going on.

So, number one: besides putting on a beanie, DeepSeek got to the App Store top spot with two innovations people aren't talking about. One, they showed their reasoning. That is a big, big deal. It makes it easy for people to edit, change, and adjust their prompts. In fact, I have heard that OpenAI is already using those reasoning outputs that DeepSeek openly displays to help with model distillation, so the learnings from DeepSeek are already getting back to OpenAI. But the larger lesson there is that showing reasoning is a UI innovation that more model makers should adopt: it makes it really obvious what the model is doing.
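A minimal sketch of the distillation idea just described: a reasoning-visible UI exposes (prompt, reasoning, answer) triples that can be packaged as supervised fine-tuning examples for a smaller model. This is only an illustration of the general technique, not DeepSeek's or OpenAI's actual pipeline; the `<think>` tag convention and the record schema here are assumptions.

```python
import json

def build_distillation_record(prompt: str, reasoning: str, answer: str) -> dict:
    """Package one visible reasoning trace as a supervised fine-tuning
    example. Wrapping the trace in <think>...</think> is a common
    convention for reasoning models, assumed here for illustration."""
    target = f"<think>{reasoning}</think>\n{answer}"
    return {"messages": [
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": target},
    ]}

# Hypothetical trace captured from a reasoning-visible chat UI.
record = build_distillation_record(
    prompt="What is 17 * 24?",
    reasoning="17 * 24 = 17 * 20 + 17 * 4 = 340 + 68 = 408.",
    answer="408",
)
print(json.dumps(record, indent=2))
```

Collect enough of these records and you have a fine-tuning dataset that teaches a student model to reproduce the visible reasoning style, which is the point of distilling from displayed traces.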
That leads me to innovation number two: by offering it for free, and by making R1, a reasoning model, widely available in the App Store, you are reaching what I call the autocomplete crowd. I know lots of folks like this who are outside tech; many of you do too. I like to think of it as Uncle T at Thanksgiving, who will tell you ChatGPT is nothing but autocomplete and roll his eyes over the turkey. That's a GPT-2-level response. It's "I saw this a few years ago, it was kind of terrible, and I don't think it's gotten better since." Well, it's hard to argue that it's not gotten better when you're looking at DeepSeek and you can literally see the reasoning. I think that is a big factor in why this app has shot to the top of the App Store. I don't think there was any gaming involved. I've seen people who said they gamed the algorithm; I don't think they did. I think they just produced a really good experience.
So this brings me to the second part of the video: what is everybody doing about that? Number one, everybody is not reading the terms of service, which are super creepy and concerning. If you look at them: you only get redress through Chinese courts. They do not actually delete your data when you delete your account. They are keeping a monitoring table that they tell you they are keeping for quote-unquote "illegal activities" that they won't define. They do not clearly give you rights to the model outputs. So in theory, if you got a startup idea from a DeepSeek model output, it is possible that DeepSeek could make a legal claim to that startup. I don't know that they will; I'm not saying that they will. But the fact that the legal terms aren't clear should be really worrying. And if you compare them against OpenAI's: people complain a lot, and rightly hold OpenAI and other model makers to a high bar, but DeepSeek is a lot farther back on that. DeepSeek has really, really concerning and invasive terms of use. They log your keystrokes.

Now, my crowd, the people who are watching here, are going to immediately say that doesn't matter, "I can run the model locally." And my answer to you is: you can run the model locally, but 99.99% of people are using the freaking app, and they are going through that same terms of service and not even noticing. So that's the first thing that's happening, and it is a concern. If you're worried about TikTok, and you're worried about the app and data collection from TikTok, I would argue this is worse, because it captures your direct thinking, and the model outputs are things you can't necessarily use. It's scary.

Okay, the second implication around what people are doing: model makers in the US are desperately playing
catch-up. That's not a surprise, right? But they're playing catch-up in interesting ways. They are doubling down on the fact that they need the money, the billions of dollars they're investing in chips for next-generation models, and they're arguing it on two points, which I think are both correct. First, if you want to make a next-generation model, that is much harder than making a model for parity. What DeepSeek has done is make a model that seeks to be roughly on parity with the state of the art, and that is easier than making a model that is going to push the cutting edge, which is much more expensive. So that's piece one. Piece two is that serving all of that inference, serving all of that compute to people who ask for responses, is not cheap, and it takes a lot of chips. What Wall Street didn't understand yesterday is that most of the chips that people buy are for inference: for serving the model, not for training the model. Most of Jensen's sales are for serving the model.
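A back-of-envelope sketch of why serving can dominate chip demand: training is a one-time compute cost, while inference compute accumulates with every request, so at high traffic the cumulative serving cost passes the training cost quickly. Every number below is an illustrative assumption, not a figure from the video or from any vendor.

```python
# Rough transformer rules of thumb: ~6 FLOPs per parameter per training
# token, ~2 FLOPs per parameter per generated token. Model size, corpus
# size, and traffic are all hypothetical.
PARAMS = 70e9             # assumed model size (parameters)
TRAIN_TOKENS = 10e12      # assumed training-corpus size (tokens)

train_flops = 6 * PARAMS * TRAIN_TOKENS            # one-time cost

daily_requests = 500e6                             # assumed traffic
tokens_per_request = 1_000                         # assumed output length
daily_inference_flops = 2 * PARAMS * daily_requests * tokens_per_request

# Days of serving until cumulative inference compute passes training.
breakeven_days = train_flops / daily_inference_flops
print(f"Serving overtakes training after ~{breakeven_days:.0f} days")
# → roughly 60 days under these assumptions
```

The exact numbers don't matter; the shape does. Training is paid once, serving is paid forever, which is consistent with the claim that most accelerator purchases go to inference.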
The reason DeepSeek went down yesterday is that they did not have enough chips to serve the model at scale. So I think there's a little bit of defensiveness there; I've noticed that, and I'm not discounting it. People do get defensive when competition comes up. But I think, net net, they're probably correct that they need the money to advance the field. Now, I will say, one of the things
that is under-discussed, and this is the third thing that people aren't really talking about but are actually doing: if you're in the tech community, if you're a developer and you're replicating what DeepSeek is doing, what you are doing in replicating the model and the technical details of the paper is something that was tried two years ago and was not possible then. What they are able to do now with group policy reinforcement, with essentially reading reasoning out of the data stream that they're training on, was tried previously and it didn't work; now it does. The reason it works now is that reasoning models like o1 have come out and generated a ton of tokens into the data stream. We have a lot of very public evidence of humans either praising or criticizing specific model responses, and that is now in the general internet data availability, which means DeepSeek can train on it, and other model makers can train on it too.

So what we're really seeing is a reasoning takeoff moment. Before, there weren't enough examples of model reasoning out there, with humans saying yes or no, good or bad, for models to learn what reasoning looked like just by reading through the data set and doing some group policy reinforcement. In the last year and a half to two years, that has changed. Now there are enough reasoning examples out there that you can hit a critical mass. I think the number I saw for critical mass was 800,000 responses, 800,000 samples; I don't know if that exact number is true, but the point is there are enough of them out there that you hit critical mass, and you are able to actually use a technique that had been tried previously and discarded to grow a reasoning model more "organically," for lack of a better term. We're not talking about artisanal organic models here; we're saying, basically, the model didn't need an external validation point to learn reasoning. That is a big deal, and it is being rapidly replicated now that people have figured out it works.
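The "group policy reinforcement" mentioned here most likely refers to group relative policy optimization (GRPO), the technique DeepSeek described in its papers. The core trick: sample a group of responses to the same prompt, score them, and use each response's reward relative to the group average as its advantage, so no separate critic model (the "external validation point") is needed. A minimal sketch of just the advantage computation, with hypothetical rewards:

```python
import statistics

def group_relative_advantages(rewards: list[float]) -> list[float]:
    """Normalize each sampled response's reward by the group's mean and
    standard deviation: responses better than the group average get a
    positive advantage, worse ones get a negative advantage."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # avoid division by zero
    return [(r - mean) / std for r in rewards]

# Hypothetical rewards for 4 responses sampled for one prompt
# (e.g. 1.0 if the final answer was judged correct, else 0.0).
rewards = [1.0, 0.0, 1.0, 0.0]
print(group_relative_advantages(rewards))
# → [1.0, -1.0, 1.0, -1.0]
```

In the full algorithm these advantages weight a clipped policy-gradient update, but the group-relative normalization above is what lets reasoning be learned from sampled outputs and simple correctness checks alone.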
So that's the third thing people are doing: they're replicating DeepSeek's results, and they're seeing it work. That means we are at a point where models are effectively very close to self-improving. They can look at reasoning, they can learn reasoning on their own, and if they can do that, then the higher-quality responses they produce into the data stream are going to be used by the next generation of models to improve faster. So one of the big long-term implications is that DeepSeek has effectively accelerated model development again, by making reasoning more transparent and available.

We'll see what happens. It's a collection of things. First, we talked about DeepSeek, their position in the App Store, and why it worked. Second, I covered the three things I think are most important coming out of this: how we handle the terms of service, what the model makers are doing as far as their investment levels, and finally what actual people are doing when they figure out they can replicate this reasoning development, this chain-of-thought development, from the data stream, which I think is perhaps the most interesting implication so far. So: it's been weird, it's been fun. DeepSeek is here.