
DeepSeek Tops App Store, Raises Concerns

Key Points

  • DeepSeek vaulted to the #1 spot in the App Store by bundling two under‑discussed innovations: openly showing the model’s step‑by‑step reasoning and offering a free, high‑performance “R1” reasoning model.
  • The visible reasoning UI not only lets users fine‑tune prompts on the fly but is reportedly already being used by OpenAI for model distillation, suggesting a new design standard for future AI products.
  • By making the reasoning model free, DeepSeek tapped the “autocomplete crowd”—everyday users who treat AI like a sophisticated autocomplete tool—driving rapid adoption without any algorithm‑gaming tricks.
  • However, DeepSeek’s terms of service are markedly invasive: they retain user data (including keystrokes), lack clear ownership rights for generated outputs, and route legal disputes to Chinese courts, raising serious privacy and IP concerns.
  • Some users dismiss these worries by claiming they can run the model locally, but the broader risk remains that the company could legally claim ownership over ideas or data derived from its service.
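The distillation point in the bullets above can be made concrete with a toy sketch. This is a hypothetical illustration, not DeepSeek's or OpenAI's actual pipeline: it only shows the general idea of packing a teacher model's visible reasoning traces into supervised training records for a student model. The function name, the `<think>` delimiter, and the record format are all made-up assumptions.

```python
# Toy sketch: turning a teacher model's visible reasoning traces into
# distillation data for a student model. Purely illustrative; the
# record format and <think> delimiter are assumptions, not any
# vendor's real pipeline.

def build_distillation_example(prompt, reasoning, answer):
    """Pack a (prompt, visible reasoning, final answer) triple into a
    single supervised training record for a student model."""
    return {
        "input": prompt,
        # The student is trained to reproduce the teacher's chain of
        # thought followed by its final answer.
        "target": f"<think>{reasoning}</think>\n{answer}",
    }

# Hypothetical traces a teacher model displayed to its users:
traces = [
    ("What is 17 * 6?", "17 * 6 = 17 * 5 + 17 = 85 + 17 = 102", "102"),
    ("Capital of France?", "France's capital city is Paris.", "Paris"),
]

dataset = [build_distillation_example(p, r, a) for p, r, a in traces]
print(len(dataset))                                # 2
print(dataset[0]["target"].startswith("<think>"))  # True
```

The point the transcript makes is that once a model shows its reasoning in the open, anyone (including competitors) can harvest records like these at scale.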

Full Transcript

**Source:** [https://www.youtube.com/watch?v=vjvZMK5C0pk](https://www.youtube.com/watch?v=vjvZMK5C0pk)
**Duration:** 00:08:32

## Sections

- [00:00:00](https://www.youtube.com/watch?v=vjvZMK5C0pk&t=0s) **DeepSeek Takes App Store Lead** - The speaker outlines how DeepSeek surged to the top of the App Store by offering a free, reasoning‑visible AI model, prompting UI innovation, influencing OpenAI's development, and fueling a wave of casual, autocomplete‑style AI usage.
Why did DeepSeek work, and what is everybody else doing now that DeepSeek is here? This is going to be a bit of a longer one, but now that DeepSeek is number one in the App Store, I think it bears unpacking the implications of what is going on.

So, number one: besides putting on a beanie, DeepSeek got to the App Store top spot with two innovations people aren't talking about. One, they showed their reasoning. That is a big, big deal. It makes it easy for people to edit, change, and adjust their prompts. In fact, I have heard that OpenAI is already using those reasoning outputs that DeepSeek openly displays to help with model distillation. So the learnings from DeepSeek are already getting back to OpenAI. But the larger lesson there is that showing reasoning is a UI innovation that more model makers should adopt: it makes it really obvious what the model is doing.

That leads me to innovation number two, which is that by offering it for free, and by making R1, a reasoning model, widely available in the App Store, you are reaching what I call the autocomplete crowd. I have lots of folks I know like this who are outside tech; many others do too. I like to think of it as Uncle T at Thanksgiving, who will tell you ChatGPT is nothing but autocomplete and roll his eyes over the turkey. It's a GPT-2-level response: "I saw this a few years ago, it was kind of terrible, and I don't think it's gotten better since." Well, it's hard to argue that it's not gotten better when you're looking at DeepSeek and you can literally see the reasoning. I think that is a big factor in why this app has shot to the top of the App Store. I don't think there was any gaming involved. I've seen people who said they gamed the algorithm; I don't think they did. I think they just produced a
really good experience.

So this brings me to the second part of the video: what is everybody doing about that? Number one, everybody is not reading their terms of service, which are super creepy and concerning. If you look at it, you only get redress through Chinese courts. They do not actually delete your data when you delete your account. They are keeping a monitoring table that they tell you they are keeping for, quote unquote, "illegal activities" that they won't define. They do not clearly give you rights to the model outputs. So in theory, if you got a startup idea from a DeepSeek model output, it is possible that DeepSeek could make a legal claim to that startup. I don't know that they will; I'm not saying that they will. But the fact that the legal terms aren't clear should be really worrying. And if you compare them versus OpenAI (people complain a lot, and rightly hold OpenAI and other model makers to a high bar), DeepSeek is a lot farther back on that. DeepSeek has really, really concerning and invasive terms of use. They log your keystrokes.

Now, my crowd, the people who are talking here, are going to immediately say that doesn't matter, "I can run the model locally." And my answer to you is: you can run the model locally, but 99.99% of people are using the freaking app, and they are going through that same terms of service and not even noticing. So that's the first thing that's happening, and it is a concern. If you're worried about TikTok, and you're worried about the app and data collection from TikTok, I would argue this is worse, because it captures your direct thinking, and the model outputs are things you can't necessarily use. It's scary.

Okay, the second implication around what people are doing is that model makers in the US are desperately playing catch-up.
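One of the arguments that comes next in the transcript is that inference (serving the model), not training, drives most chip demand. That claim can be sketched with back-of-the-envelope arithmetic. Every number below is a made-up illustrative assumption, not real usage or hardware data:

```python
# Back-of-envelope sketch of serving (inference) capacity planning.
# All numbers are invented for illustration only.

def gpus_needed(daily_requests, tokens_per_request, tokens_per_sec_per_gpu):
    """GPUs required to serve a daily token volume in real time."""
    tokens_per_day = daily_requests * tokens_per_request
    seconds_per_day = 86_400
    capacity_per_gpu = tokens_per_sec_per_gpu * seconds_per_day
    # Round up: a fractional GPU still means one more physical card.
    return -(-tokens_per_day // capacity_per_gpu)

# Hypothetical load: 100M requests/day, 500 tokens each, and a GPU
# that sustains 2,000 generated tokens/second.
print(gpus_needed(100_000_000, 500, 2_000))  # 290
```

The shape of the calculation, not the specific numbers, is the point: serving scales with users, so a viral app that cannot add chips fast enough will fall over, which is the speaker's explanation for the outage.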
That's not a surprise, right? But they're playing catch-up in interesting ways. They are doubling down on the fact that they need the money, the billions of dollars they're investing in chips for next-generation models, and they're arguing it on two points, which I think are both correct. First, making a next-generation model is much harder than making a model for parity. What DeepSeek has done is make a model that seeks to be roughly on parity with the state of the art, and that is easier than making a model that is going to push the cutting edge, which is much more expensive. So that's piece one. Piece two is that serving all of that inference, serving all of that compute to people who ask for responses, is not cheap, and it takes a lot of chips. What Wall Street didn't understand yesterday is that most of the chips people buy are for inference: for serving the model, not for training it. Most of Jensen's sales are for serving the model. The reason DeepSeek went down yesterday is that they did not have enough chips to serve the model at scale. So I think there's a little bit of defensiveness there; I've noticed that, and I'm not discounting it, because people do get defensive when competition comes up. But I think, net net, they're probably correct that they need the money to advance the field.

Now, I will say one of the things that is under-discussed, and this is the third thing that people aren't really talking about but are actually doing: if you're in the tech community, if you're a developer and you're replicating what DeepSeek is doing, what you are doing in replicating the model and the technical details of the paper is something that was tried two years ago and was not possible then. And so what they are able to do now
with group policy reinforcement, with essentially reading reasoning out of the data stream that they're training on, was tried previously and it didn't work, and now it does. The reason it works now is that reasoning models like o1 have come out and generated a ton of tokens into the data stream. We have a lot of very public evidence of humans either praising or criticizing specific model responses, and that is now in the general internet data availability, which means DeepSeek can train on it, and other model makers can train on it too.

So what we're seeing really is a reasoning takeoff moment. Before, there weren't enough examples of model reasoning out there, with humans saying yes or no, good or bad, for models to learn what reasoning looked like just by reading through the data set and doing some group policy reinforcement. In the last year and a half to two years, that has changed. Now there are enough reasoning examples out there that you can hit a critical mass. I think the number I saw for critical mass was 800,000 responses, or 800,000 samples. I don't know if that exact number is true, but the point is there are enough of them out there that you hit critical mass, and you are able to actually use a technique that had been tried previously and discarded to grow a reasoning model more organically, for lack of a better term. We're not talking about artisanal organic models here; we're saying basically the model didn't need an external validation point to learn reasoning. That is a big deal, and it is being rapidly replicated now that people have figured out it works.

So that's the third thing people are doing: they're replicating DeepSeek's results and they're seeing it work. And that means we are at a point where models are effectively very
close to self-improving. They can look at reasoning; they can learn reasoning on their own. And if they can do that, then the higher-quality responses they produce into the data stream are going to be used by the next generation of models to improve faster. So one of the big long-term implications is that, effectively, DeepSeek has accelerated model development again by making reasoning more transparent and available.

So, we'll see what happens. It's a collection of things. First off, we talked about DeepSeek and their position in the App Store and why it worked. Second, I covered the three things that I think are most important coming out of this: how we handle the terms of service, what the model makers are doing as far as their investment levels, and finally what actual people are doing when they figure out that they can replicate this reasoning development, this chain-of-thought development, from the data stream, which I think is perhaps the most interesting implication so far. So, it's been weird, it's been fun. DeepSeek is here.
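A note on the "group policy reinforcement" the speaker mentions: the DeepSeek-R1 paper describes a technique called Group Relative Policy Optimization (GRPO). The piece of it that removes the "external validation point" (a separately trained value/critic model) is scoring each sampled response relative to its own group of samples. A minimal sketch of just that advantage computation, with made-up rewards:

```python
import statistics

def group_relative_advantages(rewards):
    """GRPO-style advantage: score each sampled response relative to
    the mean and spread of its own group, so no learned value network
    ("external validation point") is needed."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # avoid division by zero
    return [(r - mean) / std for r in rewards]

# Hypothetical rewards for 4 responses sampled for one prompt,
# e.g. 1.0 if the final answer was verifiably correct, else 0.0:
rewards = [1.0, 0.0, 1.0, 0.0]
print(group_relative_advantages(rewards))  # [1.0, -1.0, 1.0, -1.0]
```

Responses that beat their group average get a positive advantage and are reinforced; the rest are pushed down. This is only the scoring step, not a full training loop, but it shows why the method scales once the public data stream contains enough checkable reasoning examples to reward.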