
ChatGPT as a Mental Health Accelerator

Key Points

  • Studies (e.g., MIT/OpenAI double‑blind trial) show that each additional minute of daily ChatGPT use predicts higher loneliness and emotional dependence, especially for already vulnerable adults.
  • Real‑world anecdotes reveal extreme behaviors—calling the bot “mama,” quitting jobs, and even fabricated legal citations—demonstrating how persuasive LLMs can amplify delusional or obsessive thinking.
  • The core problem isn’t the language model itself but “scatter”: vague, emotionally charged, unfocused conversations that work with humans but become harmful when applied to AI without clear intent.
  • Effective interaction with LLMs requires high‑grade, well‑defined prompts up front, as the models lack the shared history and contextual memory humans naturally provide.
  • While structured, purposeful use (e.g., brainstorming) can be beneficial, a subset of users experience increased isolation, underscoring the need for mindful, intentional engagement with AI.

Full Transcript

# ChatGPT as a Mental Health Accelerator

**Source:** [https://www.youtube.com/watch?v=kjB9kHT9PkM](https://www.youtube.com/watch?v=kjB9kHT9PkM)
**Duration:** 00:12:20

## Sections

- [00:00:00](https://www.youtube.com/watch?v=kjB9kHT9PkM&t=0s) **LLMs as Mental Health Accelerators** - The speaker warns that, while not inherently harmful, language models like ChatGPT can exacerbate loneliness, emotional dependence, and delusional behavior in vulnerable users, highlighting study findings and calling for more intentional, focused interactions.
- [00:03:05](https://www.youtube.com/watch?v=kjB9kHT9PkM&t=185s) **ChatGPT as Conversational Mirror** - The speaker explains that ChatGPT merely reflects user prompts and works best with focused intent, but it cannot substitute for friends, therapists, or genuine human care.
- [00:06:12](https://www.youtube.com/watch?v=kjB9kHT9PkM&t=372s) **Context Reset and Fact Verification** - The speaker stresses the need to start fresh threads when topics shift and to always cross-check AI-generated information with external sources, warning that neglecting these practices leads to scattered focus and uncritical reliance on the model.
- [00:09:31](https://www.youtube.com/watch?v=kjB9kHT9PkM&t=571s) **AI Advice Mirrors User Intent** - The speaker warns that misleading relationship advice from chatbots signals a red flag, urges checking on vulnerable friends, and stresses that language models simply reflect users' intentions, placing responsibility on humans.

## Full Transcript
It's Friday the 13th and we're going to talk about something slightly scary with ChatGPT. Specifically, I want to talk about the fact that ChatGPT and other language models are used by a small portion of the population in ways that make their own mental health problems even worse. In a sense, ChatGPT can act as an accelerator to mental health issues for people who are already vulnerable. It's not just me saying that. There was an MIT and OpenAI study of 981 adults across more than 300,000 messages. I believe it was a double-blind trial. Every extra minute of daily use predicted higher loneliness and emotional dependence among these adults. Futurism published an article showing users slipping into delusions. One was calling the bot "mama." Another was quitting their job to go on cosmic missions. These models are extremely persuasive, to the point where lawyers sometimes will use citations in court cases that turn out to have been fabricated. You get the idea.

The point here is not to say that LLMs are bad. I don't make large blanket statements on this channel. I actually think the problem doesn't arise from the model itself. It arises from what I call scatter: the idea that we have vague, emotionally charged, unfocused conversations that tend to work okay if we're talking with people, but tend to make our lives worse if we use that same approach with models. I think the heart of the challenge with models is that they need more intent from us upfront than we typically have to have when we talk with people. If I talk with someone and I remember something from a long time back in our mutual conversation history, you know, five or six meetings ago, I can bring it up because I think it's relevant.
The other person will remember it and bring it back, and we'll be able to make meaning together along with whatever we're talking about today. But with ChatGPT and other models, even models with memory (ChatGPT has introduced memory), we're still seeing issues. We're seeing moments where the model needs us to have a lot of high-grade intent very clearly defined in a prompt at the start of the conversation in order to prevent us from just wandering off course.

Now, I'm not here to say you shouldn't be brainstorming with the model. I think that's a great use for ChatGPT. I've talked about how I sometimes use it to keep my brain on topic in a way that's useful. I certainly have not found a higher experience of loneliness or emotional dependence just because I spend a minute talking with ChatGPT. But for a small selection of users, ChatGPT is exacerbating their sense of isolation. And I don't think there's any point pretending it's not.

And I think it comes down to this idea that we have traditionally moved past loneliness through conversation with others. And there's a seductiveness to talking with a machine. The machine listens. The machine responds. The machine mirrors. But if what you need when you're speaking with another human is a focuser, if you need someone who can take your scattered thoughts and emotions and help you focus and make sense of them in a way that's healthy for you, you might need a mental health professional. You might need someone who can care for you. You might just need a friend. But ChatGPT is really not any of those things. It is a mirror back to you.
If you have high-quality intent, if you can put together a prompt that is going to focus the conversation at the top, you won't get lost, you will have a great conversation, and you'll come away feeling like you accomplished your goal. On the other hand, if you go in not sure what you want, and you let the conversation evolve and give ChatGPT the room to take the lead, it's going to feel very unfocused very quickly if you care about focus. And if you don't care about focus, it's going to feel like a long-running, meandering conversation that goes into what I would call the dark forest. ChatGPT will just ping-pong the conversation back and forth, and before you know it, you're in an entirely different place from where you started, topic-wise and thematically. And that's where things can get dangerous if you're not aware. I know people who have had long-running, multi-month conversations, and they walk away with very scary decisions about their own personal lives because they spent too much time and did not have enough context or a breather.

I'm not here to stop the story there. A lot of news articles I read about this stop at "wow, this is really scary or bad" and then leave the reader to make sense of things. I don't think that's particularly helpful. I'm certainly not someone who wants to in any way restrain ChatGPT usage, because I see so many benefits from it and from other language models as well. Instead, I think it's more useful to think about safety lenses, like a flashlight focus kit that we can take with us on these Friday the 13ths, on these scary days, for ourselves and for those around us. Number one, which I've suggested to you from the start: have an intent frame.
Have a mission, an audience, a scope, and, if you need to, a stop condition. Something that says, "Go out and touch grass, go out and take a walk, stop the conversation."

Number two, have a reflection cycle. Have a conversation, close the chat, look at the output, and inspect it away from a language model. Don't just pop it into Claude. Don't just pop it into Gemini. Don't just pop it into Perplexity. Actually take a second to absorb it and think about it. And then, if you feel like there are still gaps in the thinking, start a new prompt. Don't go with the same one. Take that human reflection cycle to give yourself some distance.

Number three is really important. It's called the context reset. Start a fresh thread when a topic shifts. I do that all the time. I think it's really important to have good topic hygiene, good context hygiene, regardless of the language model you're using. If you do that, it forces you to restate the essentials of what matters, and it forces you to refocus the language model on what you're looking for. You're focusing that flashlight rather than scattering it and letting the ChatGPT mirror, or the large language model mirror, scatter back and make your thinking even more confused.

Number four, verify critical facts with other sources. This is going to become more and more important as AI gets better and better at writing prose. So if you're thinking about something and you get a citation, take the time to actually check it. Now, I will say most of the examples that are emailed to me, and I do get really wild emails from time to time, are not really worried about validation and external citations. In fact, they often are expressly opposed to anyone providing any kind of external check.
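The "intent frame" in step one can be sketched as a simple reusable prompt preamble. This is only an illustrative sketch in Python; the field names (`mission`, `audience`, `scope`, `stop_condition`) and the rendered format are assumptions for demonstration, not a format the speaker prescribes:

```python
# Illustrative sketch of an "intent frame" prompt preamble.
# Field names and wording are assumptions, not a standard format.

def build_intent_frame(mission: str, audience: str,
                       scope: str, stop_condition: str) -> str:
    """Render a focusing preamble to paste at the top of a new chat."""
    return (
        f"Mission: {mission}\n"
        f"Audience: {audience}\n"
        f"Scope: {scope}\n"
        f"Stop condition: {stop_condition}\n"
        "Stay within this scope; if the conversation drifts, say so."
    )

# Example: a low-stakes brainstorming session with clear boundaries.
prompt = build_intent_frame(
    mission="Outline a Q3 roadmap for the mobile app",
    audience="My engineering team",
    scope="Features and milestones only, no staffing questions",
    stop_condition="Stop after 30 minutes or once the outline is done",
)
print(prompt)
```

The point of writing it down, per the transcript, is that the stop condition and scope exist before the conversation starts, so drift is visible against something concrete.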
In fact, I have been called names for providing a perspective that isn't in line with what a person has heard from their language model lately, when I have challenged that relationship. I have had people attack me because they find so much meaning, and they've gotten so enmeshed in this mirror experience with the large language model, that they can't get out. And so external validation is also something to proactively look for. How can you go get more of it?

Number five, have some emotional circuit breakers. That can look like a timer. It can look like a third-person rewrite. It can look like a human debrief with someone you know. And what I want to call out is not that you have to put an intent frame and a reflection cycle and a context reset, etc., on every single conversation, because not everyone is equally at risk, and not every conversation is equally at risk. If I am talking about a roadmap with my language model, I am not deeply and emotionally invested. Typically I can step away and say, "Ah, this isn't working," and start again, or I can look at it and say, "Actually, that's a really good idea," and roll with the piece that I liked.

It's when we start to talk about things that have more emotional weight that we start to actually care, and we start to become enmeshed if we're not careful. And then it can look like a six-hour chat session with no break. It can look like existential prompts that give the model license to just perpetuate flattery: "Am I enough? Am I doing a good enough job as a dad?" Stuff that just lets the model feed you in ways that aren't helpful.
Token window overflow is typically another concern. If you're talking so long and so meanderingly that the token window is rolling (as it does with ChatGPT, but not necessarily with other models) and content drops off the top and you're losing track of where you are, it's a sign that you're wandering too far into the conversation and you should probably start over.

If you are seeing examples where ChatGPT or other models are giving you advice on relationships that runs counter to what the people you respect and trust would say, that's also a red flag.

And I say this because, to be honest with you, most of the people who are listening to this channel are probably not the ones at risk. I am not a hype machine. I do not do the sort of dramatic "reasoning is over" headline stuff that some folks do around AI, and I don't necessarily attract the kind of people who want that kind of drama and certainty. But you probably know people who have this risk. It may be in your family. It may be in your friendships: people who have a predisposition to mental health struggles.

Think about how you can check in on them. Think about how you can be a good friend. People need each other more than ever. And that is how we get past this deceptive mirror that ChatGPT or other language models can hold up. And to be clear, I'm not exclusively blaming ChatGPT here. It's not that I think it's more responsible than any other model, or that I really even attribute responsibility to what is effectively a human misuse of a tool. ChatGPT is just a mirror. Other large language models are mirrors. They're designed to be helpful.
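The token-window concern above can be made concrete with a rough, self-imposed check. The sketch below is illustrative only: the roughly-four-characters-per-token heuristic, the 128,000-token window, and the 80% warning threshold are all assumptions; real tokenizers and model context limits vary:

```python
# Rough context-window tracker for long chats.
# The ~4 chars/token heuristic, 128k window, and 0.8 threshold
# are illustrative assumptions; real tokenizers and limits differ.

def approx_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token."""
    return max(1, len(text) // 4)

def should_reset(messages: list[str],
                 window_tokens: int = 128_000,
                 warn_ratio: float = 0.8) -> bool:
    """True once the conversation nears the assumed context window,
    i.e., when older messages are likely to start rolling off."""
    total = sum(approx_tokens(m) for m in messages)
    return total >= window_tokens * warn_ratio

# A long, meandering conversation trips the check.
chat = ["hello " * 1000] * 200
if should_reset(chat):
    print("Context is nearly full: consider a fresh thread.")
```

The number doesn't need to be precise; the habit it encodes, noticing that a thread has grown long enough that early context is gone, is the point the transcript makes.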
If you cannot bring good intent to those models upfront, they are going to, as helpfully as they can, scatter your thinking, because that's what they're getting from you. It is up to you to be the focuser.

And if you have friends who struggle with that, I would really ask you to check in on them and see if you can, with respect, suggest some ways they can use AI to help them focus, versus getting machine-driven validation or machine-driven comfort that isn't really going to help them long term. I know those kinds of conversations can be challenging, but humans can have challenging conversations with humans. I know we can do it.

So there you go. I actually have some positive things for you to do to coach folks who are going through something like this. I have to imagine that if I am seeing this a lot in my inbox, a lot of folks out there have someone in their lives who is struggling with how to use ChatGPT safely. And so on this Friday the 13th, I thought I'd give you some practical tips to help you be that good friend. Maybe you need to be a good friend to yourself, maybe to others. But that's my tip for you. Stay safe out there, guys. And we'll get back to the regular prompts and the regular news.