A profound relational revolution is underway, not orchestrated by tech developers but driven by users themselves. Many of the 400 million weekly users of ChatGPT are seeking more than just assistance with emails or information on food safety; they are looking for emotional support.
“Therapy and companionship” have emerged as two of the most frequent applications for generative AI globally, according to the Harvard Business Review. This trend marks a significant, unplanned pivot in how people interact with technology.
Funny, I was just reading comments in another thread from people with mental health problems proclaiming how terrific it is. Especially concerning was that they found value in the recommendations LLMs make and were “trying those out.” One of the commenters described themselves as “neuro diverse” and was acting on “advice” from generated LLM responses.
And for something like depression, this is deeply bad advice. I feel somewhat qualified to weigh in on it as somebody who has struggled severely with depression and managed to get through it with the support of a very capable therapist. There’s a tremendous amount of depth and context to somebody’s mental condition, and understanding it takes deliberate probing, not stringing words together until they form sentences that mimic human interaction.
Let’s not forget that an LLM will not be able to raise alarm bells, read medical records, write prescriptions, or work with other medical professionals. Another thing people often forget is that LLMs have maximum context lengths and cannot, by design, keep a detailed “memory” of everything that’s been discussed.
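To make the “memory” point concrete: a chat client has to fit the entire conversation into the model’s fixed context window, and once it no longer fits, older turns get dropped. A minimal sketch of that truncation is below; the token limit and the four-characters-per-token estimate are illustrative assumptions, not any real model’s numbers.

```python
# Sketch of why chat "memory" is bounded: the client must drop older
# turns once the conversation exceeds the model's context window.
# Both constants below are hypothetical, for illustration only.

MAX_CONTEXT_TOKENS = 8_000  # assumed context window, not a real model's limit

def estimate_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token."""
    return max(1, len(text) // 4)

def truncate_history(messages: list[dict]) -> list[dict]:
    """Keep only the most recent messages that fit in the context window.

    Everything older is silently discarded, so details from early in a
    long conversation are simply gone unless the user re-states them.
    """
    kept, used = [], 0
    for msg in reversed(messages):  # walk newest -> oldest
        cost = estimate_tokens(msg["content"])
        if used + cost > MAX_CONTEXT_TOKENS:
            break  # older turns fall out of "memory" here
        kept.append(msg)
        used += cost
    return list(reversed(kept))
```

However a given product handles this (summarizing, retrieval, or plain truncation like the sketch), the underlying constraint is the same: a fixed-size window, not the cumulative record a therapist keeps.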
It’s effectively self-treatment with more steps.
This is the “benefit” LLM therapy would provide if it worked. The reality is that it doesn’t, but it serves as a proof of concept that there is a need for anonymous therapy. Therapy in the USA is only for people with socially acceptable illnesses. People rightfully live in fear of being labeled untreatable, a danger to themselves and others, and then at best dropped from therapy and at worst institutionalized.
Also worth noting that:
1. AI is arguably a surveillance technology that’s built on decades of our patterns
2. The US government is increasingly authoritarian and has expressed interest in throwing neurodivergent people into labor camps
3. Large AI companies like OpenAI are signing contracts with the Department of Defense
If I were a US citizen, I would be avoiding discussing my personal life with AI like the plague.
I can’t find the story for the life of me right now, but I’m pretty sure there was one a few months back where someone was talking with an LLM about their depression and suicide, and the LLM essentially said “yeah, you should probably do it,” because to the LLM, that was the best solution to the problem.
And for many people it’s better than nothing and likely the best they can do. Waiting lists for a basic therapist in my area are months long. They’re shorter if you pay out of pocket, but that isn’t affordable for average people because it’s something like $300-400 for a one-hour session.
I get it, but I’m not sure that “something is better than nothing” holds in this case. I don’t judge any individual for using it, but the risks are huge, as others have documented. And the benefits are questionable.
Something is always better than nothing, especially if you are starving.