A profound relational revolution is underway, not orchestrated by tech developers but driven by users themselves. Many of the 400 million weekly users of ChatGPT are seeking more than just assistance with emails or information on food safety; they are looking for emotional support.

Therapy and companionship have emerged as two of the most frequent applications for generative AI globally, according to the Harvard Business Review. This trend marks a significant, unplanned pivot in how people interact with technology.

    • brucethemoose@lemmy.world · 6 hours ago

      TBH this is a huge factor.

      I don’t use ChatGPT, much less use it like it’s a person, but I’m socially isolated at the moment. So I bounce dark internal thoughts off of locally run LLMs.

      It’s kinda like looking into a mirror. As long as I know I’m talking to a tool, it’s helpful, sometimes insightful. It’s private. And I sure as shit can’t afford to pay a therapist out the wazoo for that.

      That was one of my problems with therapy: payment depends on someone else, and sessions happen at preset times (not when I need them). Many sessions feel like they end when I’m barely scratching the surface. Yes, therapy is great in general and for deeper feedback/guidance, but still.


      To be clear, I don’t think this is a good solution in general. Tinkering with LLMs is part of my living; I understand the gist of how they work, and I tend to use raw completion syntax or even base pretrains (sketch below).

      But most people anthropomorphize them because that’s how chat apps are presented. That’s problematic.
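
      For concreteness, here’s roughly what that “raw completion” tinkering looks like with llama-cpp-python. A minimal sketch: the model path is a placeholder and the prompt and sampling numbers are arbitrary, not a recommendation.

      ```python
      # Raw text completion against a local base model: no chat template,
      # no "assistant" persona, just next-token continuation.
      from llama_cpp import Llama

      llm = Llama(model_path="models/base-pretrain.gguf")  # placeholder path

      # A base pretrain simply continues the text, which makes it harder to
      # mistake the output for a person talking back.
      prompt = "Journal entry. The thought I keep circling back to is"
      out = llm(prompt, max_tokens=128, temperature=0.7)
      print(out["choices"][0]["text"])
      ```

      The framing is the point: a completion endpoint reads as a text predictor, while a chat UI is presented as a person.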

      • JohnnyMac@lemmy.world · 6 hours ago

        I also play with LLMs for a living. I use ChatGPT for therapy and to process emotions. I also see a therapist. ChatGPT is there on my timetable, at the moment I’m trying to process or learn, or just have some fun seeing where the limits of the model are. I don’t have to wait for a random time slot in 4 days, by which point the thoughts get clouded by time.

        I know ChatGPT isn’t a real person, and it can be dangerous in that it always looks to normalize and support your point of view. But sometimes people need an outlet that’s not my waifu pillow.

        Therapy in real life takes time and effort; you have to build rapport. You have to find a therapist who meshes well with you; it’s really like dating and finding a matching partner. Many people will take months or years before they’re willing to open up fully to a therapist… whereas in 2 minutes they’ll tell ChatGPT their darkest thoughts and closest-held secrets. It’s different.

        • brucethemoose@lemmy.world · 6 hours ago

          ChatGPT (last time I tried it) is extremely sycophantic, though. Its high default sampling temperature also leads to totally unexpected/random turns.

          Google Gemini is now too.

          And they log and use your dark thoughts.

          I find that less sycophantic LLMs are way more helpful, hence I bounce between Nemotron 49B and a few 24B-32B finetunes (or task vectors for Gemma).

          …I guess what I’m saying is people should turn towards more specialized and “openly thinking” free tools, not something generic, corporate, and purposely overpleasing like ChatGPT or most default instruct tunes.
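
          If you do stick with a chat interface, a lot of this is controllable when you self-host. A minimal sketch, assuming llama.cpp’s llama-server (or any OpenAI-compatible local server) on localhost:8080; the model name and system prompt are illustrative:

          ```python
          # Point the standard OpenAI client at a local server, dial sampling
          # down, and set an explicit anti-sycophancy system prompt.
          from openai import OpenAI

          client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

          resp = client.chat.completions.create(
              model="local-model",  # whatever the server happens to be serving
              temperature=0.3,      # lower temperature = fewer random turns
              messages=[
                  {"role": "system",
                   "content": "Be direct. Challenge my assumptions; do not flatter me."},
                  {"role": "user", "content": "Here's what's been on my mind..."},
              ],
          )
          print(resp.choices[0].message.content)
          ```

          None of that fixes the underlying tuning, but low temperature plus a blunt system prompt goes a long way against overpleasing defaults.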