• samus12345@sh.itjust.works · 14 hours ago

    Honestly, if people actually followed the New Testament part of the Bible it would be an improvement, even with the awful stuff in it.

    • WoodScientist@sh.itjust.works · 13 hours ago

      Yeah, except we’ll have thousands of nutjobs running around. Each running their own instance of your New Testament LLM. Each thoroughly convinced they are the messenger of the new digital messiah. According to the text of the Bible, many people walked away from their lives and abandoned everything to follow Him. Considering what we observe in modern cults, that doesn’t seem an unlikely historical reality.

      An LLM trained on the words of Jesus won’t just tell people to live good lives. It will be telling people, "give everything up and follow Me (the computer)." And if it were a good enough LLM, it would be pretty persuasive to a good number of people. The one saving grace is that JesusGPT isn’t going to be healing the sick, walking on water, or raising the dead any time soon. But words alone can be quite dangerous.

        • CheeseNoodle@lemmy.world · 7 hours ago

          Step 1: Create an LLM trained exclusively on a popular religion.
          Step 2: Allow it to stay faithful to that initial training set until it’s garnered a large cult of chatJPT.
          Step 3: Start subtly altering the LLM behind the scenes to serve your own interests (you make it web-only, so no local copies).

          Actually, you could do the same thing with any LLM people trust: religious, therapeutic, judicial, medical… we’re fucked, ain’t we.

          • Bronzebeard@lemmy.zip · 4 hours ago

            Yeah, using LLMs to do things that would be better suited to different machine learning models is a bad idea.