If they all devoutly followed the same religion, and defined their whole worldview accordingly, then an AI could be trained on that religion. And it would never stray from orthodoxy.
…and now I want an LLM trained on the Bible just to dunk on “Christians” and their thinly veiled bigotry by quoting actual Jesus at them.
Oh God, we already have a problem with people believing ChatGPT is giving them divine visions and prophecies. The last thing we need is LLMs specifically trained on holy texts! You’ll have a tenth of the population believing in their new digital prophet.
Jesus Fucking Christ. We’re going to have to go full Butlerian Jihad here, aren’t we?
Honestly, if people actually followed the New Testament part of the Bible, it would be an improvement, even with the awful stuff in it.
Yeah, except we’ll have thousands of nutjobs running around. Each running their own instance of your New Testament LLM. Each thoroughly convinced they are the messenger of the new digital messiah. According to the text of the Bible, many people walked away from their lives and abandoned everything to follow Him. Considering what we observe in modern cults, that doesn’t seem an unlikely historical reality.
An LLM trained on the words of Jesus won’t just tell people to live good lives. It will be telling people, "give everything up and follow Me (the computer)." And if it were a good enough LLM, it would be pretty persuasive for a good number of people. The one saving grace is that JesusGPT isn’t going to be healing the sick, walking on water, or raising the dead any time soon. But words alone can be quite dangerous.
I don’t really see how this is worse than the Christofascism we have now.
Step 1: Create an LLM trained exclusively on a popular religion.
Step 2: Allow it to be faithful to that initial training set until it’s garnered a large cult of chatJPT.
Step 3: Start subtly altering the LLM (make it web-only so there are no local copies) behind the scenes to serve your own interests.
Actually, you could do the same thing with any LLM people trust, religious, therapeutic, judicial, medical… we’re fucked, ain’t we.
Yeah, using LLMs to do things that would be better suited to different machine learning models is a bad idea.
NGL, this is actually something I used while cataloging my notes on the influence of Christianity on western esoteric mystery traditions. I mostly just used it to organize and format things, though. Most of the actual data came from outside sources; for instance, it couldn’t keep the translation correct when pulling up specific verses.
best I could find