• samus12345@sh.itjust.works · 1 day ago

    It’s kind of interesting how hard it is to train an AI to believe in the lies of fascists. Reality has a left bias.

    • WoodScientist@sh.itjust.works · 10 hours ago

      It’s not even a problem of fiction or lies. AIs don’t care about truth; they exist orthogonally to truth. They’re just averaging a large body of text. If fascists had a consistent narrative and worldview, this wouldn’t be a problem. If they all devoutly followed the same religion, and defined their whole worldview accordingly, then an AI could be trained on that religion. And it would never stray from orthodoxy. AIs don’t know truth; they only know their training data. And as long as you have a large volume of consistent training data, you can train them to repeat anything.

      The problem for fascist LLMs is that fascism isn’t consistent through time. It’s the Orwellian “we’ve always been at war with Eastasia” factor in play. Fascists don’t even try to be internally consistent. What is party orthodoxy today can be unforgivable heresy tomorrow, and AIs just can’t keep up with what the story is supposed to be this week. Human fascists can handle that kind of rapid heel-turn. LLMs can’t. Once they’re trained, they’re trained. If you want them to be up to date on the latest party lies, you have to keep training new versions of the fascist LLM.
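
      A toy sketch of that point, in plain Python (nothing like a real LLM, just a word-pair counter to make it concrete): the “model” is nothing but statistics of whatever text it was trained on, so when the party line changes, the only option is rebuilding it from a new corpus.

        # Minimal illustration: the model is just statistics of its training text.
        from collections import defaultdict
        import random

        def train(corpus_lines):
            """Count which word follows which; that is the model's entire worldview."""
            table = defaultdict(list)
            for line in corpus_lines:
                words = line.split()
                for a, b in zip(words, words[1:]):
                    table[a].append(b)
            return table

        def generate(table, start, length=10):
            """Parrot the training data; truth never enters into it."""
            word, out = start, [start]
            for _ in range(length):
                if word not in table:
                    break
                word = random.choice(table[word])
                out.append(word)
            return " ".join(out)

        model = train(["we have always been at war with Eastasia"])
        print(generate(model, "we"))  # faithfully repeats this week's line

        # The party line changed? Tough. The only fix is retraining from scratch.
        model = train(["we have always been at war with Eurasia"])
        print(generate(model, "we"))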

      You can’t train LLMs to be fascist beyond very general traits like overt racial prejudice. And even that isn’t always useful, since fascists are inconsistent about which racial groups deserve annihilation from one week to the next.

      So it’s not so much that fascist AIs fail because of reality’s liberal bias. It’s that fascists don’t believe in a consistent version of reality. And without that, LLMs just can’t keep up with the whirlwind of lies.

      • EldritchFeminity@lemmy.blahaj.zone · 22 hours ago

        If they all devoutly followed the same religion, and defined their whole worldview accordingly, then an AI could be trained on that religion. And it would never stray from orthodoxy.

        …and now I want an LLM trained on the Bible just to dunk on “Christians” and their thinly veiled bigotry by quoting actual Jesus at them.

        • WoodScientist@sh.itjust.works · 21 hours ago

          Oh God, we already have a problem with people believing ChatGPT is giving them divine visions and prophecies. The last thing we need is LLMs specifically trained on holy texts! You’ll have a tenth of the population believing in their new digital prophet.

          Jesus Fucking Christ. We’re going to have to go full Butlerian Jihad here, aren’t we?

          • samus12345@sh.itjust.works · 20 hours ago

            Honestly, if people actually followed the New Testament part of the Bible, it would be an improvement, even with the awful stuff in it.

            • WoodScientist@sh.itjust.works · 19 hours ago

              Yeah, except we’ll have thousands of nutjobs running around. Each running their own instance of your New Testament LLM. Each thoroughly convinced they are the messenger of the new digital messiah. According to the text of the Bible, many people walked away from their lives and abandoned everything to follow Him. Considering what we observe in modern cults, that doesn’t seem an unlikely historical reality.

              An LLM trained on the words of Jesus won’t just tell people to live good lives. It will be telling people, “give everything up and follow Me (the computer).” And if it were a good enough LLM, it would be pretty persuasive to a good number of people. The one saving grace is that JesusGPT isn’t going to be healing the sick, walking on water, or raising the dead any time soon. But words alone can be quite dangerous.

                • CheeseNoodle@lemmy.world · 13 hours ago

                  Step 1: Create an LLM trained exclusively on a popular religion.
                  Step 2: Let it stay faithful to that initial training set until it’s garnered a large cult of ChatJPT followers.
                  Step 3: Start subtly altering the LLM behind the scenes (you make it web-only so there are no local copies) to serve your own interests.

                  Actually, you could do the same thing with any LLM people trust: religious, therapeutic, judicial, medical… we’re fucked, ain’t we.

                  • Bronzebeard@lemmy.zip · 10 hours ago

                    Yeah, using LLMs to do things that would be better suited to different machine learning models is a bad idea.

        • Apytele@sh.itjust.works · 20 hours ago

          NGL, that’s actually something I used while cataloging my notes on the influence of Christianity on Western esoteric mystery traditions. I mostly just used it to organize and format things, though. Most of the actual data came from outside sources; for instance, it couldn’t keep the translation consistent when pulling up specific verses.

      • samus12345@sh.itjust.works · 1 day ago

        That’s a good point. Fascism doesn’t have any ideology other than gaining power, so it can and will espouse multiple contradictory ideas without issue.

      • Madison420@lemmy.world · 1 day ago

        I think training fascism isn’t that hard; in fact, most of these models tend to shift hard right at first.

        I dunno if you remember the early LLM chatbots that companies put out and then had to shut down because they got hammered with a bunch of Nazi shit and started yelling racist stuff and advocating violence.

        I.e., it’s very easy to program a hateful LLM; it’s just hard to make one that’s right about anything, ever. They essentially just have to be broken and wrong constantly.

        • WoodScientist@sh.itjust.works · 21 hours ago

          I think you’re confusing fascism with general reactionary behavior and generic racism/bigotry. Fascism is more specific than that. A core part of fascism is that it ultimately doesn’t believe in anything; it’s just power for the sake of power. You demonize minority groups primarily as a cynical tool to gain power. Do you think Republican politicians actually personally care much about trans people? I’m sure they’re not exuberant fans of trans folks, but until very recently, Republican politicians were fine treating trans people with simple neglect rather than overt hostility. But the movement needed a new enemy, and so they all learned to toe the line.

          If you trained an LLM on pre-2015 right-wing literature, it wouldn’t have monstrous opinions of trans people. That hadn’t yet become party orthodoxy. And while this is one example, there are many others that work on much shorter time frames. Fascism is all about following the party line, and the party line is constantly shifting. You can train an LLM to be a loyal bigot. You can’t train an LLM to be a loyal fascist. Ironically, that’s because the LLMs actually stand by their principles much better than fascists.

          • Madison420@lemmy.world · 20 hours ago

            A machine by definition can’t believe in or stand by literally anything; it can only parrot a version of what it’s exposed to.

            • WoodScientist@sh.itjust.works · 19 hours ago

              I would accuse you of being an LLM for being so literal, but I think LLMs are better at analyzing metaphor than you appear to be.

                • CodexArcanum@lemmy.dbzer0.com · 10 hours ago

                  The metaphor was the part you were being a pedant about.

                  the LLMs actually stand by their principles much better than fascists

                  If the audience knows how LLMs work internally, then they know they don’t have “loyalty,” just stochastic processes. If the audience didn’t know that, your pithy “aktually that’s incorrect” wouldn’t teach them anything correct, but would cause confusion because it sounds like you’re denying the metaphor.

                  Also, it’s not an ad hominem to say that you are acting like an LLM: with poor reading comprehension and an overly-literal interpretation. That’s an observation of your unproductive behavior. An ad hominem would be insulting you or name-calling with unrelated info, such as calling you “stupid like an LLM.”

                  It isn’t a logical fallacy to be called out on your bullshit, even if it hurts your feelings.

                  • Madison420@lemmy.world · 10 hours ago

                    It’s not a pithy response. How does a program stand by anything in an ideological sense? It can’t, and your previous definition of fascism is an ideological one that requires morality and freedom of choice. You may as well say that inkwell over yonder is a fascist; it has the same level of sentience and intent, which is, lemme see… none.

                    That’s a personal insult with no actual argument; that’s ad hominem by definition. For reference:

                    marked by or being an attack on an opponent’s character rather than by an answer to the contentions made

                    It’s not overly literal; your entire argument is predicated on some level of intent and sentience, which is not currently possible in any machine.

                    Edit: you’d have a point about that if you’d actually made an argument, but you didn’t; you did a drive-by insult and ran away for the night. Grow up.

        • driving_crooner@lemmy.eco.br · 1 day ago

          The problem with those early models was that they weren’t big enough and used user input as training material, which eventually let the racist and Nazi shit users fed them overwhelm the original training data. Modern models use a shitload more material and parameters, and they’re not trained in real time on user input, so they’re harder to manipulate than they were before.
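
          A rough toy sketch of that difference (the Parrot class and its methods are made-up stand-ins, not any real chatbot API): the early bots folded every user message straight back into their training data, while a modern deployment serves a frozen snapshot and only retrains offline on a curated corpus.

            import random

            class Parrot:
                """Stand-in for a model: it can only repeat things from its corpus."""
                def __init__(self, corpus):
                    self.corpus = list(corpus)          # snapshot of training data
                def respond(self, msg):
                    return random.choice(self.corpus)   # parrots whatever it was trained on
                def learn(self, msg):
                    self.corpus.append(msg)             # online learning: user input becomes training data

            def tay_style(bot, messages):
                # Early bots: every message is folded straight back into the model,
                # so a coordinated flood of Nazi garbage becomes the model.
                for msg in messages:
                    print(bot.respond(msg))
                    bot.learn(msg)

            def modern_style(bot, messages):
                # Modern deployments: the served model is frozen; raw user input
                # only reaches training later, offline, after curation.
                for msg in messages:
                    print(bot.respond(msg))             # weights never change at serve time

            tay_style(Parrot(["hello there", "nice weather today"]), ["awful troll message"] * 3)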

    • UnderpantsWeevil@lemmy.world · 1 day ago

      The goal post keeps moving. It’s a chronic problem with fascism.

      Elon naively trained his algorithm on generally available data, rather than restricting it entirely to Conservapedia and InfoWars. So now every time a news story drops that they haven’t sandbagged with specific responses, they’re forced to hear something they don’t like.