• pelespirit@sh.itjust.works · 12 hours ago

      Good point, but wouldn’t it be time-sensitive? Meaning, it would give more weight to recent events? I’m going with the “Kirk is always right” theory.

      • neukenindekeuken@sh.itjust.works · 11 hours ago

        You can set up an MCP server in front of the LLM so that it can reach out to external APIs, like news sites/feeds/etc…

        But the majority of its behavior is going to come from its original training set, which is likely 12+ months old at this point. The pipeline to generate and refine a good new AI model is so long that your data sets are constantly out of date.

        An MCP server will only get you so far.
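
        To make the idea concrete, here is a minimal stdlib-only sketch of what an MCP-style tool server does conceptually: it advertises tools, and dispatches the model’s tool-call requests out to external APIs (like the news feeds mentioned above). This is an illustration, not the actual MCP SDK — the tool name `get_headlines` and the JSON shape are hypothetical.

        ```python
        import json
        from typing import Callable

        # Registry of tools the LLM is allowed to call.
        TOOLS: dict[str, Callable[[dict], dict]] = {}

        def tool(name: str):
            """Decorator that registers a function as a callable tool."""
            def register(fn):
                TOOLS[name] = fn
                return fn
            return register

        @tool("get_headlines")
        def get_headlines(args: dict) -> dict:
            # A real server would hit a news API or RSS feed here;
            # this placeholder just echoes the requested topic.
            topic = args.get("topic", "world")
            return {"topic": topic,
                    "headlines": [f"placeholder headline about {topic}"]}

        def handle_request(raw: str) -> str:
            """Dispatch one JSON tool-call request from the model."""
            req = json.loads(raw)
            fn = TOOLS.get(req["tool"])
            if fn is None:
                return json.dumps({"error": f"unknown tool {req['tool']!r}"})
            return json.dumps({"result": fn(req.get("args", {}))})

        # The model emits a tool call; the server returns fresh data the
        # model never saw in training.
        print(handle_request('{"tool": "get_headlines", "args": {"topic": "tech"}}'))
        ```

        The key point stands, though: the server can only bolt fresh facts onto the model’s stale worldview at query time.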

        • pelespirit@sh.itjust.works · edited · 11 hours ago

          Duckduckgo’s AI search seems to slightly agree with you (it’s citing Futurism and Engadget):

          Grok, the AI chatbot, initially spread misinformation about Charlie Kirk’s death by claiming he survived the shooting and that videos of the incident were fake. This confusion stemmed from the chatbot’s inability to accurately process breaking news and conflicting information, leading to a series of incorrect statements before eventually acknowledging Kirk’s death.

          That means AI really, really sucks and can be manipulated easily.

          • Cyberspark@sh.itjust.works · 3 hours ago

            I think it’s well known at this point that Grok in particular has been designed to be easy to manipulate: it’s deliberately kept in the dark and fed only select information so that Musk can make it say what he wants.

          • Hubi@feddit.org · 11 hours ago

            Kinda ironic that you posted a quote from Duckduckgo’s AI to make a point that AI sucks and is easy to manipulate.

            • ameancow@lemmy.world · 11 hours ago

              Our entire future of internet use from now until the next major technological upheaval is going to consist ENTIRELY of going between different shitty AI models to try to get enough coherent answers that we can possibly, maybe figure out some shred of truth.

              While the vast bulk of humanity just accepts whatever their most convenient chat model tells them.

              • some_kind_of_guy@lemmy.world · edited · 7 hours ago

                My fear is most will forget what truth looks like during this stage, and how to look for it, such that there won’t really be a next stage. Those of us who do remember will be pushed to the margins and hunted, or driven mad.

            • pelespirit@sh.itjust.works · 10 hours ago

              I guess you didn’t understand the nuance. Duckduckgo was requoting Engadget and Futurism. It’s a loop of information that is controlled by media outlets chosen by the person running the bot.

              • BenevolentOne@infosec.pub · 7 hours ago

                This has been the case since it was possible to pay someone to run from village to village shouting things… It’s just more now.

                Welcome to the party, beer’s over there.

          • Zetta@mander.xyz · 11 hours ago

            Yes, LLMs, or what people call AI, are absolutely easy to manipulate. Just the way you phrase your question can steer one to answer in a particular way. I haven’t been on Twitter in a long time, but I hopped on yesterday and today to check out all of the Kirk memes.

            I saw so many comments from people, both happy and upset about Kirk dying, asking questions in manipulative ways (to the LLM) to try and get the response from Grok they wanted.

            LLMs are indeed horrible for live or recent events, and more importantly horrible for anything that is super important to not get wrong.

            Don’t get me wrong, I personally find LLMs useful, and I occasionally use open-source models for tasks they’re better at; for me that typically means reformatting or compiling shorter notes from documents. Nothing super critical.

          • lime!@feddit.nu · 10 hours ago

            DDG doesn’t run its own LLM; they’re just a frontend to ChatGPT that (allegedly) strips out all the tracking.