This was the weirdest thing I’ve seen today. These are only the ones I’ve spotted.

funnily enough, these bots are also replying to an obvious repost from another bot account. It’s at the top right now! Beautiful

https://www.reddit.com/r/goodnews/comments/1p8dt2a/_/

tip-offs:

  1. consuming so much AI content has led to me being able to see subtle patterns
  2. They’re all saying “exactly” and repeating the same thing
  3. their usernames are similar, flower/nature related, two words, no profile pictures
  4. All of their profiles have the exact same comment format: agreement, then a summary
  5. and they all have porn on their profile. oh

edit: tf?

    • Xylight@feddit.online (OP) · 1 day ago

      No, when it comes to LLMs there are hardly any “dead giveaways” now. You have to learn to recognize the patterns.

      Omitting the final punctuation is quite a common thing people do; in fact, you did in your comment. It’s probably just a part of the system prompt.

    • grepe@lemmy.world · 1 day ago

      i don’t think i (or perhaps anyone) can recognize any single particular comment as being llm generated… but when the bots come in force it is still really easy. basically it boils down to this: many replies keep reiterating the same exact points in slightly different ways with the same exact keywords. if you would use chatgpt to summarize each response you’d get basically the same thing from all bot replies.
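something like this is what i mean, as a crude sketch (the example replies and the 0.5 cutoff are made up), just scoring word overlap between comments:

```python
# crude sketch: flag replies in a thread that share too many keywords.
# the example replies and the 0.5 cutoff are made up for illustration.

def keyword_set(text):
    """lowercase words longer than 3 characters, punctuation stripped"""
    words = (w.strip(".,!?") for w in text.lower().split())
    return {w for w in words if len(w) > 3}

def overlap(a, b):
    """jaccard similarity between the keyword sets of two comments"""
    ka, kb = keyword_set(a), keyword_set(b)
    return len(ka & kb) / len(ka | kb)

replies = [
    "Exactly, this shows real progress on renewable energy adoption",
    "Exactly right, renewable energy adoption is real progress",
    "my cat knocked a glass off the table this morning",
]

print(overlap(replies[0], replies[1]) > 0.5)  # True: bot-like overlap
print(overlap(replies[0], replies[2]) > 0.5)  # False: unrelated reply
```

run pairwise over a whole comment section and the swarm clusters together while normal replies stay spread out.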

      • jgandert@lemmy.zip · 1 day ago

        I agree. I believe it’s difficult for me—or anyone else—to pinpoint a specific comment as being generated by an LLM. However, when numerous bots are involved, the pattern becomes clear. Essentially, many responses end up repeating the same points, just phrased differently and using the same keywords. If you were to use ChatGPT to summarize each response, you’d essentially get a very similar outcome from all the bot-generated replies.

    • SGforce@lemmy.ca · 1 day ago

      Think it’s probably a bug in the script they’re running. It’s cleaning one character too many off the end.
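For illustration, the kind of off-by-one I mean (the `</s>` marker and both functions are hypothetical):

```python
EOT = "</s>"  # hypothetical end-of-sequence marker left in the raw output

def strip_eot_buggy(text):
    # off by one: slices len(EOT) + 1 characters, so the sentence's
    # final punctuation goes along with the marker
    return text[:-(len(EOT) + 1)]

def strip_eot_fixed(text):
    # only remove the marker, and only when it is actually there
    return text[:-len(EOT)] if text.endswith(EOT) else text

raw = "Exactly, this is great news.</s>"
print(strip_eot_buggy(raw))  # Exactly, this is great news
print(strip_eot_fixed(raw))  # Exactly, this is great news.
```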

        • SGforce@lemmy.ca · 15 hours ago

          LLMs ramble unless you stop them forcefully. That can lead to partial sentences that need to be cleaned up.
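A sketch of that cleanup (assuming '.', '!' or '?' ends a sentence; the example text is made up), cutting a hard-truncated generation back to the last complete sentence:

```python
def drop_partial_sentence(text):
    # find the last sentence-ending punctuation and cut there;
    # if there is none, leave the text untouched
    last = max(text.rfind(ch) for ch in ".!?")
    return text[:last + 1] if last != -1 else text

out = "This is great news. It shows real progress. And it also demon"
print(drop_partial_sentence(out))  # This is great news. It shows real progress.
```

Naive, sure (a trailing "etc." would count as a sentence end), but that's the quick-script level of effort we're talking about.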

          • PeriodicallyPedantic@lemmy.ca · 13 hours ago

            That’s not a problem inherent to LLMs; people building things with LLMs don’t normally need to account for this.

            I can’t say it never happens, but if you’re using an appropriately trained LLM with an appropriate system prompt, this concern should be uncommon enough that trying to compensate for it with code will be more likely to introduce problems than just leaving it.

            • SGforce@lemmy.ca · 12 hours ago

              You just explained how it is a problem inherent to most LLMs. Most spammers aren’t able or willing to train a model.

              Every large hosted LLM drones on and on. It helps them land on the correct answer more often. And they always return to the mean of their training even with prompting. Try telling a model not to reply with “Sure thing!” or some other shit and it’ll do it anyway. Far easier to just cut that shit out.
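Cutting it out could look like this (the opener list is made up; a real one just grows as new filler shows up):

```python
import re

# hypothetical post-processing: the model keeps opening with canned
# filler despite the prompt, so strip known openers from the reply
CANNED_OPENER = re.compile(r"^(sure thing!|great question!|absolutely!)\s*",
                           re.IGNORECASE)

def strip_opener(text):
    return CANNED_OPENER.sub("", text, count=1)

print(strip_opener("Sure thing! Here's the summary."))  # Here's the summary.
print(strip_opener("No filler here."))                  # No filler here.
```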

              • PeriodicallyPedantic@lemmy.ca · 11 hours ago

                There are lots of (relatively) high quality free models they can host themselves, or use hosted models. They don’t need to train their own models or use models without applicable training data.

                If your bar for “droning on and on” is them saying “ok” then sure I guess? But that seems like a crazy bar.
                What system prompt are you using, when you’re getting responses that “drone on and on”?

                Don’t get me wrong, I hate AI.
                But I also worked on LLM integrations for a year, so I had to develop a reasonable grasp of their capabilities and use, beyond just using the chat apps, even if I wouldn’t call myself an expert.