This was the weirdest thing I’ve seen today. These are only the ones I’ve spotted.
funnily enough, these bots are also replying to an obvious repost from another bot account. It’s at the top right now! Beautiful
https://www.reddit.com/r/goodnews/comments/1p8dt2a/_/
tipping points:
- consuming so much AI content has led to me being able to see subtle patterns
- They’re all saying “exactly” and repeating the same thing
- their usernames are similar, flower/nature related, two words, no profile pictures
- All of their profiles have the exact same comment format: agreement, then a summary
- and they all have porn on their profile. oh
edit: tf? 


Do LLMs always omit the period on their last sentence? Seems like that would be a dead giveaway
No, when it comes to LLMs there are hardly any “dead giveaways” now. You have to learn to recognize the patterns.
Omitting the final punctuation is quite a common thing people do; in fact, you did it in your comment. It’s probably just a part of the system prompt.
Whoosh
Yeah, an LLM would probably not omit the final punctuation unless specifically prompted to, or unless it is given a ton of examples of comments it should mimic in the prompt.
Which it probably will have because it’s been trained on Reddit comments.
i don’t think i (or perhaps anyone) can recognize any single particular comment as being llm generated… but when the bots come in force it is still really easy. basically it boils down to this: many replies keep reiterating the same exact points in slightly different ways with the same exact keywords. if you used chatgpt to summarize each response you’d get basically the same thing from all the bot replies.
I agree. I believe it’s difficult for me—or anyone else—to pinpoint a specific comment as being generated by an LLM. However, when numerous bots are involved, the pattern becomes clear. Essentially, many responses end up repeating the same points, just phrased differently and using the same keywords. If you were to use ChatGPT to summarize each response, you’d essentially get a very similar outcome from all the bot-generated replies.
thank you! we need a slightly longer chain or more parallel replies to drive the point home… anyone else?
I don’t know.
Think it’s probably a bug in the script they’re running. It’s cleaning one character too many off the end.
Vibe coding.
List ends are exclusive, a new programmer could easily make that mistake.
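For what it’s worth, here’s what that mistake looks like in Python (an illustrative snippet, not the actual bot script): slice ends are exclusive, so trimming “up to the last character” with an exclusive end index eats that character.

```python
text = "They all end with a period."

# An inexperienced programmer might write this intending to keep
# everything through the last character. Slice ends are exclusive,
# so the final "." is silently dropped.
trimmed = text[0:len(text) - 1]

print(trimmed)  # "They all end with a period"
```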
Why would they be cleaning characters off the end in the first place?
LLMs ramble unless you stop them forcefully. That can lead to partial sentences that need to be cleaned up.
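Something like this, maybe (a hypothetical cleanup helper, not anything from a real bot script): trim a cut-off reply back to its last complete sentence. Write the slice with an exclusive end and you lose the period too.

```python
def trim_to_last_sentence(reply: str) -> str:
    """Drop a trailing partial sentence from a truncated reply.

    Hypothetical helper for illustration only.
    """
    # Index of the last sentence-ending punctuation mark, or -1 if none.
    cut = max(reply.rfind("."), reply.rfind("!"), reply.rfind("?"))
    if cut == -1:
        return reply  # no complete sentence; leave it alone
    # Correct: include the punctuation itself. The buggy variant is
    # reply[:cut], whose exclusive end drops the final period.
    return reply[:cut + 1]

print(trim_to_last_sentence("Exactly. This sums it up well. And furth"))
# -> "Exactly. This sums it up well."
```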
That’s not a problem inherent to LLMs; people building things with LLMs don’t normally need to account for this.
I can’t say it never happens, but if you’re using an appropriately trained LLM with an appropriate system prompt, this concern should be uncommon enough that trying to compensate for it with code will be more likely to introduce problems than just leaving it.
You just explained how it is a problem inherent to most LLMs. Most spammers aren’t able or willing to train a model.
Every large hosted LLM drones on and on. It helps them land on the correct answer more often. And they always return to the mean of their training even with prompting. Try telling a model not to reply with “Sure thing!” or some other shit and it’ll do it anyway. Far easier to just cut that shit out.
There are lots of (relatively) high quality free models they can host themselves, or use hosted models. They don’t need to train their own models or use models without applicable training data.
If your bar for “droning on and on” is them saying “ok” then sure I guess? But that seems like a crazy bar.
What system prompt are you using, when you’re getting responses that “drone on and on”?
Don’t get me wrong, I hate AI.
But I also worked on LLM integrations for a year, so I had to develop a reasonable grasp of their capabilities and use, beyond just using the chat apps, even if I wouldn’t call myself an expert