This was the weirdest thing I’ve seen today. These are only the ones I’ve spotted.
Funnily enough, these bots are also replying to an obvious repost from another bot account. It’s at the top right now! Beautiful.
https://www.reddit.com/r/goodnews/comments/1p8dt2a/_/
tip-offs:
- consuming so much AI content has led to me being able to see subtle patterns
- they’re all saying “exactly” and repeating the same thing
- their usernames are similar, flower/nature related, two words, no profile pictures
- all of their profiles have comments in the exact same format: agreement, then summary
- and they all have porn on their profile. oh
edit: tf? 


Think it’s probably a bug in the script they’re running. It’s cleaning one character too many off the end.
Why would they be cleaning characters off the end in the first place?
LLMs ramble unless you stop them forcefully. That can lead to partial sentences that need to be cleaned up.
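To illustrate the kind of cleanup meant here (a hypothetical sketch in Python, not anyone's actual bot code): if generation gets cut off by a token limit mid-sentence, a crude fix is to truncate at the last sentence terminator.

```python
def trim_partial_sentence(text: str) -> str:
    # If the model was cut off mid-sentence (e.g. by a max-token
    # limit), keep only up to the last sentence-ending punctuation.
    end = max(text.rfind("."), text.rfind("!"), text.rfind("?"))
    return text[:end + 1] if end != -1 else text

print(trim_partial_sentence("Great point. I totally agr"))  # Great point.
```

Note the `end + 1` here: slice ends are exclusive, which is exactly the spot where an off-by-one would shave a character off every reply.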
That’s not a problem inherent to LLMs; people building things with LLMs don’t normally need to account for this.
I can’t say it never happens, but if you’re using an appropriately trained LLM with an appropriate system prompt, this concern should be uncommon enough that trying to compensate for it with code will be more likely to introduce problems than just leaving it.
You just explained how it is a problem inherent to most LLMs. Most spammers aren’t able or willing to train a model.
Every large hosted LLM drones on and on. It helps them land on the correct answer more often. And they always return to the mean of their training even with prompting. Try telling a model not to reply with “Sure thing!” or some other shit and it’ll do it anyway. Far easier to just cut that shit out.
There are lots of (relatively) high quality free models they can host themselves, or use hosted models. They don’t need to train their own models or use models without applicable training data.
If your bar for “droning on and on” is them saying “ok” then sure I guess? But that seems like a crazy bar.
What system prompt are you using, when you’re getting responses that “drone on and on”?
Don’t get me wrong, I hate AI.
But I also worked on LLM integrations for a year, so I had to develop a reasonable grasp of their capabilities and use, beyond just using the chat apps, even if I wouldn’t call myself an expert.
Vibe coding.
List ends are exclusive; a new programmer could easily make that mistake.
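A minimal sketch of that off-by-one (Python assumed; these functions are illustrative, not the actual bot script): `text[:n]` keeps indices 0 through n-1, so someone assuming the end index is inclusive trims one extra character.

```python
def drop_last_char_buggy(text: str) -> str:
    # Intends to drop only the final character, but slice ends are
    # exclusive: text[:len(text) - 2] keeps indices 0..len-3,
    # so it drops TWO characters.
    return text[:len(text) - 2]

def drop_last_char(text: str) -> str:
    # Correct: text[:-1] (equivalently text[:len(text) - 1]).
    return text[:-1]

print(drop_last_char_buggy("Exactly!"))  # Exactl   <- one too many
print(drop_last_char("Exactly!"))        # Exactly
```

Which would explain replies that are all missing exactly one trailing character.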