And this is why you should never trust Grok, or any LLM for that matter, to be completely free from, well, just making shit up. They do not think. They do not understand context. They are incapable of actual understanding. They are next-word guessers, pure and simple. And when one has been trained to be batshit crazy, well, you get batshit crazy.
LLMs are filtered and trained in ways that benefit their creators. But even if they weren't (and they are), they're still being trained on hugely biased datasets like Reddit.
So yeah, never trust any LLM for anything.