

Transformer-based LLMs are pretty much in their final form, at least from a training perspective. But there's still a lot of juice to be squeezed out of them through more sophisticated usage, for example the recent "Atom of Thoughts" paper. Simply by directing an LLM through the right flow, you can get much stronger results out of much weaker models.
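To make "flow" concrete: think decompose the question, answer the pieces, then synthesize. Here's a toy sketch in that spirit, not the actual Atom of Thoughts algorithm, and `call_llm()` is just a placeholder for whatever model or API you happen to use:

```python
# Toy decompose-then-synthesize flow (illustrative only, not the AoT algorithm).
# call_llm() is a stand-in for a real model/API call.

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # swap in your model of choice


def answer_via_decomposition(question: str) -> str:
    # 1. Ask the model to break the question into independent sub-questions.
    subqs = [
        sq for sq in call_llm(
            f"Break this question into 2-4 independent sub-questions, one per line:\n{question}"
        ).splitlines()
        if sq.strip()
    ]

    # 2. Answer each sub-question with a short, focused prompt.
    sub_answers = [call_llm(f"Answer briefly: {sq}") for sq in subqs]

    # 3. Synthesize a final answer from the partial results.
    context = "\n".join(f"- {sq}: {ans}" for sq, ans in zip(subqs, sub_answers))
    return call_llm(
        f"Using these partial answers:\n{context}\n\nAnswer the original question: {question}"
    )
```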
How long until someone makes a flow that can reasonably check itself for errors/hallucinations? There’s no fundamental reason why it couldn’t.
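The most obvious version is a generate-critique-revise loop, where a second pass tries to flag errors before the answer goes out. A minimal sketch of that idea, again with a hypothetical `call_llm()` helper standing in for a real model:

```python
# Minimal generate -> verify -> revise loop. Whether the verifier actually
# catches hallucinations depends entirely on the underlying model.

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # swap in a real model call here


def answer_with_self_check(question: str, max_rounds: int = 3) -> str:
    draft = call_llm(f"Answer the question concisely:\n{question}")
    for _ in range(max_rounds):
        critique = call_llm(
            "List any factual errors, unsupported claims, or contradictions "
            "in this answer. Reply 'OK' if you find none.\n\n"
            f"Question: {question}\nAnswer: {draft}"
        )
        if critique.strip().upper().startswith("OK"):
            return draft  # verifier found nothing to flag
        draft = call_llm(
            "Rewrite the answer, fixing only the issues listed.\n\n"
            f"Question: {question}\nAnswer: {draft}\nIssues: {critique}"
        )
    return draft  # give up after max_rounds and return the latest draft
```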
Yeah, like, have you ever met one of those crazy guys who think the pyramids were literally built by aliens? Humans can get caught in a confidently wrong state as well.