

… a flow that can reasonably check itself for errors/hallucinations? There’s no fundamental reason why it couldn’t.
Turing Completeness maybe?
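For what it's worth, the kind of self-checking flow being described is easy enough to sketch: a draft pass followed by a critique-and-retry loop. Whether the checker is actually any more reliable than the drafter is the real question. `call_llm` below is a hypothetical placeholder for whatever model API you'd actually use, not a real library call.

```python
# Minimal sketch of a self-checking flow: one pass drafts an answer, a second
# pass critiques it, and the draft is only accepted once the critic finds no
# problems. `call_llm` is a hypothetical stand-in, not a real API.

def call_llm(prompt: str) -> str:
    """Hypothetical: send `prompt` to a language model and return its reply."""
    raise NotImplementedError("wire this up to your model of choice")

def self_checked_answer(question: str, max_retries: int = 2) -> str:
    draft = call_llm(f"Answer the following question:\n{question}")
    for _ in range(max_retries):
        critique = call_llm(
            "List any factual errors or unsupported claims in this answer, "
            f"or reply NONE.\n\nQuestion: {question}\nAnswer: {draft}"
        )
        if critique.strip().upper().startswith("NONE"):
            return draft
        # Revise the draft using the critique, then check again.
        draft = call_llm(
            f"Rewrite the answer to fix these issues:\n{critique}\n\n"
            f"Question: {question}\nOriginal answer: {draft}"
        )
    return draft  # best effort; the check is only as good as the checker
```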
Useless for us, but not for them. They want us to use them like personalised confidante-bots so they can harvest our most intimate data.
Right, and that goes for the things it gets “correct” as well. I think “bullshitting” can give the wrong impression that LLMs are somehow aware of when they don’t know something and can choose to switch on some sort of “bullshitting mode”, when really it’s all just statistical guesswork (plus some preprogrammed algorithms, probably).
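To make the “statistical guesswork” point concrete, here's a toy sketch of what one decoding step amounts to: scores over candidate next tokens get turned into a probability distribution and one token is sampled. The tokens and scores below are made up for illustration; real models do this over tens of thousands of tokens at every step, by the same mechanism whether the result happens to be true or not.

```python
# Toy illustration of "statistical guesswork": the model turns scores (logits)
# over candidate next tokens into probabilities and samples one. There is no
# separate "knowing" vs "bullshitting" mode; a confident-looking wrong token
# comes out of exactly the same mechanism as a correct one.
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    # Softmax over temperature-scaled logits -> probability distribution.
    scaled = {tok: score / temperature for tok, score in logits.items()}
    max_s = max(scaled.values())                       # for numerical stability
    exps = {tok: math.exp(s - max_s) for tok, s in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    # Sample one token according to those probabilities.
    r, acc = random.random(), 0.0
    for tok, p in probs.items():
        acc += p
        if r <= acc:
            return tok
    return tok  # fallback for floating-point rounding

# Made-up example: plausible-but-wrong continuations still get probability mass.
logits = {"Paris": 3.1, "Lyon": 1.2, "Berlin": 0.4}
print(sample_next_token(logits))
```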