You’re better off asking one human to do the same task ten times. Humans get better and faster at a task as they go along. A human will always be slower than an LLM, but an LLM gets more and more likely to veer off on some flight of fancy, further and further from reality, the more it says to you. The chances of it staying factual over a long run are really low.
It’s a born bullshitter. It knows a little about a lot, but it has no clue what’s real and what’s made up, or it doesn’t care.
If you want some text quickly that sounds right, but you genuinely don’t care whether it actually is right, go for it, use an LLM. It’ll be great at that.
That looks better. Even with a fair coin, 10 heads in a row only happens about 1 time in 1,024, roughly 0.1%.
And if you are feeding the output back into a new instance of a model then the quality is highly likely to degrade.
Whereas if you ask a human to do the same thing ten times, the probability that they get all ten right is astronomically higher than 0.0000059049.
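For what it’s worth, that 0.0000059049 figure is exactly 0.3^10, so it seems to assume a 30% per-task success rate compounded over ten independent attempts (independence itself being an assumption). A quick sanity check:

```python
# Sanity-checking the probabilities in this thread.
# Assumption: the 0.0000059049 figure comes from a 30% per-task
# success rate over ten independent attempts.

per_task_success = 0.3
all_ten_right = per_task_success ** 10
print(f"P(all 10 right at 30% each): {all_ten_right:.10f}")

# For comparison, the fair-coin case from earlier in the thread:
ten_heads = 0.5 ** 10
print(f"P(10 heads with a fair coin): {ten_heads:.6f} (1 in {1 / ten_heads:.0f})")
```

So ten heads in a row is rare but happens about once per thousand tries, while the 30%-per-task chain is roughly 165 times less likely than that.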
Dunno. Ask 10 humans at random to do a task and probably at least one will do it better than an AI. Just not as fast.