• Knock_Knock_Lemmy_In@lemmy.world · 15 hours ago

    That looks better. Even with a fair coin, 10 heads in a row is almost impossible.

    And if you are feeding the output back into a new instance of a model, the quality is highly likely to degrade.
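    For scale, the fair-coin comparison works out like this (a quick sketch; the helper name is mine, nothing from the thread):

    ```python
    # Chance of n independent successes when each has probability p.
    def all_successes(p: float, n: int) -> float:
        return p ** n

    # A fair coin: ten heads in a row.
    print(all_successes(0.5, 10))  # 1/1024, i.e. slightly under 0.1%
    ```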

    • Log in | Sign up@lemmy.world · 4 hours ago

      Whereas if you ask a human to do the same thing ten times, the probability that they get all ten right is astronomically higher than 0.0000059049.
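      The quoted figure is exactly 0.3 ** 10, so it appears to assume a 30% chance of success on each of ten independent attempts (the 30% is my reading of the number, not stated in the thread):

      ```python
      # Ten independent attempts, each with an (assumed) 30% success rate.
      p_per_attempt = 0.3
      p_all_ten = p_per_attempt ** 10
      print(f"{p_all_ten:.10f}")  # 0.0000059049
      ```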

        • Log in | Sign up@lemmy.world · 2 hours ago

          You’re better off asking one human to do the same task ten times. Humans get better and faster at things as they go along. A human is always slower than an LLM, but an LLM gets more and more likely to veer off on some flight of fancy, further and further from reality, the more it says to you. The chances of it staying factual in the long term are really low.

          It’s a born bullshitter. It knows a little about a lot, but it has no clue what’s real and what’s made up, or it doesn’t care.

          If you want some text quickly that sounds right, and you genuinely don’t care whether it is right, go for it, use an LLM. It’ll be great at that.