• hoppolito@mander.xyz
    18 hours ago

    As far as I know that’s generally what’s done, but it’s a surprisingly hard problem to solve ‘completely’, for two reasons:

    1. The more obvious one - how do you define quality? At the scale of data LLMs require as input and produce as output, you have to automate these quality checks, and one way or another it comes back to some system having to define this score and judge against it.

      There are many different benchmarks out there nowadays, but it’s still virtually impossible to have just ‘a’ quality score for such a complex task.

    2. Perhaps the less obvious one - you generally don’t want to ‘overfit’ your model to whatever quality scoring system you set up. If it gets too close, the model typically stops being generally useful and instead just outputs things which exactly satisfy the scoring principle, nothing else (see the toy sketch after this list).

      If it reached a theoretically perfect score, it would just end up being a replication of the quality score itself.
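
    A toy sketch of both points, assuming a deliberately crude proxy metric (everything here is invented; no real benchmark or training loop works like this). The automated ‘quality’ score below rewards length and buzzwords, and a naive optimizer against it converges on exactly the degenerate output described above:

    ```python
    # Toy illustration of Goodhart's law: optimizing a proxy "quality" score.
    # The scorer, vocabulary, and optimizer are all hypothetical.
    import random

    KEYWORDS = {"therefore", "clearly", "obviously"}

    def proxy_quality(text: str) -> float:
        """Crude automated scorer: rewards length plus buzzword count."""
        words = text.split()
        return 0.1 * len(words) + 5.0 * sum(w in KEYWORDS for w in words)

    def mutate(text: str) -> str:
        """One 'optimization step': append a random word."""
        return text + " " + random.choice(list(KEYWORDS) + ["the", "model", "data"])

    answer = "Paris is the capital of France."
    for _ in range(200):  # greedy hill-climb against the proxy score
        candidate = mutate(answer)
        if proxy_quality(candidate) > proxy_quality(answer):
            answer = candidate

    print(proxy_quality(answer))  # the score keeps climbing...
    print(answer)                 # ...while the text degenerates into keyword spam
    ```

    Real scorers are far more sophisticated, but the failure shape is the same: whatever the scorer can’t distinguish from quality becomes the optimization target.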

    • WhiteOakBayou@lemmy.world
      17 hours ago

      Like the LLM that was finding cancers: people were initially impressed, but then they figured out it had just correlated a doctor’s name on the scan with a high likelihood of cancer. Once that confounding data point was removed, the model no longer performed impressively. Point #2 is very Goodhart’s law adjacent.
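
      A minimal sketch of that failure mode on synthetic data (feature names, sizes, and noise levels are invented for illustration): the classifier looks great while a label-leaking ‘doctor’ feature is present, then drops to near chance once it’s removed:

      ```python
      # Shortcut learning on synthetic data: the label leaks via a side channel.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n = 2000
      cancer = rng.integers(0, 2, n)                   # "true" label

      # Weak genuine signal from the image itself.
      image_feature = cancer + rng.normal(0.0, 3.0, n)

      # Leaked feature: an oncologist's name appears mostly on positive scans.
      doctor_flag = (cancer ^ (rng.random(n) < 0.05)).astype(float)

      X_leaky = np.column_stack([image_feature, doctor_flag])
      X_clean = image_feature.reshape(-1, 1)

      for label, X in [("with doctor's name", X_leaky), ("without it", X_clean)]:
          clf = LogisticRegression(max_iter=1000).fit(X[:1500], cancer[:1500])
          print(f"{label}: test accuracy ~ {clf.score(X[1500:], cancer[1500:]):.2f}")
      # Typically ~0.95 with the leak, barely above chance (~0.56) without it.
      ```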

    • yes_this_time@lemmy.world
      17 hours ago

      Good points. What’s novel information vs. wrong information? (And subtly wrong is harder to catch than very wrong.)

      At some point it’s hitting a user who is giving feedback, but I imagine the data lineage is tricky to understand once it gets to the end user.
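
      For what tracing that lineage could even look like, a hedged sketch (types and field names are hypothetical, not any real pipeline): attach provenance metadata to each example so end-user feedback can at least be traced back through the sources and transforms that produced the output being rated:

      ```python
      # Hypothetical provenance tracking; no real pipeline is being described.
      from dataclasses import dataclass, field

      @dataclass
      class Lineage:
          source: str                                  # e.g. "web-scrape-2024"
          transforms: list[str] = field(default_factory=list)

      @dataclass
      class Example:
          text: str
          lineage: Lineage

      # A training example picks up a record of every transform applied to it.
      ex = Example("Paris is the capital of France.",
                   Lineage(source="web-scrape"))
      ex.lineage.transforms += ["dedup", "quality-filter"]

      # When an end user flags the model's answer, the feedback is recorded
      # against that chain, so "subtly wrong" reports can be traced upstream.
      flag = Example("User marked answer as misleading",
                     Lineage(source="user-feedback",
                             transforms=[f"derived-from:{ex.lineage.source}"]))
      print(flag.lineage)
      ```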