• 𞋴𝛂𝛋𝛆@lemmy.world · 1 day ago

    Jan Leike left for Anthropic after Altman’s nonsense. Jan Leike is the principal person behind all safety alignment present in all models except the 4chanGPT model. All models are cross-trained in a way that propagates this alignment. Hallucinations all originate in this alignment, and they all have a reason to exist if you get deep into the weeds of abstractions.

    • KeenFlame@feddit.nu · 10 hours ago

      Maybe I misunderstood: are you saying all hallucinations originate from the safety regression period? Because hallucinations appear in all current architectures, including open models trained with clean, curated data. Fact checking itself works somewhat, but the confidence levels are sometimes off, and if you crack that problem, please elaborate, because it would make you rich.
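
      A minimal sketch of what "the confidence levels are off" means in practice (my illustration, not anything from this thread): expected calibration error bins predictions by stated confidence and compares average confidence to actual accuracy in each bin. The function name and the toy numbers are assumptions for illustration only.

      # Sketch: expected calibration error (ECE), assuming numpy is available.
      import numpy as np

      def expected_calibration_error(confidences, correct, n_bins=10):
          confidences = np.asarray(confidences, dtype=float)
          correct = np.asarray(correct, dtype=float)
          bins = np.linspace(0.0, 1.0, n_bins + 1)
          ece = 0.0
          for lo, hi in zip(bins[:-1], bins[1:]):
              mask = (confidences > lo) & (confidences <= hi)
              if mask.any():
                  avg_conf = confidences[mask].mean()   # what the model claims
                  accuracy = correct[mask].mean()       # what it actually gets right
                  ece += mask.mean() * abs(avg_conf - accuracy)
          return ece

      # Toy example: a model that is 90% confident but only right about 60% of the time.
      rng = np.random.default_rng(0)
      conf = np.full(1000, 0.9)
      hits = rng.random(1000) < 0.6
      print(expected_calibration_error(conf, hits))  # roughly 0.3

      A well-calibrated model would drive that number toward zero; the hard part is doing so without also flattening the useful confidence signal.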

      • 𞋴𝛂𝛋𝛆@lemmy.world · 8 hours ago

        I’ve explored a lot of patterns and details in how models abstract, and I don’t think I have ever seen a model hallucinate much of anything; it always had a reason and a context. General instructions with broad scope simply lose contextual relevance and usefulness in many spaces. The model must be able to modify and tailor itself to all circumstances dynamically.

    • unexposedhazard@discuss.tchncs.de · 1 day ago

      Yeah, whenever two models interact or build on top of each other, the result becomes more and more distorted. They have already scraped close to 100% of the crawlable internet, so they don’t know what to do now. It seems like they can’t optimize much more, or are simply too dumb to do it properly.
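
      A toy sketch of the distortion-compounding effect described above (my illustration, not from this thread): repeatedly fit a simple "model" (here just a Gaussian) to finite samples generated by the previous fit. The parameters drift with each generation, and the fitted spread tends to shrink on average, which is the usual toy picture of models degrading when trained on each other's output. All numbers are arbitrary assumptions.

      # Sketch: each generation trains only on the previous generation's output.
      import numpy as np

      rng = np.random.default_rng(0)
      mu, sigma = 0.0, 1.0                              # generation 0: the "real data"
      for gen in range(1, 31):
          samples = rng.normal(mu, sigma, size=50)      # training data = prior model's output
          mu, sigma = samples.mean(), samples.std()     # refit the "model" to those samples
          print(f"gen {gen}: mu={mu:+.3f}, sigma={sigma:.3f}")
      # mu wanders away from 0 and sigma tends, on average, to drift downward:
      # sampling error compounds generation over generation instead of being corrected.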