• _cryptagion [he/him]@anarchist.nexus · 19 hours ago

    Ah, yes, the large limage model.

    “some random pixels have totally nonsensical / erratic colors,”

    Assuming you could poison a model enough for it to produce this, it would also just produce the occasional random pixel that you likewise wouldn’t notice.

    • waterSticksToMyBalls@lemmy.world · 18 hours ago

      That’s not how it works: you poison the image by tweaking some random pixels that are basically imperceptible to a human viewer, but the AI sees something wildly different, with high confidence. So you might see a cat while the AI sees a big titty goth gf and thinks it’s a cat; now when you ask the AI for a cat, it confidently draws you a picture of a big titty goth gf.
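      Rough PyTorch sketch of the “invisible to you, loud to the model” part (this is a one-step adversarial perturbation at inference time, not the actual training-time poisoning pipeline; “cat.jpg” and the epsilon value are made-up placeholders):

      ```python
      # Hedged sketch: one FGSM-style step against a stock ImageNet classifier.
      # Not how poisoning tools work internally; it only demonstrates that a tweak
      # too small to notice can change the model's answer with high confidence.
      import torch
      import torch.nn.functional as F
      from torchvision.io import read_image
      from torchvision.models import resnet18, ResNet18_Weights

      weights = ResNet18_Weights.DEFAULT
      model = resnet18(weights=weights).eval()
      preprocess = weights.transforms()

      img = preprocess(read_image("cat.jpg")).unsqueeze(0)   # placeholder image path
      img.requires_grad_(True)

      before = model(img).argmax(dim=1)                      # label the model sees now

      # One signed-gradient step *away* from its own prediction, scaled to a tiny budget.
      loss = F.cross_entropy(model(img), before)
      loss.backward()
      epsilon = 2 / 255                                      # "imperceptible" budget (normalized units)
      adv = img + epsilon * img.grad.sign()

      after = model(adv).argmax(dim=1)
      print("before:", before.item(), "after:", after.item())
      ```

      With a slightly larger epsilon or a few iterated steps the label flip is close to guaranteed while the picture still looks like the same cat to a person; poisoning schemes bake perturbations of this kind into training images so the model learns the wrong association.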

    • PrivateNoob@sopuli.xyz · 18 hours ago (edited)

      I only learned CNN models back in uni (transformers came into popularity at the end of my last semesters), but CNNs learn progressively more complex features from a picture depending on how many layers you add. With each layer the image size usually gets decreased by a multiple of 2 (usually it’s just 2) as far as I remember, and each pixel location ends up with some sort of feature data; I’ve completely forgotten how that part works tbf, it was some matrix calculation for sure.
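      A rough PyTorch sketch of that description (the layer sizes here are arbitrary, just to show the shapes): each convolution is the matrix calculation that produces the per-location feature data, and each pooling step halves the height and width.

      ```python
      # Tiny CNN feature extractor: channels (feature data per location) grow,
      # spatial size halves after every pooling stage.
      import torch
      import torch.nn as nn

      stages = nn.Sequential(
          nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 224 -> 112
          nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 112 -> 56
          nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 56 -> 28
      )

      x = torch.randn(1, 3, 224, 224)          # dummy RGB image
      for layer in stages:
          x = layer(x)
          if isinstance(layer, nn.MaxPool2d):
              print(x.shape)
      # torch.Size([1, 16, 112, 112])
      # torch.Size([1, 32, 56, 56])
      # torch.Size([1, 64, 28, 28])
      ```

      The “feature data” at each pixel location is just the channel vector at that position, which gets deeper as the spatial grid shrinks.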