• dil@lemmy.zip · 26 minutes ago

    Fine with this, I want a TikTok alt and Loops went nowhere.

    • dil@lemmy.zip · 23 minutes ago

      Guess it got an update and you can maybe self-host now. Idk, no marketing budget = nothing federated is taking off; shorts platforms need people.

  • Rose56@lemmy.zip · 3 hours ago

    Sure pal, sure. That’s what they all said, no AI, and look now.
    Can’t wait for when they offer him a billion or scare him to death into putting AI on it.

  • Doomsider@lemmy.world · 5 hours ago

    To get the perfect selfie please pull your tongue back in your mouth. Good, now close your mouth. Perfect, now lower your phone thirty degrees. Great, now lower it thirty degrees more. Almost there! Now lower it thirty more degrees and put it the fuck away in your pocket.

    Congratulations, you now have the perfect selfie.

    • khepri@lemmy.world · 4 hours ago

      Jack Dorsey more than deserves the hate and I’m happy to discuss it with you.

    • Dayroom7485@lemmy.world · 3 hours ago

      We like our negativity here - it’s still okay to disagree and be positive though!

      That being said, Dorsey was fine selling his last media company to the highest-bidding fascist. Chances are he‘ll do it again.

      Personally, I won’t use any social media that isn’t billionaire-proof.

  • REDACTED@infosec.pub · 14 hours ago

    Honestly, I don’t really trust any cryptobro, I see them on similar level as AI-bros

  • Jo Miran@lemmy.ml · 1 day ago

    So what’s the angle? The Internet is getting flooded by AI slop. AI needs fresh REAL content to train with. That’s the angle. You are there to provide fresh and original content to feed the AI.

      • Apathy@lemmy.world · 16 hours ago

        Is you a youngin? Cause no product under the control of a billionaire is free. If it’s free, you are the product. AI is hated, and they’re trying to make a product using that hate as the basis for a target audience.

        • rumba@lemmy.zip · 16 hours ago

          Nothing is free. If they can sell ads to people because those people don’t like AI, they will. They’re rebooting it with about the same intent it was originally designed to have.

    • chunes@lemmy.world · 1 day ago

      Again with this idea of the ever-worsening AI models. It just isn’t happening in reality.

      • EldritchFemininity@lemmy.blahaj.zone · 10 hours ago

        It has been proven over and over that this is exactly what happens. I don’t know if it’s still the case, but ChatGPT was strictly limited to training data from before a certain date because the amount of AI content after that date had negative effects on the output.

        This is very easy to see because an AI is simply regurgitating algorithms created based on its training data. Any biases or flaws in that data become ingrained into the AI, causing it to output more flawed data, which is then used to train more AI, which further exacerbates the issues as they become even more ingrained in those AI who then output even more flawed data, and so on until the outputs are bad enough that nobody wants to use it.

        Did you ever hear that story about the researchers who had 2 LLMs talk to each other and they eventually began speaking in a language that nobody else could understand? What really happened was that their conversation started to turn more and more into gibberish until they were just passing random letters and numbers back and forth. That’s exactly what happens when you train AI on the output of AI. The “AI created their own language” thing was just marketing.
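
The feedback loop described above is easy to demonstrate with a toy experiment (my own illustration, not taken from the ChatGPT case mentioned): "train" a trivial model by fitting a mean and standard deviation to some data, generate the next generation's training data from the fitted model, and repeat. The diversity of the data collapses within a few dozen generations:

```python
import random
import statistics

def train_on_own_output(n_samples=5, generations=100, seed=None):
    """Toy 'model': fit a mean/stdev to data, then sample from the fit.
    Each generation trains only on the previous generation's output."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # generation 0: the "real" data distribution
    for _ in range(generations):
        samples = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        mu = statistics.fmean(samples)      # refit on generated data...
        sigma = statistics.stdev(samples)   # ...losing a bit of spread each time
    return sigma

# Spread (stdev) collapses toward zero after repeated self-training;
# take a median over several chains to smooth out individual runs.
final = statistics.median(train_on_own_output(seed=s) for s in range(15))
```

Each refit loses a little of the tails, and since no fresh real data ever re-enters the loop, the losses compound: the same basic mechanism the model-collapse research describes for AI trained on AI output.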

      • cley_faye@lemmy.world · 15 hours ago

        Not only is it actually happening, it’s well researched and mathematically proven.

      • pulsewidth@lemmy.world · 22 hours ago

        The same reality where GPT-5’s launch a couple months back was a massive failure with users and showed regression to less reliable output than GPT-4? Or perhaps the reality where most corporations that tried AI reported this year that they found no benefit and gave up?

        LLMs are good tools for some uses, but those uses are quite limited and niche. They are, however, a square peg being crammed into the round hole of ‘AGI’ by Altman et al. while they put their hands out for another $10bil - or, more accurately, while they make trade-swap deals with MS or Nvidia or any of the other AI ouroboros trade partners that hype up the bubble for self-benefit.

      • theneverfox@pawb.social · 22 hours ago

        People really latched onto the idea, which was shared with the media by people actively working on how to solve the problem.

        • gerryflap@feddit.nl · 1 day ago

          Fun fact, this loop is kinda how one of the generative ML algorithms works. This algorithm is called Generative Adversarial Networks or GAN.

          You have a so-called Generator neural network G that generates something (usually images) from random noise and a Discriminator neural network D that can take images (or whatever you’re generating) as input and outputs whether this is real or fake (not actually in a binary way, but as a continuous value). D is trained on images from G, which should be classified as fake, and real images from a dataset that should be classified as real. G is trained to generate images from random noise vectors that fool D into thinking they’re real. D is, like most neural networks, essentially just a mathematical function so you can just compute how to adjust the generated image to make it appear more real using derivatives.

          In the perfect case these 2 networks battle until they reach peak performance. In practice you usually need to do some extra shit to prevent the whole situation from crashing and burning. What often happens, for instance, is that D becomes so good that it doesn’t provide any useful feedback anymore. It sees the generated images as 100% fake, meaning there’s no longer an obvious way to alter the generated image to make it seem more real.

          Sorry for the infodump :3
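
The GAN loop described above can be sketched end to end in a few lines. This is a toy illustration of my own, not anyone's production code: a linear generator and a logistic-regression discriminator on 1-D data, with the gradients written out by hand (real GANs use deep networks and autodiff, and as noted above need extra tricks to train stably):

```python
import math
import random

def sigmoid(z):
    # numerically stable logistic function
    return 1 / (1 + math.exp(-z)) if z >= 0 else math.exp(z) / (1 + math.exp(z))

rng = random.Random(0)
a, b = 1.0, 0.0   # generator G(z) = a*z + b, starts generating roughly N(0, 1)
w, c = 0.1, 0.0   # discriminator D(x) = sigmoid(w*x + c)
lr, batch = 0.02, 32

for step in range(2000):
    reals = [rng.gauss(3.0, 0.5) for _ in range(batch)]  # "real" data: N(3, 0.5)
    zs = [rng.gauss(0.0, 1.0) for _ in range(batch)]
    fakes = [a * z + b for z in zs]

    # Discriminator step: minimize -log D(real) - log(1 - D(fake))
    gw = gc = 0.0
    for x in reals:
        d = sigmoid(w * x + c)
        gw += (d - 1) * x; gc += (d - 1)
    for x in fakes:
        d = sigmoid(w * x + c)
        gw += d * x; gc += d
    w -= lr * gw / batch; c -= lr * gc / batch

    # Generator step: minimize -log D(G(z)), differentiating through D
    ga = gb = 0.0
    for z in zs:
        x = a * z + b
        err = (sigmoid(w * x + c) - 1) * w
        ga += err * z; gb += err
    a -= lr * ga / batch; b -= lr * gb / batch
```

In the usual run the generator's offset b drifts toward the real mean as D pushes back, though as the comment says, this kind of vanilla setup can oscillate or stall rather than converge cleanly.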

      • boonhet@sopuli.xyz · 1 day ago

        Well the AI-based AI detector isn’t actively making creative people’s work disappear into a sea of gen-AI “art” at least.

        There’s good and bad use cases for AI, I consider this a better use case than generating art. Now the question is whether or not it’s feasible to detect AI this way.

        • TheGrandNagus@lemmy.world · 1 day ago

          Indeed.

          I have an Immich instance running on my home server that backs up my and my wife’s photos. It’s like an open source Google Photos.

          One of its features is a local AI model that recognises faces and tags names on them, as well as doing stuff like recognising when a picture is of a landscape, food, etc.

          Likewise, Firefox has a really good offline translation feature that runs locally and is open source.

          AI doesn’t have to be bad. Big tech and venture capital are just choosing to make it so.

    • Capricorn_Geriatric@lemmy.world · 1 day ago

      The ban doesn’t need a 100% perfect AI screening protocol to be a success.

      Just the fact that AI is banned might appeal to a wide demographic. If the ban is actually enforced, even in just 25% of the most blatant cases, it might be just the push a new platform needs to take off.

    • nondescripthandle@lemmy.dbzer0.com · 2 days ago

      Just the threat of being able to summarily remove AI content and hand out account discipline will cut down drastically on AI and practically eliminate the really low-effort ‘slop’. It’s not perfect, but it’s damn useful.

      • FaceDeer@fedia.io · 1 day ago

        It’s also going to make it really easy to take down the content you don’t like, just accuse it of being AI and watch the witch hunting roll in. I’ve seen plenty of examples of traditional artists getting accused of using AI in other forums, I don’t imagine this will be any different.

        • oftenawake@lemmy.dbzer0.com · 14 hours ago

          I got accused of being an AI for writing a comment reply to someone which was merely informative, empathic and polite!

        • nondescripthandle@lemmy.dbzer0.com · 24 hours ago

          People already mass-report to abuse existing AI moderation tools. It’s already starting to be accounted for, and honestly I can’t imagine it so much as slowing down the implementation of an anti-AI rule.

    • edryd@lemmy.world · 1 day ago

      Just because something might be hard means we should give up before even trying?

    • osaerisxero@kbin.melroy.org · 2 days ago

      Only if we let it be. There’s no technical reason why the origin of a video couldn’t have a signature generated by the capture device, or legally requiring AI models to do the same for any content they generate. Anything without an origin sticker is assumed to be garbage by default. Obviously there would need to be some way to make captures either anonymous or not at the user’s choice, and nation states can evade these things with sufficient effort like they always do, but we could cut a lot of slop out by doing some simple stuff like that.
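
The origin-signature idea above can be illustrated in a few lines. This is a toy of my own with made-up names, using an HMAC as a stand-in for what would really be a public-key signature chained to a certificate from the phone's secure element (as in standards like C2PA), and it inherits the weakness raised in the replies: anyone who extracts the key can sign anything.

```python
import hashlib
import hmac

# Hypothetical per-device key. In a real scheme this secret would live in
# the phone's hardware security module and verifiers would check a public
# key, never hold the secret themselves.
DEVICE_KEY = b"secret-key-baked-into-capture-hardware"

def sign_capture(video_bytes: bytes) -> str:
    """Capture device attaches an origin signature to the recording."""
    return hmac.new(DEVICE_KEY, video_bytes, hashlib.sha256).hexdigest()

def verify_origin(video_bytes: bytes, signature: str) -> bool:
    """Platform checks the 'origin sticker'; anything that fails is
    treated as garbage by default."""
    expected = hmac.new(DEVICE_KEY, video_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

clip = b"\x00\x01fake-video-frames"
tag = sign_capture(clip)
ok = verify_origin(clip, tag)               # untouched capture verifies
tampered = verify_origin(clip + b"!", tag)  # edited or unsigned content fails
```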

      • kinsnik@lemmy.world · 1 day ago

        While a phone signing a video to show that it was captured with the camera is possible, it would also be easy to fake the signature: all it would take is a hacked device to steal the private key. And even if Apple/Google/Samsung have perfectly secure systems to sign the origin of a video, there would be a ton of cheaper phones that likely won’t.

      • Pennomi@lemmy.world · 1 day ago

        “Legally” doesn’t mean shit if it’s not enforceable. Besides, removing watermarks is trivial.

        There is no technically rigorous way to filter AI content, unfortunately.

    • Perspectivist@feddit.uk · 1 day ago

      Or just have a filter to hide it. I don’t feel like banning something from everyone just because I personally don’t like it.

      • finitebanjo@lemmy.world · 3 hours ago

        I think such a filter wouldn’t function well enough to keep up, as is the case with search engines that offer the feature; instead, a zero-tolerance ban would be the only effective method.

        • Perspectivist@feddit.uk · 1 hour ago

          A zero-tolerance ban still requires a method of detecting AI content in order to enforce said ban. Having such a detection system in place would then just as well give people the option to choose for themselves whether they want to see such content. Of course such a filter isn’t 100% accurate, but neither is a total ban; some of that content will always get through.

          • finitebanjo@lemmy.world · 54 minutes ago

            Humans can make reports to contribute to banning accounts and even IPs that prove problematic.

            Humans contributing tags for filters would be like fighting the tide with a spoon.

      • MourningDove@lemmy.zip · 1 day ago

        I would and I’d have no problem with it at all. If people want AI slop, they can go find it where it is allowed.

      • nutsack@lemmy.dbzer0.com · 1 day ago

        personally, I would ban it at the federal level and anytime you use it someone shows up at your house and destroys everything and throws away your computer. and then you go to jail. and then anyone who tries to visit you in jail gets punched in the face. and you have to eat poop in jail

        • fonix232@fedia.io · 1 day ago

          Not really just short-form; it’s more of a take on video feeds rather than just the limited-length quick content Vine was famous for.

          Obviously the focus is still on short(ish) content format, but I see more and more people transition to longer videos to deliver content. On YT/Facebook most videos I see nowadays are 10min or above.

          • SouthFresh@lemmy.world · 1 day ago

            You’re not wrong, but an arbitrary maximum video length is the least of my problems with a Dorsey product

          • SouthFresh@lemmy.world · 1 day ago

            That would be based on the server’s policies, same as Lemmy or Mastodon.

            I’d trust a federated environment a billion times more than anything Jack Dorsey is doing

            • mark@programming.dev · 1 day ago

              Yeah, I’ll never understand why these big billionaires keep taking these “we are for the people” stances but are still trying to spin up the same ol’ for-profit, centralized products. If they really cared, they’d use that money to help nonprofits or decentralized services and stay out of the damn way.

              • BassTurd@lemmy.world · 1 day ago

                It’s because it’s profitable, that’s why they do it. As long as they don’t Elon Musk, most people either don’t know who these people are or don’t care. And if they do go full EM, then most people still don’t care and it’s still profitable.