Looks so real!

    • nullptr@lemmy.world (OP) · ↑1 ↓1 · 4 hours ago

      I think you made a mistake. Is English your second language? So basically, the adjective always goes before the noun it refers to, so it should be “I love seeing everybody’s takes on stupid AI”.

      Minor nitpick: I used the term LLM for a reason. Otherwise, have a great day!

  • LuigiMaoFrance@lemmy.ml · ↑13 ↓2 · 2 days ago

    We don’t know how consciousness arises, and digital neural networks seem like decent enough approximations of their biological counterparts to warrant caution. There are huge economic and ethical incentives to deny consciousness in non-humans. We do the same with animals to justify murdering them for our personal benefit.
    We cannot know who or what possesses consciousness. We struggle to even define it.

    • UnderpantsWeevil@lemmy.world · ↑7 ↓1 · 1 day ago

      digital neural networks seem like decent enough approximations of their biological counterparts to warrant caution

      No they don’t. Digital networks don’t act in any way like an electro-chemical meat wad programmed by DNA.

      Might as well call a helicopter a hummingbird and insist they could both lay eggs.

      We cannot know who or what possesses consciousness.

      That’s sophism. You’re functionally asserting that we can’t tell the difference between someone who is alive and someone who is dead.

      • yermaw@sh.itjust.works · ↑4 · 1 day ago

        I don’t think we can currently prove that anyone other than ourselves is even conscious. As far as I know, I’m the only one. The people around me look and act and appear conscious, but I’ll never know.

        • UnderpantsWeevil@lemmy.world · ↑1 · 20 hours ago

          I don’t think we can currently prove that anyone other than ourselves is even conscious.

          You have to define consciousness before you can prove it. I might argue that our definition of consciousness is fuzzy. But not so fuzzy that “a human is conscious and a rock is not” is up for serious debate.

          The people around me look and act and appear conscious, but I’ll never know.

          You’re describing Philosophical Zombies. And the broad answer to the question of “How do I know I’m not just talking to a zombie?” boils down to “You have to treat others as you would expect to be treated and give them the benefit of the doubt.”

          Mere ignorance is not evidence of a thing. And when you have an abundance of evidence to the contrary (these other individuals who behave and interact with me as I do, thus signaling all the indications of the consciousness I know I possess), defaulting to the negative assertion because you don’t feel convinced isn’t skeptical inquiry, it’s cynical denialism.

          The catch with AI is that we have ample evidence to refute the claims of consciousness. So a teletype machine that replicates human interactions can be refuted as “conscious” on the grounds that it’s a big box full of wires and digital instructions which you know in advance was designed to create the illusion of humanity.

          • yermaw@sh.itjust.works · ↑1 · 20 hours ago

            My point was more “if we can’t even prove one another is sentient, how can we possibly prove that a computer can’t be?”.

            • UnderpantsWeevil@lemmy.world · ↑1 · 20 hours ago

              If you can’t find ample evidence of human sentience then you either aren’t looking or are deliberately misreading the definition of the term.

              If you can’t find ample evidence that computers aren’t sentient, same goes.

              You can definitely put blinders on and set yourself up to be fooled, one way or another. But there’s a huge difference between “unassailable proof” and “ample convincing data”.

        • gedhrel@lemmy.world · ↑2 · 1 day ago

          Really? I know. So either you’re using that word wrong or your first principles are lacking.

  • bampop@lemmy.world · ↑8 · 2 days ago

    People used to talk about the idea of uploading your consciousness to a computer to achieve immortality. But nowadays I don’t think anyone would trust it. You could tell me my consciousness was uploaded and show me a version of me that was indistinguishable from myself in every way, but I still wouldn’t believe it experiences or feels anything as I do, even though it claims to do so. Especially if it’s based on an LLM, since they are superficial imitations by design.

    • yermaw@sh.itjust.works · ↑4 · 1 day ago

      Also, even if it does experience and feel and have awareness and all that jazz, why do I want that? The I that is me is still going to face The Reaper, which is the only real reason to want immortality.

    • UnderpantsWeevil@lemmy.world · ↑2 · 1 day ago

      You could tell me my consciousness was uploaded and show me a version of me that was indistinguishable from myself in every way

      I just don’t think this is a problem in the current stage of technological development. Modern AI is a cute little magic act, but humans (collectively) are very good at piercing the veil and then spreading around the discrepancies they’ve discovered.

      You might be fooled for a little while, but eventually your curious monkey brain would start poking around the edges and exposing the flaws. At this point, it would not be a question of whether you can continue to be fooled, but whether you strategically ignore the flaws to preserve the illusion or tear the machine apart in disgust.

      I still wouldn’t believe it experiences or feels anything as I do, even though it claims to do so

      People have submitted to less. They’ve worshipped statues and paintings and trees and even big rocks, attributing consciousness to all of them.

      But animism is a real esoteric faith. You believe it despite the evidence in front of you, not because of it.

      I’m putting my money down on a future where large groups of people believe AIs are more than just human: they’re magical angels and demons.

      • bampop@lemmy.world · ↑1 · 1 day ago

        I just don’t think this is a problem in the current stage of technological development. Modern AI is a cute little magic act, but humans (collectively) are very good at piercing the veil and then spreading around the discrepancies they’ve discovered.

        In its current stage, no. But it’s come a long way in a short time, and I don’t think we’re so far from having machines that pass the Turing test 100%. But rather than being a proof of consciousness, all this really shows is that you can’t judge consciousness from the outside looking in. We know it’s a big illusion just because its entire development has been focused on building that illusion. When it says it feels something, or cares deeply about something, it’s saying that because that’s the kind of thing a human would say.

        Because all the development has been focused on fakery rather than understanding and replicating consciousness, we’re close to the point where we can have a fake consciousness that would fool anyone. It’s a worrying prospect, and not just because I won’t become immortal by having a machine imitate my behaviour. There are bad actors working to exploit this situation. Elon Musk’s attempts to turn Grok into his own personally controlled overseer of truth and narrative seem to backfire in the most comical ways, but those are teething troubles, and in time this will turn into a very subtle and pervasive problem for humankind. The intrinsic fakeness of it is a concerning aspect. It’s like we’re getting a puppet show version of what AI could have been.

        • UnderpantsWeevil@lemmy.world · ↑1 · 20 hours ago

          I don’t think we’re so far from having machines that pass the Turing test 100%.

          The Turing test isn’t solved with technology; it’s solved with participants who are easier to fool or more willing to read computer output as human. In the end, it can boil down to social conventions far more than actual computing capacity.

          Per the old Inglourious Basterds gag:

          You can fail the Turing Test not because you’re a computer but because you’re a British computer.

          Because all the development has been focused on fakery rather than understanding and replicating consciousness, we’re close to the point where we can have a fake consciousness that would fool anyone.

          We’ve ingested a bunch of early-21st-century digital markers for English-language, Western-oriented human speech and replicated those patterns. But human behavior isn’t limited to Americans shitposting on Reddit. Neither is American culture a static construct. As the median real user and the median simulated user in the dataset drift apart, the differences become more obvious.

          Do we think the designers at OpenAI did a good enough job to keep catching up to the current zeitgeist?

    • finitebanjo@piefed.world · ↑1 · 1 day ago

      Although if a person who knows the context still acts confused when people complain about AI, it’s about as honest as somebody trying to solve for circumference with an apple pie.

  • bss03@infosec.pub · ↑3 · 2 days ago

    Clair Obscur: Expedition 33

    Clair Obscur: Expedition to meet the Dessandre Family

  • HazardousBanjo@lemmy.world · ↑2 · 1 day ago

    I think you’d have fewer dumb-ass average Joes cumming over AI if they could understand that, regardless of whether the AI wave crashes and burns, the CEOs who’ve pushed for it won’t feel the effects of the crash.

  • Lightfire228@pawb.social · ↑3 ↓1 · 2 days ago

    I suspect Turing Complete machines (all computers) are not capable of producing consciousness.

    If they were, then theoretically a game of Magic the Gathering could experience consciousness (or any similar physical system that can emulate a Turing Machine).

    • nednobbins@lemmy.zip · ↑2 · 2 days ago

      Most modern programming languages are theoretically Turing complete, but they all have finite memory. That also keeps human brains from being Turing complete. I’ve read a little about theories beyond Turing completeness, like quantum computers, but I’m not aware of anyone claiming that human brains are capable of that.

      A game of Magic could theoretically do any task a Turing machine could do, but it would be really slow. Even if it could “think”, it would likely take years to decide to do something as simple as farting.
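
      The entire “machine” a substrate has to emulate is tiny, which is why so many odd systems turn out to be Turing complete. A rough sketch in Python (my own toy example, nothing to do with the actual Magic construction):

      ```python
      # A Turing machine is just a lookup table plus a tape.
      def run(rules, tape, state="A", pos=0, max_steps=10**6):
          """rules maps (state, symbol) -> (write, move, next_state)."""
          tape = dict(enumerate(tape))  # sparse tape, grows as needed
          for _ in range(max_steps):    # a real TM has no step bound
              if state == "HALT":
                  return tape
              symbol = tape.get(pos, 0)
              write, move, state = rules[(state, symbol)]
              tape[pos] = write
              pos += 1 if move == "R" else -1
          raise TimeoutError("still running (cf. the halting problem)")

      # A toy machine that zeroes 1s until it reads a 0, then halts:
      rules = {
          ("A", 1): (0, "R", "A"),
          ("A", 0): (1, "R", "HALT"),
      }
      print(run(rules, [1, 1, 1]))  # {0: 0, 1: 0, 2: 0, 3: 1}
      ```

      Anything that can encode that lookup table and tape, even in card triggers, computes the same class of functions; it just pays an absurd constant factor in speed.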

      • Lightfire228@pawb.social · ↑1 · 19 hours ago

        I don’t think the distinction between “arbitrarily large” memory and “infinitely large” memory matters here.

        Also, Turing completeness is about the “class” of problems a computer can solve (the Halting Problem being the classic example of one that falls outside that class).

        I conjecture that whatever the brain is doing to achieve consciousness is a fundamentally different operation, one that a Turing Complete machine cannot perform, mathematically.

        Also also, quantum computers (at least as I understand them, which is not very well) are still Turing complete. They just use analog properties of quantum wave functions as computational components.

        • nednobbins@lemmy.zip · ↑2 · 15 hours ago

          There’s a real vs theoretical distinction. Turing machines are defined as having infinite memory. Running out of memory is a big issue that prevents computers from solving problems that Turing machines should be able to solve.

          The halting problem, a bunch of problems involving prime numbers, and a bunch of other weird math problems are all things that can’t be solved with Turing machines. They can all sort of be solved in some circumstances (e.g. a TM can correctly classify many programs as either halting or not halting, but there are edge cases it can’t figure out, even with infinite memory).

          From what I remember, most researchers believe that human brains are Turing Complete. I’m not aware of any class of problem that humans can solve that we don’t think are solvable by sufficiently large computers.

          You’re right that quantum computers are Turing complete. They’re just the closest practical thing I could think of to something beyond it. They often let you knock down the big-O relative to regular computers. That was my point, though. We can describe something that goes beyond TC (like “it can solve the halting problem”) but there don’t seem to be any examples of them.
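
          The classic argument for why no TM can decide halting in general is short enough to sketch. Hypothetical Python, with halts() assumed into existence purely to derive the contradiction:

          ```python
          def halts(program, arg):
              """Assumed oracle: True iff program(arg) eventually halts."""
              raise NotImplementedError  # cannot actually be implemented

          def diagonal(program):
              # Do the opposite of whatever halts() predicts about
              # running `program` on its own source.
              if halts(program, program):
                  while True:  # predicted to halt -> loop forever
                      pass
              # predicted to loop forever -> halt immediately

          # Consider halts(diagonal, diagonal):
          #   True  -> diagonal(diagonal) loops forever: contradiction
          #   False -> diagonal(diagonal) halts: contradiction
          # So no total, always-correct halts() can exist.
          ```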

          • Lightfire228@pawb.social · ↑1 · 15 hours ago

            I’m not aware of any class of problem that humans can solve that we don’t think are solvable by sufficiently large computers.

            That is a really good point…hrmmm

            My conjecture is that some “super Turing” calculation is required for consciousness to arise. But that super Turing calculation might not be necessary for anything else, like logic, balance, visual processing, etc.

            However, if the brain is capable of something super Turing, I also don’t see why that property wouldn’t translate to super Turing “higher order” brain functions like logic…

            • nednobbins@lemmy.zip · ↑2 · 15 hours ago

              We certainly haven’t ruled out the possibility that the human brain is capable of some sort of “super Turing” calculations. That would lead me to two questions:

              1. Can we devise some test to show this, if we expand our definition of “test” to include anything we can measure, directly or indirectly, through our senses?

              2. What do we think is the “magic” ingredient that allows humans to engage in “super Turing” activities that a computer doesn’t have? E.g. are carbon compounds inherently more suited to intelligence than silicon compounds?

  • Thorry@feddit.org · ↑78 ↓5 · 3 days ago

    Ah but have you tried burning a few trillion dollars in front of the painting? That might make a difference!

  • MercuryGenisus@lemmy.world · ↑12 ↓2 · 3 days ago

    Remember when passing the Turing Test was like a big deal? And then it happened. And now we have things like this:

    Stanford researchers reported that ChatGPT passes the test; they found that ChatGPT-4 “passes a rigorous Turing test, diverging from average human behavior chiefly to be more cooperative”

    The best way to differentiate computers from people is that we haven’t taught AI to be an asshole all the time. Maybe it’s a good thing they aren’t like us.

    • Sconrad122@lemmy.world · ↑16 ↓2 · 3 days ago

      Alternative way to phrase it: we don’t train humans to be ego-satiating brown-nosers; we train them to be (often poor) judges of character. AI would be just as nice to David Duke as it is to you. Also, “they” anthropomorphizes LLM AI much more than it deserves; it’s not even a single identity, let alone a set of multiple identities. It is a bundle of hallucinations, loosely tied together by suggestions and patterns taken from stolen data.

      • Aeri@lemmy.world · ↑4 · 2 days ago

        Sometimes I feel like LLM technology and its relationship with humans are a symptom of how poorly we treat each other.

  • Ex Nummis@lemmy.world · ↑27 ↓4 · 3 days ago

    As long as we can’t even define sapience in biological life, where it resides and how it works, it’s pointless to try and apply those terms to AI. We don’t know how natural intelligence works, so using what little we know about it to define something completely different is counterintuitive.

    • finitebanjo@piefed.world · ↑2 · 2 days ago

      100 billion glial cells and DNA for instructions. When you get to replicating that, lmk, but it sure af ain’t an algorithm made to guess the next word.

    • daniskarma@lemmy.dbzer0.com · ↑6 ↓3 · 3 days ago

      We don’t know what causes gravity, or how it works, either. But you can measure it, define it, and even write down a law that approximates very precisely what will happen when gravity is involved.

      I don’t think LLMs will create intelligence, but I don’t think we need to solve everything about human intelligence before having machine intelligence.

      • Perspectivist@feddit.uk · ↑8 ↓1 · 3 days ago

        Though in the case of consciousness - the fact of there being something it’s like to be - not only do we not know what causes it or how it works, we have no way of measuring it either. There’s zero evidence for it in the entire universe outside of our own subjective experience of it.

        • CheeseNoodle@lemmy.world · ↑2 ↓3 · 2 days ago

          To be fair, there’s zero evidence for anything outside our own subjective experience of it; we’re just kind of running with the assumption that our subjective experience is an accurate representation of reality.

        • finitebanjo@piefed.world · ↑1 ↓5 · 2 days ago

          We’ve actually got a pretty good understanding of the human brain; we just don’t have the tech that could replicate it on any sort of budget, nor a use case for it. Spoiler: there is no soul.

  • Jhex@lemmy.world · ↑16 ↓1 · 3 days ago

    The example I gave my wife was “expecting General AI from the current LLM models is like teaching a dog to roll over and expecting that, with a year of intense training, the dog will graduate from law school”.

  • finitebanjo@piefed.world · ↑26 ↓2 · 3 days ago

    And not even a good painting but an inconsistent one, whose eyes follow you around the room and which occasionally tries to harm you.

      • finitebanjo@piefed.world · ↑5 · 3 days ago

        I tried to submit an SCP once, but there’s a “review process” and it boils down to only getting in by knowing somebody who is in.

      • peopleproblems@lemmy.world · ↑5 · 3 days ago

        Agents have debated whether the new phenomenon constitutes a new designation. While some have reported the painting following them, the same agents will later report that nothing seems to occur. The agents who report a higher frequency of the painting following them also report a higher frequency of unexplained injury. The injuries can be attributed to cases of self-harm, leading scientists to believe these SCP agents were predisposed to mental illness that was not caught during new-agent screening.

      • finitebanjo@piefed.world · ↑7 · 3 days ago

        It clearly, demonstrably is. That’s the problem: people are estimating AI to be approximately human, but it’s so, so, so much worse in every way.

        • lauha@lemmy.world · ↑1 ↓3 · 2 days ago

          You are comparing AI to a person who wrote a dictionary, i.e. a domain expert. Take an average person off the street and they’ll write the same slop as current AIs.

            • finitebanjo@piefed.world · ↑3 ↓1 · 2 days ago

              But you wouldn’t hire a random person off the street to write the dictionary. You wouldn’t hire a nonspecialist to do anything. If you did, you could at least expect a human to learn and grow, or have a bare minimum standard for ethics, morals, or responsibility for their actions. You cannot expect that from an AI.

              AI has no use case.

              • lauha@lemmy.world · ↑1 ↓1 · 1 day ago

              If you did, you could at least expect a human to learn and grow, or have a bare minimum standard for ethics, morals, or responsibility for their actions.

                Some do, but you are somehow ignoring the most talked-about person in the world right now, the president of the United States. And the party in power. And all the richest men in the world. And literally all the large corporations.

                The problem is you are not looking for AI to be an average human. You are looking for a domain expert in literally everything, with the behaviour of the best of us, but trained on the behaviour of the average of all of us.

                • finitebanjo@piefed.world · ↑1 · 1 day ago

                Lmao, this tech bro is convinced only a minority of people have any learning capacity.

                The Republicans were all trained with carrots and sticks, too.

              • FridaySteve@lemmy.world · ↑1 ↓2 · 2 days ago

                It sorts data and identifies patterns and trends. You may be referring only to AI-enabled LLMs tasked with giving unique and creative output, which isn’t going to give you any valuable results.