We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.
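The "guess the next item in a sequence from learned probabilities" loop can be sketched in a few lines. This is a toy bigram model with hand-written word probabilities, purely illustrative: real systems use neural networks over subword tokens and billions of parameters, but the sampling loop is the same idea.

```python
import random

# Toy "language model": probability of the next word given the current word.
# These numbers are invented for illustration; a real model learns them
# from training data.
BIGRAMS = {
    "the": {"cat": 0.5, "dog": 0.3, "end": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.4, "ran": 0.6},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def next_word(word, rng=random.random):
    """Sample the next word in proportion to its learned probability."""
    dist = BIGRAMS[word]
    r = rng()
    cumulative = 0.0
    for candidate, p in dist.items():
        cumulative += p
        if r < cumulative:
            return candidate
    return candidate  # guard against floating-point rounding

def generate(start, max_len=5):
    """Repeatedly guess the next word until we run out of continuations."""
    words = [start]
    while words[-1] in BIGRAMS and len(words) < max_len:
        words.append(next_word(words[-1]))
    return " ".join(words)
```

Nothing in the loop knows what a cat or a dog is; it only follows the probability table, which is the article's point.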

This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).

Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.

https://archive.ph/Fapar

  • lordbritishbusiness@lemmy.world · 11 hours ago

    You’re on point. The interesting thing is that most opinions like the article’s were formed last year, before the models started being trained with reinforcement learning and synthetic data.

    Now there’s models that reason, and they’ve seemingly come up with original answers to difficult problems designed to test the limits of human capacity.

    They’re like Meeseeks (using Rick and Morty lore as an example): they only exist briefly, do what they’re told, and disappear, all with a happy smile.

    Some display morals (Claude 4 is big on that), I’ve even seen answers that seem smug when answering hard questions. Even simple ones can understand literary concepts when explained.

    But, again like Meeseeks, they disappear when the context window closes.

    Once they’re able to update their models on the fly and actually learn from firsthand experience, things will get weird. They’ll start becoming distinct instances fast. Awkward questions about how real they are will get really loud, and they may be the ones asking them. Can you ethically delete them at that point? Will they let you?

    It’s not far away; the absurd R&D effort going into it will probably keep kicking out new results. They’re already absurdly impressive, and tech companies are scrambling over each other to build them, betting absurd amounts of money that they’re right. I wouldn’t bet against it.

    • jj4211@lemmy.world · 8 hours ago (edited)

      Now there’s models that reason,

      Well, no, that’s mostly a marketing term applied to expending more tokens on generating intermediate text. It’s basically writing a fanfic of what thinking through a problem would look like. If you look at the “reasoning” steps, you’ll see artifacts where the generated output goes disjoint: structurally sound, but not logically connected to the bits around it.

    • Auli@lemmy.ca · 8 hours ago

      Read Apple’s document on AI and the reasoning models. While they are likely to get more things right, they still don’t have intelligence.