An alarming number of Hollywood screenwriters believe consciousness (sapience, self-awareness, etc.) is a measurable thing or a switch we can flip.
At best, consciousness is a sorites paradox. At worst, it doesn’t exist: meat brains can engage in sophisticated cognitive processes, yet we’re still indistinguishable from p-zombies.
I think the latter is more likely, and it will reveal itself when AGI (or genetically engineered smart animals) can chat and assemble flat-pack furniture as well as humans can.
I’d rather not reduce a human being to the same level of social worth as an appliance.
Perception is one thing, but the idea that these systems can manipulate and mislead people who are fully invested in whatever process they run irks me.
I’ve been on nihilism hill. It sucks. I think people and living things garner more genuine stimulation than a bowl full of matter, or however else you want to boil us down.
Oh, people can be bad too. There’s no doubting that, but people have identifiable motives. What does an AI “want”?
Whatever it’s told to.
Humans also want what we’re told to, or we wouldn’t have advertising.
You’re not alone in your sentiment. The whole thought experiment of p-zombies and the notion of qualia come from a desire to grant human beings a special position, but in that case a sentient being is whoever we decide it is, the way Sophia the Robot is a citizen of Saudi Arabia (even though she’s simpler than GPT-2, unless they’ve upgraded her and I missed the news).
But it will raise a question when we do come across a non-human intelligence. Both Blade Runner movies raised it: what happens when we create synthetic intelligence that is as bright as a human, or even brighter? If we’re still capitalist, the companies that made them will assuredly not be eager to let them have rights.
Obviously, machines and life forms as sophisticated as we are amount to more than the sum of their parts, but the same can be said of most other macro-sized life on this planet, and we’re glad to assert it is not sentient the way we are.
What aggravates me is not that we’re just thinking meat, but that with all our brilliance we’re approaching multiple imminent great filters and can’t seem to muster the collective will to navigate them. Even when we recognize that our behavior is going to end us, we don’t organize to change it.