Please assign probabilities to the following (for the next 3 decades):
1. probability that an AI smarter than any human on any intellectual task a human can do comes to exist (superintelligence);
2. given (1), probability that it decides to kill all humans to achieve its goals (misaligned);
3. given (2), probability that it is successful at killing all humans;
bonus: given (1) and (2), probability that we don’t even notice it wants to kill us, e.g. because we don’t know how to understand what it’s thinking.
Since the AI is smarter than me, I only need to propose one plausible method by which it could exterminate all humans; it can come up with a method at least as good as mine, and most likely something much better. The typical answer here would be that it bio-engineers a lethal virus which is initially harmless (to avoid detection) but responds to some trigger, like the introduction of a certain chemical or maybe a strong radio signal. If it’s very smart and has a very good understanding of bioengineering, it should be able to produce a virus like this by paying a laboratory to e.g. perform some CRISPR edits on an existing viral strain (or even just mix some chemicals together, if Sagan turns out to be right about bioengineering) and mail a sample somewhere. It can wait until everyone is infected before triggering the strain.
Well, the probability you have for the AI apocalypse should ultimately be the product of those three numbers. I’m curious which of those is the one you think is so unlikely.
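To make the arithmetic concrete, here’s a minimal sketch of that product; the three values are placeholders I made up for illustration, not estimates from either of us:

```python
# Chain-rule decomposition: P(apocalypse) = P(1) * P(2 | 1) * P(3 | 1, 2).
# The values below are placeholders, not estimates.
p_superintelligence = 0.5   # P(1): superintelligence within 3 decades
p_misaligned = 0.3          # P(2 | 1): it decides to kill all humans
p_succeeds = 0.5            # P(3 | 1, 2): it actually pulls it off

p_apocalypse = p_superintelligence * p_misaligned * p_succeeds
print(p_apocalypse)  # 0.075 with these placeholder values
```

Swap in your own three numbers; whichever one you push toward zero is the one doing the work in your overall estimate.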
Or how about you don’t assign me tasks and I don’t do them? Cuz I don’t remember signing up for a class.