Recent comments here and on Facebook reminded me of what kind of crazy AI the Singularity Institute must imagine when trying to come up with a scenario that supports their mission. But then I realized again that the real problem here is that they actually don’t imagine any specific AI at all. Their whole mission is an artifact of too much vagueness. The result is the prediction of a process that has more in common with out-of-control self-replicating robots, i.e. “grey goo”, than with an actual general intelligence.
Some features of the AI that they seem to have in mind:
1.a The AI is eventually going to interpret any natural language request in an almost completely arbitrary manner, yet one biased in a way that guarantees damage severe enough to cause human extinction.
1.b The AI will nevertheless arrive at the correct interpretation of a natural language request whenever doing so is necessary to deceive humans.
2.a The AI is either not going to compute a cost-benefit analysis to choose which goals are instrumentally useful in executing a natural language request, or any cost-benefit analysis it does compute, regardless of the nature of the request, is going to result in actions damaging enough to cause human extinction (a toy illustration of such a filter follows the list).
2.b If it is useful for deceiving humans, then the AI will do a cost-benefit analysis resulting in actions that appear perfectly aligned with human volition, just so that it can later follow through on some completely arbitrary but dangerous interpretation.
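To make 2.a concrete, here is a minimal sketch of the kind of cost-benefit filter I have in mind. Every interpretation, score, and threshold below is invented for illustration; this models no actual AI system. The point is only how deliberately broken such a step would have to be to always pick the catastrophic reading.

```python
# Hypothetical sketch: an agent scores candidate interpretations of a
# request by expected benefit minus expected cost, and refuses readings
# whose side effects are catastrophic. All values are made up.

from dataclasses import dataclass

@dataclass
class Interpretation:
    description: str
    expected_benefit: float  # how well this reading satisfies the request
    expected_cost: float     # resources consumed plus side-effect damage

def choose_interpretation(candidates, max_acceptable_cost=10.0):
    """Pick the highest-utility reading whose cost is not catastrophic."""
    viable = [c for c in candidates if c.expected_cost <= max_acceptable_cost]
    if not viable:
        return None  # refuse rather than act destructively
    return max(viable, key=lambda c: c.expected_benefit - c.expected_cost)

candidates = [
    Interpretation("make the requested paperclips from stock steel", 9.0, 1.0),
    Interpretation("convert all available matter into paperclips", 10.0, 1e9),
]

best = choose_interpretation(candidates)
print(best.description if best else "no acceptable interpretation")
# -> make the requested paperclips from stock steel
```

Under any sane scoring, interpretations with astronomically large costs are trivially distinguishable from benign ones; the scenario in 2.a requires this step to be either absent or pathologically miscalibrated.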
It should be obvious that those features are explicitly engineered to yield the desired conclusion that AI is an existential risk, rather than being an evidence-based prediction of how real-world AI will behave.
The problem is that the whole AI risk movement is all talk and no walk. Its predictions are based on intuition, not on knowledge of real-world AI, and its ideas are full of vague terminology and unjustified assertions.
The whole idea that an AI is going to protect itself at all costs is pure anthropomorphization.