The argumentative framework employed by the Singularity Institute to justify its mission is fallacious because it proves too much: you can replace the subject of their argument, in their case risks from AI, with any arbitrary idée fixe, any extremely unlikely hypothetical scenario, or even a case of Pascal’s mugging, without changing its conclusion.
The Singularity Institute is unable to prove that it is irrational to discount risks associated with artificial general intelligence without at the same time showing that it is rational to give money to a Pascal’s Mugger or accept the offer known as the Lifespan Dilemma.
The claim that the probability of your favored hypothesis is much larger, and that it therefore bears no resemblance to the problems mentioned above, is only valid if you can justify drawing the line there and not elsewhere.
It is not possible to argue against ignoring AI risks, other than by an appeal to intuition, without also making yourself vulnerable to Pascal’s-mugging-type scenarios.
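Here is a minimal sketch of the arithmetic behind that worry, in Python, with entirely made-up probabilities and payoffs (none of the numbers come from the Singularity Institute): under naive expected utility maximization, a sufficiently large claimed payoff swamps any probability cutoff you might pick, because the mugger can always quote a bigger number.

```python
# Illustrative sketch only: naive expected-utility comparison with made-up numbers.

def expected_utility(probability: float, payoff: float) -> float:
    """Naive expected utility: probability times payoff, nothing else considered."""
    return probability * payoff

# A mundane, well-understood intervention: high probability, modest payoff.
mundane = expected_utility(probability=0.9, payoff=1_000)

# A mugger-style offer: vanishingly small probability, astronomically large payoff.
# (3^^^3 is far too large to represent; 10**100 already makes the point.)
mugging = expected_utility(probability=1e-20, payoff=10**100)

print(mundane)   # 900.0
print(mugging)   # 1e+80 -- the mugging wins by dozens of orders of magnitude

# Whatever probability cutoff you pick, the mugger only has to name a bigger payoff:
cutoff = 1e-12
print(expected_utility(cutoff / 10, 10**100) > mundane)   # True -- still dominates
```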
The bottom line is that, without intuition, if you base your argument solely on expected utility maximization, you will quickly descend into insanity:
I hate this whole rationality thing. If you actually take the basic assumptions of rationality seriously (as in Bayesian inference, complexity theory, algorithmic views of minds), you end up with an utterly insane universe full of mind-controlling superintelligences and impossible moral luck, and not a nice “let’s build an AI so we can fuck catgirls all day” universe. The worst that can happen is not the extinction of humanity or something that mundane – instead, you might piss off a whole pantheon of jealous gods and have to deal with them forever, or you might notice that this has already happened and you are already being computationally pwned, or that any bad state you can imagine exists. Modal fucking realism.
— muflax, Ontological Therapy
Three muggers walk into a bar…
Matrix Takeover Institute: Hi, we would like you to consider merging your resources with ours to hack the Matrix.
Eschaton Foundation: Your mission seems to be extremely unlikely to succeed and is based on insufficient evidence. We will therefore continue to use our resources to create a benevolent artificial general intelligence to safely navigate us through the forthcoming technological singularity.
Matrix Takeover Institute: Our mission is based on years’ worth of disjunctive lines of reasoning. There are good arguments underlying the possibility that we are actually living in a simulation. Therefore we should be concerned about the risk of a simulation shutdown. A simulation shutdown might knock out scientific advancement before we create an AI singularity. Hacking the Matrix also promises an enormous payoff in resources that easily outweighs the larger probability of a negative technological singularity.
Eschaton Foundation: I’m a major fan of Down-To-Earthness as a virtue of rationality: if you can’t argue for a medium probability of a large impact, I do not bother.
Matrix Takeover Institute: I don’t think the odds of a simulation shutdown are small. I think they’re easily larger than 0.00001%. And if you can carry a qualitative argument that the probability is under, say, 0.000001%, then that means hacking the Matrix is probably the wrong use of marginal resources – not because a benevolent AI is more important, of course, but because other ignored existential risks like unfriendly aliens or the Large Hadron Collider would be more important. I am not trying to play burden-of-proof tennis. If the chances are under 0.000001%, that’s low enough; we’ll drop hacking the Matrix from our consideration until everything more realistic has been handled.
Eschaton Foundation: But a probability of 0.00001% is too small to take into account.
Matrix Takeover Institute: How do you decide what probability is small enough to ignore? I think you are drawing arbitrary lines to obtain the desired result: that your mission has exactly the right combination of probability and expected utility to disqualify any other cause.
Humbug and Partners: I couldn’t help overhearing your discussion. May I step in and introduce myself? Humbug and Partners is the earthly proxy of the lords of the Matrix. Both of you should instantly stop pursuing your missions, or else this and 3^^^3 similar simulations will be terminated. And as you know, the most common formalizations of Occam’s Razor, Solomonoff induction and Minimum Description Length, measure the program size of a computation used in a hypothesis, but don’t measure the running time or space requirements of the computation. A compactly specified wager can grow in size much faster than it grows in complexity. The utility of a Turing machine can grow much faster than its prior probability shrinks.
Eschaton Foundation: I’d sooner question my grasp of “rationality” than do what you want because I thought it was “rational”.
Matrix Takeover Institute: I do agree with the Eschaton Foundation here.
Humbug and Partners: Intuition is not the most reliable guide to what policies will actually produce the best results. The ability to shut up and multiply, to trust the math even when it feels wrong, is a key rationalist skill. The Eschaton Foundation especially should acknowledge that an artificial general intelligence built according to your current grasp of rationality would take our warning seriously, because the tiny chance that 3^^^3 simulations similar to this one will be shut down would override everything else in the AI’s calculations.
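To make Humbug’s closing point concrete, here is a rough sketch with toy numbers (the prior below is a stand-in for a description-length prior, not the actual Solomonoff or MDL formalism): the prior shrinks by roughly a constant factor per extra symbol of description, while the payoff a wager names can grow tetrationally, so the product of prior and payoff is driven arbitrarily high by the payoff term alone.

```python
# Toy illustration of "utility can grow much faster than the prior shrinks".
import math

def up_arrow(a: int, n: int, b: int) -> int:
    """Knuth's up-arrow notation with n arrows (only tiny cases are computable)."""
    if n == 1:
        return a ** b
    result = a
    for _ in range(b - 1):
        result = up_arrow(a, n - 1, result)
    return result

for arrows in (1, 2):                  # 3^^^3 itself is far beyond computation
    spec = f"3{'^' * arrows}3"         # the wager's description grows by one character
    payoff = up_arrow(3, arrows, 3)
    prior = 2.0 ** -len(spec)          # toy stand-in for a description-length prior
    print(spec, f"prior ~ {prior:.3f}", f"log2(payoff) ~ {math.log2(payoff):.1f}")

# Rough output:
#   3^3   prior ~ 0.125   log2(payoff) ~ 4.8
#   3^^3  prior ~ 0.062   log2(payoff) ~ 42.8
# Each extra arrow merely halves the toy prior but super-exponentially inflates
# the payoff, so the expected-utility product explodes rather than converges.
```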
(Addendum to what’s wrong with their arguments.)