The Singularity Institute criticizes people for taking an intuitive approach to charitable giving, and to research in general; philosophers in particular are taunted. So I am trying to fathom how exactly it is formally more rational to contribute to AI risk mitigation than to do anything else.
Suppose there are 4 risks. One mundane risk has a probability of 1/10 and you assign 20 utils to its prevention. Another, less likely risk has a probability of 1/100 but you assign 1000 utils to its prevention. Yet another risk is very unlikely, having a probability of 1/1000, but you assign 1 million utils to its prevention. The fourth risk is extremely unlikely, having a probability of 10^-10000, but you assign 10^10006 utils to its prevention. All else equal, which one would you choose to prevent and why?
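To make the thought experiment concrete, here is a minimal sketch of the expected-utility calculation for these four risks. Exact rational arithmetic is used because risk 4's probability of 10^-10000 underflows to zero in floating point; the risk numbering and values are taken directly from the example above.

```python
from fractions import Fraction

# The four risks from the thought experiment: (probability, utils of prevention).
# Fractions keep risk 4's 10^-10000 probability exact instead of underflowing.
risks = {
    1: (Fraction(1, 10), 20),
    2: (Fraction(1, 100), 1_000),
    3: (Fraction(1, 1_000), 10**6),
    4: (Fraction(1, 10**10000), 10**10006),
}

# Expected utility of preventing each risk: probability * utils.
expected_utility = {k: p * u for k, (p, u) in risks.items()}

# A pure expected-utility maximizer picks the risk with the largest product.
best = max(expected_utility, key=expected_utility.get)
```

The products come out to 2, 10, 1000, and 10^6 utils respectively, so a pure expected-utility maximizer chooses risk 4, the one with the astronomically small probability. This is exactly the structure the Pascal's mugging objection below turns on.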
If you wouldn’t choose risk 4, then why wouldn’t the same line of reasoning, or intuition, be similarly valid in choosing risk 1 over risks 2 or 3? And if you would choose risk 4, do you also give money to a Pascalian mugger?
There are risks that are much more likely than risks from AI, yet they are ignored because of the conjectured value of the intergalactic civilization implied by the successful creation of a singleton AI. At the same time, other hypotheticals are ignored on the basis of their smaller probability, even though their value is sufficiently larger to outweigh their reduced likelihood. But such an objection is always possible, no matter where one draws the line. Therefore the argument is fallacious.
If you argue that it is more reasonable to contribute to the mitigation of risks associated with artificial general intelligence than to the mitigation of more or less probable risks then, unless you are just appealing to intuition, there must be some formalized argument that favors AI risk mitigation over all other possible actions. In other words, you need to formally define “reasonable”.
Note that the difference between AI risks and other possible risks can’t be their expected utility, because that results in Pascal’s mugging. Nor can the difference be that AI risk is more probable, because that argument works against AI risk too: one can always choose risks that are even more probable.
And if you are saying that AI risk is the most probable underfunded risk, then what is the greatest lower bound for “probable” here, and how do you formally define it? In other words, <probable> in conjunction with <underfunded> doesn’t work either, because any case of Pascal’s mugging is underfunded as well. You’d have to formally define and justify some well-grounded minimum for “probable”.
Tags: Singularity Institute