Pascal’s wager: Better safe than sorry?

Here are some interesting scenarios with low or unstable probabilities but potentially enormous payoffs. Some of the arguments given in favor of taking these scenarios seriously are thought-provoking as well.

Note that not all of the descriptions below are quotes; some are short summaries which might not adequately reflect the original author’s statements. Please read up on the original sources, which are provided after the description of each scenario. Also note that I do not want to judge any of these scenarios but merely list them here in order to highlight possible similarities. And despite the title, it is not my intention to suggest that the scenarios listed here are cases of Pascal’s wager, but merely that there seems to be no clear cutoff between Pascal’s-wager-style arguments and finite expected-value calculations.

The scenarios are listed roughly in order of how seriously I take them, with the one I take least seriously at the end.

1. Large asteroid strikes are low-probability, high-death events: so high-death that by some estimates the probability of dying from an asteroid strike is on the same order as that of dying in an airplane crash. [Source: Planetary Defense is a Public Good]

2. It’s often argued that voting is irrational because the probability of affecting the outcome is so small. But the outcome itself is extremely large when you consider its impact on other people. Voting might be worth a charitable donation of somewhere between $100 and $1.5 million. (A toy version of this calculation appears after the list.) [Source: Voting is like donating thousands of dollars to charity]

3. A system that is optimizing a function of n variables, where the objective depends on a subset of size k<n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable. A highly capable decision maker can have an irreversible impact on humanity. None of this proves that AI will be the end of the world. But there is no need for a proof, just a convincing argument pointing to a more-than-infinitesimal possibility. (A toy demonstration of the extreme-values effect appears after the list.) [Source: Of Myths And Moonshine]

4. We should cut way back on accidental yelling to aliens, such as via Arecibo radar transmissions, if continuing at current rates would over the long run bring even a one-in-a-billion chance of alerting aliens to come destroy us. And even if this chance is now below one in a billion, it will rise with time and eventually force us to cut back. So let’s start now to estimate such risks and adapt our behavior accordingly. [Source: Should Earth Shut the Hell Up?]

5. GMOs might introduce “systemic risk” to the environment. The chance of ecocide, or the destruction of the environment and potentially humans, increases incrementally with each additional transgenic trait introduced into the environment. The downside risks are so hard to predict — and so potentially bad — that it is better to be safe than sorry. The benefits, no matter how great, do not merit even a tiny chance of an irreversible, catastrophic outcome. [Source: The Trouble With the Genetically Modified Future]

6. Cooling something to a temperature close to absolute zero might be an existential risk. Given our ignorance we cannot rationally assign this possibility zero probability, and probably cannot even assign it less than 1% (since that is about the lowest natural error rate of humans on anything). Anybody saying it is less likely than one in a million is likely very overconfident. [Source: Cool risks outside the envelope of nature]

7. Fundamental physical operations — atomic movements, electron orbits, photon collisions, etc. — could collectively deserve significant moral weight. The total number of atoms or particles is huge: even assigning a tiny fraction of human moral consideration to them or a tiny probability of them mattering morally will create a large expected moral value. [Source: Is there suffering in fundamental physics?]

8. Suppose someone comes to me and says, “Give me five dollars, or I’ll use my magic powers from outside the Matrix to run a Turing machine that simulates and kills 3^^^^3 people.” A compactly specified wager can grow in size much faster than it grows in complexity. The utility of a Turing machine can grow much faster than its prior probability shrinks. (A sketch of this growth appears after the list.) [Source: Pascal’s Mugging: Tiny Probabilities of Vast Utilities]
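
To make the style of reasoning concrete, here is a minimal sketch of the expected-value calculation behind scenario 2. The probability and benefit figures are placeholder assumptions of mine, not the source’s estimates:

    # Expected value of one vote: a tiny probability of being decisive
    # times a very large social payoff. Both figures are assumptions.
    p_decisive = 1e-7        # assumed chance that a single vote flips the outcome
    social_benefit = 1e11    # assumed total benefit, in dollars, of the better outcome

    expected_value = p_decisive * social_benefit
    print(f"Expected social value of one vote: ${expected_value:,.0f}")
    # -> $10,000: a tiny probability times enormous stakes yields a
    # non-trivial expected value, the pattern shared by every scenario above.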
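
Scenario 3’s mechanism can be seen in a toy linear program. This is a minimal sketch assuming scipy is available; the three-variable setup is my own illustration, not the source’s:

    from scipy.optimize import linprog

    # Maximize x0 (linprog minimizes, so we negate). x1 and x2 never appear
    # in the objective: they stand for the variables nobody thought to constrain.
    c = [-1.0, 0.0, 0.0]
    result = linprog(c, bounds=[(0.0, 1.0)] * 3)

    print(result.x)
    # A simplex-style solver returns a vertex of the feasible box, so x1 and
    # x2 typically come back pinned at a bound (0.0 or 1.0) rather than at
    # some harmless interior value: the "unconstrained variables are set to
    # extreme values" effect in miniature.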
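
And the growth claim in scenario 8 can be checked for small arguments with a direct implementation of Knuth’s up-arrow notation (a minimal sketch; 3^^^^3 itself is far beyond computation):

    def up_arrow(a, n, b):
        """Knuth's up-arrow: a followed by n arrows and then b, for tiny inputs."""
        if n == 1:
            return a ** b
        if b == 0:
            return 1
        return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

    # Each extra arrow adds one character to the description but blows the
    # value up hyper-exponentially, which is the point of the mugging:
    print(up_arrow(3, 1, 3))  # 3^3  = 27
    print(up_arrow(3, 2, 3))  # 3^^3 = 3^(3^3) = 7625597484987
    # 3^^^3 = 3^^(3^^3) already dwarfs anything physically meaningful;
    # do not try to evaluate it.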

I will expand this list as I come across similar scenarios.


  • Xagor et Xavier

    If Pascal’s Wager / Pascal’s mugging arguments are a problem, I think they would be a pervasive problem no matter what situation an agent finds itself in. After all, in any situation you could imagine that you’re really in the Matrix, that the Matrix can be hacked by acting in some very particular way, and that hacking it would yield extreme utility. The argument can be adapted to justify any kind of bizarre behavior, so long as you believe that that particular behavior is the way to hack the Matrix. Yet a consistent utilitarian would expect all such implausibilities to somehow cancel out, unless utilitarianism itself is flawed.

    It seems Pascalian arguments must be dealt with as a whole class, not individually. Obviously, one can then engineer a thought experiment where the strategy that cancels out Pascalian arguments also makes one blind to a real (if very implausible) existential risk. Thus you can’t have certainty; but so it is with philosophy.

  • Xagor et Xavier

    Do you think median expected value could fix the Pascal problems? Instead of taking the mean, whose breakdown point is zero, list every possible explanation you can think of (and the value of the action if that explanation were true) and then pick the median. In practical cases, most of the algorithmically simple explanations will produce reasonable values, and then there will be a whole bunch of “what if we were in the Matrix” explanations with extreme positive or negative values. (A toy contrast is sketched at the end of this comment.)

    In essence, instead of saying that any claim’s extraordinary improbability can be canceled out as long as the implications, were the proposition true, are extreme enough, one says that some claims are just too outrageous.

    Finding more evidence for the claim would mean that more of the algorithmically simple explanations align with it, and so the value implied by the claim would also be the value implied by a whole lot of other explanations, thus increasing the chance that the median lands on that value.

    It is fairly ad hoc, however. There’s no direct argument for whether one should use the median or a trimmed mean of some sort, and if the latter, how wide the trimming interval ought to be.
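
    A minimal sketch of the contrast in Python (the hypothesis values are made up for illustration):

        import statistics

        # Toy version of the proposal: the value of one action under each
        # explanation one can think of -- five mundane explanations and one
        # "we can hack the Matrix" explanation with an astronomical payoff.
        values = [4.0, 5.0, 5.5, 6.0, 6.5, 10.0 ** 100]

        print(statistics.mean(values))    # ~1.7e99: the one wild hypothesis dominates
        print(statistics.median(values))  # 5.75: the outlier is simply outvoted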

  • seahen

    Medians have problems of their own: for example, they’d justify not buying insurance even if the insurance company were selling it at a loss. (A toy calculation below illustrates this.)
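
    A minimal sketch of that failure mode (the premium and loss figures are illustrative assumptions):

        import statistics

        # Illustrative figures: a 1% chance of a $100,000 loss, insurable for a
        # $900 premium -- less than the $1,000 expected payout, so the insurer
        # is selling at a loss.
        buy    = [-900.0] * 100              # pay the premium in every outcome
        no_buy = [-100_000.0] + [0.0] * 99   # bear the rare loss yourself

        print(statistics.mean(buy), statistics.mean(no_buy))      # -900.0 -1000.0
        print(statistics.median(buy), statistics.median(no_buy))  # -900.0 0.0
        # The mean favors buying, but the median says the typical outcome of
        # not buying is losing nothing, so it rejects even subsidized insurance.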