Open Problems in Ethics and Rationality

Problems in Ethics

— Louis CK, on why his life is evil because he drives an Infiniti while people are dying (comedy).

If EDR were accepted, speculations about infinite scenarios, however unlikely and far‐fetched, would come to dominate our ethical deliberations. We might become extremely concerned with bizarre possibilities in which, for example, some kind of deity exists that will use its infinite powers to good or bad ends depending on what we do. No matter how fantastical any such scenario would be, if it is a logically coherent and imaginable possibility it should presumably be assigned a finite positive probability, and according to EDR, the smallest possibility of infinite value would smother all other considerations of mere finite values.

[...]

Suppose that I know that a certain course of action, though much less desirable in every other respect than an available alternative, offers a one‐in‐a‐million chance of avoiding catastrophe involving x people, where x is finite. Whatever else is at stake, this possibility will overwhelm my calculations so long as x is large enough. Even in the finite case, therefore, we might fear that speculations about low‐probability‐high‐stakes scenarios will come to dominate our moral decision making if we follow aggregative consequentialism.

The Infinitarian Challenge to Aggregative Ethics
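
To see how quickly this swamping happens, here is a back-of-the-envelope sketch in Python (the probabilities and payoffs are my own illustrative numbers, not Bostrom's):

```python
# Toy illustration: under naive expected-value reasoning, a tiny
# probability of a sufficiently large payoff swamps any ordinary
# consideration, and no finite payoff can compete with a nonzero
# probability of an infinite one.

p = 1e-6                    # one-in-a-million chance of averting catastrophe
ordinary_benefit = 1_000.0  # value of the otherwise more desirable alternative

for x in (10**3, 10**10, 10**15):  # number of people at stake
    expected = p * x
    verdict = "dominates" if expected > ordinary_benefit else "is dominated by"
    print(f"x = 10^{len(str(x)) - 1}: expected value {expected:,.3f} "
          f"{verdict} the ordinary benefit {ordinary_benefit:,.0f}")

# With infinite value at stake, any nonzero probability settles the matter:
print(1e-100 * float("inf"))  # -> inf, smothering every finite value
```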

In Derek Parfit’s original formulation the Repugnant Conclusion is characterized as follows: “For any possible population of at least ten billion people, all with a very high quality of life, there must be some much larger imaginable population whose existence, if other things are equal, would be better even though its members have lives that are barely worth living” (Parfit 1984). The Repugnant Conclusion highlights a problem in an area of ethics which has become known as population ethics. The last three decades have witnessed an increasing philosophical interest in questions such as “Is it possible to make the world a better place by creating additional happy people?” and “Is there a moral obligation to have children?” The main problem has been to find an adequate theory about the moral value of states of affairs where the number of people, the quality of their lives, and their identities may vary. Since, arguably, any reasonable moral theory has to take these aspects of possible states of affairs into account when determining the normative status of actions, the study of population ethics is of general import for moral theory. As the name indicates, Parfit finds the Repugnant Conclusion unacceptable and many philosophers agree. However, it has been surprisingly difficult to find a theory that avoids the Repugnant Conclusion without implying other equally counterintuitive conclusions. Thus, the question as to how the Repugnant Conclusion should be dealt with and, more generally, what it shows about the nature of ethics has turned the conclusion into one of the cardinal challenges of modern ethics.

The Repugnant Conclusion (Wikipedia: Mere addition paradox)
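
The arithmetic driving the Repugnant Conclusion is plain total utilitarianism. A minimal sketch, with welfare measured in made-up integer units of my own choosing (0 marks a life not worth living):

```python
# Total welfare favors a vast population of lives barely worth living
# over a smaller, very happy one: the Repugnant Conclusion.

pop_a = 10**10      # ten billion people
welfare_a = 100     # very high quality of life (arbitrary units)
welfare_z = 1       # a life barely worth living

total_a = pop_a * welfare_a          # 10^12 units of total welfare
pop_z = total_a // welfare_z + 1     # smallest Z-population that beats A

print(f"total welfare of A: {total_a:,}")
print(f"Z beats A with {pop_z:,} people at welfare {welfare_z}")
assert pop_z * welfare_z > total_a   # Z is "better" by total welfare
```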

The utility monster is a thought experiment in the study of ethics. It was created by philosopher Robert Nozick in 1974 as a criticism of utilitarianism.

In the thought experiment, a hypothetical being is proposed who receives as much or more utility from each additional unit of a resource he consumes as from the first unit. In other words, the utility monster is not subject to diminishing marginal returns with regard to utility, but instead experiences constant, or even increasing, marginal returns.

Since ordinary people receive less utility with each additional unit consumed, if the utility monster existed, it would justify the mistreatment and perhaps annihilation of everyone else, according to the doctrine of utilitarianism.

Utility monster (see e.g. here)
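
The mechanics are easy to make concrete. A minimal sketch, assuming (my own toy choices, not Nozick's formalism) logarithmic utility for ordinary people and constant marginal utility for the monster:

```python
import math

# Give each unit of a resource to whoever gains most from it. Ordinary
# agents have diminishing marginal utility, the monster does not, so a
# total-utility maximizer hands the monster everything.

def ordinary_u(c):
    return math.log1p(c)      # diminishing marginal utility

def monster_u(c):
    return 2.0 * c            # constant marginal utility of 2

resources = 100
ordinary_c = monster_c = 0
for _ in range(resources):
    gain_ordinary = ordinary_u(ordinary_c + 1) - ordinary_u(ordinary_c)
    gain_monster = monster_u(monster_c + 1) - monster_u(monster_c)
    if gain_monster >= gain_ordinary:
        monster_c += 1
    else:
        ordinary_c += 1

print(f"the monster gets {monster_c}/{resources} units")  # -> 100/100
```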

The nonidentity problem probes some of our most intuitive beliefs regarding the moral status of acts whose effects are restricted to persons who, at the time the act is performed, do not yet but will exist. As we try to articulate just when, and why, some such future-directed acts are wrong, we find ourselves forced to think carefully about the structure of moral law: is it “person-affecting” in nature or is it “impersonal” in nature? Can, in other words, an act that affects no person who does or ever will exist for the worse be wrong? Or is the wrongness of any particular act dependent (at least in part) on something beyond what that act does, or can be expected to do, to any such person?

The Nonidentity Problem

Problems in Rationality

The ‘expected value’ of the game is the sum of the expected payoffs of all the consequences. Since the expected payoff of each possible consequence is $1, and there are an infinite number of them, this sum is an infinite number of dollars. A rational gambler would enter a game iff the price of entry was less than the expected value. In the St. Petersburg game, any finite price of entry is smaller than the expected value of the game. Thus, the rational gambler would play no matter how large the finite entry price was. But it seems obvious that some prices are too high for a rational agent to pay to play. Many commentators agree with Hacking’s (1980) estimation that “few of us would pay even $25 to enter such a game.” If this is correct—and if most of us are rational—then something has gone wrong with the standard decision-theory calculations of expected value above. This problem, discovered by the eighteenth-century Swiss mathematician Daniel Bernoulli, is the St. Petersburg paradox. It’s called that because it was first published by Bernoulli in the St. Petersburg Academy Proceedings (1738; English trans. 1954).

The St. Petersburg Paradox
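
The divergence is easy to verify numerically. A quick sketch, assuming the usual convention that the pot starts at $2 and doubles with every tail before the first head:

```python
import random

# If the first head appears on toss k (probability 1/2^k), the payoff
# is 2^k dollars, so each term of the expected-value sum contributes
# (1/2^k) * 2^k = $1 and the series diverges.

def partial_expected_value(n_terms):
    return sum((0.5 ** k) * (2 ** k) for k in range(1, n_terms + 1))

print(partial_expected_value(100))  # -> 100.0, growing without bound

def play_once(rng):
    k = 1
    while rng.random() < 0.5:       # tails: keep flipping
        k += 1
    return 2 ** k

rng = random.Random(0)
samples = [play_once(rng) for _ in range(100_000)]
print(sum(samples) / len(samples))  # modest sample mean, despite infinite EV
```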

The most common formalizations of Occam’s Razor, Solomonoff induction and Minimum Description Length, measure the program size of a computation used in a hypothesis, but don’t measure the running time or space requirements of the computation.  What if this makes a mind vulnerable to finite forms of Pascal’s Wager? A compactly specified wager can grow in size much faster than it grows in complexity.  The utility of a Turing machine can grow much faster than its prior probability shrinks.

Pascal’s Mugging: Tiny Probabilities of Vast Utilities

For a more concise analysis of the problem see this PDF by Nick Bostrom.
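
The asymmetry Yudkowsky points at can be sketched numerically. The "description length" below is a made-up stand-in of my own for a real Solomonoff prior, but it shows the shape of the problem: the description barely grows while the utility it names tetrates.

```python
# Penalize the hypothesis "you save a tower of n twos worth of lives"
# by 2^-(description length). The description grows by one digit at a
# time; the named utility explodes.

def tower_of_twos(n):
    value = 1
    for _ in range(n):
        value = 2 ** value
    return value

for n in range(1, 6):
    description_length = 10 + len(str(n))  # ~constant program plus digits of n
    utility = tower_of_twos(n)
    print(f"n={n}: prior ~ 2^-{description_length}, "
          f"named utility has {len(str(utility))} digits")

# n=5 already names 2^65536, a 19,729-digit number, while the prior
# penalty is still only ~2^-11: the product prior * utility explodes.
```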

So right now you’ve got an 80% probability of living 10^^10 years.  But if you give me a penny, I’ll tetrate that sucker!  That’s right – your lifespan will go to 10^^(10^^10) years!  That’s an exponential tower (10^^10) tens high!  You could write that as 10^^^3, by the way, if you’re interested.  Oh, and I’m afraid I’ll have to multiply your survival probability by 99.99999999%.

What?  What do you mean, no?  The benefit here is vastly larger than the mere 10^^(2,302,360,800) years you bought previously, and you merely have to send your probability to 79.999999992% instead of 10^-1000 to purchase it!  Well, that and the penny, of course.  If you turn down this offer, what does it say about that whole road you went down before?  Think of how silly you’d look in retrospect!  Come now, pettiness aside, this is the real world, wouldn’t you rather have a 79.999999992% probability of living 10^^(10^^10) years than an 80% probability of living 10^^10 years?  Those arrows suppress a lot of detail, as the saying goes!  If you can’t have Significantly More Fun with tetration, how can you possibly hope to have fun at all?

Hm?  Why yes, that’s right, I am going to offer to tetrate the lifespan and fraction the probability yet again… I was thinking of taking you down to a survival probability of 1/(10^^^20), or something like that… oh, don’t make that face at me, if you want to refuse the whole garden path you’ve got to refuse some particular step along the way.

Wait!  Come back!  I have even faster-growing functions to show you!  And I’ll take even smaller slices off the probability each time!  Come back!

The Lifespan Dilemma
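
The structure of the trap can be sketched numerically. The decay and growth factors below are my own toy numbers (the dialogue's actual offers are far steeper), but they show how every step can look good while the path as a whole is ruinous:

```python
# Each step multiplies survival probability by a factor just below 1
# and the payoff by much more than enough to compensate, so naive
# expected utility endorses every individual step.

p0 = 0.80             # initial survival probability
decay = 0.9999999999  # probability factor per step
growth = 10.0         # payoff factor per step (stand-in; the real offers tetrate)

assert decay * growth > 1.0  # any single step is a strict EU improvement

for steps in (1, 10**6, 10**9, 10**11):
    p = p0 * decay ** steps
    print(f"after {steps:>15,} steps: survival probability {p:.6f}")

# Yet survival drops below 1% after roughly 4.4e10 steps, even though
# no individual step ever looked bad.
```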

Conclusion

I haven’t come across any good reasons to believe that the aforementioned problems do not constitute a reductio ad absurdum of rationality and ethics. There are many other problems, some of which I haven’t yet read up on, and probably many more that I don’t know about. But the above problems are special insofar as the methods leading up to them may in principle be “correct” and nonetheless lead to seemingly absurd or undesirable consequences.

I am not referring to the weirdness of the conclusions but to the foreseeable scope of the consequences of being wrong about them. I have a very bad feeling about letting the implied scope of certain conclusions outweigh their low probability. I feel we should put more weight on the consequences of our conclusions being wrong than on the consequences of their being right.

I can’t justify this, but an example would be quantum suicide: I wouldn’t commit quantum suicide even given high confidence that the many-worlds interpretation of quantum mechanics is true. Logical implications just don’t seem to be enough in some cases.

To be clear, extrapolations work and are often the best we can do. But since there are problems such as the above, which we perceive to be undesirable and which lead to absurd consequences, I think it is reasonable to ask for some upper and lower bounds on the use and scope of certain heuristics.

We are not going to stop pursuing whatever terminal goal we have chosen just because someone promises us even more utility if we do what that person wants. We are not going to stop loving our girlfriend just because there are other people who do not approve of our relationship and who together would experience more happiness if we split up than we two experience by being in love. In that sense we have already informally established some upper and lower bounds.

Maybe I am simply biased and have been unable to overcome it yet. But my best guess right now is that it is always ethically indifferent what we do, and that we simply have to draw a lot of arbitrary lines and arbitrarily refuse some steps. I have read about people who went batshit crazy from taking ideas in ethics and rationality too seriously. That way madness lies, and I am not willing to choose that path yet.

Taking considerations of vast utility or low probability into account quickly leads to chaos-theoretic considerations like the butterfly effect. As a computationally bounded and psychologically unstable agent, I am unable to cope with that. Consequently I see no other way than to treat such extreme uncertainty as morally impossible and neglect it.

Until the above problems are resolved, or sufficiently established, I will continue to put vastly more weight on empirical evidence and my intuition than on logical implications, if only because I still lack the necessary educational background to trust my comprehension and judgement of the various underlying concepts and methods used to arrive at those implications.

Further Reading

GiveWell, the SIAI and risks from AI

Objections to Coherent Extrapolated Volition

Moral Impossibility in the Petersburg Paradox: A Literature Survey and Experimental Evidence

Constraints and Animals

The Terrible, Horrible, No Good, Very Bad Truth about Morality and What to Do About it

The Paradoxes of Future Generations and Normative Theory

Future generations: A challenge for moral theory

The person-affecting restriction, comparativism, and the moral status of potential people
