August 2011

In this post I try to work out an informal definition of Self, the “essential qualities that constitute a person’s uniqueness”. I assume that the most important requirement for a definition of self is time-consistency: a reliable definition of identity needs to allow for time-consistent self-reference, since any agent that is unable to identify itself over time will be prone to making inconsistent decisions.

Data Loss

Obviously most humans don’t want to die, but what does that mean? What is it that humans try to preserve when they sign up for cryonics? It seems that an explanation must account for, and allow, some sort of data loss.

The Continuity of Consciousness

It can’t be about the continuity of consciousness, or we would have to refuse general anesthesia due to the risk of “dying”. Most of us will agree that there is something more important than the continuity of consciousness, something that makes us accept general anesthesia when necessary.

Computation

If the continuity of consciousness isn’t the most important detail about the self, then it very likely isn’t the continuity of computation either. Imagine that, for some reason, the process that runs when “we” act on our inputs under the control of an algorithm halts for a second and then continues otherwise unaffected. Would we conclude that we died when the computation halted, and that whoever is alive afterwards is someone else? This doesn’t seem to be the case.

Static Algorithmic Descriptions

Although we are not partly software and partly hardware, we could, in theory, come up with an algorithmic description of the human machine, of our selves. Might it be that algorithm that we care about? If we were to digitize our selves, we would end up with a description of our spatial parts, of our self at a certain time. Yet we forget that all of us already possess such an algorithmic description of our selves, and we’re already able to back it up. It is our DNA.

Temporal Parts

Admittedly, our DNA is the earliest version of our selves. But if we don’t care about the temporal parts of our selves, only about a static algorithmic description of a certain spatiotemporal position, then what’s wrong with that? A lot, it seems: we stop caring about past reifications of our selves; at some point our backups become obsolete, and having to fall back on them would equal death. But what is it that we lost? What information is it that we value more than all of the previously mentioned possibilities? One might think that it must be our memories, the data that represents what we have learnt and experienced. But even if this is the case, would it be a reasonable choice?

Identity and Memory

Let’s just disregard the possibility that we often might not value our future selves, and do not value our past selves either, because we lost or gained important information in between, e.g. if we became religious or have been able to overcome religion.

If we had perfect memory and only ever improved upon our past knowledge and experiences, we wouldn’t be able to keep that up for very long, at least not given our human body. The upper limit on the information that can be contained within a human body is 2.5072178×10^38 megabytes, if it were used as perfect data storage. Given that we gather much more than 1 megabyte of information per year, it is foreseeable that, if we equate our memories with our self, we’ll die long before the heat death of the universe. We might overcome this by growing in size, by achieving a posthuman form; yet if we in turn also become much smarter, we’ll also produce and gather more information. We are not alone either, and resources are limited. One way or the other, we’ll die rather quickly.
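As a sanity check on that figure, here is a minimal sketch, assuming the number comes from the Bekenstein bound applied to a human-sized system; the mass and radius used below are my own assumptions, not values given in the post:

```python
# Rough reproduction of the ~2.5e38 MB figure above, assuming it is the
# Bekenstein bound I <= 2*pi*R*E / (hbar*c*ln 2) for a human-sized system.
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 299792458.0         # speed of light, m/s

def bekenstein_bound_bits(radius_m: float, mass_kg: float) -> float:
    """Upper bound on the information (in bits) a sphere of the given
    radius and mass can contain."""
    energy = mass_kg * C**2  # rest-mass energy, J
    return 2 * math.pi * radius_m * energy / (HBAR * C * math.log(2))

# Assumed human-sized system: ~78 kg enclosed in a sphere of radius 1 m.
bits = bekenstein_bound_bits(radius_m=1.0, mass_kg=78.0)
megabytes = bits / 8 / 1e6
print(f"{megabytes:.3e} MB")  # on the order of 10^38 MB, as stated above
```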

Does this mean we shouldn’t even bother about the far future? Or is there maybe something else that we value even more than our memories? After all, we don’t really mind much if we forget what we did a few years ago.

Time-Consistency and Self-Reference

It seems that there is something even more important than our causal history. I think that, more than anything, we care about our values and goals. Indeed, we value the preservation of our values. As long as we want the same, we are the same. Our goal system seems to be the critical part of our implicit definition of self, that which we want to protect and preserve. Our values and goals seem to be the missing temporal parts that allow us to consistently refer to ourselves, to identify our selves at different spatiotemporal positions.

Using our values and goals as identifiers also resolves the problem of how we should treat copies of our self that feature divergent histories and memories, copies with different causal histories. Any agent that features a copy of our utility function ought to be incorporated into our decisions as an instance, as a reification, of our selves. We should identify with our utility function regardless of its instantiation.
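To make this identification rule concrete, here is a minimal sketch in which two agents count as the same self exactly when their utility functions agree, regardless of memories or causal history; the `Agent` class and its fields are hypothetical, purely for illustration:

```python
# Identity tracks values and goals, not stored experiences.
from dataclasses import dataclass, field

@dataclass
class Agent:
    utility_function: str          # stand-in for a full goal specification
    memories: list = field(default_factory=list)

    def same_self(self, other: "Agent") -> bool:
        # Two agents are instances of one self iff their goals coincide,
        # however different their histories may be.
        return self.utility_function == other.utility_function

original = Agent("maximize well-being", memories=["childhood", "2011"])
backup = Agent("maximize well-being", memories=[])  # divergent history

assert original.same_self(backup)  # same utility function -> same self
```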

Stable Utility-Functions

To recapitulate: we can value our memories, the continuity of experience and even our DNA, but the only reliable marker for the self-identity of goal-oriented agents seems to be a stable utility function. Rational agents with an identical utility function will, to some extent, converge to exhibit similar behavior and are therefore able to cooperate. We can more consistently identify with our values and goals than with our past and future memories, digitized backups or causal history.

But even if this is true, there is one problem: humans might not exhibit goal stability.


Morality is an objective property of a system that consists of a person who utters moral statements and the specific entity in, or feature of, the world that the statement identifies or denotes. Yet morality can be explained in terms of lower-level interactions. This is no contradiction: systems can have properties that their parts alone do not.

Ethical statements

Let’s take a look at two ethical statements:

  1. It is morally wrong for Alice to lie to Bob.
  2. It is morally wrong for Bob to strangle Alice.

What do people really mean when they utter those statements? Let’s try to pin down the underlying reasons and motivations of the first statement by paraphrasing it:

1: Due to my genetically hard-coded intuitions about appropriate behavior within groups of primates, my upbringing, cultural influences, rational knowledge about the virtues of truth-telling and preferences involving the well-being of other people, I feel obliged to influence the intercourse between Alice and Bob in a way that persuades Alice to do what I want, without her feeling inappropriately influenced by me, by signaling my objection to certain behaviors as an appeal to the order of higher authority.

But what is meant by an appeal to the order of “higher authority”? To make this clearer, let’s now take a look at a chat between a hypothetical Bob and myself:

Alexander: I don’t want you to strangle Alice.

Bob: I don’t care what you want!

Alexander: Strangling Alice might have detrimental effects on your other preferences.

Bob: So? I don’t care, I assign infinite utility to world-states where Alice is dead!

Alexander: But it is morally wrong to strangle Alice.

Bob: Hmm…I think you are right, I don’t want to be immoral!

What happened here? I have been trying to convince Bob not to kill Alice. In other words, I tried to get Bob to do what I want. I used three different methods:

  1. Accounting for third-party preferences.
  2. Weighing one preference against all other preferences.
  3. Evoking guilt.

Explanatory remarks to methods 1-3:

1: Primates don’t like to be readily controlled by other primates. To get them to do what you want you have to make them believe that they actually want to do it themselves.

2: Humans who are in a temporary rage often discount all long-term consequences of their decisions. To be persuasive it might take some subtle, non-obvious incentive.

3: Using moral language is really a form of coercive persuasion. When I say, “It is morally wrong to strangle Alice”, I actually signal, “If you strangle Alice you will feel guilty.” It is a manipulative method that subtly influences Bob to say, “You are right, I don’t want to be immoral!”, when what he actually means is, “I don’t want to feel guilty!”

Method #3 works by making use of various cultural and other connotations carried by the label “morally wrong”, primarily by evoking negative emotions and the prospect of a loss of social reputation. The difference from methods #1 and #2 is that #3 derives its authority from a complex (obscure) interrelationship of evolutionary, emotional, environmental and cultural factors. While method #1 asks Bob to be altruistic and #2 asks him to be selfish, method #3 posits a fuzzy imperative.

Further reading

‘Moral Ontology’ by Richard Carrier

Pluralistic Moral Reductionism

Trivers on Self-Deception

Ego syntonic thoughts and values

The limits of introspection

Homo Hypocritus Signals
