Newcomb’s problem, Omega and Split-brain Patients

Background: Newcomb’s problem

Objective: The problem I am trying to highlight with this post is not the difficulty of predicting another agent accurately, but rather (1) the problem of stating precisely what it is that Omega is predicting in the first place, (2) the fact that locating and isolating a discrete agent in a continuous universe, e.g. by formalizing the boundaries of the physical system in question, is nontrivial for complex agents, and (3) the question of how to think about decision making when decisions are determined not just by the agent (as arbitrarily defined by humans) but by the larger environment.

The ability to accurately predict the decision making of other agents is insufficient if it is not possible to define what is meant by “decision making” and “agent”.

Newcomb’s problem: Ignoring the problems mentioned above, one-boxing is the correct strategy, given that Omega is correct more than 50.05% of the time, since:

Two-boxing:

    if (prediction == One-box)
        return 1000000 + 1000
    else
        return 1000

Expected value: y = (1-x)*(1000000+1000) + x*1000 = 1001000 - 1000000x, where 0 < x < 1 is the probability of a correct prediction.

One-boxing:

    if (prediction == One-box)
        return 1000000
    else
        return 0

Expected value: z = x*1000000 + (1-x)*0 = 1000000x, where 0 < x < 1 is the probability of a correct prediction.

Two-boxing versus One-boxing:

y > z

1001000 - 1000000x > 1000000x

1001000 > 2000000x

1001000/2000000 > x

0.5005 > x

As long as the probability of a correct prediction is less than 50.05%, two-boxing has the larger expected value.
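As a quick check of the algebra above, here is a minimal Python sketch (my own addition, not part of the original argument) that computes both expected values for a given prediction accuracy and the break-even accuracy of 50.05%:

    # Payoffs in Newcomb's problem: box B always contains $1,000;
    # box A contains $1,000,000 iff Omega predicted one-boxing.

    def expected_value_two_boxing(x):
        # With probability x Omega correctly predicts two-boxing (box A is empty),
        # with probability 1 - x it mispredicts and box A is full.
        return (1 - x) * (1000000 + 1000) + x * 1000

    def expected_value_one_boxing(x):
        # With probability x Omega correctly predicts one-boxing and fills box A.
        return x * 1000000 + (1 - x) * 0

    break_even = 1001000 / 2000000
    print(break_even)  # 0.5005

    for x in (0.4, 0.5005, 0.51, 0.9):
        print(x, expected_value_two_boxing(x), expected_value_one_boxing(x))

Running this reproduces the result above: the two expected values tie at x = 0.5005, two-boxing dominates below that accuracy, and one-boxing dominates above it.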

Consider the following scenarios:

(S1) Omega predicts that you will end up taking both boxes because, even though at some point you did precommit to one-boxing, you change your mind and take both boxes.

(S2) Omega predicts that you will end up taking both boxes because of a stroke causing brain damage.

(S3) Omega predicts that a sudden wind gust will cause you to stumble and topple over both boxes, even though you did precommit to taking only one box.

(S4) You are one half of a split brain residing in a single body. You precommit to one-boxing, while the other personality sharing the body with you chooses two-boxing. You have no control over the movements of the body, except that you are the one who can talk.

Remarks:

If a body harboring two or more personalities with different precommitment strategies regarding Newcomb-like problems ends up taking both boxes, did all of the agents who reside in that body take two boxes, or just the one that happened to control the body during the critical moment?

It seems possible to adopt a wide range of definitions of “agency” when trying to reason about and predict the behavior of other agents. It is possible to define an agent as a global or a local physical system, that is, as a larger or smaller slice of space. In other words, when examining a system it is possible either to act on the assumption that the whole system is a coherent entity or to assign the quality of agency to arbitrary sub-procedures of the system and examine them in isolation.

For example, if Omega were to assign the quality of agency to a volume of space approximately the size of the human brain, would a precommitment to one-boxing then satisfy Omega’s condition for putting $1,000,000 into box A? Would a case of two-boxing that results from, e.g., brain damage caused by external factors then be ignored?

So what is it that Omega predicts when your actions are ultimately the local behavior of a larger physical system we call the universe?

Can you formalize the difference between what it means to take both boxes due to (1) changing your mind for subtle reasons (e.g. reading a decision theory paper), (2) changing your mind for not so subtle reasons (e.g. brain damage), (3) not being in control of the larger physical system (e.g. a sudden strong wind causes you to stumble), or (4) not controlling “your” body (e.g. multiple personality disorder)?
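To make the boundary question a bit more concrete, here is a toy sketch of my own (not something from the post or from any decision theory formalism): model each scenario by the causal factor that actually produced the two-boxing, pick an “agent boundary” as a set of factors, and ask whether that factor falls inside the boundary. All scenario labels, factor names and boundary choices below are hypothetical illustrations.

    # Toy model: whether "the agent took both boxes" depends on where the agent
    # boundary is drawn. Scenario labels and factor names are hypothetical.

    # Causal factor that produced the two-boxing in each scenario (S1-S4 above).
    scenarios = {
        "S1 changed mind":  "deliberation",      # a process inside the brain
        "S2 stroke":        "brain lesion",      # inside the brain, not deliberation
        "S3 wind gust":     "environment",       # outside the body entirely
        "S4 split brain":   "other hemisphere",  # inside the body, outside "you"
    }

    # Candidate agent boundaries, each defined by the causal factors it contains.
    boundaries = {
        "deliberating hemisphere only": {"deliberation"},
        "whole brain/body":             {"deliberation", "brain lesion", "other hemisphere"},
        "entire physical system":       {"deliberation", "brain lesion", "other hemisphere", "environment"},
    }

    # For each scenario, report under which boundaries the two-boxing counts as
    # the agent's own decision. Only the narrowest boundary singles out S1.
    for scenario, cause in scenarios.items():
        verdict = {name: cause in factors for name, factors in boundaries.items()}
        print(scenario, verdict)

The only point of the sketch is that Omega’s prediction target is underdetermined until such a boundary is fixed: each choice of boundary gives a different answer to the question above.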


  • Aris Katsaris

    To whatever extent any of this is relevant to decision theory at all, it is relevant only insofar as it affects the corresponding probability of Omega guessing correctly.

    If one already understands that Omega doesn’t need to be a “perfect” predictor, that a 99.9% or 70% or even 51% accuracy still works qualitatively in the same manner, then all this is rather beside the point. (Here are the relevant calculations for any X percentage of accuracy: http://lesswrong.com/lw/hqs/why_do_theists_undergrads_and_less_wrongers_favor/97lq )

    Omega is described as an “honest and perfect predictor” as an assumption and an attempt at *simplification* of the issue (you no longer need to calculate levels of certainty about his accuracy and trustworthiness), but this seems to pose a stumbling block for those very intelligent people who think that it’s very intelligent to find very intelligent ways to fight the hypothetical.

    But fighting the hypothetical very intelligently is like punching yourself in the face very intelligently.

    For those people, I’ll just suggest: replace “perfect predictor” with “predictor of 99.9% accuracy” if that helps you.

  • The point of the post is not the accuracy of predicting how physical systems are going to behave, but rather (1) the accuracy of the statement of the prediction (e.g. a precise definition of the boundaries of the physical system in question) and (2) the nature of the self: how to isolate a discrete agent in a continuous universe, and how to think about decision making when decisions are determined not just by the brain but by the larger environment, etc.

  • Updated the post, outlining why one-boxing is correct and what it is that I wanted to highlight using the example of split-brain patients.

  • Pingback: Alexander Kruel · Why do I do what I do?