The page is preserved for historical and game-theoretic reasons, and because the author likely put a shit-ton of work into it.
Last modified: 2012-03-08 (finished). Epistemic state: emotional.
Warning: this is a crazy post. I’m not sugarcoating the insanity here. You might skip this one.
I wanted to make a certain point and develop a way out of the problem, but progress is leading me in a different direction right now. This post is already 2 weeks old, and the longer I wait, the less it applies to my current situation, so I’m putting it out now. I might at least reference it later, going “look how crazy some of this shit made me!”.
Every couple of years I have something new to freak out over. Back in 2002, it was love. 2004, truth. 2006, beauty, God. 2008, freedom… from samsara. (Ok, now I’m really just shoehorning Moulin Rouge! references into this paragraph.) 2010, consciousness. 2012, it seems, will be time and causality.
In all the previous problems, I seem to have made actual progress once I recognized and admitted to myself what the underlying implication or intention behind asking the question was. As long as I was in denial about my motives, I couldn’t get anywhere. So let’s try it again.
Instead of an explanation, a little play:
I better stop there. That’s only a small fragment of the whole mess. I didn’t even mention uncertainty about meta-ethics, utility calculations (‘cause as XiXiDu has correctly observed, if utilitarianism is right, we never ever get to relax, and have to fully embrace the worst consequences of Pascal’s Mugging), how it removes “instances” as meaningful concepts so that “I will clone you and torture the clone” stops being a threat, but “I will make my calculations dependent on your decision” suddenly is, or how all of this fits so perfectly together, you’d think it’s all actually true.
What I want to talk about is this: it’s completely eating me alive. This is totally basilisk territory. You don’t get to ever die (this really bums me out because I don’t like being alive), you have to deal with everything at once right now (no FAI to save you, not even future-you), any mistake causes massive harm (good luck being perfect) and really, normalcy is impossible. How can you worry about bloody coffee or sex if all of existence is at stake because algorithmic dependencies entangle you with so vast a computational space? You have to deal with not just Yahweh, but all possible gods, and you are watching cat videos? Are you completely insane?!
This is not just unhealthy. This is “I’m having a mental breakdown, someone give me the anti-psychotics please”. I’ve tried this belief propagation thing. As a result, I don’t believe in time, selves, causality, simplicity, physics, plans, goals, ethics or anything really anymore. I have absolutely no ground to stand on, nothing I can comfortably just believe, no idea how to make any decision at all. I can’t even make total skepticism work because skepticism itself is an artifact of inference algorithms and moral luck just pisses on your uncertainty.
I hate this whole rationality thing. If you actually take the basic assumptions of rationality seriously (as in Bayesian inference, complexity theory, algorithmic views of minds), you end up with an utterly insane universe full of mind-controlling superintelligences and impossible moral luck, and not a nice “let’s build an AI so we can fuck catgirls all day” universe. The worst that can happen is not the extinction of humanity or something that mundane - instead, you might piss off a whole pantheon of jealous gods and have to deal with them forever, or you might notice that this has already happened and you are already being computationally pwned, or that any bad state you can imagine exists. Modal fucking realism.
The only thing worth doing in modal realism is finding some way to stop caring about the rest of the multiverse. Discount by complexity, measure, psychological distance, whatever, as long as you discount enough to make infinity palatable. It won’t work and you know it, but what else can you do? Take it seriously?
Have people ever considered the implications of straightforward analytical philosophy? You have no self and there is no time. All person-moments of all persons are as much future-you as what you think is future-you. Normal consequences don’t matter because this is a Big World and everything exists infinitely often. The Universe Does Not Forget. Prevention? Totally impossible. Everything that can happen is happening. Any reference to something that is not literally impossible is actually resolved. This is not just the minor disappointment we felt when we realized Earth wasn’t the center of the universe. This time, the universe isn’t the center of the universe, if you catch my drift. Instead of changing the world, you are reduced to decision theory, intentions and dependencies, forced to interact with everything that it is possible to interact with. Life, death, a body, a will, a physical world - all delusions. This is like unlearning object permanence!
I think the bloody continentals were right all along. Analytical philosophy is fundamentally insane. When I was still sitting in classical archeology classes, I could at least fantasize about how I would maybe someday get over my awkwardness and at least get a cat, if not a relationship, but now I can’t even make pasta without worrying that any inconsistency in my decision making opens me up for exploitation by acausal superintelligences. I thought I was nervous when I had to enter a public laundry room in my dorm (and had a panic attack almost every week)? Try not ever dying and knowing that whatever decision you make now will determine all of existence because you are only this decision algorithm right now and nothing ever helps because algorithms don’t change.
You might try the “I am the instantiation of an algorithm” sleight-of-hand, but that’s really problematic. Do you also believe God has given you information about the Absolute Encoding Scheme? (If yes, want some of my anti-psychotics?) How can you know what spatial arrangement of particles “encodes” what particular algorithm? This is an unsolvable problem.
But worse than that, even if you could do it, I don’t think you actually grasp the implications of such a view. Here’s Susan Blackmore, giving an eloquent description of how the position is typically envisioned:
This “me” that seems so real and important right now, will very soon dissipate and be gone forever, along with all its hopes, fears, joys and troubles. Yet the words, actions and decisions taken by this fleeting self will affect a multitude of future selves, making them more or less insightful, moral and effective in what they do, as well as more or less happy.
“Very soon”? Try Planck time. Blackmore is still acting as if this were Memento, where person-moments last seconds, maybe even minutes, as if any feature of consciousness at all would survive the time scale the universe actually runs on. This is not the case. Even the barest of sensations takes milliseconds to unfold. Planck time is about 10^40 times shorter than that.
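A quick back-of-the-envelope check of that ratio (5.39e-44 s is the standard value for the Planck time; one millisecond stands in for “the barest sensation”):

```python
import math

# How many Planck times fit into one millisecond?
planck_time_s = 5.39e-44   # standard value of the Planck time, in seconds
sensation_s = 1e-3         # "milliseconds" for the barest sensation

ratio = sensation_s / planck_time_s
print(f"one millisecond ~ 10^{math.log10(ratio):.1f} Planck times")
```

So the gap between phenomenal time and fundamental time really is around forty orders of magnitude.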
Besides, taking the person-moment view completely screws over your sense of subjective anticipation and continuation. Or rather, there is no continuation. There is no future-you. Morally, all future instances of all people are in the same reference class. (Unless you want to endorse extreme anti-universalism. Not that I’d mind, but it’s not very popular these days.) See how evil your own actions are, shamelessly favoring a very narrow class of people? I honestly don’t know if I should be more troubled by the insanity of this view, or the implied sociopathy of virtually all actions once you take it seriously.
Breathe. Take an Outside View.
Will Newsome once remarked:
The prefrontal cortex is exploiting executive oversight to rent-seek in the neural Darwinian economy, which results in egodystonic wireheading behaviors and self-defeating use of genetic, memetic, and behavioral selection pressure (a scarce resource), especially at higher levels of abstraction/organization where there is more room for bureaucratic shuffling and vague promises of “meta-optimization”, where the selection pressure actually goes towards the cortical substructural equivalent of hookers and blow.
Exactly. Once you begin taking this whole “analytical thought” thing seriously, it will try to hog as many resources as it can, trying to convert everything into analytical problems. And you can’t get more analytical than “literally everything is algorithms”. Result: massive panic attacks, nothing ever gets done, everything needs to be analyzed to death. (Case in fucking point: the whole akrasia mess on LW.) I can’t even watch a movie without immediately thinking about what game-theoretic considerations the characters must be making, who is exploiting who, why acting this way will support a monstrosity of hostile memeplexes and screw over whole populations you monster, oh for fuck’s sake, you haven’t non-ironically enjoyed a movie for years, so shut up already.
But what else can I do? Reject the only worldview that actually makes internal sense?
Consider an alternative. A simple model, one that doesn’t actually explain much; it doesn’t want to. It’s a strength, it claims. It goes like this:
And with this, muflax felt enlightened.
For a moment, that is.
Because when you doubt your thought processes because you suspect they are emotionally exploiting you… and you reach a conclusion based on an enlightened state of mind you feel when thinking this conclusion… well, then you ain’t paying much attention.
Philosophy overdose? I recommend the traditional cure: hemlock.
I tend not to believe in any version of Tegmark IV for exactly the reason you give here: I don't know how you could ever get probabilities out of it. So it appears to fail as an explanation. I'll have to re-examine this view if you have a simple way of expressing the claim formally, since I tend to believe that something like Many-Worlds will turn into an explanation someday (based on its apparent simplicity and my belief that a finite version would give us probabilities of some kind as soon as we clarified the question). But barring that, I don't think either of us knows what you're talking about.
You say that it "fits so perfectly together," but what does that mean neurologically and why should we care? In the real world we have no reason to expect reliability from any part of our thought save that which controls expectation, and really just that which could affect behavior. The rest could be complete garbage for all it matters to evolution. (The fact that we have any capacity for logic at all suggests that reality is made of math in a sense, but you must know that getting from there to the OP requires some big leaps.) You've started with a kind of reasoning that looks like it might work, but as far as I can tell the direction has no reason to work at all.
But if you really think you have to deal with this, then go ahead and do it! Create from yourself a virtue ethicist who doesn't care about outcomes, and who can draw a bright line around the 'future' versions of itself who continue to pursue its Work. As it is conditionally written: "seek me only! Then the joys of my love will redeem ye from all pain."
What kind of probabilities are you looking for? The Universal Prior, i.e. some complexity prior over all computable algorithms, should work reasonably well in a Tegmarkian world. Sure, it's not tractable, but that's part of the problem. Just another form of moral luck. (Also, Chaitin's constant.)
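As a toy illustration of what a complexity prior does (the program lengths below are invented for the example; a real universal prior sums over all programs of a universal machine and is uncomputable):

```python
# Toy complexity prior: weight each hypothesis by 2^(-program length)
# and normalize. Lengths are hypothetical, in bits.
lengths = {"A": 10, "B": 12, "C": 20}
weights = {h: 2.0 ** -k for h, k in lengths.items()}
total = sum(weights.values())
prior = {h: w / total for h, w in weights.items()}
# "A", being 2 bits shorter than "B", gets exactly 4x its probability;
# "C", 10 bits longer, gets 1024x less than "A".
```

That's the whole trick: shorter programs dominate, which is where the "simplicity" in the Tegmarkian picture comes from.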
Doesn't strike me as harder than probability in any infinite world. (A lack of straightforward probability-as-frequency is a reason I don't like infinite ontologies. Hard to escape them, though.)
You’ve started with a kind of reasoning that looks like it might work, but as far as I can tell the direction has no reason to work at all.
Yeah, skipped many aspects. This ain't an explanation, it's an emotional coredump.
Trying to untangle exactly what assumptions are going into this was the planned second part of the post, i.e. figuring out how to handle the implied invisible (or at least what axioms are on the table).
The basic argument might go like this:
What makes you think anything except what you directly experience exists? Because your best model of the world implies it. So we might add the rule that anything a model implies, exists. (If we accept the model.)
Is it necessary for implied existence to be causally linked to you? No. See MWI - what distinguishes a distant world from our world? Nothing, both are equally real. Everything that is implied is real. Keeps the rule nice and simple.
So we have modal realism, with "possible" determined by our model of the world. The only difference between "actual" and "possible" is a pointer.
What makes a model good? It gives the highest compression of the data you have, i.e. a Kolmogorov prior (or something close to it).
What program is the best compression of the world? The Universal Dovetailer.
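The dovetailer itself is just a scheduling trick: interleave countably many programs so that every one of them gets unbounded run time. A minimal sketch (each “program” here is a Python generator standing in for the n-th program of a universal machine):

```python
import itertools

def program(n):
    # Stand-in for the n-th program of a universal machine: counts forever.
    i = 0
    while True:
        yield (n, i)
        i += 1

def dovetail(max_steps):
    # Each round adds one new program, then steps every active program once.
    active = []
    trace = []
    for _ in itertools.count():
        active.append(program(len(active)))
        for p in active:
            trace.append(next(p))
            if len(trace) >= max_steps:
                return trace

# dovetail(6) interleaves programs 0, 1, 2:
# [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0)]
```

Every program index shows up infinitely often in the trace, which is all the Universal Dovetailer needs: run everything, forever, in finite chunks.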
Once you take this view (and I'm definitely not saying it's correct), then everything fits so neatly. Compression, information theory, cryptography, decision theory, Bayes, cosmology, anthropics, ... . It all becomes the Big Fat Snarl of Modal Realism That Eats Your Brain.
If you reject it, then at what step? A good point would be the Kolmogorov(-ish) prior. Compression isn't good enough, it also needs to find us. It's not enough that we're "somewhere in the data"; it needs to produce the actual pointer as well.
Then you have no Occam's Razor, no MWI, really complex physics, and just generally induction becomes a whole lot harder. Simplicity goes out the fucking window.
That might be the way to go, but it ain't pretty either.
Or you make the evolutionary-skeptical argument that the Alternative in my post points out - why trust the reasoning process at all? It might exploit you, or be biased, or just plain nuts. Try adopting total skepticism of your thoughts, tell me how far you get. (Been there, done that.)
(Also, I'm not arguing P(modal realism | evolution), but P(modal realism | mind), though P(evolution | modal realism ∧ mind) is relevant as well.)
Create from yourself a virtue ethicist who doesn’t care about outcomes, and who can draw a bright line around the ‘future’ versions of itself who continue to pursue its Work.
Going into the direction of non-consequentialist morality in an attempt to enforce locality is definitely one way I'm going with this. I favor deontology right now, but that's all still fairly open. I (weakly) suspect that all non-consequentialist attempts converge anyway.
Have you thought through what basing an ontology on ADT-like reasoning would lead to? From the UDT/ADT perspective it seems like we can better understand the rules for acausal interaction, i.e. how "simulations" affect "measure". The vague "exists or doesn't exist" stuff gets a more analyzable framework and we can see specifically where naive algorithmic-probabilistic epistemology breaks down as a special case of decision theory; or at least, once we accept modal realism then UDT/ADT gives us a way of analyzing how that affects us that is more general than assuming a universal prior and hoping that that makes sense. It unifies the Kantian-phenomenological perspective with the Leibnizian-metaphysical one in a way that isn't quite well pinned down yet but at least suggests ways of clarifying the mess. (I think this also leads the way to understanding God as Chaitin's omega but that speculation is even less well pinned down. But I like the "without God there is no hypercomputation" aspect.)
Some of my wanting to re-frame the modal realist stuff is because I perceive much wisdom in Mitchell Porter and Michael Vassar's semi-Kantian emphasis on what we actually experience; on the other hand, what I actually experience points toward a lot of weird agentic invisible stuff just behind the veil.
Hot damn I sound like a crank sometimes. Oh well, it's fun.
What kind of probabilities are you looking for?
Practical ones, actual probabilities (or reliable qualitative estimates) that we could use to test the theory. (I'd accept a retroactive prediction at this point.) You're ignoring the part that makes it a theory. Without that, we have no reason to trust it.
You just said you have "no idea how to make any decision at all." Apparently by accepting this computationalist-Tegmark abstraction that you (with a brain that evolved to get status in a prehistoric tribe) thought might clarify a philosophical issue, you've destroyed the part of epistemology that actually helps. What seems like the obvious solution?
Agreed about the complete impracticality of it all, but rejecting it because of that gets you into a lot of trouble itself.
(I'm not sure if we even disagree. I'll try to defend the modal realist view anyway (despite being very skeptical of it) because I think it may have a point, but I might just be defending it for contrarian reasons.)
Say we actually knew the world were like what I described. Assume it actually were extremely hard, if not impossible to make correct decisions. Would you expect an agent that figured it out to produce practical results?
I'm very sympathetic to the view that any belief that doesn't make usable predictions should be ignored. But if you take it seriously, you lose theoretical elegance, much of metaphysics, metaethics changes dramatically, etc. - and you bias yourself strongly towards finitism, small worlds and so on.
And you still have to figure out a way to deal with the implied invisible. No known model is deterministic from our perspective. What about the outcomes that we didn't observe, but still predicted?
If they still happened, then modal realism stands. If not, then the theories become really ugly and the universe would contain an element of raw unpredictable chaos. Some feature that magically blips certain things into existence and not others. Entirely arbitrarily so.
This might well be the way to go. But it's a philosophical massacre.
Say we actually knew the world were like what I described...Would you expect an agent that figured it out to produce practical results?
I wouldn't expect a random conscious agent to produce practical results at all, regardless of what it figured out! Your theory seems to take us right back to Boltzmann brain territory.
If they still happened, then modal realism stands.
Doesn't this require assumptions about what our physical theories will say once we work out the details? Many-Worlds from what I can tell doesn't look like it demands conscious observers in every 'possible' situation, or even every configuration with non-zero amplitude. I'll grant you that more than one hypothesis might lead to this. But again, if it leaves us with no reason to expect predictions to work, when in fact you know perfectly well that we can make useful predictions, then maybe you made a mistake somewhere. Maybe we don't understand this well enough to believe it yet.
That goes double for a theory whose appeal seems 'philosophical' more than practical or probabilistic. If you find it elegant because your primate brain thinks it could 'explain' consciousness by defining algorithms as conscious algorithms -- and so far I haven't seen you try to make a formal argument on other grounds -- well then, perhaps the form of reasoning which has absolutely no evolutionary reason to work doesn't work here. Because if it turns out that we have no reason to trust even the types of reasoning that appeared more closely linked to reality (by way of predictions that tended to increase reproductive success), then maybe you really shouldn't believe anything at all. And my limited-macrocosm view does not appear to have that problem.
I think Leibniz's monadology has all the correct elements of computationalism without the unfortunate baggage connoted by the word "computation". I've been thinking for a while that someone should create a metaphysic by the name of "neomonadology", but Leibniz already had the intuitions behind algorithmic probability that I think are important, so I think the biggest things to add to what is already implicit in monadology would be recent ideas about abstract machines and computation in context which question the strict divide between a monad/computation and its perceived embedding. The same ideas would also be nice to import into UDT/ADT.
The common answer is, why assume Christianity? I can postulate a new god that will send you to Heaven only if you aren't a Christian.
This argument still seems really fallacious to me. The Christian God is clearly privileged because He is already highly significant to the decision calculi of the people with whom you are entangled. If the rules for acausal interaction are such that god needs the consent of a world/program in order to influence it, as is the typical stance held by those who choose not to negotiate with counterfactual terrorists (as if it were possible to avoid!), then God is privileged as the being whose influence the people have most consented to. The relevant trope is "Gods Need Prayer Badly". The Catholics get around this unfortunate situation by having faith that the Holy Ghost is representative of the divine will of the Form of the Good (who just so happens to also be the Form of Power), because otherwise we're totally screwed anyway; a more cynical view would say, well, we're just totally screwed, nothing to do about it. I think a correct decision theoretic framing would find some middle ground via renormalization.
I sometimes wonder if smart atheists like Eliezer know this but explicitly promote bad arguments for atheism anyway because they think God is totally cramping their style and would prefer it if He had less influence over local reality. If so I deem such activism imprudent.
As a side note, I was part of a study of people at ultra high risk for schizophrenia about six months ago. Some SL5 ideas came up in the interviews. The lady interviewing me had no idea what to make of it: clearly I was talking about typical insane ideas, but I was doing so in a calm and lucid manner, with terminology she wasn't familiar with and in compound-complex sentences that she didn't quite have the intelligence to follow. It was sort of funny, in an absurd way.
any mistake causes massive harm (good luck being perfect)
"Be ye therefore perfect, as your Father which is in heaven is perfect." -- Jesus. I think it's a pretty ridiculous exhortation, to the point of hilarity. And yet it's deadly serious.
but now I can't even make pasta without worrying that any inconsistency in my decision making opens me up for exploitation by acausal superintelligences.
This is what I was talking about with the Christians. Basically, if any inconsistency is equivalent to consenting to be Dutch booked by power-hungry gods then we're totally screwed no matter what. Hence their saying we might as well Hope that there is an omnibenevolent God and have Faith in Him; the alternative is simply discounted because there's nothing we can do in those universes.
"Very soon"? Try Plank time. Blackmore is still acting as if this were Memento, where person-moments last seconds, maybe even minutes, as if any feature of consciousness at all would survive the time scale the universe actually runs on. This is not the case. Even the most barest of sensation takes milliseconds to unfold. Plank time is 10^41 times faster than that.
I think this might be an unwise reification of physical knowledge and I would instead look for more phenomenological ideas about what time is and how it should be divided.
I hate this whole rationality thing. If you actually take the basic assumptions of rationality seriously (as in Bayesian inference, complexity theory, algorithmic views of minds), you end up with an utterly insane universe full of mind-controlling superintelligences and impossible moral luck, and not a nice "let's build an AI so we can fuck catgirls all day" universe.
It adds up to something like Hinduism or Christianity, which is normality for many people, or on a meta level, something like postmodernism, which is also normality for many people. Many continental philosophers would say that we must construct our own spirituality for modern times where tradition is no more. People have understood these things for a long time, it's only modern science-y types that are just rediscovering some of the most basic facts of the universe. Dharma is about how to live correctly in such a fluxious universe. Buddhism is about how to escape the universe of samsara, and Schopenhauer's philosophy is like Buddhism for those without as much time to spare. Nietzscheism is about confronting such a universe because fuck you, universe, challenge accepted. Some would tell you to just follow the will of God; Nietzsche would call that equivalent to suicide but more contemptible. There ain't no easy way out, but it's not like nobody's dealt with these problems before.
But weirdly, despite the fact that I believe in gods and magic and thus that all this theoretical acausal mumbo-jumbo actually has effects on local phenomena, and despite the fact that I am obsessed with justification and moral philosophy and think about it many hours a day, I still seem significantly less stressed about all this than you, XiXiDu, Roko, etc. I'm not sure what's going on there. Maybe it's because I have faith in Meta.
By the way muflax, have we met in real life? Do you live in the Berkeley area? I'd like to talk to more people like you who aren't complacent and who take moral philosophy seriously.
Agreed about the monads, though I've only begun thinking about them very recently (and mostly after I wrote this post).
Or rather, this modal realist anxiety is the current state of one thread of thinking I'm going through, and I don't know if that will lead anywhere or if it just gets aborted and replaced by some vastly better ontology. Other threads, including the monad one (mostly prompted by Mitchell Porter), are still very young, but keep on telling me I shouldn't freak out so much, it will all work out fine, here's your self back, relax.
So the whole anxiety is heavily compartmentalized, and as I wrote about elsewhere, I'm actually ok with it right now. Though how well that works in the long-run, I don't know.
The Christian God is clearly privileged because He is already highly significant to the decision calculi of the people with whom you are entangled. If the rules for acausal interaction are such that god needs the consent of a world/program in order to influence it, as is the typical stance held by those who choose not to negotiate with counterfactual terrorists (as if it were possible to avoid!), then God is privileged as the being whose influence the people have most consented to.
Good point! I didn't consider that. But how do you pick out God? I could plausibly consent to a vast variety of gods, and by not fully understanding them, might easily get screwed that way. You might try to look at history to see what god does in fact influence us most, but that's not really obvious, and as the Tibetans noticed, the nature of a god can be quite subjective.
Or is there some special meta way you use to pick God? (I suspect so, though I can't quite figure out how it works...)
Or honestly perhaps, I'm bothered by gods in general for moral reasons. I can't seem to forgive them for tolerating such a world. It's a reversal of sin, essentially, in that I seek a way to forgive God, and without that, wouldn't even bother with the rest, even if I fully believed it. (Not that I blame God. Occasionally, I think of it in terms of being rightfully condemned to isolation from God, without a way to overcome it, like I'm Satan in Hell. (And I'm beginning to accept this isolation as just.) On other days I think that's just a really bad thought and I should get on with my life.)
About the schizophrenia thing, yeah, I noticed that too. It's kinda funny how almost every symptom of mental disorders and delusions can be mapped to philosophical positions. (I have a draft about that somewhere, comparing stuff like Capgras to straightforward SL5 / Theravada views.)
It's like the only difference between advanced schizophrenia and philosophy is the attitude and voluntariness of the belief.
People have understood these things for a long time, it’s only modern science-y types that are just rediscovering some of the most basic facts of the universe.
My (exaggerated) contempt for the continentals is more of a historical artifact, 'cause that's the community I initially emerged from. I spent most of my teen years reading Nietzsche, talking about UFOs and magic and so on. I eventually got fed up, maybe somewhat disappointed, and jumped ship to the skeptics. (Though mostly the crazier ones, like Julian Jaynes or RAW.) Now I discover that the science people who kept talking about normality and purely natural phenomena are just as crazy as the rest, though maybe much less so in practice.
By the way muflax, have we met in real life? Do you live in the Berkeley area? I’d like to talk to more people like you who aren’t complacent and who take moral philosophy seriously.
Nope. Germany, I'm afraid.
What if we don't bite the algorithm-bullet, and just say it's a physicalist universe, decide that asking for the experience of 'green' is going nowhere unless we also ask 'who' is experiencing green (because then we can point to physical happenings in a brain, or computer, or...)? I feel you're going there in the second dialogue anyway.
Ok, but that doesn't solve the ontology problem. You can point physicalism in the right direction, but the crucial information you're using here comes from outside physicalism. You already have to know that there are subjects and that they experience things like "green". Physicalism alone wouldn't tell you that.
I don't deny the role of brains or embrace any particular ontology. It's just that physicalism is fundamentally unable to cope with this situation, and an algorithmic view retains much of physicalism, but introduces a lot of seemingly insane stuff.
You're not an algorithm, you're a state machine. Well, even that's not true. More precisely, you are an entity, persisting in time, causally interacting with other entities in a particular world. You have various possible states and if we abstracted away all the ontological details, we'd get the state machine description of you. If instead you pay attention to the ontological details of your states, you get phenomenology. And finally, if you try to make a causal model of reality which includes you and your properties, you're now concerned with the physical ontology of the self.
The big step is to get past the oscillation between selfless materialism and selfless idealism. In your mindfulness dialogue, you seem to recognize that your "computational idealism" is just another ontological projection which ignores the projector, but lacking a third way you experience this awareness as a holiday from conceptualization of reality. In order to develop an alternative, I recommend Husserl, Merleau-Ponty, Fichte, and Schelling. That might give you the ingredients for an ontology of phenomenal subjectivity.
Exactly. Algorithmic idealism is at least something I can understand, something that makes internal sense. It's almost certainly wrong, but it at least provides a possible solution how the seeming mind/matter dualism we experience could be resolved.
You're quite right that right now, I don't see a third way to deal with this mess. I'm already looking into the philosophy you recommend. Husserl is high on my reading list.
(It doesn't help that I spent much of 2010 immersed in materialist (Dennett, etc.) and Buddhist (Theravada) philosophy that tried very hard to convince me to let go of the idea of a "self" or "subjective observer", and I'm now so deeply confused that I often have no idea what some people could possibly mean when they speak of a "self" that they think I don't have.)