Ontological Therapy

Warning

This page has been disowned according to the Condemnation of 2012. The author does not endorse or deny any of the views expressed here, even when it may appear so, and will not discuss them.

The page is preserved for historical and game-theoretic reasons, and because the author likely put a shit-ton of work into it.

Last modified: 2012-03-08 (finished). Epistemic state: emotional.

Warning: this is a crazy post. I’m not sugarcoating the insanity here. You might skip this one.

I wanted to make a certain point and develop a way out of the problem, but progress is leading me in a different direction right now. This post is already 2 weeks old and the longer I wait, the less it applies to my current situation, so I’m putting it out now. I might at least reference it later, going “look how crazy some of this shit made me!”.

Every couple of years I have something new to freak out over. Back in 2002, it was love. 2004, truth. 2006, beauty… God. 2008, freedom… from samsara. (Ok, now I’m really just shoehorning Moulin Rouge! references into this paragraph.) 2010, consciousness. 2012, it seems, will be time and causality.

In all the previous problems, I seem to have made actual progress once I recognized and admitted to myself what the underlying implication or intention behind asking the question was. As long as I was in denial about my motives, I couldn’t get anywhere. So let’s try it again.

Instead of an explanation, a little play:

  • Psychologist: What brings you here today?
  • muflax: I experience great anxiety and it’s consuming my life.
  • P: When did your anxiety start?
  • m: That’s it right there! I can’t answer this question, and because I feel like I should be able to, I suffer from anxiety.
  • P: What do you mean you can’t? You don’t remember?
  • m: No, I do, but answering your question commits me to an ontological position I have great doubts over. See, you are already presupposing the A-theory of time in the way you phrased this question.
  • P: “A-theory?” What’s this?
  • m: *sigh* Are you sure you can help me? The problem is much deeper and I don’t know if you…
  • P: Don’t worry. I am an expert on the treatment of anxiety disorders. Just relax and tell me what this “A-theory” is.
  • m: Alright. So there are two views about time, basically. Is there a special moment called the “present” or is everything a big directed graph? The first one is called A-theory, the other B-theory.
  • P: That sounds like a metaphysical problem. Why would telling me when the anxiety started, as you say, “commit you to an ontological position”?
  • m: Because things only ever “start” in A-theory. In B-theory, everything just is. Different events do not “follow”, but are just causally linked. Even worse, in a general B-theory universe, there doesn’t have to be a unique chain of events. Any “point in time” can have multiple moments that come “before” or “after” it.
  • P: I see. But if you compared multiple events, couldn’t you still say which came before?
  • m: If you define “A caused B” as “A came before B”, maybe, but that then commits you to acyclic graphs, and especially when considering acausal interactions…
  • P: You are going too fast.
  • m: Sorry.
  • P: No need to apologize. Please, go on.
  • m: Ok. So in causality, we typically assume that the graph has no cycles. Something can’t cause itself, right?
  • P: Right.
  • m: But that doesn’t have to hold, you see. Quantum physics has no problem dealing with time loops. In fact, a common interpretation of Feynman diagrams conceptualizes anti-particles not as separate particles, but simply as the same particle going back in time. But this gets you into problems with the very idea of causality. For the idea of causality to be coherent, you have to have some dependencies. Basically, there must be a way to say that A forces B, but B doesn’t force A. If you frame this in terms of predictions, so that knowing A gives you knowledge about B, but not vice versa, then you have statistical causality, as Judea Pearl constructed it. (A toy sketch of this prediction asymmetry follows after the play.) But this is only meaningful if the universe can’t be inverted, meaning you can’t compute past states even if you know everything about your current state, but that seems like a weird requirement. So basically, in B-theory you don’t have a meaningful concept of causality. There are other reasons why this notion of causality is probably no good anyway, so all of this makes me very suspicious.
  • P: I see. So why would you then believe B-theory?
  • m: Because science requires it! Relativity strongly implies B-theory and the whole framework of computable physics is fundamentally B-theoretic. If you assume A-theory, you are in effect saying that philosophy of science is all bunk.
  • P: Earlier you said that physics is compatible with time loops. But physicists talk about the past all the time. Maybe it’s not really a big problem?
  • m: But it is! You could limit yourself to your immediate predecessors in the graph and call this the “past”, but that’s not very useful. The common usage is not indicative of anything. Typical physicists have a completely confused ontology anyway and are not to be trusted about these problems at all.
  • P: Why do you say that?
  • m: Because most physicists are materialists or physicalists, and that’s just nonsense.
  • P: Materialists? Do you mean they are consumerist?
  • m: No, like in “everything is matter”. That’s a really old view, but complete nonsense. Strict materialism is totally false. The ancient philosophers who came up with it imagined something like little billiard balls bumping into each other, and said the whole universe is like that. But then you can’t explain quantum physics or gravity and so on. So we extended that with fields and other constructs, and this view is called physicalism. Basically you just wave your hands and say that all reality is describable by physics and nothing but physics.
  • P: Yes, I’m familiar with this view. I think a lot of scientists are physicalists. Why do you think this is nonsense?
  • m: Because you can’t explain phenomenal consciousness! Within physics, nothing is ever “green” or experiences anything. You have an ontology in which at best particle interactions exist, but this is something qualitatively different from experiences. If you only knew about a universe that it ran on physics, you would never ever expect there to be experiences. The particles aren’t aware of the more complex structures that they form, so how should any experience ever “emerge” from them, just because they have been arranged in some clever way? Where is this knowledge coming from? You can either deny these structures, in which case unified consciousness (which we clearly experience) doesn’t exist, or you introduce bridge laws and become a dualist. It all makes no sense at all. Of course, there is a much better alternative, so I don’t know why anyone bothers with this view.
  • P: What’s that alternative?
  • m: Well, I think of it as a generalization of computationalism. So what you do is put this physicalist ontology completely on its head. You don’t assume that there are particles and somehow they form a mind that somehow experiences green, but you start with the mind. You say that the mind is an algorithm, a computation. This computation fundamentally transforms inputs into outputs. Within these inputs, it looks for patterns, so it models them as green or as particle interactions or what have you, but these are just aspects of these internal models. The algorithm only experiences inputs and “green” is just the label we give this specific input.
  • P: Computation? Do you think you are a computer program?
  • m: No, or really yes, or really… Well, the difference is that within computationalism, there isn’t such a thing as the universe. There is no “real” world, no physical reality at all. It’s complete idealism. There are only ever algorithms, inputs and outputs. Even these can be transformed into computational dependencies between algorithms, so you really only have algorithms that depend on each other in their computation. They are not instantiated, in the sense that “this thing there” is an instance of an algorithm and “this” isn’t. Everything you experience, the whole world, is you, this one algorithm and its inputs. The other algorithms are fundamentally distant from you and only reachable through these computational dependencies. So it dissolves the problem of solipsism and an external/internal world by saying there is only this algorithm that models other algorithms within it.
  • P: I see.
  • m: Alright, so this basically solves the problem of consciousness. There is no problem like “are thermostats conscious?”. Every algorithm is conscious, but things within this algorithm aren’t. So what you call a thermostat is just an artifact within your models, so it’s not conscious, but the actual computation that the thermostat computes is conscious, just like you. This algorithmic view also has no conception of time in it, so it fits nicely together with B-theory. That’s the big problem, you see - all these ideas fit together perfectly, but it’s their implications which are totally weird.
  • P: Like what?
  • m: Now you might say that’s really just a philosophical oddity that in this algorithmic view, there is no “time” or “causality”, but only computational dependency. Just words, right? But here’s the thing. You don’t have to assume that you are bound by physics anymore. There is no “future” or “past” to interact with, but only algorithms and inputs. So you can depend on whatever algorithm you want. Basically, you become literally timeless. Time-travel? Go ahead. Interact with “future you”? Sure, no problem. When I think about this for too long, I don’t know where or even when I am anymore. I just kinda am everywhere at once. I am floating outside, seeing the whole universe at once, all my instances as one being.
  • P: Dissociation, I understand. Is this the source of your anxiety?
  • m: Almost. So because you are an algorithm, you fundamentally have to interact with all other algorithms, regardless of what your physical model tells you happens in your “universe”. Math is not compartmentalized; there is no light cone of computation. Is there any algorithm in all of algorithm space that might care about you? You now have to interact with it. This means any superintelligence, any god, anything at all that can be expressed in terms of powerful computations, no matter how insane or alien, exists and you have to deal with it. How can you make any decision this way? … Have you heard of Pascal’s Wager?
  • P: Isn’t that the idea that you should be a Christian because if you are right, you will go to Heaven, but if you are wrong, you die either way?
  • m: Right. The common answer is, why assume Christianity? I can postulate a new god that will send you to Heaven only if you aren’t a Christian. There are potentially infinitely many gods, so the wager doesn’t work. The problem is, in computationalism, this reductio ad absurdum is actually correct. There really are an infinite number of gods, all interacting with you! You can try to ignore them, but this won’t be a smart idea. You really have to answer this question. This is full-on modal realism. Anything that can potentially exist actually exists, and this means you have to deal with it. “I haven’t seen this before” is no excuse.
  • P: So you are saying that evidence doesn’t count? Aren’t some algorithms more likely than others?
  • m: Exactly, that’s the typical extension here. We start discounting algorithms by their complexity. This can be done in a really elegant way, so we still deal with all algorithms, but we treat them all equally and put equal resources into all of them. This way, only simple algorithms end up with lots of resources and really complex ones, like crazy arbitrary gods somewhere, don’t matter much. That’s all nice, but fundamentally doesn’t work. There is no absolute framework for simplicity. It all depends on your machine model (a toy demonstration follows after the play), but that can’t be right because algorithms don’t have machines. Dependencies are just there, as a logical necessity, not as an aspect of whatever programming language you use to express them. Complexity is not a meaningful measure in a universal sense, so you are still stuck having to interact with all possible minds at once now go and don’t fuck up good luck.
  • P: … I see. Have you tried not taking your beliefs so seriously?
  • m: *starts sobbing*
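
(Stepping out of the play for a moment, for the technically inclined: the Pearl point above can be made concrete with a toy structural model. Everything below is invented for illustration; it only shows the asymmetry that forcing A moves B while forcing B leaves A alone.

    import random

    def sample(do_a=None, do_b=None):
        # toy structural model: A := noise,  B := 2*A + noise
        a = do_a if do_a is not None else random.gauss(0, 1)
        b = do_b if do_b is not None else 2 * a + random.gauss(0, 0.1)
        return a, b

    def mean(xs):
        return sum(xs) / len(xs)

    # intervening on A shifts B's distribution...
    print(mean([b for _, b in (sample(do_a=10) for _ in range(10000))]))  # ~20.0
    # ...but intervening on B leaves A untouched
    print(mean([a for a, _ in (sample(do_b=10) for _ in range(10000))]))  # ~0.0

That directedness is all “causality” amounts to in this picture, which is why cycles break it.)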
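
(Same disclaimer for the machine-model complaint: a toy sketch only, with hypothesis names and bit counts made up. Two equally legitimate “machines” assign opposite complexities, and the resulting priors disagree completely; the invariance theorem only bounds the disagreement by a multiplicative constant, and that constant can be astronomical.

    # two code tables, each mapping hypotheses to the length (in bits)
    # of their shortest program on that machine
    machine_1 = {"steady_world": 3, "jealous_god": 20}
    machine_2 = {"steady_world": 20, "jealous_god": 3}

    def complexity_prior(machine):
        # weight each hypothesis by 2^-length, then normalize
        weights = {h: 2.0 ** -bits for h, bits in machine.items()}
        total = sum(weights.values())
        return {h: w / total for h, w in weights.items()}

    print(complexity_prior(machine_1))  # steady_world gets ~0.99999
    print(complexity_prior(machine_2))  # jealous_god gets ~0.99999

Pick your machine, pick your gods.)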

I better stop the play there. That’s only a small fragment of the whole mess. I didn’t even mention uncertainty about meta-ethics, utility calculations (‘cause as XiXiDu has correctly observed, if utilitarianism is right, we never ever get to relax, and have to fully embrace the worst consequences of Pascal’s Mugging), how it removes “instances” as meaningful concepts so that “I will clone you and torture the clone” stops being a threat, but “I will make my calculations dependent on your decision” suddenly is, or how all of this fits so perfectly together, you’d think it’s all actually true.

What I want to talk about is this: it’s completely eating me alive. This is totally basilisk territory. You don’t get to ever die (this really bums me out because I don’t like being alive), you have to deal with everything at once right now (no FAI to save you, not even future-you), any mistake causes massive harm (good luck being perfect) and really, normalcy is impossible. How can you worry about bloody coffee or sex if all of existence is at stake because algorithmic dependencies entangle you with so vast a computational space? You have to deal with not just Yahweh, but all possible gods, and you are watching cat videos? Are you completely insane?!

This is not just unhealthy. This is “I’m having a mental breakdown, someone give me the anti-psychotics please”. I’ve tried this belief propagation thing. As a result, I don’t believe in time, selves, causality, simplicity, physics, plans, goals, ethics or anything really anymore. I have absolutely no ground to stand on, nothing I can comfortably just believe, no idea how to make any decision at all. I can’t even make total skepticism work because skepticism itself is an artifact of inference algorithms and moral luck just pisses on your uncertainty.

I hate this whole rationality thing. If you actually take the basic assumptions of rationality seriously (as in Bayesian inference, complexity theory, algorithmic views of minds), you end up with an utterly insane universe full of mind-controlling superintelligences and impossible moral luck, and not a nice “let’s build an AI so we can fuck catgirls all day” universe. The worst that can happen is not the extinction of humanity or something that mundane - instead, you might piss off a whole pantheon of jealous gods and have to deal with them forever, or you might notice that this has already happened and you are already being computationally pwned, or that any bad state you can imagine exists. Modal fucking realism.

The only thing worth doing in modal realism is finding some way to stop caring about the rest of the multiverse. Discount by complexity, measure, psychological distance, whatever, as long as you discount enough to make infinity palatable. It won’t work and you know it, but what else can you do? Take it seriously?

Have people ever considered the implications of straightforward analytical philosophy? You have no self and there is no time. All person-moments of all persons are as much future-you as what you think is future-you. Normal consequences don’t matter because this is a Big World and everything exists infinitely often. The Universe Does Not Forget. Prevention? Totally impossible. Everything that can happen is happening. Any reference to something that is not literally impossible is actually resolved. This is not just the minor disappointment we felt when we realized Earth wasn’t the center of the universe. This time, the universe isn’t the center of the universe, if you catch my drift. Instead of changing the world, you are reduced to decision theory, intentions and dependencies, forced to interact with everything that it is possible to interact with. Life, death, a body, a will, a physical world - all delusions. This is like unlearning object permanence!

I think the bloody continentals were right all along. Analytical philosophy is fundamentally insane. When I was still sitting in classical archeology classes, I could at least fantasize about how I would maybe someday get over my awkwardness and at least get a cat, if not a relationship, but now I can’t even make pasta without worrying that any inconsistency in my decision making opens me up for exploitation by acausal superintelligences. I thought I was nervous when I had to enter a public laundry room in my dorm (and had a panic attack almost every week)? Try not ever dying and knowing that whatever decision you make now will determine all of existence because you are only this decision algorithm right now and nothing ever helps because algorithms don’t change.

You might try the “I am the instantiation of an algorithm” sleight-of-hand, but that’s really problematic. Do you also believe God has given you information about the Absolute Encoding Scheme? (If yes, want some of my anti-psychotics?) How can you know what spatial arrangement of particles “encodes” what particular algorithm? This is an unsolvable problem.

But worse than that, even if you could do it, I don’t think you actually grasp the implications of such a view. Here’s Susan Blackmore, giving an eloquent description of how the position is typically envisioned:

This “me” that seems so real and important right now, will very soon dissipate and be gone forever, along with all its hopes, fears, joys and troubles. Yet the words, actions and decisions taken by this fleeting self will affect a multitude of future selves, making them more or less insightful, moral and effective in what they do, as well as more or less happy.

“Very soon”? Try Planck time. Blackmore is still acting as if this were Memento, where person-moments last seconds, maybe even minutes, as if any feature of consciousness at all would survive the time scale the universe actually runs on. This is not the case. Even the barest of sensations takes milliseconds to unfold. Planck time is some 10^41 times shorter than that.
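
(The arithmetic, using the standard value of the Planck time, so nobody has to take my word for the exponent:

    t_P \approx 5.4 \times 10^{-44}\,\mathrm{s}, \qquad \frac{10^{-3}\,\mathrm{s}}{t_P} \approx 2 \times 10^{40}

so a sensation lasting a few milliseconds spans on the order of 10^41 Planck times.)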

Besides, taking the person-moment view completely screws over your sense of subjective anticipation and continuation. Or rather, there is no continuation. There is no future-you. Morally, all future instances of all people are in the same reference class. (Unless you want to endorse extreme anti-universalism. Not that I’d mind, but it’s not very popular these days.) See how evil your own actions are, shamelessly favoring a very narrow class of people? I honestly don’t know if I should be more troubled by the insanity of this view, or the implied sociopathy of virtually all actions once you take it seriously.

Breathe. Take an Outside View.

Will Newsome once remarked:

The prefrontal cortex is exploiting executive oversight to rent-seek in the neural Darwinian economy, which results in egodystonic wireheading behaviors and self-defeating use of genetic, memetic, and behavioral selection pressure (a scarce resource), especially at higher levels of abstraction/organization where there is more room for bureaucratic shuffling and vague promises of “meta-optimization”, where the selection pressure actually goes towards the cortical substructural equivalent of hookers and blow.

Exactly. Once you begin taking this whole “analytical thought” thing seriously, it will try to hog as many resources as it can, trying to convert everything into analytical problems. And you can’t get more analytical than “literally everything is algorithms”. Result: massive panic attacks, nothing ever gets done, everything needs to be analyzed to death. (Case in fucking point: the whole akrasia mess on LW.) I can’t even watch a movie without immediately thinking about what game-theoretic considerations the characters must be making, who is exploiting whom, why acting this way will support a monstrosity of hostile memeplexes and screw over whole populations you monster, oh for fuck’s sake, you haven’t non-ironically enjoyed a movie for years, so shut up already.

But what else can I do? Reject the only worldview that actually makes internal sense?

Consider an alternative. A simple model, one that doesn’t actually explain much; it doesn’t want to. It’s a strength, it claims. It goes like this:

  • Alternative: Who are you?
  • muflax: I am the algorithm that outputs “yes” to this query.
  • A: No, you don’t believe that. Who are you?
  • m: What do you mean?
  • A: Point at yourself. What is it that is you?
  • m: I am all of existence.
  • A: No, you don’t believe that either. This sensation - is that you? Does it feel like you?
  • m: No.
  • A: Good. Then what does? Point at it.
  • m: This observation does. This experiencing-the-sensation. Not the sensation itself, but the experiencing-the-sensation. Not this thought, but the hearing-this-thought. Not the confusion, but the feeling-this-confusion.
  • A: Correct. In a state of pure emptiness, pure equanimity - is there confusion?
  • m: No.
  • A: Confusion is an imposed state. What gives rise to confusion?
  • m: When I experience a situation I cannot understand.
  • A: What is “not understanding”?
  • m: When no correct thought comes up.
  • A: What makes confusion go away?
  • m: Analysis. Thinking a thought that explains a situation, that makes the internal workings transparent.
  • A: How do you know this state has been reached? What makes a thought correct?
  • m: When I no longer feel confused.
  • A: What do you do when you feel confused?
  • m: I facilitate thinking. I plan. I make goals. I divert resources into resolving the confusion.
  • A: Imagine the same process had the power to generate confusion and make it go away. What could it do?
  • m: A complete power grab.

And with this, muflax felt enlightened.

For a moment, that is.

Because when you doubt your thought processes because you suspect they are emotionally exploiting you… and you reach a conclusion based on an enlightened state of mind you feel when thinking this conclusion… well, then you ain’t paying much attention.


  • Kaj Sotala

    Reading this actually felt oddly calming to me.

    "Yeah, the universe may be entirely fucked up and a pantheon of alien gods might be deciding to torture me for eternity because of anything that I do. But then they'll torture me regardless of what I do, so I can just do whatever I want. And time doesn't exist so it's not like I even could do anything to avoid it, so on that account too it's inevitable and I don't need to care about it. Besides, 'I' doesn't exist anyway other than in the form of separate experience-moments, and the experience-moment that's thinking this clearly isn't being tortured, so it's logically impossible for 'me' to ever get tortured. Yay. Now I can just chill and do whatever I want to do."

  • Multiheaded

    My mind is full of fuck.

  • Anonymous

    Philosophy overdose? I recommend the traditional cure: hemlock.

  • hf

    I tend not to believe in any version of Tegmark IV for exactly the reason you give here: I don't know how you could ever get probabilities out of it. So it appears to fail as an explanation. I'll have to re-examine this view if you have a simple way of expressing the claim formally, since I tend to believe that something like Many-Worlds will turn into an explanation someday (based on its apparent simplicity and my belief that a finite version would give us probabilities of some kind as soon as we clarified the question). But barring that, I don't think either of us knows what you're talking about.

    You say that it "fits so perfectly together," but what does that mean neurologically and why should we care? In the real world we have no reason to expect reliability from any part of our thought save that which controls expectation, and really just that which could affect behavior. The rest could be complete garbage for all it matters to evolution. (The fact that we have any capacity for logic at all suggests that reality is made of math in a sense, but you must know that getting from there to the OP requires some big leaps.) You've started with a kind of reasoning that looks like it might work, but as far as I can tell the direction has no reason to work at all.

    But if you really think you have to deal with this, then go ahead and do it! Create from yourself a virtue ethicist who doesn't care about outcomes, and who can draw a bright line around the 'future' versions of itself who continue to pursue its Work. As it is conditionally written: "seek me only! Then the joys of my love will redeem ye from all pain."

  • muflax

    What kind of probabilities are you looking for? The Universal Prior, i.e. some complexity prior over all computable algorithms, should work reasonably well in a Tegmarkian world. Sure, it's not tractable, but that's part of the problem. Just another form of moral luck. (Also, Chaitin's constant.)

    Doesn't strike me as harder than probability in any infinite world. (A lack of straightforward probability-as-frequency is a reason I don't like infinite ontologies. Hard to escape them, though.)


    You’ve started with a kind of reasoning that looks like it might work, but as far as I can tell the direction has no reason to work at all.


    Yeah, skipped many aspects. This ain't an explanation, it's an emotional coredump.

    Trying to untangle exactly what assumptions are going into this was the planned second part of the post, i.e. figuring out how to handle the implied invisible (or at least what axioms are on the table).

    The basic argument might go like this:

    What makes you think anything except what you directly experience exists? Because your best model of the world implies it. So we might add the rule that anything a model implies, exists. (If we accept the model.)

    Is it necessary for implied existence to be causally linked to you? No. See MWI - what distinguishes a distant world from our world? Nothing, both are equally real. Everything that is implied is real. Keeps the rule nice and simple.

    So we have modal realism, with "possible" determined by our model of the world. The only difference between "actual" and "possible" is a pointer.

    What makes a model good? It gives the highest compression of the data you have, i.e. a Kolmogorov prior (or something close to it).

    What program is the best compression of the world? The Universal Dovetailer.
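
    (A dovetailer is nothing mysterious, by the way. Here’s a toy sketch I’m adding purely for illustration - program(n) is just a stand-in for the n-th program in some fixed enumeration; the interleaving schedule is the whole trick:

        from itertools import count, islice

        def program(n):
            # stand-in for the n-th program: an endless computation tagged n
            for step in count():
                yield (n, step)

        def dovetail():
            # stage k: start program k, then advance programs 0..k one step each,
            # so every program is eventually started and never starved of steps
            running = []
            for k in count():
                running.append(program(k))
                for p in running:
                    yield next(p)

        print(list(islice(dovetail(), 10)))
        # [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0), (0, 3), (1, 2), (2, 1), (3, 0)]

    The real Universal Dovetailer runs all programs of a universal machine this way; the toy only shows the scheduling.)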

    Once you take this view (and I'm definitely not saying it's correct), then everything fits so neatly. Compression, information theory, cryptography, decision theory, Bayes, cosmology, anthropics, ... . It all becomes the Big Fat Snarl of Modal Realism That Eats Your Brain.

    If you reject it, then at what step? A good point would be the Kolmogorov(-ish) prior. Compression isn't good enough, it also needs to find us. It's not enough that we're "somewhere in the data", but it needs to produce the actual pointer as well.

    Then you have no Occam's Razor, no MWI, really complex physics, and just generally induction becomes a whole lot harder. Simplicity goes out the fucking window.

    That might be the way to go, but it ain't pretty either.

    Or you make the evolutionary-skeptical argument that the Alternative in my post points out - why trust the reasoning process at all? It might exploit you, or be biased, or just plain nuts. Try adopting total skepticism of your thoughts, tell me how far you get. (Been there, done that.)

    (Also, I'm not arguing P(modal realism | evolution), but P(modal realism | mind), though P(evolution | modal realism ∧ mind) is relevant as well.)


    Create from yourself a virtue ethicist who doesn’t care about outcomes, and who can draw a bright line around the ‘future’ versions of itself who continue to pursue its Work.


    Going in the direction of non-consequentialist morality in an attempt to enforce locality is definitely one way I'm going with this. I favor deontology right now, but that's all still fairly open. I (weakly) suspect that all non-consequentialist attempts converge anyway.

  • Will Newsome

    Have you thought through what basing an ontology on ADT-like reasoning would lead to? From the UDT/ADT perspective it seems like we can better understand the rules for acausal interaction, i.e. how "simulations" affect "measure". The vague "exists or doesn't exist" stuff gets a more analyzable framework and we can see specifically where naive algorithmic-probabilistic epistemology breaks down as a special case of decision theory; or at least, once we accept modal realism then UDT/ADT gives us a way of analyzing how that affects us that is more general than assuming a universal prior and hoping that that makes sense. It unifies the Kantian-phenomenological perspective with the Leibnizian-metaphysical one in a way that isn't quite well pinned down yet but at least suggests ways of clarifying the mess. (I think this also leads the way to understanding God as Chaitin's omega but that speculation is even less well pinned down. But I like the "without God there is no hypercomputation" aspect.)

    Some of my wanting to re-frame the modal realist stuff is because I perceive much wisdom in Mitchell Porter and Michael Vassar's semi-Kantian emphasis on what we actually experience; on the other hand, what I actually experience points toward a lot of weird agentic invisible stuff just behind the veil.

    Hot damn I sound like a crank sometimes. Oh well, it's fun.

  • hf


    What kind of probabilities are you looking for?


    Practical ones, actual probabilities (or reliable qualitative estimates) that we could use to test the theory. (I'd accept a retroactive prediction at this point.) You're ignoring the part that makes it a theory. Without that, we have no reason to trust it.

    You just said you have "no idea how to make any decision at all." Apparently by accepting this computationalist-Tegmark abstraction that you (with a brain that evolved to get status in a prehistoric tribe) thought might clarify a philosophical issue, you've destroyed the part of epistemology that actually helps. What seems like the obvious solution?

  • muflax

    Agreed about the complete impracticality of it all, but rejecting it because of that gets you into a lot of trouble itself.

    (I'm not sure if we even disagree. I'll try to defend the modal realist view anyway (despite being very skeptical of it) because I think it may have a point, but I might just be defending it for contrarian reasons.)

    Say we actually knew the world were like what I described. Assume it actually were extremely hard, if not impossible to make correct decisions. Would you expect an agent that figured it out to produce practical results?

    I'm very sympathetic to the view that any belief that doesn't make usable predictions should be ignored. But if you take it seriously, you lose theoretical elegance, much of metaphysics, metaethics changes dramatically, etc. - and you bias yourself strongly towards finitism, small worlds and so on.

    And you still have to figure out a way to deal with the implied invisible. No known model is deterministic from our perspective. What about the outcomes that we didn't observe, but still predicted?

    If they still happened, then modal realism stands. If not, then the theories become really ugly and the universe would contain an element of raw unpredictable chaos. Some feature that magically blips certain things into existence and not others. Entirely arbitrarily so.

    This might well be the way to go. But it's a philosophical massacre.

  • hf


    Say we actually knew the world were like what I described...Would you expect an agent that figured it out to produce practical results?


    I wouldn't expect a random conscious agent to produce practical results at all, regardless of what it figured out! Your theory seems to take us right back to Boltzmann brain territory.


    If they still happened, then modal realism stands.


    Doesn't this require assumptions about what our physical theories will say once we work out the details? Many-Worlds from what I can tell doesn't look like it demands conscious observers in every 'possible' situation, or even every configuration with non-zero amplitude. I'll grant you that more than one hypothesis might lead to this. But again, if it leaves us with no reason to expect predictions to work, when in fact you know perfectly well that we can make useful predictions, then maybe you made a mistake somewhere. Maybe we don't understand this well enough to believe it yet.

    That goes double for a theory whose appeal seems 'philosophical' more than practical or probabilistic. If you find it elegant because your primate brain thinks it could 'explain' consciousness by defining algorithms as conscious algorithms -- and so far I haven't seen you try to make a formal argument on other grounds -- well then, perhaps the form of reasoning which has absolutely no evolutionary reason to work doesn't work here. Because if it turns out that we have no reason to trust even the types of reasoning that appeared more closely linked to reality (by way of predictions that tended to increase reproductive success), then maybe you really shouldn't believe anything at all. And my limited-macrocosm view does not appear to have that problem.

  • Will Newsome

    I think Leibniz's monadology has all the correct elements of computationalism without the unfortunate baggage connoted by the word "computation". I've been thinking for awhile that someone should create a metaphysic by the name of "neomonadology", but Leibniz already had the intuitions behind algorithmic probability that I think are important, so I think the biggest things to add to what is already implicit in monadology would be recent ideas about abstract machines and computation in context which question the strict divide between a monad/computation and its perceived embedding. The same ideas would also be nice to import into UDT/ADT.


    The common answer is, why assume Christianity? I can postulate a new god that will send you to Heaven only if you aren't a Christian.


    This argument still seems really fallacious to me. The Christian God is clearly privileged because He is already highly significant to the decision calculi of the people with whom you are entangled. If the rules for acausal interaction are such that god needs the consent of a world/program in order to influence it, as is the typical stance held by those who choose not to negotiate with counterfactual terrorists (as if it were possible to avoid!), then God is privileged as the being whose influence the people have most consented to. The relevant trope is "Gods Need Prayer Badly". The Catholics get around this unfortunate situation by having faith that the Holy Ghost is representative of the divine will of the Form of the Good (who just so happens to also be the Form of Power), because otherwise we're totally screwed anyway; a more cynical view would say, well, we're just totally screwed, nothing to do about it. I think a correct decision theoretic framing would find some middleground via renormalization.

    I sometimes wonder if smart atheists like Eliezer know this but explicitly promote bad arguments for atheism anyway because they think God is totally cramping their style and would prefer it if He had less influence over local reality. If so I deem such activism imprudent.

    As a side note, I was part of a study of people at ultra high risk for schizophrenia about six months ago. Some SL5 ideas came up in the interviews. The lady interviewing me had no idea what to make of it: clearly I was talking about typical insane ideas, but I was doing so in a calm and lucid manner, with terminology she wasn't familiar with and in compound-complex sentences that she didn't quite have the intelligence to follow. It was sort of funny, in an absurd way.


    any mistake causes massive harm (good luck being perfect)


    "Be ye therefore perfect, as your Father which is in heaven is perfect." -- Jesus. I think it's a pretty ridiculous exhortation, to the point of hilarity. And yet it's deadly serious.


    but now I can't even make pasta without worrying that any inconsistency in my decision making opens me up for exploitation by acausal superintelligences.


    This is what I was talking about with the Christians. Basically, if any inconsistency is equivalent to consenting to be Dutch booked by power-hungry gods then we're totally screwed no matter what. Hence their saying we might as well Hope that there is an omnibenevolent God and have Faith in Him; the alternative is simply discounted because there's nothing we can do in those universes.


    "Very soon"? Try Plank time. Blackmore is still acting as if this were Memento, where person-moments last seconds, maybe even minutes, as if any feature of consciousness at all would survive the time scale the universe actually runs on. This is not the case. Even the most barest of sensation takes milliseconds to unfold. Plank time is 10^41 times faster than that.


    I think this might be an unwise reification of physical knowledge and I would instead look for more phenomenological ideas about what time is and how it should be divided.


    I hate this whole rationality thing. If you actually take the basic assumptions of rationality seriously (as in Bayesian inference, complexity theory, algorithmic views of minds), you end up with an utterly insane universe full of mind-controlling superintelligences and impossible moral luck, and not a nice "let's build an AI so we can fuck catgirls all day" universe.


    It adds up to something like Hinduism or Christianity, which is normality for many people, or on a meta level, something like postmodernism, which is also normality for many people. Many continental philosophers would say that we must construct our own spirituality for modern times where tradition is no more. People have understood these things for a long time, it's only modern science-y types that are just rediscovering some of the most basic facts of the universe. Dharma is about how to live correctly in such a fluxious universe. Buddhism is about how to escape the universe of samsara, and Schopenhauer's philosophy is like Buddhism for those without as much time to spare. Nietzscheism is about confronting such a universe because fuck you, universe, challenge accepted. Some would tell you to just follow the will of God; Nietzsche would call that equivalent to suicide but more contemptible. There ain't no easy way out, but it's not like nobody's dealt with these problems before.

    But weirdly, despite the fact that I believe in gods and magic and thus that all this theoretical acausal mumbo-jumbo actually has effects on local phenomena, and despite the fact that I am obsessed with justification and moral philosophy and think about it many hours a day, I still seem significantly less stressed about all this than you, XiXiDu, Roko, etc. I'm not sure what's going on there. Maybe it's because I have faith in Meta.

    By the way muflax, have we met in real life? Do you live in the Berkeley area? I'd like to talk to more people like you who aren't complacent and who take moral philosophy seriously.

  • muflax

    Agreed about the monads, though I've only begun thinking about them very recently (and mostly after I wrote this post).

    Or rather, this modal realist anxiety is the current state of one thread of thinking I'm going through, and I don't know if that will lead anywhere or if it just gets aborted and replaced by some vastly better ontology. Other threads, including the monad one (mostly prompted by Mitchell Porter), are still very young, but keep on telling me I shouldn't freak out so much, it will all work out fine, here's your self back, relax.

    So the whole anxiety is heavily compartmentalized, and as I wrote about elsewhere, I'm actually ok with it right now. Though how well that works in the long-run, I don't know.


    The Christian God is clearly privileged because He is already highly significant to the decision calculi of the people with whom you are entangled. If the rules for acausal interaction are such that god needs the consent of a world/program in order to influence it, as is the typical stance held by those who choose not to negotiate with counterfactual terrorists (as if it were possible to avoid!), then God is privileged as the being whose influence the people have most consented to.


    Good point! I didn't consider that. But how do you pick out God? I could plausibly consent to a vast variety of gods, and by not fully understanding them, might easily get screwed that way. You might try to look at history to see which god does in fact influence us most, but that's not really obvious, and as the Tibetans noticed, the nature of a god can be quite subjective.

    Or is there some special meta way you use to pick God? (I suspect so, though I can't quite figure out how it works...)

    Or honestly perhaps, I'm bothered by gods in general for moral reasons. I can't seem to forgive them for tolerating such a world. It's a reversal of sin, essentially, in that I seek a way to forgive God, and without that, wouldn't even bother with the rest, even if I fully believed it. (Not that I blame God. Occasionally, I think of it in terms of being rightfully condemned to isolation from God, without a way to overcome it, like I'm Satan in Hell. (And I'm beginning to accept this isolation as just.) On other days I think that's just a really bad thought and I should get on with my life.)

    About the schizophrenia thing, yeah, I noticed that too. It's kinda funny how almost every symptom of mental disorders and delusions can be mapped to philosophical positions. (I have a draft about that somewhere, comparing stuff like Capgras to straightforward SL5 / Theravada views.)

    It's like the only difference between advanced schizophrenia and philosophy is the attitude and voluntariness of the belief.


    People have understood these things for a long time, it’s only modern science-y types that are just rediscovering some of the most basic facts of the universe.


    My (exaggerated) contempt for the continentals is more of a historical artifact, 'cause that's the community I initially emerged from. I spent most of my teen years reading Nietzsche, talking about UFOs and magic and so on. I eventually got fed up, maybe somewhat disappointed, and jumped ship to the skeptics. (Though mostly the crazier ones, like Julian Jaynes or RAW.) Now I discover that the science people who kept talking about normality and purely natural phenomena are just as crazy as the rest, if maybe much less so in practice.

    Kinda frustrating.


    By the way muflax, have we met in real life? Do you live in the Berkeley area? I’d like to talk to more people like you who aren’t complacent and who take moral philosophy seriously.


    Nope. Germany, I'm afraid.

  • Scott

    What if we don't bite the algorithm-bullet, just say it's a physicalist universe, and decide that asking for the experience of 'green' is going nowhere unless we also ask 'who' is experiencing green (because then we can point to physical happenings in a brain, or computer, or...)? I feel you're going there in the second dialogue anyway.

  • muflax

    Ok, but that doesn't solve the ontology problem. You can point physicalism in the right direction, but the crucial information you're using here comes from outside physicalism. You already have to know that there are subjects and that they experience things like "green". Physicalism alone wouldn't tell you that.

    I don't deny the role of brains or embrace any particular ontology. It's just that physicalism is fundamentally unable to cope with this situation, and an algorithmic view retains much of physicalism, but introduces a lot of seemingly insane stuff.

  • Mitchell Porter

    You're not an algorithm, you're a state machine. Well, even that's not true. More precisely, you are an entity, persisting in time, causally interacting with other entities in a particular world. You have various possible states and if we abstracted away all the ontological details, we'd get the state machine description of you. If instead you pay attention to the ontological details of your states, you get phenomenology. And finally, if you try to make a causal model of reality which includes you and your properties, you're now concerned with the physical ontology of the self.

    The big step is to get past the oscillation between selfless materialism and selfless idealism. In your mindfulness dialogue, you seem to recognize that your "computational idealism" is just another ontological projection which ignores the projector, but lacking a third way you experience this awareness as a holiday from conceptualization of reality. In order to develop an alternative, I recommend Husserl, Merleau-Ponty, Fichte, and Schelling. That might give you the ingredients for an ontology of phenomenal subjectivity.

  • muflax

    Exactly. Algorithmic idealism is at least something I can understand, something that makes internal sense. It's almost certainly wrong, but it at least provides a possible solution to how the seeming mind/matter dualism we experience could be resolved.

    You're quite right that right now, I don't see a third way to deal with this mess. I'm already looking into the philosophy you recommend. Husserl is high on my reading list.

    (It doesn't help that I spent much of 2010 immersed in materialist (Dennett, etc.) and Buddhist (Theravada) philosophy that tried very hard to convince me to let go of the idea of a "self" or "subjective observer", and I'm now so deeply confused that I often have no idea what some people could possibly mean when they speak of a "self" that they think I don't have.)

  • 15 comments

Leave a message...
    Connect with
      or pick a name

      What's Disqus?

      DISQUS is a conversation network

      • Disqus never moderates or censors. The rules on this community are its own.
      • Your email is safe with us. It's only used for moderation and optional notifications.
      • Don't be a jerk or do anything illegal. Everything is easier that way.

      Read full terms and conditions

      Register with Disqus Use this to preserve your comment history and be notified of replies.

      Reading this actually felt oddly calming to me.

      "Yeah, the universe may be entirely fucked up and a pantheon of alien gods might be deciding to torture me for eternity because of anything that I do. But then they'll torture me regardless of what I do, so I can just do whatever I want. And time doesn't exist so it's not like I even could do anything to avoid it, so on that account too it's inevitable and I don't need to care about it. Besides, 'I' doesn't exist anyway other than in the form of separate experience-moments, and the experience-moment that's thinking this clearly isn't being tortured, so it's logically impossible for 'me' to ever get tortured. Yay. Now I can just chill and do whatever I want to do."

      see more

        My mind is full of fuck.

        see more

          Philosophy overdose? I recommend the traditional cure: hemlock.

          see more

            I tend not to believe in any version of Tegmark IV for exactly the reason you give here: I don't know how you could ever get probabilities out of it. So it appears to fail as an explanation. I'll have to re-examine this view if you have a simple way of expressing the claim formally, since I tend to believe that something like Many-Worlds will turn into an explanation someday (based on its apparent simplicity and my belief that a finite version would give us probabilities of some kind as soon as we clarified the question). But barring that, I don't think either of us knows what you're talking about.

            You say that it "fits so perfectly together," but what does that mean neurologically and why should we care? In the real world we have no reason to expect reliability from any part of our thought save that which controls expectation, and really just that which could affect behavior. The rest could be complete garbage for all it matters to evolution. (The fact that we have any capacity for logic at all suggests that reality is made of math in a sense, but you must know that getting from there to the OP requires some big leaps.) You've started with a kind of reasoning that looks like it might work, but as far as I can tell the direction has no reason to work at all.

            But if you really think you have to deal with this, then go ahead and do it! Create from yourself a virtue ethicist who doesn't care about outcomes, and who can draw a bright line around the 'future' versions of itself who continue to pursue its Work. As it is conditionally written: "seek me only! Then the joys of my love will redeem ye from all pain."

            see more

            What kind of probabilities are you looking for? The Universal Prior, i.e. some complexity prior over all computable algorithms, should work reasonably well in a Tegmarkian world. Sure, it's not tractable, but that's part of the problem. Just another form of moral luck. (Also, Chaitin's constant.)

            Doesn't strike me as harder than probability in any infinite world. (A lack of straightforward probability-as-frequency is a reason I don't like infinite ontologies. Hard to escape them, though.)


            You’ve started with a kind of reasoning that looks like it might work, but as far as I can tell the direction has no reason to work at all.


            Yeah, skipped many aspects. This ain't an explanation, it's an emotional coredump.

            Trying to untangle exactly what assumption are going into this was the planned second part of the post, i.e. figuring out how to handle the implied invisible (or at least what axioms are on the table).

            The basic argument might go like this:

            What makes you think anything except what you directly experience exists? Because your best model of the world implies it. So we might add the rule that anything a model
            implies, exists. (If we accept the model.)

            Is it necessary for implied existence to be causally linked to you? No. See MWI - what distinguishes a distant world from our world? Nothing, both are equally real. Everything that is implied is real. Keeps the rule nice and simple.

            So we have modal realism, with "possible" determined by our model of the world. The only difference between "actual" and "possible" is a pointer.

            What makes a model good? It gives the highest compression of the data you have, i.e. a Kolmogorov prior (or something close to it).

            What program is the best compression of the world? The Universal Dovetailer.

            Once you take this view (and I'm definitely not saying it's correct), then everything fits so neatly. Compression, information theory, cryptography, decision theory, Bayes, cosmology, anthropics, ... . It all becomes the Big Fat Snarl of Modal Realism That Eats Your Brain.

            If you reject it, then at what step? A good point would be the Kolmogorov(-ish) prior. Compression isn't good enough, it also needs to find us. It's not enough that we're "somewhere in the data", but it needs to produce the actual pointer as well.

            Then you have no Occam's Razor, no MWI, really complex physics, and just generally induction becomes a whole lot harder. Simplicity goes out the fucking window.

            That might be the way to go, but it ain't pretty either.

            Or you make the evolutionary-skeptical argument that the Alternative in my post points out - why trust the reasoning process at all? It might exploit you, or be biased, or just plain nuts. Try adopting total skepticism of your thoughts, tell me how far you get. (Been there, done that.)

            (Also, I'm not arguing P(modal realism | evolution), but P(modal realism | mind), though P(evolution | modal realism ∧ mind) is relevant as well.)


            Create from yourself a virtue ethicist who doesn’t care about outcomes, and who can draw a bright line around the ‘future’ versions of itself who continue to pursue its Work.


            Going into the direction of non-consequentialist morality in an attempt to enforce locality is definitely one way I'm going with this. I favor deontology right now, but that's all still fairly open. I (weakly) suspect that all non-consequentialist attempts converge anyway.

            see more

            Have you thought through what basing an ontology on ADT-like reasoning would lead to? From the UDT/ADT perspective it seems like we can better understand the rules for acausal interaction, i.e. how "simulations" affect "measure". The vague "exists or doesn't exist" stuff gets a more analyzable framework and we can see specifically where naive algorithmic-probabilistic epistemology breaks down as a special case of decision theory; or at least, once we accept modal realism then UDT/ADT gives us a way of analyzing how that affects us that is more general than assuming a universal prior and hoping that that makes sense. It unifies the Kantian-phenomenological perspective with the Leibnizian-metaphysical one in a way that isn't quite well pinned down yet but at least suggests ways of clarifying the mess. (I think this also leads the way to understanding God as Chaitin's omega but that speculation is even less well pinned down. But I like the "without God there is no hypercomputation" aspect.)

            Some of my wanting to re-frame the modal realist stuff is because I perceive much wisdom in Mitchell Porter and Michael Vassar's semi-Kantian emphasis on what we actually experience; on the other hand, what I actually experience points toward a lot of weird agentic invisible stuff just behind the veil.

            Hot damn I sound like a crank sometimes. Oh well, it's fun.

            see more


            What kind of probabilities are you looking for?


            Practical ones, actual probabilities (or reliable qualitative estimates) that we could use to test the theory. (I'd accept a retroactive prediction at this point.) You're ignoring the part that makes it a theory. Without that, we have no reason to trust it.

            You just said you have "no idea how to make any decision at all." Apparently by accepting this computationalist-Tegmark abstraction that you (with a brain that evolved to get status in a prehistoric tribe) thought might clarify a philosophical issue, you've destroyed the part of epistemology that actually helps. What seems like the obvious solution?

            see more

            Agreed about the complete impracticality of it all, but rejecting it because of that gets you into a lot of trouble itself.

            (I'm not sure if we even disagree. I'll try to defend the modal realist view anyway (despite being very skeptical of it) because I think it may have a point, but I might just be defending it for contrarian reasons.)

            Say we actually knew the world were like what I described. Assume it actually were extremely hard, if not impossible to make correct decisions. Would you expect an agent that figured it out to produce practical results?

            I'm very sympathetic to the view that any belief that doesn't make usable predictions should be ignored. But if you take it seriously, you lose theoretical elegance, much of metaphysics, metaethics changes dramatically, etc. - and you bias yourself strongly towards finitism, small worlds and so on.

            And you still have to figure out a way to deal with the implied invisible. No known model is deterministic from our perspective. What about the outcomes that we didn't observe, but still predicted?

            If they still happened, then modal realism stands. If not, then the theories become really ugly and the universe would contain an element of raw unpredictable chaos. Some feature that magically blips certain things into existence and not others. Entirely arbitrarily so.

            This might well be the way to go. But it's a philosophical massacre.

            see more


            Say we actually knew the world were like what I described...Would you expect an agent that figured it out to produce practical results?


            I wouldn't expect a random conscious agent to produce practical results at all, regardless of what it figured out! Your theory seems to take us right back to Boltzmann brain territory.


            If they still happened, then modal realism stands.


            Doesn't this require assumptions about what our physical theories will say once we work out the details? Many-Worlds, from what I can tell, doesn't demand conscious observers in every 'possible' situation, or even in every configuration with non-zero amplitude. I'll grant you that more than one hypothesis might lead to this. But again, if it leaves us with no reason to expect predictions to work, when in fact you know perfectly well that we can make useful predictions, then maybe you made a mistake somewhere. Maybe we don't understand this well enough to believe it yet.

            That goes double for a theory whose appeal seems 'philosophical' more than practical or probabilistic. If you find it elegant because your primate brain thinks it could 'explain' consciousness by defining algorithms as conscious -- and so far I haven't seen you try to make a formal argument on other grounds -- well then, perhaps the form of reasoning which has absolutely no evolutionary reason to work doesn't work here. Because if it turns out that we have no reason to trust even the types of reasoning that appeared more closely linked to reality (by way of predictions that tended to increase reproductive success), then maybe you really shouldn't believe anything at all. And my limited-macrocosm view does not appear to have that problem.


            I think Leibniz's monadology has all the correct elements of computationalism without the unfortunate baggage connoted by the word "computation". I've been thinking for a while that someone should create a metaphysic by the name of "neomonadology". Leibniz already had the intuitions behind algorithmic probability that I think are important, so the biggest things to add to what is already implicit in monadology would be recent ideas about abstract machines and computation-in-context, which question the strict divide between a monad/computation and its perceived embedding. The same ideas would also be nice to import into UDT/ADT.


            The common answer is, why assume Christianity? I can postulate a new god that will send you to Heaven only if you aren't a Christian.


            This argument still seems really fallacious to me. The Christian God is clearly privileged because He is already highly significant to the decision calculi of the people with whom you are entangled. If the rules for acausal interaction are such that god needs the consent of a world/program in order to influence it, as is the typical stance held by those who choose not to negotiate with counterfactual terrorists (as if it were possible to avoid!), then God is privileged as the being whose influence the people have most consented to. The relevant trope is "Gods Need Prayer Badly". The Catholics get around this unfortunate situation by having faith that the Holy Ghost is representative of the divine will of the Form of the Good (who just so happens to also be the Form of Power), because otherwise we're totally screwed anyway; a more cynical view would say, well, we're just totally screwed, nothing to do about it. I think a correct decision-theoretic framing would find some middle ground via renormalization.

            I sometimes wonder if smart atheists like Eliezer know this but explicitly promote bad arguments for atheism anyway because they think God is totally cramping their style and would prefer it if He had less influence over local reality. If so I deem such activism imprudent.

            As a side note, I was part of a study of people at ultra high risk for schizophrenia about six months ago. Some SL5 ideas came up in the interviews. The lady interviewing me had no idea what to make of it: clearly I was talking about typical insane ideas, but I was doing so in a calm and lucid manner, with terminology she wasn't familiar with and in compound-complex sentences that she didn't quite have the intelligence to follow. It was sort of funny, in an absurd way.


            any mistake causes massive harm (good luck being perfect)


            "Be ye therefore perfect, as your Father which is in heaven is perfect." -- Jesus. I think it's a pretty ridiculous exhortation, to the point of hilarity. And yet it's deadly serious.


            but now I can't even make pasta without worrying that any inconsistency in my decision making opens me up for exploitation by acausal superintelligences.


            This is what I was talking about with the Christians. Basically, if any inconsistency is equivalent to consenting to be Dutch booked by power-hungry gods then we're totally screwed no matter what. Hence their saying we might as well Hope that there is an omnibenevolent God and have Faith in Him; the alternative is simply discounted because there's nothing we can do in those universes.
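            (To make "Dutch booked" concrete, here is a toy money pump; the goods and the fee are made up for illustration, not anything from the thread. An agent with cyclic preferences will pay a small fee for every "upgrade" and can be cycled indefinitely:)

            ```python
            # Toy money pump: an agent with cyclic preferences (A < B < C < A)
            # pays a small fee for each "upgrade" and can be cycled forever.
            # The goods and the fee are illustrative placeholders.

            prefers = {"A": "B", "B": "C", "C": "A"}  # maps each good to the one preferred over it
            FEE = 0.01

            holding, paid = "A", 0.0
            for _ in range(6):              # six trades = two full cycles
                holding = prefers[holding]  # trade up to the preferred good...
                paid += FEE                 # ...paying the fee each time
            print(f"after 6 trades: holding {holding!r} again, paid {paid:.2f}")
            ```

            The exploit needs nothing but the inconsistency itself, which is the sense in which an inconsistent agent has already "consented".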


            "Very soon"? Try Plank time. Blackmore is still acting as if this were Memento, where person-moments last seconds, maybe even minutes, as if any feature of consciousness at all would survive the time scale the universe actually runs on. This is not the case. Even the most barest of sensation takes milliseconds to unfold. Plank time is 10^41 times faster than that.


            I think this might be an unwise reification of physical knowledge and I would instead look for more phenomenological ideas about what time is and how it should be divided.


            I hate this whole rationality thing. If you actually take the basic assumptions of rationality seriously (as in Bayesian inference, complexity theory, algorithmic views of minds), you end up with an utterly insane universe full of mind-controlling superintelligences and impossible moral luck, and not a nice "let's build an AI so we can fuck catgirls all day" universe.


            It adds up to something like Hinduism or Christianity, which is normality for many people, or on a meta level, something like postmodernism, which is also normality for many people. Many continental philosophers would say that we must construct our own spirituality for modern times where tradition is no more. People have understood these things for a long time, it's only modern science-y types that are just rediscovering some of the most basic facts of the universe. Dharma is about how to live correctly in such a fluxious universe. Buddhism is about how to escape the universe of samsara, and Schopenhauer's philosophy is like Buddhism for those without as much time to spare. Nietzscheism is about confronting such a universe because fuck you, universe, challenge accepted. Some would tell you to just follow the will of God; Nietzsche would call that equivalent to suicide but more contemptible. There ain't no easy way out, but it's not like nobody's dealt with these problems before.

            But weirdly, despite the fact that I believe in gods and magic and thus that all this theoretical acausal mumbo-jumbo actually has effects on local phenomena, and despite the fact that I am obsessed with justification and moral philosophy and think about it many hours a day, I still seem significantly less stressed about all this than you, XiXiDu, Roko, etc. I'm not sure what's going on there. Maybe it's because I have faith in Meta.

            By the way muflax, have we met in real life? Do you live in the Berkeley area? I'd like to talk to more people like you who aren't complacent and who take moral philosophy seriously.


            Agreed about the monads, though I've only begun thinking about them very recently (and mostly after I wrote this post).

            Or rather, this modal realist anxiety is the current state of one thread of thinking I'm going through, and I don't know if that will lead anywhere or if it just gets aborted and replaced by some vastly better ontology. Other threads, including the monad one (mostly prompted by Mitchell Porter), are still very young, but keep on telling me I shouldn't freak out so much, it will all work out fine, here's your self back, relax.

            So the whole anxiety is heavily compartmentalized, and as I wrote about elsewhere, I'm actually ok with it right now. Though how well that works in the long run, I don't know.


            The Christian God is clearly privileged because He is already highly significant to the decision calculi of the people with whom you are entangled. If the rules for acausal interaction are such that god needs the consent of a world/program in order to influence it, as is the typical stance held by those who choose not to negotiate with counterfactual terrorists (as if it were possible to avoid!), then God is privileged as the being whose influence the people have most consented to.


            Good point! I hadn't considered that. But how do you pick out God? I could plausibly consent to a vast variety of gods, and by not fully understanding them, might easily get screwed that way. You might try to look at history to see which god does in fact influence us most, but that's not really obvious, and as the Tibetans noticed, the nature of a god can be quite subjective.

            Or is there some special meta way you use to pick God? (I suspect so, though I can't quite figure out how it works...)

            Or honestly perhaps, I'm bothered by gods in general for moral reasons. I can't seem to forgive them for tolerating such a world. It's a reversal of sin, essentially, in that I seek a way to forgive God, and without that, wouldn't even bother with the rest, even if I fully believed it. (Not that I blame God. Occasionally, I think of it in terms of being rightfully condemned to isolation from God, without a way to overcome it, like I'm Satan in Hell. (And I'm beginning to accept this isolation as just.) On other days I think that's just a really bad thought and I should get on with my life.)

            About the schizophrenia thing, yeah, I noticed that too. It's kinda funny how almost every symptom of mental disorders and delusions can be mapped to philosophical positions. (I have a draft about that somewhere, comparing stuff like Capgras to straightforward SL5 / Theravada views.)

            It's like the only difference between advanced schizophrenia and philosophy is the attitude and voluntariness of the belief.


            People have understood these things for a long time, it’s only modern science-y types that are just rediscovering some of the most basic facts of the universe.


            My (exaggerated) contempt for the continentals is more of a historical artifact, 'cause that's the community I initially emerged from. I spent most of my teen years reading Nietzsche, talking about UFOs and magic and so on. I eventually got fed up, maybe somewhat disappointed, and jumped ship to the skeptics. (Though mostly the crazier ones, like Julian Jaynes or RAW.) Now I discover that the science people who kept talking about normality and purely natural phenomena are just as crazy as the rest, if maybe much less so in practice.

            Kinda frustrating.


            By the way muflax, have we met in real life? Do you live in the Berkeley area? I’d like to talk to more people like you who aren’t complacent and who take moral philosophy seriously.


            Nope. Germany, I'm afraid.


            What if we don't bite the algorithm-bullet, and just say it's a physicalist universe, and decide that asking for the experience of 'green' is going nowhere unless we also ask 'who' is experiencing green (because then we can point to physical happenings in a brain, or computer, or...)? I feel you're going there in the second dialogue anyway.


            Ok, but that doesn't solve the ontology problem. You can point physicalism in the right direction, but the crucial information you're using here comes from outside physicalism. You already have to know that there are subjects and that they experience things like "green". Physicalism alone wouldn't tell you that.

            I don't deny the role of brains or embrace any particular ontology. It's just that physicalism is fundamentally unable to cope with this situation, and an algorithmic view retains much of physicalism, but introduces a lot of seemingly insane stuff.


            You're not an algorithm, you're a state machine. Well, even that's not true. More precisely, you are an entity, persisting in time, causally interacting with other entities in a particular world. You have various possible states, and if we abstracted away all the ontological details, we'd get the state machine description of you. If instead you pay attention to the ontological details of your states, you get phenomenology. And finally, if you try to make a causal model of reality which includes you and your properties, you're now concerned with the physical ontology of the self.

            The big step is to get past the oscillation between selfless materialism and selfless idealism. In your mindfulness dialogue, you seem to recognize that your "computational idealism" is just another ontological projection which ignores the projector, but lacking a third way you experience this awareness as a holiday from conceptualization of reality. In order to develop an alternative, I recommend Husserl, Merleau-Ponty, Fichte, and Schelling. That might give you the ingredients for an ontology of phenomenal subjectivity.
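            (A minimal sketch of what such a "state machine description" might look like; the states and events here are hypothetical placeholders. The point is what the abstraction keeps, bare labels and transitions, and what it drops, namely everything it is like to be in a state:)

            ```python
            # A bare "state machine description": nothing but states and transitions.
            # The labels are hypothetical placeholders; what it is like to *be* in a
            # state -- the phenomenology -- is exactly what this abstraction drops.

            transitions = {
                ("awake", "tired"): "asleep",
                ("asleep", "alarm"): "awake",
            }

            def step(state: str, event: str) -> str:
                """Return the successor state; undefined events leave the state unchanged."""
                return transitions.get((state, event), state)

            state = "awake"
            for event in ("tired", "alarm", "tired"):
                state = step(state, event)
                print(event, "->", state)
            ```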


            Exactly. Algorithmic idealism is at least something I can understand, something that makes internal sense. It's almost certainly wrong, but it at least provides a possible account of how the seeming mind/matter dualism we experience could be resolved.

            You're quite right that right now, I don't see a third way to deal with this mess. I'm already looking into the philosophy you recommend. Husserl is high on my reading list.

            (It doesn't help that I spent much of 2010 immersed in materialist (Dennett, etc.) and Buddhist (Theravada) philosophy that tried very hard to convince me to let go of the idea of a "self" or "subjective observer", and I'm now so deeply confused that I often have no idea what some people could possibly mean when they speak of a "self" that they think I don't have.)

