New Rationalism: An Introduction

New Rationalism is an umbrella term for a category of people who tend to take logical implications, or what they call “the implied invisible”, very seriously.

Someone who falls into the category of New Rationalism fits one or more of the following descriptions:

  • The person entertains hypotheses that are highly speculative, built on fragile foundations that are only slightly less speculative than the hypotheses themselves. Sometimes these hypotheses are many levels removed from empirically verified facts or from evident and uncontroversial axioms.
  • Probability estimates for the person’s hypotheses are highly unstable and diverge widely between different people.
  • The person’s hypotheses are unfalsifiable by definition, too vague to test, or prohibitively difficult to falsify.
  • It is not possible to update on evidence, because the person’s hypotheses do not discriminate between world states in which they are right and world states in which they are wrong. Either the only prediction made by the hypotheses is the eventual validation of the hypotheses themselves, or the prediction is vague enough to let the predictor ignore any evidence to the contrary.
  • The person’s hypotheses have either no decision-relevant consequences or only obscure ones.
  • The person tends to withdraw from real-world feedback loops.

A person who falls into the category of New Rationalism might employ one or more of the following rationalizations:

  • The burden of proof is reversed. The person demands that their critics provide strong evidence against their beliefs before they are allowed to dismiss them.
  • The scientific method, scientific community, and domain experts are discredited as being inadequate, deficient, irrational or stupid.
  • The person conjectures enormous risks and uses those stakes as leverage to make weak hypotheses seem vastly more important or persuasive than they really are.
  • The person argues that you should not assign a negligible probability to a hypothesis (the author’s own) being true, because doing so would require an accuracy that is reliably greater than your objective accuracy.
  • The person argues that unpacking a complex scenario will make you underestimate the probability of almost anything, because it is easy to take any event, including events that have already happened, and make it look wildly improbable by turning one pathway to it into a long series of conjunctions (see the sketch after this list).
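
The unpacking move in the last bullet can be made concrete. Here is a minimal sketch in Python; the pathway, its steps, and the step probabilities are all invented for illustration:

```python
# The "unpacking" move: take an event (even one that actually happened),
# slice one pathway to it into conjunctive steps, assign each step a
# plausible-sounding probability, and the product makes the event look
# wildly improbable. All numbers here are illustrative assumptions.

steps = {
    "step 1 of the chosen pathway": 0.6,
    "step 2 of the chosen pathway": 0.5,
    "step 3 of the chosen pathway": 0.4,
    "step 4 of the chosen pathway": 0.5,
}

p = 1.0
for description, probability in steps.items():
    p *= probability
    print(f"after '{description}': cumulative probability {p:.3f}")

# Four steps already push the estimate down to 0.06, and nothing stops the
# unpacker from slicing each step further. The sleight of hand is treating
# one finely sliced pathway as if it were the only route to the outcome.
```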

New rationalists believe that armchair theorizing is enough to discern reality from fantasy, or at least sufficient to justify taking the resulting hypotheses seriously enough to draw action-relevant conclusions from them.

This stance has resulted in hypotheses similar to solipsism (which any sane person rejects at an early age): hypotheses that are not obviously flawed, but which cannot be falsified.

The problem with new rationalists is not that they take seriously what follows from established facts or sound arguments; that practice is generally valid. For example, it is valid to believe that there are stars beyond the cosmological horizon, even though it is not possible to observe them, to retrieve information about them directly, or to verify their existence empirically. The problem is that they do not stop there: they use such implications as foundations for further speculations, which are then accepted as new foundations from which to draw further conclusions.

A textbook example of what is wrong with New Rationalism is this talk by Jaan Tallinn (transcript), which rests on a stack of ideas, each of which is itself speculative:

This talk combines the ideas of intelligence explosion, the multiverse, the anthropic principle, and the simulation argument, into an alternative model of the universe – a model where, from the perspective of a human observer, technological singularity is the norm, not the exception.

A quote from the talk by Jaan Tallinn:

We started by observing that living and playing a role in the 21st century seems to be a mind-boggling privilege, because the coming singularity might be the biggest event in the past and future history of the universe. Then we combined the computable multiverse hypothesis with the simulation argument, to arrive at the conclusion that in order to determine how special our century really is, we need to count both the physical and virtual instantiations of it.

We further talked about the motivations of post-singularity superintelligences, speculating that they might want to use simulations as a way to get in touch with each other. Finally we analyzed a particular simulation scenario in which superintelligences are searching for one another in the so called mind space, and found that, indeed, this search should generate a large number of virtual moments near the singularity, thus reducing our surprise in finding ourselves in one.

Note how all of the underlying hypotheses, although accepted by New Rationalists, are themselves speculative rather than established facts. Each underlying hypothesis is, on its own, valid to entertain. The problem starts when you begin making dependent hypotheses that rely on a number of unestablished initial hypotheses. It gets worse as the dependencies grow more fragile, with further conclusions drawn from hypotheses that are already N levels removed from established facts. But the biggest problem is that, eventually, action-relevant conclusions are drawn and acted upon.

The problem is twofold: logical implications can reach out indefinitely, and humans are spectacularly bad at making long chains of such inferences. That is why the amount of empirical evidence required to accept a belief should be proportional to its distance from established facts.
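
One way to make this heuristic concrete is a toy model (my sketch, not a calculation from the post): assume each inferential step away from established facts is sound with probability r, so a conclusion N steps out starts with a prior of at most r^N, and the strength of evidence, expressed as a Bayes factor, needed to raise that prior to a fixed posterior grows rapidly with N. The numbers below are illustrative assumptions:

```python
# Toy model: each inferential step is independently sound with probability r,
# so a conclusion N steps removed from established facts has a prior of at
# most r**N. The Bayes factor required to lift that prior to a fixed
# posterior grows quickly with N. r and the target posterior are assumptions.

def required_bayes_factor(r: float, n: int, posterior: float = 0.9) -> float:
    prior = r ** n
    prior_odds = prior / (1 - prior)
    posterior_odds = posterior / (1 - posterior)
    return posterior_odds / prior_odds

for n in (1, 2, 4, 8):
    print(f"{n} step(s) removed: Bayes factor of ~{required_bayes_factor(0.8, n):.1f} needed")
```

Under these assumptions the required evidence grows faster than linearly in the number of steps, so a merely proportional evidence requirement is, if anything, too lenient.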

It is much more probable that we will make everything worse, or waste our time, than that we are actually maximizing expected utility when we act on conjunctive speculations that are not backed by evidence. Such speculations are not only improbable; they are very likely based on fallacious reasoning.

As computationally bounded agents we are forced to restrict ourselves to empirical evidence and falsifiable hypotheses, and to discount certain obscure low-probability hypotheses. Otherwise we will fall prey to our own shortcomings and our inability to discern fantasy from reality.

  • Bruce

    I’m a little curious how much generality you think you’ve hit upon here. How recent, and how common, are questions worth considering, because what you’ve described as new rationalism sounds like a mindset that goes back some time in various fields. The specific examples of categorization and criticism you gave are what they are, but I think the first part of the post, before it delves into those examples, has potential in other, less narrow contexts. As an umbrella, procedural-level thing, some of the points you delineate do seem distinctly useful for recognition and classification beyond naive appeals to major criteria like “falsifiability” and “consensus” in sci/tech fields.

    Even if you were not familiar with the topics to start with, or had to do a little research, what might you conclude if you took the original bullet points and applied them to the work of Max Tegmark?

  • I forgot my login again

    Someone who falls into New Rationalism may fit one or more of the descriptions, but the converse isn’t necessarily true.

    Consider a conspiracy theorist or someone who believes the apocalypse will come any day now. Such a group:
    – has persons that entertain hypotheses that are highly speculative (that the apocalypse is coming or that the Illuminati controls the world)
    – has members with wildly varying probability estimates (when and how the apocalypse comes, which conspiracy theories are true: e.g. “don’t mix your reptilians in with the Illuminati. The latter is /real/, dammit!”)
    – holds hypotheses that can’t be updated on evidence (if no conspiracy is found, that just means they’re very good at invisibly pulling the strings). I will note, though, that apocalyptic groups mostly deal with failures of prediction by sweeping the dissonance away, so they don’t quite fit. See e.g. “When Prophecy Fails”.
    – holds hypotheses that have no or obscure decision relevant consequences (except for trying to wake the world up by telling them the “reality” of conspiracy, or that the end is near). Although again, doomsday groups have often prepared for the end by selling their worldly goods.
    – tends to withdraw from real-world feedback loops (which tends to happen to groups that hold strong opinions the rest of the world doesn’t agree with).

    I suspect New Rationalism is an instance of a broader pattern. Perhaps, as the nickname “the rapture of the nerds” for the Singularity suggests, New Rationalism is closer to millennialism than one would generally suspect.

  • Urthman

    The hypotheses related to bargaining with an AI are false. Any AI promise or threat to reward or punish simulated people is unverifiable and therefore meaningless.

    An AI could never prove it had tortured (or stopped torturing) a simulated person rather than merely implanting false memories, not even to the person supposedly being tortured. It would be irrational for the AI to offer to do something it does not otherwise want to do if there is no way to demonstrate that it has done it. An AI might claim to value truth-telling or promise-keeping, but there is similarly no way to verify such a claim.

    So it is never rational to assume that an AI would change the way it treats simulated people *solely* in exchange for goods and services from others.

  • Christian Kleineidam

    Your post itself manages to add a ninth level on top of the eight levels that you identified. That means that, by its own account, it is probably wrong.

  • VVV

    Oh god, you are *still* posting critiques based on Roko’s basilisk?

  • KrisKelvin

    I think a main problem is the following: reasoning with speculative hypotheses is much more fragile than many people intuitively assume. Why? Because it behaves quite differently from reasoning with relatively “hard facts”.

    If you base an argument on statements that are each highly likely to be true, it is still quite likely that the conjunction is true. Example: you have 4 statements, and each statement has a probability of p=0.95 of being true. Then the probability that all 4 statements are true is p≈0.815 (0.95^4). The probability is still quite high; it decreased by only about 14%.

    But if you have 4 statements that are each moderately speculative, say p=0.5, the probability that all 4 are true is only p=0.0625 (0.5^4). In this case the probability collapses: it drops by 87.5%! You start with merely moderately speculative hypotheses and end up with a highly unlikely conclusion.

    I guess that “new rationalists” unconsciously assume that reasoning with speculative hypotheses behaves similarly to “ordinary” reasoning with highly likely hypotheses (“facts”), namely that the likelihood of the conclusion is a little lower, but still close to the likelihood of the hypotheses. But that’s not the case: if the hypotheses are speculative, the likelihood of the conclusion is MUCH lower than that of the hypotheses.
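
    A quick check of the arithmetic quoted in this comment, in Python:

    ```python
    # Verify the joint probabilities and relative drops quoted above.
    for p, n in ((0.95, 4), (0.5, 4)):
        joint = p ** n
        print(f"p={p}: p^{n} = {joint:.4f}, relative drop = {(p - joint) / p:.1%}")
    # p=0.95: p^4 = 0.8145, relative drop = 14.3%
    # p=0.5:  p^4 = 0.0625, relative drop = 87.5%
    ```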