The Singularity Institute: How They Brainwash You

Note: The Singularity Institute is now called the Machine Intelligence Research Institute (MIRI).

The Less Wrong community is a key venue for SI (Singularity Institute) recruitment. Underlying the community’s popularity is a sequence of blog posts by Eliezer Yudkowsky that he wrote to explain all the basics people needed to know before they could follow the advanced arguments. He perceives a post on Less Wrong to be important and interesting in proportion to how much it helps construct a Friendly AI and how much it gets people to participate in the human project.

I estimate that the vast majority of the statements found in the sequences are true, or at least definitively less wrong, which generally makes them worth reading. But certain dangers lurk beneath their shiny surface. This post is meant to expose some of those dangers.

Method 1: Appeal to authority

Even though the sequences are mostly made up of sane material, most of which is related to rationality and methods for debiasing, they also contain small amounts of outrageous craziness in the form of highly contagious memes, most of which relate to certain pet hypotheses and cherished beliefs held by the author.

Since the main focus of the sequences is on rationality and cognitive biases, the author can easily be perceived as less biased and more rational than other people. Additional excursions into metaethics and altruism reinforce, in one form or another, the same underlying message, repeated over and over again: what the author says is true and good, while everyone else is stupid.

Although I have not seen much evidence of the author’s exceptional wisdom, I do not intend to reject that message as factually wrong. But I believe that the message has dangerous side effects.

The problem is that if someone utters 100 statements that seem witty and agreeable, then even if statement 101 seems crazy it might easily be taken on board as a free rider, provided the same person claims that the statement is as rational as everything else they have been saying. The crazy ideas incrementally worm their way into the reader’s confidence, hiding in plain sight, and disappear into the crowd of sanity.

But even if people feel uncomfortable accepting the few outrageous claims made in the sequences, if the major takeaway is that the author is exceptionally smart and rational, they might conclude that it is they who are probably wrong about the ideas that seem crazy, and that the author probably knows more than they do.

Method 2: Peer pressure

Many members of the so-called nonconformist crowd (nerds, SF fans, technophiles, and so on) are desperately looking for a community they can belong to: a community of similarly inclined people who largely share the same interests and hold the same beliefs. Less Wrong is such a community.

The problem is that for someone looking for community and friendship, someone excited to have finally come across like-minded people, it is much harder to critically examine the disagreeable parts of that community. It is much easier to simply ignore those problems than to risk offending people within the community and being rejected, particularly when those who are rejected are denounced as lesser rationalists or even trolls.

Especially in combination with an internal reputation system and an initial majority of people willing to defend the leader of the community, it becomes incredibly hard to openly doubt the beliefs that the leader is most fond of.

Method 3: Installing a cognitive rootkit

If you are smart, rationalization is easy. You can make up genuine arguments or cobble together concepts and ideas to defend your cherished beliefs. The result can be an intricate argumentative framework that shields you from any criticism, yet seems perfectly sane and rational from the inside.

All of Yudkowsky’s arguments about the dangers and benefits of AI are just appeals to intuition of various kinds, as indeed are the counter-arguments. So I wouldn’t hold your breath waiting for that to be settled. If he wants to live his own life based on his own hunches, that’s fine, but I see no reason for anyone else to take his land-grabs on terms like “rationality” and “altruism” at all seriously, merely because it’s not currently possible to provide mathematically rigorous proofs that his assignments of probabilities to various scenarios are incorrect. There’s an almost limitless supply of people who believe that their ideas are of Earth-shattering importance, and that it’s incumbent on the rest of the world to either follow them or spend their life proving them wrong. — Greg Egan

Skeptic: I don’t see how a technological singularity is possible.

Singularity Institute: Notice that 0 and 1 are not probabilities. You are not infinitely certain that a technological singularity is impossible.
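
An aside on that slogan: it refers to the log-odds view of belief, in which absolute certainty corresponds to infinite odds and would therefore require an infinite amount of evidence. A minimal sketch of that framing, using arbitrary probabilities of my own choosing:

```python
import math

# Illustrative only: in log-odds, probabilities of exactly 0 or 1 correspond
# to infinite odds, which is the basis of the "not probabilities" slogan.
# The probabilities below are arbitrary, chosen to show how the odds blow up.
for p in (0.5, 0.9, 0.99, 0.999999):
    log_odds = math.log(p / (1 - p))
    print(f"p = {p}: log-odds = {log_odds:.2f}")

# As p approaches 1 (or 0) the log-odds diverge, which is why the sequences
# treat absolute certainty as unattainable by any finite amount of evidence.
```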

Skeptic: True, but I believe that the probability of a technological singularity is epsilon (essentially zero).

Singularity Institute: Maybe you merely believe that you believe that. You are very likely overconfident. I doubt that you can provide the necessary amount of evidence. The overconfidence effect is a well-established bias in which someone’s subjective confidence in their judgments is reliably greater than their objective accuracy, especially when confidence is relatively high.

Skeptic: Okay! Let me restate what I am trying to say here. The probability is low enough to ignore the possibility. Personally, in ordinary life I tend to just discount events whose probability drops below a certain threshold, which of course isn’t defined in any mathematically precise way. And one good reason is that there are so many such events that there’s not enough time to consider all of them.

Singularity Institute: A reasonable person would choose a probability between 0.1 and 0.9. A probability much below 0.1 seems like overconfidence: to hold such an opinion would require deep and carefully argued insight into why a technological singularity is impossible.

Skeptic: Even if I were to accept your probability range, there are much more tangible issues to worry about than the possibility of a technological singularity, for example climate change: a likely effect of global warming is the extinction of up to 80% of species.

Singularity Institute: In which you neglect the implied scope of consequences. This is crunch time. This is crunch time for the entire human species. And it’s crunch time not just for us, it’s crunch time for the intergalactic civilization whose existence depends on us. I think that if you’re actually just going to sort of confront it, rationally, full-on, then you can’t really justify trading off any part of that intergalactic civilization. Two hundred million years from now, the children’s children’s children of humanity in their galaxy-civilizations are unlikely to look back and say, “You know, in retrospect, it really would have been worth not colonizing the Hercules supercluster if only we could have saved 80% of species instead of 20%”. I don’t think they’ll spend much time fretting about it at all, really. It is really incredibly hard to make the consequentialist utilitarian case here, as opposed to the warm-fuzzies case.

Skeptic: Yet it feels wrong to neglect tangible scenarios such as climate change because of the possibility of vast amounts of value associated with some hypothetical, inference-based scenario. My intuition is that it is impossible to take that idea seriously, because doing so would mean having to take similar ideas seriously as well, reasoning that will ultimately make you privilege unlikely high-utility outcomes over much more probable hypotheses, or even theories that are based on strong empirical evidence.

Singularity Institute: The ability to shut up and multiply, to trust the math even when it feels wrong, is a key rationalist skill.
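
To make concrete what “trusting the math” cashes out to in this exchange, here is a toy expected-value calculation. The probabilities and utilities are entirely made up for illustration and are nobody’s actual estimates; the point is only how naive expected value behaves:

```python
# Toy "shut up and multiply" calculation with made-up numbers. Under naive
# expected value, a tiny probability attached to an astronomically large
# payoff swamps a well-evidenced, high-probability outcome.

scenarios = {
    "mitigate climate change (well-evidenced)": (0.9, 10**6),
    "enable an intergalactic future (speculative)": (1e-9, 10**30),
}

for name, (probability, utility) in scenarios.items():
    print(f"{name}: expected value = {probability * utility:.3g}")

# The speculative scenario wins by roughly fifteen orders of magnitude, which
# is exactly what the skeptic calls privileging unlikely high-utility
# outcomes over much more probable hypotheses.
```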

Skeptic: But just because it is math it doesn’t mean that it can’t be broken or unreliable given certain circumstances. To use a metaphor, you seem to be saying that we should try to control an aircraft solely by reference to instruments and completely ignore outside visual cues. I don’t think this is reasonable.

Singularity Institute: Ultimately, when you reflect on how your mind operates, and consider questions like “why does Occam’s razor work?” and “why do I expect the future to be like the past?”, you have no other option but to use your own mind. There is no way to jump to an ideal state of pure emptiness and evaluate these claims without using your existing mind.

Skeptic: But given that we are ultimately doomed to use our own minds to evaluate the correctness of our methods, you seem to overcompensate for possible biases if you ignore human intuition and do not discount the implied invisible. After all, humans are not expected utility maximizers and do not have an unbounded utility function. It therefore seems sensible to apply a discount rate.

Singularity Institute: I am advocating against the idea that you should compound a 5% discount rate a century out when you are valuing global catastrophic risk management. Experiments have confirmed that spontaneous preferences by both human and nonhuman subjects follow a hyperbolic curve. If your temporal discounting follows any curve other than the exponential, you’ll have time-inconsistent goals that force you to wage war against your future selves. For instance, when offered the choice between $50 now and $100 a year from now, many people will choose the immediate $50. However, given the choice between $50 in five years or $100 in six years, almost everyone will choose the $100 in six years, even though that is the same choice seen at five years’ greater distance. Meanwhile, a 5%-per-year discount rate, compounded exponentially, implies that it is worth saving a single person from torture today at the cost of 168 people being tortured a century later, or a googol persons being tortured 4,490 years later. If you wouldn’t burn alive 1,226,786,652 people today to save Giordano Bruno from the stake in 1600, then clearly, you do not have a 5%-per-year temporal discount rate in your pure preferences.
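
The time-inconsistency claim, at least, is easy to check for yourself. Below is a small sketch comparing an exponential discounter with a hyperbolic one on the $50/$100 choices quoted above; the particular parameters (a 5%-per-year exponential rate, a hyperbolic curvature of 1.5 per year) are my own assumptions, chosen only to make the preference reversal visible:

```python
def exponential_pv(amount, years, rate=0.05):
    """Present value of `amount` received `years` from now under exponential discounting."""
    return amount / (1 + rate) ** years

def hyperbolic_pv(amount, years, k=1.5):
    """Present value of `amount` received `years` from now under hyperbolic discounting."""
    return amount / (1 + k * years)

for label, pv in (("exponential", exponential_pv), ("hyperbolic", hyperbolic_pv)):
    print(label)
    print("  $50 now        vs $100 in 1 year :", round(pv(50, 0), 2), "vs", round(pv(100, 1), 2))
    print("  $50 in 5 years vs $100 in 6 years:", round(pv(50, 5), 2), "vs", round(pv(100, 6), 2))

# The exponential discounter ranks both pairs the same way, so its choices are
# time-consistent. The hyperbolic discounter prefers $50 now over $100 in a
# year, yet prefers $100 in six years over $50 in five: the reversal that the
# quoted experiments report.
```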
