MIRI/LessWrong Mockery: Index

I decided to move to this page almost all posts that are outdated, meant as outright satire, rants, or that could generally be perceived as pejorative or actively malicious. Instead of deleting those posts, I hereby distance myself from the content below.

For serious, specific and non-satirical criticism, please visit my homepage or this page, which lists all known critiques of the Machine Intelligence Research Institute (MIRI), formerly known as the Singularity Institute for Artificial Intelligence, and LessWrong.

Contents

O Robot Cult

As a young teenager I was overly worried about dying. You have to strike the right balance. I suspect that the people mocked in the video above are those who are overly worried, who will take any action to survive, however unlikely it is to work, and who preach that lifestyle.

Research on how to stop aging? Great. Research on cryonics? Great. Telling people that they are lousy parents for not signing their kids up for cryonics because there will soon be an intelligence trillions of times smarter that will fix all damage? Crazy!


LessWrong / CFAR / MIRI / Eliezer Yudkowsky / Julia Galef / Skeptics?

Here are two videos that I just stumbled upon. The first analyzes Eliezer Yudkowsky and the organizations that he implicitly controls. The second is just satire.

Note: I was not involved in the production of the videos below or their descriptions. The videos were written and narrated by Robert Gross. That I am posting these videos does not mean that I agree with everything said in them.

[Google+ discussion]

1. A critical look at transhumanist organization Lesswrong and its umbrella organizations.

2. This video is an example of Gross’s Law: [ a | a' ], or, the ease with which an opponent of a position can make that position sound absurd is directly proportional to the odds that the position actually *is* absurd.
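One possible reading of that proportionality, with O and E as my own labels (Gross's [ a | a' ] notation is left untouched above), is:

\[
O(\text{position is absurd}) \;\propto\; E(\text{ease of making the position sound absurd})
\]

That is, the easier the parody, the higher the odds that the position deserves it.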

References:

Lest the above references give you a wrong impression, please scroll down here and check out some of my latest posts in which I outline more serious, specific and non-satirical criticism. Or see a list of all critical posts here.


AI Doomsday Recipe

[Google+ Discussion]

(1) Keep your artificial intelligence (AI) as vague as possible so that nobody can point out flaws in the scenario that you want to depict.

(2) Claim that almost any AI is going to be dangerous because all AIs want to take over the world. For example, if you ask the AI “Hey AI, calculate 1+1”, the AI goes FOOOOM and the end of the world follows seconds later.

(2.1) If someone has doubts just use buzzwords such as ‘anthropomorphic bias’ to ridicule them.

(3) Skip the difficulty of explaining why anyone would want to build the kind of AI you have in mind. We’re not concerned with how practical AI is developed, after all.

(4) Make your AI as powerful as you can imagine. Since you are ignoring practical AI development and don’t bother with details, this should be no problem.

(4.1) If someone questions the power of your AI, just point out how humans can intelligently design stuff that monkeys don’t understand. Therefore humans can design stuff that humans don’t understand, which will then start designing even more incomprehensible stuff.

(5) Outline how, as soon as you plug a superhuman machine into the Internet, it will be everywhere moments later, deleting all your porn videos. Don’t worry if you have no idea how that’s supposed to work in practice, because your AI is conjectured to be much smarter than you are, so you are allowed to depict scenarios that you don’t understand at all.

(5.1) If someone asks how much smarter you expect the AI to be, just make up something like “1000 times smarter”. Don’t worry about what that means, because you never defined what intelligence is supposed to be in the first place.

(5.2) If someone calls bullshit on your doomsday scenario just conjecture nanotechnology to make your AI even more powerful, because everyone knows from science fiction how nanotech can pretty much fuck up everything.

(6) If nothing else works, frame your concerns as a prediction of a worst-case scenario that needs to be taken seriously, even given a low probability of its occurrence, due to the scale of the negative consequences associated with it. Portray yourself as a concerned albeit calm researcher who questions the mainstream opinion due to his strong commitment to our collective future. To dramatize the situation even further, you can depict the long-term consequences and conjecture the possibility of an intergalactic civilization that depends on us.


Catholic Church vs. MIRI

(Disclaimer: This is mostly fun and not to be taken seriously. Although the fact that I perceive it to be necessary to add this disclaimer should be food for thought.)

Today I stumbled upon the following passage in the Compendium of the Catechism of the Catholic Church, which offers the Church’s teaching on who will be “saved” and how (emphasis mine):

171. What is the meaning of the affirmation “Outside the Church there is no salvation”?

846-848

This means that all salvation comes from Christ, the Head, through the Church which is his body. Hence they cannot be saved who, knowing the Church as founded by Christ and necessary for salvation, would refuse to enter her or remain in her. At the same time, thanks to Christ and to his Church, those who through no fault of their own do not know the Gospel of Christ and his Church but sincerely seek God and, moved by grace, try to do his will as it is known through the dictates of conscience can attain eternal salvation.

The connection to the Machine Intelligence Research Institute (MIRI) is Roko’s basilisk: If you know about existential risks but do nothing to support the creation of friendly AI (God) then it will eventually torture you until the end of the universe.

[Image: technogod]

“Whoever knowingly chooses to save one life, when they could have saved two – to say nothing of a thousand lives, or a world – they have damned themselves as thoroughly as any murderer.” — Eliezer Yudkowsky, One Life Against the World

And that’s not the only similarity between the Catholic Church and the rationalist community related to MIRI.

The following creed was written by muflax:

There is no Science but Bayes and it is our Method.

I believe in Probability Theory, the Foundation, the wellspring of knowledge,
I believe in Bayes, Its only Interpretation, our Method.
It was discovered by the power of Induction and given form by the Elder Jaynes.
It suffered from the lack of priors, was complicated, obscure, and forgotten.
It descended into AI winter. In the third millennium it rose again.
It ascended into relevance and is seated at the core of our FAI.
It will be implemented to judge the true and the false.
I believe in the Sequences,
Many Worlds, too slow science,
the solution of metaethics,
the cryopreservation of the brain,
and sanity everlasting.
Phyg.

Further reading:

  1. How They Brainwash You
  2. How their arguments are broken (Addendum)
  3. We are MIRI. Argument is futile.
  4. Should you trust them?
  5. MIRI/LessWrong Critiques: Index

Charity Explosion: Maximizing Charitable Giving

If you are interested in charitable giving, you might want to know how to get the most bang for your buck, or how to maximize the good you do. This taxonomy of levels of charities, listed in ascending order of effectiveness, might help you out.

A Taxonomy of Charities

Level I: Standard Charity

A standard charity directly turns money into goods, such as mosquito nets, helping those in need.

Level II: Meta Charity

A meta charity evaluates Level I charities to identify outstanding charities that are proven, cost-effective, scalable, and transparent.

Level III: Meta Meta Charity

A meta meta charity evaluates Level II charities, identifying outstanding meta charities that are successful at identifying outstanding Level I charities.

Level IV: Fundraising Charity

A Level IV charity fundraises for whoever the best Level III charity recommends, and raises more than a dollar with each dollar it receives.

Level V: Recursive Charity

A Level V charity raises funds for itself and uses those funds to improve its fundraising capabilities. This leads to a so-called charity explosion, leaving all charities from the previous categories far behind.
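To spell out the mechanics being parodied: if each dollar raised reliably brings in more than a dollar of new donations and everything is reinvested, the total grows geometrically. A minimal Python sketch with made-up numbers (the function name and multiplier are purely illustrative):

    # Toy model of the "charity explosion": a hypothetical Level V charity that
    # turns every dollar raised into `multiplier` dollars of new donations and
    # reinvests all of it. All numbers are invented for illustration.
    def charity_explosion(seed_dollars, multiplier, rounds):
        totals = [seed_dollars]
        for _ in range(rounds):
            totals.append(totals[-1] * multiplier)  # each round compounds on the last
        return totals

    for year, total in enumerate(charity_explosion(100.0, 1.5, 10)):
        print(f"round {year}: ${total:,.2f}")

With any multiplier above 1 the totals run away, which is exactly the absurdity this level is poking fun at.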

Level VI: Pascal’s Charity

A Level VI charity features a low but non-negligible probability of an extremely high but finite return, e.g. saving 3^^^^3 lives.

Level VII: Infinite Charity

Level VII consists of charities whose infinite value smothers all considerations of merely finite value. It subsumes all other levels and therefore brings closure to the hierarchy of charities; there cannot be, say, a Level VIII.

Since such a charity is a logically coherent and imaginable possibility, it should be assigned a finite positive probability.
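To make the arithmetic being mocked explicit: expected value lets an arbitrarily tiny probability be swamped by a large enough payoff. A toy comparison in Python, with all numbers invented and 10**100 standing in for 3^^^^3 (which is far too large to write down):

    # Toy expected-value comparison for "Pascal's Charity": a one-in-10^30 chance
    # of saving 10^100 lives versus saving 1,000 lives with certainty.
    # The figures are made up; 10**100 is a stand-in for 3^^^^3.
    ordinary_charity = 1.0 * 1_000        # expected lives saved, ordinary charity
    pascals_charity = 1e-30 * 10**100     # expected lives saved, Pascal's charity

    print(f"ordinary charity: {ordinary_charity:.0f} expected lives")
    print(f"Pascal's charity: {pascals_charity:.3e} expected lives")
    print(f"ratio: {pascals_charity / ordinary_charity:.1e}")

On this logic the “expected” good of the speculative charity dwarfs the certain one by dozens of orders of magnitude, which is precisely the reasoning the post is satirizing.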


Less Wrong: What annoys me MOST

Disclaimer: This is a rant; you may skip it. If you are a devoted Less Wrong member or Singularity Institute associate, please prepare yourself and calm down, or leave immediately.

I just have to write this down. What annoys me the most about lesswrong.com is not, as you might have thought, the in-group favoritism, cultish and creepy behavior, doom-mongering, naive belief in progress, constant moralizing, horrible moderators, reputation system, insane bullshit, censorship, condescending attitude, or their withdrawal from any criticism.

What really makes me cringe is their “Politics is the Mind-Killer” credo. They parrot it like a bunch of zombies!

To quote RationalWiki:

“Politics is the mindkiller” is the mindkiller. The meme came from a Yudkowsky post about how politics-related discussion reliably goes off the rails. This has developed into a community aversion to anything even tangentially political — even the word “politics” itself is avoided and euphemised as “mindkilling” as a discussion-stopper. This gets wacky when the discussion is of an actual existential risk to humanity, global warming, or indeed almost anything else actually practical, despite the site’s claimed serious interest in existential risk, as these genius autodidacts show just how to use their newfound rationality skills for rationalisation. Libertarian politics are of course the neutral baseline, it’s other politics that are mindkilling.

Please stop it! It is ridiculous! If all of your dearly beloved rationality does not enable you to talk about such a mundane subject as politics in a civilized and thoughtful manner then you are doing it wrong!


Cognitive Dissonance

Related to: Rationality, Irrationality or Magic?

Here is another example that I forgot to add to the last post:

Example 1: Roko’s Basilisk

Bullshit Meter: Confused / Cognitive Dissonance

lesswrong.com user orthonormal seems to be one of those people who became worried about certain decision-theoretic thought experiments.

Yet he went on to extensively research decision theory. Is it just me, or is there some sort of discrepancy here?

It seems difficult to research decision theory while at the same time fearing that one could end up thinking about the related ideas in sufficient detail. If there is any way to stumble upon that detail, it is surely by doing decision-theoretic research.

But similar discrepancies seem to occur elsewhere as well.

Another discrepancy is how people associated with lesswrong.com try to avoid arguments in favor of multiplying small probabilities by vast utilities. There doesn’t seem to be a good formal argument for why a rational agent shouldn’t do that. What speaks against it is the intuition that it is crazy and won’t work out. But they neither dismiss the underlying theories nor “shut up and multiply” and accept the consequences.


Rationality, Irrationality or Magic?

I got the idea that XiXiDu was going crazy because he didn’t see any options beyond dedicating his life to door-to-door singularity advocacy or finding the fatal flaw which proved once and for all that SI are a bunch of deluded charlatans, and he didn’t want to do the former just because a philosophical argument told him to and couldn’t quite manage the latter. — Risto Saarelma

Okay, people are talking about me. But I can’t figure out what they are actually suggesting.

First of all, I am not the one who is claiming that we should freak out about risks associated with artificial intelligence. I am not doing anything besides chatting about the topic, asking questions and writing some posts and comments.

Incidentally, I am one of the few people who bothered to ask actual AI researchers what they think. Is that a sign of craziness? Maybe. Maybe that was already too much effort.

Anyway, what alternative to door-to-door singularity advocacy or outright irrationality do you offer? The impenetrable #’?!! of people like Will Newsome or muflax?

On a side note, here is some shit that is really freaking me out. I must be missing something, or those people use some sort of magical procedure to arrive at their conclusions and decisions:

Example 1: Simulation Shutdown

Bullshit Meter: Batshit insane

I certainly can’t rule out the possibility that we live in a computer simulation. I think Nick Bostrom (Oxford) is right that the probability that we are in a simulation is high enough that we should be somewhat concerned about the risk of simulation shutdown…

— Luke Muehlhauser, CEO of the Singularity Institute

Risks from synthetic biology and simulation shutdown look like they might knock out scientific advancement before we create an AI singularity.

— Luke Muehlhauser, CEO of the Singularity Institute

Due to updates about simulation shutdown risk and the difficulty of FAI philosophy (I think it’s easier than I used to believe, though still very hard), I think an FAI team is a better idea than I thought four months ago.

— Luke Muehlhauser, CEO of the Singularity Institute

As far as I can tell, he is claiming to be able to reason about the risk of a simulation shutdown in enough detail to draw action-relevant conclusions about having an FAI team. Something is very wrong…

Example 2: AI FOOM

Bullshit Meter: Delusional

It might be developed in a server cluster somewhere, but as soon as you plug a superhuman machine into the internet it will be everywhere moments later.

— Luke Muehlhauser, CEO of the Singularity Institute

Fantasy!

Example 3: Cryonics / Brain preservation

Bullshit Meter: Overconfident / Delusional

I’ve signed up for cryonics (with Alcor) because I believe that if civilization doesn’t collapse then within the next 100 years there will likely be an intelligence trillions upon trillions of times smarter than anyone alive today.

— James Miller

The biggest single charity donation I’ve made so far is ~$100. But now I’m donating $5000 to an exceptionally worthy cause. And I suggest you donate too. Here’s my cause:

People who “die” today could live again in the future, perhaps forever, as brain emulations (= uploads, ems), if enough info were saved today about their brains.

— Robin Hanson, Plastination Is Near

So he contributed 50 times the amount of any previous donation in support of brain plastination research? How did he decide to do that? In particular, how did he figure out that it is 50 times more worthwhile than contributing that money to any other charity? How do you calculate that?


Possible reasons for a perception of lesswrong/SIAI as a cult

Note: You can find a list of almost all major and minor critiques of LessWrong and the Machine Intelligence Research Institute (MIRI), formerly known as the Singularity Institute for Artificial Intelligence, here.

Related to: The Singularity Institute: How They Brainwash You

See also: Is criticism of SIAI/LW a result of hostility?

It’s one thing to accuse atheists of making their atheism like a religion, or accuse a particular group of atheists of being cult-like. In my experience there’s never anything behind those accusations, but in theory you could have a conversation about them by asking, “in what ways?” and so on. But whenever I hear accusations of being “no different” from fundamentalism or a cult, it just makes me rage. — Chris Hallquist, “No different from fundamentalists / a cult”

(This post was originally written as a quick comment reply to a third-party blog post linked to above. But because I made a mistake, and was unable to edit the comment, I posted a corrected version here. So please consider that this post is not intended to be a well-thought-out critique.)

Here are just a few possible reasons for the perception that lesswrong/SIAI is a cult:

1.) Doomsday Prophecy

[…] I would be asking for more people to make as much money as possible if they’re the sorts of people who can make a lot of money and can donate a substantial fraction, never mind all the minimal living expenses, to the Singularity Institute.

This is crunch time. This is crunch time for the entire human species. […] and it’s crunch time not just for us, it’s crunch time for the intergalactic civilization whose existence depends on us. I think that if you’re actually just going to sort of confront it, rationally, full-on, then you can’t really justify trading off any part of that intergalactic civilization for any intrinsic thing that you could get nowadays […]

[…] having seen that intergalactic civilization depends on us, in one sense, all you can really do is try not to think about that, and in another sense though, if you spend your whole life creating art to inspire people to fight global warming, you’re taking that ‘forgetting about intergalactic civilization’ thing much too far. — Video Q&A with Eliezer Yudkowsky

2.) They are trying to save humanity by taking over the world.

“I intend to plunge into the decision theory of self-modifying decision systems and never look back. (And finish the decision theory and implement it and run the AI, at which point, if all goes well, we Win.)” — Eliezer Yudkowsky in an interview with John Baez

“After all, if you had the complete decision process, you could run it as an AI, and I’d be coding it up right now.”— Eliezer Yudkowsky in a comment on Less Wrong

3.) Belief in the existence of superhuman beings.

Examples are friendly/unfriendly AIs and the widespread discussion of the Simulation Argument.

I certainly can’t rule out the possibility that we live in a computer simulation. I think Nick Bostrom (Oxford) is right that the probability that we are in a simulation is high enough that we should be somewhat concerned about the risk of simulation shutdown… — Luke Muehlhauser, CEO of the Singularity Institute

I bet there’s at least one up-arrow-sized hypergalactic civilization folded into a halting Turing machine with 15 states, or something like that. […] It might perhaps be more limited than this in mere practice, if it’s just running on a laptop computer or something. — Eliezer Yudkowsky, Complexity and Intelligence

4.) Belief in a life beyond death.

Examples are cryonics and mind uploading.

I’ve signed up for cryonics (with Alcor) because I believe that if civilization doesn’t collapse then within the next 100 years there will likely be an intelligence trillions upon trillions of times smarter than anyone alive today. — James Miller, lesswrong.com

5.) Belief in heaven and hell.

Examples are the promise of a posthuman future and artificial hells resulting from blackmail by superintelligences by means of acausal trade.

6.) Withdrawal from real-world feedback loops.

More: ‘We are SIAI. Argument is futile.’

Science is built around the assumption that you’re too stupid and self-deceiving to just use Solomonoff induction. After all, if it was that simple, we wouldn’t need a social process of science… right? — Eliezer Yudkowsky, Science Doesn’t Trust Your Rationality

I am tempted to say that a doctorate in AI would be negatively useful, but I am not one to hold someone’s reckless youth against them – just because you acquired a doctorate in AI doesn’t mean you should be permanently disqualified. — Eliezer Yudkowsky, So You Want To Be A Seed AI Programmer

7.) The existence of bootcamps.

See for example “Rationality Boot Camp”.

8.) Cultish language and behavior.

See for example “Epistle to the New York Less Wrongians” or “I Stand by the Sequences”.

If you don’t sign up your kids for cryonics then you are a lousy parent. — Eliezer Yudkowsky, Normal Cryonics

9.) Idolization of Eliezer Yudkowsky.

10.) A large corpus of content written by a single author and an appeal to his intelligence and rationality.

11.) Use of a reputation system to select for agreement rather than peer review.

More: ‘How the LessWrong reputation system sucks’.

12.) Portrayal of outsiders as less intelligent or irrational.

See for example ‘Philosophy: A Diseased Discipline’.

If you haven’t read through the MWI sequence, read it. Then try to talk with your smart friends about it. You will soon learn that your smart friends and favorite SF writers are not remotely close to the rationality standards of Less Wrong, and you will no longer think it anywhere near as plausible that their differing opinion is because they know some incredible secret knowledge you don’t. — Eliezer Yudkowsky in a comment on ‘Should I believe what the SIAI claims’

13.) Mobs of lesswrong.com members appear wherever there is disagreement from experts.

I recently wrote a blog post about my own AI project, but it attracted a bunch of adversarial comments from folks influenced by the Singularity Institute for AI’s (rather different) perspective on the best approach to AI R&D. — Ben Goertzel, The Singularity Institute’s Scary Idea (and Why I Don’t Buy It)

14.) Use of lesswrong.com and “rationality” as a key venue for SIAI recruitment.

  • The Sequences have been written with the goal in mind of convincing people of the importance of taking risks from AI seriously and therefore donate to SI: “…after a few years of beating my head against the wall trying to get other people involved, I realized that I really did have to go back to the beginning, start over, and explain all the basics that people needed to know before they could follow the advanced arguments. Saving the world via AI research simply can’t compete against the Society for Treating Rare Diseases in Cute Kittens unless your audience knows about things like scope insensitivity…” (Reference: An interview with Eliezer Yudkowsky).
  • Less Wrong is used to ask for donations.
  • You can find a logo with a link to SI in the LessWrong header, and a logo with a link to LessWrong on SIAI’s front page.
  • LessWrong is mentioned as an achievement of SI (Quote: “Less Wrong is important to the Singularity Institute’s work towards a beneficial Singularity”).
  • A quote from the official SIAI homepage: “Less Wrong is [...] a key venue for SIAI recruitment”.

The beginning of the end?

I expected more disagreement than this. Was my post really that persuasive?

— Luke Muehlhauser

I wonder if that’s how it is going to end, once everyone willing to voice criticism has finally given up. Are they going to mistake a lack of opposition for an affirmation of their beliefs?

It looks like the time is nigh when they will finally manage to create a community of people who “wear a uniform and march in lockstep”. A community in which any dissent is frowned upon. A self-referential feedback loop where members keep updating in favor of their beliefs based on the evidence provided by their mutual agreement.

I never had the nerve to seriously engage with that community, not in a thoughtful way. And now I no longer care, nor do I have the time.

The above comment from Luke Muehlhauser barely crosses the threshold of annoyance sufficient to make me write this rant and publicly proclaim that his post was exceptionally poor rather than persuasive. Or, to use the words of someone else:

A lot of sound and fury, signifying nothing. SI was judged and found wanting by someone who wasn’t part of the groupthink …. and now the groupthinkers simply cannot understand why real rational people (as opposed to pseudo-rational wannabees) can see through the charade.

Well, I don’t actually think that it is a charade. They seem to believe what they are saying. But they are genuinely incapable of noticing their shortcomings or of understanding people who find their arguments deeply unpersuasive.

I don’t have much hope anymore that they are going to notice. Their line of defense is just too strong.

Their weapons are the vagueness and the expected value of their claims. Their tactic, in light of the imprecision of their own beliefs, is to demand impossible specificity from their opponents and a proof of their own wrongness that their opponents are predictably unable to deliver.

Dragon slayer: At some point a dragon will appear in your garage, and you had better take that possibility seriously because you might be eaten.

Dragon skeptic: Dragons are mythical creatures, I doubt that I will stumble upon a dragon any time soon.

Dragon slayer: Given what we know about physics, dragons are possible. And since being eaten is extremely negative, even given the lack of any empirical evidence in support of the possibility of a dragon appearing in your garage, you should take that possibility seriously. Just think about your possible children and your children’s children and their children. The loss in expected value resulting from your death will be enormous. You have to save those people!

Dragon skeptic: To refine my estimate regarding your dragon: what do you anticipate will happen before the dragon appears in my garage? Is there any way to update on evidence before it appears?

Dragon slayer: No, I don’t know enough about dragons to be more specific about my prediction. I will know when I see the dragon though.

Dragon skeptic: Hmm. Could you then tell me what led you to believe that a dragon might appear in my garage and why it would be dangerous?

Dragon slayer: We know that once upon a time huge giant reptiles roamed the earth. We also know that flamethrowers are technically feasible. Further, most giant animals do not care about human well-being, which makes them extremely dangerous. Well, okay, elephants and whales are not dangerous, but you can’t reasonably expect that most giant flame-throwing reptiles are like that…

Dragon skeptic: Ok, let’s assume such a thing is possible. Why would it appear in my garage? I mean, sure, evolution might result in such a thing as a dragon at some point but…

Dragon slayer: I didn’t say it will happen tomorrow, I don’t like to talk about time frames.

Dragon skeptic: I see. Then what do you suggest that I do?

Dragon slayer: Man forges his own destiny. Give me money so that I can think more about dragons and forge a sword with which I can slay any dragon. I call it Excalibur.

Dragon skeptic: Interesting. I’d like to see some credentials showing that you are a sophisticated blacksmith.

Dragon slayer: I am afraid I can’t do that. A proof of my skills would involve the disclosure of dangerous knowledge! The kind of sword that I forge could be abused to slay humans more easily than dragons.

Dragon skeptic: You make a strong case. Have $3,000.


LessWrong Idioms Explained

The Sequences

Meaning: A large corpus of blog posts that obfuscates folly and justifies unfounded beliefs with a huge amount of unrelated clutter and an appeal to the intelligence and rationality of the author.

“Go read the Sequences!” means “I don’t know how to justify this bullshit but you either won’t read hundreds of lesswrong posts just to continue this discussion or you will end up sufficiently brainwashed to believe this shit yourself.”

Troll

Meaning: Usually someone challenging or identifying bullshit.

“You are a troll!” means “I don’t like you for revealing my crackpottery.”

Traditional rationalist

Meaning: Someone who has not yet been brainwashed or who cannot be brainwashed.

Intelligence explosion

Meaning: Another term for “magic”. Mainly used to justify arbitrary bullshit.

Solomonoff induction

Meaning: One of several technically correct but uncomputable and practically useless methods.

Mostly used as impressive-sounding jargon and to sprinkle enough mathematics and logic over certain fantasies to give them a veneer of respectability.

The use of alleged approximations to such methods under arbitrary real-world circumstances, by fallible humans, is actively dangerous, as it can erroneously lead to, or be abused to justify, various amounts of bullshit.

Prediction

Meaning: Framing extraordinary beliefs as predictions without providing a time frame is a common tactic used to withdraw from real-world feedback loops. Additionally, such predictions are made sufficiently vague as to allow the predictor to ignore any evidence to the contrary.

Rationality

Meaning: Sophisticated methods used to evade any criticism in a way that seems to place the burden of proof on the critic.

Consequentialism

Meaning: The self-delusional idea that the long-term detriments of one’s actions are computable, that values can be static, that one can assign value in a time-consistent way, and so on.

Existential Risk

Meaning: Inventing enormous risks and then using them as leverage to make weak arguments and unlikely scenarios seem vastly more important or persuasive than they really are is a tactic very similar to those that have contributed to the hold of religion on people.

Scope insensitivity

Meaning: A term that is used to rationalize highly specific, conjunctive, non-evidence-backed speculations on possible bad outcomes.

To be continued…

  • seahen

    Level V is called an investment bank.

  • seahen

    Samuel Johnson wrote, “I never desire to converse with a man who has written more than he has read.” Is Eliezer one of those guys?

  • Squirrel

    I’m personally most amused at the hypocrisy of the Less Wrong people, how they claim to be rationalists but are anything but.

    As in, I told one of the members that my ex-partner (a fellow member) was emotionally abusive of me. If he were really rationalist, he’d collect the data from multiple sources and come to his own conclusion. Instead he got defensive and stressed the closeness that he had with my abusive ex, while dismissing the content of my complaint.

    Then again, what do I know? I’m just a hysterical female.