Eliezer Yudkowsky Quotes

MIRI/LessWrong Critiques: Index

Note: This is a page of quotes. Naturally some minor caveats might be missing. But I believe that each quote does indeed reflect the author’s current or past beliefs. Below each quote is a link to its full context.

If you believe that any of these quotes is deceptive, where “deceptive” means that the larger context indicates that the author does not support the meaning of the stand-alone quote, let me know by commenting below.

Quotes

And if Novamente should ever cross the finish line, we all die. That is what I believe or I would be working for Ben this instant.

— Eliezer Yudkowsky, on SL4

I think my efforts could spell the difference between life and death for most of humanity, or even the difference between a Singularity and a lifeless, sterilized planet [...] I think that I can save the world, not just because I’m the one who happens to be making the effort, but because I’m the only one who can make the effort.

— Eliezer Yudkowsky, Eliezer, the person

Striving toward total rationality and total altruism comes easily to me. [...] I’ll try not to be an arrogant bastard, but I’m definitely arrogant. I’m incredibly brilliant and yes, I’m proud of it, and what’s more, I enjoy showing off and bragging about it. I don’t know if that’s who I aspire to be, but it’s surely who I am. I don’t demand that everyone acknowledge my incredible brilliance, but I’m not going to cut against the grain of my nature, either. The next time someone incredulously asks, “You think you’re so smart, huh?” I’m going to answer, “*Hell* yes, and I am pursuing a task appropriate to my talents.” If anyone thinks that a Friendly AI can be created by a moderately bright researcher, they have rocks in their head. This is a job for what I can only call Eliezer-class intelligence.

— Eliezer Yudkowsky, in a reply to Ben Goertzel

Whoever knowingly chooses to save one life, when they could have saved two – to say nothing of a thousand lives, or a world – they have damned themselves as thoroughly as any murderer.

— Eliezer Yudkowsky, One Life Against the World

If you don’t sign up your kids for cryonics then you are a lousy parent.

— Eliezer Yudkowsky, Normal Cryonics

To build an AI that discovers the orbits of the planets, the programmers need not know the math of Newtonian mechanics, only the math of Bayesian probability theory.

— Eliezer Yudkowsky, Singularity Hypotheses: A Scientific and Philosophical Assessment

Science is built around the assumption that you’re too stupid and self-deceiving to just use Solomonoff induction. After all, if it was that simple, we wouldn’t need a social process of science… right?

— Eliezer Yudkowsky, Science Doesn’t Trust Your Rationality

I am tempted to say that a doctorate in AI would be negatively useful, but I am not one to hold someone’s reckless youth against them – just because you acquired a doctorate in AI doesn’t mean you should be permanently disqualified.

— Eliezer Yudkowsky, So You Want To Be A Seed AI Programmer

I intend to plunge into the decision theory of self-modifying decision systems and never look back. (And finish the decision theory and implement it and run the AI, at which point, if all goes well, we Win.)

— Eliezer Yudkowsky in an interview with John Baez

Take metaethics, a solved problem: what are the odds that someone who still thought metaethics was a Deep Mystery could write an AI algorithm that could come up with a correct metaethics? I tried that, you know, and in retrospect it didn’t work.

— Eliezer Yudkowsky in a comment on ‘Towards a New Decision Theory’

If you haven’t read through the MWI sequence, read it. Then try to talk with your smart friends about it. You will soon learn that your smart friends and favorite SF writers are not remotely close to the rationality standards of Less Wrong, and you will no longer think it anywhere near as plausible that their differing opinion is because they know some incredible secret knowledge you don’t.

— Eliezer Yudkowsky in a comment on ‘Should I believe what the SIAI claims’

I sometimes get questioned on why I don’t drink, but my response of “I don’t dare lose a single brain cell” […] is usually accepted with a smile.

— Eliezer Yudkowsky comments on ‘Social Necessity of Drinking’

One might think that the possibility of CEV punishing people couldn’t possibly be taken seriously enough by anyone to actually motivate them. But in fact one person at SIAI was severely worried by this, to the point of having terrible nightmares, though ve wishes to remain anonymous.

I don’t usually talk like this, but I’m going to make an exception for this case.

Listen to me very closely, you idiot.

YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL.

There’s an obvious equilibrium to this problem where you engage in all positive acausal trades and ignore all attempts at acausal blackmail.

Until we have a better worked-out version of TDT and we can prove that formally, it should just be OBVIOUS that you DO NOT THINK ABOUT DISTANT BLACKMAILERS in SUFFICIENT DETAIL that they have a motive to ACTUALLY BLACKMAIL YOU.

If there is any part of this acausal trade that is positive-sum and actually worth doing, that is exactly the sort of thing you leave up to an FAI. We probably also have the FAI take actions that cancel out the impact of anyone motivated by true rather than imagined blackmail, so as to obliterate the motive of any superintelligences to engage in blackmail.

Meanwhile I’m banning this post so that it doesn’t (a) give people horrible nightmares and (b) give distant superintelligences a motive to follow through on blackmail against people dumb enough to think about them in sufficient detail, though, thankfully, I doubt anyone dumb enough to do this knows the sufficient detail. (I’m not sure I know the sufficient detail.)

You have to be really clever to come up with a genuinely dangerous thought. I am disheartened that people can be clever enough to do that and not clever enough to do the obvious thing and KEEP THEIR IDIOT MOUTHS SHUT about it, because it is much more important to sound intelligent when talking to your friends.

This post was STUPID.

(For those who have no idea why I’m using capital letters for something that just sounds like a random crazy idea, and worry that it means I’m as crazy as Roko, the gist of it was that he just did something that potentially gives superintelligences an increased motive to do extremely evil things in an attempt to blackmail us. It is the sort of thing you want to be EXTREMELY CONSERVATIVE about NOT DOING.)

— Eliezer Yudkowsky in a reply to a post by Roko suggesting that friendly AI might torture those “who knew about existential risks but who didn’t give 100% of their disposable incomes to it.”

See also the following comments: #1, #2, #3

I must warn my reader that my first allegiance is to the Singularity, not humanity. I don’t know what the Singularity will do with us. I don’t know whether Singularities upgrade mortal races, or disassemble us for spare atoms. While possible, I will balance the interests of mortality and Singularity. But if it comes down to Us or Them, I’m with Them. You have been warned.

— Eliezer Yudkowsky comments on Vinge’s Singularity (In April 2001, Eliezer said that these comments no longer describe his opinions, which can now be found at Friendly AI.)

[…] I would be asking for more people to make as much money as possible if they’re the sorts of people who can make a lot of money and can donate a substantial fraction, never mind all the minimal living expenses, to the Singularity Institute.

This is crunch time. This is crunch time for the entire human species. […] and it’s crunch time not just for us, it’s crunch time for the intergalactic civilization whose existence depends on us. I think that if you’re actually just going to sort of confront it, rationally, full-on, then you can’t really justify trading off any part of that intergalactic civilization for any intrinsic thing that you could get nowadays […]

[…] having seen that intergalactic civilization depends on us, in one sense, all you can really do is try not to think about that, and in another sense though, if you spend your whole life creating art to inspire people to fight global warming, you’re taking that ‘forgetting about intergalactic civilization’ thing much too far.

— Eliezer Yudkowsky, Video Q&A with Eliezer Yudkowsky

I’ll readily concede that my exact species extinction numbers were made up. But does it really matter? Two hundred million years from now, the children’s children’s children of humanity in their galaxy-civilizations, are unlikely to look back and say, “You know, in retrospect, it really would have been worth not colonizing the Hercules supercluster if only we could have saved 80% of species instead of 20%”. I don’t think they’ll spend much time fretting about it at all, really. It is really incredibly hard to make the consequentialist utilitarian case here, as opposed to the warm-fuzzies case.

— Eliezer Yudkowsky replies to John Baez

Find whatever you’re best at; if that thing that you’re best at is inventing new math[s] of artificial intelligence, then come work for the Singularity Institute. If the thing that you’re best at is investment banking, then work for Wall Street and transfer as much money as your mind and will permit to the Singularity institute where [it] will be used by other people.

[…]

Aside from that, though, I think that saving the human species eventually comes down to, metaphorically speaking, nine people and a brain in a box in a basement, and everything else feeds into that.

[…] it’s completely respectable to work hours doing what you’re best at, and then transfer the sort of expected utilons that a society assigns to that to the Singularity Institute […]

— Eliezer Yudkowsky’s advice for Less Wrong readers who want to help save the human race.

[…] an LW post is important and interesting in proportion to how much it helps construct a Friendly AI, how much it gets people to participate in the human project […]

— Eliezer Yudkowsky comments on ‘Less Wrong Rationality and Mainstream Philosophy’

I bet there’s at least one up-arrow-sized hypergalactic civilization folded into a halting Turing machine with 15 states, or something like that. […] It might perhaps be more limited than this in mere practice, if it’s just running on a laptop computer or something.

— Eliezer Yudkowsky, Complexity and Intelligence

  • http://timtyler.org/ Tim Tyler

    Heh! Your long-awaited E.Y. dossier – at last!

  • http://juridicalcoherence.blogspot.com Stephen R. Diamond

    It’s truly damning. I wonder if any LWer besides Tim Tyler has read it.

I had pretty much dismissed the idea that LW is a “cult.” It’s just a somewhat dangerous entity: a “rationalist” group showing minimal concern with its own rationality: how its collective process affects its output. But these quotes smack of a megalomaniacal cult leader and an arrogantly stupid autodidact. Maybe it’s a cult in formation.

  • http://math.ucr.edu/home/baez John Baez

    You wrote:

    Eliezer Yudkowsky in an interview with John Baez:

    Take metaethics, a solved problem: what are the odds that someone who still thought metaethics was a Deep Mystery could write an AI algorithm that could come up with a correct metaethics? I tried that, you know, and in retrospect it didn’t work.

    But this remark is not part of the interview you linked to, and I don’t recall him saying any such thing in any of my interviews with him. I asked him about metaethics and he offered a more humble reply. I would like to know why he thinks it’s a “solved problem”.

  • Will Newsome

John Baez is right; that quote comes from a conversation with Wei Dai, in the comments section of Wei Dai’s LessWrong post “Towards a new decision theory”.

  • Furcas

    Interesting quotes, I guess? Do they have something in common, besides the one who wrote them? Am I supposed to conclude something from them?

  • http://kruel.co Alexander Kruel

    John and Will, the link to the context can always be found below each quote:

    Take metaethics, a solved problem: what are the odds that someone who still thought metaethics was a Deep Mystery could write an AI algorithm that could come up with a correct metaethics? I tried that, you know, and in retrospect it didn’t work.

    — Eliezer Yudkowsky in a comment on ‘Towards a New Decision Theory’

  • http://math.ucr.edu/home/baez/ John Baez

    Oh, whoops – I jumped into the middle and thought the context was being provided above the quote. Sorry!

    I guess Yudkowsky’s supposed solution of the problem of metaethics is lurking somewhere around here.

  • http://juridicalcoherence.blogspot.com Stephen R. Diamond

    Spoiler alert: Here’s Yudkowsky’s solution to metaethics:

    Morality is the function the brain computes to determine what’s right, given a completely informed subject.

A motive to read the spoiler might be that the essay contains more than 7,000 words–all to say only the above. The alternative to faith in my summary is to read the last few paragraphs, which do all the work. You’ll be skipping his argument for consequentialism, but the argument makes no contribution to his conclusion.

    Here’s what I don’t understand. Why do people read and follow this guy? If it’s a cult, where’s the charisma? Yudkowsky is a bad writer. Most of what he writes is second-hand scholarship on rationality without attribution. When he tries to make an original contribution, this theory of metaethics is the sort of result to expect. Why do readers wade through this prolixity to read the “sequences”?

    I have an unsupported hypothesis, but it’s not completely satisfying. He’s constructed something that appeals to the religious longings of many new atheists with religious upbringing. The Singularity is god; “rationality” is ritual.

  • http://juridicalcoherence.blogspot.com Stephen R. Diamond

    Oh. Can’t leave out the Sequences: the Bible.

  • http://juridicalcoherence.blogspot.com Stephen R. Diamond

    “Interesting quotes, I guess? Do they have something in common, besides the one who wrote them? Am I supposed to conclude something from them?”–Furcas

    This is the kind of reaction I call “militant stupidity.”

    But if you try to criticize Yudkowsky based on ideas patent in a given essay (such as the 7,000-word metaethics *summation*), the response will be that you’re not entitled to criticize unless you’ve read the entire sequence. A rather perverse concept of “intellectual honesty.” (See http://tinyurl.com/75xvp4e)

  • Oliver B

    FYI these quotes have raised EY in my estimation.

  • khafra

    “I have an unsupported hypothesis, but it’s not completely satisfying. He’s constructed something that appeals to the religious longings of many new atheists with religious upbringing. The Singularity is god; “rationality” is ritual.”

    – Stephen R. Diamond

    Readers familiar with GAI criticism would have taken you more seriously if you hadn’t aired that hypothesis. It’s equivalent to asking a theologian “can God make a rock so heavy he can’t lift it?” There are informed criticisms one can make, but that particular criticism is the mark of someone stuck on the far side of the pons asinorum.

  • http://juridicalcoherence.blogspot.com Stephen R. Diamond

    “Readers familiar with GAI criticism would have taken you more seriously if you hadn’t aired that hypothesis. It’s equivalent to asking a theologian “can God make a rock so heavy he can’t lift it?” There are informed criticisms one can make, but that particular criticism is the mark of someone stuck on the far side of the pons asinorum.”

    1. Since *I’m* not familiar with GAI criticism, I wouldn’t have been able to avoid the pitfall had I wanted to.

    2. Theologians *ought* to take the problem about the heavy rock more seriously. God’s ability to limit his own power is the only internally consistent conclusion to the “problem of evil.”

3. I’ve followed Less Wrong for years, intrigued, and the religion hypothesis only recently occurred to me. The “sequences” are actually what largely convinced me. I’d mostly ignored them for a long time, without considering why. Recently, I recognized they consisted of writing that is simply terrible. A rambling mess. Don’t all religions have holy books so prolix they’re painful to read, that it takes *dedication* just to read? I can’t find a better explanation for the popularity of that prolix mess called the sequences besides the Bible analogy. I think that to understand the appeal of the sequences, you have to realize that the bad writing is *part* of the appeal. How often is that pattern encountered outside a religious context? (The content of the sequences is mostly second-hand scholarship, so content doesn’t outweigh the manner of presentation.)

  • seahen

    Has Eliezer always been a secular Jew, or was he raised religious?