Public Relations: SIAI vs. FHI

SIAI has already been way too reckless with PR, and has permanently lost the possibility of becoming a high-status charity / think-tank. I’m not sure this was avoidable, but then, FHI doesn’t have this problem (yet) and they overlap a great deal. — muflax

Some quick rules:

  • Avoid strong emotions.
  • Don’t act as if you are right on grounds of being more intelligent or rational.
  • Present cold, rigorous arguments instead.
  • Do not write more than necessary.
  • Be as precise and concise as possible.
  • Do not engage with people unless you are willing to bear their stupidity or to recapitulate your basic arguments.
  • Always stay calm, professional and obliging.
  • Whenever possible, formulate criticism as questions rather than corrections.

The following quotes are examples of what should be avoided:

I am tempted to say that a doctorate in AI would be negatively useful, but I am not one to hold someone’s reckless youth against them – just because you acquired a doctorate in AI doesn’t mean you should be permanently disqualified. — Eliezer Yudkowsky, So You Want To Be A Seed AI Programmer

If you haven’t read through the MWI sequence, read it. Then try to talk with your smart friends about it. You will soon learn that your smart friends and favorite SF writers are not remotely close to the rationality standards of Less Wrong, and you will no longer think it anywhere near as plausible that their differing opinion is because they know some incredible secret knowledge you don’t. — Eliezer Yudkowsky in a comment on ‘Should I believe what the SIAI claims’

I am the wrong person to judge whether “a doctorate in AI would be negatively useful”. I guess it is technically useful. And I am pretty sure that it is wrong to say that others are “not remotely close to the rationality standards of Less Wrong”. That is of course the case for most humans, but I think there are quite a few people out there who are at least at the same level. I further think that it is quite funny to criticize the very people on whose work your arguments for AI risk depend.

But that is beside the point. When it comes to public relations, those statements are clearly a mistake.

If you want to win in this world as a human being, you either have to be smart enough to overpower everyone else, or you have to get involved in a fair amount of social engineering and signaling games and refine your public relations.

Are you able to solve friendly AI without much more money and without hiring top-notch mathematicians, and then solve general intelligence to implement it and take over the world? If not, then you will at some point either need much more money or have to convince actual academics to work for you for free. And, most importantly, if you don’t think that you will be the first to invent AGI, then you need to talk to a lot of academics, companies and probably politicians to convince them that there is a real risk and that they need to implement your friendly AI theorem.

In this respect it is of utmost importance to have an academic degree and a reputation that makes people listen to you. At some point it won’t be enough to say, “I am a research fellow of the Singularity Institute who wrote a lot about rationality and cognitive biases, and you are not remotely close to our rationality standards.” The moment you utter the word “Singularity”, you have already lost. The very name of your charity shows that you underestimate the importance of signaling.

Do you think IBM, Apple or DARPA care about a blog and a popular fanfic? Do you think that you can even talk to DARPA without first getting involved in some amount of politics and making powerful people aware of the risks? And do you think you can talk to them as a “research fellow of the Singularity Institute”? If you are lucky, they might ask someone on their staff about you. And if you are really lucky, they will say that you are for the most part well-meaning and thoughtful individuals who never quite grew out of their science-fiction addiction as adolescents (I didn’t write that line myself; it is from an email conversation with a top-notch person who didn’t give me permission to publish it under their name). In any case, you won’t make them listen to you, let alone do what you want.

Compare the following:

Eliezer Yudkowsky, research fellow of the Singularity Institute.

Education: -

Professional Experience: -

Awards and Honors: A lot of karma on lesswrong.com and many people like his Harry Potter fanfiction.

vs.

Eliezer Yudkowsky, chief of research at the Institute for AI Ethics.

Education: He holds three degrees from the Massachusetts Institute of Technology: a Ph.D. in mathematics, a BS in electrical engineering and computer science, and an MS in physics and computer science.

Professional Experience: He has worked on various projects with renowned people and made genuine insights. He is the author of numerous studies and papers.

Awards and Honors: He holds various awards and is listed in the Who’s Who in computer science.

Who are people going to listen to? Well, okay… the first Eliezer might receive a lot of karma on lesswrong.com; the other doesn’t have enough time for that.

Another problem is how you handle people who disagree with you and who you think are wrong. Concepts like “Well-Kept Gardens Die By Pacifism” will at some point explode in your face. I have chatted with a lot of people who left lesswrong and who portray lesswrong/SIAI negatively. And the number of those people is growing. Many won’t even participate there because members are unwilling to talk to them in a charitable way. That kind of behavior causes them to group together against you.

Well-kept gardens die by pacifism, others are poisoned by negative karma. A much better rule would be to keep your friends close and your enemies closer.

Think about it this way. What do you think will happen if SIAI continues to act as it does and real experts become uncomfortable with its statements, or even feel threatened? It just takes one top-notch person who becomes seriously bothered to damage its reputation permanently.

Don’t start a war if you don’t expect to be able to win it. It is much easier to damage a reputation than to build one, especially if you support a cause that can easily trigger the absurdity heuristic in third parties.

Being rude to people who don’t get it will just cause them to reinforce their opinion and tell everyone that you are wrong instead. And that will work, because your arguments are complex and support something that sounds a lot like science fiction.

A better route is to simply ignore them if you are not willing to talk the matter over, or to explain exactly how they are wrong. And if you consider both routes undesirable, then do it like FHI and don’t host a public forum.


  • Tim Tyler

    The FHI doesn’t seem quite as bad as the SIAI from my perspective, though their game plans seem essentially similar. Bostrom seems to have more humility than Yudkowsky – which does help with public relations.


  • David Gerard (newstechnica.com)

    The other thing is that FHI are actual academics. A small difference for LW readers, a large one for the rest of the world.