The Singularity Institute (SI, now the Machine Intelligence Research Institute (MIRI)) has built an insurmountable fortress that shields it from any criticism:
Skeptic: If you are so smart and rational, why don’t you fund yourself? Why isn’t your organization sustainable?
SI: Rationality is only aimed at expected winning.
Skeptic: But you don’t seem to be winning yet. Have you considered the possibility that your methods are suboptimal? Have you set yourself any goals that you expect to achieve better than less rational folks, in order to test your rationality?
SI: One could have highly rational beliefs and make highly rational choices and still fail to win due to akrasia, lack of resources, lack of intelligence, and so on. Like intelligence and money, rationality is only a ceteris paribus predictor of success.
Skeptic: Okay, but given that you spend a lot of time on refining your rationality, you must believe that it is worth it somehow? What makes you think so then?
SI: We are trying to create a friendly artificial intelligence, implement it, and run the AI, at which point, if all goes well, we Win. We believe that rationality is very important for achieving that goal.
Skeptic: I see. But surely there must be some sub-goals that you anticipate being able to achieve, thereby testing whether your rationality skills are worth the effort?
SI: Many of the problems related to navigating the Singularity have not yet been stated with mathematical precision, and the need for a precise statement of the problem is part of the problem.
Skeptic: Has there been any success in formalizing one of the problems that you need to solve?
SI: There are some unpublished results that we have had no time to put into a coherent form yet.
Skeptic: Well, it seems there is no way for me to judge whether it is worth reading up on your writings on rationality.
SI: If you want to more reliably achieve life success, I recommend inheriting a billion dollars or, failing that, being born and raised to have an excellent work ethic and low akrasia.
Skeptic: Awesome, I’ll do that next time. But for now, why would I bet on you or even trust that you know what you are talking about?
SI: We spent a lot of time on debiasing techniques and thought long and hard about the relevant issues.
Skeptic: That seems to be insufficient evidence in favor of your accuracy, given the nature of your claims and the fact that you are asking for money.
SI: We make predictions. We state our confidence about events that merely sound startling. You are asking for evidence we couldn’t possibly be expected to provide, even if we are right.
Skeptic: But what do you anticipate seeing if your ideas are right? Is there any way to update on evidence?
SI: No, once the evidence is available it will be too late. You’re entitled to arguments, but not (that particular) proof.
Skeptic: But then why would I trust you instead of actual experts who have studied AI and who tell me that you are wrong?
SI: You will soon learn that your smart friends and experts are not remotely close to the rationality standards of SI/LW, and you will no longer think it anywhere near as plausible that their differing opinion is because they know some incredible secret knowledge you don’t.
Skeptic: But you have never achieved anything when it comes to AI, so why would I trust your reasoning on the topic more than the opinion of those experts?
SI: That is magical thinking about prestige. Prestige is not a good indicator of quality.
Skeptic: Ummm, okay. You won’t convince me without further evidence.
SI: We call that motivated cognition. You created a fully general counterargument you can use to discount any conclusion.