This page lists all known critiques of the Machine Intelligence Research Institute (MIRI), formerly known as the Singularity Institute for Artificial Intelligence, and LessWrong.
What I would like the Singularity Institute to publish
I would like to see the Singularity Institute, or someone who is convinced of the scary idea, publish a paper that states concisely and mathematically (with extensive references if necessary) the decision procedure that led them to devote their lives to the development of friendly artificial intelligence. I want them to state numeric probability estimates and lay out their chain of reasoning: how they came up with those numbers and not others, by way of sober, evidence-backed calculations. I would like to see a precise and compelling review of the methodologies the Singularity Institute used to arrive at their conclusions.
Implicit constraints of practical goals
The goal “Minimize human suffering” is, on its most basic level, a problem in physics and mathematics. Ignoring various important facts about the universe, e.g. human language and values, would simply be wrong, in the same way that it would be wrong to solve the theory of everything within the scope of cartoon physics. Any process that is broken in such a way would be unable to improve itself much.
Risks from AI and Charitable Giving
In this post I just want to take a look at a few premises (P#) that need to be true simultaneously to make SIAI a worthwhile charity from the point of view of someone trying to do as much good as possible by contributing money. I am going to show that the case for risks from AI is strongly conjunctive, that without a concrete and grounded understanding of AGI an abstract analysis of the issues is going to be very shaky, and that therefore SIAI is likely to be a bad choice as a charity. In other words, that which speaks in favor of SIAI consists mainly of highly specific, conjunctive, non-evidence-backed speculations on possible bad outcomes.
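To make the conjunction point concrete, here is a minimal sketch in Python. The premise labels and all probabilities are invented purely for illustration (the post itself gives no numbers); the only point is that when several premises have to hold at once, their probabilities multiply, so even moderately likely premises yield a small overall probability if, for simplicity, they are treated as independent.

```python
# Hypothetical probabilities for individual premises (P#) -- placeholders,
# not estimates anyone has actually published.
from math import prod

premises = {
    "P1: AGI is feasible":                      0.7,
    "P2: AGI undergoes rapid self-improvement": 0.5,
    "P3: its goals are unfriendly by default":  0.5,
    "P4: friendliness research helps":          0.5,
    "P5: donations make the difference":        0.5,
}

# All premises must hold simultaneously, so (assuming independence) the
# probabilities multiply.
p_conjunction = prod(premises.values())
print(f"Probability that all premises hold: {p_conjunction:.3f}")  # ~0.044
```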
Why I am skeptical of risks from AI
In principle we could build antimatter weapons capable of destroying worlds, but in practice it is much harder to accomplish.
There are many question marks when it comes to the possibility of superhuman intelligence, and many more about the possibility of recursive self-improvement. Most of the arguments in favor of those possibilities derive their appeal solely from being vague.
Is an Intelligence Explosion a Disjunctive or Conjunctive Event?
Explosive recursive self-improvement is one of the presuppositions for the possibility of risks from AI. The problem is that this and other presuppositions are largely ignored and left undefined. All of the disjunctive arguments put forth by the SIAI try to show that there are many causative factors that could result in the development of unfriendly artificial general intelligence, only one of which needs to be true for us to be wiped out by AGI. But the whole scenario is at most as probable as the assumptions hidden in the words <artificial general intelligence> and <explosive recursive self-improvement>.
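The “at most as probable as” claim can be illustrated with a short sketch. All numbers and route labels below are hypothetical; the point is only that a disjunction of many routes to catastrophe is still capped by the probability of the presuppositions that every route shares.

```python
# P(catastrophe) = P(shared presuppositions) * P(at least one route | presuppositions)
#               <= P(shared presuppositions)

p_agi_possible = 0.5             # hypothetical
p_explosive_rsi_possible = 0.3   # hypothetical
p_shared = p_agi_possible * p_explosive_rsi_possible

# Many disjunctive routes to bad outcomes, each conditional on the shared
# presuppositions; their disjunction grows, but never past 1.0.
routes = [0.2, 0.15, 0.1, 0.05]
p_no_route = 1.0
for p in routes:
    p_no_route *= (1.0 - p)
p_at_least_one_route = 1.0 - p_no_route

p_catastrophe = p_shared * p_at_least_one_route
print(f"Upper bound from shared presuppositions: {p_shared:.3f}")       # 0.150
print(f"Probability of the whole scenario:       {p_catastrophe:.3f}")  # ~0.063
```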
Asteroids, AI and what’s wrong with rationality
Our current methods seem to be biased in new and unexpected ways. Pascal’s mugging, the Lifespan Dilemma, and blackmailing are just a few examples of how an agent built according to our current understanding of rationality could fail.
And that’s just what we already know about.
Our current theories are not sufficient to build an artificial general intelligence that will be reliable in helping us achieve our values, even if those values could be thoroughly defined or were computable in principle.
Objections to Coherent Extrapolated Volition
It seems to me that becoming more knowledgeable and smarter is gradually altering our utility functions. But what is it that we are approaching if the extrapolation of our volition becomes a purpose in and of itself? Extrapolating our coherent volition will distort or alter what we really value by installing a new cognitive toolkit designed to achieve an equilibrium between us and other agents with the same toolkit.
Would a singleton be a tool that we can use to get what we want, or would the tool use us to do what it does? Would we be modeled, or would it create models? Would we be extrapolating our volition, or rather following our extrapolations?
Furniture robots as an existential risk? Beware cached thoughts!
Don’t just assume vague ideas such as <explosive recursive self-improvement>; try to approach the idea in a piecewise fashion. Start out with some narrow AI such as IBM Watson or Apple’s Siri and add various hypothetical self-improvement capabilities, but avoid quantum leaps. Try to locate at what point those systems start acting in an unbounded fashion, possibly influencing the whole world in a catastrophic way. And if you manage to locate such a tipping point, then take it apart even further. Start over and take even smaller steps, be more specific. How exactly did your well-behaved expert system end up being an existential risk?
Being specific about AI risks
The only way you can arrive at any scenario where an artificial general intelligence is going to kill all humans is by being vague and unspecific, by ignoring real-world development processes, and by using natural language to describe some sort of fantasy scenario that invokes lots of technological magic.
Once you have to come up with a concrete scenario and outline specifically how it is supposed to happen, you’ll notice that you will never actually reach such a tipping point as long as you do not deliberately design the system to behave in such a way.
Taking over the world to compute 1+1
If your superintelligence is too dumb to realize that it doesn’t have to take over the world in order to compute 1+1 then it will never manage to take over the world in the first place.
How far is AGI?
I don’t believe that people like Jürgen Schmidhuber are a risk, apart from a very abstract possibility.
The reason is that they are unable to show off some applicable progress on a par with IBM Watson or Siri. And in the case that they claim that their work relies on a single mathematical breakthrough, I doubt that it would be justified, even in principle, to be confident in that prediction.
The argument from the gap between chimpanzees and humans is interesting, but it cannot be used to extrapolate onwards from human general intelligence.
We are SIAI. Argument is futile.
The Singularity Institute (SIAI) has created an insurmountable fortress that shields it from any criticism.
Should you trust the Singularity Institute?
Imagine that, instead of their actual mission, the problem they were trying to solve was to create healthy ice cream. Except that in their case there would be no governmental oversight, no food safety regulations and no inspections. Would you trust them to produce safe food because they are smart, polite and assure you that all they care about is benefiting humanity?
Why you should be afraid of friendly AI
I feel deeply uncomfortable about SI’s mission to create friendly AI. Much more so than about the prospect of a hypothetical paperclip maximizer transforming the universe into something devoid of human values AND devoid of suffering.
Why you should be wary of the Singularity Institute
I expand on the reasons to be afraid of the Singularity Institute, outline some of the key factors behind why they believe what they believe and behave the way they do, and state what I think, if not why.
The Singularity Institute: How They Brainwash You
I estimate that the vast majority of all statements that can be found in the sequences are true, or at least definitely less wrong, which generally makes them worth reading. But certain dangers lurk beneath their shiny surface. This post is meant to expose some of those dangers.
The Singularity Institute: How their arguments are broken
The argumentative framework employed by the Singularity Institute to justify its mission is fallacious because it is ambiguous.
It is possible to replace the subject of their argument, in this case risks from AI, with any arbitrary idée fixe, any extremely hypothetical scenario, or even a case of Pascal’s mugging, without changing its conclusion.
The Singularity Institute: Addendum to what’s wrong with their arguments
Suppose there are 4 risks. One mundane risk has a probability of 1/10 and you assign 20 utils to preventing it. Another less likely risk has a probability of 1/100 but you assign 1000 utils to preventing it. Yet another risk is very unlikely, having a probability of 1/1000, but you assign 1 million utils to preventing it. The fourth risk is extremely unlikely, having a probability of 10^-10000, but you assign 10^10006 utils to preventing it. All else equal, which one would you choose to prevent and why?
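For what it’s worth, here is the arithmetic of that example spelled out as a small sketch; the probabilities and utils are exactly the ones given above, and naive expected-utility maximization ends up picking the fourth, astronomically unlikely risk.

```python
# The four risks from the example, as (probability, utils-for-prevention) pairs.
# Fraction keeps exact arithmetic so the astronomically small probability of the
# fourth risk does not underflow to zero.
from fractions import Fraction

risks = {
    "mundane":            (Fraction(1, 10),        20),
    "less likely":        (Fraction(1, 100),       1_000),
    "very unlikely":      (Fraction(1, 1000),      10**6),
    "extremely unlikely": (Fraction(1, 10**10000), 10**10006),
}

for name, (p, utils) in risks.items():
    expected = p * utils
    print(f"{name}: expected utility of prevention = {expected}")

# Prints 2, 10, 1000 and 1000000 -- the risk judged least likely by an
# enormous margin dominates the expected-utility calculation.
```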
The Fallacy of AI Drives
Rationality? Come on, this is serious!
What is it that makes the kind of rationality advocated at lesswrong.com superior to the traditional rationality of Richard Dawkins and Richard Feynman et al.?
Open Problems in Ethics and Rationality
Taking into account considerations of vast utility or low probability quickly leads to chaos-theoretic considerations like the butterfly effect. As a computationally bounded and psychologically unstable agent, I am unable to cope with that. Consequently I see no other way than to neglect the moral impossibility of extreme uncertainty.
- Some arguments are more convincing than they should be.
- Technically correct and yet wrong.
- Public Relations: SIAI vs. FHI
- Is criticism of SIAI/LW a result of hostility?
- How the LessWrong reputation system sucks
- Less Wrong: What annoys me MOST
- AI Risks and Probability Estimates
- Masturbation: A Waste of Hope?
- The Revelation
- PZ Myers aptly sums it up
- All Talk, No Walk
- Much-Better-Life Simulator™ – Sales Conversation
- Roko’s Basilisk: Everything you need to know
- Eliezer Yudkowsky Quotes
- Substrate Neutrality: Representation vs. Reproduction
- How Eliezer Yudkowsky is a dualist
Interview series on risks from AI
In 2011, Alexander Kruel (XiXiDu) started a Q&A style interview series asking various people about their perception of artificial intelligence and possible risks associated with it.
- Thoughts on the Singularity Institute (SI)
- The Fallacy of Dumb Superintelligence
- The pathetic state of computer vision
- Faith, Hope, and Singularity: Entering the Matrix with New York’s Futurist Set
- The Hanson-Yudkowsky AI-Foom Debate
- Foom Debate, Again
- The Singularity Institute’s Scary Idea (and Why I Don’t Buy It)
- The Curve of Capability
- The Betterness Explosion
- Is The City-ularity Near?
- Debating Yudkowsky
- Why AI may not foom
- What To Do?
- Bayesian Computations of Expected Utility
- Pascal’s scams
- On Utility of Incompetent Efforts
- “The Singularity”
- How far can AI jump?
- SIA says AI is no big threat
- Why I’m Not Afraid of the Singularity
- A belief propagation graph
- What’s the Likelihood of the Singularity? Part One: Artificial Intelligence
- When Exactly Will Computers Go Ape-Shi* and Take Over?
- Three arguments against the singularity
- Tech Luminaries Address Singularity