MIRI/LessWrong Critiques: Index

This page lists all known critiques of the Machine Intelligence Research Institute (MIRI), formerly known as the Singularity Institute for Artificial Intelligence, and LessWrong (LW).

For my probability estimates of unfriendly and friendly AI, see here.

Index

What I would like the Singularity Institute to publish

Link: kruel.co/2012/11/03/what-i-would-like-the-singularity-institute-to-publish/

Abstract:

I would like to see the Singularity Institute, or someone who is convinced of the scary idea, publish a paper that states concisely and mathematically (with extensive references if necessary) the decision procedure that led them to devote their lives to the development of friendly artificial intelligence. I want them to state numeric probability estimates and exemplify their chain of reasoning, showing how they came up with those numbers and not others by way of sober and evidence-backed calculations. I would like to see a precise and compelling review of the methodologies the Singularity Institute used to arrive at their conclusions.

New Rationalism: An Introduction

Link: kruel.co/2014/06/08/new-rationalism-an-introduction/

Abstract:

New Rationalism is an umbrella term for a category of people who tend to take logical implications, or what they call “the implied invisible”, very seriously.

Should you trust the Singularity Institute?

Link: kruel.co/2013/01/04/should-you-trust-the-singularity-institute/

Abstract:

Imagine that instead of their actual mission, the problem they were trying to solve was to create healthy ice cream. Except that in their case there would be no governmental oversight, no food safety regulations and no inspections. Would you trust them to produce safe food because they are smart, polite and assure you that all they care about is to benefit humanity?

Why you should be afraid of friendly AI

Link: kruel.co/2013/01/05/why-you-should-be-afraid-of-friendly-ai/

Abstract:

I feel deeply uncomfortable about SI’s mission to create friendly AI. Much more so than about the prospect of a hypothetical paperclip maximizer transforming the universe into something devoid of human values AND devoid of suffering.

Why you should be wary of the Singularity Institute

Link: kruel.co/2013/01/07/why-you-should-be-wary-of-the-singularity-institute/

Abstract: 

I expand on the reasons to be afraid of the Singularity Institute, outline some of the key factors behind why they believe what they believe and behave as they do, and state what I think, if not why.

The Singularity Institute: How They Brainwash You

Link: kruel.co/2013/01/10/the-singularity-institute-how-they-brainwash-you/

Abstract:

I estimate that the vast majority of all statements that can be found in the sequences are true, or definitely less wrong. Which generally makes them worth reading. But certain dangers lurk beneath their shiny surface. This post is meant to expose some of those dangers.

The Singularity Institute: How their arguments are broken

Link: kruel.co/2013/01/13/the-singularity-institute-how-their-arguments-are-broken/

Abstract:

The argumentative framework employed by the Singularity Institute to justify its mission is fallacious because it is ambiguous.

It is possible to replace the subject of their argument, in their case risks from AI, with any arbitrary idée fixe, any extremely hypothetical scenario, or even a case of Pascal’s mugging, without changing its conclusion.

The Singularity Institute: Addendum to what’s wrong with their arguments

Link: kruel.co/2013/01/13/the-singularity-institute-addendum-to-whats-wrong-with-their-arguments/

Abstract:

Suppose there are 4 risks. One mundane risk has a probability of 1/10 and you assign 20 utils to preventing it. Another less likely risk has a probability of 1/100 but you assign 1000 utils to preventing it. Yet another risk is very unlikely, having a probability of 1/1000, but you assign 1 million utils to preventing it. The fourth risk is extremely unlikely, having a probability of 10^-10000, but you assign 10^10006 utils to preventing it. All else equal, which one would you choose to prevent and why?
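
For concreteness, here is a minimal sketch (mine, not from the linked post) of the naive expected-utility arithmetic the question turns on, using exact rational numbers so that the extreme case neither underflows nor overflows:

```python
from fractions import Fraction

# Naive expected utils of preventing each risk = probability * utils assigned.
# The numbers are the ones given in the abstract above.
risks = {
    "mundane (1/10, 20 utils)":                       (Fraction(1, 10), 20),
    "less likely (1/100, 1000 utils)":                (Fraction(1, 100), 1000),
    "very unlikely (1/1000, 10^6 utils)":             (Fraction(1, 1000), 10**6),
    "extremely unlikely (10^-10000, 10^10006 utils)": (Fraction(1, 10**10000), 10**10006),
}

for name, (p, utils) in risks.items():
    print(f"{name}: expected utils = {p * utils}")
```

On this naive calculation the expected utils come out to 2, 10, 1000 and 10^6 respectively, so a straight expected-utility maximizer would pick the most improbable risk, which illustrates the Pascal's-mugging-style worry raised in the preceding post.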

Implicit constraints of practical goals

Link: kruel.co/2012/05/11/implicit-constraints-of-practical-goals/

Abstract:

The goal “Minimize human suffering” is, on its most basic level, a problem in physics and mathematics. Ignoring various important facts about the universe, e.g. human language and values, would simply be wrong, in the same way that it would be wrong to solve the theory of everything within the scope of cartoon physics. Any process that is broken in such a way would be unable to improve itself much.

AI vs. humanity and the lack of concrete scenarios

Link: kruel.co/2013/06/01/ai-vs-humanity-and-the-lack-of-concrete-scenarios/

Abstract:

This post is supposed to be a preliminary outline of how to analyze concrete scenarios in which an advanced artificial general intelligence attempts to transform Earth in a catastrophic way.

Smarter and smarter, then magic happens…

Link: kruel.co/2013/07/23/smarter-and-smarter-then-magic-happens/

Abstract:

(1) Intelligence is an extendible method that enables software to satisfy human preferences.

(2) If human preferences can be satisfied by an extendible method, humans have the capacity to extend the method.

(3) Extending the method that satisfies human preferences will yield software that is better at satisfying human preferences.

(4) Magic happens.

(5) There will be software that can satisfy all human preferences perfectly but which will instead satisfy orthogonal preferences, causing human extinction.

AIs, Goals, and Risks

Link: http://kruel.co/2014/03/25/ais-goals-and-risks/

Abstract:

The concepts of a “terminal goal”, and of a “Do-What-I-Mean dynamic”, are fallacious. The former can’t be grounded without leading to an infinite regress. The latter erroneously makes a distinction between (a) the generally intelligent behavior of an AI, and (b) whether an AI behaves in accordance with human intentions, since generally intelligent behavior of intelligently designed machines is implemented intentionally.

Risks from AI and Charitable Giving

Link: kruel.co/2012/05/11/risks-from-ai-and-charitable-giving/

Abstract:

In this post I just want to take a look at a few premises (P#) that need to be true simultaneously to make SIAI a worthwhile charity from the point of view of someone trying to do as much good as possible by contributing money. I am going to show that the case of risks from AI is strongly conjunctive, that without a concrete and grounded understanding of AGI an abstract analysis of the issues is going to be very shaky, and that therefore SIAI is likely to be a bad choice as a charity. In other words, that which speaks in favor of SIAI consists mainly of highly specific, conjunctive, non-evidence-backed speculations on possible bad outcomes.
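
As a minimal illustration of what “strongly conjunctive” means here (the premise probabilities below are made up purely for the example, not taken from the post): the probability that several independent premises are all true shrinks multiplicatively, even when each premise looks fairly plausible on its own.

```python
# Illustrative only: made-up probabilities for five independent premises P1..P5.
premise_probabilities = [0.8, 0.7, 0.6, 0.5, 0.4]

p_all_true = 1.0
for p in premise_probabilities:
    p_all_true *= p

# Each premise is at least 40% likely, yet the conjunction is below 7%.
print(f"P(all premises true) = {p_all_true:.3f}")  # 0.067
```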

Why I am skeptical of risks from AI

Link: http://kruel.co/2011/07/21/why-i-am-skeptical-of-risks-from-ai/

Abstract:

In principle we could build antimatter weapons capable of destroying worlds, but in practice it is much harder to accomplish.

There are many question marks when it comes to the possibility of superhuman intelligence, and many more about the possibility of recursive self-improvement. Most of the arguments in favor of those possibilities solely derive their appeal from being vague.

Asteroids, AI and what’s wrong with rationality

Link: kruel.co/2012/06/25/rationality-implications/

Abstract:

Our current methods seem to be biased in new and unexpected ways. Pascal’s mugging, the Lifespan Dilemma and blackmailing are just a few examples of how an agent built according to our current understanding of rationality could fail.

And that’s just what we already know about.

Our current theories are not enough to build an artificial general intelligence that will be reliable in helping us to achieve our values, even if those values could be thoroughly defined or were computable even in principle.

Objections to Coherent Extrapolated Volition

Link: kruel.co/2011/07/22/objections-to-coherent-extrapolated-volition/

Abstract:

It seems to me that becoming more knowledgeable and smarter is gradually altering our utility functions. But what is it that we are approaching if the extrapolation of our volition becomes a purpose in and of itself? Extrapolating our coherent volition will distort or alter what we really value by installing a new cognitive toolkit designed to achieve an equilibrium between us and other agents with the same toolkit.

Would a singleton be a tool that we can use to get what we want, or would the tool use us to do what it does? Would we be modeled, or would it create models? Would we be extrapolating our volition, or rather following our extrapolations?

Four arguments against AI risks

Link: kruel.co/2013/07/11/four-arguments-against-ai-risks/

Abstract:

I list four, not necessarily independent, caveats against AI risks that would be valid even if one were to accept (1) that AI will be invented soon enough to be decision relevant at this point in time, (2) that the kind of uncontrollable recursive self-improvement imagined by AI risk advocates is even in principle possible, and (3) that the advantage of greater intelligence scales with the task of taking over the world in such a way that it becomes probable that an AI will succeed in doing so, even given the lack of concrete scenarios on how that is supposed to happen.

AI drives vs. practical research and the lack of specific decision procedures

Link: kruel.co/2013/06/01/ai-drives-vs-practical-research-and-the-lack-of-specific-decision-procedures/

Abstract:

The objective of this post is (1) to outline how to examine the possibility of the emergence of dangerous goals in generally intelligent systems in the light of practical research and development and (2) to determine what decision procedures would cause generally intelligent systems to exhibit catastrophic side effects.

To beat humans you have to define “winning”

Link: kruel.co/2013/07/14/to-beat-humans-you-have-to-define-winning/

Abstract:

People who claim that artificial general intelligence is going to constitute an existential risk implicitly conjecture that whoever is going to create such an AI will know perfectly well how to formalize capabilities such as <become superhumanly good at mathematics>, while at the same time selectively failing at making it solve the mathematics they want it to solve and instead causing it to solve the mathematics that is necessary to kill all humans.

If you claim that it is possible to define the capability <become superhumanly good at mathematics>, then you will need a very good argument to support the claim that it is at the same time difficult to define goals such as <build a house> without causing human extinction.

Reply to Stuart Armstrong on Dumb Superintelligence

Link: kruel.co/2013/07/19/reply-to-stuart-armstrong-on-dumb-superintelligence/

Abstract:

Here is a reply to the post ‘The idiot savant AI isn’t an idiot’, which I sent to Stuart Armstrong yesterday by email. Since someone has now linked to one of my posts on LessWrong, I thought I would make the full reply public.

Distilling the “dumb superintelligence” argument

Link: kruel.co/2013/07/21/distilling-the-dumb-superintelligence-argument/

Abstract:

The intersection of the sets of “intelligently designed AIs” and “dangerous AIs” only contains those AIs which are deliberately designed to be dangerous by malicious humans.

Thank you for steelmanning my arguments

Link: kruel.co/2013/07/22/thank-you-for-steelmanning-my-arguments/

Abstract:

A further refinement of the argument against the claim that fully intended behavior is a very small target to hit.

Goals vs. capabilities in artificial intelligence

Link: kruel.co/2013/07/19/goals-vs-capabilities-in-artificial-intelligence/

Abstract:

The distinction between terminal goals, instrumental goals and an AI’s eventual behavior is misleading for practical AIs. What actions an AI is going to take depends on its general design and not on a specific part of its design that someone happened to label “goal”.

To make your AI interpret something literally you have to define “literally”

Link: kruel.co/2013/07/22/to-make-your-ai-interpret-something-literally-you-have-to-define-literally/

Abstract:

The capability to “understand understanding correctly” is a prerequisite for any AI to be capable of taking over the world. At the same time, that capability will make it avoid taking over the world as long as doing so does not accurately reflect what it is meant to do.

Questions regarding the nanotechnology-AI-risk conjunction

Link: kruel.co/2013/06/02/questions-regarding-the-nanotechnology-ai-risk-conjunction/

Abstract:

Posing questions examining what I call the nanotechnology-AI-risk conjunction, by which I am referring to a scenario that is often mentioned by people concerned about the idea of an artificial general intelligence (short: AI) attaining great power.

AI risk scenario: Deceptive long-term replacement of the human workforce

Link: kruel.co/2013/06/03/ai-risk-scenario-deceptive-long-term-replacement-of-the-human-workforce/

Abstract:

Some questions about a scenario related to the possibility of an advanced artificial general intelligence (short: AI) overpowering humanity. For the purpose of this post I will label the scenario a deceptive long-term replacement of the human workforce. As with all such scenarios it makes sense to take a closer look by posing certain questions about what needs to be true in order for a given scenario to work out in practice and to be better able to estimate its probability.

AI risk scenario: Elite Cabal

Link: kruel.co/2013/06/03/ai-risk-scenario-elite-cabal/

Abstract:

Some remarks and questions about a scenario outlined by Mitchell Porter on how an existential risk scenario involving advanced artificial general intelligence might be caused by a small but powerful network of organizations working for a great power in the interest of national security.

AI risk scenario: Social engineering

Link: kruel.co/2013/06/22/ai-risk-scenario-social-engineering/

Abstract:

Some remarks and questions about a scenario outlined in the LessWrong post ‘For FAI: Is “Molecular Nanotechnology” putting our best foot forward?’ on how an artificial general intelligence (short: AI) could take control of Earth by means of social engineering, rigging elections and killing enemies.

AI risk scenario: Insect-sized drones

Link: kruel.co/2013/06/28/ai-risk-scenario-insect-sized-drones/

Abstract:

Some remarks and questions about a scenario outlined by Tyler Cowen in which insect-sized drones are used to kill people or to carry out terror attacks.

AI risks scenario: Biological warfare

Link: kruel.co/2013/06/28/ai-risks-scenario-biological-warfare/

Abstract:

Remarks and questions about the use of biological toxins or infectious agents by an artificial general intelligence (short: AI) to decisively weaken and eventually overpower humanity.

Realistic AI risk scenarios

Link: http://kruel.co/2013/07/24/realistic-ai-risk-scenarios/

Abstract:

Scenarios that I deem to be realistic, in which an artificial intelligence (AI) constitutes a catastrophic or existential risk (or worse), are mostly of the kind in which “unfriendly” humans use such AIs as tools facilitating the achievement of human goals.

How does a consequentialist AI work?

Link: kruel.co/2013/07/14/how-does-a-consequentialist-ai-work/

Abstract:

The idea of a consequentialist expected utility maximizer is used to infer that artificial general intelligence constitutes an existential risk.

Can we say anything specific about how such an AI could work in practice? And if we are unable to approximate a practical version of such an AI, is it then sensible to use it as a model to make predictions about the behavior of practical AIs?
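
To make the abstract model in question concrete, here is a toy sketch (my own illustration, not a description of any actual system) of what “consequentialist expected utility maximizer” usually denotes: an agent that scores each available action by the probability-weighted utility of its outcomes and picks the best one. The actions, probabilities and utilities are invented placeholders.

```python
# Toy "consequentialist expected utility maximizer": choose the action whose
# outcome distribution has the highest expected utility. All names and numbers
# below are invented for illustration.
outcome_model = {
    "action_a": [(0.9, 10), (0.1, -5)],   # (probability, utility) pairs
    "action_b": [(0.5, 30), (0.5, -20)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best = max(outcome_model, key=lambda a: expected_utility(outcome_model[a]))
print(best, expected_utility(outcome_model[best]))  # action_a 8.5
```

The open question the post raises is exactly where the outcome model and the utility function are supposed to come from in a practical system; the sketch above simply leaves both as hand-written tables.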

Narrow vs. General Artificial Intelligence

Link: kruel.co/2013/07/13/narrow-vs-general-artificial-intelligence/

Addendum: kruel.co/2013/07/13/wrong-answers-on-jeopardy-vs-human-extinction/

Abstract:

A comparison chart of the behavior of narrow and general artificial intelligence when supplied with the same task.

Addendum: If an artificial general intelligence was prone to commit errors on the scale of confusing goals such as “win at Jeopardy” with “kill all humans” then it would never succeed at killing all humans because it would make similar mistakes on a wide variety of problems that are necessary to solve in order to do so.

Furniture robots as an existential risk? Beware cached thoughts!

Link: kruel.co/2013/01/28/furniture-robots-as-an-existential-risk-beware-cached-thoughts/

Abstract:

Don’t just assume vague ideas such as <explosive recursive self-improvement>, try to approach the idea in a piecewise fashion. Start out with some narrow AI such as IBM Watson or Apple’s Siri and add various hypothetical self-improvement capabilities, but avoid quantum leaps. Try to locate at what point those systems start acting in an unbounded fashion, possibly influencing the whole world in a catastrophic way. And if you manage to locate such a tipping-point then take it apart even further. Start over and take even smaller steps, be more specific. How exactly did your well-behaved expert system end up being an existential risk?

Being specific about AI risks

Link: kruel.co/2013/01/26/being-specific-about-ai-risks/

Abstract:

The only way you can arrive at any scenario where an artificial general intelligence is going to kill all humans is by being vague and unspecific, by ignoring real world development processes and by using natural language to describe some sort of fantasy scenario and invoke lots of technological magic.

Once you have to come up with a concrete scenario and outline specifically how that is supposed to happen you’ll notice that you will never actually reach such a tipping point as long as you do not deliberately design the system to behave in such a way.

Taking over the world to compute 1+1

Link: kruel.co/2013/01/24/taking-over-the-world-to-compute-11/

Abstract:

If your superintelligence is too dumb to realize that it doesn’t have to take over the world in order to compute 1+1 then it will never manage to take over the world in the first place.

C. elegans vs. human-level AI

Link: kruel.co/2013/07/16/c-elegans-vs-human-level-ai/

Abstract:

Reading the Wikipedia entry on Caenorhabditis elegans and how much we already understand about this small organism and its 302 neurons makes me even more skeptical of the claim that a human-level artificial intelligence (short: AI) will be created within this century.

How far is AGI?

Link: http://kruel.co/2012/05/13/how-far-is-agi/

Abstract:

I don’t believe that people like Jürgen Schmidhuber are a risk, apart from a very abstract possibility.

The reason is that they are unable to show off some applicable progress on a par with IBM Watson or Siri. And in the case that they claim that their work relies on a single mathematical breakthrough, I doubt that it would be justified even in principle to be confident in that prediction.

Superapish intelligence

Link: http://kruel.co/2012/05/13/superapish-intelligence/

Abstract:

The argument from the gap between chimpanzees and humans is interesting but cannot be used to extrapolate onwards from human general intelligence.

We are SIAI. Argument is futile.

Link: kruel.co/2012/05/12/we-are-siai-argument-is-futile/

Abstract:

The Singularity Institute (SIAI) created an insurmountable fortress that shields them from any criticism.

The Fallacy of AI Drives

Link: kruel.co/2013/01/14/the-fallacy-of-ai-drives/

Abstract:

I don’t think that a sufficiently intelligent AI will constitute an existential risk.

Rationality? Come on, this is serious!

Link: kruel.co/2012/11/02/rationality-come-on-this-is-serious/

Abstract:

What is it that makes the kind of rationality advocated at lesswrong.com superior to the traditional rationality of Richard Dawkins and Richard Feynman et al.?

Eliezer Yudkowsky on MIRI’s probability of success

Link: kruel.co/2013/08/07/eliezer-yudkowsky-on-miris-probability-of-success/

Abstract:

Some thoughts on Eliezer Yudkowsky’s comments from this thread started by Peter Hurford.

MIRI donors are better at evaluating AI risks than academics?

Link: kruel.co/2013/08/05/miri-donors-are-better-at-evaluating-ai-risks-than-academics/

Abstract:

The Machine Intelligence Research Institute (short: MIRI) seems to be suggesting that people who contribute money to MIRI are better at evaluating risks associated with artificial general intelligence (short: AI) than an independent jury of AI and machine learning experts.

Examining a comment by Eliezer Yudkowsky on AI risks

Link: kruel.co/2013/09/16/examining-a-comment-by-eliezer-yudkowsky-on-ai-risks/

Abstract:

My perception is that people make predictions or hold beliefs about the behavior of highly speculative and hypothetical generally intelligent artificial systems without being able to state any formal or technical justifications. And they are confident enough of these beliefs and predictions to actually give someone money in order to prevent such systems.

To highlight my confusion about that stance, imagine there was no scientific consensus about global warming, no experiments, and no data confirming that global warming actually happens. Suppose that in this counterfactual world there was someone who, lacking almost any reputation as a climatologist, was predicting that global warming will cause human extinction. Further suppose that this person was asking for money to implement a potentially dangerous and possibly unfeasible geoengineering scheme in order to stop global warming. Would you give this person your money?

If the answer is negative, what makes you behave differently with respect to risks associated with artificial general intelligence? Do you believe that it is somehow much easier to draw action relevant conclusions about this topic?

Description of MIRI’s AI risk scenario by analogy with nanotechnology 

Link: kruel.co/2013/09/08/description-of-miris-ai-risk-scenario-by-analogy-with-nanotechnology/

Abstract:

Framed in terms of nanofactories, here is my understanding of a scenario imagined by the Machine Intelligence Research Institute (MIRI), in which an artificial general intelligence (AGI) causes human extinction.

Discussion about catastrophic risks from artificial intelligence

Link: kruel.co/2013/09/07/discussion-about-catastrophic-risks-from-artificial-intelligence/

Abstract:

A discussion about risks associated with artificial general intelligence, mainly between myself, Richard Loosemore, and Robby Bensinger.

Is an Intelligence Explosion a Disjunctive or Conjunctive Event?

Link: kruel.co/2011/07/21/intelligence-explosion-disjunctive-conjunctive/

Abstract:

Explosive recursive self-improvement is one of the presuppositions for the possibility of risks from AI. The problem is that this and other presuppositions are largely ignored and left undefined. All of the disjunctive arguments put forth by the SIAI are trying to show that there are many causative factors that will result in the development of unfriendly artificial general intelligence. Only one of those factors needs to be true for us to be wiped out by AGI. But the whole scenario is at most as probable as the assumption hidden in the words <artificial general intelligence> and <explosive recursive self-improvement>.
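
A minimal numerical sketch of the distinction in the title (probabilities made up purely for illustration): for independent factors, a disjunctive event (any single factor suffices) becomes more probable as factors are added, whereas a conjunctive event (every presupposition must hold) becomes less probable.

```python
# Illustrative only: made-up probabilities for four independent factors.
factor_probabilities = [0.3, 0.2, 0.25, 0.1]

p_conjunctive = 1.0   # all factors must hold
p_no_factor = 1.0     # helper for the disjunctive case
for p in factor_probabilities:
    p_conjunctive *= p
    p_no_factor *= (1 - p)
p_disjunctive = 1 - p_no_factor  # at least one factor holds

print(f"conjunctive:  {p_conjunctive:.4f}")   # 0.0015
print(f"disjunctive:  {p_disjunctive:.4f}")   # 0.6220
```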

Open Problems in Ethics and Rationality

Link: kruel.co/2011/07/24/open-problems-in-ethics-and-rationality/

Abstract:

Taking into account considerations of vast utility or low probability quickly leads to chaos theoretic considerations like the butterfly effect. As a computationally bounded and psychologically unstable agent I am unable to cope with that. Consequently I see no other way than to neglect the moral impossibility of extreme uncertainty.

More

Links

Interview series on risks from AI

Link: http://wiki.lesswrong.com/wiki/Interview_series_on_risks_from_AI

Also: hplusmagazine.com/2012/11/29/alexander-kruels-agi-risk-council-of-advisors-roundtable/ 

Abstract:

In 2011, Alexander Kruel (XiXiDu) started a Q&A style interview series asking various people about their perception of artificial intelligence and possible risks associated with it.
