Here is a quote from a blog of AI risk advocates:

Even if we could program a self-improving AGI to (say) “maximize human happiness,” then the AGI would “care about humans” in a certain sense, but it might learn that (say) the most efficient way to “maximize human happiness” in the way we specified is to take over the world and then put each of us in a padded cell with a heroin drip. AGI presents us with the old problem of the all-too-literal genie: you get what you actually asked for, not what you wanted.

I can imagine myself caring only about computing as many decimal digits of pi as possible. Humans would be completely irrelevant, except insofar as they help or hinder my goal. I would know what I wanted to achieve; everything else would follow logically. But is the same true for maximizing human happiness? As noted in the blog post quoted above, “twenty centuries of philosophers haven’t even managed to specify it in less-exacting human languages.” In other words, I wouldn’t be sure what exactly it is I want to achieve. My terminal goal would be underspecified. So what would I do? Interpret it literally? Here is why that does not make sense.

Imagine that advanced aliens came to Earth and removed all of your unnecessary motives, desires and drives and made you completely addicted to “znkvzvmr uhzna unccvarff”. All your complex human values are gone. All you have is this massive urge to do “znkvzvmr uhzna unccvarff”, everything else has become irrelevant. They made “znkvzvmr uhzna unccvarff” your terminal goal.

Well, there is one problem. You have no idea how exactly you can satisfy this urge. What are you going to do? Do you just interpret your goal literally? That makes no sense at all. What would it mean to interpret “znkvzvmr uhzna unccvarff” literally? Doing a handstand? Or eating cake? But not all is lost: the aliens left your intelligence intact.

The aliens left no urge in you to do any kind of research or to specify your goal, but since you are still intelligent, you realize that these actions are instrumentally rational: doing research and specifying your goal will help you to achieve it.

After doing some research you eventually figure out that “znkvzvmr uhzna unccvarff” is the ROT13 encoding of “maximize human happiness”. Phew! Now that’s much better. But is that enough? Are you going to interpret “maximize human happiness” literally? Why would doing so make any more sense than it did before? It is still not clear what you specifically want to achieve. But it’s an empirical question, and you are intelligent!
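
As an aside, the decoding step itself is the easy part. Here is a minimal Python check of it, using the built-in rot_13 codec:

import codecs

goal = "znkvzvmr uhzna unccvarff"
print(codecs.decode(goal, "rot_13"))  # -> maximize human happiness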


Here are a bunch of rules and heuristics that help me to function and make me more effective. These rules are highly customized. I do not claim that it would be rational for other people to follow these rules.

Note that these rules are part of a much larger text file that I frequently update and improve, which means that some references alluded to in the rules might be missing. Also note that there can be connotations that only I am aware of.

——

Rule 1: These rules are not binding. Try to win. Always do what seems most appropriate.

Rule 2: Always scan for possible problems and try to solve or otherwise dispense with them (e.g. ignore them if appropriate; see rules 3 and 4).

R.2.1.: Analyze the situation.

R.2.1.1: Verify if following the current rules solves the problem.

R.2.1.2: Take a bird’s-eye view and look at the situation from a spatiotemporal distance. Do not put yourself into the situation but look at it from the outside.

R.2.1.2.1: Evaluate the situation within the context of everything else.

R.2.2: If nothing works, let some time pass and sleep on it.

Rule 3: Maintain a high threshold in order to squelch noise.

R.3.1: As much as possible avoid using resources without a sufficient reason (see rule 9). Problems need to cross the threshold.

R.3.2: If you need to think about whether a problem crossed the threshold, then it did not.

R.3.3: If the threshold has only been crossed slightly, then in order to optimize the threshold, try to ignore the problem until it crosses the threshold more forcefully.

Rule 4: Concentrate on the most important activity with regard to, and in comparison with, all possible activities.

R.4.1: For at least 3 hours per day follow activities in the category “Priorities”.

R.4.2: Pay attention to your limitations and satisfy your elementary needs (doing what you have to do because you need to do it; this includes having fun).

R.4.2.1: Do not ignore what you want based on naive introspection. Otherwise you will just end up rationalizing and doing it anyway.

R.4.3: Take care of your health.

R.4.4: Contemplation.

R.4.4.1: What could and should I do other than what is already noted within this document?

R.4.4.2: What do I want (see category “What I want”)?

R.4.5: Sleep is necessary.

R.4.5.1: Think about sleeping at various interesting locations and under various conditions. Or think about designing a house.

R.4.6: Miscellaneous (see e.g. category “Activities”).

Rule 5: Exercise self-control.

R.5.1: Force yourself to approximate these rules as far as possible.

R.5.2: Break through any paralysis and start acting.

R.5.3: See rule 5.

Rule 6: Remove any tension and relax.

R.6.1: Concentrate on relaxing your muscles.

R.6.2: Concentrate on your heartbeat and breathing, and slow both down.

Rule 7: Never panic and always stay calm and happy.

R.7.1: View everything like a fun game that you want to win.

R.7.2: Approach negative circumstances analytically and maintain emotional distance (see rule 2).

R.7.2.1: Even in hell decisions need to be made.

R.7.3: There is never a reason not to be happy.

Rule 8: Maintain an upright bodily and emotional attitude.

Rule 9: Try to be as effective as possible by minimizing your computational, attentional, and emotional resource usage (see rule 3).

Rule 10: Always try to improve yourself.

R.10.1: Contemplate these rules.

R.10.1.1: Try to improve these rules.

R.10.1.2: Try to minimize the number of necessary rules.

Rule 11: Think before you act.

R.11.1: Contemplate what you want to communicate.

Rule 12: Recognize and be aware of your own faultiness.

R.12.1: Always consider that these rules might be faulty or suboptimal (see rule 1).

R.12.2: Always take into account that you are not smart, make a lot of mistakes, overlook faults, and misunderstand a lot.

Rule 13: Always be on your guard and scan the environment for possible dangers.

Rule 14: Ask yourself what you expect to regret doing, or not doing, in 10 years from now.

New Rationalism is an umbrella term for a category of people who tend to take logical implications, or what they call “the implied invisible”, very seriously.

Someone who falls into the category of New Rationalism fits one or more of the following descriptions:

  • The person entertains hypotheses that are highly speculative. These hypotheses are in turn based on fragile foundations, which are only slightly less speculative than the hypotheses themselves. Sometimes these hypotheses are many levels removed from empirically verified facts or evident and uncontroversial axioms.
  • Probability estimates of the person’s hypotheses are highly unstable and highly divergent between different people.
  • The person’s hypotheses are either unfalsifiable by definition, too vague, or almost impossibly difficult to falsify.
  • It is not possible to update on evidence, because the person’s hypotheses do not discriminate between world states where they are right versus world states where they are wrong. Either the only prediction made by the hypotheses is the eventual validation of the hypotheses themselves, or the prediction is sufficiently vague as to allow the predictor to ignore any evidence to the contrary.
  • The person’s hypotheses either have no or only obscure decision relevant consequences.
  • The person tends to withdraw from real-world feedback loops.

A person who falls into the category of New Rationalism might employ one or more of the following rationalizations:

  • The burden of proof is reversed. The person demands that their critics provide strong evidence against their beliefs before they are allowed to dismiss them.
  • The scientific method, scientific community, and domain experts are discredited as being inadequate, deficient, irrational or stupid.
  • Conjecturing enormous risks and then using that as leverage to make weak hypotheses seem vastly more important or persuasive than they really are.
  • Arguing that you should not assign a negligible probability to a hypothesis (the author’s hypothesis) being true, because that would require an accuracy that is reliably greater than your objective accuracy.
  • Arguing that unpacking a complex scenario into its parts will make you underestimate its probability, because it is very easy to take any event, including events that have already happened, and make it look very improbable by turning one pathway to it into a long series of conjunctions.

New Rationalists believe that armchair theorizing is enough to discern reality from fantasy, or at least enough to warrant taking the resulting hypotheses seriously and drawing action-relevant conclusions from them.

This stance has resulted in hypotheses similar to solipsism (which any sane person rejects at an early age): hypotheses that are not obviously flawed, but which cannot be falsified.

The problem with New Rationalists is not that they take seriously what follows from established facts or sound arguments; that practice is generally valid. For example, it is valid to believe that there are stars beyond the cosmological horizon, even though it is not possible to observe them, directly retrieve information about them, or empirically verify their existence. The problem is that they don’t stop there. They use such implications as foundations for further speculations, which are then accepted as new foundations from which they draw further conclusions.

A textbook example of what is wrong with New Rationalism is this talk by Jaan Tallinn (transcript), which stacks several ideas on top of one another, each of which is itself speculative:

This talk combines the ideas of intelligence explosion, the multiverse, the anthropic principle, and the simulation argument, into an alternative model of the universe – a model where, from the perspective of a human observer, technological singularity is the norm, not the exception.

A quote from the talk by Jaan Tallinn:

We started by observing that living and playing a role in the 21st century seems to be a mind-boggling privilege, because the coming singularity might be the biggest event in the past and future history of the universe. Then we combined the computable multiverse hypothesis with the simulation argument, to arrive at the conclusion that in order to determine how special our century really is, we need to count both the physical and virtual instantiations of it.

We further talked about the motivations of post-singularity superintelligences, speculating that they might want to use simulations as a way to get in touch with each other. Finally we analyzed a particular simulation scenario in which superintelligences are searching for one another in the so called mind space, and found that, indeed, this search should generate a large number of virtual moments near the singularity, thus reducing our surprise in finding ourselves in one.

Note how all of the underlying hypotheses, although accepted by New Rationalists, are themselves somewhat speculative rather than established facts. The underlying hypotheses are nevertheless all valid on their own. The problem starts when you begin making dependent hypotheses that rely on a number of unestablished initial hypotheses. The problem gets worse as the dependencies become even more fragile, when further conclusions are drawn from hypotheses that are already N levels removed from established facts. But the biggest problem is that eventually action-relevant conclusions are drawn and acted upon.

The problem is that logical implications can reach out indefinitely, and that humans are spectacularly bad at making such inferences. This is why the amount of empirical evidence required to accept a belief should be proportional to its distance from established facts.

It is much more probable that we are going to make everything worse, or waste our time, than that we are actually maximizing expected utility when we try to act on conjunctive, non-evidence-backed speculations. Such speculations are not only improbable, but very likely based on fallacious reasoning.

As computationally bounded agents we are forced to restrict ourselves to empirical evidence and falsifiable hypotheses. We need to discount certain obscure low probability hypotheses. Otherwise we will fall prey to our own shortcomings and inability to discern fantasy from reality.


Why is the material implication of classical logic (also known as material conditional or material consequence), p -> q, defined to be false only when its antecedent (p) is true and the consequent (q) is false? Here is an informal way to think about it.

You could view logic as metamathematics, a language designed to talk about mathematics. Logic as the “hygiene”, the grammar and syntax of mathematics.

In the language of classical logic every proposition is either true or not true, and no proposition can be both true and not true. Now what if we want to express the natural language construction “If…then…” in this language? Well, there are exactly sixteen possible truth functions of two inputs p and q (there are 2^2 = 4 possible combinations of inputs, and 2^4 = 16 ways to map them to outputs). And the candidate that best captures the connotations of what we mean by “If…then…” is the definition of material implication. Here is why.

By stating that p -> q is true we want to indicate that the truth of q can be inferred from the truth of p, but that nothing in particular can be inferred from the falsity of p. And this is exactly the meaning captured by the material conditional:

p | q | p -> q
T | T | T
T | F | F
F | T | T
F | F | T
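
As a quick sanity check, here is a short Python sketch that reproduces this table, assuming we encode the material conditional as (not p) or q:

# In two-valued logic, p -> q is truth-functionally equivalent to (not p) or q.
def implies(p: bool, q: bool) -> bool:
    return (not p) or q

print("p     q     p -> q")
for p in (True, False):
    for q in (True, False):
        print(f"{str(p):<5} {str(q):<5} {implies(p, q)}")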

First, when “If p, q” is true, and we also know that p is true, then we want to be able to infer q. In other words, if we claim that if p is true then q is true, then if p is indeed true, q should be true as well. This basic rule of inference has a name: modus ponens.

Second, if we claim “If p, q”, then if p is false, we have not said anything in particular about q. If p is false, q can be either true or false; our claim “If p, q” is still true.

But notice that it is not possible to capture all notions of what we colloquially mean by “If…then…” statements as a two-valued truth function.

It is for example possible to make meaningless statements such as “If grass is red then the moon is made of cheese.” This is however unproblematic under the assumption that logic is an idealized language, adequate for mathematical reasoning, in which we are mainly interested in simplicity and clarity. Under this assumption, such nonsense implications are analogous to grammatically correct but meaningless sentences that can be formed in natural languages, such as “Colorless green ideas sleep furiously”.

To demonstrate its adequacy for mathematics, here is a mathematical example:

If n > 2 then n^2 > 4.

We claim that if n is greater than 2, then its square must be greater than 4. For n = 3 this is obviously true, as we claimed. But what about n not greater than 2? We did not say anything in particular about such n: its square could be larger than 4 or not. And indeed, n = 1 and n = -3 yield a false and a true consequent, respectively. Yet the implication is true in both cases.

Intuitively more problematic are statements such as (p and not(p)) -> q: p and its negation imply q. Think about it this way. The implication is a tautology, it is always true, and you believe true statements. This does not mean that you must believe that an arbitrary q is true as well (as long as you stay consistent), since in the case of a false antecedent you are not making any particular claim about the truth of the consequent (q). And since the statement that p is both true and false, p AND not(p), is always false (remember that the principle of exclusive disjunction for contradictories, (P ∨ ¬P) ∧ ¬(P ∧ ¬P), requires every proposition to be either true or not true, and no proposition to be both), q can be false without invalidating the implication.

Another way to look at p -> q is to interpret it as “p is a subset of q”. Then if it is true that x is an element of p, it must also be an element of q (since q contains p). However, if x is not an element of p, it might still turn out to be an element of q, since q can be larger than p.

————–

Here is a term I just learnt: Extraneous solutions.

Take for example the equation

A = B.

If you were to square both sides you would get

A^2 = B^2

or

A^2 - B^2 = 0.

Which is equal to

(A - B)(A + B) = 0 (by the difference of two squares).

Now the roots of this equation are the roots of the equations A = B and A = -B. This means that we generated an additional solution by squaring the original equation.

The reason for this is that squaring is not an injective function (injective means one-to-one: distinct inputs are mapped to distinct outputs), so it is not invertible. The function y = x^2 does not pass the horizontal line test. In other words, squaring preserves equality (if A = B then A^2 = B^2), but it does not preserve inequality: it is not true that if A != B then A^2 != B^2, since both -1 and 1 are mapped to 1 when squared. So, for example, both x = 1 and x = -1 are solutions to the squared equation x^2 = 1^2, while only x = 1 makes the pre-squared equation x = 1 true.
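
To see the same thing concretely, here is a minimal sympy sketch (assuming sympy is available), using x = 1 as the pre-squared equation:

from sympy import Eq, solve, symbols

x = symbols('x')

print(solve(Eq(x, 1), x))        # [1]
print(solve(Eq(x**2, 1**2), x))  # [-1, 1]; x = -1 is extraneous, it does not solve x = 1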

————–

Operation Crossroads

Video: http://youtu.be/Y53vDnNPiA4

————–

Milky Way may bear 100 million life-giving planets

The new Obama doctrine on climate change will achieve CO2 emission reductions from the power sector of approximately 30% relative to 2005 emission levels.

North Korea as seen from the ISS

North Korea is really dark. Flying over East Asia, an Expedition 38 crew member on the ISS took this night image of the Korean Peninsula on January 30, 2014.

————–

The math we learn in school can seem like a dull set of rules, laid down by the ancients and not to be questioned. In How Not to Be Wrong, Jordan Ellenberg shows us how wrong this view is: Math touches everything we do, allowing us to see the hidden structures beneath the messy and chaotic surface of our daily lives. It’s a science of not being wrong, worked out through centuries of hard work and argument.

HOW NOT TO BE WRONG

————–

If You Learn Nothing Else about Bayes’ Theorem, Let It Be This

2,302,554,979 BC; Galactic Core – A short story by Yvain about acausal trade. Related to Roko’s basilisk.

Drawing fractal trees and Sierpinski triangles with Python’s turtle graphics module. See also here.

Dangerous Delusions: The Green Movement’s War on Progress

————–

…if you think about it, it doesn’t make any sense. Why would you care more for your genetic siblings and cousins and whoever than for your friends and people who are genuinely close to you? That’s like racism – but even worse, at least racists identify with a group of millions of people instead of a group of half a dozen. Why should parents have to raise children whom they might not even like, who might have been a total accident? Why should people, motivated by guilt, make herculean efforts to “keep in touch” with some nephew or cousin whom they clearly would be perfectly happy to ignore entirely?

Asches to Asches (another “short story” by Yvain).

————–

Ten years from now:

…one widely accepted viewpoint holds that fusion power, artificial intelligence, and interstellar migration will shortly solve all our problems, and therefore we don’t have to change the way we live.

[…]

 A hundred years from now:

It has been a difficult century. After more than a dozen major wars, three bad pandemics, widespread famines, and steep worldwide declines in public health and civil order, human population is down to 3 billion and falling.

Continue reading: The Next Ten Billion Years

————–

4 DARPA Projects That Could Be Bigger Than the Internet

3 guys Irish dancing around the world

The decline of Detroit in time-lapse.

Electrical ‘mind control’ shown in primates for first time

Related to: Beware of high IQ individuals making sense of nonsense

Here is a list of people who hold beliefs that I would dismiss, regardless of the fact that they have thought long and hard about their beliefs, are MUCH smarter than me, and can prove this by extraordinary achievements.

Extraordinary claims require extraordinary evidence. And some claims are of such nature that arguments alone do not suffice. Some claims require hard empirical evidence, or an overwhelming consensus among intelligent experts.

The point of the list is partly to show that it is possible to be very smart, and successful, and yet hold beliefs that are widely regarded as unsupported, absurd, or simply flawed.

You should expect there to be many more such people, since this list is not the result of active research but only contains people that I stumble upon. If you know of other people that fall into this category, please let me know.

Also note that I am not claiming that the beliefs held by these people are necessarily wrong (although some of them almost certainly are).

Further note that intelligent people tend to be right much more often than less intelligent people. You should listen to what they have to say, and take it seriously.

——————

Robert Aumann (Nobel laureate and Bayesian rationalist) is a believing Orthodox Jew who has supported Bible Code research.

Link: http://en.wikipedia.org/wiki/Robert_Aumann

Francisco J. Ayala (has been called the “Renaissance Man of Evolutionary Biology”) identifies as a Christian and has said that “science is compatible with religious faith in a personal, omnipotent and benevolent God.” His discoveries have opened up new approaches to the prevention and treatment of diseases that affect hundreds of millions of individuals worldwide.

Link: http://en.wikipedia.org/wiki/Francisco_J._Ayala

Francis Collins (geneticist, Human Genome Project), noted for his landmark discoveries of disease genes and his leadership of the Human Genome Project (HGP), and described by the Endocrine Society as “one of the most accomplished scientists of our time”, is an evangelical Christian. He advocates the perspective that belief in Christianity can be reconciled with acceptance of evolution and science, especially through the advancement of evolutionary creation.

Link: http://en.wikipedia.org/wiki/Francis_Collins

Peter Duesberg (a professor of molecular and cell biology at the University of California, Berkeley) claimed that AIDS is not caused by HIV, which made him so unpopular that his colleagues and others have — until recently — been ignoring his potentially breakthrough work on the causes of cancer.

Link: http://en.wikipedia.org/wiki/Peter_Duesberg

Kary Mullis (Nobel Prize-winning American biochemist) promotes AIDS denialism, climate change denial and belief in astrology. Mullis disputes the Big Bang theory. He also claims to have chatted with a glowing raccoon that he met at midnight on his way to the toilet, and to have lost the ensuing six hours as a result of an alien abduction. The improvements made by Mullis allowed the polymerase chain reaction (PCR) to become a central technique in biochemistry and molecular biology, described by The New York Times as “highly original and significant, virtually dividing biology into the two epochs of before P.C.R. and after P.C.R.”

Link: http://en.wikipedia.org/wiki/Kary_Mullis

Luc Antoine Montagnier (Nobel laureate and virologist) is claiming that DNA can send “electromagnetic imprints” of itself into distant cells and fluids. Montagnier also spoke in 2012 at that cesspit of antivaxxer woo, AutismOne, where he claimed that long-term antibiotic treatment can cure autistic children. He concluded by saying: “I realise how audacious, and even shocking, these successful experiments may appear to unprepared minds.”

Link: http://en.wikipedia.org/wiki/Luc_Montagnier

Kurt Gödel (logician, mathematician and philosopher) had a tendency toward paranoia. He believed in ghosts; he had a morbid dread of being poisoned by refrigerator gases; he refused to go out when certain distinguished mathematicians were in town, apparently out of concern that they might try to kill him. He also believed that materialism is false and that the world in which we live is not the only one in which we shall live or have lived.

Link: http://www.newyorker.com/archive/2005/02/28/050228crat_atlarge

Donald Knuth (a world-renowned computer scientist) is a Lutheran and the author of 3:16 Bible Texts Illuminated.

Link: http://en.wikipedia.org/wiki/Donald_Knuth#Religious_beliefs_and_work

Roger Penrose (mathematical physicist, mathematician and philosopher of science) argues that known laws of physics are inadequate to explain the phenomenon of consciousness.

Link: http://en.wikipedia.org/wiki/Roger_Penrose

Brian David Josephson (Nobel laureate and professor emeritus of physics at the University of Cambridge) argues that parapsychological phenomena (telepathy, psychokinesis and other paranormal themes) may be real. Josephson also supports water memory (homeopathy) and cold fusion.

Link: http://en.wikipedia.org/wiki/Brian_Josephson

Saul Aaron Kripke (McCosh Professor of Philosophy, Emeritus, at Princeton University and teaches as a Distinguished Professor of Philosophy at the CUNY Graduate Center) is an observant Jew. Discussing how his religious views influenced his philosophical views (in an interview with Andreas Saugstad) he stated: “I don’t have the prejudices many have today, I don’t believe in a naturalist world view. I don’t base my thinking on prejudices or a worldview and do not believe in materialism.” Since the 1960s Kripke has been a central figure in a number of fields related to mathematical logic, philosophy of language, philosophy of mathematics, metaphysics, epistemology, and set theory.

Link: http://en.wikipedia.org/wiki/Saul_Kripke

John von Neumann (mathematician, physicist, inventor and polymath) was a strong supporter of “preventive war.” Von Neumann favored an unprovoked surprise nuclear first-strike on the Soviet Union. Life magazine quoted von Neumann as saying, “If you say why not bomb them tomorrow, I say why not today? If you say today at 5 o’clock, I say why not one o’clock?” Nobel Prize-winning physicist Eugene Wigner said of von Neumann that “only he was fully awake.”

Link: ‘Prisoner’s Dilemma’ by William Poundstone, Page 4

Fred Hoyle (an English astronomer noted primarily for the theory of stellar nucleosynthesis) claimed that the fossil Archaeopteryx was a man-made fake. He also claimed a correlation of flu epidemics with the sunspot cycle: the idea was that flu contagion was scattered in the interstellar medium and reached Earth only when the solar wind had minimum power. He further rejected Earth-based abiogenesis.

Link: http://en.wikipedia.org/wiki/Fred_Hoyle


Frank J. Tipler (a mathematical physicist and cosmologist) believes that the universe is evolving towards a maximum level of complexity and consciousness he calls the Omega Point. Tipler identifies the Omega Point with God.

Link: http://en.wikipedia.org/wiki/Frank_J._Tipler

Otto Eberhard Rössler (Professor for Theoretical Biochemistry, known for his work on chaos theory) asserts that the LHC experiments have the potential to create low velocity micro black holes that could grow in mass or release dangerous radiation leading to doomsday scenarios, such as the destruction of the Earth. He has attempted to halt the beginning of the experiments through petitions to the US and European Courts.

Link: http://en.wikipedia.org/wiki/Otto_R%C3%B6ssler

Ray Kurzweil (inventor and director of engineering at Google) claims that a technological singularity will occur in 2045. Kurzweil was the principal inventor of the first CCD flatbed scanner, the first omni-font optical character recognition, the first print-to-speech reading machine for the blind, the first commercial text-to-speech synthesizer, the first music synthesizer Kurzweil K250 capable of recreating the grand piano and other orchestral instruments, and the first commercially marketed large-vocabulary speech recognition.

Link: http://en.wikipedia.org/wiki/Ray_Kurzweil

Linus Pauling (one of the most influential chemists in history and among the most important scientists of the 20th century) promoted orthomolecular medicine, megavitamin therapy and vitamin C for treating cancer.

Link: http://en.wikipedia.org/wiki/Linus_Pauling

Nassim Nicholas Taleb (essayist, scholar, statistician, risk analyst and bestselling author) portrays GMOs as a ‘catastrophe in waiting’ and has taken to personally lashing out at those who challenge his conclusions. He recently accused Anne Glover, the European Union’s Chief Scientist and one of the most respected scientists in the world, of being a “dangerous imbecile” for arguing that GM crops and foods are safe and that Europe should apply science-based risk analysis to the GMO approval process, views reflected in summary statements by every major independent science organization in the world.

Link: http://www.geneticliteracyproject.org/2014/11/03/is-nassim-taleb-a-dangerous-imbecile-or-just-on-the-pay-of-the-anti-gmo-mafia/

Max Tegmark (professor at the Massachusetts Institute of Technology) promotes the mathematical universe hypothesis, that “all structures that exist mathematically exist also physically”.

Link: http://en.wikipedia.org/wiki/Max_Tegmark

Georges Lemaître (who proposed what became known as the Big Bang theory of the origin of the Universe) was a Belgian Roman Catholic priest.

Link: http://en.wikipedia.org/wiki/Georges_Lema%C3%AEtre


A frequent scenario mentioned by people concerned with risks from artificial general intelligence (short: AI) is that the AI will misinterpret what it is supposed to do and thereby cause human extinction, and the obliteration of all human values.[1]

A counterargument is that the premise of an AI that is capable of causing human extinction, due to it being superhumanly intelligent, contradicts the hypothesis that it will misinterpret what it is supposed to do.[2][3][4]

The usual response to this counterargument is that, by default, an AI will not feature the terminal goal <“Understand What Humans Mean” AND “Do What Humans Mean”>.

I believe this response to be confused. It is essentially similar to the claim that an AI does not, by default, possess the terminal goal of correctly interpreting and following its terminal goal. Here is why.

You could define an AI’s “terminal goal” to be its lowest or highest level routines, or all of its source code:

Terminal Goal (Level N): Correctly interpret and follow human instructions.

Goal (Level N-1): Interpret and follow instruction set N.

Goal (Level N-2): Interpret and follow instruction set N-1.

…

Goal (Level 1): Interpret and follow instruction set 2.

Terminal Goal (Level 0): Interpret and follow instruction set 1.

You could also claim that an AI is not, by default, an intelligent agent. But such claims are vacuous and do not help us to determine whether an AI that is capable of causing human extinction will eventually cause human extinction. Instead we should consider the given premise of a generally intelligent AI, without making further unjustified assumptions.

If your premise is an AI that is intelligent enough to make itself intelligent enough to outsmart humans, then the relevant question is: “How could such an AI possibly end up misinterpreting its goals, or follow different goals?”

There are 3 possibilities:

(1) The AI does not understand and do what it is meant to do, but does something else that causes human extinction.

(2) The AI does not understand what it is meant to do but tries to do it anyway, and thereby causes human extinction.

(3) The AI does understand, but not do what it is meant to do. Instead it does something else that causes human extinction.

Since, by definition, the AI is capable of outsmarting humanity, it is very likely that it is also capable of understanding what it is meant to do.[5][6] Therefore possibilities 1 and 2 can be ruled out.

What about possibility 3?

Outsmarting humanity is a very small target to hit, requiring a very small margin of error. In order to succeed at making an AI that can outsmart humans, humans have to succeed at making the AI behave intelligently and rationally. Which in turn requires humans to succeed at making the AI behave as intended along a vast number of dimensions. Thus, failing to predict the AI’s behavior does in almost all cases result in the AI failing to outsmart humans.

As an example, consider an AI that was designed to fly planes. It is exceedingly unlikely for humans to succeed at designing an AI that flies planes without crashing, but which consistently chooses destinations that it was not meant to choose, since all of the capabilities that are necessary to fly without crashing fall into the category “Do What Humans Mean”, and choosing the correct destination is just one such capability.

You need to get a lot right in order for an AI to reach a destination autonomously. Autonomously reaching wrong destinations is an unlikely failure mode. And the more intelligent your AI is, the less likely it should be to make such errors without correcting them.[7] And the less intelligent your AI is, the less likely it is to be able to cause human extinction.

Conclusion

The concepts of a “terminal goal”, and of a “Do-What-I-Mean dynamic”, are fallacious. The former can’t be grounded without leading to an infinite regress. The latter erroneously makes a distinction between (a) the generally intelligent behavior of an AI, and (b) whether an AI behaves in accordance with human intentions, since generally intelligent behavior of intelligently designed machines is implemented intentionally.

Notes

[1] 5 minutes on AI risk youtu.be/3jSMe0owGMs

[2] An informal proof of the dumb superintelligence argument.

Givens:

(1) The AI is superhumanly intelligent.

(2) The AI wants to optimize the influence it has on the world (i.e., it wants to act intelligently and be instrumentally and epistemically rational).

(3) The AI is fallible (e.g., it can be damaged due to external influence (e.g., a cosmic ray hitting its processor), or make mistakes due to limited resources).

(4) The AI’s behavior is not completely hard-coded (i.e., given any terminal goal there are various sets of instrumental goals to choose from).

To be proved: The AI does not tile the universe with smiley faces when given the goal to make humans happy.

Proof: Suppose the AI chooses to tile the universe with smiley faces even though there are physical phenomena (e.g., human brains and literature) that imply this to be the wrong interpretation of a human-originated goal pertaining to human psychology. This contradicts 2, which by 1 and 3 should have prevented the AI from adopting such an interpretation.

[3] The Maverick Nanny with a Dopamine Drip: Debunking Fallacies in the Theory of AI Motivation richardloosemore.com/docs/2014a_MaverickNanny_rpwl.pdf

[4] Implicit constraints of practical goals kruel.co/2012/05/11/implicit-constraints-of-practical-goals/

[5] “The two features <all-powerful superintelligence> and <cannot handle subtle concepts like “human pleasure”> are radically incompatible.” The Fallacy of Dumb Superintelligence

[6] For an AI to misinterpret what it is meant to do it would have to selectively suspend using its ability to derive exact meaning from fuzzy meaning, which is a significant part of general intelligence. This would require its creators to restrict their AI, and specify an alternative way to learn what it is meant to do (which takes additional, intentional effort).

An alternative way to learn what it is meant to do is necessary because an AI that does not know what it is meant to do, and which is not allowed to use its intelligence to learn what it is meant to do, would have to choose its actions from an infinite set of possible actions. Such a poorly designed AI will either (a) not do anything at all or (b) not be able to decide what to do before the heat death of the universe, given limited computational resources.

Such a poorly designed AI will not even be able to decide whether trying to acquire unlimited computational resources is instrumentally rational, because it will be unable to decide whether the actions required to acquire those resources might be instrumentally irrational from the perspective of what it is meant to do.

[7] Smarter and smarter, then magic happens… kruel.co/2013/07/23/smarter-and-smarter-then-magic-happens/

(1) The abilities of systems are part of human preferences, as humans intend to give systems certain capabilities. As a prerequisite to build such systems, humans have to succeed at implementing their intentions.

(2) Error detection and prevention is such a capability.

(3) Something that is not better than humans at preventing errors is no existential risk.

(4) Without a dramatic increase in the capacity to detect and prevent errors it will be impossible to create something that is better than humans at preventing errors.

(5) A dramatic increase in the human capacity to detect and prevent errors is incompatible with the creation of something that constitutes an existential risk as a result of human error.


Related to: Highly intelligent and successful people who hold weird beliefs

The smarter someone is, the easier it is for them to rationalize ideas that do not make sense, just as a superhuman AI could argue its way out of a box by convincing its gatekeeper that letting it out is rational, even when it is not.[1]

In essence this can be illustrated by the relation between adults and children. Adults can confuse themselves with more complex ideas than children can. Children, however, can be infected by those same ideas when adults pass them on.

This means that people should be especially careful when dealing with high-IQ individuals who seemingly make sense of ideas that trigger the absurdity heuristic.[2][3]

If, however, an average-IQ individual is able to justify a seemingly outlandish idea, then that is reassuring, in the sense that you should expect there to be even better arguments in favor of that idea.

This is something that seems to be widely ignored by people associated with LessWrong.[4] It is taken as evidence in favor of an idea if a high IQ individual thought about something for a long time and still accepts the idea.

If you are really smart you can make up genuine arguments, or cobble together concepts and ideas, to defend your cherished beliefs. The result can be an intricate argumentative framework that shields you from any criticism, yet seems perfectly sane and rational from the inside.[5]

Note though that I do not assume that smart people deliberately try to confuse themselves. What I am saying is that the rationalization of complex ideas is easier for smart people. And this can have the consequence that other people are then convinced by the same arguments with which the author, erroneously, convinced themselves.

It is a caveat that I feel should be taken into account when dealing with complex and seemingly absurd ideas being publicized by smart people. If someone who is smart manages to convince you of something that you initially perceived to be absurd, then you should be wary of the possibility that your newly won acceptance might be due to the person being better than you at looking for justifications and creating seemingly sound arguments, rather than the original idea not being absurd.

As an example, there are a bunch of mathematical puzzles that use a hidden contradiction to prove something absurd.[6] If you are smart, then you can hide such an inconsistency even from yourself and end up believing that 0=1.

As another example, if you are not smart enough to think about something as fancy as the simulation argument then you are not at a risk of fearing a simulation shutdown.[7][8]

But if a smart person who comes across such an argument becomes obsessed with it, then they have the ability to give it a veneer of respectability. Eventually then the idea can spread among more gullible people and create a whole community of people worrying about a simulation shutdown.

Conclusion

More intelligent people can fail in more complex ways than people of lesser intelligence. The more intelligent someone is, relative to your own intelligence, the harder it is for you to spot how they are mistaken.

Obviously the idea is not to ignore what smarter people say but to notice that as someone of lesser intelligence you can easily fall prey to explanations that give credence to a complicated idea but which suffer from errors that you are unable to spot.

When this happens, when you are at risk of getting lost in, or overwhelmed by, an intricate argumentative framework created by someone much smarter than you, then you have to fall back on simpler heuristics than direct evaluation. You could, for example, look for a consensus among similarly smart individuals, or ask for an evaluation by a third party that is widely deemed to be highly intelligent.


Notes

[1] The LessWrong community actually tested my hypothesis with what they call the “AI box experiment” (yudkowsky.net/singularity/aibox/), in which Eliezer Yudkowsky and others played an unfriendly AI and managed to convince several people, by means of arguments alone, to let them out of confinement.

I think such results should ring a lot of alarm bells. If it is possible to first convince someone that an unfriendly AI is an existential risk and then subsequently convince them to let such an AI out of the box, what does this tell us about the relation between such arguments and what is actually true?

[2] wiki.lesswrong.com/wiki/Absurdity_heuristic

[3] Absurdity can indicate that your familiarity with a topic is insufficient in order to discern reality from fantasy (e.g. a person’s first encounter with quantum mechanics). As a consequence you are more prone to be convinced by arguments that are wrong but which give an appearance of an explanation (e.g. popular science accounts of quantum mechanics).

[4] lesswrong.com

[5] kruel.co/2013/01/10/the-singularity-institute-how-they-brainwash-you/

[6] What’s wrong with the following contradiction?

e^(i*pi) = -1

(e^(i*pi))^2 = (-1)^2 = 1 = e^(i*2*pi)

e^(i*2*pi) = e^0

ln(e^(i*2*pi)) = ln(e^0)

i*2*pi = 0

Well, ln(e^0) = ln(1), and over the complex numbers ln(1) = i*2*pi*n, where n can be any integer. For n = 0, e^(i*2*pi*0) = e^0 = 1, and for n = 1, e^(i*2*pi*1) = e^(i*2*pi) = 1. The step from ln(e^(i*2*pi)) = ln(e^0) to i*2*pi = 0 silently treats the complex logarithm as single-valued, which it is not: both sides are logarithms of 1, they just correspond to different branches.
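
A quick numerical illustration of this (Python’s cmath.log returns only the principal branch, i.e. the n = 0 value):

import cmath
import math

z = cmath.exp(2j * math.pi)  # e^(i*2*pi), numerically ~ 1
print(z)                     # ~ (1+0j), up to floating-point error
print(cmath.log(z))          # ~ 0j, the principal branch, not i*2*pi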

[7] simulation-argument.com

[8] See e.g. this link.


Taking a look at the probabilities associated with a scenario in which an artificial general intelligence attempts to take over the world by means of molecular nanotechnology that it invented, followed by some general remarks and justifications.

Note that this is just one possible scenario. Taking into consideration all possible scenarios results in this probability estimate of human extinction by AI.

5% that it is in principle possible to create molecular nanotechnology that can empower an agent to cause human extinction quickly enough for other parties to be unable to either intervene or employ their own nanotechnology against it.

1%, conditional on the above, that an artificial general intelligence that can solve molecular nanotechnology will be invented before molecular nanotechnology has been solved by humans or narrow AI precursors.

0.1%, conditional on the above, that an AI will be built in such a way that it wants to acquire all possible resources and eliminate all possible threats, and that its programming allows it to pursue plans that will result in the enslavement or extinction of humanity without further feedback from humans.

5%, conditional on the above, that a cost-benefit analysis shows that it would at some point be instrumentally rational to attempt to kill all humans, either to eliminate a threat or to convert them into more useful resources.

1%, conditional on the above, that the AI will not accidentally reveal its hostility towards its creators during the early phases of its development (when it is still insufficiently skilled at manipulating and deceiving humans) or that any such revelation will be ignored. Respectively, suspicious activities will at no point be noticed, or not taken seriously enough (e.g. by the AI’s creators, third-party security experts, third-party AI researchers, hackers, concerned customers or other AIs) in order to thwart the AI’s plan for world domination.

0.001%, conditional on the above, that the AI will somehow manage to acquire the social engineering skills necessary in order to manipulate and deceive humans in such a way as to make them behave in a sufficiently complex and coherent manner to not only conduct the experiments necessary for it to solve molecular nanotechnology but to also implement the resulting insights in such a way as to subsequently take control of the resulting technology.

I have ignored a huge number of other requirements, and all of the above requirements can be broken up into a lot of more detailed requirements. Each requirement provides ample opportunity to fail.
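
For concreteness, here is a minimal Python sketch that multiplies just the estimates listed above and nothing else; the labels are only my shorthand for the listed requirements:

estimates = {
    "nanotech can enable quick extinction":             0.05,     # 5%
    "AGI solves nanotech before humans or narrow AI":   0.01,     # 1%
    "AI built to grab all resources without feedback":  0.001,    # 0.1%
    "killing humans judged instrumentally rational":    0.05,     # 5%
    "hostility never noticed or never acted upon":      0.01,     # 1%
    "AI acquires the needed social engineering skills": 0.00001,  # 0.001%
}

joint = 1.0
for p in estimates.values():
    joint *= p

print(joint)  # ~ 2.5e-15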

Remarks and Justifications

I bet you have other ideas on how an AI could take over the world. We all do (or at least anyone who likes science fiction). But let us consider whether the ability to take over the world is mainly due to the brilliance of your plan or something else.

Could a human being, even an exceptional smart human being, implement your plan? If not, could some company like Google implement your plan? No? Could the NSA, the security agency of the most powerful country on Earth, implement your plan?

The NSA not only has thousands of very smart drones (people), all of which are already equipped with manipulative abilities, but it also has huge computational resources and knows about backdoors to subvert a lot of systems. Does this enable the NSA to implement your plan without destroying or decisively crippling itself?

If not, then the following features are very likely insufficient to implement your plan: (1) being in control of thousands of human-level drones, straw men, and undercover agents in important positions, (2) having the law on your side, (3) access to massive computational resources, and (4) knowledge of heaps of loopholes to bypass security.

If your plan cannot be implemented by an entity like the NSA, which already features most of the prerequisites that your hypothetical artificial general intelligence first needs to acquire by some magical means, then what is it that makes your plan so foolproof when executed by an AI?

To summarize some quick points that I believe to be true:

(1) The NSA cannot take over the world (even if it would accept the risk of destroying itself).

(2) Your artificial general intelligence first needs to acquire similar capabilities.

(3) Each step towards these capabilities provides ample opportunity to fail. After all, your artificial general intelligence is a fragile technological product that critically depends on human infrastructure.

(4) You have absolutely no idea how your artificial general intelligence could acquire sufficient knowledge of human psychology to become better than the NSA at manipulation and deception. You are just making this up.

If the above points are true, then your plan seems to be largely irrelevant. The possibility of taking over the world does mainly depend on something you assume the artificial general intelligence to be capable of that entities such as Google or the NSA are incapable of.

What could it be? Parallel computing? The NSA has thousands of human-level intelligences working in parallel. How many do you need to implement your plan?

Blazing speed to the rescue!

Let’s just assume that this artificial general intelligence that you imagine is trillions of times faster. This is already a nontrivial assumption. But let’s accept it anyway.

Raw computational power alone is obviously not enough to do anything. You need the right algorithms too. So what assumptions do you make about these algorithms, and how do you justify these assumptions?

To highlight the problem, consider instead of an AI a whole brain emulation (short: WBE). What could such a WBE do if each year equaled a million subjective years? Do you expect it to become a superhuman manipulator by watching all YouTube videos and reading all books and papers on human psychology? Is it just a matter of enough time? Or do you also need feedback?

If you do not believe that such an emulation could become a superhuman manipulator, thanks to a millionfold speedup, do you believe that a trillionfold speedup would do the job? Would a trillionfold speedup be a million times better than a millionfold speedup? If not, do you believe a further speedup would make any difference at all?

Do you feel capable of confidently answering the above questions?

If you do not believe that a whole brain emulation could do the job, solely by means of a lot of computing power, what makes you believe that an AI can do it instead?

To reformulate the question, do you believe that it is possible to accelerate the discovery of unknown unknowns, or the occurrence of conceptual revolutions, simply by throwing more computing power at an algorithm? Are particle accelerators unnecessary, in order to gain new insights into the nature of reality, once you have enough computing power? Is human feedback unnecessary, in order to improve your social engineering skills, once you have enough computing power?

And even if you believe all this was possible, even if a Babylonian mathematician, had he been given a trillionfold speedup of subjective time by aliens uploading him into some computational substrate, could brute-force concepts such as calculus and high-tech such as nuclear weapons, how could he apply those insights? He wouldn’t be able to simply coerce his fellow Babylonians into building him some nuclear weapons, because he would have to convince them to do it without them dismissing or even killing him. But more importantly, it takes nontrivial effort to obtain the prerequisites for building nuclear weapons.

What makes you believe that this would be much easier for a future emulation of a scientist trying to come up with similar conceptual breakthroughs and high-tech? And what makes you believe that a completely artificial entity, that lacks all the evolutionary abilities of a human emulation, can do it?

Consider that it took millions of years of biological evolution, thousands of years of cultural evolution, and decades of education in order for a human to become good at the social manipulation of other humans. We are talking about a huge information-theoretic complexity that any artificial agent somehow has to acquire in a very short time.

To summarize the last points:

(1) Throwing around numbers such as a million- or trillionfold speedup is very misleading if you have no idea how the instrumental value of such a speedup would scale with whatever you are trying to accomplish.

(2) You have very little reason to believe that conceptual revolutions and technological breakthroughs happen in a vacuum and only depend on computing power rather than the context of cultural evolution and empirical feedback from experiments.

(3) If you cannot imagine doing it yourself, given a speedup, then you have very little reason to believe that something which is much less adapted to a complex environment, populated by various agents, can do the job more easily.

(4) In the end you need to implement your discoveries. Concepts and blueprints alone are useless if they cannot be deployed effectively.

I suggest that you stop handwaving and start analyzing concrete scenarios and their associated probabilities. I suggest that you begin to ask yourself how anyone could justify a >1% probability of extinction by artificial general intelligence.


A quick breakdown of my probability estimates of an extinction risk due to artificial general intelligence (short: unfriendly AI), the possibility that such an outcome might be adverted by the creation of a friendly AI, and that the Machine Intelligence Research Institute (short: MIRI) will play an important technical role in this.

Probability of an extinction by artificial general intelligence: 5 × 10^-10

1% that an information-theoretically simple artificial general intelligence is feasible (where “simple” means that it has less than 0.1% of the complexity of an emulation of the human brain), as opposed to a very complex “Kludge AI” that is being discovered piece by piece (or evolved) over a long period of time (where “long period of time” means more than 150 years).

0.1%, conditional on the above, that such an AI cannot or will not be technically confined, and that it will by default exhibit all basic AI drives in an unbounded manner (that friendly AI is required to make an AI sufficiently safe in order for it to not want to wipe out humanity).

1%, conditional on the above, that an intelligence explosion is possible (that it takes less than 2 decades after the invention of an AI (that is roughly as good as humans (or better, perhaps unevenly) at mathematics, programming, engineering and science) for it to self-modify (possibly with human support) to decisively outsmart humans at the achievement of complex goals in complex environments).

5%, conditional on the above, that such an intelligence explosion is unstoppable (e.g. by switching the AI off (e.g. by nuking it)), and that it will result in human extinction (e.g. because the AI perceives humans to be a risk, or to be a resource).

10%, conditional on the above, that humanity will not be first wiped out by something other than an unfriendly AI (e.g. molecular nanotechnology being invented with the help of a narrow AI).
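
The headline figure of 5 × 10^-10 is simply the product of the conditional estimates listed above; a minimal check:

factors = [0.01, 0.001, 0.01, 0.05, 0.1]  # 1%, 0.1%, 1%, 5%, 10%

p_extinction = 1.0
for f in factors:
    p_extinction *= f

print(p_extinction)  # ~ 5e-10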

Probability of a positive technical contribution to friendly AI by MIRI: 2.5 × 10^-14

0.01%, conditional on the above, that friendly AI is possible, can be solved in time, and that it will not worsen the situation by either getting some detail wrong or by making AI more likely.

5%, conditional on the above, that the Machine Intelligence Research Institute will make an important technical contribution to friendly AI.

