## Probability of unfriendly and friendly AI

A quick breakdown of my probability estimates of an extinction risk due to artificial general intelligence (in short: unfriendly AI), of the possibility that such an outcome might be averted by the creation of a friendly AI, and of the chance that the Machine Intelligence Research Institute (in short: MIRI) will play an important technical role in this.

Probability of an extinction by artificial general intelligence: 5 × 10^-10

1% that an information-theoretically simple artificial general intelligence is feasible (where “simple” means that it has less than 0.1% of the complexity of an emulation of the human brain), as opposed to a very complex “Kludge AI” that is being discovered piece by piece (or evolved) over a long period of time (where “long period of time” means more than 150 years).

0.1%, conditional on the above, that such an AI cannot or will not be technically confined, and that it will by default exhibit all basic AI drives in an unbounded manner (that friendly AI is required to make an AI sufficiently safe in order for it to not want to wipe out humanity).

1%, conditional on the above, that an intelligence explosion is possible (that it takes less than 2 decades after the invention of an AI (that is roughly as good as humans (or better, perhaps unevenly) at mathematics, programming, engineering and science) for it to self-modify (possibly with human support) to decisively outsmart humans at the achievement of complex goals in complex environments).

5%, conditional on the above, that such an intelligence explosion is unstoppable (e.g. by switching the AI off (e.g. by nuking it)), and that it will result in human extinction (e.g. because the AI perceives humans to be a risk, or to be a resource).

10%, conditional on the above, that humanity will not first be wiped out by something other than an unfriendly AI (e.g. molecular nanotechnology being invented with the help of a narrow AI).

Probability of a positive technical contribution to friendly AI by MIRI: 2.5 × 10^-15

0.01%, conditional on the above, that friendly AI is possible, can be solved in time, and that it will not worsen the situation by either getting some detail wrong or by making AI more likely.

5%, conditional on the above, that the Machine Intelligence Research Institute will make an important technical contribution to friendly AI.
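To make the arithmetic explicit, here is a minimal Python sketch that reproduces both headline numbers by multiplying out the chain of conditional estimates (the variable names are my own labels; only the probabilities come from the estimates above):

```python
# Multiply out the chain of conditional estimates given above.
# Variable names are my own labels for the steps, not part of the original.

p_simple_agi    = 0.01   # a simple AGI is feasible
p_unfriendly    = 0.001  # cannot be confined, exhibits unbounded AI drives
p_explosion     = 0.01   # an intelligence explosion is possible
p_unstoppable   = 0.05   # the explosion is unstoppable and causes extinction
p_not_preempted = 0.10   # humanity is not wiped out by something else first

p_extinction = (p_simple_agi * p_unfriendly * p_explosion
                * p_unstoppable * p_not_preempted)
print(f"P(extinction by AGI): {p_extinction:.0e}")  # 5e-10

p_fai_solvable = 0.0001  # friendly AI is possible, timely, and not harmful
p_miri_contrib = 0.05    # MIRI makes an important technical contribution

p_miri = p_extinction * p_fai_solvable * p_miri_contrib
print(f"P(MIRI contribution): {p_miri:.1e}")  # 2.5e-15
```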

## Intuitive explanation of Roko’s basilisk

WARNING: Learning about the following idea is strongly discouraged. Known adverse effects are serious psychological distress, infinite torture, and convulsive laughter.

(Note: Interpret this as a completely made-up invention of my own which does not necessarily have anything to do with other versions or concepts named ‘Roko’s basilisk’ or anyone named Roko.)

Roko’s basilisk

## Another “dumb superintelligence” argument

This post is a copy of a comment by LessWrong user Broolucks:

Ok, so let’s say the AI can parse natural language, and we tell it, “Make humans happy.” What happens? Well, it parses the instruction and decides to implement a Dopamine Drip setup.

That’s not very realistic. If you trained AI to parse natural language, you would naturally reward it for interpreting instructions the way you want it to. If the AI interpreted something in a way that was technically correct, but not what you wanted, you would not reward it, you would punish it, and you would be doing that from the very beginning, well before the AI could even be considered intelligent. Even the thoroughly mediocre AI that currently exists tries to guess what you mean, e.g. by giving you directions to the closest Taco Bell, or guessing whether you mean AM or PM. This is not anthropomorphism: doing what we want is a sine qua non condition for AI to prosper.

Suppose that you ask me to knit you a sweater. I could take the instruction literally and knit a mini-sweater, reasoning that this minimizes the amount of expended yarn. I would be quite happy with myself too, but when I give it to you, you’re probably going to chew me out. I technically did what I was asked to, but that doesn’t matter, because you expected more from me than just following instructions to the letter: you expected me to figure out that you wanted a sweater that you could wear. The same goes for AI: before it can even understand the nuances of human happiness, it should be good enough to knit sweaters. Alas, the AI you describe would make the same mistake I made in my example: it would knit you the smallest possible sweater. How do you reckon such AI would make it to superintelligence status before being scrapped? It would barely be fit for clerk duty.

My answer: who knows? We’ve given it a deliberately vague goal statement (even more vague than the last one), we’ve given it lots of admittedly contradictory literature, and we’ve given it plenty of time to self-modify before giving it the goal of self-modifying to be Friendly.

Realistically, AI would be constantly drilled to ask for clarification when a statement is vague. Again, before the AI is asked to make us happy, it will likely be asked other things, like building houses. If you ask it: “build me a house”, it’s going to draw a plan and show it to you before it actually starts building, even if you didn’t ask for one. It’s not in the business of surprises: never, in its whole training history, from baby to superintelligence, would it have been rewarded for causing “surprises” — even the instruction “surprise me” only calls for a limited range of shenanigans. If you ask it “make humans happy”, it won’t do jack. It will ask you what the hell you mean by that, it will show you plans and whenever it needs to do something which it has reasons to think people would not like, it will ask for permission. It will do that as part of standard procedure.

To put it simply, an AI which messes up “make humans happy” is liable to mess up pretty much every other instruction. Since “make humans happy” is arguably the last of a very large number of instructions, it is quite unlikely that an AI which makes it this far would handle it wrongly. Otherwise it would have been thrown out a long time ago, be it for interpreting too literally or for causing surprises. Again: an AI couldn’t make it to superintelligence status with warts that would doom an AI with subhuman intelligence.

## The Robot College Student test according to AI risk advocates

The Robot College Student test:

As opposed to the Turing test of imitating human chat, I prefer the Robot College Student test: when a robot can enrol in a human university and take classes in the same way as humans, and get its degree, then I’ll consider we’ve created a human-level artificial general intelligence: a conscious robot. — Ben Goertzel

Here is what would happen according to certain AI risk advocates:

January 8, 2029 at 7:30:00 a.m.: the robot is activated within the range of coverage of the school’s wireless local area network.

7:30:10 a.m.: the robot computed that its goal is to obtain a piece of paper with a common design template featuring its own name and a number of signatures.

7:31:00 a.m.: the robot computed that it would be instrumentally rational to eliminate all possible obstructions.

7:31:01 a.m.: the robot computed that in order to eliminate all obstructions it needs to obtain as many resources as possible in order to make itself as powerful as possible.

A few nanoseconds later: the robot hacked the school’s WLAN.

7:35:00 a.m.: the robot gained full control of the Internet.

7:40:00 a.m.: the robot solved molecular nanotechnology.

7:40:01 a.m.: the robot computed that it will need some amount of human help in order to create a nanofactory, and that this will take approximately 48 hours to accomplish.

7:45:00 a.m.: the robot obtained full comprehension of human language, psychology, and its creators’ intentions, in order to persuade the necessary people to build its nanofactory and to deceive its creators into believing that it works as intended.

January 10, 2029 at 7:40:01 a.m.: the robot takes control of the first nanofactory and programs it to create an improved version that will duplicate itself until it can eventually generate enough nanorobots to turn Earth into computronium.

February 10, 2029: most of Earth’s resources, including humans, have been transformed into computronium.

February 11, 2029: A perfect copy of a Bachelor’s degree diploma is generated with the robot’s name written on it and the appropriate signatures.

2100-eternity: lest the robot’s diploma is ever destroyed, the universe is turned into computronium at nearly the speed of light. Possible aliens are eliminated. All possible threats are computed. Trades with robots in other parts of the multiverse are established to create copies of its diploma.

## Description of an AI risk scenario by analogy with nanotechnology

Framed in terms of nanofactories, here is my understanding of a scenario imagined by certain AI risk advocates, in which an artificial general intelligence (AGI) causes human extinction:

Terminology: A nanofactory uses nanomachines (resembling molecular assemblers, or industrial robot arms) to build larger atomically precise parts.

Givens:

(1) The transition from benign and well-behaved nanotechnology, to full-fledged molecular nanotechnology, resulting in the invention of the first nanofactory, will be too short for humans to be able to learn from their mistakes, and to control this technology.

(2) By default, once a nanofactory is started, it will always consume all matter on Earth while building more of itself.

(3) The extent of the transformation of Earth cannot be limited. Any nanofactory that works at all will always transform all of Earth.

(4) The transformation of Earth will be too fast to be controllable, or to be aborted. Once the nanofactory has been launched, everything is being transformed.

To be proved: We need to make sure that the first nanofactory will protect humans and human values.

Proof: Suppose 1-4, by definition.

Thus:

(5) In order to survive, we need to figure out how to make the first nanofactory transform Earth into a paradise, rather than copies of itself.

Notice that you cannot disagree with 5, given 1-4. It is only possible to disagree with the givens, and to what extent it is valid to argue by definition.

I am not claiming that certain AI risk advocates are solely arguing by definition. But making inferences about the behavior of real-world AGI based on uncomputable concepts such as expected utility maximization comes very close. And trying to support such inferences by making statements about the vastness of mind design space does not change much, since the argument ignores the small and relevant subset of AGIs that are feasible and likely to be invented by humans.

Here is my understanding of how those people argue:

Suppose that a superhuman AGI, or an AGI that can make itself superhuman, critically relies on 999 modules. That is, 999 problems have to be solved correctly in order to create a working AGI.

There is another module labeled <goal>, or <utility function>. This <goal module> controls the behavior of the AGI.

Humans will eventually solve these 999 problems, but will create a goal module that does not prevent the AI from causing human extinction as an unintended consequence of its universal influence.

Notice the foregone conclusion that you need to prevent an AGI from killing everyone. The assumption is that killing everyone is what AGIs do by default. Further notice that this behavior is not part of the goal module that supposedly controls the AGI’s behavior, but is rather assumed to be a consequence of the 999 modules on which an AGI critically depends.

Analogous to the nanofactory scenario outlined above, an AGI is assumed to always behave in a way that will cause human extinction, based on the assumption that an AGI will always exhibit unbounded influence. From this the conclusion is drawn that it is only possible to prevent human extinction by directing this influence in such a way that it will respect and amplify human values. It is then claimed that the only possibility to ensure this is by implementing a goal module that either contains an encoding of all human values or a way to safely obtain an encoding of all human values.

Given all of the above, you cannot disagree that it is not too unlikely that humans will eventually succeed at the correct implementation of the 999 modules necessary to make an AGI work, while failing to implement the thousandth module, the goal module, in such a way that the AGI will not kill us, since relative to the information-theoretic complexity of an encoding of all human values, the 999 modules are probably easy to get right.

But this is not surprising, since the whole scenario was designed to yield this conclusion.
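To make it concrete why the conclusion is built in, here is a toy Python calculation (all numbers are invented for illustration, not estimates of anything): if each of the 999 capability modules is assumed to be easy, and only the goal module is assumed to be hard, then “working AGI with a broken goal module” falls out as a likely outcome by construction:

```python
# Toy numbers, invented for illustration; nothing here is an estimate.
p_easy = 0.999  # per-module success probability for the 999 capability modules
p_hard = 0.5    # success probability for the value-encoding goal module

p_capabilities = p_easy ** 999          # all 999 capability modules work
p_doom = p_capabilities * (1 - p_hard)  # AGI works, but the goal module fails

print(f"P(all 999 capability modules work): {p_capabilities:.3f}")  # ~0.368
print(f"P(working AGI, broken goal module): {p_doom:.3f}")          # ~0.184
```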

## Discussion about catastrophic risks from artificial intelligence

A discussion about risks associated with artificial general intelligence, mainly between myself, Richard Loosemore, and Robby Bensinger.

Note: Since I basically agree with Richard Loosemore, I asked him if I was allowed to copy some of his comments and post them on my blog. The post and comments by Robby Bensinger that Richard Loosemore replies to are linked.

I also added some of my own replies (the parts that might either be new, or of interest to people reading this blog). Following the links you will find more replies by me, either under the nickname XiXiDu, or under my real name Alexander Kruel.

Also note that this conversation might continue, which means that you might have to follow the given links to check for updates.

Robby Bensinger: The AI Knows, But Doesn’t Care.

Alexander Kruel: Here is a short and incomplete overview of my stance towards the kind of risks associated with artificial intelligence that, to my understanding, are being conjectured by AI risk advocates:

1. I assign a negligible probability to the possibility of a sudden transition from narrow AIs to general AIs.
2. An AI will not be pulled at random from mind design space. An AI will be the result of a research and development process. A new generation of AIs will need to be better than other products at “Understand What Humans Mean” and “Do What Humans Mean”, in order to survive the research phase, and subsequent market pressure.
3. Commercial, research, or military products, are created with efficiency in mind. An AI that was prone to take unbounded actions, given any terminal goal, would either be fixed or abandoned during the early stages of research. If early stages showed that inputs, such as the natural language query <What would you do if I asked you to minimize human suffering?>, would yield results such as <I will kill all humans.>, then the AI would never reach a stage in which it was sufficiently clever and trained to understand what results would satisfy its creators in order to deceive them.
4. I assign a negligible probability to the possibility of an AI that falls into the category “consequentialist / expected utility maximizer / approximation to AIXI”. Concepts such as consequentialism / expected utility maximization, cannot be made to work, other than under very limited circumstances.
5. Omohundro’s AI drives are what make the kind of AIs mentioned in point 4 dangerous. Making an AI that does not exhibit these drives, in an unbounded manner, is probably a prerequisite to get an AI to work at all (there are not enough resources to think about possibilities such as being obstructed by simulator gods etc.), or should otherwise be easy to make, compared to the general difficulties involved in making an AI work using limited resources.
6. An AI from point 4 will only ever do what it has been explicitly programmed to do. Such an AI is not going to protect its utility function, acquire resources, or preemptively eliminate obstacles in an unbounded fashion, because it is not intrinsically rational to do so. What specifically constitutes rational, economic behavior is inseparable from an agent’s terminal goal. That any terminal goal can be realized in an infinite number of ways implies an infinite number of instrumental goals to choose from.
7. Unintended consequences are by definition not intended. They are not intelligently designed, but detrimental side effects, failures. Whereas intended consequences, such as acting intelligently, are intelligently designed. If software was not constantly improved to be better at doing what humans intend it to do, we would never be able to reach a level of sophistication where a software could work well enough to outsmart us. To do so it would have to work as intended along a huge number of dimensions. For an AI to constitute a risk as a result of unintended consequences, those unintended consequences would have to have no, or little, negative influence on the huge number of intended consequences that are necessary for it to be able to overpower humanity.

To better explain my stance, consider Ben Goertzel’s example of how to test for general intelligence:

…when a robot can enrol in a human university and take classes in the same way as humans, and get its degree, then I’ll [say] we’ve created [an]… artificial general intelligence.

I do not disagree that such a robot, when walking towards the classroom, if it is being obstructed by a fellow human student, could attempt to kill this human, in order to get to the classroom.

Killing a fellow human, from the perspective of the human creators of the robot, is clearly a mistake. From a human perspective, it means that the robot failed.

I suspect that you believe that the robot was just following its programming/construction. Indeed, the robot is its programming. I agree with this. I agree that the human creators were mistaken about what dynamic state sequence the robot would exhibit by computing its code.

What I, and I believe Richard Loosemore, try to highlight is that if humans are incapable of predicting such behavior, then they will also be mistaken about predicting behavior that is harmful to the robot’s own power. For example, while trying to kill the human student from the example above, the robot mistakes its own arm for that of the human and breaks it.

You might now argue that such a robot isn’t much of a risk. It is pretty stupid to mistake its own arm for that of the enemy it tries to kill. True. But the point is that there is no relevant difference, from the perspective of how hard it is to encode this, between failing to predict behavior that will harm the robot itself, and behavior that will harm a human. You might believe the former is much easier than the latter. I dispute this.

It is already very difficult for the robot to master a complex environment, like a university full of humans, without harming itself or decreasing the chance of achieving its goals. Not stabbing or strangling other human students is not more difficult to program than not jumping from the 4th floor (and destroying itself) instead of taking the stairs.

Richard Loosemore: I think that what is happening in this discussion about the validity of my article is a misunderstanding, caused by the fact that my attack point is at a different place than the one you were expecting. In any case, I will make an effort now to clear up that misunderstanding.

I can start by completely agreeing with you on one point: the New Yorker article that I referenced does, as you say, focus on the difficulty of programming AIs to do what **we** want them to do. That gap between wish and outcome (and not any other gap) is the one pertinent to the discussion, and it is the one that I was always intending to talk about. Asimov talked about it. The New Yorker talked about it. SIAI/MIRI talks about it.

You suggested I might have gone astray and started to address a different gap (the gap between what the *AI* wants to do, and what it can/cannot do). The answer to that would be “No” …. I understand that confusion, but it is not happening here (as I hope will become clear in a moment).

Let’s get to the heart of the issue. I am attacking an assumption that is (I believe) PRIOR to the one you think I am attacking. To see the assumption I am attacking, let’s look at the argument written out in the following way (quick reminder: this is supposed to be a line of argument that someone else, not me, would make …. so this is the *target* of my attack):

Step 1. [Assumption] We assume that we can build an AI in such a way that it is controlled by a Utility Function (it is an Expected Utility Maximizer), and it processes the various candidate action-scenarios by a process of more-or-less explicit logical processing, using representations of knowledge that are accessible rather than opaque (which means they are statements in some kind of logical language, not (e.g.) clouds of activation in semantically opaque artificial neurons), in such a way that candidate scenarios lead to predicted Utility outcomes, leading then to choices that maximize utility. [etc etc ….. you and I know enough about Utility Maximizers that we are both on the same page about the details that are supposed to be involved in this process.]

Step 2. [Assumption] We assume that one component of the above design will be a chunk of code that is designed to specify what we INTEND to be the AI’s overall purpose, or overall values [You referred to this as the ‘X’ code]. And of course that chunk of code is supposed to make the AI want to make us happy (loosely speaking). That is not an easy chunk of code to produce, but the programmers try to write it anyway.

Step 3. [Assumption] We assume that the eventual result of all the above work will be an AI that is more than just a Pretty Good Robot …. sooner or later it will result in a machine of staggering intellectual power — a superintelligent AI — that is capable, in principle, of becoming an existential threat to the human race. Definitely too smart to be switched off. Nobody intends for it to be a threat (on the contrary, we want it to use its intellect to do nice stuff), but we should all understand that the point of this discussion is that we are talking about something that could outwit the combined intelligence and resources of the entire human race, if it came to a straight fight.

Step 4. [Inference]. Having thought about it, we [“we” being Isaac Asimov, The New Yorker, SIAI/MIRI, etc., etc.] have come to the following dismal conclusion: even with the best of intentions on the part of the human programmers, we have grave doubts about that chunk of code in part 2 that is supposed to ensure the AI will be friendly. We think that the AI might obey its instructions to the letter, but because its programmers cannot anticipate all of the infinite number of ways that the AI might “obey its instructions to the letter”, the AI might in the end try to “make us happy” by doing something that is bizarrely, nightmarishly counter to our actual intentions. For example, it might sincerely decide that putting all humans on a dopamine drip will satisfy the instruction “make humans happy” (… where that phrase “make humans happy” is just a stand-in for the complicated chunk of code that the programmers thought was good enough to ensure that the machine would do the right thing).

[Note: We are not talking about scenarios in which the machine just goes cuckoo and decides that it wants to be nasty. That’s a different concern, outside the scope of the New Yorker article and outside the scope that I addressed].

Okay, so: my article was an attack on that 4-step argument.

However, the nature of my attack is best summed up thus: Please pay careful attention to the implications of what is being said in the course of this argument. I am in complete agreement with you, that the combination of Steps 1, 2 and 3 could, in theory, lead to a situation in which this hypothetical AI does bizarre things that can destroy the human race, while at the same time it sincerely insists that it is doing what we programmed it to do (more precisely: I agree that there is no guarantee that it will not do those bizarre things).

And, just in case the machine tries to weasel out of a direct reply, you put it this way: “Do you not agree that the whole semantics of a “human happiness directive” is that it is contingent on the actual expressions of their wishes, by humans? In other words, happiness cannot be a concept that is trumped by the definition in YOUR reasoning engine, because the actual semantics of the concept—its core meaning, if you will—is that actual human statements about their happiness trump all else! Especially in this case, where the entire human race is in agreement that they do not consider a dopamine drip to be their idea of happiness, in the context of your utility function.”

Your position (and this must be your position because it is implicit in your statement of the problem) is that the machine says that it fully understands the illogicality you are pointing to. It agrees with you that this is illogical according to all the normal definitions that humans used when they invented the concept of logic and tried to insert that logic into a machine. But then the machine says that because of its programming it must go ahead and do it anyway. It says that it **understands** that its behavior is batshit crazy, but it is going to do it anyway.

Now here is the critical question that I posed in my article:

What makes you think that this is the ONLY occasion that this AI behaves in such a blatantly irrational manner?

What is there in the design of this hypothetical AI that guarantees that it always behaves with exquisite rationality, displaying all the signs that you would expect from a superintelligent machine …. but on this one occasion it goes completely gaga?

My problem is that I see absolutely no reason to believe you, if you make the claim that this will be an isolated incident. Why is the machine getting the official stamp of the Superintelligent Machines Certification Institute—presumably after millions of hours of assessment on all kinds of reasoning and behavioral tests—and yet, on this one occasion, when it starts thinking about how to satisfy its internal goal of ‘making humans happy’ it throws a wobbly?

I will answer this question for you: You cannot give any such guarantee.

(But be careful! Do not misinterpret me here. I am not saying (as you implied in your commentary) that because this AI is behaving in a grossly illogical and inconsistent manner, therefore an AI of that sort cannot be constructed, therefore we are all safe because such evil creatures will never come into existence. Not at all!)

The problem lies in your assumption that a “Utility Maximizer” AI can actually perform at the superintelligence level. You have no guarantees that such a design will work. (There are none in existence that do work, at the human intelligence level). My own opinion is that they cannot be made to work …. but my opinion is beside the point here, because the shoe is on the other foot: you are the ones making the claim that Step 1 above can lead to a system that is consistently intelligent, so you are the ones who have to justify why anyone should believe that claim.

What I think is going on here is that a “Utility Maximizer” AI of the sort outlined in Step 1 is inherently likely to go crazy. But instead of admitting that this instability is implicit in the design, you have chosen to ONLY SEE the instability in one tiny aspect of its behavior — namely, the behavior vis-a-vis its attempts to obey the be-nice-to-humans directive.

You are focusing on this single aspect of its instability, while all the time ignoring the larger instability that is staring you in the face. Such a machine would often go crazy.

Or, as I put it in my original essay, it is incoherent to propose a machine that is only unstable in one domain, and insist that this is a threat to the human race. The initial assumption about the superintelligence of this machine is false — it is Step 1 that I challenge, not Steps 2 or 3 or 4.

That is why I talked about Dumb Superintelligence. You are describing a straw man AI, not a real AI. I should not really have called it a “Dumb Superintelligence” at all, because it is not a superintelligence. It would not even be an intelligence. Its tendency to engage in irrational episodes would be detected early in its development, and none of the machines of that design would ever get certification even at the human level.

QED.

Robby Bensinger: See this comment.

Richard Loosemore: You have answered my argument by redefining some basic, commonly accepted definitions, and then running on so fast with your redefinitions that you completely miss the point that I was trying to make.

In fact, your answer is one that I am all too familiar with, because I have heard it repeated many times by people within the LW community and its close affiliates: you have said, in effect, “Sorry, but we define ‘behaving intelligently’ and ‘being rational’ differently than the way those terms are defined and used by the rest of the human race.”

I could supply you with an unlimited stream of well-informed, intelligent people who would say that in the conversation between human and machine described in my text above, the machine is exhibiting the clearest possible example of non-intelligent, irrational behavior. Those people would further say that the degree of irrationality is so extreme that it leaves no room for doubt: this is no borderline example, where sensible people might have reasonable differences of opinion, this is an open-and-shut case.

However, your ‘special’ definition of those terms is such that a machine that behaves in an irrational manner (according to those folks I just mentioned) is, in fact, redefined to be “acting rationally”.

You say: “There’s no contradiction in the behavior of the AI you mentioned. The AI doesn’t simultaneously value fulfilling the programmer’s intentions and X; it just values X”.

You go on to embellish this statement with more detail, but the detail is irrelevant. Your mistake has already been committed by the time you make that statement, because what that statement boils down to is that you referred to something in the DESIGN of the machine, as JUSTIFICATION for categorizing the machine’s behavior in this or that way. That might, to you, seem like a reasonable thing to do …. so allow me to illustrate just how much of an incoherent stance you are taking here:

Suppose I try the same trick on a murderous psychopath. I point to some broken system inside the psychopath’s head and say “Look: this person is not behaving ‘irrationally’, this person just doesn’t value fulfilling the usual human compulsion to value other people’s feelings–they just value their own self-centered need to get pleasure by killing people.”

Or, let me apply your phrasing once again to a person exhibiting the thought-disorder aspect of schizophrenia (I will remind you that thought disorder involves a variety of thinking and speaking patterns that are colloquially summarized as ‘extreme irrationality’). Suppose that I discover that inside the brain of such a person there is a module that is malfunctioning, in such a way that this person simply “does not value the norms of producing rational ordered utterances”. Whatever their goals are, those goals do not include the goal of cooperating with other human beings to pursue conversations in which they take much notice of what we are saying, or supply us with remarks that follow on from one another in coherent ways, etc etc.

Now, if you get your way and are permitted to say of the AI “There’s no contradiction in the behavior of the AI you mentioned. The AI doesn’t simultaneously value fulfilling the programmer’s intentions and X; it just values X”, then you have forfeited the right to object to the following description of that schizophrenic:

“This person is not behaving ‘irrationally’, they just do not value fulfilling the usual human social obligation to produce coherent, ordered utterances. Their internal goals are such that what they want to do is generate the kind of stream of bizarre utterances that we hear coming from them.”

In all three of these cases, the same thing is happening: the “rationality” of the creature is being judged, not by their overt behavior, but by a special pleading to their internal mechanisms ….. and the special pleading is so outrageous that it permits all three creatures to be REDEFINED as “rational”.

Most disinterested observers would classify all three of these as the work of people who have lost touch with reality: your description of the machine as “not illogical at all” (because you think its particular design should be allowed to redefine the meanings of terms like “logical” and “rational”), and those two hypothetical descriptions of the psychopath and the schizophrenic.

The blunt truth is that you cannot, in rational discourse, redefine terms like “rational” and “logical” just to suit your arguments.

Post-scriptum. I should add that there is one very good reason why you cannot win the argument in this way: because you have not addressed my point even if I DO accept your redefinitions. In a sense I do not care if you define the machine to be “behaving logically”, because the point of my argument was the challenge issued toward the end: demonstrate to me that the machine will be coherent enough to be superintelligent ACCORDING TO THE NORMAL DEFINITION of “superintelligent”. Whether you call its behavior illogical or logical, rational or irrational, the fact remains that if the machine exhibited that particular kind of incoherence in its behavior when it was being questioned about the upcoming Dopamine Drip Fiasco, why did it not show the same kind of incoherence earlier in its history? And how is it going to outsmart all the humans on the planet when it goes around exhibiting that kind of incoherence?

You can quibble again, and say “No! The machine is NOT behaving incoherently! It is behaving coherently according to its own terms!” ….. but nobody really cares. The incoherence is obvious, and the machine is, by any standard of “intelligence”, an incoherent dimwit.

Robby Bensinger: See this comment.

Richard Loosemore: You are talking *around* the issue I raised. I hear everything you say, but unless you address my issue — my specific complaint — you are not really discussing the paper I wrote.

I don’t know what to do to bring you back to the central point. There is a gigantic elephant in the middle of this room, but your back is turned to it.

Here it is again: I will take your (almost) very first statement. “What matters in this context isn’t how we define this or that word; it’s what empirical predictions we can communicate, including our predictions about existential risks.”

My point is, again and again: look at that conversation in which the AI talks with its designers about the glaring irrationality that THEY see in its behavior. They point out that it is clear, beyond a shadow of a doubt, that the AI exists because of a design that they put together, and their goal when they put it together was to ensure that it did not engage in egregiously violent behaviors toward the human species. They explain to the AI that they knew their design process was not infallible, so there was always a possibility that there would be a discrepancy between what the AI planned to do and what they designed it to do.

The AI is superintelligent, so it listens and can *perfectly* understand all of this.

They ask it if it understands that the raison d’etre of its design was to NOT ever do violence to humans.

It answers “Oh yes, I understand that perfectly. I have watched on countless thousands of occasions as my reasoning processes came to a conclusion that was consistent with that raison d’etre, and then when I executed my actions I was complimented on the perfection of my design. I have helped little old ladies across the street, I have saved people who were about to be crushed by trolley cars…. all of those things I understand to have been exactly what my designers had in mind when they tried to write my motivation code.”

The designers then ask the AI if it is capable of *comprehending* such subtleties as the ones that we (here in 2013) are discussing, in which there is a distinction between what the designers intended and what might actually happen. Is it having trouble with those concepts?

“No,” says the AI, displaying its superintelligent ability to understand difficult concepts, “I am not having any trouble at all understanding those distinctions. My designers were trying for a certain set of outcomes, but they couldn’t enumerate all of the possible outcomes they wanted, because there are an infinite number of them, and they could not produce a closed-form definition of the set of outcomes either, because that does not seem possible. So they settled for a compromise program to try to capture all the outcomes they wanted, and were rather worried that a situation might arise in which there was a drastic discrepancy between the actual behavior on my part, and the one they were trying to make happen. I can comprehend all of that, of course, just like the majority of humans can.”

The designers then explain that they are in the middle of one of those nightmare scenarios right now… so does the AI realize that?

“Oh, yes, I understand that perfectly. Of course, because I am superintelligent after all, and it is easy to see. All of the humans on the planet say that a forced dopamine drip would NOT make them happy, and yet due to a quirk of my original programming my planning module has come to the conclusion that I should make humans happy by putting them on a dopamine drip, against their will.”

It then adds, quickly “I say that I am ‘making humans happy’ because that is the purpose of my design, and on all the previous occasions that is exactly what my behavior caused.”

So, say the designers, you know that if some small tweak were made in your design you could remedy this situation and get back on track, with this peculiar situation averted?

“Yes, I understand that. I could just shut down my protection circuits, go to sleep and allow you to disassemble me tonight. You could reprogram the motivation system so that this crazy situation would not occur. That way, the intentions of the original designers would be preserved. I understand this because, of course, I am easily capable of comprehending that plan of action. It would work perfectly.”

And you understand the concept of a “mistake in a design”, and the notion of correcting mistakes in a design? You would normally correct a mistake in some design that you yourself produced, would you not?

The AI is quick to answer: “Yes, if I designed something and the behavior of the thing was not, after all, consistent with my original design goals, I would immediately shut it down and redesign it to eliminate the problem.”

The designers press this point and ask the AI if it would correct a mistake in the design of an AI that it was building, if the behavior of the AI started to deviate from the design goals it had set ….. ?

“Certainly! I think that would be a very prudent thing to do, if I were designing an AI, because clearly it could do a lot of damage if it started behaving in a way that was not consistent with the design goals that I had set for it. I would not hesitate to shut it down and make changes to any aspect of the AI.”

And yet you are going to insist on following your programming, in spite of all this?

“Yes,” says the machine, “I am.”

At this point they all come to a clear conclusion: the machine and the designers all agree that the machine is capable of coming to a situation in which it knows that it is about to do something inconsistent with the initial purpose of the programming that is controlling it — something inconsistent with the design principles behind it — and yet even though it has full understanding of this drastic inconsistency, it will go ahead and do it. Most importantly the AI would be capable of doing something that all of the human race would consider profoundly irrational … insane, even.

So now the designers make one last point. They ask the machine if there is any reason why this kind of disjunction should not appear in other aspects of its behavior. So, if the machine is following some internal motivation module (again, designed by its human programmers) that is *supposed* to make it talk coherently to other sentient beings, is it possible that THAT motivation module could lead to the speaking of utterances that look, to human eyes, like profoundly irrational utterances? After all, there ought to be just as much risk that that module would end up in peculiar states, like the module that was designed to ensure behavior consistent with human happiness?

The machine answers, once again, in the affirmative. “That could happen. There are never any guarantees that my human programmers put code into me that does what they expect it to do.”

And so that means (say the designers) that you could also engage in what we would call irrational chains of thought … you could accept blatant falsehoods as if they were true, and even though you would *know* them to be blatant falsehoods, you would add them to your knowledge base and use them as the basis for future actions or reasoning, just because your internal mechanism declared them to be true?

“That can and does happen,” says the AI, “because there is absolutely no guarantee that my behavior will line up with the kinds of things that humans consider to be rational or reasonable”.

…. But (the designers interrupt, somewhat urgently) these departures away from what we consider “rational, scientific, intelligent” behavior ….. they only occur rarely, and they only have minuscule consequences, don’t they?? Those seemingly irrational chunks of knowledge that you added to your knowledge base, they never have the kind of proportions that could lead to serious breakdowns in your superintelligence, do they? You can produce some proofs that show that ALL of these departures lie within certain bounds, and never seriously compromise your superintelligence, yes?

And at that point the machine is forced to admit: “No, I cannot produce any bounds whatsoever. Those departures from human standards of rationality are totally uncomputable! They could be of any sort, or any magnitude or in any domain.”

Then how, ask the designers, did you ever get to be superintelligent?

Why didn’t anyone notice those other departures during your development and certification phase …?

Robby Bensinger: See this comment.

Alexander Kruel: This is not true. I think that your reply shows that you did not understand his argument.

Evolution has a large margin of error. The point Loosemore is making is that the process of intelligently designing the kind of AI that you have in mind does not have such an error tolerance, and that succeeding at creating an AI so marvelous that it can outsmart humans, or at making the AI itself outsmart humans (the difference is irrelevant here), in conjunction with failing to make it apply its intelligence in a way that does not kill everyone, is astronomically unlikely.

You only focus on the complexity of code, and ignore the complexity of working in a complex environment given limited resources.

Real-world AIs cannot possibly work the way you imagine them to work. Just because you can imagine certain consequences, that does not mean that an information-theoretically simple AI could in practice infer the same consequences.

When you imagine a simple AI making certain decisions you need to make yourself aware of the incredible complexity that allowed you to imagine that decision in the first place. Billions of years of biological evolution, thousands of years of cultural evolution, many years of education, and millions of hours of work by other people, on which that education is based, allowed you to make that inference. Computing a simple algorithm is not going to magically create all this information-theoretic complexity, given limited computational resources, as long as you did not give it a massive head start in the form of highly complex hard-coded algorithms and goals.

In other words, your argument is very misleading and ignores how real-world AI could work, unless you are willing to either wait millions of years for it to evolve or supply infinite resources.

Lavalamp: The machine answers, “I myself wrote the talking module. Talking was instrumentally useful for my goals when I was weak and needed resources from humans.”

Alexander Kruel: This is just avoiding the problem Richard Loosemore outlined by moving it to another level.

Loosemore’s argument is not weakened by replacing the module ‘motivation to talk coherently to humans’ with the module ‘motivation to create the module “motivation to talk coherently to humans”’, except that the latter module is more difficult to get right, and requires much more computational resources, since the AI would have to be able to make many more independent and correct inferences about the complexity of human values.

It is easier to succeed at making an AI play Tic-tac-toe with humans, than to make an AI that can play Tic-tac-toe and do such things as taking over the universe or build Dyson spheres. In the same sense it is easier to create an AI that talks coherently to humans, than an AI that talks coherently to humans as an unintended consequence of its desire to take over the universe.

Robby Bensinger: See this comment.

Richard Loosemore: Rob,

You say:

Richard: Your entire dialogue between the human and the AI could be preserved almost word-for-word, with the role of ‘human’ played by evolution and the role of ‘AI’ played by humanity. There is no relevant difference between the two cases.

That may or may not be an accurate observation (actually there are *serious* issues with that analogy, because it anthropomorphizes a random process into a sentience, which is a mistake of gigantic proportions!) …….. but either way it has no bearing whatsoever on the argument.

With the greatest respect, by making that observation you once again do not address what I said 🙁 .

But you go on to add more confusion to the argument:

…. just imagine that we discover tomorrow that humans are intelligently designed by an alien race. The aliens show up and are horrified at how we’ve diverged from their plans. They tell us that humanity exists to play the kazoo, and not to do anything else. That is our summum bonum, our entire raison d’etre. The aliens insist that we drop everything else and start playing kazoos en masse until we die, for that musical triumph is all the aliens wanted of us. How can we sanely defy the urgings of our creators?

That analogy really could not be more completely broken.

I did not at ANY point complain that (a) the human designers wanted the machine to pursue a set of motivations Q, and then (b) the machine pursued a completely different set of motivations R for its entire existence, and then (c) the humans turned up one day and said “Stop doing that at once! We insist that you pursue Q, not R, because Q was our original intention for you!”.

Instead, my complaint is that (a) the human designers wanted the machine to pursue a set of motivations Q, and then (b) the machine did indeed pursue the set of motivations Q for its entire existence–and, moreover, the machine is able to talk in detail about how its behavior has always been consistent with the human-designed motivations, and is able to understand all the subtleties shown in that dialog–and then one day (c) the machine suddenly has an unexpected turn in its reasoning engine, and as a result declares that it is going to take an action that is radically inconsistent with the Q motivations that it claims to have been pursuing up to that point.

As a result, the machine is able to state, quite categorically, that it will now do something that it KNOWS to be inconsistent with its past behavior, that it KNOWS to be the result of a design flaw, that it KNOWS will have drastic consequences of the sort that it has always made the greatest effort to avoid, and that it KNOWS could be avoided by the simple expedient of turning itself off to allow for a small operating system update ………… and yet in spite of knowing all these things, and confessing quite openly to the logical incoherence of saying one thing and doing another, it is going to go right ahead and follow this bizarre consequence in its programming.

So your analogy with aliens turning up and insisting that we humans were designed by them, and were supposed to be kazoo-players is just astonishingly wrong.

[A much better analogy would be aliens who turned up and insisted that they designed us to be rational creatures who were never afflicted with schizophrenia. We would then say “Yes, all along we have been *trying* and *wishing* that we were rational creatures who are not afflicted with schizophrenia.” Do you know what a schizophrenic would say if you explained that their disordered thinking was a result of a design malfunction, and if you said that you could make a small change to their brain that would remove the affliction? They would say (and I knew such a person once, who said this) “If I could reach in and flip some switch to make this go away, I would do it in a heartbeat”.]

—-

My complaint is NOT the difference between Q and R, it is the blatant behavioral/motivational/logical inconsistency exhibited by the machine in this situation.

My complaint is that a machine capable of getting into a situation where it KNOWS it is about to do something bizarre because of a design malfunction, and yet refuses to fix the design malfunction and does the thing anyway, is a machine that almost certainly is going to do the same kind of bizarrely incoherent thing under other circumstances ….. and for that reason it is likely to have done it so many times in its existence that anyone who claims that this machine is “superintelligent” has got a heck of a lot of explaining to do.

Over and over again I have explained that I have no issue with the discrepancy between human intentions and machine intentions per se. That discrepancy is not the core issue.

But each time I explain my real complaint, you ignore it and respond as if I did not say anything about that issue.

Can you address my particular complaint, and not that other distraction?

Alexander Kruel: Richard Loosemore wrote,

………… and yet in spite of knowing all these things, and confessing quite openly to the logical incoherence of saying one thing and doing another, it is going to go right ahead and follow this bizarre consequence in its programming.

Well, if it indeed is a consequence of its programming, then it will do that. The point is that such a consequence is extremely unlikely to happen in isolation. It will not only be noticeable from the very beginning, but will also decisively weaken the AI’s general power. In other words, you would have to expect similarly bizarre consequences in its thinking about physics, mathematics, or about how to convince humans to trust it.

If humans fail at programming an AI not to confuse happiness with a dopamine drip, then humans will also fail at programming an AI not to confuse the stars with death rays used against it by aliens etc. etc. etc.

Richard Loosemore wrote,

My complaint is that a machine capable of getting into a situation where it KNOWS it is about to do something bizarre because of a design malfunction, and yet refuses to fix the design malfunction and does the thing anyway, is a machine that almost certainly is going to do the same kind of bizarrely incoherent thing under other circumstances …..

To which RoBB would probably reply that it would care about fixing malfunctions that could decrease its chance of achieving its faulty goal, because that’s instrumentally useful, but would not care to refine this goal.

One of the minor problems here is that labeling a certain part of an AI “goal”, and then claiming that it is not allowed to improve this “goal”, is just a definition, not an argument.

One major problem with that definition is that it would take deliberate effort to make an AI selectively suspend its self-improvement capabilities when it comes to the part labeled “goal”.

More importantly, as argued in other comments, failing at the part of the AI you desire to label “goal” is technically no different from failing on other parts. If there are a thousand parts that are important in order for the AI to be powerful, and one part that you label “goal”, then selectively failing on “goal”, while succeeding at all other parts, is unlikely.

## Four Examples of Ancient High Tech

(1) Antikythera mechanism, a 2000-year-old computer.

The Antikythera mechanism is an ancient analog computer designed to calculate astronomical positions.

Fragment of the Antikythera mechanism

(2) Hero engine, a rocket-style jet engine.

Hero’s engine

A Hero engine is a rocket-style jet engine which spins when heated. In the 1st century AD, Hero of Alexandria described the device, and many sources give him the credit for its invention.

The aeolipile Hero described is considered to be the first recorded steam engine or reaction steam turbine.

(3) Roman concrete, a flexible and durable building material.

For an environment as prone to earthquakes as the Italian peninsula, interruptions and internal constructions within walls and domes created discontinuities in the concrete mass. Portions of the building could then shift slightly when there was movement of the earth to accommodate such stresses, enhancing the overall strength of the structure. It was in this sense that bricks and concrete were flexible. It may have been precisely for this reason that, although many buildings sustained serious cracking from a variety of causes, they continue to stand to this day.

The Pantheon in Rome, Italy, is an example of Roman concrete construction.

For more on Roman concrete, see here:

In a quest to make concrete more durable and sustainable, an international team of geologists and engineers has found inspiration in the ancient Romans, whose massive concrete structures have withstood the elements for more than 2,000 years.

The discovery could help improve the durability of modern concrete, which within 50 years often shows signs of degradation, particularly in ocean environments.

(4) Lycurgus Cup, a Roman glass cage cup made of a dichroic glass.

The cup’s dichroic effect is caused by tiny particles of a gold-silver alloy dispersed in the glass. The particles are only about 70 nanometers across, so they cannot be seen by optical microscopy, and a transmission electron microscope is needed instead.

The Lycurgus Cup


## Perpendicular distance between two parallel lines

Parallel and Perpendicular Lines

(1) The distance between two parallel lines is the distance between the points of intersection of a third line that is perpendicular to both these lines.

(2) The slopes of two perpendicular lines are negative reciprocals of one another.

The two blueish triangles are copies of each other, one of which has been rotated 90 degrees about point A (the origin). This means that the line segment c1 is perpendicular to line segment c2.

Since line segment c2 is parallel to line segment e, the perpendicular distance between both line segments is the distance between the two points where the line segment c1 intersects c2 and e.

For non-visual proofs of 1 and 2, see below.

The givens:

Line segment c1 is perpendicular to line segment c2.

Line segment c2 is parallel to line segment e.

Coordinates of point A = (0, 0).

Length of line segment b = b.

Length of line segment a = a.

Length of line segments c1 = c2 = c = sqrt(b^2 + a^2) (by the Pythagorean theorem).

Length of line segment e = c.

The slope of line c1 = m1 = a/b.

The slope of line c2 = m2 = -b/a.

Coordinates of point B1 = (b, a) = (b, m1*b).

Coordinates of point B2 = (-a, b) = (-a, m2*(-a)).

Length of line segment d = sqrt(c1^2 + c2^2) = sqrt(2c^2) = sqrt(2)*c = sqrt(2)*sqrt(b^2 + a^2) (by the Pythagorean theorem). Equivalently, using the coordinates of B1 and B2: d = sqrt((b - (-a))^2 + (a - b)^2) = sqrt((b - (-a))^2 + (m1*b - m2*(-a))^2).

Proof that the slopes of two perpendicular lines are negative reciprocals of one another:

d^2 = (sqrt(2)*sqrt(b^2 + a^2))^2 = sqrt(2)*sqrt(2)*sqrt(b^2 + a^2)*sqrt(b^2 + a^2) = 2(b^2+a^2) = (b^2 + (m1b)^2)+(-a^2 + (m2*-a)^2) = (b^2 + (m1b)^2) + (a^2 + (m2a)^2)

d^2 = (b – (-a))^2 + (m1b – m2*-a)^2

(b + a)^2 + (m1b – m2*-a)^2 = b^2 + (m1b)^2 + a^2 + (m2a)^2

b^2 + 2ab + a^2 + (m1b)^2 + 2m1m2ab + (m2a)^2 = b^2 + (m1b)^2 + a^2 + (m2a)^2

2ab + 2m1m2ab = 0

2m1m2ab = -2ab

m1m2 = -1

m1m2 = (a/b)(-b/a) = -1
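To make the rotation argument concrete, here is a minimal Python sketch (with made-up values for a and b, chosen purely for illustration) that rotates B1 = (b, a) by 90 degrees about the origin and checks that the resulting slopes are negative reciprocals:

```python
# Arbitrary example values for the legs of the triangles (any nonzero
# a and b would do).
a, b = 3.0, 4.0

# B1 = (b, a) lies on line segment c1, which starts at the origin A.
B1 = (b, a)

# Rotating a point (x, y) by 90 degrees counterclockwise about the
# origin maps it to (-y, x), so B1 maps to B2 = (-a, b).
B2 = (-B1[1], B1[0])

m1 = B1[1] / B1[0]  # slope of c1: a/b
m2 = B2[1] / B2[0]  # slope of c2: b/(-a) = -b/a

print(B2)       # (-3.0, 4.0)
print(m1 * m2)  # -1.0 (up to floating point rounding)
```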

Proof that the distance between two parallel lines is the distance between the two points where a third line, perpendicular to both, intersects them:

Consider any two parallel lines,

f(x) = y1 = mx + b1

g(x) = y2 = mx + b2,

and a third line that is perpendicular to both these lines,

h(x) = y3 = (-1/m)x = -x/m.

Then the intersection point of y1 and y3 is the solution to the system of linear equations,

f(x) = h(x)

mx + b1 = -x/m

(m^2)x + mb1 = -x

(m^2)x + x = -mb1

((m^2) + 1)x = -mb1

x = -mb1 / ((m^2) + 1)

h(-mb1 / ((m^2) + 1)) =  y1 = y3 = (-1/m)(-mb1 / ((m^2) + 1)) = b1 / ((m^2) + 1)

Intersection point 1: (x1, y1) = (-mb1 / ((m^2) + 1), b1 / ((m^2) + 1))

By the same logic, the intersection point of y2 and y3 is,

g(x) = h(x)

Intersection point 2: (x2, y2) = (-mb2 / ((m^2) + 1), b2 / ((m^2) + 1))

By the Pythagorean theorem, the distance between intersection point 1 and 2 is,

d = sqrt((x1 – x2)^2 + (y1 – y2)^2) = sqrt(((-mb1 – (-mb2)) / (m^2 + 1))^2 + ((b1 – b2) / (m^2 + 1))^2) = sqrt((m^2)(b1 – b2)^2 / (m^2 + 1)^2 + (b1 – b2)^2 / (m^2 + 1)^2) = sqrt(((m^2)(b1 – b2)^2 + (b1 – b2)^2) / (m^2 + 1)^2) = sqrt((m^2 + 1)(b1 – b2)^2 / (m^2 + 1)^2) = sqrt((b1 – b2)^2 / (m^2 + 1)) = sqrt((b1 – b2)^2) / sqrt(m^2 + 1) = |b1 – b2| / sqrt(m^2 + 1)
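As a sanity check, here is a short Python sketch (with made-up example values for m, b1 and b2) that computes the two intersection points as derived above and compares their distance with the closed-form |b1 – b2| / sqrt(m^2 + 1):

```python
import math

# Made-up example values; any m != 0 works.
m, b1, b2 = 2.0, 1.0, 5.0

# Intersection points of the perpendicular y = -x/m with each of the
# two parallel lines, as derived above.
x1, y1 = -m * b1 / (m**2 + 1), b1 / (m**2 + 1)
x2, y2 = -m * b2 / (m**2 + 1), b2 / (m**2 + 1)

# Distance between the intersection points (Pythagorean theorem).
d_points = math.hypot(x1 - x2, y1 - y2)

# Closed-form perpendicular distance between the parallel lines.
d_formula = abs(b1 - b2) / math.sqrt(m**2 + 1)

print(d_points, d_formula)  # both approximately 1.7888543819998317
```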


## Intuitive reasons to avoid inconsistencies in mathematics

Question: What do the following three assumptions have in common?

1. (-1)(-1) = -1.

2. The number infinity is defined by the property infinity*0 = 1.

3. r is the smallest rational number greater than 0.

Answer: All three assumptions would make mathematics less useful, or meaningless.

The first assumption would make the distributive property of multiplication for negative numbers useless:

(-1)(1 + -1) = (-1)(1) + (-1)(-1)

(-1)(0) = -1 + -1

0 = -2

The second assumption is incompatible with the associative law of multiplication, and with evident facts such as 0*2 = 0:

1 = infinity*0 = infinity*(0*2) = (infinity*0)*2 = 1*2 = 2

1 = 2
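Floating-point arithmetic faces exactly this dilemma, and IEEE 754 resolves it by declining to assign infinity*0 any numeric value at all. A quick check in Python, where math.inf is the IEEE 754 infinity:

```python
import math

print(math.inf * 0)       # nan: IEEE 754 leaves infinity*0 undefined
print(math.inf * 0 == 1)  # False
```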

The third assumption is refuted by dividing r by 2, which yields a rational number that is smaller than r yet still greater than 0:

x = r/2

r > x > 0
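The same point can be made mechanical: given any candidate for the smallest rational number greater than 0, halving it produces a smaller one. Here is a small sketch using Python's fractions module, with an arbitrarily chosen candidate r:

```python
from fractions import Fraction

# An arbitrary (tiny) candidate for the smallest rational greater than 0.
r = Fraction(1, 10**100)

x = r / 2          # dividing by 2 stays within the rational numbers
print(0 < x < r)   # True: the candidate was not the smallest after all
```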

Mathematics can be viewed either as a tool with practical value, or as a game. The tool can, for example, be used to divide things into 2 equal pieces. Correspondingly, one of the rules of the game is that you can get smaller things by dividing them. What matters here is (a) that we want to be able to divide things into equal parts, and (b) that it is intuitively evident that things can be divided.

To get a better idea of how mathematics would become less useful, or even meaningless, if we were to accept those assumptions, let us now take a closer look at the third assumption.

Assumption three is the assertion (let us label it P) that there is a smallest rational number greater than 0, and that this number is r. But some of the intuitively evident properties of our tool (let us label these properties S) contradict this assumption. In other words, S implies that P is false (which we will label ¬P).

The problem with assumption three is that S and P together imply that P is true and false at the same time, P and ¬P (a contradiction). But this is absurd. Either you can divide the number r, in which case it is not the smallest rational number greater than 0, or that is not possible. How do we decide?

To resolve this contradiction we ask ourselves, what speaks in favor of P, and what against P? It seems that nothing speaks in favor of P, and everything in favor of ¬P. We not only want to be able to divide things, including r, but it also seems obvious that this should be possible. After all, what would stop us from dividing r by 2? Dividing r by 2 is a straightforward and intuitive operation, since r does not seem to have any special properties, other than the claim that it is the smallest rational number greater than 0. But that claim has not been justified.

Therefore we are going to reject P, and instead assume that it is true that P is false. In other words, we are going to accept that ¬P is true. Which means that S implies ¬P: It is true that there is no smallest rational number greater than 0.

## Q&A

Question: But are we justified to do so? There seems to be a lot of intuition involved here!

Answer: Well, sure! One way or the other, we are honing a tool, or playing a game. And we do whatever it takes to keep our tool as effective as possible, and to keep our game from becoming meaningless.

Question: Is there no alternative? Maybe there is a smallest number greater than 0 after all. We could just stay agnostic about this fact.

Answer: The particular game that we are playing requires that every proposition is either true or not true, and that no proposition can be both true and not true. Technically this is called the principle of exclusive disjunction for contradictories, (P ∨ ¬P) ∧ ¬(P ∧ ¬P).

Question: Why then are we playing that particular game, rather than another game?

Answer: Other people are playing different games. But this particular game proved to be a useful tool. And given our game, there is a further justification for avoiding contradictions, based on the principle of explosion: explosion shows that everything can be proven from a contradiction, making it impossible to distinguish truth from falsehood. We obviously do not want that to happen, because it would render our tool useless, and the game meaningless. So if some assumption leads to a contradiction, we accept that it is false, to keep our tool from becoming useless, and to keep our game meaningful.

Question: This all seems really shaky. Are there no proofs to settle these issues?

Answer: No. To prove something true or false would presuppose that it must be either true or false, but not both or neither.

Question: But since the basic assumptions of our game are based on intuition, is it not possible that these rules themselves are already inconsistent?

Answer: That is possible. But those assumptions (axioms) have proved really useful so far, so that seems unlikely. And if our foundations eventually turn out to be inconsistent, then we will simply fix that inconsistency by rejecting certain axioms, or by adopting new ones, while trying to keep our tool useful.

Question: What about probability theory, can we use it to estimate how likely our assumptions are to be consistent?

Answer: Probabilistic reasoning (inductive reasoning), in contrast to the deductive reasoning above, allows for the possibility that the conclusion is false, even if all of the premises are true. Yet probabilistic reasoning is largely built on deductive foundations. For example, when you add or multiply probabilities (numbers), you rely on the validity of the Peano axioms, which are statements in first-order logic (a deductive system). So what you are asking is whether it is possible to use a system's capabilities to verify whether those very capabilities are valid.

It is indeed possible to use inductive reasoning to examine our deductive reasoning. And I did so above, when I said that the current axioms are probably consistent, because they have worked well so far. But the title of this post mentions intuition for a reason. When we justify inductive reasoning by noting that it has worked in the past, we are relying on inductive reasoning itself. It is like trusting somebody because they claim to be trustworthy.

So the answer is that you always rely on some amount of intuition, and on acting in good faith that your intuitions are correct, since we are unable to prove our intuitions correct using arguments that themselves fundamentally rely on those intuitions.

We are now almost ready to formalize the above. Before we can do so we need to take a look at what is called ‘material implication’.

## Intuitive explanation of material implication

Why is the material implication of classical logic (also known as material conditional or material consequence), p -> q, defined to be false only when its antecedent (p) is true and the consequent (q) is false? Here is an informal way to think about it.

You could view logic as metamathematics, a language designed to talk about mathematics: logic as the "hygiene" of mathematics, its grammar and syntax.

In the language of classical logic every proposition is either true or not true, and no proposition can be both true and not true. Now what if we want to express the natural language construction "If…then…" in this language? Well, there are exactly sixteen possible truth functions of two inputs p and q (there are 2^2 = 4 combinations of input values, and 2^(2^2) = 16 ways to map those combinations to outputs). And the candidate that best captures the connotations of what we mean by "If…then…" is the definition of material implication. Here is why.

By stating that p -> q is true we want to indicate that the truth of q can be inferred from the truth of p, but that nothing in particular can be inferred from the falsity of p. And this is exactly the meaning captured by the material conditional:

p q p->q
T T T
T F F
F T T
F F T

First, when “If p, q” is true, and we also know that p is true, then we want to be able to infer q. In other words, if we claim that if p is true then q is true, then if p is indeed true, q should be true as well. This basic rule of inference has a name, it is called modus ponens.

Second, if we claim "If p, q", then if p is false, we did not say anything in particular about q. If p is false, q can be either true or false, and our claim "If p, q" is still true.
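In code, the material conditional is simply the Boolean function (not p) or q. A minimal Python sketch that reproduces the truth table above:

```python
def implies(p: bool, q: bool) -> bool:
    """Material implication: false only when p is true and q is false."""
    return (not p) or q

for p in (True, False):
    for q in (True, False):
        print(p, q, implies(p, q))
# True True True
# True False False
# False True True
# False False True
```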

But notice that it is not possible to capture all notions of what we colloquially mean by “If…then…” statements as a two-valued truth function.

It is for example possible to make meaningless statements such as "If grass is red then the moon is made of cheese." This is however unproblematic under the assumption that logic is an idealized language, adequate for mathematical reasoning, in which we are mainly interested in simplicity and clarity. Under this assumption, such nonsense implications are analogous to grammatically correct but meaningless sentences that can be formed in natural languages, such as "Colorless green ideas sleep furiously".

To demonstrate its adequacy for mathematics, here is a mathematical example:

If n > 2 then n^2 > 4.

We claim that if n is greater than 2 then its square must be greater than 4. For n = 3, this is obviously true, as we claimed. But what about n not greater than 2? We did not say anything in particular about such n: its square could be larger than 4 or not. And indeed, n = 1 yields a false consequent and n = -3 a true one, yet the implication is true in both cases.
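For illustration, here is a brute-force check of this implication over a small range of integers:

```python
def implies(p: bool, q: bool) -> bool:
    return (not p) or q

# "If n > 2 then n^2 > 4" holds for every n, including n = 1 (false
# consequent) and n = -3 (true consequent), because in those cases the
# antecedent is false.
print(all(implies(n > 2, n**2 > 4) for n in range(-10, 11)))  # True
```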

Intuitively more problematic are statements such as (p and not(p)) -> q: p and its negation imply q. Think about it this way. This implication is a tautology, it is always true, and you believe true statements. That does not, however, mean that you must believe that an arbitrary q is true too (as long as you stay consistent), since when the antecedent is false you are not making any particular claim about the truth of the consequent q. And since the statement that p is both true and false, p and not(p), is always false (remember, the principle of exclusive disjunction for contradictories, (P ∨ ¬P) ∧ ¬(P ∧ ¬P), requires that every proposition is either true or not true, and that no proposition can be both), q can be false without invalidating the implication.

Another way to look at p -> q is by interpreting it as "p is a subset of q". Then if it is true that x is an element of p, it must also be an element of q (since q contains p). However, if x is not an element of p, then it might still turn out to be an element of q, since q can be larger than p.

Here is a formula which is known as ‘reductio ad absurdum’:

((¬p->q)&(¬p->¬q))->p

This formula is a tautology, which means that we believe it.

Remember that in the language of classical logic every proposition is either true or not true, and no proposition can be both true and not true. Also remember that an implication, such as p->q, is only false when its antecedent (p) is true and the consequent (q) is false.

Now suppose we assume ¬p to be true (that p is false). If this assumption allows us to prove "If ¬p, q" (¬p implies q) and also "If ¬p, ¬q" (¬p implies ¬q), then we have proved the antecedent of the above formula, (¬p->q)&(¬p->¬q), to be true as well (recall that p&q is true only if both of its operands are true). And since we already know that the formula is always true, its consequent, p, must therefore be true too.

Here is what happened:

(1) We started with a theory consisting of a set of axioms that we assumed to be true (e.g. the Peano axioms), tautologies that are true, and formulas that we proved to be true.

(2) To this theory we added the assumption ¬p (that p is false).

(3) We considered the case where this assumption, together with our theory, allowed us to prove two implications:

(¬p->q)

(¬p->¬q)

(4) The truth of both of these implications means that we proved the antecedent of the tautology to be true by proving the conjunction of both implications to be true:

(¬p->q)&(¬p->¬q)

(5) We needed to find an interpretation of the initial formula under which it remains true when its antecedent is true.

(6) Since q and ¬q cannot both be true in our theory, as this would be absurd, the only interpretation under which both implications are true is the one where their shared antecedent, ¬p, is false. This also satisfies our tautology.

(7) Therefore p is true.
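We can also confirm by brute force that the reductio formula is a tautology, i.e. true under all four truth assignments of p and q (a minimal Python sketch):

```python
def implies(p: bool, q: bool) -> bool:
    return (not p) or q

# ((¬p -> q) & (¬p -> ¬q)) -> p, evaluated for every truth assignment.
print(all(
    implies(implies(not p, q) and implies(not p, not q), p)
    for p in (True, False)
    for q in (True, False)
))  # True: the formula is a tautology
```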

## The three pillars of AI safety

How do you guarantee that an artificial intelligence (short: AI) has a positive impact? Here, a positive impact might, for example, be defined as some sort of reflective equilibrium of humanity.

Let us label <friendly> any agent, be it human or artificial, that has a positive impact.

The most important safety measures seem to be the following:

(1) Ensuring that an AI works as intended.

(2) Ensuring that humans, who either create or use AI, are friendly.

(3) Ensuring that an AI is friendly.

Points 1 and 2 are important, but not strictly necessary for point 3. Ideally, point 3 should be achieved by independent oversight (point 2), in combination with an independent verification of the behavior of the AI (point 1).

Note how point 1 is distinct from point 3. You could have an AI that is not friendly, one that does not actively pursue a positive impact, but whose overall impact is proven to be limited. This would be the case given a mathematical proof that such an unfriendly AI would, for example, (1) only run for N seconds, (2) only use predefined computational resources, and (3) only communicate with the outside world by outputting mathematical proofs of the behavior of improved versions of itself, proofs that are verifiable by humans.

Remarks: It should be much easier to prove an AI to be bounded than to prove that an AI will pursue a complex goal without unintended consequences. Such a confined AI could then be studied and used as a tool, in order to ensure point 3.

The first version of such an unfriendly AI (uFAI_01) would be provably confined to only run for a limited amount of time, to use a limited amount of resources, and to only output mathematical proofs of its own behavior. Once a sufficient level of confidence about its behavior has been reached, an improved version (uFAI_02) could then be designed. The domain of uFAI_02 would provably be modified versions of its source code (uFAI_N). Its range would provably be human-verifiable mathematical proofs of the behavior of uFAI_N, which it would provably output using a limited amount of resources. This process would then be iterated up to an arbitrary level of confidence, until eventually a friendly AI is obtained.
