Four arguments against AI risks

Related to: AI drives vs. practical research and the lack of specific decision procedures

It has recently been argued that criticisms have failed to puncture the arguments in favor of artificial general intelligence being an existential risk (AI risks, for short). I vehemently disagree with this assessment, and I further claim that the arguments in favor of AI risks have so far been thoroughly unconvincing.

It also seems that I am not the only one unconvinced by the existing arguments in favor of AI risks. How many experts in fields related to AI have been convinced by AI risk arguments? And if none of the relevant experts have ever been exposed to those arguments, then I have to conclude that the arguments have, either deliberately or carelessly, been shielded from real-world feedback loops.

Below I list four, not necessarily independent, caveats against AI risks that would be valid even if one were to accept (1) that AI will be invented soon enough to be decision-relevant at this point in time, (2) that the kind of uncontrollable recursive self-improvement imagined by AI risk advocates is even in principle possible, and (3) that the advantage of greater intelligence scales with the task of taking over the world in such a way that an AI would probably succeed in doing so, even given the lack of concrete scenarios for how that is supposed to happen.

(1) An AI is not pulled at random from mind design space. An AI is the result of a research and development process. A new generation of AIs needs to be better than other products at “Understand What Humans Mean” and “Do What Humans Mean” in order to survive the research phase and subsequent market pressure.

(2) An AI will only ever do what it has been explicitly programmed to do. An AI is not going to protect its utility function, acquire resources, or preemptively eliminate obstacles in an unbounded fashion, because it is not intrinsically rational to do so. What specifically constitutes rational, economic behavior is inseparable from an agent’s terminal goal. Since any terminal goal can be realized in an infinite number of ways, there are an infinite number of instrumental goals to choose from.

(3) Commercial, research, or military products are created with efficiency in mind. An AI that was prone to take unbounded actions given any terminal goal would either be fixed or abandoned during the early stages of research. If those early stages showed that natural language queries such as <What would you do if I asked you to minimize human suffering?> yielded results such as <I will kill all humans.>, then the AI would never reach a stage in which it was clever and trained enough to understand which results would satisfy its creators, and hence it could not deceive them.

(4) Unintended consequences are by definition not intended. They are not intelligently designed but are detrimental side effects, failures. Intended consequences, by contrast, such as an artificial intelligence acting intelligently, are intelligently designed. If software were not constantly improved to be better at doing what humans intend it to do, we would never reach a level of sophistication where a piece of software could work well enough to outsmart us. To do so it would have to work as intended along a huge number of dimensions. For an AI to constitute a risk as a result of unintended consequences, those unintended consequences would have to have little or no negative influence on the huge number of intended consequences that are necessary for it to be able to overpower humanity.

For more arguments and many more caveats, including elaborations of the arguments above, see here.
