AI vs. humanity and the lack of concrete scenarios

About this post: This post is a preliminary outline of how to analyze concrete scenarios in which an advanced artificial general intelligence attempts to transform Earth in a way that would be catastrophic for humanity.

Objective: Analyzing concrete scenarios helps us to (1) better estimate the probability of catastrophic side effects associated with the invention of an advanced artificial general intelligence and (2) design preemptive security measures.

Assumptions: For the purpose of this post I will assume an artificial general intelligence (AI for short) that is, very roughly, more intelligent than all of humanity combined and can process a greater amount of knowledge in a shorter period of time. I further assume that this agent cares about using all of the resources in the solar system for some goal unrelated to human values. Humans are treated as a mere resource.

The question: Does such an AI constitute an existential risk? In other words, will such an AI cause human extinction?

This question can obviously be answered in the affirmative if such an AI is likely to achieve its goal. But how do we determine the probability of such a scenario? We have to look carefully at how such an AI could manage to defeat humanity (i.e., take over the world).

We have to pay attention to a lot of factors if we want to determine concrete scenarios of how an AI could overpower humanity, and how probable each scenario is: (1) the AI’s fragility to human counterstrikes, (2) its dependency on the global infrastructure, and (3) its ability to take control of external resources and to keep hold of those resources while remaining productive.

One of the most important questions is how the advantage of greater intelligence scales with the task of taking over the world.

If we consider simple games such as Tic-tac-toe, we can definitely say that superhuman intelligence would not be instrumentally useful at beating humans: the game is solved, and perfect play by both sides ends in a draw. You also won’t get a practical advantage by throwing more computational resources at the travelling salesman problem and other problems in the same complexity class. The same might be said about steering a conversation in your favor by refining each sentence for thousands of years of subjective time: you will quickly hit diminishing returns, especially if you lack the data to predict your human opponents accurately.
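A minimal sketch (my own illustration, not from the original post) of why raw compute helps so little on problems of this kind: the cost of brute-force TSP grows factorially with the number of cities, so even a thousandfold increase in available operations barely increases the size of the instance you can solve.

```python
import math

def largest_brute_force_tsp(ops_budget):
    """Largest number of cities n for which enumerating all n! tour
    orderings fits within a budget of elementary operations.
    (A crude model: one operation per candidate tour.)"""
    n = 2
    while math.factorial(n) <= ops_budget:
        n += 1
    return n - 1

# A 1000x increase in compute buys only a handful of extra cities.
print(largest_brute_force_tsp(10**12))  # 14
print(largest_brute_force_tsp(10**15))  # 17
```

Smarter exact algorithms and heuristics do much better than brute force, but the qualitative point stands: on problems with this growth rate, multiplying hardware yields additive, not multiplicative, gains in problem size.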

Another example is due to Holden Karnofsky (source):

I find it somewhat helpful to analogize UFAI-human interactions to human-mosquito interactions. Humans are enormously more intelligent than mosquitoes; humans are good at predicting, manipulating, and destroying mosquitoes; humans do not value mosquitoes’ welfare; humans have other goals that mosquitoes interfere with; humans would like to see mosquitoes eradicated at least from certain parts of the planet. Yet humans haven’t accomplished such eradication…

Example scenario: Inventing new technologies to overpower humanity.

Consider that we are already at a point where we have to build billion dollar chip manufacturing facilities to run our mobile phones. We need to build huge particle accelerators to obtain new insights into the nature of reality. It takes a whole technological civilization to produce a modern smartphone.

In order to develop new technologies, an AI would somehow have to acquire large amounts of money. And even if it managed to do so, putting that money to use is not easy. You can’t “just” build huge companies with fake identities, or through a straw man, and thereby create revolutionary technologies. Running companies with real people takes a lot of real-world knowledge, interaction and feedback. But most importantly, it takes a lot of time. How likely is it that an AI could simply create a new Intel or Apple over a few years without its creators noticing anything?

Further questions:

  • What is the net advantage of eidetic memory if you consider that humans can use tools to effectively achieve the same?
  • What advantage does an AI have over humans, who can extend their working memory using their tools? We can make a certain kind of psychological distinction between things we can hold in our mind without tools and things we can’t. Does this mean there is some radical qualitative advantage (as opposed to the obvious speed advantages) in increasing the capacity of working memory? If an AI that we invented can hold a complex model in its mind, then we can also simulate such a model by making use of expert systems. Does being consciously aware of the model make a great difference in principle to what you can do with the model? If your brain had a 1000 times larger working memory, would you be better at problem solving? Probably. Would you be 1000 times better?
  • What is the advantage of more serial power? Do important problems related to taking over the world fall into complexity classes where throwing more computational resources at a problem does not lead to diminishing returns? Increases in raw processing power don’t translate into proportional increases in actual utility: your brand-new PC does not improve your life twice as much as the PC you bought 18 months ago.
  • What is the advantage of parallel computation? It is not clear how many tasks are easily decomposable into smaller operations. Consider that the U.S. has many more, and smarter, people than the Taliban. The bottom line is that the U.S. devotes a lot more output per man-hour to defeating a vastly inferior enemy, yet its advantage scales sublinearly.
  • What evidence do we have that most evolutionary designs are vastly less efficient than their technological counterparts? A lot of the apparent advantages of intelligent design are the result of making questionable comparisons, such as between birds and rockets. We haven’t been able to design anything that is nearly as efficient as natural flight. It is true that artificial flight can overall carry more weight. But just because a train full of hard disk drives has more bandwidth than your internet connection does not imply that someone with trains full of HDDs would be superior at data transfer.
  • What is the advantage of copying? The first artificial general intelligence might be a state-of-the-art technology running on state-of-the-art hardware, rather than one AI within a huge ecosystem of different AIs running on everything from smartphones to personal computers. To imagine that such an AI could simply copy itself would be similar to imagining that the Blue Brain Project’s simulation could simply be copied in such a way that not only would nobody notice the unexpected use of bandwidth and the surge in everyone’s CPU load, but that it would run effectively enough to make it worthwhile to take the risk of detection and the increased instability of running on highly volatile infrastructure that was never adapted to such software. Further, consider that a collective of humans and their tools can also think much faster than a single human being. Yet how great is the advantage? Sometimes a single human being can outsmart humanity; yet humanity can kill a single human being. What does this indicate about the relation between (1) greater intelligence, (2) faster thinking and (3) greater power?
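The sublinear scaling raised in the serial- and parallel-computation bullets above can be made concrete with Amdahl’s law, a standard model (my illustration, not from the original post) of how the non-parallelizable portion of a task caps the benefit of adding more processors:

```python
def amdahl_speedup(parallel_fraction, n_processors):
    """Amdahl's law: overall speedup of a task of which only
    `parallel_fraction` can be split across `n_processors`."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_processors)

# Even with a million processors, a task that is 95% parallelizable
# speeds up by at most a factor of 20: the serial 5% dominates.
print(round(amdahl_speedup(0.95, 10**6), 2))  # 20.0
```

However many resources an agent controls, any step that must happen serially (physical experiments, manufacturing, waiting on humans) bounds the overall speedup in the same way.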


  1. Mitchell Porter:

    I’m going to go off-topic again and state that if we are interested in likely concrete scenarios, we should be considering something like this: the national-security elite of great power “X” gains access to AI breakthroughs taken from the civilian world and then pushed over the edge by well-funded covert computer scientists. So the feedback loop of self-enhancement is not occurring solely within one single self-modifying program, but within a small but powerful network of organizations, whose value system is the “national interest” of one country…
