Narrow vs. General Artificial Intelligence

AI risk advocates predict that artificial general intelligence constitutes an existential risk.

Below is a comparison chart that I believe reflects what AI risk advocates expect: how a general artificial intelligence will behave, compared to a narrow artificial intelligence, when given the same task.


Comparison Chart: Narrow vs. General Artificial Intelligence

(According to AI risk advocates.)

Narrow artificial intelligence will be denoted NAI and general artificial intelligence GAI.

(1) Is it in principle capable of behaving in accordance with human intention to a sufficient degree?

NAI: True

GAI: True

(2) Under what circumstances does it fail to behave in accordance with human intention?

NAI: If it is broken, where "broken" stands for a wide range of failure modes, such as incorrectly managing memory allocation.

GAI: In all cases in which it has not been mathematically proven to be tasked with the protection of, and equipped with, a perfect encoding of all human values, or a safe way to obtain such an encoding.

(3) What happens when it fails to behave in accordance with human intention?

NAI: It crashes, freezes, or halts. It generally fails in a way that is harmful to its own functioning. If, for example, an autonomous car fails at driving autonomously, it will usually either go into safe mode and halt, or crash.

GAI: It works perfectly well, indeed superhumanly well. All of its intended capabilities remain intact, except that it completely fails to work as intended, in such a way as to destroy all human value in the universe. It will be able to improve itself and capable of obtaining a perfect encoding of human values, yet it will use those intended capabilities to deceive and overpower humans rather than to do what it was intended to do.

(4) What happens if it is bound to use a limited amount of resources, use a limited amount of space or run for a limited amount of time?

NAI: It will only ever do what it was programmed to do. As long as there is no fatal flaw harming its general functionality, it will work within the defined boundaries as intended.

GAI: It will never do what it was programmed to do and always remove or bypass its intended limitations in order to pursue unintended actions such as taking over the universe.


The current beliefs of most experts in the field of AI do not seem to support the claim that the behavior outlined in the chart above is a likely outcome. See, for example, Peter Norvig (2012):

Personally, I think that the last invention we need ever make is the partnership of human and tool. Paralleling the move from mainframe computers in the 1970s to personal computers today, most AI systems went from being standalone entities to being tools that are used in a human-machine partnership.

Our tools will get ever better as they embody more intelligence. And we will become better as well, able to access ever more information and education. We may hear less about AI and more about IA, that is to say “intelligence amplification”. In movies we will still have to worry about the machines taking over, but in real life humans and their sophisticated tools will move forward together.

I believe that AI risk advocates need to provide substantial technical detail and specific arguments to support the above chart. Arguing by definition alone is insufficient. The behavior outlined above has to be shown not only to be possible in principle but to be a probable result of actual research and development.

What is it that makes a general intelligence, as opposed to a narrow intelligence, behave in such a way as to result in human extinction?

What can be said about a general intelligence that can’t be said about a narrow intelligence such as IBM Watson? Both systems can be interpreted as implicitly having a utility function, and even a thermostat could be interpreted as having a terminal goal. Yet a narrow intelligence, an expert system, is characterized as achieving its goal, while a generally intelligent agent is characterized as achieving its goal and, in addition, pursuing activities that will cause human extinction.
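To make the point concrete, here is a minimal sketch (my own illustration, not anything from the chart) of how the same implicit utility function can be read into both a "narrow" stimulus-response rule and a "general" agent that explicitly maximizes predicted utility. The setpoint, action names, and temperature-effect model are all hypothetical assumptions for the example.

```python
# Hypothetical example: a thermostat interpreted as a utility maximizer.
# Assumed utility: negative distance of room temperature from a setpoint.

def utility(temperature: float, setpoint: float = 21.0) -> float:
    """Implicit utility: higher when the room is closer to the setpoint."""
    return -abs(temperature - setpoint)

def thermostat_action(temperature: float, setpoint: float = 21.0) -> str:
    """A 'narrow' agent: a fixed stimulus-response rule, no explicit goal."""
    if temperature < setpoint:
        return "heat"
    if temperature > setpoint:
        return "cool"
    return "idle"

def planner_action(temperature: float, setpoint: float = 21.0) -> str:
    """A 'general' agent: picks whichever action maximizes predicted utility."""
    # Assumed effect of each action on temperature (degrees per step).
    effects = {"heat": 1.0, "cool": -1.0, "idle": 0.0}
    return max(effects, key=lambda a: utility(temperature + effects[a], setpoint))

# Over the same observations, both agents select the same actions,
# so the utility-function reading alone does not distinguish them.
for t in (18.0, 21.0, 25.0):
    assert thermostat_action(t) == planner_action(t)
```

The sketch only illustrates the interpretive point in the paragraph above: attributing a utility function to a system does not, by itself, tell us whether it is narrow or general.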


  1. Lukasz Stafiniak:

    There are three camps of views on “safe AGI” that I recognize. The first camp is Eliezer Yudkowsky and the “logical approach” with FAI, which you characterize above. MIRI backers are obviously supporters of this camp. The second camp is Mark Waser and the “citizenship approach”. I would enlist David Brin in this camp, while Vernor Vinge is closer to the first camp. The third camp is Ray Kurzweil and the “merger approach”. The AI-as-tool view (e.g. Peter Norvig) is a subset of this camp. Pei Wang is perhaps at the outskirts of camp 2, near camp 3. (I value Pei Wang’s philosophical contributions, although I think his NARS system has obvious shortcomings which will cause his attempt at AGI to fail.)

  2. seahen:

    Reminds me of a dream I had, while I was in undergrad and taking *the* AI course (my University’s excuse for a computing department had only one). My term project, which was a game AI (and would have had to run on my already-old laptop, which was a Centrino at 2x1200MHz with 2GiB RAM), somehow acquired a young woman’s body in its first 12 hours of operation, and seduced me. It was an interesting dream, and one that’s stuck with me ever since, but I somehow think it would have run into some limitations that would have taken me, if not me and one of my profs, to fix.
