Q&A with experts on risks from AI #5

[Click here to see a list of all interviews]

Professor John E. Laird is the founder of Soar Technology, an Ann Arbor company specializing in creating autonomous AI entities. His major research interest is in creating human-level artificially intelligent entities, with an emphasis on the underlying cognitive architecture. [Homepage]

Dr. Kristinn R. Thorisson has been developing AI systems and technologies for over two decades. He is the Coordinator / Principal Investigator of the HUMANOBS FP7 project and co-author, with Eric Nivel, of the AERA architecture, which targets artificial general intelligence. A key driving force behind the project is Thorisson’s new Constructivist Methodology, which lays out principles for why and how AI architectures must be given introspective and self-programming capabilities. [Homepage]

Q&A with John E. Laird

Q1: Assuming no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of roughly human-level machine intelligence?

Explanatory remark to Q1:

P(human-level AI by (year) | no wars ∧ no disasters ∧ beneficial political and economic development) = 10%/50%/90%

John E. Laird: I see this as a long way out. There are many technical/scientific hurdles, and there is not a general consensus that there is a need for the type of autonomous human-level machine intelligence from science fiction. Instead, I predict that we will see machine intelligence embedded into more and more systems, making other systems “smart” but not as general as humans, and not with complete human-level intelligence. We will see natural language and speech becoming ubiquitous so we can communicate with devices (more than Siri) in the next 5-10 years. But I don’t see the development of autonomous HLMI coming anytime soon (such as robots in the movies – Data for example). There are many technical hurdles, but there are also economic, political, and social issues. On the technical side, very few people are working on the problem of integrated human-level intelligence, and it is slow going. It would take significant, long-term investment and I don’t see that happening anytime soon.

10%: 20 years

50%: 50 years

90%: 80 years

Q2: What probability do you assign to the possibility of human extinction as a result of badly done AI?

Explanatory remark to Q2:

P(human extinction | badly done AI) = ?

(Where ‘badly done’ = AGI capable of self-modification that is not provably non-dangerous.)

John E. Laird: 0% – I don’t see the development of AGI leading to this. There are other dangers of AI, where people (or governments) use the power that can be gained from machine intelligence to their own ends (financially, politically, …), and that could end very badly (destruction of communication networks bringing down governments and economies), but the doomsday scenarios of Terminator and the Matrix just don’t make sense for many reasons. (Think James Bond evil geniuses instead of evil robots.) If you want to get scared, watch Colossus: The Forbin Project – but that also is just science fiction (along the lines of your next question, as it turns out).

Q3: What probability do you assign to the possibility of a human-level AGI self-modifying its way up to massive superhuman intelligence within a matter of hours/days/< 5 years?

John E. Laird: 0% – There is no reason to believe that an AGI could do this. First, why would an AGI be able to learn faster than humans? It takes lots of experience (which takes lots of real time) to learn about the world (that is why humans take >12 years to get to something intelligent). Just mining existing databases, etc. isn’t going to get you there – you need to interact with the world. Just getting lots of computers and lots of data doesn’t mean a system can get to superhuman intelligence. Also, using lots of distributed processing effectively (which I assume is the scenario you are thinking about) is problematic. Computation requires locality – to make an intelligent decision, you need to bring data together in one place. You can have some aspects of intelligence distributed, but to be strategic, you need locality.

Q4: Is it important to figure out how to make AI provably friendly to us and our values (non-dangerous), before attempting to solve artificial general intelligence?

John E. Laird: I don’t think so. It would be impossible to prove something like that for a system that is sufficiently complex.

Q5: Do possible risks from AI outweigh other possible existential risks, e.g. risks associated with the possibility of advanced nanotechnology?

John E. Laird: No. There are many more things to lose sleep over than AGI.

Q6: What is the current level of awareness of possible risks from AI, relative to the ideal level?

John E. Laird: I think some awareness is important. Possibly a bit more than now, but this is not a pressing issue for human existence. As I stated earlier, we have much more to worry about in how humans will use intelligent systems than in how the intelligent systems will evolve on their own.

Q7: What existential risk (human extinction type event) is currently most likely to have the greatest negative impact on your personal long-term goals, under the condition that nothing is done to mitigate the risk?

John E. Laird: I worry about bio-engineering – a genetically altered avian flu virus, to me, is much more likely to kill us off than AGI. Nanotechnology also has its scary side.

Q8: Can you think of any milestone such that, if it were ever reached, you would expect human-level machine intelligence to be developed within five years thereafter?

John E. Laird: No – I can’t come up with such a milestone.

Q&A with Kristinn R. Thorisson

Q1: Assuming beneficial political and economic development and that no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of artificial intelligence that is roughly as good as humans (or better, perhaps unevenly) at science, mathematics, engineering and programming?

Kristinn R. Thorisson:

10%: 2025
50%: 2045
90%: 2080

Mathematics and programming will surely come before engineering and science, by at least 20 years, with science emerging last.

Q2: Once we build AI that is roughly as good as humans (or better, perhaps unevenly) at science, mathematics, engineering and programming, how much more difficult will it be for humans and/or AIs to build an AI which is substantially better at those activities than humans?

Kristinn R. Thorisson: I suspect that the task of making the next leap in building an AI becomes exponentially more difficult as intelligence grows, so if it took 100 years to develop a human-level (measured roughly) AI system from the time when software was automatically running on a computer (around the middle of the 20th century), then the next milestone of roughly equal significance will be reached roughly 100 years later, or sometime in the timeframe between 2100 and 2180. However, before that milestone is reached it may already have been made irrelevant by other, more interesting milestones based on, e.g., running vast numbers of specially modified human-level AIs.

Q3: Do you ever expect artificial intelligence to overwhelmingly outperform humans at typical academic research, in the way that they may soon overwhelmingly outperform humans at trivia contests, or do you expect that humans will always play an important role in scientific progress?

Kristinn R. Thorisson: I expect AIs to outperform humans in virtually every way, except perhaps on those points where evolution has guaranteed humans the necessary stability to grow and prosper, i.e. along the social and ethical dimensions – because it is difficult to engineer such capabilities in a top-down manner, they spring more naturally from (natural) evolution, and may in fact be dependent on that.

Q4: What probability do you assign to the possibility of an AI with initially roughly professional human-level competence (or better, perhaps unevenly) at general reasoning (including science, mathematics, engineering and programming) self-modifying its way up to vastly superhuman capabilities within a matter of hours/days/< 5 years?

Kristinn R. Thorisson:

hours: 0.1%
days: 0.2%
<5 years: 2%

Q5: How important is it to research risks associated with artificial intelligence that is good enough at general reasoning (including science, mathematics, engineering and programming) to be capable of radical self-modification, before attempting to build one?

Kristinn R. Thorisson: Not important at all. However, researching the risks associated with *human misuse* of such technology should be on the drawing board of governments everywhere in the next 10 years, ideally.

Q6: What probability do you assign to the possibility of human extinction within 100 years as a result of AI capable of self-modification (that is not provably non-dangerous, if that is even possible)? P(human extinction by AI | AI capable of self-modification and not provably non-dangerous is created)

Kristinn R. Thorisson: Very low, approaching zero.

  • Muhammad al-Khwarizmi

    The discussion of AGI here generally presumes a) a high degree of micromanagement by the designers, b) disembodiment, and c) that what we would recognize as “intelligence” is innately far more difficult than basic sensorimotor tasks.

    My responses, respectively:

    a) An AGI’s design can and should be left relatively open-ended … 99% (or more) learning and 1% (or less) innate. Or something to that effect. Huge emphasis on learning and domain-general mechanisms. Mosaic development only works for very simple organisms.
    b) Aside from the philosophical issue of whether a disembodied agent can be intelligent or aware in any meaningful sense, the surest practical route to an AGI is through embodiment. As Laird put it: “Just mining existing databases, etc. isn’t going to get you there – you need to interact with the world.”
    c) As Rodney Brooks has pointed out, maybe the marvel that is intelligence is not so hard to achieve in some ways. It came pretty soon, relatively speaking, after all that time it took to develop swimming, walking, running, flying, crawling … to my knowledge, the “lower”, “primitive” organs of the brain are executed in greater genetic detail than the “higher” neocortex, which is highly uniform in its anatomy, even in adulthood. So rather than emphasizing a “bag of tricks” approach to cognition, we should be looking for the most basic underlying principles of extant neural intelligence—they are probably both few and broad—and trying to improve on them.

    In any case, I sure hope the interviewees are wrong. Otherwise the planet is staring down the barrel of another doomsday scenario: namely, the continued presence of the human species on its surface.
