Q&A with experts on risks from AI #6

[Click here to see a list of all interviews]


Dr. Randal A. Koene is CEO and Founder of the not-for-profit science foundation carboncopies.org and CEO of the start-up company NeuraLink Co., which is developing wireless neural probes at cellular scale. He is also the Director of Science for the 2045 Foundation.

Previously, Dr. Koene was Director of Analysis at the nanotechnology company Halcyon Molecular in Silicon Valley (2010-2012), and Director of the Department of Neuroengineering at Tecnalia, the third-largest private research organization in Europe (2008-2010). He is a former professor at the Center for Memory and Brain of Boston University, and Founder of the Neural Engineering Corporation of Massachusetts. His research objective is whole brain emulation: creating the large-scale, high-resolution representations and emulations of activity in neuronal circuitry that are needed in patient-specific neuroprostheses.

Dr. Koene has professional expertise in computational neuroscience, neural engineering, psychology, information theory, electrical engineering and physics. He organizes neural engineering efforts to obtain and replicate the functional and structural information that resides in the neural substrate, for use in neuroprostheses and neural interfaces. Dr. Koene devised NETMORPH (netmorph.org), a computational framework for the simulated morphological development of neuronal circuitry, and a prototype for the development of Virtual Brain Laboratories, which aim to give neuroscientists, neuroengineers and clinicians large-scale, high-resolution quantitative tools analogous to the computational tools that have become essential in fields such as genetics, chemistry or the aerospace industry. This effort bridges scales and will help determine how significant functions are encoded robustly in neural ensembles, and how those functions can nevertheless depend in specific ways on the detailed biophysics of particular component physiology.

Dr. Koene earned his Ph.D. in Computational Neuroscience at the Department of Psychology at McGill University, and his M.Sc. in Electrical Engineering with a specialization in Information Theory at Delft University of Technology. He is a member of the Oxford working group that convened in 2007 to create a first roadmap toward whole brain emulation (a descriptive term for the technological accomplishment of mind transfer to a different substrate that was first coined by Dr. Koene on his minduploading.org website in 2000).

Visit Dr. Koene’s personal website randalkoene.com, carboncopies.org, or minduploading.org, or watch Dr. Koene present and discuss at TEDx in Tallinn, Estonia: Machines in Minds to Reverse Engineer the Machine that is Mind.

A personal FAQ is available, based on biographical questions received. [Homepage] [Wikipedia]


AIDEUS is unlike other projects attempting to create a strong artificial intelligence on the basis of existing weak methods that implement one or another cognitive function: it develops models of intelligent behavior characterized by well-grounded universality, and works to increase their practical applicability. Its goal is the creation of a strong artificial intelligence, the solution of fundamental scientific problems, and the achievement of universality in embodied machine learning and decision-making systems. [Homepage]

The questions were answered by Alexey Potapov of AIDEUS.

THE INTERVIEW

Randal Koene: I do not think I would call myself an expert on A(G)I Risk. My expertise lies in the field of Whole Brain Emulation. But I do have overlapping interests with the AGI camp, so I think it is fine for me to try to answer your questions.

Q1: Assuming beneficial political and economic development and that no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of artificial intelligence that is roughly as good as humans (or better, perhaps unevenly) at science, mathematics, engineering and programming?

Randal Koene: My estimates as of Dec. 2012 are:

10% by 2020
50% by 2035
90% by 2050

AIDEUS (Alexey Potapov): 2025/2035/2050

Q2: Once we build AI that is roughly as good as humans (or better, perhaps unevenly) at science, mathematics, engineering and programming, how much more difficult will it be for humans and/or AIs to build an AI which is substantially better at those activities than humans?

Randal Koene: I think it depends on your notion of “better”. If being substantially better only requires being 1000x faster at carrying out certain of the tasks involved (e.g. logical reasoning, proofs, etc.), then it would not be much more difficult. It would be a matter of providing the necessary purpose-built hardware, possibly with some algorithmic streamlining.

If you mean something of greater conceptual magnitude, such as creating a mind that, in addition to our capabilities, is able to conceive of itself within a quantum mechanical universe where time and space are mere epiphenomena of the quantum mechanical interactions – and which can usefully progress in such a world… this could take a bit more effort.

AIDEUS (Alexey Potapov): It will be easier.

Q3: Do you ever expect artificial intelligence to overwhelmingly outperform humans at typical academic research, in the way that they may soon overwhelmingly outperform humans at trivia contests, or do you expect that humans will always play an important role in scientific progress?

Randal Koene: Yes. Most typical academic research is composed of some fairly trivial grunt work. In fact, building an Artificial Researcher is a goal that was probably largely spurred by that realization and the desire to focus on the interesting aspects rather than that grunt work.

So… if we create an AGI with human level performance overall, then I think it is entirely feasible to create an Artificial Researcher with above-human performance in academic research (when averaged).

AIDEUS (Alexey Potapov): Artificial intelligence will overwhelmingly outperform normal humans. However, friendly AI can help humans advance to its own level.

Q4: What probability do you assign to the possibility of an AI with initially roughly professional human-level competence (or better, perhaps unevenly) at general reasoning (including science, mathematics, engineering and programming) to self-modify its way up to vastly superhuman capabilities within a matter of hours/days/< 5 years?

Randal Koene:

Within hours: less than 0.1
Within days: less than 0.2
Within <5 years: 0.9-1.0

The reason for my thinking in this area is the interdependence of things (see, for example, the nice movie I, Pencil: http://youtu.be/IYO3tOqDISE). This means that even if you have a thing in a box that is smart, that is not enough to “very quickly” put together everything you need to become a much greater thing. Information, material, energy and much more need to travel to and fro to make that possible – and those flows are limited in their rates in many ways.

AIDEUS (Alexey Potapov): 50%

Q5: How important is it to research risks associated with artificial intelligence that is good enough at general reasoning (including science, mathematics, engineering and programming) to be capable of radical self-modification, before attempting to build one?

Randal Koene: I think it is quite important to be aware of those risks and to take active precautions against possible negative outcomes. On the flip side, the only way to actually research the matter and become more aware of the real issues involved is to understand it better… and that is only possible if you are intimately familiar with the topic, which again is directly related to working on it. So, I think that the matter is rather “academic” (in all senses of the word) unless one is also directly involved in A(G)I research.

AIDEUS (Alexey Potapov): The importance is moderate, because the technical details of safe AI greatly depend on the specific way AGI is built.

Q6: What probability do you assign to the possibility of human extinction within 100 years as a result of AI capable of self-modification (that is not provably non-dangerous, if that is even possible)? P(human extinction by AI | AI capable of self-modification and not provably non-dangerous is created).

Randal Koene: A VERY difficult question, because in reality the probability of human extinction within 100 years is not only dependent on the possibility of A(G)I. There are many things that could cause our extinction within that time-frame… and those things are in turn not all independent of one another!

So, I think the probability is greater than 0. But beyond that, I don’t have the data to make an estimate that I would want to stand behind in publication. 😉

AIDEUS (Alexey Potapov): I think that this question is put in a slightly incorrect form, because the singularity will bring drastic changes, and humanity will change within 100 years independent of (unsafe) AI. Biological human extinction will not matter. However, P(humans don’t participate in the singularity | AI capable of self-modification and not provably non-friendly is created) = 90%. You can consider this the answer to your question, though.
