ai_risks_qa

This is a follow-up interview with professor of computer science Michael Littman[1][2] about artificial intelligence and the possible risks associated with it.

The Interview

Q1: You have been an academic in AI for more than 25 years during which time you mainly worked on reinforcement learning.[3][4][5] What are you currently working on and what are your plans for the future?

Michael Littman: My first paper, which I worked on with Dave Ackley in 1989, was called “Learning from natural selection in an artificial environment”. Recently, I’ve started to come back to the question we looked at in that paper—essentially, what should a learning algorithm try to optimize so that the resulting behavior is as “fit” as possible? Most reinforcement-learning research doesn’t make a distinction between the agent’s reward function and its actual task, but Satinder Singh[6] and his colleagues recently provided some evidence that it is conceptually useful to separate these two ideas and ask how to create a reward function that encourages an agent to excel at a task other than the one literally specified by the reward function.

In a way, it is a similar question to the control problem[7], but in a much less sinister context—we need a way of telling machines what we want them to do. I’m focused on end users, people without significant programming experience, and am looking at combinations of inverse reinforcement learning, good interface design, and more natural programming models that are easy to pick up. My collaborators and I are looking at these questions in the context of programming household devices (lights and thermostats) as well as with robots.
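To make the reward-versus-task distinction concrete, here is a minimal sketch of the "optimal reward" idea Littman describes: an outer loop searches over candidate internal reward functions and keeps whichever one produces an agent that scores best on the external task. This is my own toy construction, not Littman's or Singh's code; the chain environment, the step-bonus parameterization, and all hyperparameters are illustrative assumptions.

```python
# Toy "optimal reward" search: the designer's loop picks the internal reward
# whose trained agent does best on the EXTERNAL task (reaching the goal state).
import random

N_STATES = 6          # simple chain: states 0..5, goal at state 5
ACTIONS = (-1, +1)    # step left or right
EPISODE_LEN = 20
N_EPISODES = 200

def run_agent(step_bonus, seed=0):
    """Train a Q-learning agent on an internal reward = task reward + step_bonus
    for moving right; return its external fitness (goal reaches per episode)."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    goals = 0.0
    for _ in range(N_EPISODES):
        s = 0
        for _ in range(EPISODE_LEN):
            if rng.random() < 0.1:                      # occasional exploration
                a = rng.choice(ACTIONS)
            else:                                       # otherwise act greedily
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            task_reward = 1.0 if s2 == N_STATES - 1 else 0.0   # the actual task
            internal_reward = task_reward + (step_bonus if a == +1 else 0.0)
            q[(s, a)] += 0.1 * (internal_reward
                                + 0.9 * max(q[(s2, b)] for b in ACTIONS)
                                - q[(s, a)])
            goals += task_reward
            if s2 == N_STATES - 1:
                break
            s = s2
    return goals / N_EPISODES

# Designer's outer loop: evaluate candidate internal rewards by external fitness.
candidates = [0.0, 0.01, 0.05, 0.1]
best = max(candidates, key=run_agent)
print("best internal step bonus:", best, "external fitness:", run_agent(best))
```

The point of the sketch is only structural: the reward the agent optimizes (task reward plus a bonus) is deliberately not the same thing as the task it is evaluated on, and the designer tunes the former to improve the latter.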

Q2: In a previous interview[8] you wrote that P(human extinction caused by badly done AI | badly done AI) is epsilon. You also voiced some skepticism about friendly AI[9] (a machine superintelligence that stably optimizes for humane values). Now that you have read Nick Bostrom’s book[10], ‘Superintelligence: Paths, Dangers, Strategies’, have you learnt something that changed your opinion, or caused you to interpret the questions differently?

Michael Littman: I was very impressed with Nick Bostrom’s book. It’s exquisitely thought out and I found the scope (in terms of coverage of micro and macro scales in both space and time) truly remarkable. That being said, I do not find the central premise—that we are in the process of bringing the ominous owl on the book’s cover into our midst—compelling. Note that I didn’t voice skepticism about friendly AI but about *provably* friendly AI. I’d argue that you can’t prove things about the real world, only about abstractions.

Q3: What is the current level of awareness of Nick Bostrom’s work within the field of AI, or his arguments, and do you recommend that people working to advance artificial intelligence should read his book?

Michael Littman: My guess is that the engagement of most AI researchers is at the level of friends and colleagues alerting them to the highly public statements of notable individuals like Musk (“summoning the demon”)[11] and Gates (“I don’t understand why some people are not concerned”)[12]. I think the field is well aware of the idea of the singularity, but not familiar with the subtleties and the depth of Bostrom’s work in this context. That being said, I do not think mainstream AI research is seriously dabbling with the idea of recursive self improvement[13] and, as such, Bostrom’s book seems like a pretty significant departure from their core interests and direction.

Q4: In an email you wrote that you believe the main disagreement between you and Nick Bostrom et al. to be whether an intelligence explosion[14][15][16][17][18][19][20][21][22][23] is a non-negligible consequence of AI research. In 2011 you wrote that the probability of a human-level artificial general intelligence (AGI) self-modifying its way up to massive superhuman intelligence in less than 5 years is essentially zero. (Addendum: In a previous interview he also wrote that P(superhuman intelligence within < 5 years | human-level AI running at human-level speed equipped with a 100 Gigabit Internet connection) = 1%, possibly misinterpreting the question I cited as P(superhuman intelligence within < 5 years).) Some people would call you overconfident.[24][25] Can you elaborate on the reasons underlying your estimate?

Michael Littman: I find your use of the word “overconfident” there to be quite interesting. I’m very interested in the problem of AGI and would love to be a part of the community that brings it about. An overconfident person, to me, would be someone who believes he or she can solve this problem in 5 years. More to your point, though, I don’t see massive superhuman intelligence to be something that is meaningful outside a specific cultural context. The development of what we might call massive superhuman intelligence will be an evolutionary process involving changes in the social, physical, and intellectual fabric on which our society is built. Changes like that take time.

Q5: Elon Musk has recently donated $10M to keep AI beneficial.[26] Consider someone whose goal is to maximize how much good they do[27], where “good” is defined as improving the world in order to reduce suffering and help humanity flourish. Do you believe that donating money in order to reduce risks associated with artificial intelligence (not just extinction type risks) might currently be an effective way to accomplish this goal?

Michael Littman: As you know, a number of my colleagues (including my dissertation advisor and many other colleagues for whom I have tremendous respect) signed an open letter[28] hosted by the Future of Life Institute calling for more attention to reducing risks associated with AI. I’ve followed up with a few of them and the most prevalent attitude is that AI, like all technologies, carries significant risks to society. At that level, I agree wholeheartedly that keeping technologists and scientists tuned in to the societal impacts of their work is exceedingly important. So, yes, I feel that supporting research on societal impacts of technology—including artificial intelligence—is a good investment for good.

However, if the risks we’re talking about are of the type detailed in Bostrom’s book—human-independent AI competing directly with humanity for control of our destiny—I don’t think that should be a high priority.

Q6: In another email you wrote that your personal takeaway from all this is to work harder to understand what intelligence *is*. How do you think about using e.g. Hutter’s specification of AIXI[29] as a model for AGI? Or asked more generally, do you think it is possible to work on AGI safety, or a formal definition of it, without researching and advancing AGI at the same time?

Michael Littman: I think the idea of seriously studying AGI safety in the absence of an understanding of AGI is futile. At a high level, raising awareness and scoping out possibilities is fine. But, proposing specific mechanisms for combatting this amorphous threat is a bit like trying to engineer airbags before we’ve thought of the idea of cars. Safety has to be addressed in context and the context we’re talking about is still absurdly speculative.

Q7: D. Scott Phoenix, co-founder of the A.I. startup Vicarious, recently wrote[30] that artificial superintelligence isn’t something that will be created suddenly or by accident. He further wrote that there will be a long iterative process of learning how these systems can be created and the best way to ensure that they are safe. What probability do you assign to the possibility that he is wrong, that either human or superhuman AGI will appear too quickly for us to ensure its safety if we don’t start working on the problem right now? Note that this question pertains to whether the initial invention or emergence of AGI will take us by surprise, rather than to the speed of its subsequent improvement or self-improvement.

Michael Littman: I agree with the perspective that it’s a long iterative process. I believe that the very notion of what we think intelligence *is* and what it is *for* will evolve significantly through this process. I think we’ll look back on this time much as we look back on earlier times, stunned at the naivety of our working hypotheses and surprised by our obliviousness to the fact that what we now take as a given is not only not given, but flat out wrong. If people are comfortable claiming that we know enough about intelligence today to extrapolate what superintelligence would be, it would be my turn to use the word “overconfident”.

See also

Recent commentary on AI risks by experts and others

Earlier commentary on AI risks

References

[1] http://en.wikipedia.org/wiki/Michael_L._Littman

[2] http://cs.brown.edu/~mlittman/

[3] http://scholar.google.com/scholar?q=Michael+Littman

[4] http://www.scholarpedia.org/article/Reinforcement_learning

[5] https://www.udacity.com/course/ud820

[6] http://web.eecs.umich.edu/~baveja/

[7] The control problem: how to keep future superintelligences under control. Some AI risk advocates claim that rather than trying to limit what an AI can do, we have to engineer its motivation system in such a way that it would choose not to do harm. One of the reasons underlying this claim is that a superintelligent AI would probably break free from any bonds we construct.

[8] http://lesswrong.com/r/discussion/lw/8wz/qa_with_michael_littman_on_risks_from_ai/

[9] http://wiki.lesswrong.com/wiki/Friendly_artificial_intelligence

[10] http://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies

[11] http://www.cnet.com/news/elon-musk-we-are-summoning-the-demon-with-artificial-intelligence/

[12] http://www.washingtonpost.com/blogs/the-switch/wp/2015/01/28/bill-gates-on-dangers-of-artificial-intelligence-dont-understand-why-some-people-are-not-concerned/

[13] http://wiki.lesswrong.com/wiki/Recursive_self-improvement

[14] Intelligence Explosion Microeconomics – https://intelligence.org/files/IEM.pdf

[15] Intelligence Explosion: Evidence and Import – https://intelligence.org/files/IE-EI.pdf

[16] Why an Intelligence Explosion is Probable – http://richardloosemore.com/docs/2012c_IntelligenceExplosion_rpwl_bg.pdf

[17] Can Intelligence Explode? – http://www.hutter1.net/publ/singularity.pdf

[18] The Singularity: A Philosophical Analysis – http://consc.net/papers/singularity.pdf

[19] Cascades, Cycles, Insight… – http://lesswrong.com/lw/w5/cascades_cycles_insight/

[20] …Recursion, Magic – http://lesswrong.com/lw/w6/recursion_magic/

[21] Recursive Self-Improvement – http://lesswrong.com/lw/we/recursive_selfimprovement/

[22] Hard Takeoff – http://lesswrong.com/lw/wf/hard_takeoff/

[23] Permitted Possibilities, & Locality – http://lesswrong.com/lw/wg/permitted_possibilities_locality/

[24] Suppose that near certainty in your ability to assess a set of propositions equals a 1 in a million chance of being wrong about an assessment of a particular proposition. This means that given a million similar statements, you would have to be correct (on average) about 999999 such assessments while being wrong only once. Can you possibly be this accurate? An amusing example: http://www.spaceandgames.com/?p=27

[25] http://lesswrong.com/lw/3be/confidence_levels_inside_and_outside_an_argument/

[26] http://futureoflife.org/misc/AI

[27] http://lesswrong.com/lw/3gj/efficient_charity_do_unto_others/

[28] http://futureoflife.org/misc/open_letter

[29] http://en.wikipedia.org/wiki/AIXI

[30] http://www.popsci.com/bill-gates-fears-ai-ai-researchers-know-better



Sir William Timothy Gowers, FRS (Fellow of the Royal Society) is a British mathematician. He is a Royal Society Research Professor at the Department of Pure Mathematics and Mathematical Statistics at the University of Cambridge, where he also holds the Rouse Ball chair, and is a Fellow of Trinity College, Cambridge. In 1998 he received the Fields Medal for research connecting the fields of functional analysis and combinatorics. [Homepage]

The Interview

Timothy Gowers: OK here are my answers, but with the qualification that for some questions I’m going to restrict attention to performance at mathematics. I don’t have enough appreciation of the technical difficulties associated with more general AI to feel confident about making predictions. But I do think that if a program can do mathematical research as well as humans, then science, engineering and programming can’t be far behind (especially programming).

Q1: Assuming beneficial political and economic development and that no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of artificial intelligence that is roughly as good as humans (or better, perhaps unevenly) at science, mathematics, engineering and programming?

Timothy Gowers: I think there’s a 10% chance we’ll have programs as good as humans at doing maths within 25 years, a 50% chance that we’ll have it within 40 years and a 90% chance that we’ll have it by the end of the century.

Q2: Once we build AI that is roughly as good as humans (or better, perhaps unevenly) at science, mathematics, engineering and programming, how much more difficult will it be for humans and/or AIs to build an AI which is substantially better at those activities than humans?

Timothy Gowers: I think that once computers are as good as, say, beginning PhD students at maths, then assuming there are lots of them that have different mathematical styles and tastes and can interact with each other in the way that human mathematicians do (in principle one computer could model an entire mathematical community but I think it’s more likely that it would be done by several different programs developed by several different teams), then I think one could just leave the programs to run and you’d see maths progress like human maths but much much faster. I don’t know exactly how good they’ll have to be before this singularity arises: the key property they’ll need to have is an ability to step back, think about what they are doing, and improve themselves. That’s why I went for a beginning PhD student, who has to go through a process like that. (Maybe humans would need to act as “research supervisors” just to get them going.)

Q3: Do you ever expect artificial intelligence to overwhelmingly outperform humans at typical academic research, in the way that they may soon overwhelmingly outperform humans at trivia contests, or do you expect that humans will always play an important role in scientific progress?

Timothy Gowers: I expect computers to become overwhelmingly better than humans at mathematical research, just as they are now overwhelmingly better at number crunching. But I think that they’ll also be able to explain a lot of what they do. It’s hard to imagine people training themselves in the skill needed to solve a maths problem if this happens, but maybe it will survive to some extent, just as people still enjoy playing chess now. But I think that the human activity of mathematical research as we know it will be killed off by this development. Quite what the knock-on effects of this will be are hard to predict. But for example I think if we can build good computer researchers, we’ll also be able to build amazing interactive teaching programs, so we needn’t worry that there will be nobody left to teach mathematics.

Q4: What probability do you assign to the possibility of an AI with initially roughly professional human-level competence (or better, perhaps unevenly) at general reasoning (including science, mathematics, engineering and programming) to self-modify its way up to vastly superhuman capabilities within a matter of hours/days/< 5 years?

Timothy Gowers: I have a similar view to what I said in (2). To get to roughly professional competence, a significant degree of self-modification will be needed, so I don’t really see how one can get to human levels without rapidly surpassing those levels.

Q5: How important is it to research risks associated with artificial intelligence that is good enough at general reasoning (including science, mathematics, engineering and programming) to be capable of radical self-modification, before attempting to build one?

Timothy Gowers: I really don’t know about this. I don’t think doing it just for maths is risky, because I think maths is sufficiently narrow that we don’t have to worry about things like whether the programs could become malign. But with more general intelligence I think it’s different. For example, if a program could pass a fairly modest Turing test, then one could build a spambot that would generate spam that was basically impossible to distinguish from non-spam. Imagine a blog that gets zillions of comments that are all perfectly sensible. Would it matter? That’s an interesting question, but it would certainly change things, and in general I think that keeping the internet going would be a serious challenge. On the plus side, one could also design better and better spam detectors, but I’m not sure how much comfort I get from that: by the definition of passing the Turing test, detection would appear to be impossible.

The general point here is that once self-modifying programs exist, people other than the original developers could use them for evil purposes. I don’t know how much of a problem that is. It applies to other things, such as nuclear weapons for instance.

Q6: What probability do you assign to the possibility of human extinction within 100 years as a result of AI capable of self-modification (that is not provably non-dangerous, if that is even possible)? P(human extinction by AI | AI capable of self-modification and not provably non-dangerous is created).

Timothy Gowers: I don’t know, but my instinct tells me that the probability is pretty small. In particular, I find it small enough that we are nowhere near the point where we should stop doing, or even slow down, research into AI.

Q7: How would you test if an artificial intelligence was at least as good as humans at mathematics?

Timothy Gowers: It would be sufficient (though maybe not necessary) to subject an artificial mathematician to the same tests that human mathematicians are subjected to. If it can write papers that attract the interest of human mathematicians, then its intelligence is as good as that of a human mathematician we judge to be producing results of a similar level of interest.

Actually, I want to qualify that. I would want my artificial mathematician not just to produce mathematics of a kind that a human might produce, but also to explain how it did so. If, for example, it made excessive use of brute-force search but ended up with the proof, when a human would get there much more efficiently, then it would be lacking something important.

Q8: Is it correct that in order to create an artificial mathematician it is first necessary to discover, prove and encode the mathematics of discovering and proving non-arbitrary mathematics (i.e. to encode a formalization of the natural language goal “be as good as humans at mathematics”)?

Timothy Gowers: I think it would be extremely helpful to formalize the notion of “interesting mathematics” (as opposed to arbitrary well-formed statements and logically valid proofs). However, again I would regard that as sufficient but not necessary: if programs were written that in practice produced mathematics of a similar nature to human mathematics, one might eventually have enough faith in them to believe that they had captured the notion of “interesting mathematics” without our having had to define it. (One could trivially define it as “something that will eventually be part of the output of the program”.)

Q9: What role does natural language proficiency play in human mathematics and what are the challenges at doing mathematics without it?

Timothy Gowers: I think that mathematicians are tempted by their training to reduce everything to small sets of assumptions, and to have very economical foundations — e.g. in set theory. But in practice we think with massively redundant sets of assumptions, and that’s important because it enables us to make connections easily that would otherwise not be obvious and would hold us up. So I think that really good automatic theorem provers will need to operate in a very high-level language — not necessarily quite as flexible as the entirety of the English language, but more like the kind of language mathematicians use when writing out a proof carefully (minus the side remarks, unless these too are quite precise).

Q10: The problems I see are the following: (1) if the formalization of what your artificial mathematician is supposed to do is very specific, then most of the work requiring human-level intelligence has been done by whoever came up with that formalization, and (2) if the formalization is very unspecific, then it is not clear how to test for success, much less judge its efficiency.

Timothy Gowers: I don’t agree with what you say.

(1) A formalization of what constitutes interesting mathematics doesn’t have to be of the form, “This statement is interesting if and only if X.” Rather, it can be of the following form. “Let B be the current body of mathematical knowledge. A statement S is interesting relative to B if it is generated in manner X.” Then a statement is “eventually interesting relative to B” if it belongs to the closure of B under extension by interesting statements. We may well not be able to describe in advance what the statements in this closure will look like — indeed, it’s pretty certain that we won’t.

It’s true that a lot of human intelligence would be needed to come up with a formalization of that kind, but after that one could leave the computers chugging away, gradually (or not so gradually) building up from the current body of mathematical knowledge.

(2) I don’t see what’s wrong with an informal test of success: does the program produce what humans would regard as interesting results? Are there whole classes of results that it seems to be unable to discover? Etc. etc.
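As a purely illustrative toy (my own construction, not anything Gowers has endorsed), the closure he describes in (1) can be sketched in a few lines: fix a base body B, apply a generation rule X to obtain statements that are "interesting relative to B", and iterate until nothing new appears. Here "statements" are just integers and X is a stand-in rule (sums of two distinct known numbers); a real criterion would of course be far richer.

```python
# Toy illustration of "eventually interesting relative to B": the closure of a
# base body B under extension by statements the rule X marks as interesting.

def generate_interesting(body, limit=50):
    """One application of the toy rule X relative to the current body."""
    new = set()
    for a in body:
        for b in body:
            if a != b and a + b <= limit and (a + b) not in body:
                new.add(a + b)
    return new

def eventually_interesting(base, limit=50):
    """Closure of the base body under repeated extension by X."""
    body = set(base)
    while True:
        new = generate_interesting(body, limit)
        if not new:
            return body
        body |= new

B = {1, 2, 5}
print(sorted(eventually_interesting(B)))
```

The structural point matches Gowers's description: the closure is fully determined by B and the rule, yet its membership is something you discover by letting the process run rather than by describing it in advance.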



Matt Mahoney is a programmer and researcher on data compression, well known for creating the open compression standard and API libZPAQ (released into the public domain), the data compressor PAQ (released under GPLv3), and many other compression tools. He works as a technology strategist at Ocarina Networks (a division of Dell), specializing in data compression research. He received a Ph.D. from Florida Tech in 2003 and taught classes there through 2009. [Homepage]

Data Compression Explained, by Matt Mahoney

A Machine Learning Approach to Detecting Attacks by Identifying Anomalies in Network Traffic, by Matt Mahoney

The Social Impact of Artificial Intelligence, by Matt Mahoney

Proposed Design for Distributed Artificial General Intelligence, by Matt Mahoney

The Interview

Q1: Assuming beneficial political and economic development and that no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of artificial intelligence that is roughly as good as humans (or better, perhaps unevenly) at science, mathematics, engineering and programming?

Matt Mahoney: It depends on what you mean by human level intelligence. I think it has already happened. http://mattmahoney.net/iq/

But if you mean the Turing test, that will never happen for two reasons. First, a computer can never be more like a human than a human. Second, our goal is not to build machines that behave like humans. Our goal is to automate human labor. That means machines that can do everything that a group of billions of humans could do. Already, machines and technology do most of this work.

The hard problems like vision, language, art, and robotics will require the equivalent computing power of 7 billion human brains, i.e. 10^26 operations per second (OPS), 10^25 bytes of memory, 10^17 bits of human knowledge collected at 5 bits per second per person, all consuming less than 10^13 W. I would say 30 years to reach that level, with 10% chance it will happen in 10 years and more than 10% chance it will never happen.

But that is not the same as AI. It means half of the remaining work will be automated and productivity will double, without much change in unemployment. And then progress will continue smoothly after that. If you are asking when machines will do all human labor, I would say that won’t happen as long as humans are alive and in control because making money is reserved for humans, not machines. Controlling machines is a form of work.
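As a rough sanity check on the aggregate figures quoted above, the sketch below simply scales assumed per-brain numbers up to roughly 7 billion people. The per-brain constants are my assumptions (commonly cited order-of-magnitude estimates), not Mahoney's exact inputs.

```python
# Back-of-envelope scaling of assumed per-brain figures to ~7 billion people.
POPULATION = 7e9
OPS_PER_BRAIN = 1e16     # assumed "operations" per second per brain
BYTES_PER_BRAIN = 1e15   # assumed long-term storage per brain (~number of synapses)

print(f"aggregate OPS:   {POPULATION * OPS_PER_BRAIN:.0e}")    # ~7e25, i.e. order 10^26
print(f"aggregate bytes: {POPULATION * BYTES_PER_BRAIN:.0e}")  # ~7e24, i.e. order 10^25
```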

Q2: Once we build AI that is roughly as good as humans (or better, perhaps unevenly) at science, mathematics, engineering and programming, how much more difficult will it be for humans and/or AIs to build an AI which is substantially better at those activities than humans?

Matt Mahoney: They are already substantially better.

Q3: Do you ever expect artificial intelligence to overwhelmingly outperform humans at typical academic research, in the way that they may soon overwhelmingly outperform humans at trivia contests, or do you expect that humans will always play an important role in scientific progress?

Matt Mahoney: If humans have no role, then we are extinct. Humans use AI to do research vastly faster. AI is a tool, not a competitor.

Q4: What probability do you assign to the possibility of an AI with initially roughly professional human-level competence (or better, perhaps unevenly) at general reasoning (including science, mathematics, engineering and programming) to self-modify its way up to vastly superhuman capabilities within a matter of hours/days/< 5 years?

Matt Mahoney: A “human level” AI could not do anything by itself without the help of civilization. You are “human level”, but if you were alone in the jungle, you would not be thinking about how to build an AI computer out of sticks and rocks or how to modify your DNA to have smarter children. The whole idea is ridiculous. Self improvement is a company re-investing its profits or a country building better schools. It is already happening.

Q5: How important is it to research risks associated with artificial intelligence that is good enough at general reasoning (including science, mathematics, engineering and programming) to be capable of radical self-modification, before attempting to build one?

Matt Mahoney: We should understand what self-modification is. Intelligence depends on knowledge and computing power. Bacteria gain both by reproducing and evolving. These are risks that we already study. But we don’t think of bacteria as intelligent, even when they outwit our ability to control them with antibiotics. The DNA in our biosphere encodes 10^37 bits of memory, and replicates at 10^31 OPS. That far exceeds the computing power of all the human brains on the planet, and certainly all of our computers. The potential computing power of self replicating nanotechnology (http://www.foresight.org/nano/Ecophagy.html ) is roughly of the same order as the biosphere, possibly higher by 10^3.
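The biosphere numbers above can be reproduced, to order of magnitude, from two commonly cited estimates. The inputs in the sketch below (roughly 5 × 10^30 prokaryotic cells, a few megabits of DNA per genome, and an assumed average division rate) are my assumptions for illustration, not figures taken from Mahoney.

```python
# Order-of-magnitude check of the biosphere figures, using assumed inputs.
PROKARYOTIC_CELLS = 5e30      # assumed global prokaryote count
BITS_PER_GENOME = 4e6         # assumed ~2e6 base pairs x 2 bits per base
DIVISIONS_PER_SECOND = 1e-6   # assumed average division rate (~once per 10 days)

total_bits = PROKARYOTIC_CELLS * BITS_PER_GENOME
copy_rate = total_bits * DIVISIONS_PER_SECOND  # bits copied per second

print(f"DNA storage:      {total_bits:.0e} bits")   # ~2e37, near the 10^37 quoted
print(f"replication rate: {copy_rate:.0e} bits/s")  # ~2e31, near the 10^31 quoted
```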

Q6: What probability do you assign to the possibility of human extinction within 100 years as a result of AI capable of self-modification (that is not provably non-dangerous, if that is even possible)? P(human extinction by AI | AI capable of self-modification and not provably non-dangerous is created).

Matt Mahoney: I’m not sure the question makes sense. Probability is a mathematical model of human belief. Human belief (like it or not) is based on past experience. We have never experienced human extinction. Therefore the probability is 0. By the same reasoning, I have never died, so therefore I should be immortal.

The mathematical model of probability is only valid for cases of n independent trials, with some outcome observed r times, and where 0 < r < n. Only in that case can we estimate the probability to be about r/n. When we extend the mathematical model beyond these requirements, we get nonsensical results. This is one such case. Sorry, I can’t give you a number.
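For concreteness, here is a minimal sketch of the frequency estimate Mahoney is referring to, and of how it degenerates when the event has never been observed. The smoothed variant (Laplace's rule of succession) is my addition, included only to show the standard workaround; whether it is appropriate for this question is exactly the kind of issue he is raising.

```python
def frequency_estimate(r, n):
    """Plain r/n estimate: only meaningful when 0 < r < n, as argued above."""
    return r / n

def laplace_estimate(r, n):
    """Laplace's rule of succession, (r + 1) / (n + 2): the usual workaround
    for events observed zero (or n) times; its applicability here is debatable."""
    return (r + 1) / (n + 2)

# An event observed 0 times in 200 "trials": the plain estimate collapses to 0,
# which is the degeneracy Mahoney points to; the smoothed one stays small but nonzero.
print(frequency_estimate(0, 200))  # 0.0
print(laplace_estimate(0, 200))    # ~0.005
```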

Also, would you count the extinction of Homo erectus as an existential threat? How about the extinction of various branches of our single-celled ancestors from billions of years ago? If humans evolve into something better, would that count? What if we uploaded (according to some transhumanist theories) so that our minds were simulated in computers and we had only virtual bodies in a simulated world? What if our simulated minds didn’t run all the time, but were instead stored on disk or printed as bar codes on paper? How far do you want to stretch the definition of human?



Dr. MARK CHANGIZI is an evolutionary neurobiologist aiming to grasp the ultimate foundations underlying why we think, feel and see as we do. His research focuses on “why” questions, and he has made important discoveries such as why we see in color, why we see illusions, why we have forward-facing eyes, why letters are shaped as they are, why the brain is organized as it is, why animals have as many limbs and fingers as they do, and why the dictionary is organized as it is.

He attended the Thomas Jefferson High School for Science and Technology, and then went on to the University of Virginia for a degree in physics and mathematics, and to the University of Maryland for a PhD in math. In 2002 he won a prestigious Sloan-Swartz Fellowship in Theoretical Neurobiology at Caltech, and in 2007 he became an assistant professor in the Department of Cognitive Science at Rensselaer Polytechnic Institute. In 2010 he took the post of Director of Human Cognition at a new research institute called 2ai Labs.

He has more than three dozen scientific journal articles, some of which have been covered in news venues such as the New York Times, Wall Street Journal, Newsweek and WIRED. He has written three books, THE BRAIN FROM 25,000 FEET (Kluwer 2003), THE VISION REVOLUTION (Benbella 2009) and HARNESSED: How Language and Music Mimicked Nature and Transformed Ape to Man (Benbella 2011). He is working on his fourth non-fiction book, this one on emotions and facial expressions, called FORCE OF EMOTIONS. He is simultaneously working on his first novel, called HUMAN 3.0. [Homepage]

The Interview

Q1: Assuming beneficial political and economic development and that no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of artificial intelligence that is roughly as good as humans (or better, perhaps unevenly) at science, mathematics, engineering and programming?

Mark Changizi: 100, 1000, 5000

Q2: Once we build AI that is roughly as good as humans (or better, perhaps unevenly) at science, mathematics, engineering and programming, how much more difficult will it be for humans and/or AIs to build an AI which is substantially better at those activities than humans?

Mark Changizi: Not much more difficult. We weren’t “designed” to *do* science, math, etc., and so probably can eventually build something actually *good* at it. It will be much harder to build stuff better than what we evolved to be good at.

Q3: Do you ever expect artificial intelligence to overwhelmingly outperform humans at typical academic research, in the way that they may soon overwhelmingly outperform humans at trivia contests, or do you expect that humans will always play an important role in scientific progress?

Mark Changizi: Yes. Similar reasons as (2). Similar also for games like chess. For tasks where, when good, people are labeled “genius”, we as a species are usually bad at the task. …and AI will one day surpass us. It is the tasks we take for granted, and are never called “genius” for doing them, that will be hard to get AI to do.

Q4: What probability do you assign to the possibility of an AI with initially roughly professional human-level competence (or better, perhaps unevenly) at general reasoning (including science, mathematics, engineering and programming) to self-modify its way up to vastly superhuman capabilities within a matter of hours/days/< 5 years?

Mark Changizi: Zero, if it means self-modification to become better at the wide range of reasoning. If we can design AI to be good at chess, science, etc., then it doesn’t mean it has any extra insight at how to ramp up its own abilities generally. That will likely take a long evolutionary-selection process, not just self-learning. Smart things are smart by virtue of “instinct” being crammed inside, not by virtue of having general-purpose learning algorithms inside.

Q5: How important is it to research risks associated with artificial intelligence that is good enough at general reasoning (including science, mathematics, engineering and programming) to be capable of radical self-modification, before attempting to build one?

Mark Changizi: Not very. Or, much, much less than sci-fi would have us believe. There is no reason, though, to connect up the entirety of civilization to its brain (like in Terminator).

Q6: What probability do you assign to the possibility of human extinction within 100 years as a result of AI capable of self-modification (that is not provably non-dangerous, if that is even possible)? P(human extinction by AI | AI capable of self-modification and not provably non-dangerous is created).

Mark Changizi: 1 in 10^6



Dr. Randal A. Koene is CEO and Founder of the not-for-profit science foundation carboncopies.org and CEO of start-up company NeuraLink Co., developing wireless neural probes at cellular scale. He is also the Director of Science for the 2045 Foundation.

Previously, Dr. Koene was Director of Analysis at the nanotechnology company Halcyon Molecular in Silicon Valley (2010-2012), and Director of the Department of Neuroengineering at Tecnalia, the third largest private research organization in Europe (2008-2010). He is a former professor at the Center for Memory and Brain of Boston University, and Founder of the Neural Engineering Corporation of Massachusetts. His research objective is whole brain emulation, creating the large-scale, high-resolution representations and emulations of activity in neuronal circuitry that are needed in patient-specific neuroprostheses.

Dr. Koene has professional expertise in computational neuroscience, neural engineering, psychology, information theory, electrical engineering and physics. He organizes neural engineering efforts to obtain and replicate function and structure information that resides in the neural substrate for use in neuroprostheses and neural interfaces. Dr. Koene devised NETMORPH (netmorph.org), a computational framework for the simulated morphological development of neuronal circuitry, and a prototype for the development of Virtual Brain Laboratories to give neuroscientists, neuroengineers and clinicians large-scale high-resolution quantitative tools analogous to the computational tools that have become essential in fields such as genetics, chemistry or the aero-space industry. This effort bridges scales and will help determine how significant functions are encoded robustly in neural ensembles, and how those functions can nevertheless depend in specific ways on the detailed biophysics of particular component physiology.

Dr. Koene earned his Ph.D. in Computational Neuroscience at the Department of Psychology at McGill University, and his M.Sc. in Electrical Engineering with a specialization in Information Theory at Delft University of Technology. He is a member of the Oxford working group that convened in 2007 to create a first roadmap toward whole brain emulation (a descriptive term for the technological accomplishment of mind transfer to a different substrate that was first coined by Dr. Koene on his minduploading.org website in 2000).

Visit Dr. Koene’s personal web site randalkoene.com, carboncopies.org, minduploading.org or watch Dr. Koene present and discuss at TEDx in Tallinn, Estonia: Machines in Minds to Reverse engineer the Machine that is Mind.

Personal FAQ is available, based on biographical questions received. [Homepage] [Wikipedia]


AIDEUS is unlike other projects that attempt to create a strong artificial intelligence on the basis of existing weak methods implementing particular cognitive functions: they develop models of intelligent behavior characterized by well-grounded universality, and work to increase their practical applicability. Their goal is the creation of a strong artificial intelligence, the solution of fundamental scientific problems, and the achievement of universality by embodied machine learning and decision-making systems. [Homepage]

The questions were answered by Alexey Potapov of AIDEUS.

The Interview

Randal Koene: I do not think I would call myself an expert on A(G)I Risk. My expertise lies in the field of Whole Brain Emulation. But I do have overlapping interests with the AGI camp, so I think it is fine for me to try to answer your questions.

Q1: Assuming beneficial political and economic development and that no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of artificial intelligence that is roughly as good as humans (or better, perhaps unevenly) at science, mathematics, engineering and programming?

Randal Koene: My estimates as of Dec. 2012 are:

10% by 2020
50% by 2035
90% by 2050

AIDEUS (Alexey Potapov): 2025/2035/2050

Q2: Once we build AI that is roughly as good as humans (or better, perhaps unevenly) at science, mathematics, engineering and programming, how much more difficult will it be for humans and/or AIs to build an AI which is substantially better at those activities than humans?

Randal Koene: I think it depends on your notion of “better”. If being substantially better just means being 1000x faster at carrying out certain tasks involved (e.g. logical reasoning, proofs, etc.), then it would not be much more difficult. It would be a matter of providing the necessary purpose-built hardware, possibly with some algorithmic streamlining.

If you mean something of greater conceptual magnitude, such as creating a mind that in addition to our capabilities is able to conceive of itself within a quantum mechanical universe where time and space are mere epiphenomena of the quantum mechanical interactions – and which can usefully progress in such a world… this could take a bit more effort.

AIDEUS (Alexey Potapov): It will be easier.

Q3: Do you ever expect artificial intelligence to overwhelmingly outperform humans at typical academic research, in the way that they may soon overwhelmingly outperform humans at trivia contests, or do you expect that humans will always play an important role in scientific progress?

Randal Koene: Yes. Most of typical academic research is composed of some fairly trivial grunt work. In fact, building an Artificial Researcher is a goal that was probably largely spurred by that realization and the desire to focus on the interesting aspects rather than that grunt work.

So… if we create an AGI with human level performance overall, then I think it is entirely feasible to create an Artificial Researcher with above-human performance in academic research (when averaged).

AIDEUS (Alexey Potapov): Artificial intelligence will overwhelmingly outperform normal humans. However, friendly AI can help humans to advance to its own level.

Q4: What probability do you assign to the possibility of an AI with initially roughly professional human-level competence (or better, perhaps unevenly) at general reasoning (including science, mathematics, engineering and programming) to self-modify its way up to vastly superhuman capabilities within a matter of hours/days/< 5 years?

Randal Koene:

Within hours: less than 0.1
Within days: less than 0.2
Within <5 years: 0.9-1.0

The reason for my thinking in this area is the interdependence of things (see for example the nice movie I, Pencil: http://youtu.be/IYO3tOqDISE). This means that even if you have a thing in a box that is smart, that is not enough to “very quickly” put together everything you need to become a much greater thing. Information, material, energy and much more needs to travel to and fro to make that possible – and those things are limited in their rate in many ways.

AIDEUS (Alexey Potapov): 50%

Q5: How important is it to research risks associated with artificial intelligence that is good enough at general reasoning (including science, mathematics, engineering and programming) to be capable of radical self-modification, before attempting to build one?

Randal Koene: I think it is quite important to be aware of those risks and to take active precautions about possible negative outcomes. On the flip side, the only way to actually research the matter and become more aware of the real issues involved is to understand it better… and that is only possible if you are intimately familiar with the topic, which again is directly related to working on it. So, I think that the matter is rather “academic” (in all senses of the word) unless one is also directly involved in A(G)I research.

AIDEUS (Alexey Potapov): Importance is moderate, because the technical details of safe AI greatly depend on the specific way of building AGI.

Q6: What probability do you assign to the possibility of human extinction within 100 years as a result of AI capable of self-modification (that is not provably non-dangerous, if that is even possible)? P(human extinction by AI | AI capable of self-modification and not provably non-dangerous is created).

Randal Koene: VERY difficult question, because in reality the probability of human extinction within 100 years is not only dependent on the possibility of A(G)I. There are many things that could cause our extinction within that time-frame… and those things are in turn not all independent of one-another!

So, I think the probability is greater than 0. But beyond that, I don’t have the data to make an estimate that I would want to stand behind in publication. 😉

AIDEUS (Alexey Potapov): I think that this question is put in a slightly incorrect form, because the singularity will bring drastic changes, and humanity will change within 100 years independent of (unsafe) AI. Biological human extinction will not matter. However, P(humans don’t participate in singularity | AI capable of self-modification and not provably non-friendly is created) = 90%. You can consider this as the answer to your question though.


Larry A. Wasserman is a statistician and a professor in the Department of Statistics and the Machine Learning Department at Carnegie Mellon University. He received the COPSS Presidents’ Award in 1999 and the CRMSSC Prize in 2002.

Homepage: stat.cmu.edu/~larry/

Blog: normaldeviate.wordpress.com/

Book: All of Statistics: A Concise Course in Statistical Inference

The Interview

Q1: Assuming beneficial political and economic development and that no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of artificial intelligence that is roughly as good as humans (or better, perhaps unevenly) at science, mathematics, engineering and programming?

Larry Wasserman:

10%: 2025
50%: 2040
90%: 2070

Q2: Once we build AI that is roughly as good as humans (or better, perhaps unevenly) at science, mathematics, engineering and programming, how much more difficult will it be for humans and/or AIs to build an AI which is substantially better at those activities than humans?

Larry Wasserman: Not at all difficult. I think there will be a phase change. Once AI is as good as humans, it will quickly be better than humans.

Q3: Do you ever expect artificial intelligence to overwhelmingly outperform humans at typical academic research, in the way that they may soon overwhelmingly outperform humans at trivia contests, or do you expect that humans will always play an important role in scientific progress?

Larry Wasserman: Yes. Absolutely. They will be better than humans at research. However, I need to qualify this. I think humans will integrate AI so the notion of AI versus human will get blurry.

Q4: What probability do you assign to the possibility of an AI with initially roughly professional human-level competence (or better, perhaps unevenly) at general reasoning (including science, mathematics, engineering and programming) to self-modify its way up to vastly superhuman capabilities within a matter of hours/days/< 5 years?

Larry Wasserman:

hours: 10%
days: 50%
years: 99%

Q5: How important is it to research risks associated with artificial intelligence that is good enough at general reasoning (including science, mathematics, engineering and programming) to be capable of radical self-modification, before attempting to build one?

Larry Wasserman: Not at all important. I see this as inevitable; just the next step in evolution.

Q6: What probability do you assign to the possibility of human extinction within 100 years as a result of AI capable of self-modification (that is not provably non-dangerous, if that is even possible)? P(human extinction by AI | AI capable of self-modification and not provably non-dangerous is created).

Larry Wasserman: I would say low, perhaps 1%. However, since I think the line between humans and AI will be blurry, the question may not be well-defined.


Professor John E. Laird is the founder of Soar Technology, an Ann Arbor company specializing in creating autonomous AI entities. His major research interest is in creating human-level artificial intelligent entities, with an emphasis on the underlying cognitive architecture. [Homepage]

Dr. Kristinn R. Thorisson has been developing A.I. systems and technologies for over two decades. He is the Coordinator / Principal Investigator of the HUMANOBS FP7 project and co-author of the AERA architecture, with Eric Nivel, which targets artificial general intelligence. A key driving force behind the project is Thorisson’s new Constructivist Methodology which lays out principles for why and how AI architectures must be given introspective and self-programming capabilities. [Homepage]

Q&A with John E. Laird

Q1: Assuming no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of roughly human-level machine intelligence?

Explanatory remark to Q1:

P(human-level AI by (year) | no wars ∧ no disasters ∧ beneficial political and economic development) = 10%/50%/90%

John E. Laird: I see this as a long way out. There are many technical/scientific hurdles, and there is not a general consensus that there is a need for the type of autonomous human-level machine intelligence from science fiction. Instead, I predict that we will see machine intelligence embedded into more and more systems, making other systems “smart” but not as general as humans, and not with complete human-level intelligence. We will see natural language and speech becoming ubiquitous so we can communicate with devices (more than Siri) in the next 5-10 years. But I don’t see the development of autonomous HLMI coming anytime soon (such as robots in the movies – Data, for example). There are many technical hurdles but there are also economic, political, and social issues. On the technical side, very few people are working on the problem of integrated human-level intelligence, and it is slow going. It would take significant, long-term investment and I don’t see that happening anytime soon.

10% 20 years

50% 50 years

90% 80 years

Q2: What probability do you assign to the possibility of human extinction as a result of badly done AI?

Explanatory remark to Q2:

P(human extinction | badly done AI) = ?

(Where ‘badly done’ = AGI capable of self-modification that is not provably non-dangerous.)

John E. Laird: 0% – I don’t see the development of AGI leading to this. There are other dangers of AI, where people (or governments) use the power that can be gained from machine intelligence to their own ends (financially, politically, …), and that could end very badly (destruction of communication networks, bringing down governments and economies), but the doomsday scenarios of Terminator and the Matrix just don’t make sense for many reasons. (Think James Bond evil geniuses instead of evil robots.) If you want to get scared, watch Colossus: The Forbin Project – but that also is just science fiction (along the lines of your next question, as it turns out).

Q3: What probability do you assign to the possibility of a human level AGI to self-modify its way up to massive superhuman intelligence within a matter of hours/days/< 5 years?

John E. Laird: 0% – There is no reason to believe that an AGI could do this. First, why would an AGI be able to learn faster than humans? It takes lots of experience (which takes lots of real time) to learn about the world (that is why humans take >12 years to get to something intelligent). Just mining existing databases, etc. isn’t going to get you there – you need to interact with the world. Just getting lots of computers and lots of data doesn’t mean a system can get to superhuman intelligence. Also, using lots of distributed processing effectively (which I assume is the scenario you are thinking about) is problematic. Computation requires locality – to make an intelligent decision, you need to bring data together in one place. You can have some aspects of intelligence distributed, but to be strategic, you need locality.

Q4: Is it important to figure out how to make AI provably friendly to us and our values (non-dangerous), before attempting to solve artificial general intelligence?

John E. Laird: I don’t think so. It would be impossible to prove something like that for a system that is sufficiently complex.

Q5: Do possible risks from AI outweigh other possible existential risks, e.g. risks associated with the possibility of advanced nanotechnology?

John E. Laird: No. There are many more things to lose sleep over than AGI.

Q6: What is the current level of awareness of possible risks from AI, relative to the ideal level?

John E. Laird: I think some awareness is important. Possibly a bit more than now, but this is not a pressing issue for human existence. As I stated earlier, we have much more to worry about in how humans will use intelligent systems than in how the intelligent systems will evolve on their own.

Q7: What existential risk (human extinction type event) is currently most likely to have the greatest negative impact on your personal long-term goals, under the condition that nothing is done to mitigate the risk?

John E. Laird: Worry about bio-engineering – genetically altered avian flu virus – to me, that is much more likely to kill us off than AGI. Nanotechnology also has its scary side.

Q8: Can you think of any milestone such that if it were ever reached you would expect human‐level machine intelligence to be developed within five years thereafter?

John E. Laird: No – I can’t come up with such a milestone.

Q&A with Kristinn R. Thorisson

Q1: Assuming beneficial political and economic development and that no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of artificial intelligence that is roughly as good as humans (or better, perhaps unevenly) at science, mathematics, engineering and programming?

Kristinn R. Thorisson:

10%: 2025
50%: 2045
90%: 2080

Mathematics and programming will surely come before engineering and science, by at least 20 years, with science emerging last.

Q2: Once we build AI that is roughly as good as humans (or better, perhaps unevenly) at science, mathematics, engineering and programming, how much more difficult will it be for humans and/or AIs to build an AI which is substantially better at those activities than humans?

Kristinn R. Thorisson: I suspect that the task of making the next leap in building an AI becomes exponentially more difficult as intelligence grows, so if it took 100 years to develop a human-level (measured roughly) AI system from the time when software was automatically running on a computer (around the middle of the 20th century), then the next milestone of roughly equal significance will be reached roughly 100 years later, or sometime in the timeframe between 2100-2180. However, before that milestone is reached it may already have been made irrelevant by other, more interesting milestones based on e.g. running vast numbers of specially modified human-level AIs.

Q3: Do you ever expect artificial intelligence to overwhelmingly outperform humans at typical academic research, in the way that they may soon overwhelmingly outperform humans at trivia contests, or do you expect that humans will always play an important role in scientific progress?

Kristinn R. Thorisson: I expect AIs to outperform humans in virtually every way, except perhaps on those points where evolution has guaranteed humans the necessary stability to grow and prosper, i.e. along the social and ethical dimensions – because it is difficult to engineer such capabilities in a top-down manner, they spring more naturally from (natural) evolution, and may in fact be dependent on that.

Q4: What probability do you assign to the possibility of an AI with initially roughly professional human-level competence (or better, perhaps unevenly) at general reasoning (including science, mathematics, engineering and programming) to self-modify its way up to vastly superhuman capabilities within a matter of hours/days/< 5 years?

Kristinn R. Thorisson:

hours: 0.1%
days: 0.2%
<5 years: 2%

Q5: How important is it to research risks associated with artificial intelligence that is good enough at general reasoning (including science, mathematics, engineering and programming) to be capable of radical self-modification, before attempting to build one?

Kristinn R. Thorisson: Not important at all. However, researching the risks associated with *human misuse* of such technology should be on the drawing board of governments everywhere in the next 10 years, ideally.

Q6: What probability do you assign to the possibility of human extinction within 100 years as a result of AI capable of self-modification (that is not provably non-dangerous, if that is even possible)? P(human extinction by AI | AI capable of self-modification and not provably non-dangerous is created)

Kristinn R. Thorisson: Very low, approaching zero.
