Q&A with Mark Changizi on risks from AI

Dr. MARK CHANGIZI is an evolutionary neurobiologist aiming to grasp the ultimate foundations underlying why we think, feel and see as we do. His research focuses on “why” questions, and he has made important discoveries about why we see in color, why we see illusions, why we have forward-facing eyes, why letters are shaped as they are, why the brain is organized as it is, why animals have as many limbs and fingers as they do, and why the dictionary is organized as it is.

He attended the Thomas Jefferson High School for Science and Technology, and then went on to the University of Virginia for a degree in physics and mathematics, and to the University of Maryland for a PhD in math. In 2002 he won a prestigious Sloan-Swartz Fellowship in Theoretical Neurobiology at Caltech, and in 2007 he became an assistant professor in the Department of Cognitive Science at Rensselaer Polytechnic Institute. In 2010 he took the post of Director of Human Cognition at a new research institute called 2ai Labs.

He has published more than three dozen scientific journal articles, some of which have been covered in news venues such as the New York Times, the Wall Street Journal, Newsweek and WIRED. He has written three books, THE BRAIN FROM 25,000 FEET (Kluwer 2003), THE VISION REVOLUTION (Benbella 2009) and HARNESSED: How Language and Music Mimicked Nature and Transformed Ape to Man (Benbella 2011). He is working on his fourth non-fiction book, this one on emotions and facial expressions, called FORCE OF EMOTIONS. He is simultaneously working on his first novel, called HUMAN 3.0. [Homepage]

THE INTERVIEW

Q1: Assuming beneficial political and economic development and that no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of artificial intelligence that is roughly as good as humans (or better, perhaps unevenly) at science, mathematics, engineering and programming?

Mark Changizi: 100, 1000, 5000

Q2: Once we build AI that is roughly as good as humans (or better, perhaps unevenly) at science, mathematics, engineering and programming, how much more difficult will it be for humans and/or AIs to build an AI which is substantially better at those activities than humans?

Mark Changizi: Not much more difficult. We weren’t “designed” to *do* science, math, etc., and so we can probably eventually build something that is actually *good* at it. It will be much harder to build things that are better than what we evolved to be good at.

Q3: Do you ever expect artificial intelligence to overwhelmingly outperform humans at typical academic research, in the way that they may soon overwhelmingly outperform humans at trivia contests, or do you expect that humans will always play an important role in scientific progress?

Mark Changizi: Yes, for similar reasons as in (2). The same goes for games like chess. For tasks where people who are good at them get labeled “geniuses”, we as a species are usually bad at the task, and AI will one day surpass us. It is the tasks we take for granted, the ones no one is ever called a “genius” for doing, that will be hard to get AI to do.

Q4: What probability do you assign to the possibility of an AI with initially roughly professional human-level competence (or better, perhaps unevenly) at general reasoning (including science, mathematics, engineering and programming) to self-modify its way up to vastly superhuman capabilities within a matter of hours/days/< 5 years?

Mark Changizi: Zero, if it means self-modification to become better at the wide range of reasoning. Even if we can design AI to be good at chess, science, etc., that doesn’t mean it has any extra insight into how to ramp up its own abilities generally. That will likely take a long evolutionary-selection process, not just self-learning. Smart things are smart by virtue of “instinct” being crammed inside, not by virtue of having general-purpose learning algorithms inside.

Q5: How important is it to research risks associated with artificial intelligence that is good enough at general reasoning (including science, mathematics, engineering and programming) to be capable of radical self-modification, before attempting to build one?

Mark Changizi: Not very. Or, much, much less than sci-fi would have us believe. There is no reason, though, to connect up the entirety of civilization to its brain (as in Terminator).

Q6: What probability do you assign to the possibility of human extinction within 100 years as a result of AI capable of self-modification (that is not provably non-dangerous, if that is even possible)? P(human extinction by AI | AI capable of self-modification and not provably non-dangerous is created).

Mark Changizi: 1:10^6

Comments:

  • Elithrion:

    You should send the interview questions to the SIAI people as well. We think we know what they believe, broadly speaking, but I’m sure there are differences of opinion and it would be informative to be able to compare the interviews side-by-side.