Q&A with Matt Mahoney on risks from AI

Matt Mahoney is a programmer and data compression researcher, well known for creating the open compression standard and API libZPAQ (released into the public domain), the data compressor PAQ (released under GPLv3), and many other compression tools. He works as a technology strategist at Ocarina Networks (a division of Dell), specializing in data compression research. He received a Ph.D. from Florida Tech in 2003 and taught classes there through 2009. [Homepage]

Data Compression Explained, by Matt Mahoney

A Machine Learning Approach to Detecting Attacks by Identifying Anomalies in Network Traffic, by Matt Mahoney

The Social Impact of Artificial Intelligence, by Matt Mahoney

Proposed Design for Distributed Artificial General Intelligence, by Matt Mahoney

The Interview

Q1: Assuming beneficial political and economic development and that no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of artificial intelligence that is roughly as good as humans (or better, perhaps unevenly) at science, mathematics, engineering and programming?

Matt Mahoney: It depends on what you mean by human level intelligence. I think it has already happened. http://mattmahoney.net/iq/

But if you mean the Turing test, that will never happen for two reasons. First, a computer can never be more like a human than a human. Second, our goal is not to build machines that behave like humans. Our goal is to automate human labor. That means machines that can do everything that a group of billions of humans could do. Already, machines and technology do most of this work. The hard problems like vision, language, art, and robotics will require the equivalent computing power of 7 billion human brains, i.e. 10^26 operations per second (OPS), 10^25 bytes of memory, 10^17 bits of human knowledge collected at 5 bits per second per person, all consuming less than 10^13 W. I would say 30 years to reach that level, with 10% chance it will happen in 10 years and more than 10% chance it will never happen. But that is not the same as AI. It means half of the remaining work will be automated and productivity will double, without much change in unemployment. And then progress will continue smoothly after that. If you are asking when machines will do all human labor, I would say that won’t happen as long as humans are alive and in control because making money is reserved for humans, not machines. Controlling machines is a form of work.

Q2: Once we build AI that is roughly as good as humans (or better, perhaps unevenly) at science, mathematics, engineering and programming, how much more difficult will it be for humans and/or AIs to build an AI which is substantially better at those activities than humans?

Matt Mahoney: They are already substantially better.

Q3: Do you ever expect artificial intelligence to overwhelmingly outperform humans at typical academic research, in the way that they may soon overwhelmingly outperform humans at trivia contests, or do you expect that humans will always play an important role in scientific progress?

Matt Mahoney: If humans have no role, then we are extinct. Humans use AI to do research vastly faster. AI is a tool, not a competitor.

Q4: What probability do you assign to the possibility of an AI with initially roughly professional human-level competence (or better, perhaps unevenly) at general reasoning (including science, mathematics, engineering and programming) to self-modify its way up to vastly superhuman capabilities within a matter of hours/days/< 5 years?

Matt Mahoney: A “human level” AI could not do anything by itself without the help of civilization. You are “human level”, but if you were alone in the jungle, you would not be thinking about how to build an AI computer out of sticks and rocks or how to modify your DNA to have smarter children. The whole idea is ridiculous. Self improvement is a company re-investing its profits or a country building better schools. It is already happening.

Q5: How important is it to research risks associated with artificial intelligence that is good enough at general reasoning (including science, mathematics, engineering and programming) to be capable of radical self-modification, before attempting to build one?

Matt Mahoney: We should understand what self-modification is. Intelligence depends on knowledge and computing power. Bacteria gain both by reproducing and evolving. These are risks that we already study. But we don’t think of bacteria as intelligent, even when they outwit our ability to control them with antibiotics. The DNA in our biosphere encodes 10^37 bits of memory, and replicates at 10^31 OPS. That far exceeds the computing power of all the human brains on the planet, and certainly all of our computers. The potential computing power of self replicating nanotechnology (http://www.foresight.org/nano/Ecophagy.html ) is roughly of the same order as the biosphere, possibly higher by 10^3.
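A sketch of where estimates of that magnitude can come from; the cell count, genome size, and division time below are order-of-magnitude assumptions, not figures from the interview:

```python
# Order-of-magnitude sketch of the biosphere figures above. The constants
# are rough assumptions, not interview data.

CELLS = 5e30            # assumed number of prokaryotic cells on Earth
BITS_PER_GENOME = 8e6   # ~4 million base pairs at 2 bits per base
DIVISION_TIME_S = 1e6   # assumed average time between divisions, ~12 days

dna_storage = CELLS * BITS_PER_GENOME             # ~4e37 bits of DNA
replication_rate = dna_storage / DIVISION_TIME_S  # ~4e31 bit copies per second

print(f"DNA storage:      {dna_storage:.0e} bits")
print(f"replication rate: {replication_rate:.0e} bits per second")
```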

Q6: What probability do you assign to the possibility of human extinction within 100 years as a result of AI capable of self-modification (that is not provably non-dangerous, if that is even possible)? P(human extinction by AI | AI capable of self-modification and not provably non-dangerous is created).

Matt Mahoney: I’m not sure the question makes sense. Probability is a mathematical model of human belief. Human belief (like it or not) is based on past experience. We have never experienced human extinction. Therefore the probability is 0. By the same reasoning, I have never died, so I should be immortal.

The mathematical model of probability is only valid for cases of n independent trials, with some outcome observed r times, and where 0 < r < n. Only in that case can we estimate the probability to be about r/n. When we extend the mathematical model beyond these requirements, we get nonsensical results. This is one such case. Sorry, I can’t give you a number.
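A minimal sketch of that frequency estimate, assuming nothing beyond the r/n rule described above:

```python
# Frequency estimate described above: r occurrences in n independent trials
# gives p ≈ r/n, but only when 0 < r < n. For an event never observed
# (r = 0, e.g. human extinction), the naive estimate collapses to 0.

def frequency_estimate(r: int, n: int) -> float:
    if not 0 < r < n:
        raise ValueError("only meaningful when 0 < r < n")
    return r / n

print(frequency_estimate(3, 10))   # 0.3
# frequency_estimate(0, 10) raises: the model gives no informative answer
```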

Also, would you count the extinction of Homo erectus as an existential threat? How about the extinction of various branches of our single-celled ancestors from billions of years ago? If humans evolve into something better, would that count? What if we uploaded ourselves (as some transhumanist theories propose), so that our minds were simulated in computers and we had only virtual bodies in a simulated world? What if our simulated minds didn’t run all the time, but were instead stored on disk or printed as bar codes on paper? How far do you want to stretch the definition of human?
