Q&A with Timothy Gowers on risks from AI

[Click here to see a list of all interviews]


Sir William Timothy Gowers, FRS (Fellow of the Royal Society), is a British mathematician. He is a Royal Society Research Professor at the Department of Pure Mathematics and Mathematical Statistics at the University of Cambridge, where he also holds the Rouse Ball chair, and is a Fellow of Trinity College, Cambridge. In 1998 he received the Fields Medal for research connecting the fields of functional analysis and combinatorics. [Homepage]

The Interview

Timothy Gowers: OK here are my answers, but with the qualification that for some questions I’m going to restrict attention to performance at mathematics. I don’t have enough appreciation of the technical difficulties associated with more general AI to feel confident about making predictions. But I do think that if a program can do mathematical research as well as humans, then science, engineering and programming can’t be far behind (especially programming).

Q1: Assuming beneficial political and economic development and that no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of artificial intelligence that is roughly as good as humans (or better, perhaps unevenly) at science, mathematics, engineering and programming?

Timothy Gowers: I think there’s a 10% chance we’ll have programs as good as humans at doing maths within 25 years, a 50% chance that we’ll have it within 40 years and a 90% chance that we’ll have it by the end of the century.

Q2: Once we build AI that is roughly as good as humans (or better, perhaps unevenly) at science, mathematics, engineering and programming, how much more difficult will it be for humans and/or AIs to build an AI which is substantially better at those activities than humans?

Timothy Gowers: I think that once computers are as good as, say, beginning PhD students at maths, and assuming there are lots of them with different mathematical styles and tastes that can interact with each other in the way that human mathematicians do (in principle one computer could model an entire mathematical community, but I think it’s more likely that it would be done by several different programs developed by several different teams), then one could just leave the programs to run and you’d see maths progress like human maths but much, much faster. I don’t know exactly how good they’ll have to be before this singularity arises: the key property they’ll need is an ability to step back, think about what they are doing, and improve themselves. That’s why I went for a beginning PhD student, who has to go through a process like that. (Maybe humans would need to act as “research supervisors” just to get them going.)

Q3: Do you ever expect artificial intelligence to overwhelmingly outperform humans at typical academic research, in the way that they may soon overwhelmingly outperform humans at trivia contests, or do you expect that humans will always play an important role in scientific progress?

Timothy Gowers: I expect computers to become overwhelmingly better than humans at mathematical research, just as they are now overwhelmingly better at number crunching. But I think that they’ll also be able to explain a lot of what they do. It’s hard to imagine people training themselves in the skill needed to solve a maths problem if this happens, but maybe it will survive to some extent, just as people still enjoy playing chess now. But I think that the human activity of mathematical research as we know it will be killed off by this development. Quite what the knock-on effects of this will be is hard to predict. But for example I think if we can build good computer researchers, we’ll also be able to build amazing interactive teaching programs, so we needn’t worry that there will be nobody left to teach mathematics.

Q4: What probability do you assign to the possibility of an AI with initially roughly professional human-level competence (or better, perhaps unevenly) at general reasoning (including science, mathematics, engineering and programming) self-modifying its way up to vastly superhuman capabilities within a matter of hours/days/< 5 years?

Timothy Gowers: I have a similar view to what I said in my answer to Q2. To get to roughly professional competence, a significant degree of self-modification will be needed, so I don’t really see how one could get to human levels without rapidly surpassing them.

Q5: How important is it to research risks associated with artificial intelligence that is good enough at general reasoning (including science, mathematics, engineering and programming) to be capable of radical self-modification, before attempting to build one?

Timothy Gowers: I really don’t know about this. I don’t think doing it just for maths is risky, because I think maths is sufficiently narrow that we don’t have to worry about things like whether the programs could become malign. But with more general intelligence I think it’s different. For example, if a program could pass a fairly modest Turing test, then one could build a spambot that would generate spam that was basically impossible to distinguish from non-spam. Imagine a blog that gets zillions of comments that are all perfectly sensible. Would it matter? That’s an interesting question, but it would certainly change things, and in general I think that keeping the internet going would be a serious challenge. On the plus side, one could also design better and better spam detectors, but I’m not sure how much comfort I get from that: by the definition of passing the Turing test, detection would appear to be impossible.

The general point here is that once self-modifying programs exist, people other than the original developers could use them for evil purposes. I don’t know how much of a problem that is. It applies to other things, such as nuclear weapons for instance.

Q6: What probability do you assign to the possibility of human extinction within 100 years as a result of AI capable of self-modification (that is not provably non-dangerous, if that is even possible)? P(human extinction by AI | AI capable of self-modification and not provably non-dangerous is created).

Timothy Gowers: I don’t know, but my instinct tells me that the probability is pretty small. In particular, I find it small enough that we are nowhere near the point where we should stop doing, or even slow down, research into AI.

Q7: How would you test if an artificial intelligence was at least as good as humans at mathematics?

Timothy Gowers: It would be sufficient (though maybe not necessary) to subject an artificial mathematician to the same tests that human mathematicians are subjected to. If it can write papers that attract the interest of human mathematicians, then its intelligence is as good as that of a human mathematician we judge to be producing results of a similar level of interest.

Actually, I want to qualify that. I would want my artificial mathematician not just to produce mathematics of a kind that a human might produce, but also to explain how it did so. If, for example, it made excessive use of brute-force search and ended up with a proof that a human would have reached much more efficiently, then it would be lacking something important.

Q8: Is it correct that in order to create an artificial mathematician it is first necessary to discover, prove and encode the mathematics of discovering and proving non-arbitrary mathematics (i.e. to encode a formalization of the natural-language goal “be as good as humans at mathematics”)?

Timothy Gowers: I think it would be extremely helpful to formalize the notion of “interesting mathematics” (as opposed to arbitrary well-formed statements and logically valid proofs). However, again I would regard that as sufficient but not necessary: if programs were written that in practice produced mathematics of a similar nature to human mathematics, one might eventually have enough faith in them to believe that they had captured the notion of “interesting mathematics” without our having had to define it. (One could trivially define it as “something that will eventually be part of the output of the program”.)

Q9: What role does natural language proficiency play in human mathematics and what are the challenges at doing mathematics without it?

Timothy Gowers: I think that mathematicians are tempted by their training to reduce everything to small sets of assumptions, and to have very economical foundations — e.g. in set theory. But in practice we think with massively redundant sets of assumptions, and that’s important because it enables us to make connections easily that would otherwise not be obvious and would hold us up. So I think that really good automatic theorem provers will need to operate in a very high-level language — not necessarily quite as flexible as the entirety of the English language, but more like the kind of language mathematicians use when writing out a proof carefully (minus the side remarks, unless these too are quite precise).

Q10: The problems I see are the following: (1) if the formalization of what your artificial mathematician is supposed to do is very specific, then most of the work requiring human-level intelligence has been done by whoever came up with that formalization; and (2) if the formalization is very unspecific, then it is not clear how to test for success, much less judge its efficiency.

Timothy Gowers: I don’t agree with what you say.

(1) A formalization of what constitutes interesting mathematics doesn’t have to be of the form, “This statement is interesting if and only if X.” Rather, it can be of the following form. “Let B be the current body of mathematical knowledge. A statement S is interesting relative to B if it is generated in manner X.” Then a statement is “eventually interesting relative to B” if it belongs to the closure of B under extension by interesting statements. We may well not be able to describe in advance what the statements in this closure will look like — indeed, it’s pretty certain that we won’t.

It’s true that a lot of human intelligence would be needed to come up with a formalization of that kind, but after that one could leave the computers chugging away, gradually (or not so gradually) building up from the current body of mathematical knowledge.
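To make the closure idea above concrete, here is a minimal sketch in Python. The functions `candidate_extensions` and `is_interesting`, and the toy arithmetic example that follows, are hypothetical stand-ins for Gowers’s “manner X” and for a real body of mathematical knowledge; nothing here implements actual mathematical reasoning.

```python
# Illustrative sketch of "closure under interesting extension" (Q10, point 1).
# `candidate_extensions` and `is_interesting` are hypothetical placeholders
# for the "manner X" in which new interesting statements are generated.

def eventually_interesting_closure(initial_body, candidate_extensions,
                                   is_interesting, max_rounds=50):
    """Repeatedly adjoin statements that are interesting relative to the body
    built so far, stopping at a fixed point (or after max_rounds)."""
    body = set(initial_body)
    for _ in range(max_rounds):
        new = {s for s in candidate_extensions(body)
               if is_interesting(s, body) and s not in body}
        if not new:          # fixed point: nothing new counts as interesting
            return body
        body |= new
    return body

# Toy usage: "statements" are integers, an extension is a sum of two known
# ones, and "interesting" just means small enough to bother recording.
if __name__ == "__main__":
    closure = eventually_interesting_closure(
        initial_body={1, 2},
        candidate_extensions=lambda body: {a + b for a in body for b in body},
        is_interesting=lambda s, body: s <= 20,
    )
    print(sorted(closure))   # the integers 1..20, built up from {1, 2}
```

The point of the sketch is only structural: the “eventually interesting” statements are not listed in advance but emerge from iterating the generation rule, which is why one cannot describe ahead of time what the closure will contain.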

(2) I don’t see what’s wrong with an informal test of success: does the program produce what humans would regard as interesting results? Are there whole classes of results that it seems to be unable to discover? Etc. etc.
