Larry A. Wasserman is a statistician and a professor in the Department of Statistics and the Machine Learning Department at Carnegie Mellon University. He received the COPSS Presidents’ Award in 1999 and the CRM–SSC Prize in 2002.
Q1: Assuming beneficial political and economic development and that no
global catastrophe halts progress, by what year would you assign a
10%/50%/90% chance of the development of artificial intelligence that
is roughly as good as humans (or better, perhaps unevenly) at science,
mathematics, engineering and programming?
Q2: Once we build AI that is roughly as good as humans (or better,
perhaps unevenly) at science, mathematics, engineering and
programming, how much more difficult will it be for humans and/or AIs
to build an AI which is substantially better at those activities than humans?
Not at all difficult. I think there will be a phase change.
Once AI is as good as humans, it will quickly be better than humans.
Q3: Do you ever expect artificial intelligence to overwhelmingly
outperform humans at typical academic research, in the way that they
may soon overwhelmingly outperform humans at trivia contests, or do
you expect that humans will always play an important role in scientific progress?
Yes. Absolutely. They will be better than humans at research.
However, I need to qualify this. I think humans will integrate
AI so the notion of AI versus human will get blurry.
Q4: What probability do you assign to the possibility of an AI with
initially roughly professional human-level competence (or better,
perhaps unevenly) at general reasoning (including science,
mathematics, engineering and programming) to self-modify its way up to
vastly superhuman capabilities within a matter of hours/days/< 5 years?
Q5: How important is it to research risks associated with artificial
intelligence that is good enough at general reasoning (including
science, mathematics, engineering and programming) to be capable of
radical self-modification, before attempting to build one?
Not at all important. I see this as inevitable; it is just the next step in evolution.
Q6: What probability do you assign to the possibility of human
extinction within 100 years as a result of AI capable of
self-modification (that is not provably non-dangerous, if that is even
possible)? P(human extinction by AI | AI capable of self-modification
and not provably non-dangerous is created).
I would say low, perhaps 1%. However, since I think the line between
humans and AI will become blurry, the question may not be well-defined.