Q&A On AI Risks

Peter Rothman took all of the Q&A style interviews about AI risks that I conducted with various researchers and posted them all in one place over at H+ Magazine.

Link: hplusmagazine.com/2012/11/29/alexander-kruels-agi-risk-council-of-advisors-roundtable/

The subject of AI risk recently made headlines again with The Cambridge Project for Existential Risk announcing that it would open a so-called “Terminator Center” to study existential risks from AI and robotics, and with yet another New York Times article on building “moral machines”. Although researchers in the field disagree strongly about whether such risks are real, and whether machines can or should be considered ethical agents, this seems an appropriate time to discuss such risks as we look forward to the widespread deployment of early AI systems such as self-guiding vehicles and Watson-like question answering systems.

Back in 2011, Alexander Kruel (XiXiDu) started a Q&A style interview series on LessWrong, asking various experts in artificial intelligence about their perception of AI risks. He convened what was in essence a council of expert advisors to discuss AI development and risk. This advisory panel approach stands in contrast to that announced by CPER, which in effect appointed a single “expert” to opine on the subject of AI risk. I am re-publishing these interviews here because I feel they are an invaluable resource for anyone looking into the area of AI risk. I have collected and re-edited them to present them in a conversational manner, as a sort of virtual expert roundtable on AI risks.

While an outside viewpoint on risk is welcome, the value here is in gathering a group of experts currently working in the field and asking them what they think. These individuals may have unique insights as a result of their experience in trying to build working AGI systems as well as narrow AIs. Notable here is the diversity of opinions, even among people who have similar interests and mostly agree about the bright future of AI research. I’ve also added a few simple data graphics to help visualize this diversity.
