What (not) to worry about?

2013 : WHAT *SHOULD* WE BE WORRIED ABOUT?

Contributors: 152

Link: edge.org/responses/q2013

When EDGE asked what we should be worried about, some contributors used the opportunity to argue that we should not worry about artificial intelligence:

“The Singularity”: There’s No There There

Bruce Sterling

Science Fiction and Fantasy Writers of America

So, as a Pope once remarked, “Be not afraid.” We’re getting what Vinge predicted would happen without a Singularity, which is “a glut of technical riches never properly absorbed.” There’s all kinds of mayhem in that junkyard, but the AI Rapture isn’t lurking in there. It’s no more to be fretted about than a landing of Martian tripods.

Super-A.I.s Won’t Rule The World (Unless They Get Culture First)

Andy Clark

Philosopher and Cognitive Scientist, University of Edinburgh; Author, Supersizing the Mind: Embodiment, Action, and Cognitive Extension

The last decades have seen fantastic advances in machine learning and robotics. These are now coupled with the availability of huge and varied databases, staggering memory capacities, and ever-faster and funkier processors. But despite all that, we should not fear that our Artificial Intelligences will soon match and then rapidly outpace human understanding, turning us into their slaves, toys, pets or puppets.

For we humans benefit from one gigantic, and presently human-specific, advantage. That advantage is the huge yet nearly invisible mass of gradually accrued cultural practices and innovations that tweak and pummel the inputs that human brains receive. Those gradually accrued practices are, crucially, delicately keyed to the many initial biases, including especially biases for sociality, play and exploration, installed by the much slower processes of biological evolution. In this way a slowly accumulated mass of well-matched cultural practices and innovations ratchets up human understanding.

But there are also those who do worry:

Life As We Know It

Max Tegmark

Physicist, MIT; Researcher, Precision Cosmology; Scientific Director, Foundational Questions Institute

…if there’s even a 1% chance that there’ll be a singularity in our lifetime, I think a reasonable precaution would be to spend at least 1% of our GDP studying the issue and deciding what to do about it. Yet we largely ignore it, and are curiously complacent about life as we know it getting transformed. What we should be worried about is that we’re not worried.

Unknown Unknowns

Gary Marcus

Cognitive Scientist; Author, Guitar Zero: The New Musician and the Science of Learning

The truth is that we simply don’t know enough about the potential of biotechnology, nanotechnology, or future iterations of artificial intelligence to calculate what their risks are; compelling arguments have been made that in principle any of the three could lead to human extinction. These risks may prove manageable, but I don’t think we can manage them if we don’t take them seriously. In the long run, biotech, nanotech and AI are probably significantly more likely to help the species, by increasing productivity and limiting disease, than they are to destroy it. But we need to invest more in figuring out exactly what the risks are, and to better prepare for them. Right now, the US spends more than $2.5 billion a year studying climate change, but (by my informal reckoning) less than 1% of that total studying the risk of biotech, nanotech, and AI.

And most interestingly:

We Are In Denial About Catastrophic Risks

Martin Rees

Former President, The Royal Society; Emeritus Professor of Cosmology & Astrophysics, University of Cambridge; Master, Trinity College; Author, From Here to Infinity

I’m worried that by 2050 desperate efforts to minimize or cope with a cluster of risks with low probability but catastrophic consequences may dominate the political agenda.
