The scenarios I deem realistic, in which an artificial intelligence (AI) constitutes a catastrophic or existential risk (or worse), are mostly those in which "unfriendly" humans use such AIs as tools to achieve human goals. By contrast, I believe the scenario publicized by certain AI risk advocates, in which a consequentialist AI (an expected utility maximizer) undergoes uncontrollable recursive self-improvement in order to, for example, turn the universe into paperclips, to be illogical and practically impossible.
Yet what some AI risk advocates imagine could partly come true, in the shape of a grey goo scenario. Such a scenario, if possible at all, would not require full-fledged general intelligence. I expect that the intelligent tools required to eventually create true general intelligence will be sufficient to solve molecular nanotechnology, and that, shortly after those tools are invented, someone will use them to do just that. This makes it an existential risk distinct from the one that those advocates imagine.
But the possibility of intelligent tools enabling humans to solve molecular nanotechnology suggests that less intelligent tools will be sufficient to bring about other existential risk scenarios, such as synthetic bioweapons.
Much to my personal dismay, even less intelligent tools will be sufficient to enable worse-than-extinction risks, such as a stable global tyranny. Given enough resources, narrow artificial intelligence, capable of advanced data mining, pattern recognition, and control of huge numbers of insect-sized drones (a global surveillance and intervention system), might be sufficient to implement such an eternal tyranny.
Such a dictatorship is not too unlikely, as the tools necessary to stabilize it will also be necessary in order to prevent the previously mentioned risks, risks that humanity will face before general intelligence becomes possible.
And if such a dictatorship cannot be established, because no party is able to capitalize on a first-mover advantage, that might mean the propagation of those tools will be slow enough to empower many different parties before any one party can overpower all others. A subsequent war, utilizing that power, could easily constitute yet another extinction scenario. More importantly, it could give several parties enough time to reach the next level and implement even worse scenarios.
But even granting that the recursive self-improvement scenario makes no sense and is unfeasible, and even if less-than-general intelligence were not sufficient to bring about other existential risks, there are other ways to create artificial general intelligence. Some of those ways might be worse than anything imagined by AI risk advocates.
Neuromorphic AI, mimicking neuro-biological architectures, is one such possibility. The closer in mind design space a general intelligence is to humans, the higher the probability that humans will suffer: the drives and values of such agents might be similar enough to ours that they neither ignore nor kill humans, yet alien enough to catastrophically interfere with human values.
What can be done to prevent such negative scenarios mainly seems to be (1) research on strong and beneficial forms of government, i.e. governments that will foster and protect human values and regulate technological development; (2) research on how to eventually implement such a government; and (3) political activism to promote awareness of the risks associated with advanced technologies.