AI risk scenario: Insect-sized drones

Related to: AI vs. humanity and the lack of concrete scenarios; Questions regarding the nanotechnology-AI-risk conjunction; AI risk scenario: Deceptive long-term replacement of the human workforce; AI drives vs. practical research and the lack of specific decision procedures; AI risk scenario: Elite Cabal; AI risk scenario: Social engineering

Objective: Some remarks and questions about a scenario outlined by Tyler Cowen in which insect-sized drones are used to kill people or to carry out terror attacks.

The scenario:

Not bee drones, rather drone drones, with military and terrorist capabilities. There is already a (foiled) terror plot using model airplanes. How easy would it be to stop a mechanical “bee” which injects a human target with rapidly-acting poison?

Remarks:

From my admittedly naive layman's perspective, such a scenario seems scary and not too unrealistic, even in the absence of a rogue superhuman artificial general intelligence (hereafter: AI) trying to overpower humanity.

The scenario does not seem to require advanced molecular nanotechnology, nor does it rely on any kind of far-fetched or unspecified technological magic that a greater-than-human intelligence might invent.

An AI could control a huge number of such drones in a goal-directed manner, either remotely or by deploying autonomous proxies of itself. The drones could then be used to physically manipulate the environment, or possibly to form wireless mesh networks that are independent of the global communications infrastructure.
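To make the mesh-network point concrete, here is a minimal sketch of the underlying connectivity question: scatter N drones over a square region and check, by searching over radio-range adjacency, how many of them end up in a single connected ad-hoc network. Every parameter (N, AREA, RADIO_RANGE) is an illustrative assumption, not a real hardware figure.

```python
import random
from collections import deque

# Minimal sketch: can N insect-sized drones scattered over an area form a
# connected ad-hoc mesh by hopping through neighbours within radio range?
# All numbers below are assumptions for illustration only.

N = 500             # number of drones (assumed)
AREA = 1000.0       # side of the square region in metres (assumed)
RADIO_RANGE = 80.0  # per-drone radio range in metres (assumed)

random.seed(0)
positions = [(random.uniform(0, AREA), random.uniform(0, AREA))
             for _ in range(N)]

def neighbours(i):
    """Drones within radio range of drone i (brute-force distance check)."""
    xi, yi = positions[i]
    return [j for j, (xj, yj) in enumerate(positions)
            if j != i and (xi - xj) ** 2 + (yi - yj) ** 2 <= RADIO_RANGE ** 2]

# Breadth-first search from drone 0 over the radio-range graph.
reachable = {0}
queue = deque([0])
while queue:
    for j in neighbours(queue.popleft()):
        if j not in reachable:
            reachable.add(j)
            queue.append(j)

print(f"{len(reachable)} of {N} drones form one connected mesh component")
```

The interesting variable is the ratio of radio range to drone density: below a critical density the swarm fragments into isolated clusters, which is one obvious engineering constraint on the infrastructure-independent network speculated about above.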

Questions:

(1) Could a huge number of such drones eventually overpower humanity, and what number would be sufficient to accomplish that goal? (A back-of-envelope sketch follows the list of questions below.)

(2) How could an AI produce a huge number of such drones without arousing suspicion? Alternatively, how likely is it that a sufficient number of drones would already be available to the AI and suitable for the purpose of taking over the world?

(3) How quickly could an AI overpower humanity using such drones, before humans could intervene, e.g. by using electromagnetic pulses to disable them?

(4) How likely is such an AI to exist before wide-ranging security measures against macro-drones have been implemented, owing to their previous use by governments and/or terrorists?
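As promised above, here is a back-of-envelope sketch bearing on questions (1) and (2). Every figure in it (population, drones per target, drone mass, loss rate, doubling time) is an explicit assumption chosen for illustration, not a researched estimate:

```python
import math

# Back-of-envelope sketch for questions (1) and (2). Every figure is an
# explicit assumption chosen for illustration, not a researched estimate.

WORLD_POPULATION = 8.0e9   # people, rough 2020s figure
DRONES_PER_TARGET = 3      # redundancy per person (assumed)
DRONE_MASS_G = 1.0         # grams per insect-sized drone (assumed)
LOSS_RATE = 0.5            # fraction lost to defences, weather, faults (assumed)

# Question (1): how many drones would be needed at all?
drones_needed = WORLD_POPULATION * DRONES_PER_TARGET / (1 - LOSS_RATE)
total_mass_tonnes = drones_needed * DRONE_MASS_G / 1e6

# Question (2): if production capacity doubled at some fixed interval
# (e.g. via self-replicating factories), how long until that count exists?
DOUBLING_TIME_DAYS = 7     # purely assumed
doublings = math.ceil(math.log2(drones_needed))

print(f"drones needed:   {drones_needed:.1e}")
print(f"total mass:      {total_mass_tonnes:,.0f} tonnes")
print(f"production time: {doublings} doublings ≈ "
      f"{doublings * DOUBLING_TIME_DAYS} days")
```

Under these made-up numbers the hardware mass is modest by industrial standards (tens of thousands of tonnes, roughly one large cargo ship), but the roughly 36 doublings of production capacity would be difficult to conceal, which cuts against an AI producing such a fleet unsuspiciously.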


  • Alessandro Siletto

To be really dangerous they would need to be able to self-replicate. Something like a robotic ant queen, able to build and grow her own workers 🙂

  • Alexander Gabriel

    In general, I see no reason why non-intelligent threats are any less concerning than AI or posthuman threats. There seems to be no reason why intelligence must necessarily win out, because there are many dimensions of capability. Ability to create a bioengineered plague or insect-sized drones does not necessarily imply ability to defend against such threats.

    Regarding AI, I suggest that one way you might be able to probe these questions in more depth is to imagine how AI that is more capable than humans in certain areas, but has not reached the point of superintelligence, could be combined with human intelligence to bring about various outcomes.

  • Joe

In theory, you could do that without a single non-biological part. Not practical yet, but imaginable.