Related to: AI vs. humanity and the lack of concrete scenarios; Questions regarding the nanotechnology-AI-risk conjunction; AI risk scenario: Deceptive long-term replacement of the human workforce; AI drives vs. practical research and the lack of specific decision procedures; AI risk scenario: Elite Cabal; AI risk scenario: Social engineering.
Objective: Some remarks and questions about a scenario outlined by Tyler Cowen in which insect-sized drones are used to kill people or to carry out terror attacks.
The scenario:
Not bee drones, rather drone drones, with military and terrorist capabilities. There is already a (foiled) terror plot using model airplanes. How easy would it be to stop a mechanical “bee” which injects a human target with rapidly-acting poison?
Remarks:
From my arguably naive layman's perspective, such a scenario seems scary and not too unrealistic, even in the absence of a rogue superhuman artificial general intelligence (henceforth: AI) trying to overpower humanity.
The scenario does not seem to require advanced molecular nanotechnology, nor does it rely on any kind of far-fetched or unspecified technological magic that a greater-than-human intelligence might invent.
An AI would be capable of controlling a huge number of such drones in a goal-directed manner, either remotely or by implementing autonomous proxies of itself. Those drones could then be used to physically manipulate the environment, or possibly to create wireless networks independent of the global infrastructure.
Questions:
(1) Could a huge number of such drones eventually overpower humanity, and what number would be sufficient to accomplish that goal?
(2) How could an AI produce a huge number of such drones without arousing suspicion, or how likely is it that such a number of drones would already be available to the AI and suitable for the purpose of taking over the world?
(3) How quickly could an AI overpower humanity using such drones before humans could intervene, e.g. by using electromagnetic pulses to disable the drones?
(4) How likely is such an AI to exist before wide-ranging security measures against such drones have been implemented in response to their previous use by governments and/or terrorists?
