Related to: AI vs. humanity and the lack of concrete scenarios; Questions regarding the nanotechnology-AI-risk conjunction; AI risk scenario: Deceptive long-term replacement of the human workforce; AI drives vs. practical research and the lack of specific decision procedures; AI risk scenario: Elite Cabal; AI risk scenario: Social engineering; AI risk scenario: Insect-sized drones;
Objective: Remarks and questions about the use of biological toxins or infectious agents by an artificial general intelligence (hereafter: AI) to decisively weaken and eventually overpower humanity.
Remarks: In comparison to other scenarios (see the related links above), it seems difficult to determine the culprit behind such an attack. That difficulty could allow the AI to remain safe from human counter-strikes.
Furthermore, the use of biological weapons does not rely on highly speculative or unspecified technological magic.
(1.0) At the time of the first AI, how dependent will the global infrastructure be on a functioning human society and how dependent will the AI be on that infrastructure to eventually achieve whatever terminal goal it might have?
(2.0) Given that an AI is not infrastructure-dependent, what is the expected utility, given the instrumental goal of taking over the world, of trying to design biological weapons?
(2.1) How likely is an AI to be able to acquire the necessary infrastructure, then design and release a suitable biological weapon, without being caught? Consider that the danger of such weapons is already widely known and taken seriously.
(2.2) How likely is biological warfare to weaken humanity to such an extent that a thorough investigation of the causes and the ultimate revelation and termination of the culprit will be rendered impossible?
(2.3) At the time of the first AI, what will be the state of biodefense? How likely are biological weapons to remain undetected for a sufficient amount of time, and how likely is their neutralization to pose a major difficulty?