Related to: AI vs. humanity and the lack of concrete scenarios; Questions regarding the nanotechnology-AI-risk conjunction; AI risk scenario: Deceptive long-term replacement of the human workforce; AI drives vs. practical research and the lack of specific decision procedures; AI risk scenario: Elite Cabal.
Objective: Some remarks and questions about a scenario outlined in the LessWrong post ‘For FAI: Is “Molecular Nanotechnology” putting our best foot forward?’ on how an artificial general intelligence (short: AI) could take control of Earth by means of social engineering, rigging elections and killing its enemies.
I’m fully convinced that a smarter-than-human AI could take control of the Earth via less magical means, using time-tested methods such as manipulating humans, rigging elections, making friends, killing its enemies, and generally being only marginally more clever and motivated than a typical human leader. A smarter-than-human AI could out-manipulate human institutions and out-plan human opponents with the sort of ruthless efficiency with which modern computers beat humans at chess.
First of all, I agree with the following part of the post:
I have a hard time believing the only reason you can’t make a nanoassembler capable of arbitrary manipulations out of a handful of bottles you ordered from Sigma-Aldrich is because we’re just not smart enough.
I assume the reason that MNT is added to a discussion on AI is because we’re trying to make the future sound more plausible via adding burdensome details. I understand that the conjunction of AI and MNT is less probable than AI or MNT alone, but the combination is apparently supposed to sound more plausible.
As I have already outlined in a previous post, that scenario raises several questions.
Secondly, the following quote (emphasis mine) from the post linked to above is a great example of what I have been talking about in other posts:
If I had read the chain of reasoning smart computer->nanobots before I had built up a store of good-will from reading the Sequences, I would have almost immediately dismissed the whole FAI movement as a bunch of soft science fiction, and it would have been very difficult to get me to take a second look.
But all of that is digressing from the objective of this post, namely asking some questions about the AI risk scenario that the author finds convincing.
When people tell me how an AI could somehow deceive and manipulate humans, I like to ask them to first imagine a whole brain emulation of a grown-up human and how it could become a pickup artist (a man who is skilled in the art of finding, attracting, and seducing women), instead of imagining an AI trying to become good enough at social engineering to take over the world. I think it is sensible to assume that such an emulation might need an avatar, or at least a lot of real-world feedback, before it could become sufficiently skilled to easily seduce women.
Once you notice that it will be nontrivial even for the emulation of a grown-up adult to become a good pickup artist, the next step is to imagine how the emulation of a human toddler would acquire those skills, before eventually trying to think about how an abstract AI that lacks all of the hard-coded capabilities of a human toddler is going to do it.
(1) How likely is it that an AI avoids accidentally revealing its intention of taking over the world before it manages to become sufficiently good at assessing and manipulating humans to do so?
(2) How susceptible is a social-engineering strategy for world domination to whistleblowers?
(3) How feasible is it to manipulate a country’s elections in such a way that the instrumental usefulness for taking over the world outweighs the risk of being caught? Consider that there are pre-election polls and a lot of non-electronic oversight. Also consider that most democratic leaders have very little control (see e.g. ‘Lame duck’).
(4) How would an AI make friends? By writing emails?
(5) How strong a lever is friendship that it could be instrumentally useful in taking over the world? Consider how fragile human friendship is, being susceptible to small perturbations.
(6) By what means is the AI going to kill its enemies, and how useful would that be? Consider how difficult it is even for a world power such as the USA to effectively kill its enemies.