Here are two partial quotes from two recent conversations that reflect a typical argument made by proponents of AI risk:
1.) OK, but maybe an AGI could solve lots of problems *faster* than humans …
2.) In that case the AI could simply scan everybody’s genome and neural system and store the information in a 100g diamond. It could also store all of humanity’s information.
The problem is the “OK, but…” or the “simply”.
Overpowering humanity is not something you can just conjecture and justify by imagining arbitrary amounts of intelligence, as if intelligence were some sort of magic that disqualifies any argument to the contrary.
If you want to be convincing, then you have to be more specific and name some concrete examples of how exactly an AGI is going to take over the world.
For starters, you should be able to answer the following questions while tabooing “intelligence”:
- How is an AGI going to become a master of dark arts and social engineering in order to persuade and deceive humans?
- How is an AGI going to coordinate a large-scale conspiracy or deception, given its initial resources, without making any suspicious mistakes along the way?
- How is an AGI going to hack the Internet to acquire more computational resources?
- Are those computational resources that can be hacked applicable to improve the general intelligence of an AGI?
- Does throwing more computational resources at important problems, like building new and better computational substrates, allow an AGI to come up with better architectures so much faster as to outweigh the cost of obtaining those resources, without hitting diminishing returns?
- Does an increase in intelligence vastly outweigh its computational cost and the expenditure of time needed to discover it?
- How can small improvements replace conceptual revolutions that require the discovery of unknown unknowns?
- How does an AGI brute-force the discovery of unknown unknowns?
- Is an agent of a given level of intelligence capable of handling its own complexity efficiently?
- How is an AGI going to predict how improvements, i.e. improved versions of itself, are going to act, so as to ensure that its values are preserved?
- How is an AGI going to solve important problems without real-world experimentation and slow environmental feedback?
- How is an AGI going to build new computational substrates and obtain control of those resources without making use of existing infrastructure?
- How is an AGI going to cloak its actions, i.e. its energy consumption etc.?
- How is an AGI going to stop humans from using its own analytic and predictive algorithms in the form of expert systems to analyze and predict its malicious intentions?
- How is an AGI going to protect itself from human counterstrikes, given the fragility of the modern world and its infrastructure, without some sort of shellproof internal power supply?
If you are unable to answer those questions other than by invoking intelligence as some sort of magic that makes all problems disappear, then the scenario you envision is nothing more than pure fantasy!
You can’t estimate the probability and magnitude of the advantage an AI will have if you are relying on something as vague as the concept of “intelligence”.
Here is a case that bears some similarity and that might shed light on what I am trying to explain:
At his recent keynote speech at the New York Television Festival, former Star Trek writer and creator of the re-imagined Battlestar Galactica Ron Moore revealed the secret formula to writing for Trek.
He described how the writers would just insert “tech” into the scripts whenever they needed to resolve a story or plot line, then have consultants fill in the appropriate words (a.k.a. technobabble) later.
“It became the solution to so many plot lines and so many stories,” Moore said. “It was so mechanical that we had science consultants who would just come up with the words for us and we’d just write ‘tech’ in the script. You know, Picard would say ‘Commander La Forge, tech the tech to the warp drive.’ I’m serious. If you look at those scripts, you’ll see that.”
Moore then went on to describe how a typical script might read before the science consultants did their thing:
La Forge: “Captain, the tech is overteching.”
Picard: “Well, route the auxiliary tech to the tech, Mr. La Forge.”
La Forge: “No, Captain. Captain, I’ve tried to tech the tech, and it won’t work.”
Picard: “Well, then we’re doomed.”
“And then Data pops up and says, ‘Captain, there is a theory that if you tech the other tech … ‘” Moore said. “It’s a rhythm and it’s a structure, and the words are meaningless. It’s not about anything except just sort of going through this dance of how they tech their way out of it.”
The use of “intelligence” is as misleading and dishonest in evaluating risks from AI as the use of “tech” in Star Trek.
Taking over the world is an extremely conjunctive vision, and the only argument you have is that intelligence will somehow magically allow an AGI to accomplish it.
I elaborate on all of the above and much more in the following posts: