Questions regarding the nanotechnology-AI-risk conjunction

Related to: AI vs. humanity and the lack of concrete scenarios

Objective: To pose questions examining what I call the nanotechnology-AI-risk conjunction, by which I mean a scenario often mentioned by people concerned about the idea of an artificial general intelligence (AI for short) attaining great power.

Below is a quote outlining the scenario in question (source: Intelligence Explosion Microeconomics, page 6):

The first machine intelligence system to achieve sustainable returns on cognitive reinvestment is able to vastly improve its intelligence relatively quickly—for example, by rewriting its own software or by buying (or stealing) access to orders of magnitude more hardware on clustered servers. Such an AI is “prompt critical”—it can reinvest the fruits of its cognitive investments on short timescales, without the need to build new chip factories first. By the time such immediately accessible improvements run out, the AI is smart enough to, for example, crack the problem of protein structure prediction. The AI emails DNA sequences to online peptide synthesis labs (some of which boast a seventy-two-hour turnaround time), and uses the resulting custom proteins to construct more advanced ribosome-equivalents (molecular factories). Shortly afterward, the AI has its own molecular nanotechnology and can begin construction of much faster processors and other rapidly deployed, technologically advanced infrastructure. This rough sort of scenario is sometimes colloquially termed “hard takeoff” or “AI-go-FOOM.”

A preliminary remark: If an AI relies on molecular nanotechnology to attain great power, then the probability of any kind of AI attaining great power depends on factors such as the eventually attainable range of chemical reaction cycles, error rates, speed of operation, and thermodynamic efficiencies of such bottom-up manufacturing systems. To quote a report of the U.S. National Academy of Sciences on this point (source):

… the eventually attainable perfection and complexity of manufactured products, while they can be calculated in theory, cannot be predicted with confidence. Finally, the optimum research paths that might lead to systems which greatly exceed the thermodynamic efficiencies and other capabilities of biological systems cannot be reliably predicted at this time. Research funding that is based on the ability of investigators to produce experimental demonstrations that link to abstract models and guide long-term vision is most appropriate to achieve this goal.

Assumptions: For the purpose of the following questions I will assume (1) that the kind of nanotechnology known from science fiction is in principle possible; (2) that an advanced artificial general intelligence is required to invent such technology, and not vice versa (in which case we should be worried about nanotechnology instead); and (3) that any given AI would want to create molecular nanotechnology without this being an explicitly defined terminal goal (for more on this see: ‘AI drives vs. practical research and the lack of specific decision procedures’).

Questions: Below are a few initial questions that need to be answered in order to estimate the probability of the nanotechnology-AI-risk conjunction, conditional on the above assumptions being true.

(1.0) How likely is an AI to be given control of the initial equipment necessary to construct molecular factories?

(1.1) How likely are an AI’s creators to let their AI do unsupervised research on molecular nanotechnology? Consider that possible risks associated with advanced nanotechnology are already widely known and taken seriously.

(1.2) How likely is an AI to use its initial infrastructure to succeed at doing covert research on molecular nanotechnology, without its creators noticing it?

(2.0) How likely is an AI to acquire useful long-term control of the equipment necessary to construct molecular factories without anyone noticing it?

(3.0) How likely is it that an AI manages to turn its discoveries into infrastructure and/or tools that are instrumentally useful for deceiving or overpowering its creators before its creators or third parties are able to intervene and stop it?

Each of the above questions can be broken down into many more detailed questions, and many further questions go unasked here. But I believe they are a good starting point; a rough sketch of how the answers would combine into an overall probability follows below.
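To make the word “conjunction” concrete: conditional on the assumptions above, the scenario only obtains if every step asked about in (1.0) through (3.0) succeeds, so its overall probability is at most the product of the individual step probabilities (assuming the steps are not strongly positively correlated). The short Python sketch below is only an illustration; the step labels are paraphrases of the questions above, and the probability values are placeholders I made up rather than estimates argued for in this post.

```python
# Illustration of why this is called a "conjunction": the scenario requires
# every step to succeed, so (absent strong positive correlation between the
# steps) its probability is bounded above by the product of the per-step
# probabilities. All numbers below are hypothetical placeholders, not
# estimates from this post; they only show how fast such a product shrinks.

steps = {
    "(1.0) given control of the initial equipment":        0.5,
    "(1.1) allowed unsupervised nanotech research":        0.5,
    "(1.2) covert research goes unnoticed":                0.5,
    "(2.0) long-term control acquired without detection":  0.5,
    "(3.0) discoveries deployed before anyone intervenes": 0.5,
}

upper_bound = 1.0
for step, p in steps.items():
    upper_bound *= p
    print(f"{step}: {p:.2f}")

print(f"Upper bound on the whole conjunction: {upper_bound:.4f}")  # 0.5**5 = 0.03125
```

Even with a generous 0.5 assigned to each of five steps, the product is already only about 3 percent, which is why pinning down the individual conditional probabilities, and how correlated they are, matters so much for this scenario.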


  • Vanzetti

    The people on LessWrong Pascal-mugged themselves so hard it’s not even funny anymore.

  • Matt Mahoney

    The more likely scenario is that humans will use AI to develop self-replicating nano-computers, rather than the AI doing it on its own. When AI is cheap, a lot of people could potentially do this.

  • Mitchell Porter

    How likely is it that in a society with AI, no AI will ever try to do something nasty?

    I know this isn’t your question; you’re asking “how likely is it that the first successfully self-enhancing AI will be able to remotely take over factories without its owners noticing, etc.” And it’s fine to examine the plausibility of the extreme scenarios which are regularly advanced in an attempt to convince a doubting world of the power that AGI would have.

    But if we take the scenario and break it into two parts, I see two more interesting questions. First, could nanotechnology be used to destroy the Earth or to overwhelm it? And second, is there a risk that an AGI which was not explicitly “friendly” would attempt to achieve autonomy, subverting its controllers? And the answer to those two questions still seems to be Yes and Yes.

    I think perhaps what’s really needed is a shift of emphasis in the arguments advanced to make the “AI threat” look plausible, and perhaps you even agree with me.

  • Pingback: Alexander Kruel · MIRI/LessWrong Critiques: Index

  • Pingback: Alexander Kruel · AI risk scenario: Mitchell Porter’s Elite Cabal

  • Pingback: Alexander Kruel · AI risk scenario: Social engineering

  • Pingback: Alexander Kruel · AI risk scenario: Insect-sized drones

  • Pingback: Alexander Kruel · Realistic AI risk scenarios