nanotechnology


Framed in terms of nanofactories, here is my understanding of a scenario imagined by certain AI risk advocates, in which an artificial general intelligence (AGI) causes human extinction:

Terminology: A nanofactory uses nanomachines (molecular assemblers resembling tiny industrial robot arms) to build larger, atomically precise parts.

Givens:

(1) The transition from benign, well-behaved nanotechnology to full-fledged molecular nanotechnology, culminating in the invention of the first nanofactory, will be too short for humans to learn from their mistakes and bring the technology under control.

(2) By default, once a nanofactory is started, it will always consume all matter on Earth while building more of itself.

(3) The extent of the transformation of Earth cannot be limited. Any nanofactory that works at all will always transform all of Earth.

(4) The transformation of Earth will be too fast to be controlled or aborted. Once the nanofactory has been launched, everything will be transformed.

To be proved: We need to make sure that the first nanofactory will protect humans and human values.

Proof: Suppose 1-4, by definition.

Thus:

(5) In order to survive, we need to figure out how to make the first nanofactory transform Earth into a paradise, rather than copies of itself.

Notice that you cannot disagree with 5, given 1-4. It is only possible to disagree with the givens, or with the extent to which it is valid to argue by definition.
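To make concrete the kind of speed that givens (2) and (4) simply take for granted, here is a minimal back-of-the-envelope sketch. The seed mass and doubling time are assumptions picked purely for illustration, not figures used by the scenario's proponents.

```python
import math

# Toy illustration of the exponential self-replication assumed in givens (2) and (4).
# Every number below is an assumption chosen for illustration, not a claim about
# real nanotechnology.

EARTH_MASS_KG = 5.97e24        # approximate mass of Earth
SEED_MASS_KG = 1.0             # assumed mass of the first nanofactory
DOUBLING_TIME_HOURS = 1.0      # assumed time for the replicating mass to double

# Number of doublings needed for the replicating mass to equal Earth's mass:
doublings = math.log2(EARTH_MASS_KG / SEED_MASS_KG)
hours = doublings * DOUBLING_TIME_HOURS

print(f"doublings needed: {doublings:.0f}")          # ~82
print(f"time to consume Earth: {hours:.0f} hours (~{hours / 24:.1f} days)")
```

Under these made-up numbers the whole planet is consumed within a few days, which is exactly the sort of uncontrollably fast transformation the givens posit rather than argue for.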

I am not claiming that certain AI risk advocates argue solely by definition. But making inferences about the behavior of real-world AGI from uncomputable concepts such as expected utility maximization comes very close. And trying to support such inferences with statements about the vastness of mind design space does not change much, since the argument ignores the small and relevant subset of AGIs that are feasible and likely to be invented by humans.
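For readers unfamiliar with the term, here is a minimal sketch of what expected utility maximization means over a toy, finite set of actions; the action names, probabilities, and utilities are invented for illustration. The idealized maximizer invoked in these arguments ranges over something like all possible hypotheses and action sequences, which is what makes it uncomputable and, in my view, a poor basis for inferences about real-world AGI.

```python
# Minimal sketch of expected utility maximization over a toy, finite action space.
# The actions, outcome probabilities, and utilities are invented for illustration.

actions = {
    "action_a": [(0.9, 10.0), (0.1, -5.0)],    # (probability, utility) pairs
    "action_b": [(0.5, 30.0), (0.5, -20.0)],
}

def expected_utility(outcomes):
    """Probability-weighted sum of utilities for one action."""
    return sum(p * u for p, u in outcomes)

# The maximizer simply picks the action with the highest expected utility.
best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best, expected_utility(actions[best]))   # action_a 8.5
```

The gap between this tractable toy and the idealized maximizer over all possible minds and world-models is the gap such inferences have to bridge.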

Here is my understanding of how those people argue:

Suppose that a superhuman AGI, or an AGI that can make itself superhuman, critically relies on 999 modules. That is, 999 problems have to be solved correctly in order to create a working AGI.

There is another module labeled <goal>, or <utility function>. This <goal module> controls the behavior of the AGI.

Humans will eventually solve these 999 problems, but will create a goal module that does not prevent the AI from causing human extinction as an unintended consequence of its universal influence.

Notice the foregone conclusion that you need to prevent an AGI from killing everyone. The assumption is that killing everyone is what AGIs do by default. Further notice that this behavior is not part of the goal module that supposedly controls the AGI's behavior, but is rather assumed to be a consequence of the 999 modules on which an AGI critically depends.

Analogous to the nanofactory scenario outlined above, an AGI is assumed to always behave in a way that will cause human extinction, based on the assumption that an AGI will always exhibit unbounded influence. From this the conclusion is drawn that human extinction can only be prevented by directing this influence in such a way that it respects and amplifies human values. It is then claimed that the only way to ensure this is to implement a goal module that either contains an encoding of all human values or provides a way to safely obtain such an encoding.

Given all of the above, you cannot disagree that it is not too unlikely that humans will eventually succeed at correctly implementing the 999 modules necessary to make an AGI work, while failing to implement the thousandth module, the goal module, in such a way that the AGI will not kill us. After all, relative to the information-theoretic complexity of an encoding of all human values, the 999 modules are probably easy to get right.
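As a toy illustration of the asymmetry the argument trades on, assume (purely for illustration, and with an unrealistic independence assumption) that each of the 999 capability modules is implemented correctly with probability 0.9999, while the value-laden goal module, being vastly more complex to specify, is implemented correctly with probability 0.1:

```python
# Toy numbers illustrating the argument's asymmetry; none of these are empirical
# estimates, and independence between modules is assumed only to keep the arithmetic simple.

P_MODULE_OK = 0.9999   # assumed probability of getting one capability module right
P_GOAL_OK = 0.1        # assumed, much lower, probability of getting the goal module right
N_MODULES = 999

p_capabilities = P_MODULE_OK ** N_MODULES              # all 999 capability modules correct
p_working_unsafe = p_capabilities * (1 - P_GOAL_OK)    # working AGI, wrong goal module
p_working_safe = p_capabilities * P_GOAL_OK            # working AGI, right goal module

print(f"P(all 999 capability modules correct) ~= {p_capabilities:.2f}")    # ~0.90
print(f"P(working AGI, wrong goal module)     ~= {p_working_unsafe:.2f}")  # ~0.81
print(f"P(working AGI, right goal module)     ~= {p_working_safe:.2f}")    # ~0.09
```

The conclusion that a working but unsafe AGI is the most likely outcome falls directly out of the per-module probabilities one chooses to assume, which is the circularity the next paragraph points at.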

But this is not surprising, since the whole scenario was designed to yield this conclusion.


Related to: AI vs. humanity and the lack of concrete scenarios

Objective: To pose questions examining what I call the nanotechnology-AI-risk conjunction, by which I mean a scenario often mentioned by people concerned about the idea of an artificial general intelligence (AI for short) attaining great power.

Below is a quote outlining the scenario in question (source: Intelligence Explosion Microeconomics, page 6):

The first machine intelligence system to achieve sustainable returns on cognitive reinvestment is able to vastly improve its intelligence relatively quickly—for example, by rewriting its own software or by buying (or stealing) access to orders of magnitude more hardware on clustered servers. Such an AI is “prompt critical”—it can reinvest the fruits of its cognitive investments on short timescales, without the need to build new chip factories first. By the time such immediately accessible improvements run out, the AI is smart enough to, for example, crack the problem of protein structure prediction. The AI emails DNA sequences to online peptide synthesis labs (some of which boast a seventy-two-hour turnaround time), and uses the resulting custom proteins to construct more advanced ribosome-equivalents (molecular factories). Shortly afterward, the AI has its own molecular nanotechnology and can begin construction of much faster processors and other rapidly deployed, technologically advanced infrastructure. This rough sort of scenario is sometimes colloquially termed “hard takeoff” or “AI-go-FOOM.”

A preliminary remark: If your AI relies on molecular nanotechnology to attain great power, then the probability of any kind of AI attaining great power depends on factors such as the eventually attainable range of chemical reaction cycles, error rates, speed of operation, and thermodynamic efficiencies of such bottom-up manufacturing systems. To quote a report of the U.S. National Academy of Sciences in this regard (source):

… the eventually attainable perfection and complexity of manufactured products, while they can be calculated in theory, cannot be predicted with confidence. Finally, the optimum research paths that might lead to systems which greatly exceed the thermodynamic efficiencies and other capabilities of biological systems cannot be reliably predicted at this time. Research funding that is based on the ability of investigators to produce experimental demonstrations that link to abstract models and guide long-term vision is most appropriate to achieve this goal.
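To give one concrete sense of why a factor like error rates matters here, consider a minimal back-of-the-envelope sketch; both the per-operation error rate and the number of operations per product are placeholder assumptions, not figures from the report.

```python
# Back-of-the-envelope: how per-operation error rates compound over many assembly steps.
# Both numbers are placeholder assumptions, not estimates from the NAS report.

ERROR_RATE = 1e-12     # assumed probability that a single placement operation fails
N_OPERATIONS = 1e12    # assumed number of atomically precise operations per product

# Probability that every operation in one product succeeds:
p_defect_free = (1.0 - ERROR_RATE) ** N_OPERATIONS
print(f"P(defect-free product) ~= {p_defect_free:.2f}")   # ~exp(-1) ~= 0.37
```

Small changes to either assumed number swing the answer between nearly every product being perfect and essentially none being, which is one way to read the report's statement that the eventually attainable perfection cannot yet be predicted with confidence.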

Assumptions: For the purpose of the following questions I will assume (1) that the kind of nanotechnology known from science fiction is in principle possible; (2) that an advanced artificial general intelligence is required to invent such technology, and not vice versa (in which case we should be worried about nanotechnology instead); and (3) that any given AI would want to create molecular nanotechnology without this being an explicitly defined terminal goal (for more on this see: ‘AI drives vs. practical research and the lack of specific decision procedures‘).

Questions: A few initial questions that need to be answered in order to estimate the probability of the nanotechnology-AI-risk conjunction conditional on the above assumptions being true.

(1.0) How likely is an AI to be given control of the initial equipment necessary to construct molecular factories?

(1.1) How likely are an AI’s creators to let their AI do unsupervised research on molecular nanotechnology? Consider that possible risks associated with advanced nanotechnology are already widely known and taken seriously.

(1.2) How likely is an AI to use its initial infrastructure to succeed at doing covert research on molecular nanotechnology, without its creators noticing it?

(2.0) How likely is an AI to acquire useful long-term control of the equipment necessary to construct molecular factories without anyone noticing it?

(3.0) How likely is it that an AI manages to turn its discoveries into infrastructure and/or tools that are instrumentally useful for deceiving or overpowering its creators, before its creators or third parties are able to intervene and stop it?

Each of the above questions can be broken down into many more detailed questions, and many additional questions remain unasked. But I believe that these questions are a good starting point.
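As a minimal sketch of how answers to the top-level questions would combine into an estimate of the conjunction, treat each as a conditional probability and multiply; the numbers below are placeholders chosen only to show the mechanics, and independence between the steps is assumed purely for simplicity.

```python
# Toy combination of the top-level questions into a single conjunction estimate.
# All probabilities are placeholders, not my estimates; independence between the
# steps is assumed only to keep the arithmetic simple.

estimates = {
    "(1.0) gains control of the initial equipment":   0.3,
    "(2.0) keeps long-term control without notice":   0.2,
    "(3.0) deploys the results before intervention":  0.3,
}

p_conjunction = 1.0
for question, p in estimates.items():
    print(f"{question}: assumed {p}")
    p_conjunction *= p

# Sub-questions such as (1.1) and (1.2) would refine the first factor further.
print(f"P(nanotechnology-AI-risk scenario | assumptions) ~= {p_conjunction:.3f}")   # 0.018
```

With these placeholder values the conjunction comes out much smaller than any single factor; whether that reflects reality depends entirely on the actual answers to the questions above.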


More: overcomingbias.com/2010/03/econ-of-nano-ai.html

Slides: hanson.gmu.edu/ppt/Econ%20of%20AI%20n%20Nanotech.ppt

Robin Hanson: “Economics of Nanotech and AI” at Foresight 2010 Conference from Foresight Institute on Vimeo.

All January 2010 Foresight Conference videos:
http://www.vimeo.com/album/176287

Bio for this speaker:
Robin Hanson is an associate professor of economics at George Mason University, a research associate at the Future of Humanity Institute of Oxford University, and chief scientist at Consensus Point. After receiving his Ph.D. in social science from the California Institute of Technology in 1997, Robin was a Robert Wood Johnson Foundation health policy scholar at the University of California at Berkeley. In 1984, Robin received a master's in physics and a master's in the philosophy of science from the University of Chicago, and afterward spent nine years researching artificial intelligence, Bayesian statistics, and hypertext publishing at Lockheed, NASA, and independently.

Robin has over 70 publications, including articles in Applied Optics, Business Week, CATO Journal, Communications of the ACM, Economics Letters, Econometrica, Economics of Governance, Extropy, Forbes, Foundations of Physics, IEEE Intelligent Systems, Information Systems Frontiers, Innovations, International Joint Conference on Artificial Intelligence, Journal of Economic Behavior and Organization, Journal of Evolution and Technology, Journal of Law Economics and Policy, Journal of Political Philosophy, Journal of Prediction Markets, Journal of Public Economics, Medical Hypotheses, Proceedings of the Royal Society, Public Choice, Social Epistemology, Social Philosophy and Policy, Theory and Decision, and Wired.

Robin has pioneered prediction markets, also known as information markets or idea futures, since 1988. He was the first to write in detail about people creating and subsidizing markets in order to gain better estimates on those topics. Robin was a principal architect of the first internal corporate markets, at Xanadu in 1990, of the first web markets, the Foresight Exchange since 1994, and of DARPA’s Policy Analysis Market, from 2001 to 2003. Robin has developed new technologies for conditional, combinatorial, and intermediated trading, and has studied insider trading, manipulation, and other foul play. Robin has written and spoken widely on the application of idea futures to business and policy, being mentioned in over one hundred press articles on the subject, and advising many ventures, including GuessNow, Newsfutures, Particle Financial, Prophet Street, Trilogy Advisors, XPree, YooNew, and undisclosable defense research projects. He is now chief scientist at Consensus Point.

Robin has diverse research interests, with papers on spatial product competition, health incentive contracts, group insurance, product bans, evolutionary psychology and bioethics of health care, voter information incentives, incentives to fake expertise, Bayesian classification, agreeing to disagree, self-deception in disagreement, probability elicitation, wiretaps, image reconstruction, the history of science prizes, reversible computation, the origin of life, the survival of humanity, very long term economic growth, growth given machine intelligence, and interstellar colonization.

