Substrate Neutrality: Representation vs. Reproduction

One of the fundamental premises on lesswrong.com is that a universal computing device can simulate every physical process, and that we should therefore be able to reverse-engineer the human brain, since it is fundamentally computable. That is, intelligence and consciousness are substrate-neutral.

Substrate neutrality (not to be confused with substrate independence) is widely accepted as factual, even self-evident, within computer science and transhumanist circles (i.e., the general science fiction, early adopter, programmer, technophile, nerd crowd). But this isn’t necessarily the case within the academic philosophy camp, which often leads to a lot of confusion and mutual disrespect.

Although I can’t claim to fully understand either party, in this post I will attempt to rephrase the position held by some philosophers.

The Great Singularity Debate

The video discussion below gives a rough overview of the positions held at the extreme ends of the spectrum and of the confusion that arises when they clash.

  • The Singularity and the outer limits of physical possibility (08:38)
  • Do human brains run software? (09:58)
  • Consciousness, intelligence, and computation (03:14)
  • What could minds be made of? (13:08)
  • Is mind-uploading a dualist dream? (19:18)
  • Would the Singularity be a Vonnegut-style catastrophe? (10:56)

Simulated Gold

Let’s assume that we wanted to simulate gold. What would that mean?

If we were to simulate the chemical properties of gold on a computer, would we be able to trade the result on the gold market, establish a gold reserve, or use it to create jewellery? Obviously not, but why? Some important characteristics seem to be missing. We do not assign the same value to a representation of gold that we assign to gold itself.

What would it take to obtain the missing properties? A particle accelerator or a nuclear reactor:

The artificial production of gold is the age-old dream of the alchemists. It is possible in particle accelerators or nuclear reactors, although the production cost is currently many times the market price of gold. Since there is only one stable gold isotope, ¹⁹⁷Au, nuclear reactions must create this isotope in order to produce usable gold.

Knowing every physical fact about gold does not make us own any gold.

Consequently, we need to reproduce gold to get gold; no simulation short of creating the physically identical substance will do the job.

Emulations Represent, They Do Not Reproduce

  • Emulations only exhibit emulated behavior.
  • Emulations only exhibit a representation of the behavior of the physical systems they emulate.
  • Emulations can only emulate the behavior of a physical system within an equally emulated environment.

Imagine three black boxes, each containing a quantum-level emulation of some existing physical system. Two boxes contain the emulations of two different human beings, and one box contains the emulation of an environment.

Assume that if we connected all three black boxes and observed the behavior of the two emulated humans and their interactions, we would be able to verify that their behavior, including their utterances, equals that of the originals.

If we then disconnected one of the black boxes containing an emulated human and placed it within the original physical environment, replacing one original human being while retaining the other, the new system would exhibit the behavior of neither the system of black boxes nor the genuinely physical system.

A compound system made up of black boxes containing emulations of physical objects together with genuinely physical objects equals neither a system made up solely of black boxes nor a system made up of the original physical objects alone.

The representations of the original physical systems being emulated within the black boxes are one level removed from the originals. A composition that mixes those levels will exhibit a different interrelationship.

Once we enable the black box to interact with the higher level in which it resides, the compound system made up of the black box, the original environment, and the original human being (representation-level ++ physical-level ++ physical-level) will merely approach the behavior exhibited by the fully emulated system (representation-level ++ representation-level ++ representation-level) and by the original physical system (physical-level ++ physical-level ++ physical-level).

How do we make a compound system made up of representations and originals approach the behavior of the original physical system?

We could equip the black box with sensors and loudspeakers, yet it would not exhibit the same behavior. We could further equip it with an avatar; still, the original and the emulated human would each treat an avatar differently than they would treat another original or emulated human. We could give it a robot body. Its behavior would still equal neither the behavior that the original physical system would exhibit nor the behavior that would be exhibited by a system made up entirely of emulations.

We may continue to tweak what was once the black box containing an emulation of a human being. But as we approach a system that exhibits the same behavior as the original system, we are slowly reproducing the original human being: we are turning the representation into a reproduction.
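To make the level-mixing point concrete, here is a toy numerical sketch (the oscillators, coupling constant, and sensor model are invented for illustration; nothing here comes from the black-box scenario itself). Two coupled oscillators are integrated twice: once entirely at the representation level, where each sees the other’s exact state, and once as a mixed system, where the “emulated” component perceives its “physical” partner only through a coarse, quantized sensor:

```python
def step(x, v, partner_x, coupling=0.5, dt=0.01):
    """Advance one oscillator, coupled to its partner's position,
    by a single explicit Euler step."""
    a = -x + coupling * (partner_x - x)  # restoring force plus coupling
    return x + v * dt, v + a * dt

def run(steps=2000, sensor_resolution=None):
    x1, v1 = 1.0, 0.0   # the "emulated" component
    x2, v2 = -1.0, 0.0  # the "original" partner
    for _ in range(steps):
        if sensor_resolution is None:
            # Fully emulated system: the exact state is visible.
            seen_x2 = x2
        else:
            # Mixed system: the emulation perceives its physical partner
            # only through a quantized sensor reading (standing in for
            # cameras, microphones, and other transducers).
            seen_x2 = round(x2 / sensor_resolution) * sensor_resolution
        x1, v1 = step(x1, v1, seen_x2)
        x2, v2 = step(x2, v2, x1)
    return x1, x2

print("fully emulated system:   ", run())
print("mixed via coarse sensor: ", run(sensor_resolution=0.25))
```

Refining the sensor narrows the gap between the two runs, which mirrors the trajectory described above: each improvement to the interface moves the black box one step closer to being a reproduction rather than a representation.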

Conclusion

What many philosophers seem to think is that the nature of “fire” cannot be captured by an equation. The basic disagreement seems to be whether a representation is distinct from a reproduction, whether there is a crucial distinction between software and hardware.

For computer scientists, the difference between a mechanical device or physical object and software is that the latter is a symbolic (formal-language) representation of the former. Software is just a static description of the dynamic state sequence exhibited by an object. One can then take that software (an algorithm) and some sort of computational hardware and evoke the same dynamic state sequence, so that the machine (computer) mimics the relevant characteristics of the original object.
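A minimal sketch of that view (the system, integrator, and parameters are illustrative choices, not anything from this post): the function below is a static, symbolic description of a damped pendulum, and executing it on any computational hardware evokes the same dynamic state sequence.

```python
import math

def pendulum_states(theta0, omega0, dt=0.01, steps=1000,
                    g=9.81, length=1.0, damping=0.1):
    """Yield (angle, angular velocity) states of a damped pendulum
    via explicit Euler integration."""
    theta, omega = theta0, omega0
    for _ in range(steps):
        yield theta, omega
        # The equation of motion is the "software": a formal description
        # of how the physical system's state evolves in time.
        alpha = -(g / length) * math.sin(theta) - damping * omega
        theta, omega = theta + omega * dt, omega + alpha * dt

# Running the description evokes the pendulum's state sequence without
# there being any pendulum: a representation, not a reproduction.
for theta, omega in pendulum_states(math.pi / 4, 0.0, steps=3):
    print(f"theta={theta:.4f} rad, omega={omega:.4f} rad/s")
```

On the computer-science view, the printed state sequence is substrate-neutral: the same numbers appear whether the code runs on silicon, a mechanical computer, or pen and paper. The philosophical objection is that the numbers remain a representation; none of them swings.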

Philosophers seem to agree that there is a difference between a physical thing and its mathematical representation, but they do not agree that we can represent its most important characteristics as long as we do not reproduce the physical substrate. This position is probably best represented by the painting La trahison des images. It is a painting of a pipe: it represents a pipe, but it is not a pipe; it is an image of a pipe.

Why would people concerned with artificial intelligence care about all this? That depends on the importance and nature of consciousness, and on the extent to which general intelligence depends on the brain as a biological substrate and on its properties (e.g., the chemical properties of carbon versus silicon).


  • Mike Stay (http://reperiendi.wordpress.com)

    Regarding simulated gold: I’m fairly convinced that the reason philosophers have trouble accepting a simulation of gold as being as real as gold is that it has no way to interact with the outside world: you can’t touch it, you can’t see it.

    The AdS/CFT correspondence (http://en.wikipedia.org/wiki/AdS/CFT_correspondence), aka the “holographic principle,” says that string theory in anti-de Sitter space is the same as conformal field theory on the *boundary* of the space. If we had a holodeck that could create a horizon on which we ran a simulation of gold, then we could run electricity through it, we could pound it into beautiful jewelry or mint coins out of it. We would have *gold*.

    In a similar way, rats have been shown to be able to learn with an artificial hippocampus: researchers remove a volume of the hippocampus and capture the impulses coming in through the surface of that volume. They run a simulation of the behavior of the removed portion and feed impulses back out through that surface. The result works well enough that the rats are unable to learn with it turned off and are able to learn with it turned on.

    In a discrete dynamical system, we have the notion of bisimilarity that allows us to say when two such systems are computing the same thing; there are continuous notions of bisimilarity, too. However, we don’t know at what granularity we need to simulate the mind in order to capture actual perception; for all we know, it’s at a much finer level than what is needed to produce a convincing philosophical zombie, especially since people are wired to anthropomorphize everything. Something as simple as ELIZA is sufficient to fool many people; see also http://web.archive.org/web/20060612204506/http://www.msnbc.msn.com/id/12939612/

    So while I think the philosopher’s objections are fundamentally groundless, it seems reasonable that a digital simulation of a mind may “work” at a coarser granularity while losing some important feature of being a human.
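The bisimilarity notion mentioned in the comment above can be illustrated with a short sketch: a naive partition-refinement algorithm over a finite labeled transition system (the states, labels, and transitions are made up for the example, not taken from any source cited here).

```python
def bisimulation_classes(states, transitions):
    """Partition `states` into bisimilarity classes.

    `transitions` maps (state, label) -> set of successor states.
    Naive partition refinement: split blocks until every state in a
    block reaches the same set of blocks under every label.
    """
    labels = {label for (_, label) in transitions}
    partition = [set(states)]
    while True:
        def block_of(state):
            return next(i for i, block in enumerate(partition) if state in block)

        def signature(state):
            # Which blocks can this state reach, per label?
            return frozenset(
                (label, frozenset(block_of(t)
                                  for t in transitions.get((state, label), ())))
                for label in labels)

        new_partition = []
        for block in partition:
            groups = {}
            for state in block:
                groups.setdefault(signature(state), set()).add(state)
            new_partition.extend(groups.values())
        if len(new_partition) == len(partition):
            return partition  # stable: no block split further
        partition = new_partition

# Two tiny machines that "compute the same thing": p and q alternate in
# one system, r loops in another; all three emit `a` forever, so they
# land in a single bisimilarity class.
states = {"p", "q", "r"}
transitions = {("p", "a"): {"q"}, ("q", "a"): {"p"}, ("r", "a"): {"r"}}
print(bisimulation_classes(states, transitions))  # one class: {p, q, r}
```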

  • Mike Stay (http://reperiendi.wordpress.com)

    Sorry, the article on robots and landmines I meant to link to was this one: http://www.washingtonpost.com/wp-dyn/content/article/2007/05/05/AR2007050501009_pf.html
