Here is an interesting answer, posted on Quora, by Josh Siegle. His answer paraphrases some of what I tried to highlight in my post “Substrate Neutrality: Representation vs. Reproduction”.

Here are some quotes, starting with a comment by Josh Siegle from an ensuing discussion:

I’m saying that meat has properties and causal powers that algorithms do not. If the properties we’re talking about are mass, acidity, or opacity, this statement would be trivially true. A simulation of an apple will not weigh 0.1 kg, taste delicious, and appear red, although all those properties could be represented. Claiming that awareness is somehow different—that it would be present in the simulation—suggests that it is not part of the physical world. This leads very quickly down the path to a dualistic separation between the mental and the physical, which I imagine is exactly what you’re trying to avoid.

Quotes from the original answer:

First of all, imagine that the book in the room is a Chinese–English dictionary. When the Chinese characters come in, the man translates them into English, thinks of a reply, and translates that into Chinese. BOOM! The room now acts as though it understands Chinese, but does it actually? I don’t think we gain anything by saying that it does. It should be obvious that the true understanding lies in the person that wrote the dictionary, and the man in the room is just piggy-backing on this knowledge. If not, then I could claim that I understand every major language because I know how to use Google Translate.
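The lookup scheme the quote describes can be sketched as a toy phrase table. The entries and the function name are illustrative assumptions, not a real translation resource; the point is that the code performs purely syntactic substitution, with no understanding on the part of the system doing the lookup.

```python
# A toy "Chinese-English dictionary": the man in the room (or this
# function) just matches symbols against entries someone else wrote.
PHRASEBOOK = {
    "你好": "hello",
    "谢谢": "thank you",
    "再见": "goodbye",
}

def translate(phrase):
    """Return the English entry for a Chinese phrase, if the book lists it."""
    return PHRASEBOOK.get(phrase, "(not in the book)")

print(translate("你好"))  # -> hello
```

All the “understanding” resides in whoever compiled the phrase table; the lookup itself is blind symbol manipulation.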


If instead of a string of Chinese characters, the man received a string of ones and zeros encoding a visual scene, would the room be having its own, separate visual experience while the man moves some paper around and reads the ink that adorns it? People make it sound like Searle was bonkers for claiming that such subjective experience wouldn’t arise. But what makes you so certain that it would?

What Josh Siegle appears to be saying is that consciousness is, in some respect, similar to properties such as mass or wetness. In the same sense that you cannot extinguish a physical fire with simulated water, a digital computer will not possess the same sort of conscious understanding that humans do.

In his original answer, Josh Siegle wrote that it is a given that the Chinese room passes the Turing test. That is not being disputed. The claim is rather that human understanding is more delicate (qualitatively different) than e.g. the ability of a system made up of a human and Google Translate to understand various languages.

I consider this a relatively weak claim, but nonetheless one that should not be dismissed: namely, that one of the most important, and morally relevant, features of human understanding could be related to the hard problem of consciousness, and that consciousness is a property that is in some relevant respect similar to physical properties such as mass or wetness.

Consider the following. Knowing every physical fact about gold does not make us own any gold. A representation of the chemical properties of gold on a computer cannot be traded on the gold market, established as a gold reserve, or used to create jewelry. It takes a particle accelerator or nuclear reactor to create gold. No Turing machine can do the job.

There is nothing spooky about this. The point is that a representation is distinct from a reproduction. Only if you reproduce all relevant physical properties of e.g. water can it be considered water in the context of the physical world.

The evidence in support of consciousness requiring a similarly detailed reproduction is our stubborn intuition that a person equipped with a Chinese–English dictionary does not possess the same understanding of Chinese as a person who actually “understands” Chinese, an intuition we have been unable to explain away.


A Turing machine (rules table not shown).

Can you program a Turing machine in such a way that it would end up in a state mimicking all relevant physical properties of water, in order to drink it? It seems rather weird to claim that a device that manipulates symbols on a strip of tape could configure itself in such a way as to mimic water. In the same sense it would be really weird to look at a configuration of logic gates and proclaim, “This configuration of logic gates experiences pain!”.


A Turing machine can exhibit a certain number of states. None of those states can possibly correspond to a number of physical properties such as acidity or wetness. The Chinese room thought experiment highlights how none of the states of a Turing machine could intuitively mimic conscious understanding of Chinese.
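To make the point concrete, here is a minimal sketch of a Turing-machine-style device with a hypothetical rule table that writes the string “H2O” on its tape and halts. The states and tape symbols are pure syntax: the final tape *represents* water, but nothing about the machine’s configuration is wet.

```python
# Hypothetical rule table: (state, symbol read) -> (symbol to write,
# head movement, next state). "_" is the blank symbol.
RULES = {
    ("q0", "_"): ("H", 1, "q1"),
    ("q1", "_"): ("2", 1, "q2"),
    ("q2", "_"): ("O", 1, "halt"),
}

def run(rules, state="q0"):
    """Execute the rule table on an initially blank tape until it halts."""
    tape, head = {}, 0
    while state != "halt":
        write, move, state = rules[(state, tape.get(head, "_"))]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape))

print(run(RULES))  # -> H2O
```

Every state the machine passes through is just a label paired with symbols on a tape; none of them has a density, an acidity, or a temperature.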

Or to quote Magnus Vinding, “just like a computer will not have Wi-Fi if it does not have the hardware that supports it – regardless of what software we implement in it – a computer will not be conscious if it does not have the right hardware.”


One of the fundamental premises in computer science circles is that a universal computing device can simulate every physical process, and that we should therefore be able to reverse engineer the human brain, since it is fundamentally computable. That is, intelligence and consciousness are substrate-neutral.

Substrate neutrality (not to be confused with substrate independence) is widely accepted to be factual, even self-evident, within computer science and transhumanist circles (i.e. the general science fiction, early adopter, programmer, technophile, nerd crowd). But this isn’t necessarily the case within the academic philosophy camp, which often leads to a lot of confusion and mutual disrespect.

Although I cannot claim to fully understand either party, in this post I will attempt to rephrase the opinion held by some philosophers.

Simulated Gold

Let’s assume that we wanted to simulate gold. What would that mean?

If we were going to simulate a representation of the chemical properties of gold on a computer, would we be able to trade it on the gold market, establish a gold reserve or use it to create jewellery? Obviously not, but why? Some important characteristics seem to be missing. We do not assign the same value to a representation of gold that we assign to gold itself.
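One way to picture such a representation is as a plain data record of gold’s physical facts (the values below are standard textbook figures). The record can be copied and queried, but no operation on it yields metal that could be traded or worn.

```python
# A symbolic representation of gold: true facts, no metal.
gold = {
    "symbol": "Au",
    "atomic_number": 79,
    "stable_isotope": "197Au",
    "density_g_per_cm3": 19.3,
    "melting_point_c": 1064,
}

print(gold["atomic_number"])  # -> 79
```

However faithfully we extend the record, it remains a description; duplicating it does not increase the world’s supply of gold.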

What would it take to simulate the missing properties? A particle accelerator or nuclear reactor:

The artificial production of gold is the age-old dream of the alchemists. It is possible in particle accelerators or nuclear reactors, although the production cost is currently many times the market price of gold. Since there is only one stable gold isotope, 197Au, nuclear reactions must create this isotope in order to produce usable gold.

That we know every physical fact about gold doesn’t make us own any gold.

Consequently, we need to reproduce gold to get gold; no simulation, apart from the creation of the actual, physically identical substance, will do the job.

Emulations represent; they do not reproduce

  • Emulations only exhibit emulated behavior.
  • Emulations only exhibit a representation of the behavior of the physical systems they are emulating.
  • Emulations are only able to emulate analogous behavior of physical systems given an equally emulated environment.

Imagine three black boxes, each containing a quantum-level emulation of some existing physical system. Two boxes contain emulations of two different human beings, and one box an emulation of an environment.

Assume that if we were to connect all 3 black boxes and observe the behavior of the two humans and their interactions we would be able to verify that the behavior of the humans, including their utterances, would equal that of the originals.

If one was to disconnect one of the black boxes containing the emulation of a human and store it within the original physical environment, replacing one original human being while retaining the other original human being, the new system would not exhibit the same behavior as either the system of black boxes or the genuinely physical system.

A compound system made up of black boxes containing emulations of physical objects and genuinely physical objects equals neither a system solely made up of black boxes nor a system made up of the original physical objects alone.

The representations of the original physical systems that are being emulated within the black boxes are one level removed from the originals. A composition of those levels will exhibit a different interrelationship.

Once we enable the black box to interact with the higher level in which it resides, the compound system made up of the black box, the original environment and the human being (representation-level ++ physical-level ++ physical-level) will approach the behavior exhibited in the context of an emulated system (representation-level ++ representation-level ++ representation-level) and by the original physical system (physical-level ++ physical-level ++ physical-level).

How do we make a compound system made up of representations and originals approach the behavior of the original physical system?

We could equip the black box with sensors and loudspeakers, yet it will not exhibit the same behavior. We could further equip it with an avatar. Still, the original and the emulated human will each treat an avatar differently than they would treat another original or emulated human, respectively. We could give it a robot body. The behavior will still equal neither the behavior that the original physical system would exhibit nor the behavior that would be exhibited in the context of a system made up of emulations.

We may continue to tweak what was once the black box containing an emulation of a human being. But as we approach a system that will exhibit the same behavior as the original system we are slowly reproducing the original human being, we are turning the representation into a reproduction.


What many philosophers seem to be thinking is that the nature of “fire” cannot be captured by an equation. The basic disagreement seems to be that a representation is distinct from a reproduction, that there is a crucial distinction between software and hardware.

For computer scientists, the difference between a physical object or mechanical device and software is that the latter is a symbolic (formal-language) representation of the former. Software is just the static description of the dynamic state sequence exhibited by an object. One can then use that software (algorithm) and some sort of computational hardware to evoke the same dynamic state sequence, so that the machine (computer) mimics the relevant characteristics of the original object.
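The computer scientist’s view can be sketched as follows: a short function is a static, formal description, and evaluating it on a computer evokes the same sequence of states (heights over time) that a dropped object runs through, without anything actually falling. The function name and parameters are illustrative assumptions.

```python
# Static description of free fall: h(t) = h0 - (1/2) * g * t^2,
# clamped at the ground. Running it reproduces the *state sequence*,
# not the falling object itself.
G = 9.81  # gravitational acceleration, m/s^2

def positions(h0, dt, steps):
    """Heights of an object dropped from h0, sampled every dt seconds."""
    return [max(h0 - 0.5 * G * (i * dt) ** 2, 0.0) for i in range(steps)]

print(positions(20.0, 1.0, 3))
```

Whether this mimicry captures everything that matters about the original object is exactly where the philosophers part ways with the computer scientists.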

Philosophers seem to agree about the difference between a physical thing and its mathematical representation, but they do not agree that the most important characteristics can be captured by representation alone, without reproducing the physical substrate. This position is probably best represented by the painting La trahison des images. It is a painting of a pipe. It represents a pipe, but it is not a pipe; it is an image of a pipe.

Why would people concerned with artificial intelligence care about all this? That depends on the importance and nature of consciousness, and on the extent to which general intelligence depends upon the brain as a biological substrate and its properties (e.g. the chemical properties of carbon versus silicon).


World Builder from Bruce Branit on Vimeo.
