The Chinese room, understanding and dualism

Here is an interesting answer, posted on Quora, by Josh Siegle. His answer makes some of the same points I tried to highlight in my post “Substrate Neutrality: Representation vs. Reproduction”.

Here are some quotes, starting with a comment by Josh Siegle from an ensuing discussion:

I’m saying that meat has properties and causal powers that algorithms do not. If the properties we’re talking about are mass, acidity, or opacity, this statement would be trivially true. A simulation of an apple will not weigh 0.1 kg, taste delicious, and appear red, although all those properties could be represented. Claiming that awareness is somehow different—that it would be present in the simulation—suggests that it is not part of the physical world. This leads very quickly down the path to a dualistic separation between the mental and the physical, which I imagine is exactly what you’re trying to avoid.

Quotes from the original answer:

First of all, imagine that the book in the room is a Chinese–English dictionary. When the Chinese characters come in, the man translates them into English, thinks of a reply, and translates that into Chinese. BOOM! The room now acts as though it understands Chinese, but does it actually? I don’t think we gain anything by saying that it does. It should be obvious that the true understanding lies in the person that wrote the dictionary, and the man in the room is just piggy-backing on this knowledge. If not, then I could claim that I understand every major language because I know how to use Google Translate.

[…]

If instead of a string of Chinese characters, the man received a string of ones and zeros encoding a visual scene, would the room be having its own, separate visual experience while the man moves some paper around and reads the ink that adorns it? People make it sound like Searle was bonkers for claiming that such subjective experience wouldn’t arise. But what makes you so certain that it would?

What Josh Siegle appears to be saying is that consciousness is, in some respect, similar to properties such as mass or wetness. Just as you cannot extinguish a physical fire with simulated water, a digital computer will not possess the same sort of conscious understanding that humans do.

In his original answer, Josh Siegle takes it as a given that the Chinese room passes the Turing test. That is not being disputed. The claim is rather that human understanding is qualitatively different from, and more delicate than, e.g. the ability of a system made up of a human and Google Translate to understand various languages.

I consider this a relatively weak claim, but nonetheless one that should not be dismissed: namely, that one of the most important, and morally relevant, features of human understanding could be related to the hard problem of consciousness, and that consciousness is a property that is, in some relevant respect, similar to physical properties such as mass or wetness.

Consider the following. Knowing every physical fact about gold does not make us own any gold. A representation of the chemical properties of gold on a computer cannot be traded on the gold market, established as a gold reserve, or used to create jewelry. It takes a particle accelerator or nuclear reactor to create gold. No Turing machine can do the job.

There is nothing spooky about this. The point is that a representation is distinct from a reproduction. Only if you reproduce all relevant physical properties of e.g. water can it be considered water in the context of the physical world.
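
As a loose illustration of that distinction, here is a minimal sketch in Python (the class, field names, and values are invented for this example): a program can store and manipulate a description of gold’s physical properties, yet nothing in the computer thereby acquires any of those properties.

```python
from dataclasses import dataclass

@dataclass
class GoldSample:
    """A *representation* of a lump of gold: numbers that describe its properties."""
    mass_kg: float = 0.1            # describes a mass; the program gains no mass
    density_kg_m3: float = 19300.0  # describes a density; nothing here is that dense
    atomic_number: int = 79         # describes gold's chemistry; no gold atoms exist

simulated_gold = GoldSample()
# The object can be copied, queried, or "traded" inside the program,
# but no physical gold comes into being and the computer weighs the same as before.
print(simulated_gold.mass_kg)  # prints 0.1
```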

The evidence in support of consciousness requiring a similarly detailed reproduction is our intuition, which we are unable to explain away, that a person equipped with a Chinese–English dictionary does not possess the same understanding of Chinese as a person who actually “understands” Chinese.

[Figure: a Turing machine (rules table not represented).]

Can you program a Turing machine in such a way that it ends up in a state mimicking all relevant physical properties of water, so that you could drink it? It seems rather weird to claim that a device that manipulates symbols on a strip of tape could configure itself in such a way as to mimic water. In the same sense it would be really weird to look at a configuration of logic gates and proclaim, “This configuration of logic gates experiences pain!”
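
To make concrete what “manipulating symbols on a strip of tape” amounts to, here is a minimal Turing machine sketch in Python (the rules table and tape contents are an invented toy example). Whatever we declare the symbols to represent, every state of the machine is nothing more than a state label, a head position, and a string of symbols.

```python
# An invented toy rules table: (state, symbol read) -> (symbol to write, head move, next state)
rules = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", 0, "halt"),
}

def run(tape, state="start", head=0):
    """Run the machine until it halts, rewriting symbols on the tape."""
    tape = list(tape)
    while state != "halt":
        symbol = tape[head] if head < len(tape) else "_"
        write, move, state = rules[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += move
    return "".join(tape)

print(run("0110_"))  # -> "1001_": symbols were flipped; nothing got wet
```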

Conclusion

A Turing machine can exhibit a certain number of states. None of those states can possibly correspond to physical properties such as acidity or wetness. The Chinese room thought experiment highlights the intuition that none of the states of a Turing machine could amount to conscious understanding of Chinese.

Or to quote Magnus Vinding, “just like a computer will not have Wi-Fi if it does not have the hardware that supports it – regardless of what software we implement in it – a computer will not be conscious if it does not have the right hardware.”

  • Xagor et Xavier

    I think there are two aspects here that are easily confused.

    The first aspect is the hard problem of consciousness. Here we have such things as qualia and intentionality to contend with. Eliminative materialists will say that these things don’t exist, that “a difference that makes no difference is no difference”. One can hold a different opinion, like Penrose (for instance), but there’s no definite proof and so the people who claim Turing machines can be conscious tend to take the high ground.

    The other is the problem of representation and reproduction. A good way of intuitively bringing this problem to mind is to ask: “If a computer can be anything, then what is a computer?”. On one level, the reproduction level, a computer is a heap of atoms arranged in a particular way. On the other level, a computer is, or represents, a function evolving through time. Different heaps of atoms may represent the same function, but if the function is pure, they only do so in the context of someone or something that can look at the heap of atoms (or its effects) and understand how the function evolves, or what the function’s end state corresponds to. Even if the function is not pure, it is not discernible as a function at the outset.

    So I think you’re right in that a representation of gold can’t really be gold. A computer representation of gold isn’t gold: it’s a heap of atoms making up part of the computer’s memory.

    But there’s another point that bears keeping in mind. Within the universe defined by the computer, the representation of gold is gold. If you make a model universe with a computer, and you use the simulation program to insert gold into the model universe, then that model universe now has more model gold in it.

    I think that the “dualists” would claim that because of the Copernican principle, any simulated universe is equally real. Let us say that we have a model universe. Anything inside that model universe will be suffixed by an asterisk, so that gold inside that universe is gold*. Then as we’ve said, if you insert gold* using the simulator, that gold* is real gold*. It is not gold. However, to inhabitants of the simulated universe, gold* is the real deal. In a similar manner, if you can trick this universe into making something that is internally represented the same way gold is, then you have real gold: that’s what the holographic principle counterargument is about.

    They would then further claim that we can’t know what kind of simulation we’re in, if any. So our gold might be someone else’s gold*, and could in theory be inserted into our universe. So our universe is not more privileged than any other Turing machine-implemented universe. This of course assumes that no hypercomputers can be made in our universe: otherwise, our universe would be special in that it can go beyond Turing machines.

    And now to consciousness as a property like mass. Given the above, there could be one of two responses. The first is that consciousness is like mass, so conscious beings inside a simulation are not conscious but conscious* instead. The second is that a properly made AI can be conscious the way we are. It just needs to be built in such a way as to translate the consciousness* of anything inside into proper consciousness outside. This would be like an orrery with a universal constructor that, when told to make a [gold Mercury]*, actually makes a miniature copy of Mercury made out of gold. It is not Mercury (it’s just Mercury*), but it is gold. An AI simulation of a conscious human might, given the right translation, be not human (only human*) but still properly conscious.

  • Lukasz Stafiniak

    What if simulated water is connected to nanoeffectors that then give the haptic sensation of wetness, but would give a correspondingly different sensation if a rock were simulated? Also, consider thinking in terms of processes rather than states. It will make it harder to confuse the “state” of the Turing machine with “the state of the Turing machine when running the program” (the former says nothing about what program the Turing machine is running and what is on its tape).

  • Xagor et Xavier

    By connecting the simulation to nanoeffectors, you’re trying to reproduce something closer to what water really is like. You can do so and get closer and closer to actual water the more sophisticated you get. In the extreme, if you could hack the universe and instantiate a cube of H2O one meter to your right, then that would actually be water. But if the universe were running on a computer in an alien dimension, the hack would not produce alien-dimension water – unless the simulation was connected to something reproducing alien-dimension water.

  • Lukasz Stafiniak

    But at some level of similarity (at some granularity of isomorphism) between systems and processes, if one system exhibits a process of feeling pain, the process that the other system exhibits is a process of feeling a similar pain. A WBE (whole brain emulation) has different qualia from a biological brain, but this is because perfect isomorphism between processes would nearly entail physical identity.

  • Xagor et Xavier

    That’s what the whole question is about, isn’t it? Kruel refers to Siegle saying that feeling pain may very well be a physical property like mass. So if we accept that, we can replace “feeling pain” with mass in what you said, and get:

    “But at some level of similarity between systems and processes, if one system is massive, the process of the other system is also massive.”

    But a system (like an emulation) is on some level a heap of atoms, and it can be implemented as many different heaps of atoms. Therefore, the emulation doesn’t by itself gain mass by simulating something that has mass unless it goes out of its way to replicate that property in the physical universe (e.g. by building a more massive version of itself).

    For a WBE, again, if you accept Siegle’s premise, the WBE can solve the problem in two ways. Either it can simply not care about pain (and the emulated person will only experience pain*), or the emulator can be sufficiently brain-like (in whatever way that makes a real person with a brain able to feel pain) so that when the emulated person feels pain*, it also feels pain.

    Of course, you could also state that pain* and pain are the same thing because it makes no difference to the person emulated. But others may not necessarily agree with you. Or you could reject Siegle’s premise and argument altogether.

  • Lukasz Stafiniak

    What about pain* is not like pain? I don’t accept Siegle’s premise, but I don’t dismiss it entirely; I want my explanatory theory to have a stance on why, and to what degree, it is false. The experiences of a WBE are not identical to the experiences of a BB (biological brain), but they are similar.

  • Xagor et Xavier

    I think that before answering that, I’d like to narrow down what to answer.

    Do you think that there are some things where reproduction is not representation (say mass), but that the consciousness that lets one feel pain is not one of these things,

    or do you think there is little difference between mass and mass* and thus that there is little difference between pain and pain* as well?

  • Lukasz Stafiniak

    First, let’s clear up the issue of representation vs. reproduction a bit. A representation of a process is usually not a process isomorphic to the process represented: a memory of an event, for example, is not a process in which an approximation of the event happens over and over in a loop. But even if we grant isomorphism, there is still a difference, in that a representation has by definition different causal links than a reproduction: a representation links with the system that represents, which is qualitatively different from the complement of the process or object, whereas a reproduction links with an environment that is similar to the complement of the process/object. Now on to the issue of mass* vs. mass. The process of attraction of a rock has “causal arms” that stretch out to other matter in the universe. The simulation of this, let’s say mass*, is of course different from mass; for example, the causal arms do not stretch over to the complement of the rock, but to the simulated complement of the rock. The WBE, on the other hand, is in principle entirely capable of having the same causal links to the rest of the universe as a BB. ETA: I’m sorry if this is dense… I’ll try to do better if you have further questions.

  • Xagor et Xavier

    I am sorry. I have been insanely busy and so couldn’t reply to your post. I may still be insanely busy, so I’ll try to be short.

    I think you are onto something when you say that representation has different causal links than reproduction. But I suspect that the difference is greater. To import a little SF: say you had a massless object with reactionless drives. And say that you added a computer and program to this object so that it would use its reactionless drives, along with a gravity sensor, to accelerate itself as if it had mass. Now it simulates mass, and it imposes that simulation upon the universe by means of its reactionless drives. But it does not seem, to my intuition at least, that it is actually massive. In a similar manner, a self-healing object like the body isn’t made of very strong material – it simply reacts (over long enough time scales) as if it were; and an aerodynamically unstable fly-by-wire aircraft is fundamentally different from a stable one, even if both can fly well under optimal conditions.

    So the idea is that a reproduction of some effect is not the same thing as that effect itself. If consciousness (or qualia, or pain) is a physical property, then having something unlike ordinary consciousness but which has a translation layer do things “as if” it were the real thing, would not capture everything about it.

    It might be easier to see this from the point of view of the thing experiencing pain. If you wore armor that inflicted pain on you whenever you fell, and the computer inflicted the same kind of pain on you no matter how you fell (or you felt burning instead of stinging, or whatnot), then that wouldn’t really be accurate to what pain is like. That’s because the translation layer that goes between “armor is damaged” and “act upon the universe to make the person inside feel pain” doesn’t translate properly. Its pain isn’t your pain, it just gets translated into it.

  • Lukasz Stafiniak

    Could you find an example that is nomologically possible? 😉 (I’m not claiming that such an example doesn’t exist, only that it would make your response better.)

  • Xagor et Xavier

    Alright, how about this:

    Say you have a seemingly ordinary block of matter that is really coated with nanoscale robots that will push anything sliding along it in the direction it is sliding. Say further that the power source they use for that pushing is inside this block (so there are no power problems) and that the amount of force applied is set so that the surface of the block seems to be frictionless.

    Then the block isn’t actually coated with frictionless material even though the robots act so as to reproduce the effect. The nanobots interact with the universe in all ways that a frictionless material would. But it isn’t physically evaluated to be frictionless in the context of the laws of physics: the robots just cancel out the friction.

    In that example, there would be side effects. I’m not sure if that is a problem or if it’s actually a characteristic property of simulation. There is a programming concept that says that “all abstractions are leaky”, after all. But if it is a problem, it’s a problem because it doesn’t make the simulation perfect: someone in opposition could claim that the nanobots aren’t perfectly replicating frictionlessness because their actions have side effects that frictionlessness doesn’t, and that the closer you get to canceling out those side effects, the closer you get to actual frictionlessness: in the limit, it quacks exactly like a duck and then it is a duck.

  • Pingback: Alexander Kruel · How Eliezer Yudkowsky is a dualist

  • Pingback: Alexander Kruel · AI Risk Critiques: Index