consciousness


Here is an interesting answer, posted on Quora by Josh Siegle. His answer paraphrases some of what I tried to highlight in my post “Substrate Neutrality: Representation vs. Reproduction”.

Here are some quotes, starting with a comment by Josh Siegle from an ensuing discussion:

I’m saying that meat has properties and causal powers that algorithms do not. If the properties we’re talking about are mass, acidity, or opacity, this statement would be trivially true. A simulation of an apple will not weigh 0.1 kg, taste delicious, and appear red, although all those properties could be represented. Claiming that awareness is somehow different—that it would be present in the simulation—suggests that it is not part of the physical world. This leads very quickly down the path to a dualistic separation between the mental and the physical, which I imagine is exactly what you’re trying to avoid.

Quotes from the original answer:

First of all, imagine that the book in the room is a Chinese–English dictionary. When the Chinese characters come in, the man translates them into English, thinks of a reply, and translates that into Chinese. BOOM! The room now acts as though it understands Chinese, but does it actually? I don’t think we gain anything by saying that it does. It should be obvious that the true understanding lies in the person that wrote the dictionary, and the man in the room is just piggy-backing on this knowledge. If not, then I could claim that I understand every major language because I know how to use Google Translate.

[…]

If instead of a string of Chinese characters, the man received a string of ones and zeros encoding a visual scene, would the room be having its own, separate visual experience while the man moves some paper around and reads the ink that adorns it? People make it sound like Searle was bonkers for claiming that such subjective experience wouldn’t arise. But what makes you so certain that it would?

What Josh Siegle appears to be saying is that consciousness is, in some respects, similar to properties such as mass or wetness. Just as you cannot extinguish a physical fire with simulated water, a digital computer will not possess the same sort of conscious understanding that humans do.

In his original answer, Josh Siegle takes it as a given that the Chinese room passes the Turing test. That is not being disputed. The claim is rather that human understanding is qualitatively different from, e.g., the ability of a system made up of a human and Google Translate to “understand” various languages.
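To make the “piggy-backing” point concrete, here is a toy sketch (my own illustration, not from Siegle’s answer): a program that passes a trivial conversational test by pure lookup, while whatever understanding is involved plainly resides with whoever compiled the table.

```python
# A toy "Chinese room": the program answers Chinese input by pure
# lookup in a phrasebook. Any understanding involved resides with
# the author of the phrasebook, not with the lookup process.
PHRASEBOOK = {
    "你好": "你好！",
    "你会说中文吗？": "会，我说得很流利。",
}

def room(chinese: str) -> str:
    # The "man in the room": match the incoming symbols and copy
    # out the prepared reply, without understanding either side.
    return PHRASEBOOK.get(chinese, "……")

print(room("你会说中文吗？"))  # a fluent-looking reply, minus understanding
```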

I consider this a relatively weak claim, but nonetheless one that should not be dismissed: namely, that one of the most important, and morally relevant, features of human understanding could be related to the hard problem of consciousness, and that consciousness is a property that is in some relevant respect similar to physical properties such as mass or wetness.

Consider the following. Knowing every physical fact about gold does not make us own any gold. A representation of the chemical properties of gold on a computer cannot be traded on the gold market, established as a gold reserve, or used to create jewelry. It takes a particle accelerator or nuclear reactor to create gold. No Turing machine can do the job.

There is nothing spooky about this. The point is that a representation is distinct from a reproduction. Only if you reproduce all relevant physical properties of e.g. water can it be considered water in the context of the physical world.

The evidence that consciousness requires a similarly detailed reproduction is our intuition, which we are unable to explain away, that a person with a Chinese–English dictionary does not possess the same understanding of Chinese as a person who actually “understands” Chinese.

A Turing machine (rules table not represented).

Could you program a Turing machine in such a way that it ends up in a state that mimics all relevant physical properties of water, so that you could drink it? It seems rather weird to claim that a device that manipulates symbols on a strip of tape could configure itself in such a way as to mimic water. It would be equally weird to look at a configuration of logic gates and proclaim, “This configuration of logic gates experiences pain!”
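For concreteness, here is a minimal Turing machine simulator (an illustrative sketch of my own, not from the original post). Every state it can ever reach consists of a rule label, a head position, and symbols on a tape; nothing about it is wet, acidic, or heavy.

```python
# A minimal Turing machine: a head reads and writes symbols on a
# tape according to a rules table. Its entire "state" is symbolic.
def run(tape, rules, state="start", pos=0, max_steps=100):
    cells = dict(enumerate(tape))  # sparse tape; "_" means blank
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(pos, "_")
        state, write, move = rules[(state, symbol)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Example rules table: invert a string of bits, halting at the blank.
rules = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run("0110", rules))  # prints "1001_"
```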

Conclusion

A Turing machine can exhibit a certain number of states. None of those states can plausibly correspond to physical properties such as acidity or wetness. The Chinese room thought experiment highlights the intuition that, likewise, none of the states of a Turing machine could amount to conscious understanding of Chinese.

Or to quote Magnus Vinding, “just like a computer will not have Wi-Fi if it does not have the hardware that supports it – regardless of what software we implement in it – a computer will not be conscious if it does not have the right hardware.”


Link: rationallyspeaking.blogspot.de/2012/11/consciousness-and-internet.html

In the interview, Koch continued: “certainly by any measure [the Internet is] a very, very complex system. Could it be conscious? In principle, yes it can.” And, pray, which principle would that be? I have started to note that a number of people prone to speculations at the border between science and science fiction, or between science and metaphysics, are quick to invoke the “in principle” argument. When pressed, though, they do not seem to be able to articulate exactly which principle they are referring to. Rather, it seems that the phrase is meant to indicate something along the lines of “I can’t think of a reason why not,” which at best is an argument from incredulity.

[…]

As it turns out, cosmologist Sean Carroll was the most reasonable of the bunch interviewed by Falk at Slate. As he put it: “There’s nothing stopping the Internet from having the computational capacity of a conscious brain, but that’s a long way from actually being conscious … Real brains have undergone millions of generations of natural selection to get where they are. I don’t see anything analogous that would be coaxing the Internet into consciousness. … I don’t think it’s at all likely.”


In this post I try to arrive at an informal definition of the Self, the “essential qualities that constitute a person’s uniqueness”. I assume that the most important requirement for a definition of self is time-consistency: a reliable definition of identity needs to allow for time-consistent self-reference, since any agent that is unable to identify itself over time will be prone to making inconsistent decisions.

Data Loss

Obviously most humans don’t want to die, but what does that mean? What is it that humans try to preserve when they sign up for cryonics? It seems that an explanation must account for, and allow, some sort of data loss.

The Continuity of Consciousness

It can’t be about the continuity of consciousness, since we would then have to refuse general anesthesia due to the risk of “dying”. Most of us will agree that there is something more important than the continuity of consciousness, something that makes us accept general anesthesia when necessary.

Computation

If the continuity of consciousness isn’t the most important aspect of the self, then it very likely isn’t the continuity of computation either. Imagine that, for some reason, the process evoked when “we” act on our inputs under the control of an algorithm halts for a second and then continues otherwise unaffected. Would we regard whoever continues afterwards as someone else, on the grounds that “we” died when the computation halted? This doesn’t seem to be the case.

Static Algorithmic Descriptions

Although we are not partly software and partly hardware, we could, in theory, come up with an algorithmic description of the human machine, of our selves. Might it be that algorithm that we care about? If we were to digitize our selves, we would end up with a description of our spatial parts, our self at a certain time. Yet we forget that all of us already possess such an algorithmic description of our selves, and we’re already able to back it up: it is our DNA.

Temporal Parts

Admittedly, our DNA is the earliest version of our selves. But if we don’t care about the temporal parts of our selves, only about a static algorithmic description at a certain spatiotemporal position, then what’s wrong with that? A lot, it seems: we stop caring about past reifications of our selves; at some point our backups become obsolete, and having to fall back on them would equal death. But what is it that we lost? What information is it that we value more than all of the previously mentioned possibilities? One might think that it must be our memories, the data that represents what we have learnt and experienced. But even if this is the case, would it be a reasonable choice?

Identity and Memory

Let’s disregard the possibility that we often might not value our future selves, and likewise might not value our past selves, because we lost or gained important information, e.g. if we became religious or managed to overcome religion.

If we had perfect memory and only ever added to our past knowledge and experiences, we wouldn’t be able to do so for very long, at least not given our human body. The upper limit on the information that can be contained within a human body is 2.5072178×10^38 megabytes, if it were used as perfect data storage. Given that we gather much more than 1 megabyte of information per year, it is foreseeable that, if we equate our memories with our self, we will die long before the heat death of the universe. We might overcome this by growing in size, by achieving a posthuman form; yet if we in turn also become much smarter, we’ll also produce and gather more information. We are not alone either, and resources are limited. One way or the other, we’ll die rather quickly.
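That figure appears to be an application of the Bekenstein bound. A rough sketch of the arithmetic, assuming a body of mass m ≈ 70 kg enclosed in a sphere of radius R ≈ 1 m (both values are my assumptions, not from the post):

\[
I \le \frac{2\pi R m c}{\hbar \ln 2}
\approx \frac{2\pi \cdot 1\,\mathrm{m} \cdot 70\,\mathrm{kg} \cdot 3.0\times 10^{8}\,\mathrm{m/s}}{1.055\times 10^{-34}\,\mathrm{J\,s} \cdot \ln 2}
\approx 1.8\times 10^{45}\ \text{bits} \approx 2.3\times 10^{38}\ \text{megabytes},
\]

which lands in the same ballpark as the figure quoted above; the exact value depends on the assumed mass and radius.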

Does this mean we shouldn’t even bother about the far future? Or is there maybe something else that we value even more than our memories? After all, we don’t really mind much if we forget what we did a few years ago.

Time-Consistency and Self-Reference

It seems that there is something even more important than our causal history. I think that, more than anything, we care about our values and goals. Indeed, we value the preservation of our values. As long as we want the same, we are the same. Our goal system seems to be the critical part of our implicit definition of self, that which we want to protect and preserve. Our values and goals seem to be the missing temporal parts that allow us to consistently refer to ourselves, to identify our selves at different spatiotemporal positions.

Using our values and goals as identifiers also resolves the problem of how we should treat copies of our self that feature diverging histories and memories, copies with different causal histories. Any agent that features a copy of our utility function ought to be incorporated into our decisions as an instance, as a reification, of our selves. We should identify with our utility function regardless of its instantiation.
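As a toy illustration of this identification rule (the names and data structures are my own, purely hypothetical), consider two agent instances with different memories but the same utility function; on this view, agreement of values over the relevant outcomes, not shared memories, is what makes the backup count as “us”.

```python
from dataclasses import dataclass, field
from typing import Callable, List

# Toy model: an agent is identified by its utility function,
# not by its memories or causal history.
@dataclass
class Agent:
    utility: Callable[[str], float]  # maps outcomes to utilities
    memories: List[str] = field(default_factory=list)

    def same_self(self, other: "Agent", outcomes: List[str]) -> bool:
        # Two instances count as the "same self" if their utility
        # functions agree on every outcome under consideration,
        # however much their memories differ.
        return all(self.utility(o) == other.utility(o) for o in outcomes)

def shared_values(outcome: str) -> float:
    # The utility function both instances carry.
    return {"preserve_values": 1.0, "wirehead": -1.0}.get(outcome, 0.0)

original = Agent(utility=shared_values, memories=["childhood", "first job"])
backup = Agent(utility=shared_values, memories=["restored from an old backup"])

print(original.same_self(backup, ["preserve_values", "wirehead"]))  # True
```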

Stable Utility-Functions

To recapitulate: we can value our memories, the continuity of experience, and even our DNA, but the only reliable marker for the self-identity of goal-oriented agents seems to be a stable utility function. Rational agents with an identical utility function will, to some extent, converge to exhibit similar behavior and are therefore able to cooperate. We can more consistently identify with our values and goals than with our past and future memories, digitized backups, or causal history.

But even if this is true, there is one problem: humans might not exhibit goal-stability.
