Philosophy


Here are some interesting scenarios with low or unstable probabilities but potentially enormous pay-offs. Some of the given arguments in favor of taking these scenarios seriously are also thought-provoking.

Note that not all of the descriptions below are quotes; some are short summaries which might not adequately reflect the original author’s statements. Please read the original sources, which are provided after the description of each scenario. Also note that I do not want to judge any of these scenarios but merely list them here in order to highlight possible similarities. And despite the title, it is not my intention to suggest that the scenarios listed here are cases of Pascal’s wager, but merely that there seems to be no clear cutoff between Pascal’s wager type arguments and finite expected value calculations.

The order in which these scenarios are listed is roughly by how seriously I take them, where the scenario listed at the end is the one that I take the least seriously.

1. Large asteroid strikes are low-probability, high-death events–so high-death that by some estimates the probability of dying from an asteroid strike is on the same order as dying in an airplane crash. [Source: Planetary Defense is a Public Good]

2. It’s often argued that voting is irrational, because the probability of affecting the outcome is so small. But the outcome itself is extremely large when you consider its impact on other people. Voting might be worth a charitable donation of somewhere between $100 and $1.5 million. [Source: Voting is like donating thousands of dollars to charity]

3. A system that is optimizing a function of n variables, where the objective depends on a subset of size k<n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable. A highly capable decision maker can have an irreversible impact on humanity. None of this proves that AI will be the end of the world. But there is no need for a proof, just a convincing argument pointing to a more-than-infinitesimal possibility. [Source: Of Myths And Moonshine] (A minimal code sketch of this unconstrained-variable effect follows the list below.)

4. We should cut way back on accidental yelling to aliens, such as via Arecibo radar sending, if continuing at current rates would over the long run bring even a one in a billion chance of alerting aliens to come destroy us. And even if this chance is now below one in a billion, it will rise with time and eventually force us to cut back. So let’s start now to estimate such risks, and adapt our behavior accordingly. [Source: Should Earth Shut the Hell Up?]

5. GMOs might introduce “systemic risk” to the environment. The chance of ecocide, or the destruction of the environment and potentially humans, increases incrementally with each additional transgenic trait introduced into the environment. The downside risks are so hard to predict — and so potentially bad — that it is better to be safe than sorry. The benefits, no matter how great, do not merit even a tiny chance of an irreversible, catastrophic outcome. [Source: The Trouble With the Genetically Modified Future]

6. Cooling something to a temperature close to absolute zero might be an existential risk. Given our ignorance we cannot rationally give zero probability to this possibility, and probably not even give it less than 1% (since that is about the natural lowest error rate of humans on anything). Anybody saying it is less likely than one in a million is likely very overconfident. [Source: Cool risks outside the envelope of nature]

7. Fundamental physical operations — atomic movements, electron orbits, photon collisions, etc. — could collectively deserve significant moral weight. The total number of atoms or particles is huge: even assigning a tiny fraction of human moral consideration to them or a tiny probability of them mattering morally will create a large expected moral value. [Source: Is there suffering in fundamental physics?]

8. Suppose someone comes to me and says, “Give me five dollars, or I’ll use my magic powers from outside the Matrix to run a Turing machine that simulates and kills 3^^^^3 people.” A compactly specified wager can grow in size much faster than it grows in complexity. The utility of a Turing machine can grow much faster than its prior probability shrinks. [Source: Pascal’s Mugging: Tiny Probabilities of Vast Utilities]
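
To illustrate the unconstrained-variable effect from scenario 3, here is a minimal sketch (assuming NumPy and SciPy are available; the toy budget constraint and all names are my own, not part of the cited source). An optimizer whose objective mentions only k of n variables drives the remaining variables to the boundary of their feasible range:

import numpy as np
from scipy.optimize import linprog

n, k = 5, 2

# Objective: maximize the sum of the first k variables (linprog minimizes, hence the minus sign).
# The remaining n - k variables do not appear in the objective at all.
c = np.zeros(n)
c[:k] = -1.0

# All n variables draw on a single shared budget and are individually bounded between 0 and 10.
A_ub = [np.ones(n)]
b_ub = [10.0]

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 10)] * n)
print(result.x)

# The optimizer allocates the entire budget to the k variables it "cares about" and pushes the
# ignored variables to 0, an extreme of their allowed range. If one of those ignored variables
# stood for something we value, the "optimal" solution would be highly undesirable.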

I will expand this list as I come across similar scenarios.


If you believe that an artificial general intelligence is able to comprehend its own algorithmic description to such an extent as to be able to design an improved version of itself, then you must believe that it is in principle possible for an agent to mostly understand how it functions. Which in turn means that it should be in principle possible to amplify human capabilities to such an extent as to enable someone to understand and directly perceive their own internal processes and functions.

What would it mean for a human being to have nearly perfect introspection? Or more specifically, what would it mean for someone to comprehend their hypothetical algorithmic description to such an extent that their own actions could be interpreted and understood in terms of that algorithmic description? Would it be desirable to understand oneself sufficiently well, to be able to predict and interpret one’s actions in terms of a mechanistic internal self-model?

Such an internal self-model would allow you to understand your consciousness, and states such as happiness or sadness, as what they are: purely mechanistic and predictable procedures.

Intracranially self-stimulating rat.

How will such insight affect a being with human values?

Humans value novelty and become bored of tasks that are dull. Boredom is described as a response to a moderate challenge for which the subject has more than enough skill. Which means that once you cross an intelligence threshold where your own values start to appear dull, you will become bored of yourself.

You would understand that you are a robot, a function whose domain is the set of internal and external forces acting on the robot and whose range is the set of its internal states and actions. Your near-total internal understanding would render any conversation a trivial and dull game, on a par with watching two machines play Pong or Rock-paper-scissors. You would still be able to experience happiness, but you would now also perceive it to be conceptually no more interesting than an involuntary muscle contraction.

Perfect introspection would reduce the previously incomprehensible complexity of your human values to a conceptually simplistic and transparent set of rules. Such insight would expose your behavior as what it is: the stimulation of your reward or pleasure center. Where before life seemed inscrutable, it would now appear to be barely more interesting than a rat pressing a lever in order to receive a short electric stimulation of its reward center.

What can be done about this? Nothing. If you value complexity and novelty then you will eventually have to amplify your own capabilities and intelligence. Which will ultimately expose the mechanisms that drive your behavior.

You might believe that there will always be new challenges and problems to solve. And this is correct. But you will perfectly grasp the nature of problem solving itself. Discovering, proving and incorporating new mathematics will, like everything else you do, be understood as a mechanical procedure that is being executed in order to feed your reward center.

The problem is thus that understanding happiness, and how to mechanically maximize what makes you happy, such as complexity and novelty, will eventually cause you to become bored with those activities in the same sense that you would now quickly become bored with watching cellular automata generate novel music.


Premise 1: There exists a procedure (P1) that can compute optimal creativity and an optimal experience of fun.

Justification: If artificial general intelligence and whole brain emulation are possible, then this implies that it is possible to capture creativity and experiences such as fun in a purely mechanical, algorithmic fashion.

Premise 2: There exists a procedure (P2) for which it is possible to perfectly comprehend P1, in the same sense that it is possible for humans to comprehend the rules of Tic-tac-toe.

Justification: If it is possible for an artificial general intelligence or whole brain emulation to improve itself considerably then this implies that it is possible for those agents to understand themselves sufficiently.

Tic Tac Toe

From the subjective viewpoint of P1, being computed is fun and creative. I will label this view, in function notation, as inside_view(P1). Or, in other words, how an algorithm feels from inside.

From the subjective viewpoint of P2, being computed means to perfectly understand what P1 is doing and how it is doing it. I will call this function outside_view(P1).

Premise 3: A human being (possibly given a hypothetical intelligence amplification) could incorporate P2. I will label this function human_P2().

What value would human_P2() assign to the computation of P1? I will label the computation of P1 compute(P1).

human_P2(compute(P1)) =

(1) Uninteresting (dull). Similarly to computing all possible games of Tic-tac-toe.

(2) Intrinsically valuable. The more resources are used to compute P1, the better.

What I perceive to be problematic is #2. What difference would it make to run P1 (1) once, (2) N times, or (3) not at all?

Personally I assign little value to the repeated computation of something that I already understand thoroughly. Which does not mean that the algorithm itself would share my perception. But why should I care about that? As long as suffering has been eradicated, what difference would it make if the whole universe was used to compute an uninteresting algorithm (outside view) compared to a universe that does nothing in particular?

There are two possibilities:

(1) I could observe the computation of P1 from the outside (possibly until the heat death of the universe).

(2) I could turn myself into P1 and experience fun and creativity.

Why would I care about either 1 or 2 if I completely understand those possibilities and don’t expect any surprises that are conceptually more interesting than coming across a Feynman point?


The thought experiment:

Jürgen is a brilliant artificial general intelligence (AGI) researcher who investigates the world from a purely mathematical point of view.

He specializes in a formal theory of fun and creativity and acquires, let us suppose, all the physical information there is to obtain about what goes on when we enjoy music, or create art, and utter sentences like ‘This is fun!’, ‘I feel great!’,  ‘This music is very emotive!’, and so on.

He discovers, for example, just which musical pieces are more interesting or aesthetically rewarding than others, and exactly how this can be mechanistically described and produced by a computable algorithm that results in the creation and perception of optimally appealing art and music.

What will happen when Jürgen eventually computes the algorithm? Will he learn anything or not?


Note:

The past two posts – The value of philosophy in a universe ruled by a friendly AI and Utopia is dull – have been written in order to analyze the expected value of living in a world after a benevolent technological singularity has taken place.

The expected value of the event of a technological singularity itself is distinct from the value of the time between this event and the heat death of the universe. Given that a positive technological singularity could end all suffering, it is intrinsically valuable to achieve it.


Jürgen, our AGI researcher, has two options:

(1) Jürgen could decide that the discovery of an algorithm that can produce optimal fun and creativity makes any human attempt to have fun, and to be creative, futile and of no additional value.

(2) Jürgen could decide that it would be valuable to turn the whole universe into a computational substrate computing his algorithm in order to maximize fun and creativity.

With which option you agree partly depends on the answer to the thought experiment outlined above. Will Jürgen learn anything new from computing the algorithm?

(1) Given that you believe that Jürgen will not learn anything new from computing his algorithm, what difference is there between a universe that contains his algorithm, and a universe that computes the algorithm as often as possible? In other words, Jürgen solved and proved a mathematical problem. What value is there in solving and proving it over and over again?

(2) Given that you believe that Jürgen will learn something from computing his algorithm: once his algorithm has computed an optimal, or nearly optimal, result, what difference would it make to compute it N times?

Again, the question here is not about the value of discovering such an algorithm. The question is not even about computing it once. The question is about the expected value of living in a universe where such an algorithm already exists.

To clarify the above, consider a different algorithm. Let’s call it Much-Better-Life Simulator™. Running Much-Better-Life Simulator™ is equivalent to the most enjoyable life a human being could ever experience.

What difference would it make to run Much-Better-Life Simulator™ (1) once (2) N times (3) not at all? What do you estimate is the expected value of 1, 2 and 3? And how confident are you about that estimate? Can you explain what difference it makes?

More specifically, consider the value humans assign to music and art. As described in my previous posts, the value of creating music and art will be diminished by (1) the instant availability and integrability of the best possible and perceptible permutations of art and music and (2) a perfectly understood, integrable and implementable mechanistic algorithm which can yield the most emotive and appealing music and art that is provably possible. In other words, anything you could ever achieve has already been achieved in the best possible way once that algorithm has been discovered.

But what about enjoying art and music? As human enjoyment is perfectly understood as well, it will be possible to generate an optimal experience of either listening to music, enjoying art, or composing and creating it. All other permutations will be provably less desirable. There will be exactly one perfect experience of either enjoying or creating music and art.

You could either integrate such an experience, as if it has already happened, or run a simulation. Afterwards you could run all less desirable permutations of it, or run it over and over again. Which raises the question of what difference it makes to have a universe in which all matter is converted in order to be excited, compared to a universe where you perfectly understand what excitement is, but choose not to compute it?


Imagine that, after your death, you were cryogenically frozen and eventually resurrected in a benevolent utopia ruled by a godlike artificial intelligence.

Naturally, you desire to read up on what has happened after your death. It turns out that you do not have to read anything, but merely desire to know something and the knowledge will be integrated as if it had been learnt in the most ideal and unbiased manner. If certain cognitive improvements are necessary to understand certain facts, your computational architecture will be expanded appropriately.

You now perfectly understand everything that has happened and what has been learnt during and after the technological singularity that took place after your death. You understand the nature of reality, consciousness, and general intelligence.

Concepts such as creativity or fun are now perfectly understood mechanical procedures that you can easily implement and maximize, if desired. If you wanted to do mathematics, you could trivially integrate the resources of a specialized Matrioshka brain into your consciousness and implement and run an ideal mathematician.

But you also learnt that everything you could do has already been done, and that you could just integrate that knowledge as well, if you like. All that is left to be discovered is highly abstract mathematics that requires the resources of whole galaxy clusters.

So you instead consider exploring the galaxy. But you become instantly aware that the galaxy is unlike the way it has been depicted in old science fiction novels. It is just a wasteland, devoid of any life. There are billions of barren planets, differing from each other only in the most uninteresting ways.

But surely, you wonder, there must be fantastic virtual environments to explore. And what about sex? Yes, sex! But you realize that you already thoroughly understand what it is that makes exploration and sex fun. You know how to implement the ideal adventure in which you save people of maximal sexual attractiveness. And you also know that you could trivially integrate the memory of such an adventure, or simulate it a billion times in a few nanoseconds, and that the same is true for all possible permutations that are less desirable.

You realize that the universe has understood itself.

The movie has been watched.

The game has been won.

The end.


A quote from the novel Ventus, by Karl Schroeder:

The view was breathtaking. From here, beyond the orbit of Neptune, Axel could see the evidence of humanity’s presence in the form of a faint rainbowed disk of light around the tiny sun. Scattered throughout it were delicate sparkles, each some world-sized Dyson engine or fusion starlette. Earth was just one of a hundred thousand pinpricks of light in that disk. Starlettes lit the coldest regions of the system, and all the planets were ringed with habitats and the conscious, fanatical engines of the solarforming civilization. This was the seat of power for the human race, and for many gods as well. It was ancient, implacably powerful, and in its trillions of inhabitants harbored more that was alien than the rest of the galaxy put together.

Axel hated the place.

[…]

If he shut his eyes he could open a link to the outer edge of the inscape, the near-infinite datanet that permeated the Archipelago. He chose not to do this.

[…]

“Isn’t it marvellous?” she said as she came to stand next to him. “I have never been here! Not physically, I mean.” She was dressed in her illusions again, today in a tiny whirlwind of strategically timed leaves: Eve in some medieval painter’s fantasy.

“You haven’t missed much,” he said.

Marya blinked. “How can you say that?” She went to lean on the window, her fingers indenting its resilient surface. “It is everything!”

“That’s what I hate about it.” He shrugged. “I don’t know how people can live here, permanently linked into inscape. All you can ever really learn is that everything you’ve ever done or thought has been done and thought before, only better. The richest billionaire has to realize that the gods next door take no more notice of him than he would a bug. And why go explore the galaxy when anything conceivable can be simulated inside your own head?”


For the sake of the argument, suppose that AI risk advocates succeed at implementing an artificial general intelligence that protects and amplifies human values (friendly AI).

Such a friendly AI (FAI) would have to (1) disallow any entity smarter than itself that isn’t provably friendly (2) know exactly what humans value and how to protect and amplify those values in a way that humans desire.

How valuable would such an outcome be? Let’s look at a specific human value and its expected value in the context of a universe ruled by such an FAI. Let’s look at doing philosophy.

I can see two possibilities,

(1) The FAI had to solve all of philosophy in order to do its job.

(2) The FAI did not have to solve philosophy but would in principle be capable of doing so.

Given either possibility, how much would humans value doing philosophy if all interesting questions either had already been answered or could easily be answered by the FAI?

That partly depends on whether it would be possible to just ask the FAI for any answer. But why would that not be possible? There seem to be two answers,

(1) The FAI learnt that humans don’t want it to answer such questions.

(2) The FAI was programmed to not answer such questions.

The first possibility seems to imply that humans want to figure out philosophy in a certain way, which does not include just asking for an answer or looking it up. But how likely is this possibility? How many philosophers would wish that the Stanford Encyclopedia of Philosophy did not exist, so that they could figure all of it out on their own?

The second possibility is itself problematic. In a universe ruled by an FAI, artificial general intelligence and friendly AI have obviously been solved. Which means that people could either ask the FAI to alter itself in such a way that it would be able to answer such questions, or implement a less capable AI that can answer philosophy questions. And if that isn’t allowed, which would mean that pretty much the whole field of machine learning would be forbidden, then people could just ask the FAI to improve them in such a way as to be capable of easily solving any philosophical puzzle.

To recapitulate the situation: given any human intellectual activity, not just philosophy, in a universe controlled by an FAI it should be possible to either,

(1) Directly ask the FAI for an answer to any question.

(2) Implement a superintelligence that could answer those questions.

(3) Ask to have your cognitive abilities improved in such a way as to easily answer those questions.

Whether or not the above possibilities are allowed, a wide range of human values would be dramatically reduced in either case. Because either all human intellectual activity becomes as trivial as asking a question, or humans are forever stuck with the mental capabilities that they have been equipped with by evolution, while being forbidden to create another intelligence more capable than themselves.

The only way out that I can imagine is to choose ignorance: to ask the FAI to make you oblivious of its existence and of how to create an FAI. But who would desire that? Who would desire to forever fail at solving philosophy, at amplifying human intelligence, or at creating an artificial one? I would certainly hate not to know the truth, to be forever fooled.


Here is an interesting answer, posted on Quora by Josh Siegle. His answer paraphrases some of what I tried to highlight in my post “Substrate Neutrality: Representation vs. Reproduction”.

Here are some quotes, starting with a comment by Josh Siegle from an ensuing discussion:

I’m saying that meat has properties and causal powers that algorithms do not. If the properties we’re talking about are mass, acidity, or opacity, this statement would be trivially true. A simulation of an apple will not weigh 0.1 kg, taste delicious, and appear red, although all those properties could be represented. Claiming that awareness is somehow different—that it would be present in the simulation—suggests that it is not part of the physical world. This leads very quickly down the path to a dualistic separation between the mental and the physical, which I imagine is exactly what you’re trying to avoid.

Quotes from the original answer:

First of all, imagine that the book in the room is a Chinese–English dictionary. When the Chinese characters come in, the man translates them into English, thinks of a reply, and translates that into Chinese. BOOM! The room now acts as though it understands Chinese, but does it actually? I don’t think we gain anything by saying that it does. It should be obvious that the true understanding lies in the person that wrote the dictionary, and the man in the room is just piggy-backing on this knowledge. If not, then I could claim that I understand every major language because I know how to use Google Translate.

[…]

If instead of a string of Chinese characters, the man received a string of ones and zeros encoding a visual scene, would the room be having its own, separate visual experience while the man moves some paper around and reads the ink that adorns it? People make it sound like Searle was bonkers for claiming that such subjective experience wouldn’t arise. But what makes you so certain that it would?

What Josh Siegle appears to be saying is that consciousness is, in some respect, similar to properties such as mass or wetness. In the same sense that you cannot extinguish a physical fire with simulated water, a digital computer will not possess the same sort of conscious understanding that humans do.

In his original answer, Josh Siegle wrote that it is a given that the Chinese room passes the Turing test. That is not being disputed. The claim is rather that human understanding is more delicate (qualitatively different) than e.g. the ability of a system made up of a human and Google Translate to understand various languages.

I consider this a relatively weak claim, but nonetheless something that should not be dismissed. Namely that one of the most important, and morally relevant, features of human understanding could be related to the hard problem of consciousness, and that consciousness is a property that is in some relevant respect similar to physical properties such as mass or wetness.

Consider the following. Knowing every physical fact about gold does not make us own any gold. A representation of the chemical properties of gold on a computer cannot be traded on the gold market, established as a gold reserve, or used to create jewelry. It takes a particle accelerator or nuclear reactor to create gold. No Turing machine can do the job.

There is nothing spooky about this. The point is that a representation is distinct from a reproduction. Only if you reproduce all relevant physical properties of e.g. water can it be considered water in the context of the physical world.

The evidence in support of consciousness requiring a similarly detailed reproduction is our unexplained intuition that a person with a Chinese–English dictionary does not possess the same understanding of Chinese as a person who actually “understands” Chinese.

A Turing machine (Rules table not represented).

Can you program a Turing machine in such a way that it would end up in a state mimicking all relevant physical properties of water, in order to drink it? It seems rather weird to claim that a device that manipulates symbols on a strip of tape could configure itself in such a way as to mimic water. In the same sense it would be really weird to look at a configuration of logic gates and proclaim, “This configuration of logic gates experiences pain!”.

Conclusion

A Turing machine can exhibit a certain number of states. None of those states can possibly correspond to physical properties such as acidity or wetness. The Chinese room thought experiment highlights the intuition that, likewise, none of the states of a Turing machine could mimic conscious understanding of Chinese.

Or to quote Magnus Vinding, “just like a computer will not have Wi-Fi if it does not have the hardware that supports it – regardless of what software we implement in it – a computer will not be conscious if it does not have the right hardware.”


Assuming two agents A and B, where Agent_A is the blackmailer, blackmail has the following properties:

Agent_A offers to omit an action that is undesirable according to Agent_B, for the purpose of controlling the actions of Agent_B in a way that is desirable according to Agent_A.

Consider a situation where Agent_A creates a desirable product that Agent_B wants. Such a case does not satisfy the above condition, since Agent_A offers Agent_B the opportunity to act in a way that achieves a world state ranked higher by Agent_B than the current world state, namely by acquiring the product created by Agent_A.

The important difference between blackmail and trade scenarios that are not considered blackmail is that a blackmail scenario confronts the agent that is being blackmailed with the possibility of being made worse off, whereas a trade scenario offers the agent the possibility of achieving a world state that is ranked higher according to its utility function.

A trade is considered to be blackmail when it benefits one party involved in a trade by offering to omit an action that would make all other parties worse off. More specifically, if an offer to omit an action that harms other parties was made for the benefit of the party making the offer then the action is considered to be blackmail.

The properties of blackmail are thus defined to be:

(1) The omission of an action is offered by one party.
(2) The action would make all other parties involved worse off.
(3) The purpose of offering to omit the action is to control the actions of all other parties in such a way as to make the party offering to omit it better off.
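
A minimal sketch of how these three conditions might be written down in code (the Offer data structure, the names, and the use of numeric utility functions over labelled world states are my own assumptions, not part of the definition above):

from dataclasses import dataclass
from typing import Callable

@dataclass
class Offer:
    proposer_utility: Callable[[str], float]  # Agent_A's ranking of world states
    target_utility: Callable[[str], float]    # Agent_B's ranking of world states
    status_quo: str   # world state if no offer is made and no action is taken
    threatened: str   # world state if the action in question is carried out
    complied: str     # world state if Agent_B complies and the action is omitted

def is_blackmail(offer: Offer) -> bool:
    # (2) Carrying out the action would leave the other party worse off than the status quo.
    harms_target = offer.target_utility(offer.threatened) < offer.target_utility(offer.status_quo)
    # (3) The omission is offered in order to move the offering party to a world state it
    #     prefers to the status quo.
    benefits_proposer = offer.proposer_utility(offer.complied) > offer.proposer_utility(offer.status_quo)
    # (1) is implicit in the structure of the offer: what is on the table is the omission of the action.
    return harms_target and benefits_proposer

An ordinary trade would fail condition (2): withholding the offered product does not leave Agent_B worse off than the status quo.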


Background: Newcomb’s problem

Objective: The problem I am trying to highlight with this post is not the difficulty of predicting another agent accurately but (1) the problem of stating precisely what it is that Omega is predicting in the first place (2) that locating and isolating a discrete agent in a continuous universe by e.g. formalizing the boundaries of the physical system in question seems to be nontrivial for complex agents (3) how to think about decision making when decisions are determined not just by the agent (as arbitrarily defined by humans) but by the larger environment.

The ability to accurately predict the decision making of other agents is insufficient if it is not possible to define what is meant by <decision making> and <agent>.

Newcomb’s problem: Ignoring the problems mentioned above, one-boxing is the correct strategy, given that Omega is correct more than 50.05% of the time. Since,

Two-boxing:

def two_box_payoff(prediction):
    # Box B always contains $1,000; box A contains $1,000,000 only if Omega predicted one-boxing.
    if prediction == "one-box":
        return 1000000 + 1000
    else:
        return 1000

Expected value: y = (1-x)*(1000000+1000)+x*1000 = -1000000x+1001000 where 0<x<1 is the probability of a correct prediction.

One-boxing:

def one_box_payoff(prediction):
    # Box A contains $1,000,000 only if Omega predicted one-boxing; box B ($1,000) is left behind.
    if prediction == "one-box":
        return 1000000
    else:
        return 0

Expected value: z = x*1000000+(1-x)*0 = 1000000x where 0<x<1 is the probability of a correct prediction.

Two-boxing versus One-boxing:

y > z

-1000000x+1001000 > 1000000x

1001000 > 2000000x

1001000/2000000 > x

0.5005 > x

As long as the probability of a correct prediction is less than 50.05%, two-boxing has the larger expected value.
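
A small numerical check of this crossover, building on the payoff functions above (the function names and the sample probabilities are my own):

def expected_two_box(x):
    # With probability x Omega correctly predicted two-boxing, otherwise it predicted one-boxing.
    return x * two_box_payoff("two-box") + (1 - x) * two_box_payoff("one-box")

def expected_one_box(x):
    # With probability x Omega correctly predicted one-boxing, otherwise it predicted two-boxing.
    return x * one_box_payoff("one-box") + (1 - x) * one_box_payoff("two-box")

for x in (0.4, 0.5005, 0.6):
    print(x, expected_two_box(x), expected_one_box(x))

# Two-boxing has the higher expected value at x = 0.4, the two strategies are exactly
# equal at x = 0.5005, and one-boxing wins at x = 0.6.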

Consider the following scenarios:

(S1) Omega predicts that you will end up taking both boxes because, even though at some point you did precommit to one-boxing, you change your mind and take both boxes.

(S2) Omega predicts that you will end up taking both boxes because of a stroke causing brain damage.

(S3) Omega predicts that a sudden wind gust will cause you to stumble and topple over both boxes, even though you did precommit to taking only one box.

(S4) You make up one half of a split brain residing in the same body. You precommit to one-boxing while the other personality sharing the body with you chooses two-boxing. You have no control of the movement of the body except that you are the one who can talk.

Remarks:

If a body harboring two or more personalities with different precommitment strategies about Newcomb-like problems ends up taking both boxes then did all of the agents who reside in that body take two boxes or just the one that happened to control the body during the critical moment?

It seems possible to adopt a wide range of definitions of “agency” when trying to reason about and predict the behavior of other agents. It is possible to define an agent as a global or a local physical system, that is, as a larger or smaller slice of space. In other words, when examining a system it is possible to either act based on the assumption that the whole system is a coherent entity or to assign the quality of agency to arbitrary sub-procedures of the system and examine them in isolation.

For example, if Omega were to assign the quality of agency to a volume of space approximately the size of the human brain, would a precommitment to one-boxing then satisfy Omega’s condition for putting $1,000,000 into box A? Would a case of two-boxing as a result of e.g. brain damage caused by external factors then be ignored?

So what is it that Omega predicts when your actions are ultimately the local behavior of a larger physical system we call the universe?

Can you formalize the difference between what it means to take both boxes due to (1) changing your mind for subtle reasons (e.g. reading a decision theory paper), (2) changing your mind for not so subtle reasons (e.g. brain damage), (3) not being in control of the larger physical system (e.g. a sudden strong wind causes you to stumble), or (4) not controlling “your” body (e.g. multiple personality disorder)?


Link: rationallyspeaking.blogspot.de/2012/11/consciousness-and-internet.html

In the interview, Koch continued: “certainly by any measure [the Internet is] a very, very complex system. Could it be conscious? In principle, yes it can.” And, pray, which principle would that be? I have started to note that a number of people prone to speculations at the border between science and science fiction, or between science and metaphysics, are quick to invoke the “in principle” argument. When pressed, though, they do not seem to be able to articulate exactly which principle they are referring to. Rather, it seems that the phrase is meant to indicate something along the lines of “I can’t think of a reason why not,” which at best is an argument from incredulity.

[…]

As it turns out, cosmologist Sean Carroll was the most reasonable of the bunch interviewed by Falk at Slate. As he put it: “There’s nothing stopping the Internet from having the computational capacity of a conscious brain, but that’s a long way from actually being conscious … Real brains have undergone millions of generations of natural selection to get where they are. I don’t see anything analogous that would be coaxing the Internet into consciousness. … I don’t think it’s at all likely.”

