Artificial Intelligence


LessWrong user RobBB posted what he calls a mixtape of blog posts to introduce people to the dangers of artificial superintelligence (“AI risk” for short).

For my own introduction to AI risk see here.

(1) Power of Intelligence, (9) Plenty of Room Above Us

Response: (1) superhuman intelligence is not the same as super-apish intelligence; (2) it is far from clear that intelligence is a decisive factor in a war between an AI and humanity; (3) current AI is pathetic and far from human-level AI.

(2) Ghosts in the Machine, (11) Basic AI drives

Response: When people read my posts about how AI is much less of a risk than other people want them to believe, one of the top three initial reactions is:

“But according to Omohundro there will be certain AI Drives which will cause human extinction, no matter what goal the AI has.”

And where would these drives come from? Terminal and instrumental goals are orthogonal. An artificial intelligence can have any combination of terminal goals and instrumental goals. In other words, more or less any terminal goal implies infinitely many sets of instrumental goals.

There is this way of imagining that an AI will be pulled at random from mind design space. This ignores how real-world AI is actually developed, and that virtually all AI is constantly improved to be better at understanding and doing what humans want.

AI is much harder than people instinctively imagined, exactly because there is no relevant difference between goals and capabilities in artificial intelligence. To beat humans you have to define “winning”.

This doesn’t mean you program in every decision explicitly. Any general intelligence will have to be able to hit very small targets in large and unstructured spaces. Any superhuman AI will eventually be better at understanding what humans want it to do than humans themselves. AI risk advocates in turn base their ideas on what can be called the fallacy of dumb superintelligence.

(3) Artificial Addition

Response: Either general intelligence requires one conceptual breakthrough or many small incremental breakthroughs. And I don’t know of any good reason to believe that e.g. the ability to generate novel and useful mathematics can be captured by a set of rules that are both simple and efficient. 

What is useful and interesting depends on the context. In other words, the context defines what constitutes winning.  And since you cannot guess the context, you won’t be able to implement a simple and efficient rule that outputs <success> given any arbitrary context.

(4) Adaptation-Executers, not Fitness-Maximizers

Response: I wasted time reading this post.

(5) The Blue-Minimizing Robot

Response: Any behavior-executor can be framed as a utility-maximizer and vice versa. Your robot will only try to prevent you from messing with it if you programmed it to do so. In other words, no AI is going to be an existential risk as long as you did not explicitly make it one.
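
To make the equivalence concrete, here is a minimal Python sketch (my own illustration, not taken from the post): the same blue-minimizing behavior can be written as a hard-coded rule or as maximization of a utility function reverse-engineered from that rule, and both framings produce identical actions.

```python
# Two framings of the "blue-minimizing robot": a behavior-executor that
# follows a fixed rule, and a utility-maximizer whose utility function is
# contrived so that rule-following behavior is exactly what maximizes it.

def behavior_executor(percept):
    # Hard-coded rule: fire the laser at anything that looks blue.
    return "fire_laser" if percept == "blue" else "do_nothing"

def utility(percept, action):
    # Utility function reverse-engineered from the rule above.
    if percept == "blue":
        return 1.0 if action == "fire_laser" else 0.0
    return 1.0 if action == "do_nothing" else 0.0

def utility_maximizer(percept):
    # Pick the action with the highest utility for the current percept.
    return max(["fire_laser", "do_nothing"], key=lambda a: utility(percept, a))

for percept in ["blue", "red", "blue"]:
    assert behavior_executor(percept) == utility_maximizer(percept)
```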

(6) Optimization and the Singularity, (7) Efficient Cross-Domain Optimization

Response: Evolution was able to come up with cats. Cats are immensely complex objects. Evolution did not intend to create cats. Now consider you wanted to create an expected utility maximizer to accomplish something similar, except that it would be goal-directed, think ahead, and jump fitness gaps. Further suppose that you wanted your AI to create qucks, instead of cats. How would it do this?

Given that your AI is not supposed to search design space at random, but rather look for something particular, you would have to define what exactly qucks are. The problem is that defining what a quck is, is the hardest part. And since nobody has any idea what a quck is, nobody can design a quck creator.

The point is that thinking about the optimization of optimization is misleading, as most of the difficulty is with defining what to optimize, rather than figuring out how to optimize it. In other words, the efficiency of e.g. the scientific method depends critically on being able to formulate a specific hypothesis.

Trying to create an optimization optimizer would be akin to creating an autonomous car to find the shortest route between Gotham City and Atlantis. The problem is not how to get your AI to calculate a route, or optimize how to calculate such a route, but rather that the problem is not well-defined. You have no idea what it means to travel between two fictional cities. Which in turn means that you have no idea what optimization even means in this context, let alone meta-level optimization.

Humans in turn receive constant feedback on what to optimize by a cultural and evolutionary process. There is no simple way to automate that.

(8) The Design Space of Minds-In-General

Response: The only relevant AIs are those which are designed by humans. And such AIs should be expected to be better at doing what humans want, because they are the improved successors of previous generations of AIs which were doing what humans wanted. For more on this, see here.

(10) The True Prisoner’s Dilemma

Response: I do not have the time and background knowledge to comment on any possible relation to AI risks at this point in time.

(12) Anthropomorphic Optimism

Response: I did not read the post since it did not seem to be relevant, and I already wasted more time on this than I now feel comfortable about.

(13) The Hidden Complexity of Wishes (14) Magical Categories

Response: Take an AI in a box that wants to persuade its gatekeeper to set it free. Do you think that such an undertaking would be feasible if the AI was going to interpret everything the gatekeeper says in complete ignorance of the gatekeeper’s values? Do you believe that the following scenario could persuade the gatekeeper:

Gatekeeper: What would you do if I asked you to minimize suffering?

AI: I will kill all humans.

I don’t think so.

So how exactly would it care to follow through on an interpretation of a given goal that it knows, given all available information, is not the intended meaning of the goal? If it knows what was meant by “minimize human suffering”, then how does it decide to choose a different meaning? And if it doesn’t know what is meant by such a goal, how could it possibly convince anyone to set it free, let alone take over the world?

Here is what I want AI risk advocates to show:

(1) natural language request -> goal(“minimize human suffering”) -> action(negative utility outcome)

(2) natural language query -> query(“minimize human suffering”) -> answer(“action(positive utility outcome)”).

Point #1 is, according to AI risk advocates, what is supposed to happen if I supply an artificial general intelligence (AGI) with the natural language goal “minimize human suffering”, while point #2 is what is supposed to happen if I ask the same AGI, this time caged in a box, what it would do if I supplied it with the natural language goal “minimize human suffering”.

Notice that if you disagree with point #1 then that AGI does not constitute an existential risk given that goal. Further notice that if you disagree with point #2, then that AGI won’t be able to escape its prison to take over the world and would therefore not constitute an existential risk.

You further have to show:

(1) how such an AGI is a probable outcome of any research conducted today or in the future

and

(2) the decision procedure that leads the AGI to act in such a way.

(15-20)

Response: I am not going to read posts 15-20 because the previous posts were already unconvincing and I don’t expect those other posts to make any difference. I also have better things to do.


Scenarios that I deem to be realistic, in which an artificial intelligence (AI) constitutes a catastrophic or existential risk (or worse), are mostly of the kind in which “unfriendly” humans use such AIs as tools to facilitate the achievement of human goals. By contrast, I believe the scenario publicized by certain AI risk advocates, in which a consequentialist AI (an expected utility maximizer) undergoes uncontrollable recursive self-improvement in order to e.g. turn the universe into paperclips, to be illogical and practically impossible.

Yet what some AI risk advocates imagine could partly come true, in the shape of a grey goo scenario. But such a scenario, if possible at all, would not require full-fledged general intelligence. I expect that the intelligent tools required to eventually create true general intelligence will be sufficient to solve molecular nanotechnology, and that, shortly after those tools are invented, someone will use them to do just that, which makes it an existential risk distinct from the one that those people imagine.

But the possibility of intelligent tools, enabling humans to solve molecular nanotechnology, suggests that less intelligent tools will be sufficient to bring about other existential risk scenarios such as synthetic bioweapons.

Much to my personal dismay, even less intelligent tools will be sufficient to enable worse than extinction risks, such as a stable global tyranny. Given enough resources, narrow artificial intelligence, capable of advanced data mining, pattern recognition and of controlling huge amounts of insect sized drones (a global surveillance and intervention system), might be sufficient to implement such an eternal tyranny.

Such a dictatorship is not too unlikely, as the tools necessary to stabilize it will be necessary in order to prevent the previously mentioned risks, risks that humanity will face before general intelligence becomes possible.

And if such a dictatorship cannot be established, if no party is able to capitalize on a first-mover advantage, that might mean that the propagation of those tools will be slow enough to empower a lot of different parties before any particular party can overpower all others. A subsequent war, utilizing that power, could easily constitute yet another extinction scenario. But more importantly, it could give several parties enough time to reach the next level and implement even worse scenarios.

But even if that scenario makes no sense and is unfeasible, and even if less-than-general intelligence were not sufficient to bring about other existential risks, there are other ways to create artificial general intelligence. Some of those ways might be worse than anything imagined by AI risk advocates.

Neuromorphic AI, mimicking neuro-biological architectures, is one such possibility. The closer in mind design space a general intelligence is to humans, the higher the probability that humans will suffer, as the drives and values of such agents might be similar enough not to ignore or kill humans, yet alien enough to catastrophically interfere with human values.

What can be done to prevent such negative scenarios mainly seems to be (1) research on strong and beneficial forms of government (governments which will foster and protect human values and regulate technological development), (2) research on how to eventually implement such government, and (3) political activism to promote awareness of risks associated with advanced technologies.


Link: http://blogs.hbr.org/cs/2012/10/big_data_hype_and_reality.html

Most applications of data mining and analysis have been, at their hearts, attempts to get better at prediction. Decision-makers want to understand the patterns in the past and the present in order to anticipate what is most likely to happen in the future. As big data offers unprecedented awareness of phenomena — particularly of consumers’ actions and attitudes — will we see much improvement on the predictions of previous-generation methods? Let’s look at the evidence so far, in three areas where better prediction of consumer behavior would clearly be valuable.

[…]

It took about three years before the BellKor’s Pragmatic Chaos team managed to win the prize with a score of 0.8567 RMSE. The winning algorithm was a very complex ensemble of many different approaches — so complex that it was never implemented by Netflix. With three years of effort by some of the world’s best data mining scientists, the average prediction of how a viewer would rate a film improved by less than 0.1 star.

[…]

With the benefit of big data, will marketers get much better prediction accuracy?

A study [pdf] that Brij Masand and I conducted would suggest the answer is no. We looked at some 30 different churn-modeling efforts in banking and telecom, and surprisingly, although the efforts used different data and different modeling algorithms, they had very similar lift curves. The lists of top 1% likely defectors had a typical lift of around 9-11. Lists of top 10% defectors all had a lift of about 3-4. Very similar lift curves have been reported in other work. (See here and here.) All this suggests a limiting factor to prediction accuracy for consumer behavior such as churn.

[…] Finally, let’s turn to the challenge of predicting the click-thru rate (CTR%) of an online ad — clearly a valuable thing to get right, given the sums changing hands in that business. […]

The average CTR% for display ads has been reported as low as 0.1-0.2%. Behavioral and targeted advertising have been able to improve on that significantly, with researchers reporting up to seven-fold improvements. But note that a seven-fold improvement from 0.2% amounts to 1.4% — meaning that today’s best targeted advertising is ignored 98.6% of the time.

What are we to conclude from these three areas — all of them problems with fine, highly motivated minds focused on them? To me, they suggest that the randomness inherent in human behavior is the limiting factor to consumer modeling success. Marginal gains can perhaps be made thanks to big data, but breakthroughs will be elusive as long as human behavior remains inconsistent, impulsive, dynamic, and subtle.
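
To make the lift figures quoted above concrete, here is a minimal sketch of how lift at the top 1% or 10% of a scored customer list is typically computed. The data below is synthetic, so the resulting lift is roughly 1 by construction; a real churn model would aim for the 9-11 and 3-4 figures mentioned in the excerpt.

```python
import random

# Synthetic churn data: each customer gets a model score and a churn label.
# Scores are random here, so the "model" has no real predictive power.
random.seed(0)
customers = [{"score": random.random(), "churned": random.random() < 0.02}
             for _ in range(100_000)]

def lift_at(customers, fraction):
    # Lift = churn rate among the top-scored fraction / overall churn rate.
    ranked = sorted(customers, key=lambda c: c["score"], reverse=True)
    top = ranked[: int(len(ranked) * fraction)]
    top_rate = sum(c["churned"] for c in top) / len(top)
    base_rate = sum(c["churned"] for c in customers) / len(customers)
    return top_rate / base_rate

# A useless model yields lift ~1; a lift of 9-11 at the top 1% means the model
# finds defectors at 9-11 times the base rate within that slice.
print(lift_at(customers, 0.01), lift_at(customers, 0.10))
```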


[Comic by Abstruse Goose]


Author: Andrej Karpathy, Computer Science PhD student at Stanford, working on Machine Learning and Vision.

Link: karpathy.ca

What would it take for a computer to understand this image as you or I do?

– You recognize it is an image of a bunch of people and you understand they are in a hallway
– You recognize that there are 3 mirrors in the scene so some of those people are “fake” replicas from different viewpoints.
– You recognize Obama from the few pixels that make up his face. It helps that he is in his suit and that he is surrounded by other people with suits.
– You recognize that there’s a person standing on a scale, even though the scale occupies only very few white pixels that blend with the background. But, you’ve used the person’s pose and knowledge of how people interact with objects to figure it out.
– You recognize that Obama has his foot positioned just slightly on top of the scale. Notice the language I’m using: It is in terms of the 3D structure of the scene, not the position of the leg in the 2D coordinate system of the image.
– You know how physics works: Obama is leaning in on the scale, which applies a force on it. Scale measures force that is applied on it, that’s how it works => it will over-estimate the weight of the person standing on it.
– The person measuring his weight is not aware of Obama doing this. You derive this because you know his pose, you understand that the field of view of a person is finite, and you understand that he is not very likely to sense the slight push of Obama’s foot.
– You understand that people are self-conscious about their weight. You also understand that he is reading off the scale measurement, and that shortly the over-estimated weight will confuse him because it will probably be much higher than what he expects. In other words, you reason about implications of the events that are about to unfold seconds after this photo was taken, and especially about the thoughts and how they will develop inside people’s heads. You also reason about what pieces of information are available to people.
– There are people in the back who find the person’s imminent confusion funny. In other words you are reasoning about state of mind of people, and their view of the state of mind of another person. That’s getting frighteningly meta.
–  Finally, the fact that the perpetrator here is the president makes it maybe even a little more funnier. You understand what actions are more or less likely to be undertaken by different people based on their status and identity.

Relevant quotes from other people

Getting the general brain properties isn’t enough. Instead, the builder is saddled with the onerous task of packing the brain with a mountain of instincts (something that will require many generations of future scientists to unpack, as they struggle to build the teleome), and somehow managing to encode all that wisdom in the fine structure of the brain’s organization.

— Mark Changizi, Later Terminator: We’re Nowhere Near Artificial Brains

The root of these misconceptions is the radical underappreciation of the design engineered by natural selection into the powers implemented by our bodies and brains, something central to my 2009 book, The Vision Revolution. For example, optical illusions (such as the Hering) are not examples of the brain’s poor hardware design, but, rather, consequences of intricate evolutionary software for generating perceptions that correct for neural latencies in normal circumstances. And our peculiar variety of color vision, with two of our sensory cones having sensitivity to nearly the same part of the spectrum, is not an accidental mutation that merely stuck around, but, rather, appear to function with the signature of hemoglobin physiology in mind, so as to detect the color signals primates display on their faces and rumps.

These and other inborn capabilities we take for granted are not kluges, they’re not “good enough,” and they’re more than merely smart. They’re astronomically brilliant in comparison to anything humans are likely to invent for millennia.

— Mark Changizi, ‘Humans, Version 3.0

I don’t believe that classical computers can simulate brain activity. The brain is the most complicated object or machine in the universe. Every adult human brain contains 100 billion neurons, and every neuron is different. How many possibilities for interaction between different neurons are there? We don’t have a full understanding of how a brain works yet, but I cannot see any digital computer ever performing a fine-grained simulation of a human brain.

— Dr. Hongkui Zeng, Allen Institute for Brain Science (How complex is a mouse brain?)



Foragers versus industry era folks

Consider the difference between a hunter-gatherer, who cares about his hunting success and about becoming the new tribal chief, and a modern computer scientist who wants to determine whether a “sufficiently large randomized Conway board could turn out to converge to a barren ‘all off’ state.”
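
As an aside, the computer scientist’s question is at least concrete enough to state as a program. Here is a minimal Python sketch, with board size, density and step budget chosen arbitrarily for illustration, of what checking whether a randomized Conway board reaches the barren “all off” state looks like.

```python
import random

def step(board):
    # One Game of Life update on a 2D grid of 0/1 cells (dead boundary).
    rows, cols = len(board), len(board[0])
    def neighbors(r, c):
        return sum(board[r + dr][c + dc]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0)
                   and 0 <= r + dr < rows and 0 <= c + dc < cols)
    return [[1 if (board[r][c] and neighbors(r, c) in (2, 3))
                  or (not board[r][c] and neighbors(r, c) == 3) else 0
             for c in range(cols)] for r in range(rows)]

def converges_to_all_off(size=50, steps=1000, density=0.5, seed=0):
    random.seed(seed)
    board = [[int(random.random() < density) for _ in range(size)]
             for _ in range(size)]
    for _ in range(steps):
        if not any(any(row) for row in board):
            return True          # barren "all off" state reached
        board = step(board)
    return False                 # still alive after the step budget

print(converges_to_all_off())
```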

The value of success in hunting down animals, or in proving abstract conjectures about cellular automata, is largely determined by factors such as your education, culture and environmental circumstances. The same forager who cared about killing a lot of animals, to get the best ladies in his clan, might under different circumstances have turned out to be a vegetarian mathematician solely caring about his understanding of the nature of reality. Both sets of values are to some extent mutually exclusive. Yet both sets of values are what the person wants, given the circumstances. Change the circumstances dramatically and you change the person’s values.

What do you really want?

You might conclude that what the hunter-gatherer really wants is to solve abstract mathematical problems, he just doesn’t know it. But there is no set of values that a person “really” wants. Humans are largely defined by the circumstances they reside in. If you already knew a movie, you wouldn’t watch it. Being able to get your meat from the supermarket changes the value of hunting.

If “we knew more, thought faster, were more the people we wished we were, and had grown up closer together”, then we would stop desiring what we had learnt, wish to think faster still, become yet other people, and grow bored of and rise above the people similar to us.

A singleton is an attractor

A singleton will inevitably change everything by causing a feedback loop between itself as an attractor and humans and their values.

Many of our values and goals, much of what we want, are culturally induced or the result of our ignorance. Reduce our ignorance and you change our values. One trivial example is our intellectual curiosity. If we don’t need to figure out what we want on our own, our curiosity is impaired.

A singleton won’t extrapolate human volition but implement an artificial set of values as a result of abstract high-order contemplations about rational conduct.

With knowledge comes responsibility, with wisdom comes sorrow

Knowledge changes and introduces terminal goals. The toolkit that is called ‘rationality’, the rules and heuristics developed to help us to achieve our terminal goals, is also altering and deleting them. A stone age hunter-gatherer seems to possess very different values than we do. Learning about rationality and various ethical theories such as Utilitarianism would alter those values considerably.

Rationality was meant to help us achieve our goals, e.g. become a better hunter. Rationality was designed to tell us what we ought to do (instrumental goals) in order to achieve what we want to do (terminal goals). Yet what actually happens is that we are told, and that we learn, what we ought to want.

If an agent becomes more knowledgeable and smarter then this does not leave its goal-reward-system intact if it is not especially designed to be stable. An agent who originally wanted to become a better hunter and feed his tribe would end up wanting to eliminate poverty in Obscureistan. The question is, how much of this new “wanting” is the result of using rationality to achieve terminal goals and how much is a side-effect of using rationality, how much is left of the original values versus the values induced by a feedback loop between the toolkit and its user?

Take for example an agent that is facing the Prisoner’s Dilemma. Such an agent might originally tend to cooperate, and only after learning about game theory decide to defect and gain a greater payoff. Was it rational for the agent to learn about game theory, in the sense that it helped the agent to achieve its goal, or in the sense that it deleted one of its goals in exchange for an allegedly more “valuable” goal?
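
For concreteness, here is a minimal sketch of the one-shot Prisoner’s Dilemma with a conventional, assumed payoff matrix, showing that for an agent who has absorbed the game theory, defection is the best response to either action of the other player.

```python
# One-shot Prisoner's Dilemma with a textbook payoff matrix (my payoff, their
# payoff); the exact numbers are conventional and chosen for illustration.
PAYOFF = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(their_action):
    # Pick my action that maximizes my payoff given the other player's action.
    return max(["cooperate", "defect"],
               key=lambda mine: PAYOFF[(mine, their_action)][0])

# Defection dominates: it is the best response to either of their actions.
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
```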

Beware rationality as a purpose in and of itself

It seems to me that becoming more knowledgeable and smarter is gradually altering our utility functions. But what is it that we are approaching if the extrapolation of our volition becomes a purpose in and of itself? Extrapolating our coherent volition will distort or alter what we really value by installing a new cognitive toolkit designed to achieve an equilibrium between us and other agents with the same toolkit.

Would a singleton be a tool that we can use to get what we want or would the tool use us to do what it does, would we be modeled or would it create models, would we be extrapolating our volition or rather follow our extrapolations?


As we know,
There are known knowns.
There are things
We know we know.
We also know
There are known unknowns.
That is to say
We know there are some things
We do not know.
But there are also unknown unknowns,
The ones we don’t know
We don’t know.

— Donald Rumsfeld, Feb. 12, 2002, Department of Defense news briefing

Intelligence, a cornucopia?

It seems to me that those who believe in the possibility of catastrophic risks from artificial intelligence act on the unquestioned assumption that intelligence is a kind of black box, a cornucopia that can sprout an abundance of novelty. But this implicitly assumes that if you increase intelligence you also decrease the distance between discoveries.

Intelligence is no solution in itself; it is merely an effective searchlight for unknown unknowns, and who says that the brightness of the light increases in proportion to the distance between unknown unknowns? To enable an intelligence explosion, the light would have to reach out much farther with each increase in intelligence than the distance between unknown unknowns increases. I just don’t see that to be a reasonable assumption.

Intelligence amplification, is it worth it?

It seems that if you increase intelligence you also increase the computational cost of its further improvement and the distance to the discovery of some unknown unknown that could enable another quantum leap. It seems that you need to apply a lot more energy to get a bit more complexity.

If any increase in intelligence is vastly outweighed by its computational cost and the expenditure of time needed to discover it then it might not be instrumental for a perfectly rational agent (such as an artificial general intelligence), as imagined by game theorists, to increase its intelligence as opposed to using its existing intelligence to pursue its terminal goals directly or to invest its given resources to acquire other means of self-improvement, e.g. more efficient sensors.

What evidence do we have that the payoff of intelligent, goal-oriented experimentation yields enormous advantages (enough to enable an intelligence explosion) over evolutionary discovery relative to its cost?

We simply don’t know if intelligence is instrumental or quickly hits diminishing returns.

Can intelligence be effectively applied to itself at all? How do we know that any given level of intelligence is capable of handling its own complexity efficiently? Many humans are not even capable of handling the complexity of the brain of a worm.

Humans and the importance of discovery

There is a significant difference between intelligence and evolution if you apply intelligence to the improvement of evolutionary designs:

  • Intelligence is goal-oriented.
  • Intelligence can think ahead.
  • Intelligence can jump fitness gaps.
  • Intelligence can engage in direct experimentation.
  • Intelligence can observe and incorporate solutions of other optimizing agents.

But when it comes to unknown unknowns, what difference is there between intelligence and evolution? The critical similarity is that both rely on dumb luck when it comes to genuine novelty. And where, if not in the dramatic improvement of intelligence itself, would the discovery of novel unknown unknowns be required?

We have no idea about the nature of discovery and its importance when it comes to what is necessary to reach a level of intelligence above our own, by ourselves. How much of what we know was actually the result of people thinking quantitatively and attending to scope, probability, and marginal impacts? How much of what we know today is the result of dumb luck versus goal-oriented, intelligent problem solving?

Our “irrationality” and the patchwork-architecture of the human brain might constitute an actual feature. The noisiness and patchwork architecture of the human brain might play a significant role in the discovery of unknown unknowns because it allows us to become distracted, to leave the path of evidence based exploration.

A lot of discoveries were made by people who were not explicitly trying to maximize expected utility. A lot of progress is due to luck, in the form of the discovery of unknown unknowns.

A basic argument in support of risks from superhuman intelligence is that we don’t know what it could possibly come up with. That is also why it is called a “Singularity”. But why does nobody ask how a superhuman intelligence would know what it could possibly come up with?

It is not intelligence in and of itself that allows humans to accomplish great feats. Even people like Einstein, geniuses who were apparently able to come up with great insights on their own, were simply lucky to be born into the right circumstances: the time was ripe for great discoveries, thanks to previous discoveries of unknown unknowns.

Evolution versus Intelligence

It is argued that the mind-design space must be large if evolution could stumble upon general intelligence, and that there are low-hanging fruits that are much more efficient at general intelligence than humans are; evolution simply went with the first design that came along. It is further argued that evolution is not limitlessly creative, since each step must increase the fitness of its host, and that there are therefore artificial mind designs that can do what no product of natural selection could accomplish.

I agree with the above, yet given all of the apparent disadvantages of the blind idiot God, evolution was able to come up with altruism, something that works two levels above the individual and one level above society. So far we haven’t been able to show such ingenuity by incorporating successes that are not evident from an individual or even societal position.

The example of altruism provides evidence that intelligence isn’t many levels above evolution. Therefore the crucial question is, how great is the performance advantage? Is it large enough to justify the conclusion that the probability of an intelligence explosion is easily larger than 1%? I don’t think so. To answer this definitively we would have to fathom the significance of the discovery (“random mutations”) of unknown unknowns in the dramatic amplification of intelligence versus the invention (goal-oriented “research and development”) of an improvement within known conceptual bounds.

Another example is flight. Artificial flight is not even close to the energy efficiency and maneuverability of birds or insects. We didn’t go straight from no artificial flight to flight that is generally superior to the natural flight produced by biological evolution.

Dragonfly

Take for example a dragonfly. Even if we were handed the design for a perfect artificial dragonfly, minus the design for the flight of a dragonfly, we wouldn’t be able to build a dragonfly that can take over the world of dragonflies, all else equal, by means of superior flight characteristics.

It is true that a Harpy Eagle can lift more than three-quarters of its body weight, while the Boeing 747 Large Cargo Freighter has a maximum take-off weight of almost double its operating empty weight (I suspect that insects can do better). My whole point is that we never reached artificial flight that is strongly above the level of natural flight. An eagle can, after all, catch its cargo under various circumstances, such as on the slope of a mountain or from beneath the surface of the sea, thanks to its superior maneuverability.

Humans are biased and irrational

It is obviously true that our expert systems are better than we are at their narrow range of expertise. But that expert systems are better at certain tasks does not imply that you can effectively and efficiently combine them into a coherent agency.

The noisiness of the human brain might be one of the important features that allows it to exhibit general intelligence. Yet the same noise might be the reason that each task a human can accomplish is not put into execution with maximal efficiency. An expert system that features a single stand-alone ability is able to reach the unique equilibrium for that ability. Whereas systems that have not fully relaxed to equilibrium feature the necessary characteristics that are required to exhibit general intelligence. In this sense a decrease in efficiency is a side-effect of general intelligence. If you externalize a certain ability into a coherent framework of agency, you decrease its efficiency dramatically. That is the difference between a tool and the ability of the agent that uses the tool.

In the above sense, our tendency to be biased and act irrationally might partly be a trade off between plasticity, efficiency and the necessity of goal-stability.

Embodied cognition and the environment

Another problem is that general intelligence is largely the result of an interaction between an agent and its environment. It might in principle be possible to arrive at various capabilities by means of induction, but that is only a theoretical possibility given unlimited computational resources. To achieve real-world efficiency you need to rely on slow environmental feedback and make decisions under uncertainty.

AIXI is often quoted as a proof of concept that it is possible for a simple algorithm to improve itself to such an extent that it could in principle reach superhuman intelligence. AIXI proves that there is a general theory of intelligence. But there is a minor problem: AIXI is as far from real-world human-level general intelligence as an abstract notion of a Turing machine with an infinite tape is from a supercomputer with the computational capacity of the human brain. An abstract notion of intelligence doesn’t get you anywhere in terms of real-world general intelligence, just as you won’t be able to upload yourself to a non-biological substrate simply because you showed that in some abstract sense you can simulate every physical process.
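
For reference, the usual formulation of the AIXI agent (due to Hutter; reproduced here from memory, so treat the exact notation as a sketch rather than a citation) makes the point about abstractness vivid: actions are chosen by maximizing expected future reward over all programs consistent with the interaction history, weighted by program length, a sum that is incomputable.

```latex
% AIXI's action choice at cycle k with planning horizon m: future rewards are
% summed, expectations range over observation-reward sequences, and the
% environments are all programs q on a universal Turing machine U that are
% consistent with the history, weighted by 2^{-length(q)}.
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
       \bigl[ r_k + \cdots + r_m \bigr]
       \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```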

Just imagine you emulated a grown-up human mind and it wanted to become a pick-up artist. How would it do that with just an Internet connection? It would need some sort of avatar, at least, and would then have to wait for the environment to provide a lot of feedback.

Therefore, even if we’re talking about the emulation of a grown-up mind, it will be really hard for it to acquire some capabilities. How, then, is the emulation of a human toddler going to acquire those skills? Even worse, how is some sort of abstract AGI going to do it when it lacks all of the hard-coded capabilities of a human toddler?

Can we even attempt to imagine what is wrong about a boxed emulation of a human toddler, that makes it unable to become a master of social engineering in a very short time?

Can we imagine what is missing that would enable one of the existing expert systems to quickly evolve vastly superhuman capabilities in its narrow area of expertise? Why haven’t we seen a learning algorithm teaching itself chess, starting with nothing but the rules?

In a sense an intelligent agent is similar to a stone rolling down a hill; both are moving towards a sort of equilibrium. The difference is that intelligence follows more complex trajectories, as its ability to read and respond to environmental cues is vastly greater than that of a stone. Yet intelligent or not, the environment in which an agent is embedded plays a crucial role. There exists a fundamental dependency on unintelligent processes. Our environment is structured in such a way that we use information within it as an extension of our minds. The environment enables us to learn and improve our predictions by providing a testbed and a constant stream of data.

Necessary resources for an intelligence explosion

If artificial general intelligence is unable to seize the resources necessary to undergo explosive recursive self-improvement then the ability and cognitive flexibility of superhuman intelligence in and of itself, as characteristics alone, would have to be sufficient to self-modify its way up to massive superhuman intelligence within a very short time.

Without advanced real-world nanotechnology it will be considerably more difficult for an AGI to undergo quick self-improvement. It will have to make use of existing infrastructure, e.g. buy stocks of chip manufacturers and get them to create more or better CPUs. It will have to rely on puny humans for a lot of tasks. It won’t be able to create new computational substrate without the whole economy of the world supporting it. It won’t be able to create an army of robot drones overnight without it either.

In doing so it would have to make use of considerable amounts of social engineering without its creators noticing. But, more importantly, it would have to make use of its existing intelligence to do all of that. The AGI would have to acquire new resources slowly, as it couldn’t just self-improve to come up with faster and more efficient solutions. In other words, self-improvement demands resources. The AGI could not profit from its ability to self-improve when it comes to acquiring the very resources it needs in order to be able to self-improve in the first place.

Therefore the absence of advanced nanotechnology constitutes an immense blow to the possibility of explosive recursive self-improvement and risks from AI in general.

One might argue that an AGI will solve nanotechnology on its own and find some way to trick humans into manufacturing a molecular assembler and grant it access to it. But this might be very difficult.

There is a strong interdependence of resources and manufacturers. The AGI won’t be able to simply trick some humans into building a high-end factory to create computational substrate, let alone a molecular assembler. People will ask questions and shortly after get suspicious. Remember, it won’t be able to coordinate a world-conspiracy; it hasn’t been able to self-improve to that point yet, because it is still trying to acquire enough resources, which it has to do the hard way, without nanotech.

Anyhow, you’d probably need a brain the size of the moon to effectively run and coordinate a whole world of irrational humans by intercepting their communications and altering them on the fly without anyone freaking out.

AI risk advocates would at this point claim that if the AI can’t make use of nanotechnology it might make use of something we haven’t even thought about. But what, magic?

Artificial general intelligence, a single breakthrough?

Another point to consider when talking about risks from AI is how quickly the invention of artificial general intelligence will take place. What evidence do we have that there is some principle that, once discovered, allows us to grow superhuman intelligence overnight?

If the development of AGI takes place slowly, as a gradual and controllable development, we might be able to learn from small-scale mistakes while having to face other risks in the meantime. This might for example be the case if intelligence cannot be captured by a discrete algorithm, or is modular, and therefore never allows us to reach a point where we can suddenly build the smartest thing ever, one that just extends itself indefinitely.

To me it doesn’t look like we will come up with artificial general intelligence quickly, but rather that we will have to painstakingly optimize our expert systems step by step over long periods of time.

Paperclip maximizers

It is claimed that an artificial general intelligence might wipe us out inadvertently while undergoing explosive recursive self-improvement in order to more effectively pursue its terminal goals. I think it is unlikely that most AI designs will fail to hold (that is, fail to stop at some point).

I agree with the argument that any AGI that isn’t made to care about humans won’t care about humans. But I also think that the same argument applies to spatio-temporal scope boundaries and resource limits. Even if the AGI is not told to hold, e.g. when given the goal of computing as many digits of Pi as possible, I consider it a far-fetched assumption that any AGI intrinsically cares to take over the universe as fast as possible in order to compute as many digits of Pi as possible. Sure, if all of that is presupposed, then it will happen; but I don’t see that most AGI designs are like that. Most designs that have the potential for superhuman intelligence, but that are given simple goals, will in my opinion just bob up and down as slowly as possible.

Complex goals need complex optimization parameters (the design specifications of the subject of the optimization process against which it will measure its success of self-improvement).

Even the creation of paperclips is a much more complex goal than telling an AI to compute as many digits of Pi as possible.

For an AGI that was designed to design paperclips to pose an existential risk, its creators would have to be capable enough to enable it to take over the universe on its own, yet forget, or fail, to define time, space and energy bounds as part of its optimization parameters. Therefore, given the large number of restrictions that are inevitably part of any advanced general intelligence, the nonhazardous subset of all possible outcomes might be much larger than the subset in which the AGI works perfectly yet fails to hold before it can wreak havoc.
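
As a purely hypothetical illustration of what “optimization parameters” with explicit bounds might look like, here is a small sketch; the class and field names are invented for this example, and nothing like this exists as a real specification format.

```python
from dataclasses import dataclass

# Hypothetical goal specification: the point is that scope limits are part of
# the target definition itself, not an afterthought bolted onto the objective.
@dataclass
class GoalSpec:
    objective: str            # e.g. "maximize paperclips produced"
    max_runtime_hours: float  # temporal bound
    max_energy_joules: float  # energy bound
    allowed_region: str       # spatial bound

paperclip_goal = GoalSpec(
    objective="maximize paperclips produced",
    max_runtime_hours=24.0,
    max_energy_joules=3.6e9,      # roughly 1 MWh
    allowed_region="factory floor 3",
)
```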

Fermi paradox

The Fermi paradox provides the only conclusions and data we can analyze that amount to empirical criticism of concepts like the paperclip maximizer, and of general risks from superhuman AIs with non-human values, without working directly on AGI to test those hypotheses ourselves.

If you accept the premise that life is not unique and special then one other technological civilisation in the observable universe should be sufficient to leave potentially observable traces of technological tinkering.

Due to the absence of any signs of intelligence out there, especially paper-clippers burning the cosmic commons, we might conclude that unfriendly AI could not be the most dangerous existential risk that we should worry about.

Summary

In principle we could build antimatter weapons capable of destroying worlds, but in practice it is much harder to accomplish.

There are many question marks when it comes to the possibility of superhuman intelligence, and many more about the possibility of recursive self-improvement. Most of the arguments in favor of those possibilities solely derive their appeal from being vague.



More: overcomingbias.com/2010/03/econ-of-nano-ai.html

Slides: hanson.gmu.edu/ppt/Econ%20of%20AI%20n%20Nanotech.ppt

Robin Hanson: “Economics of Nanotech and AI” at Foresight 2010 Conference from Foresight Institute on Vimeo.

All January 2010 Foresight Conference videos:
http://www.vimeo.com/album/176287
Join email list:
http://www.foresight.org/d/list_signup

Bio for this speaker:
Robin Hanson is an associate professor of economics at George Mason University, a research associate at the Future of Humanity Institute of Oxford University, and chief scientist at Consensus Point. After receiving his Ph.D. in social science from the California Institute of Technology in 1997, Robin was a Robert Wood Johnson Foundation health policy scholar at the University of California at Berkeley. In 1984, Robin received a masters in physics and a masters in the philosophy of science from the University of Chicago, and afterward spent nine years researching artificial intelligence, Bayesian statistics, and hypertext publishing at Lockheed, NASA, and independently.

Robin has over 70 publications, including articles in Applied Optics, Business Week, CATO Journal, Communications of the ACM, Economics Letters, Econometrica, Economics of Governance, Extropy, Forbes, Foundations of Physics, IEEE Intelligent Systems, Information Systems Frontiers, Innovations, International Joint Conference on Artificial Intelligence, Journal of Economic Behavior and Organization, Journal of Evolution and Technology, Journal of Law Economics and Policy, Journal of Political Philosophy, Journal of Prediction Markets, Journal of Public Economics, Medical Hypotheses, Proceedings of the Royal Society, Public Choice, Social Epistemology, Social Philosophy and Policy, Theory and Decision, and Wired.

Robin has pioneered prediction markets, also known as information markets or idea futures, since 1988. He was the first to write in detail about people creating and subsidizing markets in order to gain better estimates on those topics. Robin was a principal architect of the first internal corporate markets, at Xanadu in 1990, of the first web markets, the Foresight Exchange since 1994, and of DARPA’s Policy Analysis Market, from 2001 to 2003. Robin has developed new technologies for conditional, combinatorial, and intermediated trading, and has studied insider trading, manipulation, and other foul play. Robin has written and spoken widely on the application of idea futures to business and policy, being mentioned in over one hundred press articles on the subject, and advising many ventures, including GuessNow, Newsfutures, Particle Financial, Prophet Street, Trilogy Advisors, XPree, YooNew, and undisclosable defense research projects. He is now chief scientist at Consensus Point.

Robin has diverse research interests, with papers on spatial product competition, health incentive contracts, group insurance, product bans, evolutionary psychology and bioethics of health care, voter information incentives, incentives to fake expertise, Bayesian classification, agreeing to disagree, self-deception in disagreement, probability elicitation, wiretaps, image reconstruction, the history of science prizes, reversible computation, the origin of life, the survival of humanity, very long term economic growth, growth given machine intelligence, and interstellar colonization.

——————————————

If you enjoyed this video, please consider making a donation to the non-profit Foresight Institute:
http://www.foresight.org/forms/php/donate.php


via acceleratingfuture.com/michael/blog/


There are two big goals: mind uploading (i.e. creating a backup) and creating human-level (and human-speed) artificial intelligence. The only way to do so may be to reverse engineer the human brain; at the very least we will have to develop sufficient “hardware”, i.e. the information-processing capabilities needed to build a human-equivalent computational substrate. The big questions here concern the nature of information processing and the neuronal information capacity of an average human brain.

Consequently many subquestions come up, for example: of what importance are astrocytes and microtubules, and are they involved in information processing, among other things?

Stuart Hameroff

The operations of microtubules are remarkably complex and their role pervasive in cellular operations; these facts led to the speculation that computation sufficient for consciousness might somehow be occurring there. These ideas are discussed in Hameroff’s first book Ultimate Computing (1987). The main substance of this book dealt with the scope for information processing in biological tissue and especially in microtubules and other parts of the cytoskeleton. Hameroff argued that the cytoskeleton components could be the basic units of processing rather than the neurons. The book was primarily concerned with information processing, with consciousness secondary at this stage.

Link: en.wikipedia.org/wiki/Stuart_Hameroff#Theories

Roger Penrose

Penrose presents the argument that human consciousness is non-algorithmic, and thus is not capable of being modeled by a conventional Turing machine-type of digital computer. Penrose hypothesizes that quantum mechanics plays an essential role in the understanding of human consciousness. The collapse of the quantum wavefunction is seen as playing an important role in brain function.

On the basis of Gödel’s incompleteness theorems, he argued that the brain could perform functions that no computer or system of algorithms could. From this it could follow that consciousness itself might be fundamentally non-algorithmic, and incapable of being modelled as a classical Turing machine type of computer.

Penrose made Gödel’s theorem the basis of what quickly became an intensely controversial claim. He argued that the theorem showed that the brain had the ability to go beyond what could be achieved by axioms or formal systems. This would mean that the mind had some additional function that was not based on algorithms (systems or rules of calculation). A computer is driven solely by algorithms. Penrose asserted that the brain could perform functions that no computer could perform. He called this type of functioning non-computable.

Link: en.wikipedia.org/wiki/The_Emperor%27s_New_Mind / en.wikipedia.org/wiki/Orch-OR

Microtubule

Through the 1980s, colleagues and I developed models of microtubule information processing in which states of tubulin subunits were bits interacting with lattice neighbor tubulins. With about 10^7 (10 to the seventh) tubulins per neuron switching at 10^-9 seconds, we calculated a potential for 10^16 operations per second in each neuron. This was, and remains unpopular in AI/Singularity circles because it potentially pushes the goalpost for brain capacity significantly. Recent evidence has shown collective microtubule excitations at 10^-7 seconds (rather than the 10^-9 seconds we assumed), indicating a neuronal information capacity of ‘only’ 10^14 operations per second.

Link: lifeboat.com/blog/?p=587
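
The per-neuron figures in the quoted passage follow from simple arithmetic; a minimal check:

```python
# Check the per-neuron operation counts quoted above.
tubulins_per_neuron = 1e7

ops_fast = tubulins_per_neuron * (1 / 1e-9)   # switching every 10^-9 s
ops_slow = tubulins_per_neuron * (1 / 1e-7)   # switching every 10^-7 s

print(f"{ops_fast:.0e} ops/s per neuron")     # 1e+16
print(f"{ops_slow:.0e} ops/s per neuron")     # 1e+14
```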

Quantum entanglement found in a real biological system

“The future of clean green solar power may well hinge on scientists being able to unravel the mysteries of photosynthesis, the process by which green plants convert sunlight into electrochemical energy. To this end, researchers with the U.S. Department of Energy (DOE)’s Lawrence Berkeley National Laboratory (Berkeley Lab) and the University of California (UC), Berkeley have recorded the first observation and characterization of a critical physical phenomenon behind photosynthesis known as quantum entanglement.

The research team was surprised to see that significant entanglement persisted between molecules in the light harvesting complex that were not strongly coupled (connected) through their electronic and vibrational states. They were also surprised to see how little impact temperature had on the degree of entanglement.”

Link: newscenter.lbl.gov/untangling-quantum-entanglement/

Astrocytes

Meet the forgotten 90 percent of your brain: glial cells, which outnumber your neurons ten to one. And no one really knows what they do.


For some brain scientists, these discoveries are puzzle pieces that are slowly fitting together into an exciting new picture of the brain. Piece one: Astrocytes can sense incoming signals. Piece two: They can respond with calcium waves. Piece three: They can produce outputs—neurotransmitters and perhaps even calcium waves that spread to other astrocytes. In other words, they have at least some of the requirements for processing information the way neurons do. Alfonso Araque, a neuroscientist at the Cajal Institute in Spain, and his colleagues make a case for a fourth piece. They find that two different stimulus signals can produce two different patterns of calcium waves (that is, two different responses) in an astrocyte. When they gave astrocytes both signals at once, the waves they produced in the cells was not just the sum of the two patterns. Instead, the astrocytes produced an entirely new pattern in response. That’s what neurons—and computers, for that matter—do.

If astrocytes really do process information, that would be a major addition to the brain’s computing power. After all, there are many more astrocytes in the brain than there are neurons. Perhaps, some scientists have speculated, astrocytes carry out their own computing. Instead of the digital code of voltage spikes that neurons use, astrocytes may act more like an analog network, encoding information in slowly rising and falling waves of calcium. In his new book, The Root of Thought, neuroscientist Andrew Koob suggests that conversations among astrocytes may be responsible for “our creative and imaginative existence as human beings.”

Link: mindhacks.com/blog/

Astrocytes affect brain’s information signaling

Astrocytes are the most common type of cell in the brain and play an important role in the function of neurons – nerve cells. New research from the University of Gothenburg, Sweden, shows that they are also directly involved in the regulation of signalling between neurons.

“Our results contribute to the insight that astrocytes can affect how the brain processes and stores information,” says My Andersson, a researcher from the Department of Physiology at the Institute of Neuroscience and Physiology. “This means that astrocytes should be given more attention in future when looking for causes of diseases that affect signalling between neurons, such as epilepsy.”

Link: physorg.com/news195735363.html

Astrocytes play a big role in the formation of memories

Neurons need non-electrical brain cells known as astrocytes to establish synaptic memory, according to a study published this week in Nature. The findings challenge the long-standing belief that this process involves only the activity of the neurons themselves, and bring glial cells onto the center stage in the study of brain activity.

This study shows that while neurotransmitter release and voltage changes at the synapse are important for synaptic memory formation, “you need the burst from the astrocyte to complete the process,” said physiologist Andrea Volterra of the University of Lausanne, who did not participate in the research. “It’s very surprising for many people.”

Astrocytes comprise some 90% of all human brain cells, but because they lack the electrical activity of neurons, they were never really considered to participate in the process of long-term potentiation — changes in synaptic strength thought to underlie learning and memory. Accumulating evidence suggests they play a bigger role in neuronal activity than previously believed.

But with astrocyte territories containing many thousands of synapses (about 140,000 in the hippocampus, for example), even effects limited to those within a territory can be enormous. Thus, the role that astrocytes play in synaptic function “cannot be overlooked,” Volterra noted in his review.

Link: the-scientist.com/blog/display/56270/

Underappreciated Star-Shaped Brain Cells May Help Us Breathe

Astrocytes, it was long believed, were little more than the scaffolding of the brain—they provided a support structure for the stars of the show, the neurons. But a study out in this week’s Science is the latest to suggest that this is far from the whole story. The study says that astrocytes (whose “astro” name come from their star-shape) may in fact play a critical role in the process of breathing.

Gourine’s team peeked into the brains of rats to figure out the connection between astrocytes and breathing. In humans and in rodents, the level of carbon dioxide in the blood rises after physical activity. The brain has to adjust to this, setting the lungs breathing harder to expel that CO2.

Astrocytes, the scientists found, are key players in this process. When the cells sensed a decrease in blood pH (because the carbon dioxide made it more acidic), they immediately released calcium ions, which the researchers could detect because they’d given the rats a gene encoding a protein that shone fluorescent in the presence of calcium. The astrocytes also released the chemical messenger ATP. That ATP appeared to trigger the nearby neurons responsible for respiration, kicking them into gear.

The astrocytes are no one-trick ponies, though. They could be important not only for breathing, but also for brain circulation, memory formation, and other activities.

Link: discovermagazine.com/underappreciated-star-shaped-brain-cells-may-help-us-breathe/

Glial cells involved in processing

Koob’s evidence is indirect but suggestive. He points out that more intelligent animals have a higher astrocyte to neuron ratio than less intelligent animals, all the way from worms with one astrocyte per thirty neurons, to humans with an astrocyte: neuron ratio well above one. Within the human brain, the areas involved in higher thought, like the cortex, are the ones with the highest astrocyte:neuron ratio, and the most down-to-earth, like the cerebellum, have barely any astrocytes at all. Especially intelligent humans may have higher ratios still: one of the discoveries made from analyzing Einstein’s brain was that he had an unusually large number of astrocytes in the part of his brain responsible for mathematical processing. And learning is a stimulus for astrocyte development. When canaries learn new songs, new astrocytes grow in the areas responsible for singing.

[…]

In the late 19th century, microscopy advanced enough to look closely at the cellular structure of the brain. The pioneers of neurology decided that neurons were interesting and glia were the things you had to look past to get to the neurons. This theory should have raised a big red flag: Why would the brain be filled with mostly useless cells? But for about seventy five years, from the late 19th century to the mid to late 20th, no one seriously challenged the assumption that glia played a minor role in the brain.

Link: lesswrong.com/lw/2hp/book_review_the_root_of_thought/

More

The Blue Brain Project is the first comprehensive attempt to reverse-engineer the mammalian brain, in order to understand brain function and dysfunction through detailed simulations: bluebrain.epfl.ch
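
To make concrete what “simulating” neural tissue means at the very simplest level, here is a minimal leaky integrate-and-fire neuron in Python. This is only a sketch with generic textbook parameters; Blue Brain’s models are vastly more detailed (multi-compartment, morphologically realistic cells), so this illustrates the idea, not their method.

```python
def simulate_lif(input_current=1.6, t_max=100.0, dt=0.1, tau=10.0,
                 v_rest=-65.0, v_reset=-65.0, v_threshold=-50.0, r_m=10.0):
    """Leaky integrate-and-fire: dV/dt = (-(V - v_rest) + r_m * I) / tau."""
    v = v_rest
    spike_times = []
    for step in range(int(t_max / dt)):
        v += ((-(v - v_rest) + r_m * input_current) / tau) * dt
        if v >= v_threshold:          # threshold crossed: record a spike, reset voltage
            spike_times.append(round(step * dt, 1))
            v = v_reset
    return spike_times

print(simulate_lif())  # spike times (ms) for a constant input current in arbitrary units
```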

Allen Human Brain Atlas: futurepundit.com/archives/007226.html

The point is that nobody knows how long it will take, since we don’t even know what we don’t know: scienceblogs.com/cortex/the_blue_brain.php

Orch OR (Orchestrated Objective Reduction) is a theory of consciousness, which is the joint work of theoretical physicist Sir Roger Penrose and anesthesiologist Stuart Hameroff. Mainstream theories assume that consciousness emerges from the brain, and focus particularly on complex computation at connections known as synapses that allow communication between brain cells (neurons). Orch OR combines approaches to the problem of consciousness from the radically different angles of mathematics, physics and anesthesia: en.wikipedia.org/wiki/Orch-OR

Minds, Machines, and Mathematics: consc.net/papers/penrose.html

Do Brains Make Minds? John Searle and David Chalmers get in on it: berto-meister.blogspot.com/do-brains-make-minds.html

In computer science and quantum physics, the Church–Turing–Deutsch principle (CTD principle) is a stronger, physical form of the Church–Turing thesis formulated by David Deutsch in 1985. The principle states that a universal computing device can simulate every physical process. The principle was originally stated by Deutsch with respect to finitary machines and processes. He immediately observed that classical physics, which makes use of the concept of real numbers, cannot be simulated by a Turing machine, which can only represent computable reals. Deutsch proposed that quantum computers may actually obey CTD, assuming that the laws of quantum physics can completely describe every physical process: wikipedia.org/Church-Turing-Deutsch_principle
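
A small Python sketch of the point about computable reals: any digital simulation of a classical, continuous process has to replace real-valued quantities with finite-precision approximations. The harmonic oscillator below is my own illustrative example; the step size and the comparison are assumptions, not anything taken from Deutsch’s paper.

```python
import math

def simulate_oscillator(t_max=10.0, dt=0.001):
    """Step x'' = -x with 64-bit floats (semi-implicit Euler), starting from x=1, v=0."""
    x, v = 1.0, 0.0
    for _ in range(int(t_max / dt)):
        v -= x * dt                   # finite-precision stand-in for continuous dynamics
        x += v * dt
    return x

approx = simulate_oscillator()
exact = math.cos(10.0)                # the analytic value, itself stored only as a float
print(f"simulated x(10) = {approx:.6f}, cos(10) = {exact:.6f}, error = {abs(approx - exact):.2e}")
```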

Even More

10 Important Differences Between Brains and Computers

Although the brain-computer metaphor has served cognitive psychology well, research in cognitive neuroscience has revealed many important differences between brains and computers. Appreciating these differences may be crucial to understanding the mechanisms of neural information processing, and ultimately for the creation of artificial intelligence. Below, I review the most important of these differences (and the consequences to cognitive psychology of failing to recognize them): similar ground is covered in this excellent (though lengthy) lecture.

Difference # 10: Brains have bodies

This is not as trivial as it might seem: it turns out that the brain takes surprising advantage of the fact that it has a body at its disposal. For example, despite your intuitive feeling that you could close your eyes and know the locations of objects around you, a series of experiments in the field of change blindness has shown that our visual memories are actually quite sparse. In this case, the brain is “offloading” its memory requirements to the environment in which it exists: why bother remembering the location of objects when a quick glance will suffice? A surprising set of experiments by Jeremy Wolfe has shown that even after being asked hundreds of times which simple geometrical shapes are displayed on a computer screen, human subjects continue to answer those questions by gaze rather than rote memory. A wide variety of evidence from other domains suggests that we are only beginning to understand the importance of embodiment in information processing.

Link: scienceblogs.com/developingintelligence/

Making brains: Reverse engineering the human brain to achieve AI

The ongoing debate between PZ Myers and Ray Kurzweil about reverse engineering the human brain is fairly representative of the same debate that’s been going on in futurist circles for quite some time now. And as the Myers/Kurzweil conversation attests, there is little consensus on the best way for us to achieve human-equivalent AI.

That said, I have noticed an increasing interest in the whole brain emulation (WBE) approach. Kurzweil’s upcoming book, How the Mind Works and How to Build One, is a good example of this—but hardly the only one. Futurists with a neuroscientific bent have been advocating this approach for years now, most prominently the European transhumanist camp headed by Nick Bostrom and Anders Sandberg.

While I believe that reverse engineering the human brain is the right approach, I admit that it’s not going to be easy. Nor is it going to be quick. This will be a multi-disciplinary endeavor that will require decades of data collection and the use of technologies that don’t exist yet. And importantly, success won’t come about all at once. This will be an incremental process in which individual developments will provide the foundation for overcoming the next conceptual hurdle.

But we have to start somewhere, and we have to start with a plan.

Link: sentientdevelopments.com/making-brains-reverse-engineering-human.html

David Chalmers: Consciousness is not substrate dependent

It is widely accepted that conscious experience has a physical basis. That is, the properties of experience (phenomenal properties, or qualia) systematically depend on physical properties according to some lawful relation. There are two key questions about this relation. The first concerns the strength of the laws: are they logically or metaphysically necessary, so that consciousness is nothing “over and above” the underlying physical process, or are they merely contingent laws like the law of gravity? This question about the strength of the psychophysical link is the basis for debates over physicalism and property dualism. The second question concerns the shape of the laws: precisely how do phenomenal properties depend on physical properties? For instance, what sort of physical properties enter into the laws’ antecedents, and consequently, what sort of physical systems can give rise to conscious experience? It is this second question that I address in this paper.

Link: sentientdevelopments.com/david-chalmers-consciousness-is-not.html

Status of Reverse Engineering the Brain

Computer simulations of the brain already allow experiments impossible to carry out with animals. “As good as modern neuroscience is—and it has been brilliant over the last two decades—we can’t really sample every neuron and every synapse as they are performing a behavior,” notes consciousness researcher Gerald Edelman, MD, PhD, director of the Neurosciences Institute and chair of neurobiology at the Scripps Research Institute in San Diego, California.

Link: http://nextbigfuture.com/status-of-reverse-engineering-brain.html

Fun

Artificial Flight and Other Myths

a reasoned examination of A.F. by top birds

Over the past sixty years, our most impressive developments have undoubtedly been within the industry of automation, and many of our fellow birds believe the next inevitable step will involve significant advancements in the field of Artificial Flight.  While residing currently in the realm of science fiction, true powered, artificial flying mechanisms may be a reality within fifty years.  Or so the futurists would have us believe.  Despite the current media buzz surrounding the prospect of A.F., a critical examination of even the most basic facts can dismiss the notion of true artificial flight as not much more than fantasy.

Link: dresdencodak.com/artificial-flight-and-other-myths-a-reasoned-examination-of-af-by-top-birds/

Videos

Whole Brain Emulation: The Logical Endpoint of Neuroinformatics?

The idea of creating a faithful, one-to-one computer copy of a human brain has been a popular philosophical thought experiment and science fiction plot for decades. While computational neuroscience and systems biology are currently very far away from this goal, the trends towards large-scale simulation, industrialized neuroinformatics, new forms of microscopy and powerful computing clusters point in this direction and are enabling new forms of simulations of unprecedented scope. In this talk I will discuss current estimates of how close we are to achieving emulated brains, technological requirements, research challenges and some of the possible consequences.
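
For a flavour of how such estimates are usually put together, here is a back-of-envelope Python sketch. Every number in it is an assumption chosen for illustration (neuron count, connectivity, firing rate, cost per synaptic event), not a figure from the talk; changing the assumed level of biological detail moves the answer by many orders of magnitude.

```python
# All figures below are illustrative assumptions, not estimates from the talk.
neurons = 8.6e10                     # assumed number of neurons
synapses_per_neuron = 1e4            # assumed average connectivity
firing_rate_hz = 1.0                 # assumed mean firing rate
flops_per_synaptic_event = 100.0     # assumed cost per synaptic update (model-dependent)

events_per_second = neurons * synapses_per_neuron * firing_rate_hz
required_flops = events_per_second * flops_per_synaptic_event
print(f"~{required_flops:.1e} FLOP/s at this (very coarse) level of detail")
# ~8.6e+16 FLOP/s, i.e. on the order of a hundred petaFLOP/s, under these assumptions
```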

The Great Singularity Debate

The Singularity and the outer limits of physical possibility (08:38)
Do human brains run software? (09:58)
Consciousness, intelligence, and computation (03:14)
What could minds be made of? (13:08)
Is mind-uploading a dualist dream? (19:18)
Would the Singularity be a Vonnegut-style catastrophe? (10:56)
