(Note: The author of the quote and possibly a link to the context can be found below each quotation.)
Getting the general brain properties isn’t enough. Instead, the builder is saddled with the onerous task of packing the brain with a mountain of instincts (something that will require many generations of future scientists to unpack, as they struggle to build the teleome), and somehow managing to encode all that wisdom in the fine structure of the brain’s organization.
— Mark Changizi, Later Terminator: We’re Nowhere Near Artificial Brains
The root of these misconceptions is the radical underappreciation of the design engineered by natural selection into the powers implemented by our bodies and brains, something central to my 2009 book, The Vision Revolution. For example, optical illusions (such as the Hering illusion) are not examples of the brain’s poor hardware design, but, rather, consequences of intricate evolutionary software for generating perceptions that correct for neural latencies in normal circumstances. And our peculiar variety of color vision, with two of our sensory cones having sensitivity to nearly the same part of the spectrum, is not an accidental mutation that merely stuck around, but, rather, appears to function with the signature of hemoglobin physiology in mind, so as to detect the color signals primates display on their faces and rumps.
These and other inborn capabilities we take for granted are not kluges, they’re not “good enough,” and they’re more than merely smart. They’re astronomically brilliant in comparison to anything humans are likely to invent for millennia.
— Mark Changizi, ‘Humans, Version 3.0’
I don’t believe that classical computers can simulate brain activity. The brain is the most complicated object or machine in the universe. Every adult human brain contains 100 billion neurons, and every neuron is different. How many possibilities for interaction between different neurons are there? We don’t have a full understanding of how a brain works yet, but I cannot see any digital computer ever performing a fine-grained simulation of a human brain.
…please bear in mind that the relation of Solomonoff induction and “Universal AI” to real-world general intelligence of any kind is also rather wildly speculative… This stuff is beautiful math, but does it really have anything to do with real-world intelligence? These theories have little to say about human intelligence, and they’re not directly useful as foundations for building AGI systems (though, admittedly, a handful of scientists are working on “scaling them down” to make them realistic; so far this only works for very simple toy problems, and it’s hard to see how to extend the approach broadly to yield anything near human-level AGI). And it’s not clear they will be applicable to future superintelligent minds either, as these minds may be best conceived using radically different concepts.
— Ben Goertzel, Are Prediction and Reward Relevant to Superintelligences?
About the hypothetical uber-intelligence that wants to tile the cosmos with molecular Mickey Mouses — I truly don’t feel confident making any assertions about a real-world system with vastly greater intelligence than me. There are just too many unknowns. Sure, according to certain models of the universe and intelligence that may seem sensible to some humans, it’s possible to argue that a hypothetical uber-intelligence like that would relentlessly proceed in tiling the cosmos with molecular Mickey Mouses. But so what? We don’t even know that such an uber-intelligence is even a possible thing — in fact my intuition is that it’s not possible.
— Ben Goertzel, Muehlhauser-Goertzel Dialogue, Part 1
There’s the minor issue of Hume’s problem of induction, as well. I.e., the issue that, in the rational and scientific world-view, we have no rational reason to believe that any patterns observed in the past will continue into the future. This is an ASSUMPTION, plain and simple — an act of faith. Occam’s Razor (which is one way of justifying and/or further specifying the belief that patterns observed in the past will continue into the future) is also an assumption and an act of faith. Science and reason rely on such acts of faith, yet provide no way to justify them. A big gap.
— Ben Goertzel, Muehlhauser-Goertzel Dialogue, Part 1
… it is easy to prove that for essentially any pair of Bayesians, each thinks the other is crazy.
— Larry Wasserman, Freedman’s Neglected Theorem
…if you can’t explain how to simulate your theory on a computer, chances are excellent that the reason is that your theory makes no sense!
…often your goal is not to find the best solution, but just a better solution than your competitors. And often we terminate trains of thought long before they lead to hard instances of NP-complete problems—we’re usually not even conscious that that’s what we’re doing; we just have an intuition that “such-and-such would require a hopeless search.”
‘Some years ago I was trying to decide whether or not to move to Harvard from Stanford. I had bored my friends silly with endless discussion. Finally, one of them said, “You’re one of our leading decision theorists. Maybe you should make a list of the costs and benefits and try to roughly calculate your expected utility.” Without thinking, I blurted out, “Come on, Sandy, this is serious.”’
— Persi Diaconis, The Problem of Thinking Too Much
Pythagoras could not have imagined the uses to which his equation would be put (if, indeed, he ever came up with the equation himself in the first place). The same applies to almost all of the equations in this book. They were studied/discovered/developed by mathematicians and mathematical physicists who were investigating subjects that fascinated them deeply, not because they imagined that two hundred years later the work would lead to electric light bulbs or GPS or the internet, but rather because they were genuinely curious.
— Ian Stewart, 17 Equations That Changed the World
With a powerful radio source, you can PM everyone in your future light-cone unfortunate enough to decode your message, potentially hijacking entire distant civilizations of decision-theory users.
— Oso_42, Comments on Pascal’s Mugging – Less Wrong Discussion
Looking at what happens during cryonics, I do not see any physically possible way this damage could ever be repaired. Reading the structure and “downloading it” is impossible, since many aspects of synaptic strength and connectivity are irretrievably lost as soon as the synaptic membrane gets distorted. You can’t simply replace unfolded proteins, since their relative position and concentration (and modification, and current status in several different signalling pathways) determines what happens to the signals that go through that synapse; you would have to replace them manually, which is a) impossible to do without destroying surrounding membrane, and b) would take thousands of years at best, even if you assume maximally efficient robots doing it (during which period molecular drift would undo the previous work).
Etc, etc. I can’t even begin to cover complications I see as soon as I look at what’s happening here. I’m all for life extension, I just don’t think cryonics is a viable way to accomplish it.
I’ve signed up for cryonics (with Alcor) because I believe that if civilization doesn’t collapse then within the next 100 years there will likely be an intelligence trillions upon trillions of times smarter than anyone alive today.
Personally, I’ve been hearing all my life about the Serious Philosophical Issues posed by life extension, and my attitude has always been that I’m willing to grapple with those issues for as many centuries as it takes.
— Patrick Nielsen Hayden
With UFAI, people’s estimates are about as divergent as with the Second Coming of Jesus Christ, ranging from impossible even in theory through essentially impossible all the way to almost certain.
— taw comments on Optimal Philanthropy for Human Beings – Less Wrong
(UFAI = unFriendly Superhuman Artificial General Intelligence)
What game theorists somewhat disturbingly call rationality is assumed throughout—in other words, game players are assumed to be hedonistic yet infinitely calculating sociopaths endowed with supernatural computing abilities.
A “filter bubble”— “a unique universe of information for each of us”, meaning that we are less likely to encounter information online that challenges our existing views or sparks serendipitous connections. “A world constructed from the familiar is a world in which there’s nothing to learn,” Mr Pariser declares. He calls this “invisible autopropaganda, indoctrinating us with our own ideas”.
— The Economist, ‘The dangers of the internet: Invisible sieve’ (reviewing Eli Pariser’s The Filter Bubble)
…a poor badly-schooled unmarried urban black male dies 17.7 times as often as a rich well-educated married rural Asian woman (of the same age), with a lifespan roughly thirty years shorter on average.
— Robin Hanson, Jobs Kill, BIG Time
We talk an idealistic talk, but walk a practical walk, and try to avoid walking our talk or talking our walk.
— Robin Hanson, Beware Commitment
Science is not in principle committed to the idea that there’s no afterlife or that the mind is identical to the brain […] If it’s true that consciousness is being run like software on the brain and can – by virtue of ectoplasm or something else we don’t understand – be dissociated from the brain at death, that would be part of our growing scientific understanding of the world if we discover it […] But there are very good reasons to think it’s not true. We know this from 150 years of neurology where you damage areas of the brain, and faculties are lost […] You can cease to recognize faces, you can cease to know the names of animals but you still know the names of tools […] What we’re being asked to consider is that you damage one part of the brain, and something about the mind and subjectivity is lost, you damage another and yet more is lost, [but] you damage the whole thing at death, we can rise off the brain with all our faculties intact, recognizing grandma and speaking English!
— Sam Harris’ Argument Against the Afterlife
I think one of the enduring tragedies of the 22nd century will be that during the 20th and 21st centuries we persistently treat nuclear reactors as if they’re nuclear weapons, and nuclear weapons as if they’re nuclear reactors.
I have met people who exaggerate the differences [between the morality of different cultures], because they have not distinguished between differences of morality and differences of belief about facts. For example, one man said to me, “Three hundred years ago people in England were putting witches to death. Was that what you call the Rule of Human Nature or Right Conduct?” But surely the reason we do not execute witches is that we do not believe there are such things. If we did – if we really thought that there were people going about who had sold themselves to the devil and received supernatural powers from him in return and were using these powers to kill their neighbours or drive them mad or bring bad weather, surely we would all agree that if anyone deserved the death penalty, then these filthy quislings did. There is no difference of moral principle here: the difference is simply about matter of fact. It may be a great advance in knowledge not to believe in witches: there is no moral advance in not executing them when you do not think they are there. You would not call a man humane for ceasing to set mousetraps if he did so because he believed there were no mice in the house.
— C. S. Lewis, Stop Being Wrong: A Moral Imperative
If a man proves too clearly and convincingly to himself … that a tiger is an optical illusion – well, he will find out he is wrong. The tiger will himself intervene in the discussion, in a manner which will be in every sense conclusive.
— G. K. Chesterton, Variations on a Theme
It is said that to explain is to explain away. This maxim is nowhere so well fulfilled as in the area of computer programming, especially in what is called heuristic programming and artificial intelligence. For in those realms machines are made to behave in wondrous ways, often sufficient to dazzle even the most experienced observer. But once a particular program is unmasked, once its inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away; it stands revealed as a mere collection of procedures, each quite comprehensible. The observer says to himself “I could have written that”. With that thought he moves the program in question from the shelf marked “intelligent”, to that reserved for curios, fit to be discussed only with people less enlightened than he.
I realised that by donating a large part of my future income to the most efficient charities, I really could save thousands of people’s lives. Since I already have most of the things I really value in life, I thought — why not?
— Toby Ord (Founder of Giving What We Can)
Resurrection without a backup. As with ecosystem reconstruction, such “resurrections” are in fact clever simulations. If the available information is sufficiently detailed, the individual and even close friends and associates are unable to tell the difference. However, transapient informants say that any being of the same toposophic level as the “resurrector” can see marks that the new being is not at all like the old one. “Resurrections” of historical figures, or of persons who lived at the fringes of Terragen civilization and were not well recorded, are of very uneven quality.
Once a psychoblast became self-aware, it was granted citizenship, and intervention without consent became impossible. This was not a matter of mere custom or law; the principle was built into the deepest level of the polis. A citizen who spiraled down into insanity could spend teratau in a state of confusion and pain, with a mind too damaged to authorize help, or even to choose extinction. That was the price of autonomy: an inalienable right to madness and suffering, indistinguishable from the right to solitude and peace.
— Greg Egan, Diaspora
What I regret most is my uncritical treatment of the idea of allowing intelligent life to evolve in the Autoverse. Sure, this is a common science-fictional idea, but when I thought about it properly (some years after the book was published), I realised that anyone who actually did this would have to be utterly morally bankrupt. To get from micro-organisms to intelligent life this way would involve an immense amount of suffering, with billions of sentient creatures living, struggling and dying along the way. Yes, this happened to our own ancestors, but that doesn’t give us the right to inflict the same kind of suffering on anyone else.
— Greg Egan, Dust Theory FAQ
People are always amazed by how much “free time” I have. They’re also amazed that I don’t know who Ally McBeal is. Frankly, I’m amazed that they can’t make the connection.
[…] in the multiple worlds interpretation, we do survive – we just also die. If we ask “Which of the two will I experience?” then it seems totally valid to argue “I won’t experience being dead.”
— abramdemski comments on The Anthropic Trilemma
…it took until more than 80 pages into volume 2 to be able to prove (as “Proposition *110.643”) that 1+1=2 (with the comment “[This] proposition is occasionally useful”).
— Stephen Wolfram, 100 Years Since Principia Mathematica
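As a footnote to Whitehead and Russell’s hundred-year-old feat, the same fact is a one-line theorem in a modern proof assistant. A minimal sketch in Lean 4 (core language only, no extra libraries assumed), where the equality holds by definitional computation on the natural numbers:

```lean
-- In Lean 4, numeric literals are natural numbers by default,
-- and 1 + 1 reduces to 2 by computation, so reflexivity suffices.
theorem one_plus_one : 1 + 1 = 2 := rfl
```

The contrast with the Principia’s 80-plus pages is, of course, unfair: the proof assistant’s kernel and its definitions of the naturals do the work that Whitehead and Russell had to build from bare logic.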
If I were having a philosophical talk with a man I was going to have hanged or electrocuted, I should say, “I don’t doubt that your act was inevitable for you, but to make it more avoidable by others we propose to sacrifice you to the common good. You may regard yourself as a soldier dying for your country if you like. But the law must keep its promises.”
When you understand why you dismiss all the other possible gods, you will understand why I dismiss yours.
— Stephen Roberts
Perfection is achieved, not when there is nothing left to add, but when there is nothing left to take away.
— Antoine de Saint-Exupéry