# News


## Miscellaneous Items 20140605

Why is the material implication of classical logic (also known as material conditional or material consequence), p -> q, defined to be false only when its antecedent (p) is true and the consequent (q) is false? Here is an informal way to think about it.

You could view logic as metamathematics, a language designed to talk about mathematics. Logic as the “hygiene”, the grammar and syntax of mathematics.

In the language of classical logic every proposition is either true or not true, and no proposition can be both true and not true. Now what if we want to express the natural language construction “If…then…” in this language? Well, there are exactly sixteen possible truth functions of two inputs p and q (since there are 2^2 = 4 possible input combinations, and 2^(2^2) = 16 ways to map them to outputs). And the candidate that best captures the connotations of what we mean by “If…then…” is the definition of material implication. Here is why.
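The counting argument can be checked in a few lines of Python (a minimal sketch; the names `rows` and `truth_functions` are just chosen here for illustration):

```python
from itertools import product

# The 2^2 = 4 possible input rows (p, q):
rows = list(product([True, False], repeat=2))
assert len(rows) == 2 ** 2

# A truth function picks one output for each of the 4 rows,
# so there are 2^4 = 16 distinct truth functions of two inputs:
truth_functions = list(product([True, False], repeat=len(rows)))
assert len(truth_functions) == 2 ** (2 ** 2)
```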

By stating that p -> q is true we want to indicate that the truth of q can be inferred from the truth of p, but that nothing in particular can be inferred from the falsity of p. And this is exactly the meaning captured by the material conditional:

| p | q | p -> q |
|---|---|--------|
| T | T | T      |
| T | F | F      |
| F | T | T      |
| F | F | T      |
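The table above can be reproduced with a short Python sketch (the helper name `implies` is an assumption of this example, not standard notation):

```python
from itertools import product

def implies(p, q):
    # Material conditional: false only when the antecedent is true
    # and the consequent is false.
    return (not p) or q

# Print the truth table row by row:
for p, q in product([True, False], repeat=2):
    print(p, q, implies(p, q))
```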

First, when “If p, q” is true, and we also know that p is true, then we want to be able to infer q. In other words, if we claim that if p is true then q is true, then if p is indeed true, q should be true as well. This basic rule of inference has a name, it is called modus ponens.

Second, if we claim “If p, q”, then if p is false, we did not say anything in particular about q. If p is false, q can either be true or false, our claim “If p, q” is still true.

But notice that it is not possible to capture all notions of what we colloquially mean by “If…then…” statements as a two-valued truth function.

It is for example possible to make meaningless statements such as “If grass is red then the moon is made of cheese.” This is however unproblematic under the assumption that logic is an idealized language, adequate for mathematical reasoning, in which we are mainly interested in simplicity and clarity. Under this assumption, such nonsense implications are analogous to grammatically correct but meaningless sentences that can be formed in natural languages, such as “Colorless green ideas sleep furiously“.

To demonstrate its adequacy for mathematics, here is a mathematical example:

If n > 2 then n^2 > 4.

We claim that if n is greater than 2 then its square must be greater than 4. For n = 3, this is obviously true, as we claimed. But what about n not greater than 2? We didn’t say anything in particular about such n. Its square could be larger than 4 or not. And indeed, n = 1 and n = -3 yield a false, respectively true, consequent. Yet the implication is true in both cases.
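Reading the claim as a material conditional, it can be verified exhaustively over a sample range (a minimal sketch; the helper name `holds` is chosen here for illustration):

```python
def holds(n):
    # "If n > 2 then n^2 > 4" as a material conditional: (not p) or q.
    return (not (n > 2)) or (n ** 2 > 4)

# True whether the antecedent is true (n = 3), or false with a false
# consequent (n = 1), or false with a true consequent (n = -3):
assert holds(3) and holds(1) and holds(-3)
assert all(holds(n) for n in range(-100, 101))
```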

Intuitively more problematic are statements such as (p AND not(p)) -> q: p together with its negation implies q. Think about it this way. This implication is a tautology; it is always true. And you believe true statements. That does not, however, mean that you must believe that an arbitrary q is true too (as long as you stay consistent), since when the antecedent is false you are not making any particular claim about the truth of the consequent q. And the antecedent p AND not(p) is always false: the principle of exclusive disjunction for contradictories, (P ∨ ¬P) ∧ ¬(P ∧ ¬P), requires that every proposition is either true or not true, and that no proposition can be both. So q can be false without invalidating the implication.
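That (p AND not(p)) -> q is a tautology can be confirmed by brute force over all assignments (a minimal sketch; `implies` is again a name chosen here):

```python
from itertools import product

def implies(p, q):
    # Material conditional: false only when p is true and q is false.
    return (not p) or q

# The antecedent p and not(p) is false under every assignment,
# so the conditional is true under every assignment, i.e. a tautology:
assert all(implies(p and not p, q)
           for p, q in product([True, False], repeat=2))
```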

Another way to look at p -> q is by interpreting it as “p is a subset of q”. Then if it is true that x is an element of p, then it must be true that it is also an element of q (since q contains p). However, if x is not an element of p, then it might still turn out to be an element of q, since q can be larger than p.
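The subset reading can be illustrated with the earlier n > 2 example, restricted to a small finite universe (an illustrative sketch; the universe range is an arbitrary assumption of this example):

```python
universe = range(-10, 11)
p = {n for n in universe if n > 2}        # where the antecedent is true
q = {n for n in universe if n ** 2 > 4}   # where the consequent is true

# p -> q holds on this universe exactly because p is contained in q:
assert p <= q

# Elements outside p may or may not lie in q:
assert 1 not in p and 1 not in q    # antecedent false, consequent false
assert -3 not in p and -3 in q      # antecedent false, consequent true
```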

————–

Here is a term I just learnt: Extraneous solutions.

Take for example the equation

A = B.

If you were to square both sides you would get

A^2 = B^2

or

A^2 - B^2 = 0,

which factors as

(A - B)(A + B) = 0 (by the difference of two squares).

Now the roots of this equation are the roots of the equations A = B and A = -B. This means that we generated an additional solution by squaring the original equation.

The reason for this is that squaring is not an injective function (injective means one-to-one: distinct inputs are always mapped to distinct outputs), so it is not invertible. The function y = x^2 does not pass the horizontal line test. In other words, squaring preserves equality — if A = B then A^2 = B^2 — but does not preserve inequality. It is not true that if A != B then A^2 != B^2, since both -1 and 1 are mapped to 1 when squared. So, for example, both x = 1 and x = -1 solve the squared equation x^2 = 1, while only one of them solves each of the pre-squared equations x = 1 and x = -1.
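A concrete instance, taking the original equation to be x = 2 (so A = x and B = 2; the variable names are chosen here for illustration):

```python
# Squaring x = 2 gives x^2 = 4, whose roots are those of x = 2
# together with those of x = -2:
roots_squared = [x for x in range(-10, 11) if x ** 2 == 4]
assert roots_squared == [-2, 2]

# Only one of them satisfies the original, pre-squared equation;
# x = -2 is the extraneous solution introduced by squaring:
roots_original = [x for x in roots_squared if x == 2]
assert roots_original == [2]
```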

————–


Milky Way may bear 100 million life-giving planets

The new Obama doctrine on climate change will achieve CO2 emission reductions from the power sector of approximately 30% below 2005 levels.

## Miscellaneous Items 20140604

North Korea as seen from the ISS

North Korea is really dark. Flying over East Asia, an Expedition 38 crew member on the ISS took this night image of the Korean Peninsula on January 30, 2014.

————–

The math we learn in school can seem like a dull set of rules, laid down by the ancients and not to be questioned. In How Not to Be Wrong, Jordan Ellenberg shows us how wrong this view is: Math touches everything we do, allowing us to see the hidden structures beneath the messy and chaotic surface of our daily lives. It’s a science of not being wrong, worked out through centuries of hard work and argument.

HOW NOT TO BE WRONG

————–

If You Learn Nothing Else about Bayes’ Theorem, Let It Be This

2,302,554,979 BC; Galactic Core – A short story by Yvain about acausal trade. Related to Roko’s basilisk.

Drawing fractal trees and Sierpinski triangles with Python’s turtle graphics module. See also here.

Dangerous Delusions: The Green Movement’s War on Progress

————–

…if you think about it, it doesn’t make any sense. Why would you care more for your genetic siblings and cousins and whoever than for your friends and people who are genuinely close to you? That’s like racism – but even worse, at least racists identify with a group of millions of people instead of a group of half a dozen. Why should parents have to raise children whom they might not even like, who might have been a total accident? Why should people, motivated by guilt, make herculean efforts to “keep in touch” with some nephew or cousin whom they clearly would be perfectly happy to ignore entirely?

Asches to Asches (another “short story” by Yvain).

————–

Ten years from now:

…one widely accepted viewpoint holds that fusion power, artificial intelligence, and interstellar migration will shortly solve all our problems, and therefore we don’t have to change the way we live.

[…]

A hundred years from now:

It has been a difficult century. After more than a dozen major wars, three bad pandemics, widespread famines, and steep worldwide declines in public health and civil order, human population is down to 3 billion and falling.

Continue reading: The Next Ten Billion Years

————–

4 DARPA Projects That Could Be Bigger Than the Internet

3 guys Irish dancing around the world

The decline of Detroit in time-lapse.

Electrical ‘mind control’ shown in primates for first time

## Highly intelligent and successful people who hold weird beliefs

Here is a list of people who hold beliefs that I would dismiss, regardless of the fact that they have thought long and hard about their beliefs, are MUCH smarter than me, and can prove this by extraordinary achievements.

Extraordinary claims require extraordinary evidence. And some claims are of such nature that arguments alone do not suffice. Some claims require hard empirical evidence, or an overwhelming consensus among intelligent experts.

The point of the list is partly to show that it is possible to be very smart, and successful, and yet hold beliefs that are widely regarded as unsupported, absurd, or simply flawed.

You should expect there to be many more such people, since this list is not the result of active research but only contains people that I stumble upon. If you know of other people that fall into this category, please let me know.

Also note that I am not claiming that the beliefs held by these people are necessarily wrong (although some of them almost certainly are).

Further note that intelligent people tend to be right much more often than less intelligent people. You should listen to what they have to say, and take it seriously.

Note: In cases where it might not be obvious to all readers, the ‘weird’ beliefs are underlined.

——————

Kary Mullis (Nobel Prize-winning American biochemist) promotes AIDS denialism, climate change denial and a belief in astrology. Mullis disputes the big bang theory. He also claims to have chatted with a glowing raccoon that he met at midnight while on his way to the loo, and to have lost the ensuing six hours as a result of an alien abduction. The improvements made by Mullis allowed polymerase chain reaction (PCR) to become a central technique in biochemistry and molecular biology, described by The New York Times as “highly original and significant, virtually dividing biology into the two epochs of before P.C.R. and after P.C.R.”

Brian David Josephson (Nobel laureate and professor emeritus of physics at the University of Cambridge) argues that parapsychological phenomena (telepathy, psychokinesis and other paranormal themes) may be real. Josephson also supports water memory (homeopathy) and cold fusion.

Peter Duesberg (a professor of molecular and cell biology at the University of California, Berkeley) claimed that AIDS is not caused by HIV, which made him so unpopular that his colleagues and others have — until recently — been ignoring his potentially breakthrough work on the causes of cancer.

Luc Antoine Montagnier (Nobel laureate and virologist) is claiming that DNA can send “electromagnetic imprints” of itself into distant cells and fluids. Montagnier also spoke in 2012 at that cesspit of antivaxxer woo, AutismOne, where he claimed that long-term antibiotic treatment can cure autistic children. He concluded by saying: “I realise how audacious, and even shocking, these successful experiments may appear to unprepared minds.”

Fred Hoyle (an English astronomer noted primarily for the theory of stellar nucleosynthesis) claimed that the fossil Archaeopteryx was a man-made fake. He also claimed a correlation of flu epidemics with the sunspot cycle: the idea was that flu contagion was scattered in the interstellar medium and reached Earth only when the solar wind had minimum power. He further rejected Earth-based abiogenesis.

Kurt Gödel (logician, mathematician and philosopher) had a tendency toward paranoia. He believed in ghosts; he had a morbid dread of being poisoned by refrigerator gases; he refused to go out when certain distinguished mathematicians were in town, apparently out of concern that they might try to kill him. He also believed that materialism is false and that the world in which we live is not the only one in which we shall live or have lived.

Donald Knuth (a world-renowned computer scientist) is a Lutheran and the author of 3:16 Bible Texts Illuminated.

Robert Aumann (Nobel laureate and Bayesian rationalist) is a believing Orthodox Jew who has supported Bible Code research.

Francisco J. Ayala (has been called the “Renaissance Man of Evolutionary Biology”) identifies as a Christian and has said that “science is compatible with religious faith in a personal, omnipotent and benevolent God.” His discoveries have opened up new approaches to the prevention and treatment of diseases that affect hundreds of millions of individuals worldwide.

Francis Collins (geneticist, Human Genome Project), noted for his landmark discoveries of disease genes and his leadership of the Human Genome Project (HGP), and described by the Endocrine Society as “one of the most accomplished scientists of our time”, is an evangelical Christian. He advocates the perspective that belief in Christianity can be reconciled with acceptance of evolution and science, especially through the advancement of evolutionary creation.

Roger Penrose (mathematical physicist, mathematician and philosopher of science) argues that known laws of physics are inadequate to explain the phenomenon of consciousness.

Saul Aaron Kripke (McCosh Professor of Philosophy, Emeritus, at Princeton University and teaches as a Distinguished Professor of Philosophy at the CUNY Graduate Center) is an observant Jew. Discussing how his religious views influenced his philosophical views (in an interview with Andreas Saugstad) he stated: “I don’t have the prejudices many have today, I don’t believe in a naturalist world view. I don’t base my thinking on prejudices or a worldview and do not believe in materialism.” Since the 1960s Kripke has been a central figure in a number of fields related to mathematical logic, philosophy of language, philosophy of mathematics, metaphysics, epistemology, and set theory.

John von Neumann (mathematician, physicist, inventor and polymath) was a strong supporter of preventive war. Von Neumann favored an unprovoked surprise nuclear first-strike on the Soviet Union. Life magazine quoted von Neumann as saying, “If you say why not bomb them tomorrow, I say why not today? If you say today at 5 o’clock, I say why not one o’clock?” Nobel Prize-winning physicist Eugene Wigner said of von Neumann that “only he was fully awake.”

Link: ‘Prisoner’s Dilemma’ by William Poundstone, Page 4

Frank J. Tipler (a mathematical physicist and cosmologist) believes that the universe is evolving towards a maximum level of complexity and consciousness he calls the Omega Point. Tipler identifies the Omega Point with God.

Otto Eberhard Rössler (Professor for Theoretical Biochemistry, known for his work on chaos theory) asserts that the LHC experiments have the potential to create low velocity micro black holes that could grow in mass or release dangerous radiation leading to doomsday scenarios, such as the destruction of the Earth. He has attempted to halt the beginning of the experiments through petitions to the US and European Courts.

David Gelernter (computer science at Yale University) is a denier of anthropogenic global warming and buys into intelligent design.

Elon Musk (CEO and CTO of SpaceX, CEO and chief product architect of Tesla Motors) claims that with artificial intelligence we are summoning the demon and compares the potential dangers of artificial intelligence to nuclear weapons. He believes that the risk of something seriously dangerous happening is in the five year timeframe. 10 years at most.

Ray Kurzweil (inventor and director of engineering at Google) claims that a technological singularity will occur in 2045. Kurzweil was the principal inventor of the first CCD flatbed scanner, the first omni-font optical character recognition, the first print-to-speech reading machine for the blind, the first commercial text-to-speech synthesizer, the first music synthesizer Kurzweil K250 capable of recreating the grand piano and other orchestral instruments, and the first commercially marketed large-vocabulary speech recognition.

Linus Pauling (one of the most influential chemists in history and among the most important scientists of the 20th century) promoted orthomolecular medicine, megavitamin therapy and vitamin C for treating cancer.

Nassim Nicholas Taleb (essayist, scholar, statistician, risk analyst and bestselling author) portrays GMOs as a “catastrophe in waiting”, and has taken to personally lashing out at those who challenge his conclusions. He recently accused Anne Glover, the European Union’s Chief Scientist and one of the most respected scientists in the world, of being a “dangerous imbecile” for arguing that GM crops and foods are safe and that Europe should apply science-based risk analysis to the GMO approval process, views reflected in summary statements by every major independent science organization in the world.

Ivar Giaever (Nobel Prize-winning physicist) believes that man-made global warming is a “new religion” and pseudoscience.

Freeman Dyson (theoretical physicist and mathematician) believes that man-made climate change is, on the whole, good, and that CO2 is so beneficial it would be crazy to try to reduce it.

Max Tegmark (professor at the Massachusetts Institute of Technology) promotes the mathematical universe hypothesis, that “all structures that exist mathematically exist also physically”.

Georges Lemaître proposed what became known as the Big Bang theory of the origin of the Universe. He was a Belgian Roman Catholic priest.


## Photo: Alexander Kruel, an unfriendly AI?

Just playing around with GIMP a little bit. And yes, that’s a Matrix plug 🙂

Alexander Kruel, 2029


The following formulation of Richard’s paradox is from the book Computability and Logic, Chapter 2, Diagonalization, Problem 2.13:

Q: What (if anything) is wrong with the following argument?

The set of all finite strings of symbols from the alphabet, including the space, capital letters, and punctuation marks, is enumerable; and for definiteness let us use the specific enumeration of finite strings based on prime decomposition. Some strings amount to definitions in English of sets of positive integers and others do not. Strike out the ones that do not, and we are left with an enumeration of all definitions in English of sets of positive integers, or replacing each definition by the set it defines, an enumeration of all sets of positive integers that have definitions in English. Since some sets have more than one definition, there will be redundancies in this enumeration of sets. Strike them out to obtain an irredundant enumeration of all sets of positive integers that have definitions in English.

Now consider the set of positive integers defined by the condition that a positive integer n is to belong to the set if and only if it does not belong to the nth set in the irredundant enumeration just described.

This set does not appear in that enumeration. For it cannot appear at the nth place for any n, since there is a positive integer, namely n itself, that belongs to this set if and only if it does not belong to the nth set in the enumeration. Since this set does not appear in our enumeration, it cannot have a definition in English. And yet it does have a definition in English, and in fact we have just given such a definition in the preceding paragraph.
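The diagonal construction at the heart of the argument can be sketched for a finite toy enumeration (an illustrative sketch only; the four sets chosen here are arbitrary assumptions, and the actual paradox concerns an enumeration of *all* English-definable sets):

```python
# A toy "enumeration" of sets of positive integers, each represented
# by its membership predicate:
enumeration = [
    lambda n: n % 2 == 0,  # the even numbers
    lambda n: n % 2 == 1,  # the odd numbers
    lambda n: True,        # all positive integers
    lambda n: False,       # the empty set
]

def diagonal(n):
    # n belongs to the diagonal set iff it does NOT belong to the nth set.
    return not enumeration[n - 1](n)

# The diagonal set differs from the nth listed set at the witness point n,
# so it cannot appear anywhere in the enumeration:
for n in range(1, len(enumeration) + 1):
    assert diagonal(n) != enumeration[n - 1](n)
```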

## Q&A with Mark Changizi on risks from AI

Dr. MARK CHANGIZI is an evolutionary neurobiologist aiming to grasp the ultimate foundations underlying why we think, feel and see as we do. His research focuses on “why” questions, and he has made important discoveries such as on why we see in color, why we see illusions, why we have forward-facing eyes, why letters are shaped as they are, why the brain is organized as it is, why animals have as many limbs and fingers as they do, and why the dictionary is organized as it is.

He attended the Thomas Jefferson High School for Science and Technology, and then went on to the University of Virginia for a degree in physics and mathematics, and to the University of Maryland for a PhD in math. In 2002 he won a prestigious Sloan-Swartz Fellowship in Theoretical Neurobiology at Caltech, and in 2007 he became an assistant professor in the Department of Cognitive Science at Rensselaer Polytechnic Institute. In 2010 he took the post of Director of Human Cognition at a new research institute called 2ai Labs.

He has more than three dozen scientific journal articles, some of which have been covered in news venues such as the New York Times, Wall Street Journal, Newsweek and WIRED. He has written three books, THE BRAIN FROM 25,000 FEET (Kluwer 2003), THE VISION REVOLUTION (Benbella 2009) and HARNESSED: How Language and Music Mimicked Nature and Transformed Ape to Man (Benbella 2011). He is working on his fourth non-fiction book, this one on emotions and facial expressions, called FORCE OF EMOTIONS. He is simultaneously working on his first novel, called HUMAN 3.0. [Homepage]

### THE INTERVIEW

Q1: Assuming beneficial political and economic development and that no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of artificial intelligence that is roughly as good as humans (or better, perhaps unevenly) at science, mathematics, engineering and programming?

Mark Changizi: 100, 1000, 5000

Q2: Once we build AI that is roughly as good as humans (or better, perhaps unevenly) at science, mathematics, engineering and programming, how much more difficult will it be for humans and/or AIs to build an AI which is substantially better at those activities than humans?

Mark Changizi: Not much more difficult. We weren’t “designed” to *do* science, math, etc., and so probably can eventually build something actually *good* at it. It will be much harder to build stuff better than what we evolved to be good at.

Q3: Do you ever expect artificial intelligence to overwhelmingly outperform humans at typical academic research, in the way that they may soon overwhelmingly outperform humans at trivia contests, or do you expect that humans will always play an important role in scientific progress?

Mark Changizi: Yes. Similar reasons as (2). Similar also for games like chess. For tasks where, when good, people are labeled “genius”, we as a species are usually bad at the task. …and AI will one day surpass us. It is the tasks we take for granted, and are never called “genius” for doing them, that will be hard to get AI to do.

Q4: What probability do you assign to the possibility of an AI with initially roughly professional human-level competence (or better, perhaps unevenly) at general reasoning (including science, mathematics, engineering and programming) to self-modify its way up to vastly superhuman capabilities within a matter of hours/days/< 5 years?

Mark Changizi: Zero, if it means self-modification to become better at the wide range of reasoning. If we can design AI to be good at chess, science, etc., then it doesn’t mean it has any extra insight at how to ramp up its own abilities generally. That will likely take a long evolutionary-selection process, not just self-learning. Smart things are smart by virtue of “instinct” being crammed inside, not by virtue of having general-purpose learning algorithms inside.

Q5: How important is it to research risks associated with artificial intelligence that is good enough at general reasoning (including science, mathematics, engineering and programming) to be capable of radical self-modification, before attempting to build one?

Mark Changizi: Not very. Or, much much less than sci-fi would have us believe. No reason, though, to connect up the entirety of civilization to its brain (like in Terminator).

Q6: What probability do you assign to the possibility of human extinction within 100 years as a result of AI capable of self-modification (that is not provably non-dangerous, if that is even possible)? P(human extinction by AI | AI capable of self-modification and not provably non-dangerous is created).

Mark Changizi: 1:10^{6}


## Q&A with experts on risks from AI #6

Dr. Randal A. Koene is CEO and Founder of the not-for-profit science foundation carboncopies.org and CEO of start-up company NeuraLink Co., developing wireless neural probes at cellular scale. He is also the Director of Science for the 2045 Foundation.

Previously, Dr. Koene was Director of Analysis at nanotechnology company Halcyon Molecular in Silicon Valley (2010-2012), and Director of the Department of Neuroengineering at Tecnalia, third largest private research organization in Europe (2008-2010). He is a former Prof. at the Center for Memory and Brain of Boston University, and Founder of the Neural Engineering Corporation of Massachusetts. His research objective is whole brain emulation, creating the large-scale high-resolution representations and emulations of activity in neuronal circuitry that are needed in patient-specific neuroprostheses.

Dr. Koene has professional expertise in computational neuroscience, neural engineering, psychology, information theory, electrical engineering and physics. He organizes neural engineering efforts to obtain and replicate function and structure information that resides in the neural substrate for use in neuroprostheses and neural interfaces. Dr. Koene devised NETMORPH (netmorph.org), a computational framework for the simulated morphological development of neuronal circuitry, and a prototype for the development of Virtual Brain Laboratories to give neuroscientists, neuroengineers and clinicians large-scale high-resolution quantitative tools analogous to the computational tools that have become essential in fields such as genetics, chemistry or the aero-space industry. This effort bridges scales and will help determine how significant functions are encoded robustly in neural ensembles, and how those functions can nevertheless depend in specific ways on the detailed biophysics of particular component physiology.

Dr. Koene earned his Ph.D. in Computational Neuroscience at the Department of Psychology at McGill University, and his M.Sc. in Electrical Engineering with a specialization in Information Theory at Delft University of Technology. He is a member of the Oxford working group that convened in 2007 to create a first roadmap toward whole brain emulation (a descriptive term for the technological accomplishment of mind transfer to a different substrate that was first coined by Dr. Koene on his minduploading.org website in 2000).

Visit Dr. Koene’s personal web site randalkoene.com, carboncopies.org, minduploading.org or watch Dr. Koene present and discuss at TEDx in Tallinn, Estonia: Machines in Minds to Reverse engineer the Machine that is Mind.

Personal FAQ is available, based on biographical questions received. [Homepage] [Wikipedia]

Unlike other projects attempting to create a strong artificial intelligence on the basis of existing weak methods that implement some or other cognitive functions, AIDEUS develops models of intelligent behavior which are characterized by well-grounded universality, while increasing their practical applicability. Their goal is the creation of a strong artificial intelligence, the solution of fundamental scientific problems, and the achievement of universality by embodied machine learning and decision-making systems. [Homepage]

The questions were answered by Alexey Potapov of AIDEUS.

### THE INTERVIEW

Randal Koene: I do not think I would call myself an expert on A(G)I Risk. My expertise lies in the field of Whole Brain Emulation. But I do have overlapping interests with the AGI camp, so I think it is fine for me to try to answer your questions.

Q1: Assuming beneficial political and economic development and that no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of artificial intelligence that is roughly as good as humans (or better, perhaps unevenly) at science, mathematics, engineering and programming?

Randal Koene: My estimates as of Dec. 2012 are:

10% by 2020
50% by 2035
90% by 2050

AIDEUS (Alexey Potapov): 2025/2035/2050

Q2: Once we build AI that is roughly as good as humans (or better, perhaps unevenly) at science, mathematics, engineering and programming, how much more difficult will it be for humans and/or AIs to build an AI which is substantially better at those activities than humans?

Randal Koene: I think it depends on your notion of “better”. If to be substantially better then all it takes is to be 1000x faster at carrying out certain tasks involved (e.g. logical reasoning, proofs, etc) then it would not be much more difficult. It would be a matter of providing the necessary purpose-built hardware, possibly with some algorithmic streamlining.

If you mean something of greater conceptual magnitude, such as creating a mind that in addition to our capabilities is able to conceive of itself within a quantum mechanical universe where time and space are mere epiphenomena of the quantum mechanical interactions – and which can usefully progress in such a world… this could take a bit more effort.

AIDEUS (Alexey Potapov): It will be easier.

Q3: Do you ever expect artificial intelligence to overwhelmingly outperform humans at typical academic research, in the way that they may soon overwhelmingly outperform humans at trivia contests, or do you expect that humans will always play an important role in scientific progress?

Randal Koene: Yes. Most of typical academic research is composed of some fairly trivial grunt work. In fact, building an Artificial Researcher is a goal that was probably largely spurred by that realization and the desire to focus on the interesting aspects rather than that grunt work.

So… if we create an AGI with human level performance overall, then I think it is entirely feasible to create an Artificial Researcher with above-human performance in academic research (when averaged).

AIDEUS (Alexey Potapov): Artificial intelligence will overwhelmingly outperform normal humans. However, friendly AI can help humans to advance to its own level.

Q4: What probability do you assign to the possibility of an AI with initially roughly professional human-level competence (or better, perhaps unevenly) at general reasoning (including science, mathematics, engineering and programming) to self-modify its way up to vastly superhuman capabilities within a matter of hours/days/< 5 years?

Randal Koene:

Within hours: less than 0.1
Within days: less than 0.2
Within <5 years: 0.9-1.0

The reason for my thinking in this area is the interdependence of things (see for example the nice movie I, Pencil: http://youtu.be/IYO3tOqDISE). This means that even if you have a thing in a box that is smart, that is not enough to “very quickly” put together everything you need to become a much greater thing. Information, material, energy and much more needs to travel to and fro to make that possible – and those things are limited in their rate in many ways.

AIDEUS (Alexey Potapov): 50%

Q5: How important is it to research risks associated with artificial intelligence that is good enough at general reasoning (including science, mathematics, engineering and programming) to be capable of radical self-modification, before attempting to build one?

Randal Koene: I think it is quite important to be aware of those risks and to take active precautions about possible negative outcomes. On the flip-side, the only way to actually research the matter and become more aware of the real issues involved is to understand it better… and that is only possible if you are intimately familiar with the topic, which again is directly related to working on it. So, I think that the matter is rather “academic” (in all senses of the word) unless one is also directly involved in A(G)I research.

AIDEUS (Alexey Potapov): Importance is moderate, because technical details of safe AI greatly depend on specific way of building AGI.

Q6: What probability do you assign to the possibility of human extinction within 100 years as a result of AI capable of self-modification (that is not provably non-dangerous, if that is even possible)? P(human extinction by AI | AI capable of self-modification and not provably non-dangerous is created).

Randal Koene: VERY difficult question, because in reality the probability of human extinction within 100 years is not only dependent on the possibility of A(G)I. There are many things that could cause our extinction within that time-frame… and those things are in turn not all independent of one-another!

So, I think the probability is greater than 0. But beyond that, I don’t have the data to make an estimate that I would want to stand behind in publication. 😉

AIDEUS (Alexey Potapov): I think that this question is put in the slightly incorrect form, because singularity will bring drastic changes, and humanity will change within 100 years independent of (unsafe) AI. Biological human extinction will not matter. However, P(humans don’t participate in singularity | AI capable of self-modification and not provably non-friendly is created) = 90%. You can consider this as the answer to your question though.


## Taleb vs. Pinker

The “Long Peace” is a Statistical Illusion by Nassim Taleb

“…to my horror I saw an identical theory of great moderation produced by Steven Pinker with the same naive statistically derived discussions…”

Fooled by Belligerence: Comments on Nassim Taleb’s “The Long Peace is a Statistical Illusion” by Steven Pinker

“Taleb shows no signs of having read Better Angels with the slightest attention to its content.”

## My posts published in h+ Magazine

Three of my posts have recently been published in h+ Magazine:

## [Link] Voting: Individual vs. Group Rationality

If everyone in your clan is what we’ll call “narrowly rational”, and so abstains from voting, the predictable outcome will be bad. But it is not rational, for individuals within a group that will foreseeably face a Prisoners’ Dilemma, to shrug and say “that sucks” and wait for everything to go to hell. Instead, people work to find means of reshaping their confederates’ behavior to prevent narrowly rational but collectively destructive choices.

[…]

A smarty-pants might come along and point out the weak foundations of the pro-voting ideology, declaring that he is only being rational and his compatriots are clearly mistaken. But it is our smarty-pants who is being irrational. Suppose he makes the “decisive argument” (which one is much more likely to make than to cast the decisive vote, since the influence of well crafted words need not be proportionate to 1/n). By telling “the truth” to his kinsmen, he is very directly reducing his own utility, not to mention the cost he bears if his preferences include within-group altruism. In order to be rational, we must profess to others and behave as though we ourselves believe things which are from a very reductive perspective false, even when those behaviors are costly. That is to say, in order to behave rationally, our relationship to claims like “your vote counts!” must be empirically indistinguishable from belief, whether or not we understand the sense in which the claim is false.

Of course, it would be perfectly rational for a smarty-pants to make his wrongheaded but compelling argument about the irrationality of voting to members of the other clan. But it would be irrational for members of either group to take such arguments seriously, by whomever they are made and despite the sense in which they are true.

So, when elections have strong intergroup distributional consequences, not only is voting rational, misleading others about the importance of each vote is also rational, as is allowing oneself to be misled (unless you are sure you are an ubermensch apart, and the conditions of your immunity don’t imply that others will also be immune).

