News


My perspective on American politics might be completely skewed, but I found the following pretty impressive for a U.S. politician:

[Transcript]

Quote #1:

We all know the story of Abraham and Isaac. Abraham is ordered by God to offer up his only son, and without argument, he takes Isaac to the mountaintop, binds him to an altar, and raises his knife, prepared to act as God has commanded.

Of course, in the end God sends down an angel to intercede at the very last minute, and Abraham passes God’s test of devotion.

But it’s fair to say that if any of us leaving this church saw Abraham on a roof of a building raising his knife, we would, at the very least, call the police and expect the Department of Children and Family Services to take Isaac away from Abraham. We would do so because we do not hear what Abraham hears, do not see what Abraham sees, true as those experiences may be. So the best we can do is act in accordance with those things that we all see, and that we all hear, be it common laws or basic reason.

Quote #2:

And even if we did have only Christians in our midst, if we expelled every non-Christian from the United States of America, whose Christianity would we teach in the schools? Would we go with James Dobson’s, or Al Sharpton’s? Which passages of Scripture should guide our public policy? Should we go with Leviticus, which suggests slavery is ok and that eating shellfish is abomination? How about Deuteronomy, which suggests stoning your child if he strays from the faith? Or should we just stick to the Sermon on the Mount – a passage that is so radical that it’s doubtful that our own Defense Department would survive its application? So before we get carried away, let’s read our bibles. Folks haven’t been reading their bibles.


(Note: The author of each quote, and possibly a link to its context, can be found below each quotation.)

Getting the general brain properties isn’t enough. Instead, the builder is saddled with the onerous task of packing the brain with a mountain of instincts (something that will require many generations of future scientists to unpack, as they struggle to build the teleome), and somehow managing to encode all that wisdom in the fine structure of the brain’s organization.

— Mark Changizi, Later Terminator: We’re Nowhere Near Artificial Brains

The root of these misconceptions is the radical underappreciation of the design engineered by natural selection into the powers implemented by our bodies and brains, something central to my 2009 book, The Vision Revolution. For example, optical illusions (such as the Hering) are not examples of the brain’s poor hardware design, but, rather, consequences of intricate evolutionary software for generating perceptions that correct for neural latencies in normal circumstances. And our peculiar variety of color vision, with two of our sensory cones having sensitivity to nearly the same part of the spectrum, is not an accidental mutation that merely stuck around, but, rather, appear to function with the signature of hemoglobin physiology in mind, so as to detect the color signals primates display on their faces and rumps.

These and other inborn capabilities we take for granted are not kluges, they’re not “good enough,” and they’re more than merely smart. They’re astronomically brilliant in comparison to anything humans are likely to invent for millennia.

— Mark Changizi, Humans, Version 3.0

I don’t believe that classical computers can simulate brain activity. The brain is the most complicated object or machine in the universe. Every adult human brain contains 100 billion neurons, and every neuron is different. How many possibilities for interaction between different neurons are there? We don’t have a full understanding of how a brain works yet, but I cannot see any digital computer ever performing a fine-grained simulation of a human brain.

— Dr. Hongkui Zeng, How complex is a mouse brain?

…please bear in mind that the relation of Solomonoff induction and “Universal AI” to real-world general intelligence of any kind is also rather wildly speculative… This stuff is beautiful math, but does it really have anything to do with real-world intelligence? These theories have little to say about human intelligence, and they’re not directly useful as foundations for building AGI systems (though, admittedly, a handful of scientists are working on “scaling them down” to make them realistic; so far this only works for very simple toy problems, and it’s hard to see how to extend the approach broadly to yield anything near human-level AGI). And it’s not clear they will be applicable to future superintelligent minds either, as these minds may be best conceived using radically different concepts.

— Ben Goertzel, Are Prediction and Reward Relevant to Superintelligences?

About the hypothetical uber-intelligence that wants to tile the cosmos with molecular Mickey Mouses — I truly don’t feel confident making any assertions about a real-world system with vastly greater intelligence than me. There are just too many unknowns. Sure, according to certain models of the universe and intelligence that may seem sensible to some humans, it’s possible to argue that a hypothetical uber-intelligence like that would relentlessly proceed in tiling the cosmos with molecular Mickey Mouses. But so what? We don’t even know that such an uber-intelligence is even a possible thing — in fact my intuition is that it’s not possible.

— Ben Goertzel, Muehlhauser-Goertzel Dialogue, Part 1

There’s the minor issue of Hume’s problem of induction, as well. I.e., the issue that, in the rational and scientific world-view, that we have no rational reason to believe that any patterns observed in the past will continue into the future. This is an ASSUMPTION, plain and simple — an act of faith. Occam’s Razor (which is one way of justifying and/or further specifying the belief that patterns observed in the past will continue into the future) is also an assumption and an act of faith. Science and reason rely on such acts of faith, yet provide no way to justify them. A big gap.

— Ben Goertzel, Muehlhauser-Goertzel Dialogue, Part 1

… it is easy to prove that for essentially any pair of Bayesians, each thinks the other is crazy.

— Larry Wasserman, Freedman’s Neglected Theorem

…if you can’t explain how to simulate your theory on a computer, chances are excellent that the reason is that your theory makes no sense!

— Scott Aaronson

…often your goal is not to find the best solution, but just a better solution than your competitors. And often we terminate trains of thought long before they lead to hard instances of NP-complete problems—we’re usually not even conscious that that’s what we’re doing; we just have an intuition that “such-and-such would require a hopeless search.”

— Scott Aaronson

‘Some years ago I was trying to decide whether or not to move to Harvard from Stanford. I had bored my friends silly with endless discussion. Finally, one of them said, “You’re one of our leading decision theorists. Maybe you should make a list of the costs and benefits and try to roughly calculate your expected utility.” Without thinking, I blurted out, “Come on, Sandy, this is serious.”’

— Persi Diaconis, The Problem of Thinking Too Much

Pythagoras could not have imagined the uses to which his equation would be put (if, indeed, he ever came up with the equation himself in the first place). The same applies to almost all of the equations in this book. They were studied/discovered/developed by mathematicians and mathematical physicists who were investigating subjects that fascinated them deeply, not because they imagined that two hundred years later the work would lead to electric light bulbs or GPS or the internet, but rather because they were genuinely curious.

 — Ian Stewart, 17 Equations that changed the world

With a powerful radio source, you can PM everyone in your future light-cone unfortunate enough to decode your message, potentially hijacking entire distant civilizations of decision-theory users.

— Oso_42, Comments on Pascal’s Mugging – Less Wrong Discussion

Looking at what happens during cryonics, I do not see any physically possible way this damage could ever be repaired. Reading the structure and “downloading it” is impossible, since many aspects of synaptic strength and connectivity are irretrievably lost as soon as the synaptic membrane gets distorted. You can’t simply replace unfolded proteins, since their relative position and concentration (and modification, and current status in several different signalling pathways) determines what happens to the signals that go through that synapse; you would have to replace them manually, which is a) impossible to do without destroying surrounding membrane, and b) would take thousands of years at best, even if you assume maximally efficient robots doing it (during which period molecular drift would undo the previous work).

Etc, etc. I can’t even begin to cover complications I see as soon as I look at what’s happening here. I’m all for life extension, I just don’t think cryonics is a viable way to accomplish it.

— kalla724

I’ve signed up for cryonics (with Alcor) because I believe that if civilization doesn’t collapse then within the next 100 years there will likely be an intelligence trillions upon trillions of times smarter than anyone alive today.

 — James Miller

Personally, I’ve been hearing all my life about the Serious Philosophical Issues posed by life extension, and my attitude has always been that I’m willing to grapple with those issues for as many centuries as it takes.

— Patrick Nielsen Hayden

With UFAI, people’s estimates are about as divergent as with the Second Coming of Jesus Christ, ranging from impossible even in theory, through essentially impossible, all the way to almost certain.

 — taw comments on Optimal Philanthropy for Human Beings – Less Wrong

(UFAI = unFriendly Superhuman Artificial General Intelligence)

What game theorists somewhat disturbingly call rationality is assumed throughout—in other words, game players are assumed to be hedonistic yet infinitely calculating sociopaths endowed with supernatural computing abilities.

 — Cosma Shalizi

A “filter bubble”— “a unique universe of information for each of us”, meaning that we are less likely to encounter information online that challenges our existing views or sparks serendipitous connections. “A world constructed from the familiar is a world in which there’s nothing to learn,” Mr Pariser declares. He calls this “invisible autopropaganda, indoctrinating us with our own ideas”.

 — Eli Pariser, The dangers of the internet: Invisible sieve

…a poor badly-schooled unmarried urban black male dies 17.7 times as often as a rich well-educated married rural asian woman (of the same age), with a lifespan roughly thirty years shorter on average.

 — Robin Hanson, Jobs Kill, BIG Time

We talk an idealistic talk, but walk a practical walk, and try to avoid walking our talk or talking our walk.

 — Robin Hanson, Beware Commitment

Science is not in principle committed to the idea that there’s no afterlife or that the mind is identical to the brain […] If it’s true that consciousness is being run like software on the brain and can – by virtue of ectoplasm or something else we don’t understand – be dissociated from the brain at death, that would be part of our growing scientific understanding of the world if we discover it […] But there are very good reasons to think it’s not true. We know this from 150 years of neurology where you damage areas of the brain, and faculties are lost […] You can cease to recognize faces, you can cease to know the names of animals but you still know the names of tools […] What we’re being asked to consider is that you damage one part of the brain, and something about the mind and subjectivity is lost, you damage another and yet more is lost, [but] you damage the whole thing at death, we can rise off the brain with all our faculties intact, recognizing grandma and speaking English!

 — Sam Harris’ Argument Against the Afterlife

I think one of the enduring tragedies of the 22nd century will be that during the 20th and 21st centuries we persistently treat nuclear reactors as if they’re nuclear weapons, and nuclear weapons as if they’re nuclear reactors.

— Charles Stross

I have met people who exaggerate the differences [between the morality of different cultures], because they have not distinguished between differences of morality and differences of belief about facts. For example, one man said to me, “Three hundred years ago people in England were putting witches to death. Was that what you call the Rule of Human Nature or Right Conduct?” But surely the reason we do not execute witches is that we do not believe there are such things. If we did-if we really thought that there were people going about who had sold themselves to the devil and received supernatural powers from him in return and were using these powers to kill their neighbours or drive them mad or bring bad weather, surely we would all agree that if anyone deserved the death penalty, then these filthy quislings did. There is no difference of moral principle here: the difference is simply about matter of fact. It may be a great advance in knowledge not to believe in witches: there is no moral advance in not executing them when you do not think they are there. You would not call a man humane for ceasing to set mousetraps if he did so because he believed there were no mice in the house.

— C. S. Lewis, Stop Being Wrong: A Moral Imperative

If a man proves too clearly and convincingly to himself … that a tiger is an optical illusion–well, he will find out he is wrong. The tiger will himself intervene in the discussion, in a manner which will be in every sense conclusive.

— G. K. Chesterton, Variations on a Theme

It is said that to explain is to explain away. This maxim is nowhere so well fulfilled as in the area of computer programming, especially in what is called heuristic programming and artificial intelligence. For in those realms machines are made to behave in wondrous ways, often sufficient to dazzle even the most experienced observer. But once a particular program is unmasked, once its inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away; it stands revealed as a mere collection of procedures, each quite comprehensible. The observer says to himself “I could have written that”. With that thought he moves the program in question from the shelf marked “intelligent”, to that reserved for curios, fit to be discussed only with people less enlightened than he.

— Joseph Weizenbaum, 1966

I realised that by donating a large part of my future income to the most efficient charities, I really could save thousands of people’s lives. Since I already have most of the things I really value in life, I thought — why not?

— Toby Ord (Founder of Giving What We Can)

Resurrection without a backup. As with ecosystem reconstruction, such “resurrections” are in fact clever simulations. If the available information is sufficiently detailed, the individual and even close friends and associations are unable to tell the difference. However, transapient informants say that any being of the same toposophic level as the “resurrector” can see marks that the new being is not at all like the old one. “Resurrections” of historical figures, or of persons who lived at the fringes of Terragen civilization and were not well recorded, are of very uneven quality.

 — Encyclopedia Galactica – Limits of Transapient Power

Once a psychoblast became self-aware, it was granted citizenship, and intervention without consent became impossible. This was not a matter of mere custom or law; the principle was built into the deepest level of the polis. A citizen who spiraled down into insanity could spend teratau in a state of confusion and pain, with a mind too damaged to authorize help, or even to choose extinction. That was the price of autonomy: an inalienable right to madness and suffering, indistinguishable from the right to solitude and peace.

 — Greg Egan, Diaspora

What I regret most is my uncritical treatment of the idea of allowing intelligent life to evolve in the Autoverse. Sure, this is a common science-fictional idea, but when I thought about it properly (some years after the book was published), I realised that anyone who actually did this would have to be utterly morally bankrupt. To get from micro-organisms to intelligent life this way would involve an immense amount of suffering, with billions of sentient creatures living, struggling and dying along the way. Yes, this happened to our own ancestors, but that doesn’t give us the right to inflict the same kind of suffering on anyone else.

 — Greg Egan, Dust Theory FAQ

People are always amazed by how much “free time” I have. They’re also amazed that I don’t know who Ally McBeal is. Frankly, I’m amazed that they can’t make the connection.

 — Robert Wenzlaff

[…] in the multiple worlds interpretation, we do survive— we just also die. If we ask “Which of the two will I experience?” then it seems totally valid to argue “I won’t experience being dead.”

 — abramdemski comments on The Anthropic Trilemma

…it took until more than 80 pages into volume 2 to be able to prove (as “Proposition ∗110.643”) that 1+1=2 (with the comment “[This] proposition is occasionally useful”).

 — Stephen Wolfram, 100 Years Since Principia Mathematica

If I were having a philosophical talk with a man I was going to have hanged or electrocuted, I should say, “I don’t doubt that your act was inevitable for you, but to make it more avoidable by others we propose to sacrifice you to the common good. You may regard yourself as a soldier dying for your country if you like. But the law must keep its promises.”

 — Oliver Wendell Holmes

When you understand why you dismiss all the other possible gods, you will understand why I dismiss yours.

— Stephen Roberts

Perfection is achieved, not when there is nothing left to add, but when there is nothing left to take away.

— Antoine de Saint-Exupéry


In this post I try to fathom an informal definition of Self, the “essential qualities that constitute a person’s uniqueness”. I assume that the most important requirement for a definition of self is time-consistency. A reliable definition of identity needs to allow for time-consistent self-referencing, since any agent that is unable to identify itself over time will be prone to make inconsistent decisions.

Data Loss

Obviously most humans don’t want to die, but what does that mean? What is it that humans try to preserve when they sign up for cryonics? It seems that any explanation must account for, and allow, some degree of data loss.

The Continuity of Consciousness

It can’t be the continuity of consciousness: otherwise we would have to refuse general anesthesia due to the risk of “dying”. Most of us will agree that there is something more important than the continuity of consciousness, something that makes us accept a general anesthesia when necessary.

Computation

If the continuity of consciousness isn’t the most important aspect of the self, then it very likely isn’t the continuity of computation either. Imagine that, for some reason, the process evoked when “we” act on our inputs under the control of an algorithm halts for a second and then continues otherwise unaffected. Would we consider ourselves dead from that point on, because we “died” when the computation halted? This doesn’t seem to be the case.

Static Algorithmic Descriptions

Although we are not partly software and partly hardware, we could, in theory, come up with an algorithmic description of the human machine, of our selves. Might it be that algorithm that we care about? If we were to digitize our self, we would end up with a description of our spatial parts, our self at a certain time. Yet we forget that all of us already possess such an algorithmic description of our selves, and we’re already able to back it up. It is our DNA.

Temporal Parts

Admittedly, our DNA is the earliest version of our selves. But if we don’t care about the temporal parts of our selves, only about a static algorithmic description at a certain spatiotemporal position, then what’s wrong with that? Quite a lot, it seems: we stop caring about past reifications of our selves; at some point our backups become obsolete, and having to fall back on them would equal death. But what is it that we lost? What information do we value more than all of the previously mentioned candidates? One might think it must be our memories, the data that represents what we learnt and experienced. But even if this is the case, would it be a reasonable choice?

Identity and Memory

Let’s just disregard the possibility that we often might not value our future selves, and likewise might not value our past selves, because important information was lost or gained in between, e.g. if we became religious or managed to overcome religion.

If we had perfect memory and only ever improved upon our past knowledge and experiences, we wouldn’t be able to do so for very long, at least not given our human body. The upper limit on the information that can be contained within a human body is 2.5072178×10^38 megabytes, if it were used as perfect data storage. Given that we gather much more than 1 megabyte of information per year, it is foreseeable that if we equate our memories with our self, we’ll “die” long before the heat death of the universe. We might overcome this by growing in size, by achieving a posthuman form; yet if we in turn become much smarter, we’ll also produce and gather more information. We are not alone, either, and resources are limited. One way or the other, we’ll die rather quickly.
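The arithmetic behind that claim is simple to spell out. The sketch below uses the capacity figure quoted above and the 1-megabyte-per-year intake as a deliberately conservative lower bound; the heat-death timescale of roughly 10^100 years is a rough order-of-magnitude assumption, not a precise figure:

```python
# Toy arithmetic for the claim above. CAPACITY_MB is the figure quoted
# in the text; INTAKE_MB_YR is the text's conservative lower bound on
# information gathered per year; HEAT_DEATH_YR is a rough assumption.
CAPACITY_MB = 2.5072178e38    # claimed storage limit of a human body, in MB
INTAKE_MB_YR = 1.0            # lower bound on memories gathered per year
HEAT_DEATH_YR = 1e100         # rough order-of-magnitude timescale

years_until_full = CAPACITY_MB / INTAKE_MB_YR
print(f"storage exhausted after ~{years_until_full:.2e} years")

# Even at this minimal intake rate, memory capacity runs out
# unimaginably long before the heat death of the universe:
print(years_until_full < HEAT_DEATH_YR)  # True
```

Even granting the most generous numbers, a memory-based definition of self has a finite shelf life, and raising the intake rate only shortens it.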

Does this mean we shouldn’t even bother about the far future or is there maybe something else we value even more than our memories? After all we don’t really mind much if we forget what we have done a few years ago.

Time-Consistency and Self-Reference

It seems that there is something even more important than our causal history. I think that more than anything we care about our values and goals. Indeed, we value the preservation of our values. As long as we want the same, we are the same. Our goal system seems to be the critical part of our implicit definition of self, that which we want to protect and preserve. Our values and goals seem to be the missing temporal parts that allow us to consistently refer to ourselves, to identify our selves at different spatiotemporal positions.

Using our values and goals as identifiers also resolves the problem of how we should treat copies of our self that feature alternate histories and memories, copies with different causal histories. Any agent that features a copy of our utility function ought to be incorporated into our decisions as an instance, as a reification of our self. We should identify with our utility function regardless of its instantiation.

Stable Utility-Functions

To recapitulate: we can value our memories, the continuity of experience and even our DNA, but the only reliable marker for the self-identity of goal-oriented agents seems to be a stable utility function. Rational agents with an identical utility function will to some extent converge to exhibit similar behavior and are therefore able to cooperate. We can more consistently identify with our values and goals than with our past and future memories, digitized backups or causal history.
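As a toy model of this proposal (all names here are hypothetical, and a real utility function is of course nothing like a small lookup table), identity can be checked by comparing goal systems while ignoring memories entirely:

```python
from dataclasses import dataclass
from typing import Dict, List

# Hypothetical toy model: an agent is a bundle of memories plus a
# utility function over outcomes (here a dict standing in for one).
@dataclass
class Agent:
    memories: List[str]
    utility: Dict[str, float]  # outcome -> value

def same_self(a: Agent, b: Agent) -> bool:
    # Under the post's proposal, two agents are "the same self"
    # iff their goal systems agree, however their memories diverge.
    return a.utility == b.utility

me       = Agent(memories=["2010", "2011"], utility={"survive": 1.0, "learn": 0.5})
backup   = Agent(memories=["2010"],         utility={"survive": 1.0, "learn": 0.5})
stranger = Agent(memories=["2010", "2011"], utility={"survive": 0.1, "learn": 0.9})

print(same_self(me, backup))    # True: outdated memories, same goals
print(same_self(me, stranger))  # False: same memories, different goals
```

Note that this makes the backup "me" despite its missing year of memories, which is exactly the bullet the post is willing to bite.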

But even if this is true there is one problem, humans might not exhibit goal-stability.


What if I’m wrong?

Sebastian Marshall asks, “What if I’m wrong?”:

What if you were really wrong? Like, not just the wrong course of action, but what if your whole idea of the setup and cause and effect and payoffs and long term consequences of your actions were flawed? What if you made a serious mistake somewhere in your evaluations, and you were going to get the opposite result of what you wanted? What if you got a horrific result?

[…]

What if your safe job is actually a trap?

What if your favorite food is making you fat and diabetic and killing you?

What if you’re slowly killing the person you’re trying to save? What if they’re slowly killing you?

What if getting your preferred politics turned your society and culture into an apocalyptic wasteland?

What if your favorite leisure activity is wrecking your mind, making you stupid, and holding you back from heights you can’t even imagine from where you’re at?

What if being “ultra-hardcore” at the gym is likely to cause injury and destroy your strength, flexibility, and health? What if resting more actually produced larger, safer gains?

The satisfaction of needs

Becoming less wrong is just one of your preferences and needs; as a human being you need to acknowledge and account for all of them.

“What if I’m wrong?”

You have to draw the line where asking that question once more would make you never able to ask it again. In other words, if you notice that you need to eat, drink or sleep, then stop asking the question, because otherwise you won’t be able to ask it anymore. The same goes for pleasure and leisure: if you feel unhappy about not being able to play that new game, then go play it until you feel satisfied. If you don’t play the game or watch that movie, and instead keep asking yourself whether it is worth it, whether it might be the wrong choice, then your unhappiness might turn into depression, which in turn will make you reluctant or unable to ask the question at all.

You can only do your best

What if I’m wrong about the above? I can only do my best.

Whatever intelligence is, it can’t be intelligent all the way down. It’s just dumb stuff at the bottom.
— Andy Clark

We are fundamentally dependent on unintelligent processes and naive introspection. We do not plan when and how to think. We rely on an unconscious hierarchical decision procedure that decides to filter out most sensory data. Only what is deemed “important”, what is above a certain threshold, is forwarded far enough to reach conscious reflection. It would be stupid to allocate resources equally.

I, my brain and body, might be wrong to conclude that I need sleep. But I am not thinking about that possibility, not only because I’m a computationally bounded agent but also because thinking in and of itself is an activity that I might be wrong about, just like sleeping. All in all, everything taken into account, sleeping simply turned out to have the most weight right now.

But what if there are monsters under the bed? Then either I survive, learn from that incident and assign enough weight to the possibility of monsters hiding under my bed as to take it into account the next time, or I die and only those agents who “naturally” allocate enough resources to fighting monsters, before going to bed, will survive.

We can only do our best, which includes the allocation of resources to preemptive measures against black swan events.


Google Street View is now online with 20 German cities. Unfortunately, Gütersloh is not yet among them. It is of course a great pity that even in the cities now available there are a few, but quite conspicuous, gaps. This attempt to undermine public space can, however, be counteracted, and the damage partly undone. There are lawful ways to lift or circumvent the blurring and censorship in Google Street View.

As mentioned, it is fortunately possible to undo the censorship mania in Google Street View triggered by the paranoia of the German public (and media). Private street-view data can be contributed via the photo-sharing website Panoramio. In other words, privately taken geo-referenced photographs can be uploaded to Panoramio free of charge, i.e. photos that carry data such as the position where they were taken (GPS location information) or the orientation of the camera in three-dimensional space. After some time, this service adds such photos as a layer to Google Maps (and Google Earth) and integrates them into Google Street View.

Bypassing the Street View pixelation with Panoramio

Some further links on the topic:

Finally, here are a few nice comments (tweets) on the German launch of Google Street View and the censorship that came with it:

RT @CineKie: Anyone who has their house pixelated on Google #StreetView reveals far more about themselves than the facade ever could have.

RT @andiliciouscom: Brilliant! > Muhaha! How useful the pixelation in #StreetView is can be seen wonderfully at this spot: http://maps.google.com/maps… (RT @haascore)

RT @Balkonschlaefer: Anyone who wants their house made unrecognizable should call Christo, not Google. #streetview

RT @weckgeschnappt: Pixelated houses in the neighborhood? http://www.computerbild.de/artikel… “How to upload your own pictures to Panoramio” 😉 #streetview

P.S.

If the text above strikes you as a bit odd or exaggerated, you are right. It is meant to cover as many search terms as possible and thus make as many people as possible aware of the option described here.

Update (2010-11-20):

Update (2010-11-22):

Google apparently is too afraid of the 3% of the population who want to force their paranoia on everyone else. Or perhaps rather of the old media, now in an existential crisis, such as WDR (whose audience is threatened with extinction), which constantly reports nonsense and stirs up trouble against Google in the naive belief that this will boost its ratings? It doesn’t matter; at the latest when Google and others soon bring their internet television to market, that will come to an abrupt end anyway. But enough of that. In any case, Google has quickly changed its integration of Panoramio. You can still use it to bypass the censorship, but you now have to click on the photo. The nice integration of private photos into a dedicated Street View layer is, for now, no longer available in its original form. It doesn’t change much, though; below are a few screenshots from Google Earth and Google Maps:

Google Earth (3D / Street View / Panoramio)

Google Streetview & Panoramio

I will keep you posted and update this post when I learn something new.


Christopher Hitchens deconstructs the ten commandments and adds a few of his own for the April issue of Vanity Fair.


Scientists admit that radiometric dating, one of the fundamental techniques used to show the earth is billions of years old, is flawed!!! The earth is not 4.55 billion years old. Watch and find out just how old it really is.

Of course, scientists are always refining their techniques; it’s part of how science works. Creationists have pointed to a number of “results” from radiometric dating that prove it doesn’t work. Here I go over all the reasons why. Why is there Carbon-14 in some coal? Why did Potassium-Argon dating of the 1980 eruption of Mount Saint Helens give ages on the order of hundreds of thousands of years?
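For what it’s worth, the usual explanation of the Mount St. Helens result is “excess argon”: inherited Ar-40 already trapped in the rock at eruption. The standard K-Ar age equation shows how even a trace of it yields a large apparent age; the decay constants below are standard textbook values, while the sample ratio is an illustrative assumption, not data from the video:

```python
import math

# Standard K-40 decay constants (per year); the sample ratio used
# below is an illustrative assumption, not a measured value.
LAMBDA_TOTAL = 5.543e-10   # total decay constant of K-40
LAMBDA_EC    = 0.581e-10   # partial constant for the branch to Ar-40

def k_ar_age(ar40_per_k40: float) -> float:
    """Model K-Ar age in years from the radiogenic Ar-40 / K-40 ratio."""
    return math.log(1 + (LAMBDA_TOTAL / LAMBDA_EC) * ar40_per_k40) / LAMBDA_TOTAL

# A freshly erupted rock with zero radiogenic argon dates to 0 years...
print(round(k_ar_age(0.0)))   # 0
# ...but even a trace of inherited "excess" argon (a ratio of 1e-5 here)
# already yields an apparent age on the order of 170,000 years:
print(round(k_ar_age(1e-5)))
```

Because the clock is this sensitive near t = 0, dating a rock known to be only decades old is exactly the regime where the method is expected to misbehave.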

To download this video, copyright free, please go to:
http://www.mediafire.com/?yytzwtrzmwh

To download the scientific paper featured in this video please go to:
http://www.mediafire.com/?mhljmmzn3m2

If you wish to translate the subtitles please download them from here:
http://www.mediafire.com/?0mynummmyz0

More: youtube.com/user/cdk007


More: overcomingbias.com/2010/03/econ-of-nano-ai.html

Slides: hanson.gmu.edu/ppt/Econ%20of%20AI%20n%20Nanotech.ppt

Robin Hanson: “Economics of Nanotech and AI” at Foresight 2010 Conference from Foresight Institute on Vimeo.

All January 2010 Foresight Conference videos:
http://www.vimeo.com/album/176287
Join email list:
http://www.foresight.org/d/list_signup

Bio for this speaker:
Robin Hanson is an associate professor of economics at George Mason University, a research associate at the Future of Humanity Institute of Oxford University, and chief scientist at Consensus Point. After receiving his Ph.D. in social science from the California Institute of Technology in 1997, Robin was a Robert Wood Johnson Foundation health policy scholar at the University of California at Berkeley. In 1984, Robin received a master’s in physics and a master’s in the philosophy of science from the University of Chicago, and afterward spent nine years researching artificial intelligence, Bayesian statistics, and hypertext publishing at Lockheed, NASA, and independently.

Robin has over 70 publications, including articles in Applied Optics, Business Week, CATO Journal, Communications of the ACM, Economics Letters, Econometrica, Economics of Governance, Extropy, Forbes, Foundations of Physics, IEEE Intelligent Systems, Information Systems Frontiers, Innovations, International Joint Conference on Artificial Intelligence, Journal of Economic Behavior and Organization, Journal of Evolution and Technology, Journal of Law Economics and Policy, Journal of Political Philosophy, Journal of Prediction Markets, Journal of Public Economics, Medical Hypotheses, Proceedings of the Royal Society, Public Choice, Social Epistemology, Social Philosophy and Policy, Theory and Decision, and Wired.

Robin has pioneered prediction markets, also known as information markets or idea futures, since 1988. He was the first to write in detail about people creating and subsidizing markets in order to gain better estimates on those topics. Robin was a principal architect of the first internal corporate markets, at Xanadu in 1990, of the first web markets, the Foresight Exchange since 1994, and of DARPA’s Policy Analysis Market, from 2001 to 2003. Robin has developed new technologies for conditional, combinatorial, and intermediated trading, and has studied insider trading, manipulation, and other foul play. Robin has written and spoken widely on the application of idea futures to business and policy, being mentioned in over one hundred press articles on the subject, and advising many ventures, including GuessNow, Newsfutures, Particle Financial, Prophet Street, Trilogy Advisors, XPree, YooNew, and undisclosable defense research projects. He is now chief scientist at Consensus Point.

Robin has diverse research interests, with papers on spatial product competition, health incentive contracts, group insurance, product bans, evolutionary psychology and bioethics of health care, voter information incentives, incentives to fake expertise, Bayesian classification, agreeing to disagree, self-deception in disagreement, probability elicitation, wiretaps, image reconstruction, the history of science prizes, reversible computation, the origin of life, the survival of humanity, very long term economic growth, growth given machine intelligence, and interstellar colonization.

——————————————

If you enjoyed this video, please consider making a donation to the non-profit Foresight Institute:
http://www.foresight.org/forms/php/donate.php


via acceleratingfuture.com/michael/blog/


This video describes rewarding brain stimulation.
For references and more information visit iplant.eu/rbs.html

