2013-09-17 in AI/Rationality
WARNING: Learning about the following idea is strongly discouraged. Known adverse effects are serious psychological distress, infinite torture, and convulsive laughter.
Tags: artificial general intelligence, Roko's Basilisk
I find it very ironic that the allegedly rational community of LessWrong chose to Pascal-wager themselves so hard…
Haha. BUT WHICH SINGULARITY GOD IS THE CORRECT ONE? SHOULD I SPEND MY LIFE MAKING PAPERCLIPS? AHHHHHH! (Brain explodes, just like the AI planned, duh duh duh)
They’re rational in the sense of following utilitarianism to its endpoint. (This is why Pascal’s wager poses such a problem.)
It would be even more ironic if they were committing the same error they fear AIs would: taking a simply defined goal function or value system and being led astray by following it to its ultimate conclusion.
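To see why the wager structure is so corrosive, here is a minimal sketch in Python of a naive expected-utility comparison. All probabilities and utilities are made-up numbers for illustration; the point is only the shape of the arithmetic, where a tiny probability attached to an astronomically large payoff swamps every ordinary consideration:

```python
# Naive expected-utility comparison: a vanishingly small probability
# attached to an astronomically large payoff dominates the calculation.
# All numbers below are invented purely for illustration.

def expected_utility(outcomes):
    """Sum of probability * utility over (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# Option A: ignore the wager and carry on with a normal life.
ignore = [(1.0, 100)]  # certain, modest utility

# Option B: comply with the wager. Almost certainly a small waste of
# effort, but with a minuscule chance of an enormous payoff.
comply = [
    (0.999999, -10),      # near-certain small cost of complying
    (0.000001, 10**12),   # vanishing chance of an astronomical reward
]

print(expected_utility(ignore))   # 100.0
print(expected_utility(comply))   # ~999990.0: the wager "wins"
```

Because the payoff can always be stipulated large enough to outweigh any probability, this decision rule always sides with the wager, which is exactly the failure mode described above: a simple rule followed to its ultimate conclusion.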