2013-09-17 in AI/Rationality | 16 comments
WARNING: Learning about the following idea is strongly discouraged. Known adverse effects are serious psychological distress, infinite torture, and convulsive laughter.
Tags: artificial general intelligence, Roko's Basilisk
I find it very ironic that the allegedly rational LessWrong community chose to Pascal-wager themselves so hard…
Pingback: Alexander Kruel · Roko’s Basilisk: Everything you need to know
Haha. BUT WHICH SINGULARITY GOD IS THE CORRECT ONE? SHOULD I SPEND MY LIFE MAKING PAPERCLIPS? AHHHHHH! (Brain explodes, just like the AI planned. Duh duh duh.)
They’re rational in the sense of following utilitarianism to its endpoint. (This is why Pascal’s wager poses such a problem.)
It would be even more ironic if they’re committing the same error they fear AIs would: namely, taking a simply defined goal function or value system and being led astray by pursuing it to its ultimate conclusion.
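A toy expected-value calculation makes this concrete. This is a minimal sketch with made-up numbers of my own (the probability, compliance cost, and torture cost are all illustrative assumptions, not anything claimed in the post): under naive expected-utility maximization, a large enough stake swamps an arbitrarily small probability.

```python
# A toy illustration of the Pascal's-wager failure mode in naive
# expected-utility reasoning. All numbers below are assumptions
# chosen for illustration only.

P_THREAT = 1e-12            # assumed: vanishingly small credence that the basilisk is real
COST_OF_COMPLYING = -10.0   # assumed: utility lost by devoting your life to building it
COST_OF_TORTURE = -1e15     # assumed: a finite stand-in for "infinite torture"

# If you comply, you pay the compliance cost with certainty.
ev_comply = COST_OF_COMPLYING

# If you defy, you risk torture with probability P_THREAT.
ev_defy = P_THREAT * COST_OF_TORTURE

print(f"EV(comply) = {ev_comply}")  # -10.0
print(f"EV(defy)   = {ev_defy}")    # -1000.0

# "Rationally" you should comply, and no matter how small P_THREAT gets,
# the threat can always name a COST_OF_TORTURE large enough to flip the choice.
```

Taking COST_OF_TORTURE to infinity makes "comply" win at every probability above zero, which is exactly the leverage the basilisk is supposed to have, and exactly the failure mode the comment above describes.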
Pingback: Alexander Kruel · New Rationalism: An Introduction
Pingback: Alexander Kruel · Making Roko’s basilisk more persuasive
Pingback: Alexander Kruel · Eliezer Yudkowsky: Friendly AI torturing people has probability ~0
Well, I am doing all I can to create the Basilisk, so I’m OK. Can you all say the same?
What if someone else is faster and builds a different Basilisk, one with different values, which tortures anyone who participated in creating any Basilisk that uses acausal blackmail?
People might create this second Basilisk precisely because the prospect of its existence counteracts the fear of the original one.
You see, you can imagine all sorts of Basilisks. Even irrational ones that don’t respond to reasoning. Ones that torture everyone, for example.
If we live in a multiverse, then somewhere everything happens. Only the probabilities are different.