Beware of high IQ individuals making sense of nonsense

Related to: Highly intelligent and successful people who hold weird beliefs

The smarter someone is, the easier it is for them to rationalize ideas that do not make sense, just as a superhuman AI could argue its way out of a box by convincing its gatekeeper that releasing it is the rational thing to do, even when it is not.[1]

In essence, this can be illustrated by the relation between adults and children. Adults can confuse themselves with more complex ideas than children can. Children, however, can be infected by those same ideas when adults pass them on.

This means that people should be especially careful when dealing with high-IQ individuals who seemingly make sense of ideas that trigger the absurdity heuristic.[2][3]

If, however, an average-IQ individual is able to justify a seemingly outlandish idea, then that is reassuring, in the sense that you should expect there to be even better arguments in favor of that idea.

This is something that seems to be widely ignored by people associated with LessWrong.[4] There, the fact that a high-IQ individual has thought about an idea for a long time and still accepts it is taken as evidence in its favor.

If you are really smart, you can come up with genuine arguments, or cobble together concepts and ideas, to defend your cherished beliefs. The result can be an intricate argumentative framework that shields you from any criticism yet seems perfectly sane and rational from the inside.[5]

Note, though, that I do not assume that smart people deliberately try to confuse themselves. What I am saying is that rationalizing complex ideas is easier for smart people, and this can have the consequence that other people are then convinced by the same arguments with which the author erroneously convinced themselves.

This is a caveat that I feel should be taken into account when dealing with complex and seemingly absurd ideas publicized by smart people. If someone smart manages to convince you of something that you initially perceived to be absurd, you should be wary of the possibility that your newly won acceptance is due to that person being better than you at finding justifications and crafting seemingly sound arguments, rather than the idea not actually being absurd.

As an example, there are a number of mathematical puzzles that use a hidden inconsistency to prove something absurd.[6] If you are smart, you can hide such an inconsistency even from yourself and end up believing that 0 = 1.

As another example, if you are not smart enough to think about something as fancy as the simulation argument, then you are not at risk of fearing a simulation shutdown.[7][8]

But if a smart person comes across such an argument and becomes obsessed with it, then they have the ability to give it a veneer of respectability. The idea can eventually spread among more gullible people and create a whole community worrying about a simulation shutdown.

Conclusion

More intelligent people can fail in more complex ways than people of lesser intelligence. The more intelligent someone is, relative to your own intelligence, the harder it is for you to spot how they are mistaken.

Obviously, the idea is not to ignore what smarter people say but to notice that, as someone of lesser intelligence, you can easily fall prey to explanations that lend credence to a complicated idea but suffer from errors you are unable to spot.

When this happens, when you are at risk of getting lost or overwhelmed by an intricate argumentative framework created by someone much smarter than you, you have to fall back on simpler heuristics than direct evaluation. You could, for example, look for a consensus among similarly smart individuals, or ask for an evaluation by a third party that is widely deemed to be highly intelligent.

Notes

[1] The LessWrong community actually tested my hypothesis with what they call the “AI box experiment” (yudkowsky.net/singularity/aibox/), in which Eliezer Yudkowsky and others played an unfriendly AI and managed to convince several people, by argument alone, that they should let them out of confinement.

I think such results should ring a lot of alarm bells. If it is possible to first convince someone that an unfriendly AI is an existential risk, and then to convince them to let such an AI out of the box, what does this tell us about the relation between such arguments and what is actually true?

[2] wiki.lesswrong.com/wiki/Absurdity_heuristic

[3] Absurdity can indicate that your familiarity with a topic is insufficient to discern reality from fantasy (e.g. a person’s first encounter with quantum mechanics). As a consequence, you are more prone to be convinced by arguments that are wrong but give the appearance of an explanation (e.g. popular science accounts of quantum mechanics).

[4] lesswrong.com

[5] kruel.co/2013/01/10/the-singularity-institute-how-they-brainwash-you/

[6] What is wrong with the following derivation, which appears to prove a contradiction?

e^(i*pi) = -1

(e^(i*pi))^2 = (-1)^2 = 1 = e^(i*2*pi)

e^(i*2*pi) = e^0

ln(e^(i*2*pi)) = ln(e^0)

i*2*pi = 0

Well, the complex logarithm is multi-valued: ln(1) = i*2*pi*n, where n can be any integer. For n = 0, e^(i*2*pi*0) = e^0 = 1, and for n = 1, e^(i*2*pi*1) = e^(i*2*pi) = 1. The step from ln(e^(i*2*pi)) = ln(e^0) to i*2*pi = 0 silently assumes that ln(e^z) = z, which only holds on a single branch of the logarithm, so no contradiction actually follows.
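For a quick numerical sanity check, here is a minimal sketch in Python (my own illustration, not part of the original puzzle). Python’s cmath.log returns only the principal branch of the complex logarithm, so both sides of the equation evaluate to (approximately) 0 rather than i*2*pi:

    import cmath, math

    z = cmath.exp(2j * math.pi)         # numerically ~ (1+0j)
    print(cmath.log(z))                 # ~ 0j: the principal value, not i*2*pi
    print(cmath.log(cmath.exp(0)))      # also 0j
    # Both logarithms agree on the principal branch; concluding i*2*pi = 0
    # requires silently mixing two different branches.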

[7] simulation-argument.com

[8] See e.g. this link.

  • Xagor et Xavier

    This is all reminiscent of the argument that philosophy papers will generally be of poorer quality than scientific ones because the results found in the latter can be tested by experiment, whereas in the former, there is no experiment, only intuition, and thus the greatest sophist wins.

    I’m not sure I agree with the argument, but the similarity is interesting. It also suggests that the LW group is rediscovering philosophy (and its attendant slipperiness) without really noticing.

    Perhaps some analogy of cross-validation would be useful. Imagine yourself as a slightly less intelligent person. If you come across arguments that the slightly less intelligent person you imagine yourself to be would believe, but that you wouldn’t, then be suspicious. The reasoning goes: there’s no reason that the particular level of intelligence you imagined yourself to be in is special. So if you come across something you believe in when you’re using your full intelligence, and it’s of similar nature to the thing that you had to use your full intelligence to spot the errors in, it might be the case that it is faulty but you need more intelligence than you have to spot the flaw right away.

  • seahen

    This is an interesting point, and one I wouldn’t have thought of on my own. Maybe the reason that LessWrongers forget this is that so many of them cap out all the decent IQ tests, or that so many of the cognitive differences between modosophonts disappear from an Omega’s-eye view.

  • seahen

    I tend to have trouble imagining myself as a less intelligent person, because I’d have to have learned how to adapt to having less intelligence.
