Prerequisite knowledge required for deciding if friendly AI research is worthwhile?

Cause of this post: the following passage (source):

In November of 2012 I set a goal for myself: find the most x-risk reducing role I can fill. At first I thought it would be by working directly with MIRI, but after a while it became clear that I could contribute more by simply donating. So my goal became: find the highest paying job, so I can donate lots of money to CFAR and MIRI.

Motivation for writing this post: Unclear. Possibly an attempt to reduce cognitive load. Further assessment of the underlying motivation is estimated to be more resource-intensive than writing the post itself. Future posts are not expected to be triggered by similar motivations, so investing those resources in further analysis of the underlying motivation is deemed unproductive. Everything said so far may partly be a rationalization to avoid thinking about the motivation in more detail. At this point, further meta-evaluation is expected to lead to an infinite regress.

Work put into this post: Quick mind dump.

Epistemic state: Perplexed.


Here is what freaks me out. There are certain very complex issues. For example: (1) which economic model best fits observed data; (2) whether the practical benefits of researching lab-made viruses outweigh the risks of an accidental or deliberate release of a lab-created flu strain; (3) the expected value of geoengineering.

For someone to decide #1, and to be confident enough in their ability to judge economic models to subsequently adopt one as a guide for shaping the world, I would at least expect such a person to have studied economics for several years. And even then, given the complexity of the problem and the frequent failure of experts, calculations of the expected value of taking a model seriously enough to draw action-relevant conclusions from it seem highly error-prone.
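The instability of such calculations can be illustrated with a toy example (all numbers below are invented for illustration and do not come from any real model): even a simple expected-value estimate flips sign within a plausible error range for its inputs.

```python
# Toy illustration: sensitivity of a naive expected-value calculation
# to small errors in the estimated probability that a model is right.
# The payoff and cost figures are made up for demonstration.

def expected_value(p_success: float, payoff: float, cost: float) -> float:
    """Naive expected value of acting on a model: p * payoff - cost."""
    return p_success * payoff - cost

# Suppose acting on a model costs 1 unit and pays off 100 units if the
# model is right. Varying p_success within a narrow band of uncertainty
# flips the conclusion from "don't act" to "act":
for p in (0.005, 0.01, 0.02):
    ev = expected_value(p, payoff=100, cost=1)
    verdict = "act" if ev > 0 else "don't act"
    print(f"p={p}: EV={ev:+.2f} -> {verdict}")
```

Here a factor-of-four disagreement about a small probability, well within the error bars one should expect on hard questions, is the whole difference between the action looking worthwhile and looking harmful.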

Deciding #2 seems much more difficult. Studying epidemiology alone is nowhere near enough to decide what to do in this case. You would also need a very good and robust model of applied ethics and rationality, and would somehow have to obtain, understand, and analyze all the data necessary to evaluate the risk, which spans fields as diverse as statistics, lab safety, data security, and social dynamics. It appears nearly impossible for one person to arrive at a definitive conclusion about what to do here.

When it comes to #3, low model uncertainty and an action-relevant expected-value calculation seem utterly out of reach for any single person. Geoengineering is a very complex climatological, technological, political, and ethical issue with far-reaching consequences.

So what about friendly AI? The rationale underlying this issue is an incredibly complex yet vague conjecture about artificial general intelligence, a subject that nobody understands, involving ideas from highly controversial and unsolved fields such as ethics and rationality.

If someone says that they are going to donate lots of money to an organization concerned with researching the supposed <existential risks> associated with <artificial general intelligence> (more here) that is conjectured to undergo an <intelligence explosion> at some unknown point in the future, focusing on ensuring some unknown definition of <friendliness>, how likely is it that the person is doing so based on an evidence-based and robust expected-value calculation?

Almost all of the information available on the underlying issues concerning friendly AI research, and on the alleged importance of researching the subject, has been written by the same people who are asking for money, while the few available opinions of third-party experts are not very favorable. Could anyone have acquired a sufficiently strong grasp of (1) artificial general intelligence, (2) ethics, and (3) rationality, at this point in time, to be confident enough to significantly alter their life by looking for a high-paying job in order to support that cause with large donations? I don't see that at all.
