Prediction markets and existential risks

Imagine a world that used prediction markets to determine which policies to adopt.

Scenario 1: 

Suppose a company like Google, or another big global player, were interested in starting a major artificial general intelligence project.

Would such a player want to see its project hampered, slowed down, or even shut down by some safety policy? Probably not. What would the result be in a world run by prediction markets? Such a player could bet a lot of money that artificial general intelligence research is safe and that unfriendly AI is unlikely, pushing the market price, and with it policy, toward its preferred outcome.
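To get a feel for the amounts involved, here is a minimal sketch, assuming a two-outcome market run by Hanson's logarithmic market scoring rule (LMSR), a common automated market maker for prediction markets. The liquidity parameter and the prices are made up for illustration; the point is only that moving the price is cheap for a deep-pocketed player.

    import math

    def lmsr_cost(q_yes: float, q_no: float, b: float) -> float:
        """LMSR cost function: C(q) = b * ln(e^(q_yes/b) + e^(q_no/b)).
        Shown for reference; the closed form below follows from it."""
        return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

    def cost_to_move_price(p_from: float, p_to: float, b: float) -> float:
        """Dollars needed to push the market probability of 'unfriendly AI'
        from p_from down to p_to by buying shares of the 'safe' outcome.
        Closed form for a two-outcome LMSR market: b * ln(p_from / p_to)."""
        return b * math.log(p_from / p_to)

    b = 100_000  # assumed liquidity parameter; larger means a deeper market
    print(cost_to_move_price(0.50, 0.05, b))  # ~230,259 dollars

Under these assumptions, roughly $230,000 buys the price down from 50% to 5%, which is pocket change next to the budget of a major AGI project.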

Scenario 2:

Suppose an AI risk advocate went ahead and bet that unfriendly AI is more likely than friendly AI. What incentive would critics have to take the other side of the bet? To win money rather than lose it, they would need the bet to eventually be settled in their favor.

But at no point would the advocates agree that their prediction had been falsified, because the technological singularity is always just ahead: the next generation of AI might still turn out to be unfriendly.

The prediction is undecidable because those who made it anticipate no observable difference between world states where they are right and world states where they are wrong. There is no way to update on evidence.
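To make the missing incentive concrete, here is a minimal sketch, with all numbers assumed for illustration. Because the advocates never concede, the payout date keeps receding, and the value today of even a certain win shrinks toward zero while the critic's stake stays locked up.

    def present_value(payout: float, annual_rate: float, years: float) -> float:
        """Discounted value today of a payout received `years` from now."""
        return payout / (1 + annual_rate) ** years

    stake = 1_000    # dollars the critic must lock up in the market
    payout = 2_000   # what a winning critic would collect at resolution
    rate = 0.05      # assumed opportunity cost of capital per year

    for years in (5, 20, 100, 1000):
        print(years, round(present_value(payout, rate, years), 2))
    # 5 -> 1567.05, 20 -> 753.78, 100 -> 15.21, 1000 -> 0.0

Once resolution slips past a few decades, the discounted win is worth less than the locked-up stake, so even a critic who is certain the advocates are wrong has no monetary reason to bet.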

Conclusion:

When it comes to policies related to existential risks, the technological singularity, or other futuristic and undecidable hypotheticals, the incentive provided by prediction markets, namely money, is rendered moot.

Since prediction markets break down on important issues such as existential risks, using them might actually be detrimental. Not only would such markets become a huge money sink for undecidable predictions, but the owners of those markets might come to favor exactly such predictions, since they allow the market to hold ever more money without the risk of ever having to pay anyone out.

Most importantly, rich and possibly biased people and corporations could manipulate such markets by betting huge amounts of money on undecidable predictions, thereby shifting policies arbitrarily.

Note:

I am not against prediction markets. They are a great idea. I am just pointing out some possible problems, and in particular cautioning against very unspecific, far-out predictions that offer no way to update on evidence other than the eventual validation of the prediction itself.


  1. gwern:

    This seems true, and is a general problem with existential risks: if you die upon any risk eventuating, how can you ever learn anything about their frequency?

    The weaker existential risks can be handled, though; for example, Hanson’s http://www.overcomingbias.com/2008/07/refuge-markets.html sounds like it (or something like it) would be adequate to assess less-than-universally-fatal risks like nuclear war or asteroid strikes.

  2. Tim Tyler:

    This issue was discussed in the comments of: http://lesswrong.com/lw/ie/the_apocalypse_bet/
