How far is AGI?

Here is another piece of an email discussion:

I think that many serious people believe that AGI would be extraordinarily dangerous for a different reason than SI thinks it would be: they think it would be an extraordinarily powerful tool and thus dangerous *in the wrong hands.*

Like who? I don’t believe that people like Jürgen Schmidhuber are a risk, apart from a very abstract possibility.

The reason is that they have been unable to show applicable progress on a par with IBM Watson or Siri. And if they claim that their work relies on a single mathematical breakthrough, I doubt that confidence in such a prediction could be justified even in principle.

In short, their work is either incrementally useful or based on wild speculation about the possible discovery of unknown unknowns.

The kind of optimism that people like Jürgen Schmidhuber and Ben Goertzel seem to display is completely unfounded, especially given that most AI researchers seem to disagree with them.

There are two main possibilities here:

  1. It takes several breakthroughs, i.e. it is a somewhat gradual development that can be extrapolated.
  2. It only takes a single breakthrough.

1.) If the development of self-improving AIs is stepwise, I doubt that their optimism is justified, simply because they are unable to show any achievements. All achievements in AI so far are either the result of an increase in computational resources or, as with IBM Watson or the Netflix recommendation algorithm, the result of throwing everything we have at a problem to brute-force a solution. None of those achievements are based on a single principle like an approximation of AIXI. So if people like Schmidhuber and Goertzel have made stepwise progress and extrapolate it to conclude that further progress will amount to general intelligence, then where are the results? They should be able to market even partial achievements.

2.) If the development of self-improving AIs demands a single breakthrough or new mathematical insights, I doubt their optimism simply because such predictions amount to pure guesswork: nobody knows when such a breakthrough will be achieved or at what point new mathematical insights will be discovered.

I haven’t heard any justification for why it might only take “nine people and a brain in a box in a basement”. I think some people are too convinced of the AIXI-approximation route and therefore believe that it is just a math problem that only takes some thinking and one or two deep insights.
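
For context, the “single principle” in question can be written down compactly. In Hutter’s standard formulation (horizon $m$, universal Turing machine $U$, program length $\ell(q)$), AIXI picks its next action as:

$$a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[ r_k + \cdots + r_m \big] \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

Writing this down is the easy part: the expression is incomputable, so the whole difficulty lives in finding tractable approximations, and nothing about the formula says how long that will take.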

Every success in AI so far has relied on a huge team, whether IBM Watson, Siri, BigDog, or the various self-driving cars:

1.) With Siri, Apple is using the results of over 40 years of research funded by DARPA via SRI International’s Artificial Intelligence Center, through the Personalized Assistant that Learns (PAL) and Cognitive Assistant that Learns and Organizes (CALO) programs.

2.) When a question is put to Watson, more than 100 algorithms analyze the question in different ways and find many different plausible answers, all at the same time. Yet another set of algorithms ranks the answers and gives them a score. For each possible answer, Watson finds evidence that may support or refute it. So for each of hundreds of possible answers it finds hundreds of bits of evidence, and then with hundreds of algorithms scores the degree to which the evidence supports the answer. The answer with the best evidence assessment earns the most confidence, and the highest-ranking answer becomes the answer. However, during a Jeopardy! game, if the highest-ranking answer isn’t rated high enough to give Watson sufficient confidence, it decides not to buzz in and risk losing money on a wrong answer. Watson does all of this in about three seconds.
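
To make that pipeline concrete, here is a minimal sketch of the control flow described above. It is purely illustrative: the generators, scorers, averaging step, and buzz threshold are hypothetical stand-ins, not the actual components of IBM’s DeepQA architecture.

```python
# Illustrative Watson-style answer pipeline. All components here
# (generators, scorers, the averaging merger, the threshold) are
# hypothetical stand-ins, not IBM's actual DeepQA code.

def answer_question(question, generators, scorers, buzz_threshold=0.5):
    # 1. Many independent algorithms propose candidate answers,
    #    all analyzing the same question in different ways.
    candidates = set()
    for generate in generators:
        candidates.update(generate(question))

    # 2. Every scorer rates how well the evidence it gathers
    #    supports or refutes each candidate.
    scored = []
    for candidate in candidates:
        scores = [score(question, candidate) for score in scorers]
        # Simple average as a stand-in for Watson's learned score merger.
        confidence = sum(scores) / len(scores)
        scored.append((confidence, candidate))

    if not scored:
        return None  # no candidate answers at all

    # 3. Rank: the answer with the best evidence assessment wins.
    confidence, best = max(scored)

    # 4. Buzz in only if confidence clears the risk threshold;
    #    otherwise stay silent rather than risk losing money.
    return best if confidence >= buzz_threshold else None
```

Even in this toy form the fan-out is visible: hundreds of generators and scorers applied to hundreds of candidates multiply into tens of thousands of evaluations per question, which is why the real system needs massive parallelism to answer in about three seconds.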

It takes a company like IBM to design even such a narrow AI. Given the above, the idea that it could be done without enormous computational and intellectual resources seems ridiculous.
