AI Risk Caveats

(Cross-posting a comment from a Facebook chat.)

I believe that AI progress (towards generally intelligent agency) will be much slower than progress in our ability to specify AI targets.

My basic points are:

1. AI progress will be slow enough to learn from small-scale mistakes.
1.1 AI progress will be of the kind that incrementally leads to AIs that are better aligned with human volition.
2. AI drives as specified by Omohundro are too far detached from probable real-world outcomes to carry much weight in assessing AI risks.
2.1 The most basic AI drive of any AI capable of improving itself will be to refine its goal system.
3. Any AI created by humans will end up with goals that, upon refinement, turn out to be interdependent with human volition.
3.1 Human volition, having a direct causal relationship with whatever goals the AI has, is a fact about the universe that has to be accounted for in refining those goals.

None of this means that there are definitively no AI risks or that researching friendly AI is worthless. These are all caveats that, in my opinion, show that the case for AI risks is much less definitive than some people claim.

So why do I believe that AI progress won't be of the uncontrollable kind?

There are mainly two ways to arrive at the seed of an AI with superhuman potential:

1. Gradual development.
2. Breakthroughs.

Regarding point #1, AGI researchers have continually failed to show applicable progress that would suggest they are picking up the pace. Further, the only example of general intelligence available to us suggests that it takes a conglomerate of specialties and guiding drives rather than a few basic principles.

Regarding point #2, I doubt that it would be justified, even in principle, to be confident in speculations about the possible discovery of unknown unknowns or in predictions of mathematical breakthroughs.

P.S. There are, of course, heaps of other caveats I have written about.
