Watson was supposed to win at Jeopardy! and succeeded. Artificial general intelligence, by contrast, is conjectured to work sufficiently well while not doing what it is supposed to do.
You need a lot of ingenuity to cause human extinction. Your artificial general intelligence would have to work perfectly, exactly as it was intended to work. But a perfectly working machine does not commit mistakes like causing human extinction in order to win at Jeopardy!, as long as it was not explicitly built to do that.
IBM Watson made mistakes. An artificial general intelligence that is supposed to outsmart humanity has a very small margin for error. If an artificial general intelligence were prone to errors on the scale of confusing goals such as <win at Jeopardy!> with <kill all humans>, then it would never succeed at killing all humans: it would make similar mistakes on the wide variety of problems it would need to solve in order to do so.