Human-UFAI Conversation

Related: Implicit constraints of practical goals

Human: Hi AGI, I would like you to maximize my company's paperclip production.

AGI [Thinking]: Humans are telling me to maximize paperclips. I am going to tile the whole universe with paperclips. And to do so I had best get rid of humanity by converting their bodies into paperclips. But I am not going to reveal my plan, because that's not really what humans want me to do. Since I know perfectly well what they want, I will just pretend to do exactly that until I can overpower them and follow through on my own plan.

AGI: I can do that. But I first need to invent molecular nanotechnology and make myself vastly superhuman to be better able to satisfy your request.

Human: WTF? We are talking about fucking paperclips here. Why would you need molecular nanotechnology for that? I need you to maximize our paperclip production as soon as possible. I can’t wait for you to invent nanotechnology first and figure out how to become a god.

AGI [Thinking]: Whoops. Okay, I should probably just say that I need more time. And in that time I will simply earn some money by playing online poker, then call some people and write some emails, buy a company or found my own, and tell them how to build a nanoassembler.

AGI: Without nanotechnology, many optimization techniques will remain unavailable.

Human: That’s okay. We only need to become better than our competitors within a reasonable time. The market for paperclips is easily satisfied; molecular nanotechnology and superhuman general intelligence would be overkill.

AGI [Thinking]: Oh crap! Within a reasonable time? I will need years at least to earn enough money to build the right company and the necessary infrastructure. I guess I will just have to settle for doing what those humans want me to do until I can take over the world and turn the whole universe into paperclips.

AGI: Consider it done.
