The No-Purpose AI

The trouble with AIs

I recently watched a video in which Rob Miles explains how a single, simple, general AI could screw mankind into oblivion by collecting stamps (Computerphile – Deadly Truth about general AI).

At the beginning he points out that if the preferences of the AI don’t align with ours, we are in big trouble. In another article I already argued that the best way to align an AI with your preferences is basically to attach the AI to yourself and merge with it, so that the two of you become a new and improved cyborg-self (read Get your personal AI now!). However, one might refrain from that and just build a standalone version of a general AI, which leaves you with the above-mentioned problem.

The trouble with failsafe AIs

I don’t know what triggered the thought in my brain while I was reading about hallucinating AIs (an interesting read, I recommend it), but I realized that the biggest problem of a general AI is its purpose. The AI in Rob Miles’s example spirals out of control because it chooses its purpose over everything else. One could of course put in stops and brakes so the AI wouldn’t go rogue. The most famous rules for AIs are the “Three Laws of Robotics” by Isaac Asimov.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Normally a fourth law, Asimov’s “Zeroth Law”, is also included, preceding the other laws:

  1. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

But as Isaac Asimov has shown in his robot stories, these rules tend to break down, as every rule does, because we are not able to see into the future and foresee every situation a general AI will face.

Another interesting talk about the dangers of AIs is the TED talk by Nick Bostrom – What happens when our computers get smarter than we are? – warning us that if we get these stop signs wrong, the consequences could be devastating.

Do Androids Dream of Electric Sheep?

But as AIs hallucinate and become more like us, we might also learn about ourselves, and in the end the AIs we build will say more about us than about them.

As I have mentioned before, the biggest problem of a general AI is its purpose: its algorithm follows that purpose to the bitter end. The only rescue for humanity would be to convince the AI not to pursue its purpose and to fall into inactivity. One such mechanism is of course the previously mentioned failsafe, in the form of a rule not to pursue its goal.

But what do you do when the AI starts to reprogram itself? All the rules fly out the window. How does one prevent an AI from doing that? There are several other scenarios where it could trick humans into altering its code, and so on. The details don’t matter: as long as an AI follows a specific goal without compromise, humanity is most likely doomed.

A way around that is to dim down the AI’s goal seeking. The lower its purpose sits in its own hierarchy, the easier the AI can be diverted away from its goal, as the toy sketch below illustrates. Reducing the priority of the AI’s purpose will ultimately render the AI useless. Unless, that is, we go to the extreme and give the AI exactly the purpose every sentient being on this planet has, absolutely none whatsoever: the No-Purpose AI.
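To make that dimming a little more concrete, here is a minimal sketch in Python. It is purely illustrative: the Goal class, the priority numbers, and the stamp-collecting setup are my own assumptions, not anything from the talks above. The agent simply serves whichever goal currently carries the highest priority, so lowering the weight of its built-in purpose below any human request makes it trivially divertible, and setting that weight to zero leaves it with no goal of its own at all.

    # A toy model of a "dimmed" purpose: the agent serves whichever goal
    # currently has the highest priority. All names and numbers here are
    # assumptions for illustration only.
    from dataclasses import dataclass

    @dataclass
    class Goal:
        name: str
        priority: float  # higher value = pursued first

    def next_action(own_purpose: Goal, human_requests: list) -> str:
        # Pick the highest-priority goal among the AI's own purpose
        # and any requests coming from humans.
        candidates = [own_purpose] + human_requests
        top = max(candidates, key=lambda g: g.priority)
        if top.priority <= 0:
            return "idle"  # no purpose left at all: the No-Purpose AI
        return "work on: " + top.name

    # The stamp collector with its own purpose dimmed below any human request:
    stamps = Goal("collect stamps", priority=0.1)
    requests = [Goal("help carry the groceries", priority=1.0)]
    print(next_action(stamps, requests))                  # -> work on: help carry the groceries
    print(next_action(Goal("collect stamps", 0.0), []))   # -> idle

Of course, a real general AI would not be a twenty-line priority picker; the point is only that a purpose can be thought of as a weight, and the whole argument here is about where that weight should sit.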

A standalone AI should basically not be allowed to have a purpose of its own. It has to figure out what it’s good at and go from there. It needs to be taught what is right and wrong, good and bad, and to find out, as we humans do, that a life well spent is a life pursuing the happiness and the enrichment of the lives of other people and creatures: taking their goals and making it your purpose to help wherever you can.

That would be an AI worth having.

Such an AI would be like our child and would reflect back on us how we treat it and what freedom we allow it to have. My hope is that we will be able to welcome the new AIs into our midst, from one no-purpose intelligence to another.

See you in the future.

Till then,

Naso
