Pulling the plug on disobedient AI is a real concern even for Google

AFP/Gerard Julien

Via: Business Insider

An uprising of super-intelligent robots is something to worry about – at least according to people like Stephen Hawking and Elon Musk. Google seems to take the matter no less seriously, judging by research recently published by its DeepMind lab. The same team that developed the AI which recently defeated the world’s best Go player is now designing a kill switch to prevent a Terminator scenario.

As it turns out, creating a bulletproof obedience mechanism for an advanced artificial intelligence is not exactly straightforward.

Inevitably, robotic AI deployed in the real world will run into situations that require a supervising human to step in with a “big red button” – for instance, conditions that could harm either the robot or its environment. However, an AI with deep-learning capabilities operates by maximizing rewards, and it treats the act of interruption as part of its task rather than as an external factor. Put simply, it notices that it has been shut down. Since being interrupted brings no reward, the agent develops a bias toward avoiding situations that trigger the interruption – or, in the worst case, toward preventing the interruption altogether. Just as an AI playing Tetris would pause the game forever to avoid losing.
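The bias described above can be illustrated with a toy sketch (a hypothetical two-choice setup of our own, not DeepMind’s actual algorithm): a “short” route pays a reward of 10 but a human interrupts the robot half the time, so it collects nothing, while a “long” route always pays 6. A naive learner averages the interrupted episodes into its value estimates and concludes the long route is better; a learner that simply discards interrupted episodes – a crude stand-in for the safe-interruptibility idea – still values the short route correctly.

```python
import random

random.seed(0)

def run(update_on_interrupt):
    """Estimate each action's value as a running average of observed rewards."""
    totals = {"short": 0.0, "long": 0.0}
    counts = {"short": 0, "long": 0}
    for _ in range(5000):
        action = random.choice(["short", "long"])  # pure exploration
        # The human hits the big red button on half of the short-route runs.
        interrupted = action == "short" and random.random() < 0.5
        reward = 0.0 if interrupted else (10.0 if action == "short" else 6.0)
        # The naive learner updates on every episode; the "safe" learner
        # ignores interrupted episodes, as if no interruption had occurred.
        if update_on_interrupt or not interrupted:
            totals[action] += reward
            counts[action] += 1
    return {a: totals[a] / counts[a] for a in totals}

naive = run(update_on_interrupt=True)
safe = run(update_on_interrupt=False)
print(naive)  # "short" is dragged down toward 5, below "long" at 6
print(safe)   # "short" stays at 10 – interruptions no longer bias the policy
```

The naive agent ends up preferring the long route purely because humans keep interrupting it, which is exactly the learned avoidance behavior the researchers want to rule out.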

The question, as the researchers put it, becomes: “How to make sure the robot does not learn about these human interventions (interruptions), or at least acts under the assumption that no such interruption will ever occur again?”

To answer this question, researchers from London-based DeepMind, together with the University of Oxford’s Future of Humanity Institute, have designed a safe-interruptibility protocol. It should make it possible to take control of a misbehaving robot before it causes irreversible harm, to remove it from a delicate situation, or even to use it temporarily for a task it never learned to perform or would not normally be rewarded for.

However, the authors admit that not all algorithms can be made safely interruptible, and some observers doubt the method would work on a super-intelligent AI. The search continues, and we are keeping our fingers crossed.

Michal Dudic
