Google making it possible to securely stop AI robots from causing harm

Worried about the robot apocalypse? Some may think it's nothing to worry about, but Google is making sure we can stop AI agents from making bad decisions.

Published on June 4, 2016

Worried about the robot apocalypse? Some may think it’s nothing to worry about, but the truth is that a self-learning, intelligent machine won’t be perfect. Even if these AI agents never turn evil and try to take over the world, chances are there will be times when they make harmful decisions. This is why Google is creating a safe way to interrupt their actions.

Google-owned DeepMind, which specializes in AI research, is in charge of the study. Think of it as a kill switch of sorts: a system that would allow people to stop self-learning machines from harming themselves, people or their environment.


This “framework” lets a “human operator” easily and safely interrupt whatever an AI robot is doing. And because these agents can also learn, DeepMind is making sure the AI can’t learn to prevent interruption. In essence, disobeying you would be one of the very few things these AI agents can’t learn.
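For the technically curious, here is a minimal sketch of the idea behind that kind of interruptibility, written in Python. This is not DeepMind’s code: the agent, the environment and its step() interface, and the operator_interrupts check are all hypothetical names used only for illustration. The point it shows is the one above: when the operator steps in, the forced action is simply left out of the learning update, so the agent never gets a reason to resist being interrupted.

    import random
    from collections import defaultdict

    # Hypothetical sketch of an interruptible learning agent (simple Q-learning).
    # When the human operator interrupts, the agent is forced to take a safe action,
    # and that forced step is excluded from learning, so the agent does not learn
    # a policy that tries to avoid the "big red button."

    class InterruptibleAgent:
        def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
            self.q = defaultdict(float)          # Q-values keyed by (state, action)
            self.actions = actions
            self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

        def choose(self, state):
            # Epsilon-greedy action selection
            if random.random() < self.epsilon:
                return random.choice(self.actions)
            return max(self.actions, key=lambda a: self.q[(state, a)])

        def update(self, state, action, reward, next_state):
            # Standard Q-learning update
            best_next = max(self.q[(next_state, a)] for a in self.actions)
            target = reward + self.gamma * best_next
            self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])

    def run_step(agent, env, state, operator_interrupts, safe_action):
        # One environment step with a human-operated override.
        if operator_interrupts(state):
            # The operator overrides the agent; this forced transition is NOT used
            # to update Q-values, so the agent cannot learn to route around it.
            next_state, _reward, done = env.step(safe_action)
            return next_state, done

        action = agent.choose(state)
        next_state, reward, done = env.step(action)
        agent.update(state, action, reward, next_state)   # learn only from its own choices
        return next_state, done

The real research is far more rigorous than this, of course, but the intuition is the same: interruptions are kept invisible to the learning process.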

And it seems DeepMind is very conscious of the dangers its technology could pose in the coming years. The company agreed to the Google acquisition only on the condition that the tech giant create an AI ethics board to supervise all progress.

These concerns may have seemed like fringe conspiracy theories just a few years ago, but AI technology is advancing very quickly, and this is definitely something we must think about. We will soon be able to hold “interesting conversations” with chatbots, and these are expected to reach human-level capabilities by 2029. Things are moving quickly, guys.

Are you worried about AI agents? I am not too freaked out by the idea of a robot apocalypse, but I do think having a kill switch is necessary. You know, just in case. And though we may get carried away with sci-fi scenarios, this technology will likely be used for far more practical reasons: a robot shouldn’t walk into a lake or squeeze a cardboard box too tightly, for example, and those are things it may not know at first.
