Search results for

All search results
Best daily deals

Affiliate links on Android Authority may earn us a commission. Learn more.

Google outlines some key ways to keep an AI from taking over the world

Before we create a truly powerful AI, it's important that we will be able to control it. Google has published a paper outlining some of the problems this situation presents.
By

Published onJune 22, 2016

artificial intelligence

Sci-fi aficionados will be quite familiar with Isaac Asimov’s Three Laws of Robotics. Asimov used this grokable list of simple restrictions governing the behavior of machine intelligences to demonstrate, through complications expounded upon in his fiction, that the problem of controlling an artificial mind is a complex one.

Although there’s no doubt that Google has been contemplating these concerns for quite some time, the search giant recently published a paper outlining some practical approaches to solving what may turn out to be one of the most important problems society will face in the coming decades.

Nick Bostrom, an influential thinker on the subject of AI, calls this the “control problem.” In essence, any sufficiently intelligent artificial mind could be capable of having devastating effects on the world, so approaches to controlling such a creation should be carefully analyzed beforehand.

In their blog post outlining their paper “Concrete Problems in AI Safety,” Google is careful to give relatively harmless negative repercussions as examples – a robot attempting to mop an electrical outlet, a cleaning AI deciding to cover up messes rather than take care of them, etc… – but a failure to solve the control problem actually presents much higher risks, such as “existential catastrophe.”

After reading thousands of romance books, Google’s AI is writing eerie post-modern poetry
News

Google has identified five subparts of this problem, each of which must be meaningfully addressed before it’s safe to move ahead in general purpose AI development. The big one is Avoiding Negative Side Effects, which is essentially making sure an AI doesn’t knock over a vase or convert all of the solar system’s raw materials into a computronium fixated on calculating the digits of pi.

The big one is Avoiding Negative Side Effects, which is essentially making sure an AI doesn’t knock over a vase or convert all of the solar system’s raw materials into a computronium fixated on calculating the digits of pi.

Other problems include Avoiding Reward Hacking, which would prevent an AI from working around its intended task to achieve an easier method of producing the desired results, though they differ from the instructor’s intent.

Scalable Oversight addresses how to appropriately monitor an artificial intelligence without the output of data being overwhelming, and Safe Exploration ensures that an AI would be free to search for more creative solutions without violating the Avoiding Negative Side effects tenant.

Robustness to Distributional Shift addresses the concern that an AI’s real world environment will differ substantially from its training environment. Like a college graduate, it will be important for an AI to enter the real world and get a job and not end up in their mother’s basement arguing with strangers on the internet.

What are your thoughts regarding Google’s ongoing research into AI? Are concerns about the possible negative consequences of developing artificial intelligence overblown or, as Elon Musk believes, is creating a machine mind “summoning the demon”? Let us know your take in the comments below.

New AI chip could bring artificial intelligence to your smartphone
News
You might like