$7 million in grants awarded to projects that will stop an AI taking over the world

The Future of Life Institute has awarded $7 million for research to ensure we can benefit from AI while avoiding the apocalypse and the end of life on Earth.
Published on July 1, 2015


It seems like hardly a month goes by without some high-profile figure warning that machines with artificial intelligence could take over the world. Elon Musk is on record saying that AI systems are “potentially more dangerous than nukes.” He isn’t the only one: Stephen Hawking is also concerned. “The development of full artificial intelligence could spell the end of the human race,” Professor Hawking told the BBC. “It would take off on its own, and re-design itself at an ever increasing rate.” Even Bill Gates is worried: “I am in the camp that is concerned about super intelligence. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

They are all voicing concerns about the possibility of powerful AI systems having unintended, or even potentially disastrous, consequences. Back in January, Elon Musk donated $10 million to the Future of Life Institute (FLI) to run a global research program aimed at keeping AI beneficial to humanity.

Today, the Future of Life Institute announced that it has awarded $7 million to 37 research teams around the world so that society can reap the benefits of AI while avoiding the apocalypse and the end of life on Earth.

Jaan Tallinn, one of the founding members of the FLI and the guy who wrote the original version of Skype, said of the new research: “Building advanced AI is like launching a rocket. The first challenge is to maximize acceleration, but once it starts picking up speed, you also need to focus on steering.”

Among the 37 projects are three that will look at techniques to enable AI systems to learn what humans prefer by observing our behavior. One of these will be run at UC Berkeley, and another at Oxford University.

Benja Fallenstein, at the Machine Intelligence Research Institute, is getting $250,000 to research ways to keep the interests of superintelligent systems aligned with human values, while Manuela Veloso’s team at Carnegie Mellon University will receive $200,000 to look at how to make AI systems explain their decisions to humans.

Other projects include a study by Michael Webb of Stanford University on how to keep the economic impacts of AI beneficial, a project on how to keep AI-driven weapons under “meaningful human control,” and a new joint Oxford-Cambridge research center for studying AI-relevant policy.

This last one is the big jackpot. The new center, called the “Strategic Research Center for Artificial Intelligence,” will be headed up by Nick Bostrom, the guy who wrote Superintelligence, the book that caused Elon Musk to panic about the future of mankind. To set up the center, the FLI is giving Bostrom a cool $1.5 million. Who said there was no money in writing about robots taking over the world? Well, actually, no one did; just look at how much money Hollywood makes from that genre!

Speaking of Hollywood, the FLI is trying to disassociate itself from the apocalypse, and it is keen to stress the importance of separating fact from fiction. “The danger with the Terminator scenario isn’t that it will happen, but that it distracts from the real issues posed by future AI,” said FLI president Max Tegmark. “We’re staying focused, and the 37 teams supported by today’s grants should help solve such real issues.”

What do you think? Money well spent?
