Google to talk AI ethics with top minds in the industry
To some it’s scary and to others it’s exciting, but the scientific consensus is that artificial intelligence will have a drastic impact on humanity, probably within our lifetimes.
What exactly that impact will look like is still up for debate, with predictions spanning a range of sci-fi scenarios. Stephen Hawking made waves when he claimed that the development of AI could end the human race. While Elon Musk, the real-life Tony Stark, and Microsoft's Bill Gates chimed in with concerns of their own, other experts believe the threat of artificial intelligence has been exaggerated. What everyone can agree on is that developing artificial intelligence is something we should be very, very careful with.
When Google purchased DeepMind to the tune of £400 million, the London-based AI startup set some pretty firm ground rules for the two companies' relationship. Demis Hassabis, DeepMind's CEO, said that one condition of the acquisition was that Google form an internal ethics committee. DeepMind also refuses to allow any of its technology to be used for weapons or military purposes.
Hassabis has announced that he and many of the top minds currently working in AI research will meet in New York in early 2016 to discuss and debate the ethical issues surrounding their work. Although no official list of participants has been released, big players such as Apple and Facebook will almost certainly have representatives present.
Since purchasing the AI company, Google has put DeepMind's technology to work across a wide array of its products. Artificial intelligence has improved Google's image recognition technology (you may remember DeepMind's "dreams" going viral earlier this year) and is also helping services like Google Now anticipate users' needs more accurately. Talks like the one planned for New York will likely help create ethical frameworks to guide the development of these and other technologies.
What do you think about this development? Should scientific research not involving living creatures be fettered by ethical concerns? Is AI going to be the proverbial genie that won’t go back in the bottle? Are we going to throttle ourselves into a wasteland future that sees every human being wailing “What has science done?” under the authoritarian bootheel of robotic overlords? Let us know in the comments!