“Go has always been a holy grail for AI research,” DeepMind founder Demis Hassabis told The Verge in an interview following AlphaGo’s first victory against South Korean Go champion Lee Sedol this week (which has already been followed up by a second victory). But Hassabis has much bigger-picture ideas for DeepMind beyond complex gaming, including how AI can be used to make virtual assistants like Google Now much better.
Speaking about how different AlphaGo is from the famous Deep Blue chess program, Hassabis said “programmers distilled the information from chess grandmasters into specific rules and heuristics, whereas we’ve imbued AlphaGo with the ability to learn and then it’s learnt through practice and study, which is much more human-like.”
Hassabis has bigger ambitions for AlphaGo and DeepMind generally – of which AlphaGo is not even the main project – saying he wants to apply DeepMind solutions to “big real-world problems”. Among these problems, and one that is close to our hearts, is making virtual assistants smarter.
According to Hassabis, “at the moment most of these systems are extremely brittle – once you go off the templates that are pre-programmed then they’re pretty useless. So it’s about making that actually adaptable and flexible and more robust”. Because the real world is “messy and complicated,” pre-programmed virtual assistants simply aren’t able to handle the kinds of unpredictable things that people do.
This is where AlphaGo’s learning capabilities come into play. “I just think we would like these smartphone assistants to actually be smart and contextual and have a deeper understanding of what it is you’re trying to do,” Hassabis says. “The only way to do intelligence is to do learning from the ground up and be general.”
Hassabis is confident that his team could start applying AlphaGo-style learning to virtual assistants tomorrow, though it would require a slightly different approach from the one used for AlphaGo. Still, he says the benefits of that learning will gradually be felt in virtual assistants “in the next two to three years…certain aspects will just work better. Maybe looking four to five, five-plus years away you’ll start seeing a big step change in capabilities”.
Fortunately, there’s already a Google product for that learning in Google Now, and according to Hassabis, “a smartphone assistant is something I think is very core – I think Sundar [Pichai] has talked about that as very core to Google’s future.” But DeepMind’s ambitions reach far beyond making Google Now more adaptable to your quirks.
Hassabis foresees a future that includes AI-assisted research where programs can do the time-consuming and data-driven grunt work for scientists to help identify significant trends or structure in data sets no human would ever be able to sift through on their own. But before all that, AlphaGo has another two games of Go to win.
Where do you see virtual assistants in five years’ time?