There were certainly plenty of great launches at Google’s October 4th event. Not only did we get the new Pixel smartphones, Google Home, Daydream VR and Google WiFi, but there were also some interesting announcements on the expansion of Google’s ecosystem via its Google Assistant and Actions on Google.
Google foresees a future where “the next big innovation is going to take place at the intersection of hardware and software, with AI at the center.” So, as part of the Google Assistant announcement, came the pre-launch of an open developer platform called Actions on Google. It basically means that third parties can integrate their software and hardware into the Google Assistant. It will launch in early December.
There will be two types of actions: Direct Actions and Conversation Actions. When a request is simple, the Google Assistant can trigger the partner action directly. A great example of a direct action is home automation. The user says, “Turn on the lights in the living room,” and the Google Assistant does just that.
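To make the idea concrete, here is a toy sketch of the direct-action flow: a single utterance is mapped straight to a partner handler, with no follow-up questions. This is purely illustrative; `turn_on_lights` and `handle_direct_action` are hypothetical names, not part of the real Actions on Google platform.

```python
def turn_on_lights(room: str) -> str:
    # Stand-in for a hypothetical smart-home partner integration.
    return f"Lights on in the {room}."

def handle_direct_action(utterance: str) -> str:
    # Direct action: one request in, one action out, conversation over.
    prefix = "turn on the lights in the "
    text = utterance.lower().rstrip(".")
    if text.startswith(prefix):
        return turn_on_lights(text[len(prefix):])
    return "Sorry, I can't help with that."

print(handle_direct_action("Turn on the lights in the living room."))
```

The point is the shape of the exchange: the Assistant can fulfil the request in a single step, so no dialogue state is needed.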
However, some things take a little more conversation. If you want to book an Uber, then you need to chat a little about your destination and so on. These are Conversation Actions: actions that need “back and forth” interactions to complete.
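The “back and forth” can be sketched as a handler that keeps asking until it has everything it needs. Again, this is only an illustration of the idea, not the real Actions on Google SDK; `RideConversation` and its methods are invented for this example.

```python
class RideConversation:
    """Toy conversation action: collects pickup and destination over turns."""

    def __init__(self):
        self.pickup = None
        self.destination = None

    def start(self) -> str:
        # The action opens the dialogue by asking for the first missing slot.
        return "Where should the driver pick you up?"

    def reply(self, user_text: str) -> str:
        # Each user turn fills a slot; the action answers with the next
        # question, or confirms once everything is known.
        if self.pickup is None:
            self.pickup = user_text
            return "Where are you headed?"
        if self.destination is None:
            self.destination = user_text
            return f"Booking a ride from {self.pickup} to {self.destination}."
        return "Your ride is already booked."

convo = RideConversation()
print(convo.start())                # "Where should the driver pick you up?"
print(convo.reply("Union Square"))  # "Where are you headed?"
print(convo.reply("the airport"))   # "Booking a ride from Union Square to the airport."
```

Unlike a direct action, the handler has to carry state between turns, which is exactly what makes conversation actions a different category.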
Actions on Google is designed to scale, which means that actions will work equally well on text-only interfaces and speech interfaces, as well as on whatever hybrid interfaces come along in the future.
Google has already lined up a load of partners covering news, video and music for the initial launch. If you are a developer and want to know more, check out developers.google.com/actions/. Google will also be releasing an embedded SDK which will allow developers to build the Google Assistant right into a range of devices, including the Raspberry Pi!