Someone poking around in the FaceUnlock application code from the Android SDK stumbled upon strings that look like answers from a personal assistant. We heard rumors a few months back that Google has been working on a personal assistant as one of its Google X projects, and has been for years. Those rumors came from someone who claimed he was involved with Google X and wanted to hint at what they've been working on:

This is in total violation of the NDA, but I don’t care anymore. Sue me.

The central focus of Google X for the past few years has been a highly advanced artificial intelligence robot that leverages the underlying technology of many popular Google programs. As of October (the last time I was around the project), the artificial intelligence had passed the Turing Test 93% of the time via an hour-long, IM-style conversation. IM was chosen to isolate the AI from the speech synthesizer and physical packaging of the robot.

If that’s true, and this technology makes it into Android in the (near?) future, then you could soon have close-to-real conversations with your phone’s AI. According to him, it should make Siri look like a toy in comparison:

I was unfairly terminated along with quite a few of my coworkers. It was a small enough group that they will probably figure out who I am, but like I said, I don’t care.

I’m not going to answer all of the questions, but for #6, our AI is practically like talking to a human. Siri can give you information, but we can give you conversation. We can also go full genius mode which essentially gives you the sum of internet knowledge via conversation.

It can do objects by using an advanced version of the Google Goggles software. It also has a suite of basically every sensor that could be put on it; optical, laser, infrared, ultrasonic, depth cameras, etc.

It needs an internet connection. Last I heard, it was using an average of only a couple hundred kilobytes per second. Most of the processing is done onboard, internet is used for external information.

It’s possible that all of this is fake, but it does sound plausible. It would make sense for the assistant to recognize objects using Google Goggles’ pattern-recognition algorithms, or to use a lot of data if it’s really as smart as he says. Google may not have wanted to reveal this for another few years, until it could pass maybe 99% of Turing tests, until phones were powerful enough, and until data plans were truly unlimited and LTE was everywhere. With Apple’s release of Siri, however, they may have no choice but to turn this technology into a true personal assistant in an upcoming version of Android.

Will it arrive in the version of Android announced at Google I/O (5.0, perhaps)? It would certainly make sense to unveil something like this at a developer event. Last year Google was already showing some robots there, and some rumors say the next version of Android will feature some genuinely groundbreaking changes and technologies. So we’ll see whether Majel, or whatever they end up calling the personal assistant, is announced there too.
