Google patent details indicate Google Glass(es) may use hand gestures
Augmented vision is a staple of sci-fi movies, like Terminator or Battlestar Galactica, to name just a couple. But the future of augmented reality may be closer than you think!
Earlier this week Google was granted three patents for its Google Glass augmented reality glasses. These patents, detailed by Patent Bolt, shed considerable light on just how users will control the glasses.
It was previously believed that the glasses might be primarily controlled by head gestures, as hinted in the example video Google produced for the Glass project. Details from the patents show that the wearable computer may instead be controlled by hand gestures via hand-wearable markers. This is great news if you thought that making head gestures might end up looking silly, like you had a nervous tic. Then again, will making hand gestures look much better? Regardless, there’s a lot to be excited about.
How Hand Gestures May Work
It appears that there will be several hand-wearable options to choose from. For example, the patent information includes designs for rings, bracelets, fake fingernails, and invisible decals. The glasses will have an IR camera that reads wearable items with IR-reflective surfaces. Also fascinating is the possibility of wearing multiple IR-reflective items to enable complex gesturing. For example, wearing multiple rings or invisible decals could allow gestures involving several finger movements at once.
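To make the idea concrete, here's a minimal sketch of how glasses-side software might turn tracked marker positions into gestures. Everything here is an assumption for illustration: the patent doesn't specify gesture names, thresholds, or an API, and the marker IDs (`ring_index`, `ring_thumb`) and the `classify_gesture` function are invented for this example. The IR camera would report each reflective marker's position per frame; comparing two frames lets you distinguish a one-finger swipe from a two-finger pinch.

```python
# Hypothetical sketch: mapping IR-reflective marker positions (as an IR
# camera might report them, frame by frame) to simple gestures.
# Marker IDs, gesture names, and thresholds are invented for illustration.

def _dist(p, q):
    """Euclidean distance between two (x, y) pixel positions."""
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def classify_gesture(prev, curr, threshold=20.0):
    """Classify a gesture from two frames of marker positions.

    prev and curr map a marker ID (e.g. a ring worn on a finger) to its
    (x, y) position in camera pixels. One marker is enough for a swipe;
    two markers (say, rings on thumb and index finger) allow pinch/spread.
    """
    common = sorted(set(prev) & set(curr))  # markers visible in both frames
    if not common:
        return "none"
    if len(common) >= 2:
        # Two markers: compare the distance between them across frames.
        a, b = common[:2]
        d_prev = _dist(prev[a], prev[b])
        d_curr = _dist(curr[a], curr[b])
        if d_curr < d_prev - threshold:
            return "pinch"    # fingers moved together
        if d_curr > d_prev + threshold:
            return "spread"   # fingers moved apart
    # Single marker (or no pinch detected): check horizontal motion.
    m = common[0]
    dx = curr[m][0] - prev[m][0]
    if dx > threshold:
        return "swipe_right"
    if dx < -threshold:
        return "swipe_left"
    return "none"
```

With two rings tracked, moving them toward each other between frames would register as a pinch; a single ring sliding sideways would register as a swipe. This is one reason multiple wearable markers are interesting: each extra marker adds a degree of freedom to the gesture vocabulary.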
Can anyone else envision there being some style involved with this? Perhaps we’ll see different styles of rings and fake fingernails.
The patent information also indicated there may be a small touch pad on the side of the glasses for manual input. Yes, the application also confirmed that the glasses are intended to function as a smartphone. They’ll support wireless radio technologies such as 3G, LTE, CDMA, and GSM, as well as other features you would expect, like gyroscopes and GPS chips.
While hand gestures seem to make a lot of sense as a Google Glass control method, there’s no way to know yet how Google will ultimately decide the glasses should be controlled. Perhaps they will settle on head gestures after all, or maybe they’ll use a combination of movements, gestures, and voice commands.
What’s clear is that Google is certainly doing some experimenting with different input methods.