An evolution in gesture controls for mobile devices could be just around the corner. PMD Technologies has produced what it calls the world’s smallest, most integrated, and most sophisticated Time-of-Flight 3D imaging camera. The CamBoard pico XS is a 3D depth sensor that creates full 3D scans of the world in front of it; in particular, it can detect hands and fingers with low latency and low noise, allowing it to recognize intricate, rapid hand gestures.
The CamBoard pico XS is about the height of a ballpoint pen and could easily be integrated into a typical laptop or a large tablet. Although that may still be too large for most mobile devices, this model is much smaller than its predecessor, so it is reasonable to imagine a mobile-friendly version arriving down the road.
A great sensor requires great software. With help from Three Gear Systems, nimbleUX was created, and it is being shown off as a tool that turns hand movements into actions on a computer. Out of the box, nimbleUX ships with a solid set of hand gestures for navigating your computer, but it also allows for custom gestures and actions. Operation is quick and smooth, and the live 3D models appear very accurate.
The CamBoard’s dense 3D models of objects are made possible by improved, sophisticated illumination techniques. Time-of-Flight refers to the sensor emitting modulated infrared light and measuring the time the signal takes to travel to an object and back to the camera. This allows the sensor to accurately replicate the world in front of it in true three dimensions, meaning it can register not only basic hand movements but also track individual finger positions and movements, providing gesture controls that current mobile sensors simply cannot match.
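To make the Time-of-Flight idea concrete, here is a minimal sketch of the continuous-wave principle such sensors commonly use: light modulated at some frequency reflects off an object, and the measured phase shift between the emitted and returning signal implies a distance. This is an illustration of the general technique, not PMD’s actual algorithm; the 30 MHz modulation frequency in the example is an assumed, typical value.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def depth_from_phase(phase_shift_rad: float, f_mod_hz: float) -> float:
    """Distance implied by a measured phase shift.

    The light travels to the object and back, so the round trip covers
    twice the distance: d = c * phi / (4 * pi * f_mod).
    """
    return C * phase_shift_rad / (4 * math.pi * f_mod_hz)

def max_unambiguous_range(f_mod_hz: float) -> float:
    """Beyond this distance the phase wraps past 2*pi and aliases."""
    return C / (2 * f_mod_hz)

# An assumed 30 MHz modulation gives roughly a 5 m unambiguous range,
# comfortably covering hands in front of a laptop.
```

The trade-off this exposes is why such sensors suit short-range use: a higher modulation frequency improves depth precision but shrinks the unambiguous range.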
Of course, 3D modelling goes beyond hand and gesture controls, and even beyond the CamBoard’s ability to track multiple users’ hands and fingers at the same time. Dense models of environments can serve as scenes for augmented reality, as input for 3D printing, and more.
The nimbleUX interface could allow for a wide range of uses, including interaction with a virtual reality environment such as that provided by the Oculus Rift. The potential applications are numerous and compelling, and we could very well see sophisticated gesture-based control systems for AIO (all-in-one) systems, tablets, and eventually our mobile devices as the tech itself is further miniaturized. A fully touchless operating system could revolutionize how we use our devices, and touchless gesture controls in an automobile could greatly reduce driver distraction for basic tasks such as changing the song on the radio.
We’ve seen LG do this before, and Samsung has pushed the envelope here, but the implementations are still rather rudimentary in their current form. Samsung’s floating touch and Air View, which utilized the S-Pen stylus, have been heavily promoted but have generally been relegated to bragging rights, unable to stick for everyday use. Face, head, and eye detection essentially round out the currently available options: navigating a device with one finger from an inch or two away, turning or scrolling pages based on eye movement, and shutting off the display when your head turns away or your eyes close.
Several third-party applications have popped up in the Play Store that use the typical Android proximity sensor to provide touchless gesture controls. Air Gesture Control by INTank Corp is a powerful app in terms of the tasks you can trigger with a proximity-sensor gesture, but its strength lies in what the app can do, not in the variety of gestures you can perform. Obviously, we are excited at the prospect of new ways to interact with our Android devices. What touchless control are you most interested in having on your device? Would you welcome the ability to interact with your technology without ever touching it?
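Because a phone’s proximity sensor only reports a short-range distance (often effectively near/far), apps like these typically recognize gestures by watching for transitions in that reading. Here is a hypothetical sketch of that idea, counting hand waves from a stream of readings; the 5 cm threshold and the sample data are illustrative assumptions, not taken from any real app.

```python
from typing import List, Tuple

# Illustrative threshold: typical phone proximity sensors report only a
# few centimeters of range, so "near" here is assumed to be under 5 cm.
NEAR_THRESHOLD_CM = 5.0

def count_waves(samples: List[Tuple[float, float]]) -> int:
    """Count hand passes over the sensor.

    `samples` is a list of (timestamp_s, distance_cm) readings; each
    far -> near transition counts as one pass of the hand.
    """
    waves = 0
    was_near = False
    for _, distance_cm in samples:
        is_near = distance_cm < NEAR_THRESHOLD_CM
        if is_near and not was_near:
            waves += 1
        was_near = is_near
    return waves

# Two passes of a hand over the sensor:
readings = [(0.0, 8.0), (0.1, 2.0), (0.2, 8.0), (0.3, 2.0), (0.4, 8.0)]
print(count_waves(readings))  # prints 2
```

The limitation the article describes falls straight out of this sketch: with a single near/far signal there is little to distinguish one gesture from another, which is exactly where a full depth sensor like the pico XS would help.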
Iron man’s workshop!
Another piece in the ‘puzzle’ toward Artificial Intelligence or independent machines. The evolution of the species, or simply good tech?
This looks awesome!!! How does it compare to that Leap Motion thing that is similar to this?
So sign language could become another input method!
For mobile phones there are already prototypes of Kinect-style 3D sensors that use two regular cameras on the front face. I’m happy with that style, but either way :)
Meh. Looks less capable than Leap, and that bored me too after a while.