Touchscreens are currently the preferred way of interacting with smartphones and tablets. But mobile devices are a fast-evolving lot, and we have seen how QWERTY keyboards, styli and even numeric keypads were popular UI choices just a few years ago.
Touchscreen alternatives are already in place, such as the speech recognition in Apple's Siri and various Nuance-led efforts on Android and other platforms. In the future, we might also consider motion control. Think of it as Minority Report-style manipulation of virtual objects on-screen.
Oh, wait. We already have that with Microsoft Kinect, right? Kinect was primarily marketed for console gaming, although a computer-based variant is available. But what's missing at this point are workable user interfaces that actually make motion control a productive means of manipulating data.
Enter Leap Motion, a company that has created a device known as The Leap. The technology uses several camera sensors to map out a three-dimensional workspace for tracking movement. The Leap does not share Kinect's limitations of distance and angle of view; instead, it can track motion as long as the user is within view of the sensors. What's great is that the device is very accurate, and can track motion down to a hundredth of a millimeter, or about half the diameter of a human hair. It can also distinguish among different objects and parts of the body.
Currently, The Leap is a device about the size of a USB stick, and its VGA cameras are limited to an eight-cubic-foot workspace (about 0.23 cubic meters) — roughly the volume of a small to medium refrigerator. The technology can be scaled, though, such that the size of the workspace is limited only by the cameras' field of view. Leap Motion could presumably capture movement in bigger areas like entire rooms, or even fields.
But the small size has an advantage: it can be embedded into mobile devices like notebooks, smartphones and tablets. Leap is already working with device manufacturers on possibly including the technology in their products. Leap is also giving free sensors to thousands of "qualified developers" with the aim of building up a good app base for the system.
The company says the device will cost $70 when it is released, sometime between December this year and February 2013. Interested users can already pre-order.
Leap is more than just motion capture, though. The company's CTO, David Holz, says that motion-based technology should not force users to memorize gestures and combinations of movements to access data. Instead, developers are encouraged to provide constant, dynamic feedback. This means the UI should be intuitive and responsive enough that a user should never have to wonder how to use it.
For instance, the pinch-to-zoom analogy is quite self-explanatory. So is turning an object to rotate it. The aim here is to make the UI feel as similar to manipulating real-world objects as possible.
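To make that idea concrete, here is a minimal sketch of the "constant dynamic feedback" approach described above — not the real Leap SDK, just a hypothetical illustration. Every frame of hand data is mapped directly and continuously to UI state (cursor position, zoom level), so the interface responds instantly instead of waiting to recognize a memorized gesture. All function names, the 40 mm "neutral" pinch gap, and the 300 mm tracking span are assumptions for the example.

```python
# Hypothetical sketch of continuous, per-frame motion-to-UI mapping.
# This is NOT the actual Leap Motion API; names and numbers are illustrative.

def pinch_to_zoom(finger_gap_mm, base_gap_mm=40.0):
    """Map the thumb-to-index distance to a zoom factor.

    A gap equal to base_gap_mm means no zoom (1.0); a wider gap zooms in,
    a narrower gap zooms out -- mirroring touchscreen pinch-to-zoom.
    """
    return max(0.1, finger_gap_mm / base_gap_mm)

def hand_to_cursor(x_mm, y_mm, screen_w=1920, screen_h=1080, span_mm=300.0):
    """Map a hand position (mm, centered on the sensor) to screen pixels."""
    px = min(max((x_mm / span_mm + 0.5) * screen_w, 0), screen_w - 1)
    py = min(max((y_mm / span_mm + 0.5) * screen_h, 0), screen_h - 1)
    return int(px), int(py)

# Per-frame loop: each new reading updates the UI immediately,
# giving the user continuous feedback rather than discrete commands.
frames = [{"gap": 40.0, "x": 0.0, "y": 0.0},
          {"gap": 80.0, "x": 75.0, "y": -75.0}]
for frame in frames:
    zoom = pinch_to_zoom(frame["gap"])
    cursor = hand_to_cursor(frame["x"], frame["y"])
    print(zoom, cursor)
```

The point of structuring it this way is that the user never issues a "zoom command"; the on-screen result simply tracks the hands, frame by frame, the way a physical object would.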
Once out, The Leap should make it possible to manipulate objects by moving your hands and fingers through the air, or even along a surface. This kind of technology has actually been explored by Apple, which has even applied for a patent on sensing motion over a screen without the need for capacitive touch.
As such, we might soon expect to control our devices without even touching them. We could perhaps answer or dismiss calls by swiping our hands in the air, or ask the phone to read out messages with another hand motion.
Couple this with technologies like Google Glass, and we've got ourselves a computing future like Tom Cruise's system in Minority Report. Let's just hope it doesn't lead to crime precognition, or we're all screwed.
Check out the demo video below.