Google’s Project Tango is an Android-based smartphone with Kinect-like powers
Google is launching Project Tango today, an Android-based prototype phone and developer kit equipped with 3D sensors and computer vision chips that can track motion and build a three-dimensional representation of a room.
The idea is to give developers a way to create apps that make use of information about the user's surroundings, apps that were simply not possible in the past. Google is opening Project Tango to developers today, but only 200 hand-picked applicants will receive early access to the device.
The project is being developed by Google's Advanced Technology and Projects (ATAP) group, a skunkworks-type group of engineers that Google transferred over from Motorola. The group is headed by former DARPA director Regina Dugan and is also responsible for Project Ara.
According to project technical lead Johnny Lee, who previously worked at Microsoft on the Kinect sensor, “Project Tango strives to give mobile devices a human-like understanding of space and motion through advanced sensor fusion and computer vision, enabling new and enhanced types of user experiences – including 3D scanning, indoor navigation and immersive gaming.”
The Project Tango prototype device uses a new 3D-sensing chip developed by Movidius. Until now, these kinds of applications required too much power to feasibly work in commercial smartphones, but Movidius has managed a breakthrough in this area. According to TechCrunch’s Matthew Panzarino, “Movidius has leapfrogged ahead in the 3D-sensing market by manufacturing a ready-to-wear chip that has enormously lower power consumption. It produces over 1 teraflop of processing power on only a few hundred milliwatts of power”. That’s compared with the full watt that previous solutions required.
Here’s how the development device looks:
And here’s how Google describes it:
[quote qtext=”Our current prototype is a 5” phone containing customized hardware and software designed to track the full 3D motion of the device, while simultaneously creating a map of the environment. These sensors allow the phone to make over a quarter million 3D measurements every second, updating its position and orientation in real-time, combining that data into a single 3D model of the space around you. It runs Android and includes development APIs to provide position, orientation, and depth data to standard Android applications written in Java, C/C++, as well as the Unity Game Engine. These early prototypes, algorithms, and APIs are still in active development. So, these experimental devices are intended only for the adventurous and are not a final shipping product.” qperson=”” qsource=”” qposition=”center”]
Stay tuned for more.