NFC is a clever little technology that has enabled some big developments, most notably tap-to-pay systems. But it has its limitations, one of which is that both interacting devices must contain NFC hardware. In the future, smartphones may instead use electromagnetic emission sensing to achieve something similar, without requiring matching tech on the other device.
Carnegie Mellon University's Future Interfaces Group has created a concept to show off how this new tech could be applied. Using a 2013 Moto G and an attached electromagnetic sensor (which could be integrated into the phone in the future), the smartphone is able to identify electronic objects just by tapping them, and then interact with them.
Check out this so-called “on-touch contextual functionality” in the video below.
This works because every electronic device emits a distinctive electromagnetic signal, something outlined in research from Disney last year (in that case, however, the signals were only used to identify objects, not control them).
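To give a rough idea of how identification like this might work, here is a minimal sketch (not the CMU or Disney implementation, and all names and signal values are hypothetical): reduce a raw sensor capture to a normalized frequency spectrum, then match it against stored fingerprints of known devices.

```python
# Hypothetical sketch of EM-fingerprint identification: each device is
# assumed to emit a characteristic frequency signature.
import numpy as np

def em_fingerprint(samples: np.ndarray) -> np.ndarray:
    """Reduce a raw sensor capture to a unit-normalized magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(samples))
    return spectrum / np.linalg.norm(spectrum)

def identify(samples: np.ndarray, profiles: dict) -> str:
    """Return the known device whose stored fingerprint is most similar
    (a dot product is cosine similarity here, since fingerprints are
    unit-normalized)."""
    fp = em_fingerprint(samples)
    return max(profiles, key=lambda name: float(fp @ profiles[name]))

# Toy demo: two devices emitting at different dominant frequencies.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000, endpoint=False)
profiles = {
    "coffee machine": em_fingerprint(np.sin(2 * np.pi * 60 * t)),
    "monitor": em_fingerprint(np.sin(2 * np.pi * 120 * t)),
}

# A noisy capture taken while touching the monitor.
reading = np.sin(2 * np.pi * 120 * t) + 0.1 * rng.standard_normal(1000)
print(identify(reading, profiles))  # prints "monitor"
```

A real system would of course work with far messier broadband emissions and a trained classifier rather than two clean sine waves, but the shape of the pipeline (sense, fingerprint, match) is the same.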
Of course, the appliances the Moto G seeks to control must be "smart" devices: it may identify a regular coffee machine, but it wouldn't be able to interact with one. Still, the implication is that this could simplify our interactions with smart devices and extend their functionality.
I'm not completely sold on the idea that smartphone control is generally superior to physical controls, and the smartphone would still need to come into contact with the other device, which isn't necessarily an improvement on NFC pairing. However, these developments could open up some exciting opportunities. At the end of the video above, the tech is used to transfer a file to a computer's desktop by tapping the phone against the monitor, and it's this kind of interactivity that holds the real promise, rather than the universal remote-style functions.
Engadget notes that commercial implementation is still at least a year away, and there are no guarantees it would be widely adopted even then. But if it's cost-effective and easy to integrate, it stands a good chance.
What are your thoughts on the potential of on-touch contextual functionality? Let us know in the comments.