Swapping Data Across Devices with a Gesture
Assistant Professor Wen Yonggang of the School of Computer Engineering, Nanyang Technological University, Singapore, has made real a staple of numerous sci-fi films: moving data wholesale from one computer to the next with little more than a gesture.
His prototype, called "Social Cloud TV", was demonstrated with video, both internet and broadcast. Of course, if it works with internet video, it will work with internet anything: so long as all the devices involved can access the same data source, the technology takes care of the rest.
It takes advantage of the modern propensity to store data in a publicly accessible cloud service. So long as you know the address to access, you can pull data onto any device. So why not have two devices tell one another which stream they are accessing, and where in the stream they are? With that information, a second device can jump to exactly the same place in a file as the first. All that remains is automation: when the control signal fires, the first device tells the second which file it is accessing and how far into it it is, and the second device fetches the file and moves to that position.
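The handoff described above can be sketched in a few lines. Everything here is hypothetical, a minimal illustration of the idea rather than Wen's actual protocol: the `Player` class, the message format, and the function names are all invented for this sketch.

```python
import json


class Player:
    """Toy stand-in for a media player, invented for this illustration."""

    def __init__(self):
        self.stream = None      # URL of the cloud stream currently playing
        self.position = 0.0     # playback position in seconds

    def open(self, url):
        self.stream = url

    def seek(self, seconds):
        self.position = seconds


def make_handoff_message(stream_url, position_seconds):
    """Device A packages which stream it is playing and where it is."""
    return json.dumps({"stream": stream_url, "position": position_seconds})


def apply_handoff_message(message, player):
    """Device B opens the same cloud stream and seeks to the same point."""
    state = json.loads(message)
    player.open(state["stream"])    # both devices reach the same cloud source
    player.seek(state["position"])  # resume exactly where device A left off
```

Because both devices read from the same cloud source, the only state that needs to cross the air gap is this tiny message, which is why a single gesture is enough to trigger the whole transfer.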
In this case, the control signal is triggered by resting one device on the other and then 'pulling' the data across with a hand gesture, similar to sliding one sheet of paper over another. Any webcam or gesture-recognition hardware is compatible with the technology, so long as it runs Wen's control software.
Currently Social Cloud TV has attracted the attention of several of Singapore's cable and mobile TV networks, as well as a handful of international TV operators. This makes it very likely that the next generation of cable and mobile TV decoders, and possibly satellite TV decoders, will have this data-dragging capability built in. Wen anticipates the first models should be on the shelves for the public to purchase by 2014.
As it stands, only one device needs gesture recognition: provided the devices connect seamlessly, the one with the capability recognises the gesture and loads the data onto every device within ultra-wideband range (those connected within a few inches of the system). This would give almost every household the basics needed for the technology to work.
The system leverages a cloud backend for media processing (e.g., video transcoding), so that the same video can be streamed to each device in the most suitable format.
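The server-side idea can be sketched as follows. This is an assumption-laden illustration, not Wen's implementation: the device profiles and rendition parameters are invented, standing in for whatever the cloud backend actually transcodes.

```python
# Hypothetical rendition table: the cloud keeps (or transcodes on demand)
# several versions of one video and serves each device the best match.
RENDITIONS = {
    "tv":     {"codec": "h264", "resolution": "1080p"},
    "tablet": {"codec": "h264", "resolution": "720p"},
    "phone":  {"codec": "h264", "resolution": "480p"},
}


def pick_rendition(device_type):
    """Return the rendition best matched to the requesting device,
    falling back to the lightest stream for unknown devices."""
    return RENDITIONS.get(device_type, RENDITIONS["phone"])
```

Keeping this choice in the cloud is what lets a show 'pulled' from a TV to a tablet arrive in a tablet-friendly format without either device doing any conversion itself.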
"You could watch a video with your class mates on the computer, and just before you leave school, 'pull' the show into your tablet and continue watching on the go," said Wen.
"Upon reaching home, you could just turn on your television and 'throw' the video back to the TV, and continue watching the programme there."
"With the increase in online video and personal multimedia devices, we have lost out on the experience of watching TV shows together as a family and as a social activity with friends. So I hope that with my invention, people can now reconnect with each other socially using videos."
The prototype took one and a half years to develop. The research team, including Assistant Professor Wen, consisted of nine members, three of whom were undergraduate students.