The future landscape of mobile devices
Editor’s Note – In many ways the mobile revolution has only just started. As devices become smarter, as seemingly insignificant objects get connected, and as wireless technologies advance, there will be significant changes in how technology impacts our lives. One researcher taking a longer-term view of mobile technology is Professor Cristian Borcea of the New Jersey Institute of Technology.
Borcea is an Associate Professor and the Associate Chair of the Department of Computer Science at NJIT, and holds a Visiting Associate Professor appointment at the National Institute of Informatics in Tokyo, Japan. Holding a Ph.D. from Rutgers University, Borcea studies mobile computing and sensing, ad hoc and vehicular networks, and cloud and distributed systems.
Here’s Cristian Borcea talking about his research interests in this April 2014 video:
Professor Borcea and his colleagues have recently been awarded a National Science Foundation Grant to research a novel mobile cloud computing platform that would “support collaborative applications in areas such as healthcare, safety, and social interaction, potentially benefiting millions of users.”
“Our goal is to make smartphones smarter,” said Borcea, who is the grant’s principal investigator. We caught up with Professor Borcea and asked him to explain his work and vision to us. In the following guest post, he lays out his vision of cloud-augmented mobile computing and the potential impact his research could have on a range of fields.
The Future Landscape of Mobile Devices
By Cristian Borcea
In the next 10 to 15 years, the mobile landscape will experience a seismic shift that will completely alter the way our devices interact with the physical world. The market will be saturated with intelligent wireless sensors that will impact healthcare, transportation, and energy and water distribution networks. For example, body-worn health monitoring sensors will communicate wirelessly with smartphones or smartwatches, which will be integrated with the cloud. The applications of this technology are seemingly endless – from finding a doctor nearby to assist someone who is having a heart attack, to monitoring and potentially stopping the spread of epidemic diseases. In addition to sensors, we will see autonomous devices, vehicles, and robots in a multitude of forms (self-driving cars, drones, household robots).
These devices will stream large amounts of data from the physical environment (video, audio, and other types), and this data has to be quickly processed to provide useful real-time assistance to users. However, in order for this vision to become a reality, several problems need to be overcome to ensure that these novel mobile apps work efficiently and protect the users’ privacy. Researchers and computer scientists will have to integrate mobile and cloud computing in order to allow automation and interaction between devices.
At the New Jersey Institute of Technology, my colleagues and I are trying to answer a key question posed by the shifting mobile landscape: how can we provide fast, scalable, reliable, and energy-efficient distributed computing over mobile devices?
Our proposed solution is called Avatar, a mobile-cloud system that enables effective and efficient collaborative apps for mobile users. In Avatar, a mobile user owns one or more mobile devices and has an “avatar” hosted in the cloud. Our version of an avatar is a per-user software entity that acts as a surrogate for the user’s devices, reducing their workload and their demand for storage and bandwidth. Avatars run the same operating system as the mobiles and can run unmodified mobile apps or app components. In effect, they save energy on the mobiles and improve the response time of many apps by executing certain tasks on behalf of the users. The avatars are always available, even when their mobile devices are offline.
Potential application: finding people in a crowd
Currently we see a wide range of applications for this research. For example, through Avatar, a parent could find a lost child by using the child’s photo to search through recent images taken by nearby mobile users. Similarly, law enforcement agencies may search for a person of interest. Being able to efficiently and automatically run such an operation on thousands of mobile devices, selected according to their current location and other properties such as social connections, while preserving the users’ privacy, has been the holy grail of mobile computing for a long time.
The “find person” app could run on either the avatars or the mobile phones, depending upon where the photos are currently located and the trade-offs between computation and communication. Our architecture improves the response time by using avatars to process the photos already uploaded to the cloud, and by deciding how to deal best with the photos residing on mobiles.
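To make the computation-vs-communication trade-off concrete, here is a minimal sketch of how such a placement decision might be made for a single photo. All names and cost constants are hypothetical illustrations, not part of the actual Avatar system.

```python
# Hypothetical sketch of the placement decision for a "find person"-style app.
# Constants (uplink speed, matching times) are illustrative assumptions.

def placement(photo_in_cloud: bool, photo_mb: float,
              uplink_mbps: float = 2.0,
              mobile_match_s: float = 1.5,
              avatar_match_s: float = 0.2) -> str:
    """Return where to match one photo: 'avatar' or 'mobile'."""
    if photo_in_cloud:
        return "avatar"  # photo already uploaded: no transfer cost at all
    upload_s = photo_mb * 8 / uplink_mbps  # time to ship the photo to the avatar
    # Offload only when transfer + cloud matching beats on-device matching
    if upload_s + avatar_match_s < mobile_match_s:
        return "avatar"
    return "mobile"
```

Under these illustrative numbers, a large photo still on the device is matched locally (uploading costs more than it saves), while photos already in the cloud are always matched by the avatar.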
In addition to changing the way parents and law enforcement can find persons of interest, our research will improve healthcare and wellbeing. Users may wear body sensors that report health-related data to smartphones and then on to the avatars; additionally, the phones may record the user’s location and co-location with other users. A simple example is an app that allows users or health agencies to monitor and stop the spread of epidemic diseases in their early stages by detecting spikes in the data and alerting agencies such as the CDC to help control an outbreak. Applications of this kind might have helped limit the spread of Ebola. When natural disasters strike, such as an earthquake or a blizzard, the mobiles/avatars of users can be queried in real time to alert emergency teams to the locations of injured citizens. The avatars may share the users’ data even after the mobiles have run out of battery power, thus improving availability.
Privacy in the cloud
The above applications can work efficiently by storing and processing an unprecedented amount of data in the cloud. At the same time, our goal is to also protect the user’s privacy and data confidentiality from the cloud providers. We propose to use a variant of multi-party computation, which is tailored for the Avatar system and cloud:
- Split and store the users’ data between two different cloud providers in such a way that each individual cloud provider cannot access the original data (this is achieved through cryptographic functions);
- Execute the desired program on the data split between the two cloud providers such that the providers cannot see the final result – the requester will get partial results from each cloud provider and use cryptographic functions to merge them into a final result.
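The two steps above can be illustrated with the simplest form of this idea: additive secret sharing. The sketch below is only a minimal stand-in for the tailored multi-party computation the Avatar project proposes; it shows how each provider can compute a partial result (here, a sum) on shares that individually look like random noise, with the requester merging the two partial results.

```python
# Minimal additive secret-sharing illustration (not the actual Avatar scheme).
import secrets

P = 2**61 - 1  # public modulus; the specific choice here is illustrative

def split(value: int) -> tuple[int, int]:
    """Split a value into two shares; each share alone reveals nothing."""
    s1 = secrets.randbelow(P)
    s2 = (value - s1) % P
    return s1, s2

def merge(partial1: int, partial2: int) -> int:
    """Requester combines the providers' partial results."""
    return (partial1 + partial2) % P

# Each provider sums the shares it holds without ever seeing the data:
readings = [72, 68, 75]                      # e.g., heart-rate samples
shares = [split(r) for r in readings]
partial_a = sum(s for s, _ in shares) % P    # computed by cloud provider A
partial_b = sum(s for _, s in shares) % P    # computed by cloud provider B
assert merge(partial_a, partial_b) == sum(readings)
```

Neither provider can recover any reading (or the total) on its own; only the requester, holding both partial results, learns the final sum.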
The privacy of the users’ data is preserved as long as the cloud providers do not collude with each other. This assumption is supported by the current real-world settings in which the cloud providers are competitors (e.g., Amazon and Microsoft).
Programmability and scalability challenges
In addition to privacy issues, there are substantial technical challenges to Avatar, including programmability and scalability issues. Many current apps are interactive or heavy on communication instead of computation. Therefore, new cloud architectures and protocols are needed to maximize scalability and find a good balance between cost and efficiency. For this reason, we propose to work on re-designing the cloud architecture and protocols to support billions of mobile users and mobile apps with very different characteristics from the current cloud workloads.
Some of our current recommendations include the following techniques: virtual machine clustering to localize communication; distributed storage and data layout to localize data accesses; reducing data and computation redundancy; and careful scheduling of VMs and requests to further reduce computing resource consumption.
Avatar apps execute over distributed and synchronized (mobile device, avatar) pairs to achieve a global goal. Therefore, app components have multiple options about where to place execution to achieve different global performance objectives. However, the programming abstractions should shield the programmers from all these complexities and provide a simple, high-level API. In addition to the app code, the programmer should issue policy and performance objectives that will be translated into an execution plan by the Avatar middleware. For this reason, we propose to work on creating a high-level programming model and a middleware that enable effective execution of distributed applications on a combination of mobile devices and avatars.
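As a rough illustration of what “app code plus declared objectives” might look like to a programmer, here is a toy sketch. Every name in it is hypothetical; the real middleware would translate far richer policies into execution plans.

```python
# Hypothetical sketch of a policy-driven placement API; names are invented
# for illustration and are not part of the actual Avatar middleware.
from dataclasses import dataclass

@dataclass
class Policy:
    objective: str                     # e.g., "min_latency" or "min_mobile_energy"
    max_mobile_battery_pct: int = 20   # avoid devices below this battery level

def plan(tasks: dict, policy: Policy, device_battery_pct: int) -> dict:
    """Toy 'middleware': map each task to 'mobile' or 'avatar' per the policy."""
    placements = {}
    for name, data_on_mobile in tasks.items():
        if device_battery_pct < policy.max_mobile_battery_pct:
            placements[name] = "avatar"   # spare the low-battery device
        elif policy.objective == "min_latency" and data_on_mobile:
            placements[name] = "mobile"   # data is local: skip the upload
        else:
            placements[name] = "avatar"
    return placements
```

The point of such an API is that the programmer states *what* matters (latency, energy, battery floor) and the middleware decides *where* each component runs.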
Overall, computer scientists have a major task ahead of them in integrating mobile and cloud computing, but the impact on the way society manages healthcare, transportation, energy, and safety will be immense.
Through our research at NJIT, we hope to create a new mobile-cloud architecture that allows future mobile apps to run efficiently and without intruding on users’ privacy. The mobile landscape of the next 10 to 15 years is exciting, and it is hard to overstate how far we can improve the fundamental building blocks of the way we interact with our physical and digital worlds.