Mobile data is both wonderful and horrible at the same time. It’s wonderful to be able to pull information about anything from anywhere. It’s horrible because a billion people use a network that can only barely handle a billion people. That means a lot of data packet loss and low data speeds. Researchers at MIT may have found a way to fix that.
Almost all of our current data problems come from congestion of the network. There are so many people pulling information all at once that sometimes things go wrong and data packets get lost. Then the network has to resend those data packets. So not only are a whole bunch of people asking the network for data, but the data must be sent twice in many cases.
This results in pretty low data speeds, but MIT researchers believe they may have solved the problem. How do they do it? Using our old friend algebra. Basically, instead of letting the network resend missing data packets, the device receives an algebraic equation describing the data packets.
Using it, the device simply reconstructs the missing data packets itself. It sounds complicated because it is complicated. However, it demands much less from the network, which is then free to go about providing faster speeds to more people.
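To make the idea concrete: the MIT approach (coded TCP, based on random linear network coding) sends combinations of packets over a finite field, and the receiver solves for whatever went missing. As a toy sketch of that principle, and not the actual MIT scheme, here is the simplest possible version, where the "equation" is a single bytewise XOR of all the packets:

```python
# Toy illustration of packet-loss recovery via a coded "parity" packet.
# The real coded-TCP scheme sends random linear combinations of packets;
# this sketch uses the simplest such combination, a bytewise XOR.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def make_coded_packet(packets):
    """Sender: one extra packet that is the XOR of all data packets."""
    coded = packets[0]
    for p in packets[1:]:
        coded = xor_bytes(coded, p)
    return coded

def recover_missing(received, coded):
    """Receiver: XOR the coded packet with every packet that arrived,
    solving the one-unknown 'equation' for the missing packet."""
    missing = coded
    for p in received:
        missing = xor_bytes(missing, p)
    return missing

packets = [b"pkt1", b"pkt2", b"pkt3", b"pkt4"]
coded = make_coded_packet(packets)

# Suppose the packet at index 2 is lost in transit:
received = [p for i, p in enumerate(packets) if i != 2]
print(recover_missing(received, coded))  # b'pkt3'
```

The point is that no retransmission request ever goes back to the network: the small amount of extra coded data lets the device repair the loss on its own.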
So does it work? Oh my goodness yes. In preliminary testing, MIT researchers were able to boost speeds from 1Mbps to 16Mbps in systems so congested that 2% of data packets were lost. That is just an amazing increase. In similar tests, MIT researchers were able to boost speeds from 0.5Mbps to 13.5Mbps in systems that were so crowded that 5% of data packets were lost.
To put that in perspective, Technology Review states that on an average day in Boston, 3% of data packets are lost to network congestion. So a method like this would theoretically boost Boston data speeds by a considerable amount.
It is very unlikely that everyone would experience the kind of data speed boosts that MIT researchers saw. However, even if it works at only a fraction of MIT’s success, it’ll help alleviate the spectrum crunch issue. If you don’t know about the spectrum crunch issue, you can find out more about it at the FCC’s official website. What are your thoughts on this?
Gotta love NERDS…. Get em MIT!
Wonderful, they’re down the street figuring this out…and I still have a dead spot in my bedroom. -_-*
What this does is merely send slightly more data than originally required so lighter corruption of it can be reconstructed locally. All of the original data is still being sent, it’s not “replaced” by anything – it just hopes to prevent the still-in-use retransmission mechanism from kicking in by repairing the original data locally if possible. This was invented about 70 years ago; the concept is old as dirt and used everywhere in the industry. What might actually be new is the specific coding used, sending as little extra data as possible for as much correcting effect as possible. I can fully see how carriers might be interested in any amount of bandwidth improvement, but I have serious doubts those bandwidth numbers mean what most people think they mean – wanna bet nobody will see order-of-magnitude sized bandwidth increases in real-life situations with this technology…? You’re on! Oh, and what can I say, informed journalism FTW…