Improving the Internet for Real-Time Applications

Norwegian researchers are attempting once again to tackle a serious shortcoming the net has when it comes to gaming and VR applications. The network was designed to get a packet to its destination no matter what; it was not designed for real-time applications.

The researchers' focus is on improving the net for gaming, yet the same improvements are ideal for virtual reality applications with high sensory bandwidth.

The approach has been tried a few times before, with limited success – UDP was the end result of one such attempt more than a decade ago. It is an improvement over the standard TCP/IP protocols for time-sensitive traffic, but still far from perfect.
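As a rough illustration of that trade-off, the sketch below sends the same small game update once over TCP and once over UDP (the host, port and payload are made-up values, not anything from the research project). TCP guarantees delivery and ordering, so a lost segment stalls everything queued behind it until it is retransmitted; UDP simply fires the datagram and moves on, offering no guarantees but adding no waiting.

```python
import socket

# Hypothetical endpoint and payload -- purely illustrative values.
SERVER = ("203.0.113.10", 9000)
UPDATE = b"player=42;x=10.5;y=3.2;action=fire"

# TCP: reliable and ordered. A lost segment is retransmitted, and every
# later segment waits behind it (head-of-line blocking), adding latency.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(SERVER)
tcp.sendall(UPDATE)
tcp.close()

# UDP: a single datagram, no handshake, no retransmission. If it is lost,
# the next update simply supersedes it -- no waiting, but no guarantees.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(UPDATE, SERVER)
udp.close()
```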

“Up to now, Internet research has primarily focused on speeding up transmission by increasing bandwidth so that more data can be transferred at a given time,” explains Andreas Petlund of Simula Research Laboratory in Oslo, one of the researchers conducting this incarnation of the study.

“In real-time gaming against other players online, data is transmitted only when an action such as moving around or shooting at someone is performed. The same principle applies for stock market programs when placing orders or requesting share prices, for example, via the trading systems in use by Oslo Børs, the Norwegian Stock Exchange. In such cases it is essential to avoid any delay.”

Latency in transmitting thin data can result in a great deal of disparity between participant perspectives.

Applications like these often generate what are called thin data streams. With thin streams, only small amounts of data are transmitted at a time, and there can be extended periods between data packets. When those packets are carrying time-sensitive financial data, or sensory data for virtual reality, these gaps are simply unacceptable.
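A thin stream might look like the hedged sketch below: a few dozen bytes written at irregular, often long, intervals. One practical wrinkle is Nagle's algorithm, which by default batches small TCP writes and can add further delay, so disabling it with TCP_NODELAY is a common first step for this kind of traffic. The server address and timing values are illustrative assumptions, not figures from the project.

```python
import random
import socket
import time

# Hypothetical game server -- an assumption for illustration only.
SERVER = ("203.0.113.10", 9000)

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(SERVER)
# Disable Nagle's algorithm so each small write is sent immediately
# instead of being coalesced into a larger segment.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

for event in range(20):
    # A few dozen bytes per event: far below a full-sized packet.
    payload = f"event={event};pos={random.random():.3f}".encode()
    sock.sendall(payload)
    # Long, irregular pauses between packets -- the defining trait
    # of a thin stream, as opposed to a bulk transfer.
    time.sleep(random.uniform(0.1, 2.0))

sock.close()
```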

As part of a new research project funded under the Research Council of Norway’s large-scale programme on Core Competence and Value Creation in ICT (VERDIKT), researchers are working to reduce latency as much as possible.
“We want a more balanced Internet where thin streams don’t always lose out. This can be achieved by adding speed to the mix, instead of only thinking about maximising throughput,” says Dr Petlund.

The approach is to use simulation to study the existing network architecture and work out where the bottlenecks for thin-stream data actually are. Once the researchers understand where and how those bottlenecks form, they can start to design new protocols and router architectures to eliminate them. These updates can then be rolled out across Norway, and ultimately the EU in general. Once that is proven to work, the rest of the world won't be far behind.
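As a loose illustration of the kind of question such simulations can answer, the toy model below pushes a mix of large bulk packets and small thin-stream packets through a single FIFO router queue and measures how long the small packets end up waiting behind the large ones. All parameters are invented for the example; the project's own simulations will be far more detailed.

```python
import random

# Toy single-queue router model -- all values are invented for illustration.
LINK_RATE = 1_000_000        # bytes per second the router can forward
BULK_PACKET = 1500           # bytes: a full-sized packet from a bulk transfer
THIN_PACKET = 100            # bytes: a typical thin-stream packet
SIM_TIME = 10.0              # seconds of simulated traffic

random.seed(1)

# Generate arrivals: a heavy background of bulk packets plus sparse thin ones.
arrivals = []
t = 0.0
while t < SIM_TIME:
    arrivals.append((t, BULK_PACKET))
    t += random.expovariate(600)         # ~600 bulk packets per second
t = 0.0
while t < SIM_TIME:
    arrivals.append((t, THIN_PACKET))
    t += random.expovariate(5)           # ~5 thin-stream packets per second
arrivals.sort()

# Push everything through one FIFO queue and record thin-packet delays.
free_at = 0.0                            # time the outgoing link is idle again
thin_delays = []
for arrival, size in arrivals:
    start = max(arrival, free_at)        # wait while earlier packets transmit
    free_at = start + size / LINK_RATE   # transmission time of this packet
    if size == THIN_PACKET:
        thin_delays.append(free_at - arrival)

print(f"{len(thin_delays)} thin packets, "
      f"mean queueing delay {1000 * sum(thin_delays) / len(thin_delays):.2f} ms")
```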

The approach is certainly novel: nobody has previously had access to the kind of computing muscle needed for large-scale simulations of the Internet, which is what makes this new attempt promising.

The primary obstacle lies in the vast complexity of the systems making up the Internet. “We may thoroughly understand each individual mechanism or sub-protocol under controlled conditions, but in the Internet jungle it is rather like putting something into a black box without knowing what’s going to come out the other end,” he explains.

“This happens because the Internet is a shared resource and we have no control over what everyone else is using it for.”

One of the partners the Norwegian researchers will be working with is Dr Jens Schmitt of the University of Kaiserslautern. Dr Schmitt is working on the development of mathematical models of network behaviour and testing the extent to which the models provide a good picture of reality.
“We also have some researchers from the US on the team,” Dr Petlund adds. “In collaboration with the Cooperative Association for Internet Data Analysis (CAIDA) in San Diego, a leader in the field of Internet analysis, we are going to perform measurements and analyses to find out what percentage of all data streams are thin streams. No such data exists anywhere today.”
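One hedged sketch of how such a measurement might work is shown below: flows from a packet trace are labelled thin if their packets are small on average and spaced far apart in time. The thresholds and the sample flows are this article's illustrative assumptions, not CAIDA's or the project's definitions.

```python
# Classify flows in a (hypothetical) packet trace as thin or bulk streams.
# Each flow is a list of (timestamp_seconds, payload_bytes) tuples.
# The thresholds below are illustrative assumptions only.

MAX_THIN_PAYLOAD = 200      # bytes: thin-stream packets carry little data
MIN_THIN_GAP = 0.1          # seconds: long pauses between packets

def is_thin_stream(flow):
    """Return True if a flow looks like a thin stream."""
    if len(flow) < 2:
        return False
    avg_payload = sum(size for _, size in flow) / len(flow)
    gaps = [b[0] - a[0] for a, b in zip(flow, flow[1:])]
    avg_gap = sum(gaps) / len(gaps)
    return avg_payload <= MAX_THIN_PAYLOAD and avg_gap >= MIN_THIN_GAP

# Example: a game-like flow and a bulk download, with made-up numbers.
game_flow = [(0.0, 40), (0.5, 60), (1.7, 35), (3.0, 80)]
bulk_flow = [(0.0, 1460), (0.001, 1460), (0.002, 1460), (0.003, 1460)]

flows = {"game": game_flow, "download": bulk_flow}
thin = [name for name, flow in flows.items() if is_thin_stream(flow)]
print(f"thin streams: {thin} ({len(thin) / len(flows):.0%} of flows)")
```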

One desired outcome is a standardised mechanism for handling thin data streams through the Internet Engineering Task Force (IETF).

“We won’t be able to establish a standard unless we can prove that one is really needed. That is why we first need to measure the prevalence of thin streams,” says Dr Petlund.

It is also essential to find out if prioritising thin data streams on the Internet has any negative consequences on other traffic. If this turns out to be the case, then the current use of so many different transmission technologies will pose a formidable challenge.

“At one time everyone connected to the Internet by means of a cable. Now we have a wide array of alternatives such as WiFi, 3G, 4G, WiMax, ADSL and fibre-optic connections – all of which behave differently. We must come up with solutions that are optimal for everyone,” Andreas Petlund affirms.

It was an interest in computer games that originally inspired researchers at Simula to study systems supporting time-dependent applications ahead of most of the rest of the field.
Andreas Petlund has previously worked on improvements at the operating system level to decrease the latency arising from packet loss. Users of Linux are benefiting from the resulting technology (see the sketch below).
The large Norwegian computer game company Funcom has integrated these improvements into a number of its game servers. The technology has been tested on its highest-profile game, Age of Conan, and will be used for The Secret World, soon to be released.
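For applications running on Linux kernels that carry these thin-stream modifications, the retransmission tweaks can reportedly be enabled per socket. The sketch below uses the option numbers from linux/tcp.h as fall-backs, since Python's socket module may not export them by name; both the exact values and kernel support should be treated as assumptions to verify locally.

```python
import socket

# Thin-stream options from linux/tcp.h; Python's socket module may not
# export these names, so fall back to the header values. Both the values
# and kernel support are assumptions to check on your own system.
TCP_THIN_LINEAR_TIMEOUTS = getattr(socket, "TCP_THIN_LINEAR_TIMEOUTS", 16)
TCP_THIN_DUPACK = getattr(socket, "TCP_THIN_DUPACK", 17)

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

for option in (TCP_THIN_LINEAR_TIMEOUTS, TCP_THIN_DUPACK):
    try:
        # Retransmit lost packets sooner when the stream is thin: linear
        # (not exponential) retransmission timeouts, and fast retransmit
        # after a single duplicate ACK instead of three.
        sock.setsockopt(socket.IPPROTO_TCP, option, 1)
    except OSError:
        # Not all kernels support (or still support) both options.
        pass

# Hypothetical game server address -- illustrative only.
sock.connect(("203.0.113.10", 9000))
```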

References

Internet research to level the playing field

Internet Engineering Task Force
