ERT Group Bonn
Dept. of Computer Science, University of Bonn
Contact: Christoph Günzel (firstname.lastname@example.org)
While this is an appropriate way to handle data such as email, articles, or FTP'd binary files (which are always used as a whole, so the receiver can tolerate the delays caused by re-transmitting lost packets rather than receive corrupt data, or no data at all), it is far less useful for the real-time applications we are concerned with.
In audio or video conferencing, the data have a stream-like character rather than the file-like character of email or web pages. The participants of a video conference cannot have their receiver stall on every missing frame until re-transmission succeeds. Beyond that, buffering the data would require large amounts of local memory on both the sender's and the receiver's side, not to mention the additional network traffic caused by repeatedly sending (large) portions of data.
But we cannot predict which packets, or how many, will be lost during transmission. What we really need is a better way to encode our messages against such losses. A promising approach, on which we concentrate, is the use of forward error correction (FEC) schemes. An encoding that allows complete recovery of the message from any set of received packets whose total length equals that of the original message is called a Maximum Distance Separable (MDS) code.
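The MDS property can be illustrated with the simplest possible erasure code, which is not the scheme used in our implementations but shows the principle: a single XOR parity packet turns k data packets into a (k+1, k) MDS code, since any k of the k+1 packets suffice to reconstruct the message.

```python
# Minimal illustration (not our group's implementation): one XOR parity
# packet lets the receiver recover any ONE lost packet, because the
# XOR of the k surviving packets equals the missing one.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(packets):
    """Append one parity packet: the XOR of all data packets."""
    parity = packets[0]
    for p in packets[1:]:
        parity = xor_bytes(parity, p)
    return packets + [parity]

def recover(survivors):
    """Reconstruct the single missing packet from the k survivors."""
    missing = survivors[0]
    for p in survivors[1:]:
        missing = xor_bytes(missing, p)
    return missing

data = [b"pack", b"etAB", b"etCD"]          # three equal-sized packets
coded = encode(data)                        # four packets on the wire
# packet 1 is lost in transit; the other three are enough:
assert recover([coded[0], coded[2], coded[3]]) == b"etAB"
```

Recovering from more than one loss requires redundancy over a larger field, which is exactly where the Cauchy scheme below comes in.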
In our group we focus on variants of the so-called Cauchy coding scheme, as well as on some related coding schemes.
The Cauchy coding scheme was first developed in "An XOR-Based Erasure-Resilient Coding Scheme" by J. Blömer, M. Kalfane, R. Karp, M. Karpinski, M. Luby and D. Zuckerman. In that work, Cauchy matrices are used to generate the code. We use the Cauchy coding scheme in our implementations.
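A Cauchy matrix over GF(2^m) has entries A[i][j] = 1/(x_i + y_j) for disjoint sets of field elements x_i and y_j; every square submatrix of such a matrix is invertible, which is what makes the resulting code MDS. The following sketch builds one over GF(2^8); the choice of field, primitive polynomial, and element sets is ours for illustration, not taken from the paper.

```python
# Sketch (our assumptions: GF(2^8) with primitive polynomial
# x^8 + x^4 + x^3 + x^2 + 1, i.e. 0x11d, and arbitrary element sets).
# Build log/exp tables for fast multiplication and inversion.

EXP = [0] * 512
LOG = [0] * 256
v = 1
for i in range(255):
    EXP[i] = v
    LOG[v] = i
    v <<= 1
    if v & 0x100:
        v ^= 0x11d          # reduce modulo the primitive polynomial
for i in range(255, 512):
    EXP[i] = EXP[i - 255]   # wrap-around so exponents need no mod

def gf_inv(a: int) -> int:
    """Multiplicative inverse in GF(2^8): a^(-1) = a^254."""
    return EXP[255 - LOG[a]]

def cauchy_matrix(xs, ys):
    """A[i][j] = 1 / (x_i + y_j); addition in GF(2^m) is XOR,
    and xs and ys must be disjoint so no entry divides by zero."""
    return [[gf_inv(x ^ y) for y in ys] for x in xs]

# e.g. n - k = 2 redundant packets for k = 3 data packets:
A = cauchy_matrix(xs=[1, 2], ys=[3, 4, 5])
```

Each redundant packet is then a linear combination of the data packets with coefficients taken from one row of A; decoding inverts the square submatrix corresponding to the packets that actually arrived.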
In practice, there are two reasons to make every effort to reduce this redundancy. First, on both the sender's and the receiver's side, every additional packet means additional work: the sender has to create and transmit the redundant data, and the receiver has to absorb it and separate it from the information needed to decode the message. Second, in many networks (e.g. the Internet) a large share of packet losses results from congestion, so adding more traffic to compensate for problems probably caused by too much traffic is a technique that must be handled with great care.
One approach to reducing this redundancy overhead is variable redundancy coding. The main idea is that, instead of viewing a message as a monolithic chunk of data, we divide it into (many) portions that can be ranked by their importance to the receiver. Each portion is then encoded individually, with an individual amount of redundancy corresponding to its importance.
Authors: Carsten Oberscheid (email@example.com)
and Christoph Günzel (firstname.lastname@example.org)