ERT Group Bonn

Dept. of Computer Science
University of Bonn, 
53113 Bonn

Contact: Fabian Hargesheimer

Seminar Erasure Resilient Transmission Systems

Erasure Resilient Transmission (ERT)

- A Short Introduction

ERT addresses the problem of transmitting large amounts of data over lossy, packet-oriented networks. In our group, we focus on audio and video data, e.g. video conferencing or digital telephony over the Internet using the UDP protocol. Challenges for such real-time applications are:

Disadvantages of TCP/IP Protocols

Connection-oriented protocols like TCP inherently avoid any loss of data: Internet users want to be sure that their email is delivered correctly and completely. Such protocols rely on direct communication between sender and receiver about the success of the transmission, enabling the sender to retransmit lost packets.

While this is an appropriate way to handle text or binary files such as email or articles (which are used as a whole, so the receiver will rather accept the delays caused by retransmitting lost packets than get corrupt or incomplete data), it is much less useful for the real-time applications we are talking about.

In audio or video conferencing, the data have a stream-like character rather than the file-like character of email or web pages. The participants of a video conference cannot accept that playback stops at each missing frame until retransmission succeeds. Moreover, buffering the stream would require large amounts of local memory on both ends of the transmission channel, not to mention the increase in network traffic caused by re-sending (large) portions of data.

Avoiding Packet Loss By Redundancy

The main idea of using redundancy for loss reduction is quite simple: If you know that the network loses every second packet, you send every packet twice. Very easy!

But: We cannot predict which packets, or how many, will be lost during transmission. What we really need is a better way to protect our messages against losses. One approach on which we concentrate is the use of Forward Error Correction (FEC) schemes. An encoding scheme that allows complete recovery of the message from any set of received packets containing as many packets as the original message is called a Maximum Distance Separable (MDS) code.
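A minimal example of an MDS code is a single XOR parity packet: from m data packets we send n = m + 1 packets, and any m of them suffice to reconstruct the message. A small sketch in Python (the packet contents are invented for illustration):

```python
def xor_bytes(a, b):
    # XOR two equal-length packets byte by byte
    return bytes(x ^ y for x, y in zip(a, b))

def encode(packets):
    # append one parity packet: the XOR of all data packets
    parity = packets[0]
    for p in packets[1:]:
        parity = xor_bytes(parity, p)
    return packets + [parity]

def recover(received):
    # 'received' holds any m of the n = m + 1 packets;
    # XORing them all reproduces the one missing packet
    missing = received[0]
    for p in received[1:]:
        missing = xor_bytes(missing, p)
    return missing

message = [b"pkt1", b"pkt2", b"pkt3"]    # m = 3 data packets
code = encode(message)                   # n = 4 packets on the wire
survivors = [code[0], code[2], code[3]]  # packet b"pkt2" was lost
assert recover(survivors) == b"pkt2"
```

This code tolerates only a single erasure; the Cauchy-based schemes below generalize the idea to arbitrary n and m.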

In our group we focus on variants of the so-called Cauchy-based coding schemes and on some related coding schemes.
The Cauchy coding scheme is described in the paper "An XOR-Based Erasure-Resilient Coding Scheme" by J. Blömer, M. Kalfane, R. Karp, M. Karpinski, M. Luby and D. Zuckerman; there, Cauchy matrices are used to construct the generator matrix of the code. We use the Cauchy coding scheme in our implementations.

The Cauchy Coding Scheme


Let's examine the process of creating a coding for a message M = (M_1, ..., M_m) containing m packets.
We generate a code C = (C_1, ..., C_n) containing n packets, using a matrix G derived from a Cauchy matrix as generator matrix, s.t.

  C_j = sum_{i=1..m} G_{j,i} * M_i   for j = 1, ..., n,

where all arithmetic is performed over a finite field GF(2^L). The Cauchy construction guarantees that every m x m submatrix of G is invertible.

Because we use an MDS code, we can guarantee that the recovery of the message M is possible if at least m of the n packets of code C arrive at the receiver.
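The scheme can be sketched in a few lines of Python. The paper works over GF(2^L); for simplicity this sketch uses the prime field Z_p with p = 65537 instead, which preserves the key Cauchy property (every square submatrix of a Cauchy matrix is invertible). All parameters below are illustrative:

```python
P = 65537  # prime, so Z_P is a field; symbols fit in 16 bits

def inv(a):
    # modular inverse via Fermat's little theorem
    return pow(a, P - 2, P)

def cauchy_matrix(n, m):
    # entries 1/(x_i + y_j) with the x_i, y_j pairwise distinct
    # and x_i + y_j != 0 mod P, so any m rows form an invertible matrix
    xs = list(range(n))
    ys = list(range(n, n + m))
    return [[inv((x + y) % P) for y in ys] for x in xs]

def encode(G, msg):
    # C_j = sum_i G[j][i] * M_i mod P
    return [sum(g * s for g, s in zip(row, msg)) % P for row in G]

def solve(A, b):
    # Gaussian elimination over Z_P: solve A x = b for x
    m = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(m):
        piv = next(r for r in range(col, m) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        iv = inv(M[col][col])
        M[col] = [v * iv % P for v in M[col]]
        for r in range(m):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [(v - f * w) % P for v, w in zip(M[r], M[col])]
    return [M[r][m] for r in range(m)]

msg = [1234, 42, 999, 7]                         # m = 4 data symbols
G = cauchy_matrix(7, 4)                          # n = 7 code symbols
code = encode(G, msg)
received = [(i, code[i]) for i in (0, 2, 5, 6)]  # any 4 of 7 survive
rows = [G[i] for i, _ in received]
vals = [v for _, v in received]
assert solve(rows, vals) == msg                  # message recovered
```

The receiver only needs to know which packets arrived (their indices select the rows of G) and then solves one m x m linear system to recover the message.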

Further Enhancement By Priority Encoding

The Cauchy coding scheme described above provides an efficient method for erasure resilient transmission (ERT) over packet-oriented networks. The price for avoiding packet losses is an increase in network traffic, because the redundant data must be added to the original information.

In practice there are several reasons to keep this redundancy as small as possible. First, on both the sender's and the receiver's side every additional packet means additional work: the sender has to create and transmit the redundant data, and the receiver has to receive it and separate it from the information needed for decoding the message. Second, in many networks (e.g. the Internet) a large share of packet losses results from congestion, so adding more traffic to compensate for problems probably caused by too much traffic is a technique that should be handled very carefully.
However, the most important reason is the unpredictable character of losses in the Internet. We cannot know in advance which part of the packets will be lost, and therefore we do not know how much redundancy is necessary to transmit the whole message without losses. For multimedia applications our goal is graceful degradation: transmission quality should degrade smoothly with increasing loss rates. Thus at low loss rates a very high transmission quality can be realized, while at high loss rates a certain minimal quality of the received data can still be guaranteed.

In order to achieve graceful degradation we use priority encoding transmission, which is based on variable-redundancy coding. The main idea is that, instead of viewing a message as a monolithic chunk of data, we divide it into (many) portions that can be ranked by their importance for the receiver. Each of these portions is then encoded and decoded separately, with redundancy corresponding to its importance. Obviously, the larger the number of portions into which we split the data, the smoother the dependence between loss rate and transmission quality. We concentrate on data compression methods that allow us to split the compressed data into a number of portions according to their importance for the receiver and at the same time provide high compression rates. Examples of such methods are the DCT-based compression formats MPEG and JPEG and wavelet-based image and video compression algorithms.
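The effect of variable redundancy can be sketched with a small simulation. Each priority class i is protected by an MDS code with parameters (m_i, n_i), so it decodes exactly when at least m_i of its n_i packets arrive; the class names and sizes below are invented for illustration:

```python
import random

random.seed(1)

# priority classes: (name, m_i data packets, n_i code packets);
# higher priority gets more redundancy (hypothetical numbers)
classes = [("base layer", 4, 10),
           ("detail layer", 8, 12),
           ("fine detail", 10, 11)]

def survives(n, loss_rate):
    # count how many of n packets survive independent erasures
    return sum(random.random() > loss_rate for _ in range(n))

for loss in (0.1, 0.4):
    # a class decodes iff at least m of its n packets arrive (MDS property)
    decoded = [name for name, m, n in classes
               if survives(n, loss) >= m]
    print(f"loss rate {loss}: decodable layers = {decoded}")
```

At low loss rates all layers typically decode; as the loss rate grows, the lightly protected detail layers drop out first while the heavily protected base layer survives, which is exactly the graceful degradation described above.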

What We Are Going To Do

In our group we want to focus on several applications, the most important of which are:

Previous Work

Resources on ERT

See our ERT link page.