Review: Receiver-driven Layered Multicast

From: Fareha Shafique <fareha.s_at_gmail.com>
Date: Mon, 30 Oct 2006 21:46:07 -0500

The paper describes a framework for the transmission of layered signals over
heterogeneous networks using receiver-driven adaptation, since source-based
rate adaptation performs poorly in a heterogeneous multicast environment with
no single target rate. The proposed approach extends the multiple-group
framework with a rate-adaptation protocol called Receiver-driven Layered
Multicast (RLM), which combines a layered compression algorithm with a layered
transmission scheme: receivers subscribe to a subset of IP multicast groups
(layers) according to the available link capacity. RLM allows receivers to
adapt both to the static heterogeneity of link bandwidths and to congestion.
RLM works within the existing IP model and requires no new machinery in the
network. The authors make the following assumptions in their network model:
1. only best-effort, multipoint packet delivery is available (no order
guarantee)
2. the delivery efficiency of IP Multicast
3. group-oriented communication
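The group-per-layer subscription that RLM builds on maps directly onto
standard IP multicast socket options. A minimal sketch of how a receiver
might add or drop a layer (the group addresses and function names here are
my own illustrative choices, not anything the paper specifies):

```python
import socket
import struct

def membership_request(group_addr: str, iface_addr: str = "0.0.0.0") -> bytes:
    """Pack an ip_mreq structure for IP_ADD_MEMBERSHIP / IP_DROP_MEMBERSHIP."""
    return struct.pack("4s4s", socket.inet_aton(group_addr),
                       socket.inet_aton(iface_addr))

def add_layer(sock: socket.socket, group_addr: str) -> None:
    # Joining the group corresponds to RLM's "add a layer".
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
                    membership_request(group_addr))

def drop_layer(sock: socket.socket, group_addr: str) -> None:
    # Leaving the group corresponds to RLM's "drop a layer" on congestion.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_DROP_MEMBERSHIP,
                    membership_request(group_addr))

# Hypothetical layout: each layer of a source is carried on its own group.
LAYER_GROUPS = ["224.2.0.1", "224.2.0.2", "224.2.0.3"]
```

Because adding and dropping layers are ordinary group joins and leaves, the
routers need no RLM-specific support, which is exactly the "no new machinery"
property claimed above.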
Furthermore, to handle multiple, simultaneous sources, RLM assumes that
receivers specify group membership on a per-source basis. The authors then
explain the RLM protocol. The source is not significantly modified; it only
transmits each layer of its signal on a separate multicast group. The
receiver then adapts by joining a group (adding a layer in a "join
experiment") when there is spare capacity and leaving a group (dropping a
layer) on congestion. In this way, the receiver searches for the optimal
level of subscription: it joins a higher-layer group, and if congestion
occurs (indicated by packet loss), it falls back to a lower level. A
learning algorithm limits the transient congestion that results from joining
higher layers, which may degrade the quality of the delivered signal.
Furthermore, RLM makes use of shared learning among the receivers in a
group, so as to limit the number of receivers experimentally joining a
higher layer. The paper describes the 4-state state diagram (stable,
measure, hysteresis, drop) along with the timers and parameters used in the
algorithm.
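The join-experiment search described above can be sketched as a toy
simulation. This is my own illustrative reconstruction, not the paper's
algorithm: the timer value, the exponential backoff, and the simple
"loss whenever oversubscribed" link model are assumptions, and the full
stable/measure/hysteresis/drop state machine and shared learning are
omitted; only the probe-and-back-off idea is shown.

```python
class RlmReceiver:
    """Toy receiver-side adaptation loop in the spirit of RLM: add a layer
    (join experiment) when the join timer expires with no loss; drop the
    top layer when loss signals congestion; back off the timer when a join
    experiment fails."""

    def __init__(self, max_layers: int, join_timer: int = 4):
        self.layers = 1                 # the base layer is always subscribed
        self.max_layers = max_layers
        self.join_timer = join_timer    # ticks to wait between join experiments
        self.idle_ticks = 0
        self.just_joined = False

    def tick(self, loss_detected: bool) -> int:
        if loss_detected:
            if self.layers > 1:
                self.layers -= 1        # congestion: drop the top layer
            if self.just_joined:
                self.join_timer *= 2    # failed join experiment: back off
            self.just_joined = False
            self.idle_ticks = 0
        elif self.idle_ticks >= self.join_timer and self.layers < self.max_layers:
            self.layers += 1            # join experiment on apparent spare capacity
            self.just_joined = True
            self.idle_ticks = 0
        else:
            self.just_joined = False
            self.idle_ticks += 1
        return self.layers


def simulate(capacity: int, max_layers: int, steps: int) -> int:
    """Drive a receiver over a link that carries `capacity` layers loss-free."""
    rx = RlmReceiver(max_layers)
    for _ in range(steps):
        loss = rx.layers > capacity     # oversubscription causes loss
        rx.tick(loss)
    return rx.layers
```

With the backoff, the receiver settles at the highest loss-free layer and
probes beyond it less and less often, which is qualitatively the behavior
RLM's timers are designed to produce.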
The authors then carry out a set of simulations varying the network
topology, link bandwidths and latencies, the number and rate of transmission
layers and the placement of senders and receivers. They evaluate the
worst-case loss rate over varying time scales and the throughput based on
the time it takes the system to converge to the optimal operating point. The
results can be summarized as follows:
- the impact of packet loss is roughly proportional to the link latency in a
single-source, single-receiver topology.
- the long-term worst-case loss rates are under 1% and the short-term loss
rates are between 10% and 20%.
- the worst-case loss rates are independent of the session size and the
long-term loss rate is about 1% even for large sessions.
- as the number of receivers grows, the convergence time increases.
- the algorithm works well even in the presence of large sets of receivers
with different bandwidth constraints.
- RLM does not guarantee fair allocation of bandwidth when a number of
independent single-source/single-receiver sessions share a common link.
The paper then discusses some network implications: all users must
cooperate, performance depends on the join/leave latencies, and bandwidth
allocation raises fairness questions. The authors briefly describe a layered
source coder they developed, followed by related work and future work.
The paper is well written. However, in the simulations the authors do not
show how varying their algorithm's parameters affects performance, and they
always assume a constant packet size of 1 KB. Furthermore, their application
design is influenced by the RLM design and vice versa: the authors advocate
designing sub-components jointly rather than in isolation. I feel this does
not fit well with the concept of porting applications to different
environments.
Received on Mon Oct 30 2006 - 21:46:17 EST
