(no subject)

From: Jin Jin <jinjin_at_eecg.toronto.edu>
Date: Fri, 29 Sep 2006 23:51:57 -0400

Summary of the paper

This paper proposes a new congestion control mechanism called RED-PD. It
uses the packet drop history at the router to detect high-bandwidth flows
in times of congestion and preferentially drops packets from these flows,
thereby controlling their throughput. Based on measurement and analysis,
the authors conclude that it significantly improves performance while
requiring only a small amount of state.

RED-PD's approach is to keep state only for high-bandwidth flows, because
a small fraction of flows is responsible for most of the bytes sent.
Identifying and preferentially dropping packets from this small number of
flows is therefore a powerful approach, since controlling the throughput
of these flows results in a significant decrease in the ambient drop rate.

RED-PD has two components: identifying high-bandwidth flows and
controlling the bandwidth obtained by these flows. RED-PD uses the RED
drop history to identify high-bandwidth flows, because the drops can be
considered a reasonably random sample of the incoming traffic and the
drop history represents flows that have already been sent congestion
signals. Once high-bandwidth flows are detected, they are monitored.
Preferential dropping is done by a pre-filter placed in front of the
output queue: packets from monitored flows are dropped in the pre-filter
with a probability that depends on the flow's excess sending rate, while
unmonitored traffic is put directly into the output queue. In this way,
the mechanism controls the throughput of the aggressive flows and thus
the congestion. A rough sketch of these two steps is given below.
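To make the mechanism more concrete, here is a minimal sketch of the
identification step and the pre-filter, assuming a simple sliding window
of drop-history intervals and a per-flow dropping probability; the class
and method names, the thresholds, and the data structures are my own
illustrative assumptions, not the paper's exact algorithm.

import random
from collections import defaultdict

class RedPdSketch:
    # Hypothetical sketch of RED-PD's identification and pre-filter steps.
    def __init__(self, history_intervals=5, hit_threshold=3):
        # Recent RED drop history, one set of flow ids per interval (assumed form).
        self.drop_history = [set() for _ in range(history_intervals)]
        self.hit_threshold = hit_threshold
        # Per-flow dropping probability used by the pre-filter.
        self.drop_prob = defaultdict(float)
        self.monitored = set()

    def record_red_drop(self, flow_id):
        # A RED drop is a roughly random sample of incoming traffic;
        # remember which flow it came from in the current interval.
        self.drop_history[-1].add(flow_id)

    def end_interval(self):
        # Identify flows that appear in enough recent drop-history intervals.
        for flow_id in set().union(*self.drop_history):
            hits = sum(flow_id in interval for interval in self.drop_history)
            if hits >= self.hit_threshold:
                self.monitored.add(flow_id)
        # Slide the history window.
        self.drop_history.pop(0)
        self.drop_history.append(set())

    def prefilter(self, flow_id):
        # Pre-filter in front of the output queue: monitored flows are
        # dropped with their per-flow probability, unmonitored traffic
        # goes straight to the RED output queue.
        if flow_id in self.monitored and random.random() < self.drop_prob[flow_id]:
            return "drop"
        return "enqueue"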

The authors provide an evaluation and analysis of RED-PD. The
measurements indicate that high-sending-rate flows that escape
identification in a particular round are identified soon in a later
round, because the identification probability associated with them is
high. The results also show that it is possible to approximate fairness
among flows by iteratively increasing and decreasing the pre-filter
dropping probability for the high-bandwidth flows.
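Building on the sketch above, the iterative adjustment could look roughly
like the following; the increase/decrease rule, the step size, and the
condition for releasing a flow are assumptions for illustration, not the
exact controller described in the paper.

def adjust_drop_prob(redpd, flow_id, still_identified, step=0.05, max_prob=1.0):
    # Hypothetical sketch: if the monitored flow keeps showing up in the
    # drop history (i.e., it is still sending above its share), raise its
    # pre-filter dropping probability; otherwise lower it, and stop
    # monitoring the flow once the probability decays to zero.
    if still_identified:
        redpd.drop_prob[flow_id] = min(max_prob, redpd.drop_prob[flow_id] + step)
    else:
        redpd.drop_prob[flow_id] = max(0.0, redpd.drop_prob[flow_id] - step)
        if redpd.drop_prob[flow_id] == 0.0:
            redpd.monitored.discard(flow_id)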

Finally, the authors present their conclusions and future work.

Points in favour or against

The paper is generally well written, with a fine and clear presentation.
It clearly improves on RED. The main contribution of this paper is the
demonstration that a small fraction of flows is responsible for most of
the bytes sent, so throughput can be controlled by monitoring only the
high-bandwidth flows and limiting their bandwidth. On this basis, the
authors propose a new mechanism and provide measurement and analysis to
support this viewpoint. However, as mentioned in the paper, the
unresponsiveness test needs further investigation. Moreover, there is
also the issue of processing cost: storing the drop history and computing
the pre-filter decisions both add overhead at the router. Does the
mechanism still make sense once these problems are considered?