
From: Jin Jin <jinjin_at_eecg.toronto.edu>
Date: Mon, 25 Sep 2006 17:37:26 -0400

Summary of the paper

This paper presents and analyzes congestion avoidance and control at
the TCP layer. In the author's view, the main cause of congestion
collapse lies not in the protocols themselves but in protocol
implementations. The paper presents several algorithms, rooted in the
idea of achieving network stability by forcing the transport
connection to obey a "packet conservation" principle, that can be
used to avoid congestion collapse and make networks work better.

The main contribution of the paper is to introduce several algorithms
into TCP and to analyze their effect and performance by examining,
and treating, the three ways in which packet conservation can fail.

The first algorithm is "slow-start", which gradually increases the
amount of data in transit. This starts the sender's "clock", which is
very important for the retransmission mechanism. It is easy to
implement and has a negligible effect on performance.
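
A minimal sketch of the window growth this produces, assuming the
window is counted in whole segments and that each outstanding segment
is acknowledged once per round trip (the names here are illustrative,
not the paper's own code):

    #include <stdio.h>

    /* Sketch: during slow-start the congestion window (cwnd, in
     * segments) grows by one segment per ACK, so it roughly
     * doubles every round-trip time. */
    int main(void)
    {
        unsigned cwnd = 1;
        for (int rtt = 1; rtt <= 5; rtt++) {
            printf("RTT %d: cwnd = %u segments\n", rtt, cwnd);
            cwnd *= 2;    /* each of cwnd ACKs adds one segment */
        }
        return 0;
    }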

The second topic is round-trip timing. A failing retransmit timer
causes the sender to inject extra copies of packets into an already
congested network, making the loss worse. The mistake in earlier
implementations was failing to estimate the variation of the
round-trip time from which the retransmit timeout is calculated; the
timeout should be widened to account for that variation. The paper
provides a cheap method for estimating this variation that
essentially eliminates spurious retransmissions. For the other
mistake, the backoff after a retransmit, only one scheme has any hope
of working: exponential backoff.
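
A sketch of the mean-plus-deviation timer estimator the paper
advocates, in floating point for clarity (the paper itself uses
scaled integer arithmetic; the gains 1/8 and 1/4 and the factor of 4
are the commonly cited values, and initialization from the first
sample is omitted):

    #include <math.h>

    static double srtt, rttvar, rto = 3.0;     /* seconds */

    void rtt_sample(double m)     /* m: a measured round-trip time */
    {
        double err = m - srtt;
        srtt   += err / 8.0;                    /* smoothed mean, gain 1/8 */
        rttvar += (fabs(err) - rttvar) / 4.0;   /* smoothed deviation, gain 1/4 */
        rto = srtt + 4.0 * rttvar;              /* widen timeout by the variation */
    }

    void rtt_timer_expired(void)
    {
        rto *= 2.0;               /* exponential backoff after a retransmit */
    }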

Finally, the paper introduces and analyzes congestion avoidance,
implemented by manipulating the congestion window. Although
congestion avoidance is almost always used together with slow-start,
the two are different and independent. The congestion avoidance
strategy has two components: first, the network must be able to
signal the transport endpoints that congestion is occurring; second,
the endpoints' policy follows directly from a first-order time-series
model of the network. The network load is measured by the average
queue length over fixed intervals of some appropriate length.
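
A compact sketch of the resulting policy, combining slow-start with
the additive-increase/multiplicative-decrease rule and using packet
loss as the congestion signal (illustrative names; the window is kept
in segments, whereas real stacks track bytes):

    static double cwnd = 1.0, ssthresh = 64.0;   /* in segments */

    void on_ack(void)
    {
        if (cwnd < ssthresh)
            cwnd += 1.0;           /* slow-start region: doubles per RTT */
        else
            cwnd += 1.0 / cwnd;    /* avoidance: about +1 segment per RTT */
    }

    void on_loss(void)             /* loss is the congestion signal */
    {
        ssthresh = cwnd / 2.0;     /* multiplicative decrease */
        if (ssthresh < 2.0)
            ssthresh = 2.0;
        cwnd = 1.0;                /* reopen the window with slow-start */
    }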

In the last part of the paper, the author outlines future work on the
gateway side, including fair sharing of capacity, congestion
detection, and so on.

Points in favour or against

The paper is generally well written, with a clear presentation. As is
well known, TCP provides a reliable transmission service even though
the underlying IP layer provides only unreliable datagram delivery.
To ensure reliable transmission, TCP must deal with congestion,
because most packet loss is caused by congestion. According to the
paper, the core principle is to obey "packet conservation", enforced
through retransmission, the congestion window, the retransmit timer,
and other mechanisms. The algorithms this paper proposes have become
classics at the TCP layer, and they are still fairly good at dealing
with congested conditions on the Internet today. For a paper from
1988, the author also provides many measurement results to support
the claims, which strengthens the paper and makes it convincing.
However, the author does not describe some of the other algorithms
listed in the introduction, which I think is a small flaw.