(no subject)

From: Jin Jin <jinjin_at_eecg.toronto.edu>
Date: Wed, 4 Oct 2006 12:52:42 -0400

Summary of the paper

This paper proposes improvements to TCP's congestion control
mechanism. The main contribution is a set of techniques for improving
TCP. Experiments both on the Internet and in a simulator show that
this approach, called Vegas, achieves better throughput and fewer
retransmitted packets than Reno.

The modifications consist of three main parts:

1: Introducing a new timeout mechanism

In the original mechanism, the interval between sending a segment
that is lost and the timeout that triggers its retransmission is
generally much longer than necessary. Although the fast retransmit
and fast recovery mechanisms are very successful, some analyses
indicate that eliminating the dependency on coarse-grained timeouts
would increase throughput. Vegas uses a more accurate RTT estimate to
decide whether to retransmit in two situations. The goal of the new
retransmission mechanism is not just to reduce the time to detect
lost packets from the third duplicate ACK to the first or second
duplicate ACK, but to detect lost packets even when no second or
third duplicate ACK ever arrives.
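The check described above can be sketched as follows. This is a minimal illustration, not the paper's actual code: on a duplicate ACK, Vegas compares the time elapsed since the oldest unacknowledged segment was sent against a fine-grained RTT estimate, and retransmits immediately if the estimate has been exceeded, without waiting for a third duplicate ACK. The class and method names are invented for this sketch.

```python
# Hypothetical sketch of Vegas' fine-grained retransmit check.
class VegasRetransmitCheck:
    def __init__(self, rtt_estimate):
        self.rtt_estimate = rtt_estimate  # fine-grained RTT estimate (seconds)
        self.send_times = {}              # seq -> timestamp of transmission

    def on_send(self, seq, now):
        """Record the fine-grained clock reading when a segment is sent."""
        self.send_times[seq] = now

    def on_duplicate_ack(self, oldest_unacked_seq, now):
        """Return True if the segment should be retransmitted now,
        even on the first or second duplicate ACK."""
        sent_at = self.send_times.get(oldest_unacked_seq)
        if sent_at is None:
            return False
        # Retransmit as soon as the elapsed time exceeds the RTT estimate.
        return (now - sent_at) > self.rtt_estimate
```

The same timestamp comparison is also applied to the first one or two ACKs after a retransmission, which lets Vegas catch further losses for which no duplicate ACKs will arrive at all.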

2: Novel approach to congestion avoidance

Reno uses the loss of segments as a signal that there is congestion
in the network, but it has no mechanism to detect the incipient
stages of congestion, before losses occur, so that they can be
prevented. Vegas instead compares the measured throughput rate with
an expected throughput rate. The goal is to maintain the right amount
of extra data in the network. Vegas' congestion avoidance actions are
based on changes in the estimated amount of extra data in the
network, not only on dropped segments.
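The comparison above can be sketched as a simple window-adjustment rule. This is a simplified illustration of the idea, not the authors' implementation: the expected rate is the window divided by the base (minimum) RTT, the actual rate is the window divided by the measured RTT, and their difference estimates the extra data queued in the network. ALPHA and BETA correspond to the paper's two thresholds of 1 and 3 segments.

```python
# Simplified sketch of Vegas congestion avoidance (per-RTT adjustment).
ALPHA = 1  # lower threshold on extra segments in the network
BETA = 3   # upper threshold on extra segments in the network

def vegas_adjust(cwnd, base_rtt, measured_rtt):
    """Return the new congestion window (in segments) for the next RTT."""
    expected = cwnd / base_rtt       # rate if the path were uncongested
    actual = cwnd / measured_rtt     # rate actually achieved
    # (expected - actual) * base_rtt estimates the extra segments queued.
    diff = (expected - actual) * base_rtt
    if diff < ALPHA:
        return cwnd + 1              # too little extra data: grow linearly
    elif diff > BETA:
        return cwnd - 1              # too much extra data: shrink linearly
    return cwnd                      # within [ALPHA, BETA]: hold steady
```

Note the linear increase and decrease: unlike Reno's multiplicative backoff on loss, Vegas adjusts gently because it is reacting to early queue buildup, not to an actual drop.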

3: Modified slow-start mechanism

During the initial slow start, there is no a priori knowledge of the
available bandwidth that could be used to stop the exponential growth
of the window, and no knowledge of a safe window size when the
connection starts. What is needed is a way to find a connection's
available bandwidth without incurring losses. Vegas allows
exponential growth only every other RTT; in between, the congestion
window stays fixed so that a valid comparison of the expected and
actual rates can be made. This mechanism is highly successful at
preventing losses during the initial slow-start period.
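The alternating-RTT behaviour can be sketched as below. This is an illustrative simplification: the window doubles only on every other RTT, stays fixed in between so the rate comparison is made on a stable window, and slow start ends once the estimated extra data in the network crosses a threshold. The GAMMA value here is an assumption for the sketch, not a figure from the paper.

```python
# Sketch of Vegas' modified slow start: grow every other RTT, measure
# on the RTTs in between, and exit once extra queued data is detected.
GAMMA = 1  # exit threshold (extra segments); illustrative value

def vegas_slow_start(cwnd, rtt_count, extra_segments):
    """Return (new_cwnd, still_in_slow_start) for the next RTT.
    extra_segments is the estimated extra data in the network, derived
    from the expected-vs-actual rate comparison."""
    if extra_segments > GAMMA:
        return cwnd, False          # incipient congestion: leave slow start
    if rtt_count % 2 == 0:
        return cwnd * 2, True       # growth RTT: double the window
    return cwnd, True               # measurement RTT: hold the window fixed
```

Holding the window fixed on alternate RTTs is what makes the expected/actual comparison valid: a window that changes mid-measurement would distort the actual-rate sample.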

Points in favour or against

The paper is well written, with a clear presentation, and it clearly
improves on previous work. It focuses mainly on the TCP congestion
control problem. However, several problems remain unsolved.

1: Although the authors discuss the fairness problem, the paper does
not solve it very well.

2: The authors spend too much space describing their tools.

3: For the two threshold values, although the authors explain their
meaning, they do not explain why the values 1 and 3 were chosen.
Moreover, the authors do not evaluate these two parameters or compare
them against other possible values.

4: The authors do not clearly explain the equation for the expected
throughput. Why use this equation?

5: The authors should use control theory to demonstrate stability and
to show why these modifications achieve better performance than Reno;
that is, they should analyze the problems more theoretically.

6: After reading papers about congestion control in the middle of the
network (gateways, routers), I believe that TCP alone is not enough
to control congestion.

7: The authors do not discuss the impact of the coarse-grained
timeout. Does it affect overall performance? How is this time
calculated?
