Review: TCP Vegas: End to End Congestion Avoidance on a Global Internet

From: Fareha Shafique <fareha_at_eecg.toronto.edu>
Date: Wed, 4 Oct 2006 11:15:09 -0400

The paper proposes TCP Vegas, a new implementation of TCP that does not
change the protocol specification but modifies the following three mechanisms:
1) Retransmission mechanism:
   Reno has two mechanisms: a) Retransmit timeout, based on the RTT and
variance estimates, which results in long delays since the timeout is only
checked at coarse, fixed intervals. b) Fast retransmit and fast recovery,
which retransmits not only when a timeout occurs but also when three
duplicate ACKs are received by the sender (duplicate ACKs indicate the
receiver is still getting segments despite some loss). This prevents most
of the coarse-grained timeouts, but the authors argue that throughput can
be increased further.
   Vegas reads and records the system clock when each segment is sent and
again when its ACK arrives, and uses these values to compute a fine-grained
RTT estimate. When a duplicate ACK is received, it checks whether (current
time - recorded send timestamp) exceeds the timeout value; if so, it
retransmits immediately without waiting for a third duplicate ACK.
Furthermore, when the first or second non-duplicate ACK after a
retransmission is received, Vegas checks whether the segment sent before
the retransmitted one has timed out (and retransmits it if needed). That
is, Vegas treats the receipt of certain ACKs as a signal to check for
timeouts, in the hope of detecting lost packets more quickly.
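This duplicate-ACK check can be sketched as follows (a minimal
illustration; the function and variable names are mine, not the paper's):

```python
def should_retransmit_on_dup_ack(now, send_timestamp, timeout):
    """On a duplicate ACK, Vegas retransmits immediately if the
    fine-grained clock shows the segment has already timed out,
    rather than waiting for a third duplicate ACK."""
    return (now - send_timestamp) > timeout

# A segment sent at t=1.0s with a 0.5s timeout: a duplicate ACK at
# t=1.6s triggers an immediate retransmission; one at t=1.2s does not.
print(should_retransmit_on_dup_ack(1.6, 1.0, 0.5))  # True
print(should_retransmit_on_dup_ack(1.2, 1.0, 0.5))  # False
```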
2) Congestion avoidance mechanism:
   In Reno's reactive mechanism, packet loss signals congestion. It
increases the congestion window until packet loss occurs and then
decreases it, thereby creating its own losses and losses for other
connections.
   Previous proactive schemes were based on RTTs, window sizes, and
throughput. Vegas looks at the throughput (sending) rate and avoids
congestion by limiting the number of buffers the connection occupies at
the bottleneck. It calculates the BaseRTT (the RTT of the uncongested
path, normally the minimum of all measured RTTs), the Expected throughput
(current congestion window size / BaseRTT), the Actual sending rate (the
number of bytes transmitted between sending a distinguished segment and
receiving its ACK, divided by that segment's RTT), and the difference
between the expected and actual rates. Vegas tries to keep this difference
between two thresholds, Alpha=1 and Beta=3 (diff < Alpha --> increase the
window linearly; diff > Beta --> decrease it linearly).
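The window adjustment above can be sketched as follows (a hedged sketch,
not the paper's code; here the rate difference is multiplied by BaseRTT so
that Alpha and Beta are measured in segments queued at the bottleneck, and
all names are illustrative):

```python
def vegas_window_update(cwnd, base_rtt, actual_rate, alpha=1, beta=3):
    """One congestion-avoidance step: compare the Expected rate
    (cwnd / BaseRTT) with the measured Actual rate and adjust the
    congestion window linearly."""
    expected_rate = cwnd / base_rtt
    # Rate surplus times BaseRTT ~ extra segments queued in the network.
    diff = (expected_rate - actual_rate) * base_rtt
    if diff < alpha:
        return cwnd + 1   # network underused: grow linearly
    if diff > beta:
        return cwnd - 1   # too much queued data: shrink linearly
    return cwnd           # between Alpha and Beta: hold steady

# With BaseRTT = 1, a window of 10 segments and an actual rate of 10
# means no queued data (diff = 0 < Alpha), so the window grows to 11.
print(vegas_window_update(10, 1.0, 10.0))  # 11
```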
3) Slow-start mechanism:
   Reno sends only one segment when starting or restarting after a loss;
then, as ACKs are received, an extra segment is sent in addition to the
amount of data ACK'd. Initially, this exponential increase continues until
a loss occurs, but on a restart slow-start is only used until the
congestion window reaches a threshold (half the window size before the
restart).
   Vegas incorporates its congestion detection mechanism into slow-start.
It allows exponential growth only every other RTT and keeps the congestion
window fixed in between so that the expected and actual rates can be
compared. When the actual rate falls one router buffer's worth below the
expected rate, slow-start stops and linear increase begins.
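The modified slow-start can be sketched one RTT at a time like this
(illustrative only; `rtt_number` and the boolean flag are assumptions of
mine, not the paper's variables):

```python
def vegas_slow_start_step(cwnd, rtt_number, actual_below_expected):
    """One RTT of Vegas slow-start: double the window only every
    other RTT, holding it fixed in between so the expected and
    actual rates can be compared, and switch to linear growth once
    the actual rate drops a router buffer below the expected rate."""
    if actual_below_expected:
        return cwnd + 1        # leave slow-start: linear increase
    if rtt_number % 2 == 0:
        return cwnd * 2        # exponential growth this RTT
    return cwnd                # hold fixed to measure rates

# Starting from cwnd = 2: grow, then hold while rates are compared.
print(vegas_slow_start_step(2, 0, False))  # 4
print(vegas_slow_start_step(4, 1, False))  # 4
```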
The paper then evaluates the performance of Vegas (over the Internet and
in several simulated scenarios) and compares it to that of Reno.
The proposed enhancements produce significant improvements in throughput
as well as a reduction in the number of losses. Furthermore, Vegas does
not adversely affect Reno's throughput. Vegas is also less sensitive than
Reno to changes in network parameters. Finally, the paper briefly argues
that Vegas is no less fair than Reno, that it is comparably stable since
its mechanisms are more conservative than Reno's, and that Vegas does not
adversely affect latency.
The paper explains all of its modifications very clearly. However, despite
the accompanying explanations, I found the graphs somewhat unclear.