Congestion Avoidance and Control

From: <nadeem.abji_at_utoronto.ca>
Date: Tue, 26 Sep 2006 02:05:05 -0400

Paper Review: Congestion Avoidance and Control

This paper was written at a time of rapid growth of the Internet that
resulted in severe congestion. It argues that the cause of this
congestion, which at times led Internet gateways to drop 10% of
incoming traffic, lies in transport protocol implementations. The
work was motivated by a series of congestion collapses that led the
authors to investigate whether 4.3BSD TCP could be tuned to work
under loaded network conditions.

Seven new algorithms were added to the BSD TCP implementation, and
testing showed that the new version performed well in the face of
congested conditions on the Internet. The paper focuses on the
following five new algorithms, along with the reasoning behind each:

1. round-trip-time variance estimation
2. exponential retransmit timer backoff
3. slow-start
4. more aggressive receiver ack policy
5. dynamic window sizing on congestion

The new algorithms all stem from a single principle called
‘conservation of packets’, which requires that a new packet not be
put into the network until an old packet leaves. Obeying this
principle should, in theory, result in a system that is robust in the
face of congestion.
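
To make the principle concrete, here is a minimal sketch (in Python,
with hypothetical names that are not from the paper) of an
ACK-clocked sender: a new segment is injected only when an ACK
reports that an old one has left the network.

    # Sketch of 'conservation of packets' via ACK clocking: a sender
    # in equilibrium transmits a new segment only when an ACK reports
    # that a previously sent segment has left the network.
    class AckClockedSender:
        def __init__(self, window):
            self.window = window    # segments allowed in flight
            self.in_flight = 0      # segments currently in the network

        def can_send(self):
            return self.in_flight < self.window

        def on_send(self):
            self.in_flight += 1     # a new packet enters the network

        def on_ack(self):
            self.in_flight -= 1     # an old packet has left the network;
                                    # the ACK 'clocks out' the next segment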

The paper is organized around three ways in which the ‘conservation
of packets’ principle was being violated, showing how the new
algorithms resolve each problem.

(1) Connection never reaches equilibrium:
A slow-start algorithm is introduced to gradually build up the amount
of data in transit until the connection reaches equilibrium. The
algorithm prevents high-bandwidth hosts from overloading slower
networks and causing them to fail, yet opens the window quickly
enough to have a negligible effect on performance.
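
A rough sketch of slow start as the paper describes it: the
congestion window starts at one segment and grows by one segment per
ACK, which doubles it every round trip (the names and the ssthresh
cap below are illustrative, not the paper's code).

    # Sketch of slow start: cwnd opens by one segment per ACK
    # received, doubling the window every round-trip time until it
    # reaches the threshold.
    def slow_start_rounds(ssthresh_segments, rounds):
        cwnd = 1                      # start with a single segment
        trace = [cwnd]
        for _ in range(rounds):
            acks = cwnd               # one ACK per segment in the window
            for _ in range(acks):
                if cwnd < ssthresh_segments:
                    cwnd += 1         # +1 per ACK => doubling per RTT
            trace.append(cwnd)
        return trace

    print(slow_start_rounds(ssthresh_segments=64, rounds=6))
    # [1, 2, 4, 8, 16, 32, 64]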

(2) Injecting a new packet before an old packet has exited:
This situation occurs when the retransmit timer fails to operate
correctly, so a copy of a packet is retransmitted while the original
is still in flight. The paper states that this is usually caused by
failing to estimate the variation in the round-trip time, which grows
quickly as the network loads up. Another source of error is an
improper backoff after a retransmit: exponential backoff is the right
choice because a network can be approximated by a linear system.
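
The mean-plus-deviation estimator the paper proposes can be sketched
as follows (the gains of 1/8 and 1/4 and the factor of 4 on the
deviation are the values commonly used in practice, not necessarily
the paper's exact constants).

    # Sketch of the RTT estimator: track a smoothed RTT and a smoothed
    # mean deviation, set the retransmit timeout from both, and back
    # the timer off exponentially when a retransmit fails.
    class RetransmitTimer:
        def __init__(self, first_sample):
            self.srtt = first_sample            # smoothed RTT
            self.rttvar = first_sample / 2      # smoothed mean deviation
            self.rto = self.srtt + 4 * self.rttvar

        def on_rtt_sample(self, sample):
            err = sample - self.srtt
            self.srtt += err / 8                # low-pass filter on the mean
            self.rttvar += (abs(err) - self.rttvar) / 4
            self.rto = self.srtt + 4 * self.rttvar

        def on_timeout(self):
            self.rto *= 2                       # exponential retransmit backoff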

(3) Equilibrium can’t be reached due to resource limits along the path:
Congestion in the network is the final reason equilibrium may not be
reached, so a ‘congestion avoidance’ strategy is proposed, with the
retransmit timer serving as the indicator of network congestion. A
multiplicative-decrease, additive-increase scheme is used to adjust
the window size. The reasoning is that it is easy to drive the net
into saturation but hard to recover from over-saturation (the
rush-hour effect), so the sender must back off sharply on congestion
and probe for spare capacity only gradually.
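
In sketch form (window measured in segments, illustrative names), the
window policy described above looks like this:

    # Sketch of the congestion-avoidance window policy: halve the
    # window when a retransmit timeout signals congestion, otherwise
    # add one segment per round trip of successful delivery.
    def on_congestion(cwnd):
        return max(1, cwnd // 2)      # multiplicative decrease: back off fast

    def on_rtt_without_loss(cwnd):
        return cwnd + 1               # additive increase: probe gently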

The paper concludes by outlining future work relating to congestion
control in gateways.

This work is profound in that it finds relatively simple solutions to
congestion on the Internet, a considerable problem. The solutions are
subtle yet clever, and they are easy to implement as they adhere to
the end-to-end argument. The paper is concise and gets right to the
point without being too verbose. By adding figures with detailed
descriptions at the end of the paper and including the finer details
in an appendix, the paper caters to all audiences, those with and
without in-depth knowledge of the TCP protocol.

-- Nadeem Abji