Review: Congestion Avoidance and Control

From: Robert Danek <rdanek_at_sympatico.ca>
Date: Mon, 25 Sep 2006 18:13:17 -0400

Paper: Congestion Avoidance and Control

Name: Robert Danek.
Course: CS2209, Fall '06.

Congestion refers to the situation in which more packets are offered to
a network than it can handle reliably. This is usually a consequence of
the finite queue lengths at gateways and routers within the network:
when too many packets arrive at a gateway or router, some of them must
be dropped. This paper explores congestion on modern networks, why it
occurs, and what can be done to prevent it.

The paper points out that one of the major causes of congestion lies
not in the protocols themselves, but rather in their implementation. One
example of this is the "congestion collapse" that occurred between LBL
and UC Berkeley in October 1986. The problem was due to the 4.3BSD TCP
implementation, as well as to certain TCP parameters not being optimally
tuned.

The authors list seven new algorithms that were added to 4BSD TCP to
deal with congestion. What the first five of these algorithms have in
common is that they are based on the observation that the flow on a TCP
connection should obey a "conservation of packets" principle: a new
packet is not put into the network until an old packet has left it.
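In practice this conservation is exactly what TCP's "ack clocking"
provides: once the pipe is full, the arrival of an acknowledgement is
the sender's signal that a packet has left the network and a new one may
enter. The toy C program below is only a sketch of that idea under my
own assumptions (a fixed window of four packets and a simulated ACK
stream); it is not code from the paper or from BSD.

    #include <stdio.h>

    /* Toy illustration of the "conservation of packets" principle:
     * after the window is full, a new packet enters the network only
     * when an acknowledgement shows that an old one has left.
     * Everything here is a simulation, not real TCP code. */
    int main(void)
    {
        int window = 4;        /* packets the sender may keep in flight */
        int in_flight = 0;
        int sent = 0, acked = 0;

        /* Fill the pipe once. */
        while (in_flight < window) {
            printf("send packet %d\n", ++sent);
            in_flight++;
        }

        /* Steady state: one packet in for each packet out. */
        while (acked < 12) {
            printf("ack for packet %d arrives\n", ++acked);
            in_flight--;                        /* an old packet left */
            printf("send packet %d\n", ++sent); /* a new one may enter */
            in_flight++;
        }
        return 0;
    }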

A network of connections obeying the "conservation of packets"
principle should be able to withstand congestion. However, examples such
as the "congestion collapse" show that the Internet clearly does not
behave this way. The authors point out three possible reasons for this
failure.

The first reason is that a connection fails to reach equilibrium,
typically when it is being started or restarted after a loss. To solve
this, the authors describe a scheme that takes advantage of the
self-clocking nature of TCP. They call it the slow-start algorithm: the
sender maintains a "congestion window" that starts at one packet and is
opened by one packet for each acknowledgement received, until it reaches
the receiver's advertised window.
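A minimal simulation of this growth rule, assuming windows counted in
packets and one acknowledgement per packet (the variable names are mine,
not the paper's):

    #include <stdio.h>

    /* Toy simulation of slow-start window growth, counted in packets:
     * cwnd starts at 1 and grows by 1 for every ACK, which roughly
     * doubles it each round trip until the receiver's advertised
     * window caps it. */
    int main(void)
    {
        int cwnd = 1;          /* congestion window, in packets */
        int rcv_window = 32;   /* receiver's advertised window (assumed) */

        for (int rtt = 1; cwnd < rcv_window; rtt++) {
            int acks = cwnd;   /* one ACK per packet sent this round trip */
            for (int i = 0; i < acks; i++) {
                if (cwnd < rcv_window)
                    cwnd++;    /* open the window by one packet per ACK */
            }
            printf("after RTT %d: cwnd = %d packets\n", rtt, cwnd);
        }
        return 0;
    }

Run as written, this reaches the 32-packet window after five round
trips, which is the exponential opening the paper describes.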

The second possible reason for the congestion problems in the Internet
is that senders may prematurely inject new packets into the system
before old ones have left. This can happen when the round-trip time and
its variance are not estimated correctly, leading to spurious timeouts
and unnecessary retransmissions. The authors suggest a method for
estimating the variation that yields better retransmission timeouts.
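A floating-point sketch of the kind of estimator involved, with the
smoothing gains (1/8 and 1/4) and the factor of four on the deviation
taken from the BSD implementation; the real code uses scaled integer
arithmetic, and the starting values below are assumptions of mine:

    #include <stdio.h>
    #include <math.h>

    /* Keep a smoothed round-trip time and a smoothed mean deviation,
     * and derive the retransmission timeout from both, rather than
     * from a fixed multiple of the smoothed RTT alone. */
    struct rtt_est {
        double srtt;     /* smoothed RTT, seconds */
        double rttvar;   /* smoothed mean deviation of the RTT */
    };

    static double rtt_update(struct rtt_est *e, double sample)
    {
        double err = sample - e->srtt;

        e->srtt   += 0.125 * err;                      /* gain 1/8 */
        e->rttvar += 0.25  * (fabs(err) - e->rttvar);  /* gain 1/4 */

        /* Timeout = smoothed RTT plus four times the deviation. */
        return e->srtt + 4.0 * e->rttvar;
    }

    int main(void)
    {
        struct rtt_est e = { 1.0, 0.5 };   /* assumed starting state */
        double samples[] = { 1.0, 1.2, 0.9, 2.5, 1.1 };

        for (int i = 0; i < 5; i++) {
            double rto = rtt_update(&e, samples[i]);
            printf("sample %.1fs -> srtt %.2fs rttvar %.2fs rto %.2fs\n",
                   samples[i], e.srtt, e.rttvar, rto);
        }
        return 0;
    }

The point of tracking the deviation is that the timeout grows when the
round-trip time becomes noisy, so a delayed packet is far less likely to
be retransmitted while its original copy is still in the network.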

The final reason for congestion problems that the authors explore is
resource limits along the path (e.g., finite queue lengths in routers
and gateways). To solve this problem, the authors suggest another
window-based algorithm, in which the window size is adjusted according
to whether the network is signalling congestion; in the absence of
explicit feedback, a timed-out packet serves as that signal. This
algorithm, they point out, is distinct from their slow-start algorithm:
slow-start gets a connection (re)started, while congestion avoidance
adapts the window to the capacity the network actually has available.
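The adjustment itself is additive increase / multiplicative decrease:
cut the window in half when a timeout signals congestion, and otherwise
grow it by about one packet per round trip. A toy sketch of that rule,
where the window is counted in packets and the timeout pattern is
invented purely for illustration:

    #include <stdio.h>

    /* Toy sketch of the congestion-avoidance adjustment: additive
     * increase (about one packet per round trip without loss) and
     * multiplicative decrease (halve the window when a timeout
     * signals congestion).  The loss pattern is invented. */
    int main(void)
    {
        double cwnd = 1.0;     /* congestion window, in packets */

        for (int rtt = 1; rtt <= 12; rtt++) {
            int congested = (rtt == 6 || rtt == 10);  /* pretend timeouts */

            if (congested)
                cwnd = cwnd / 2.0;   /* multiplicative decrease */
            else
                cwnd = cwnd + 1.0;   /* additive increase */

            if (cwnd < 1.0)
                cwnd = 1.0;          /* never below one packet */

            printf("RTT %2d: %s cwnd = %.1f packets\n",
                   rtt, congested ? "timeout," : "no loss,", cwnd);
        }
        return 0;
    }

In a real implementation, slow-start would reopen the window after a
timeout before this linear probing takes over; the sketch leaves that
interaction out to keep the decrease/increase rule visible.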