Summary: Congestion Avoidance and Control

From: Andrew Miklas <agmiklas_at_cs.toronto.edu>
Date: Tue, 26 Sep 2006 03:41:00 -0400

This paper explores the causes of "congestion collapse": a condition in which
a network's throughput drops sharply once the offered load passes some
threshold. It presents the "conservation of packets" rule of congestion
avoidance: once a flow is in equilibrium, a new packet should be injected
into the network only when an old packet leaves. The paper claims that if
this simple rule is followed, congestion collapse can be largely avoided.
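
The rule amounts to a kind of self-clocking. A minimal sketch of the idea in
Python (all names and numbers are mine, not from the paper):

    def ack_clocked_sender(window=4, total=12):
        in_flight, sent = window, window   # flow already at equilibrium
        while sent < total:
            # ...wait for an ack: one old packet has left the network...
            in_flight -= 1
            sent += 1                      # inject exactly one new packet
            in_flight += 1
            assert in_flight == window     # in-flight count stays constant
        print("sent %d packets with %d in flight throughout" % (sent, window))

    ack_clocked_sender()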

There are three ways a flow can fail to obey the "conservation of packets"
rule:
1. The flow never reaches equilibrium.
2. The sender injects a new packet before an old packet has left.
3. Equilibrium can't be reached due to some resource constraint.
The paper then presents a number of algorithms that address each of these
failure modes.

The slow-start algorithm lets a flow ramp up to the equilibrium rate without
exceeding it by more than a factor of two at any moment. Briefly, each time a
new flow starts or recovers from a loss, one packet is transmitted, and
transmission then halts pending the ack of that packet. After the first ack,
two packets are transmitted. Once those two acks are received, four packets
are transmitted, and so on. (This ignores the effects of the receiver
window.)
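
A minimal sketch of this exponential ramp-up in Python, assuming every packet
is acked and ignoring the receiver window as above (names are illustrative,
not from the paper):

    def slow_start(rounds=5):
        cwnd = 1                   # congestion window, in packets
        for rtt in range(rounds):
            print("RTT %d: %d packet(s) in flight" % (rtt, cwnd))
            cwnd += cwnd           # +1 packet per ack doubles cwnd each RTT
        return cwnd

    slow_start()                   # windows of 1, 2, 4, 8, 16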

Accurate round-trip timing, together with the recognition that round-trip
time variance increases with load, is the key to ensuring that a sender
doesn't inadvertently violate the second condition. The goal here is to avoid
retransmitting packets that haven't been lost but have simply been delayed by
increased load. Note that retransmitting packets that haven't been lost
violates the second condition, and can thus lead the network into collapse.
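
The paper's fix is to track both the mean and the variation of the measured
RTT and to base the retransmit timeout on their sum. A sketch in floating
point (the paper's appendix gives a scaled integer version; the 1/8 and 1/4
gains and the factor of 4 are the commonly used values):

    class RttEstimator:
        def __init__(self, first_sample):
            self.srtt = first_sample        # smoothed mean RTT
            self.rttvar = first_sample / 2  # smoothed mean deviation
        def update(self, sample):
            err = sample - self.srtt
            self.srtt += err / 8                         # gain 1/8 on mean
            self.rttvar += (abs(err) - self.rttvar) / 4  # gain 1/4 on deviation
            return self.rto()
        def rto(self):
            # The deviation term grows with load, so the timer backs off
            # before a merely delayed packet is declared lost.
            return self.srtt + 4 * self.rttvar

    est = RttEstimator(100.0)        # first measured RTT, in ms
    for s in (110.0, 180.0, 95.0):
        print(round(est.update(s)))  # timeout rises with the variance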

An algorithm that adapts a flow to the path it traverses is also presented.
This algorithm decreases the window size multiplicatively in response to
packet loss, which it interprets as a signal of network congestion, and
increases the window linearly to probe for additional bandwidth along the
path. Together these let the flow avoid congestion even as conditions along
the path between the hosts change.
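
A sketch of this additive-increase/multiplicative-decrease policy in Python
(the per-RTT granularity and the constants are illustrative):

    def next_window(cwnd, loss_detected):
        if loss_detected:
            return max(1.0, cwnd / 2)  # multiplicative decrease on loss
        return cwnd + 1.0              # linear increase to probe for bandwidth

    w = 1.0
    for lost in (False, False, False, True, False):
        w = next_window(w, lost)
        print(w)                       # 2.0, 3.0, 4.0, back off to 2.0, 3.0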

Finally, the paper describes gateway congestion control. The purpose of this
mechanism is to ensure that flows share bandwidth fairly where they cross at
gateways. Note that this can't be done at the end-points, since they have no
way of knowing how flows converge at the various gateways along their paths.
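
The paper only sketches this direction, but the idea is that a gateway, which
can see every flow crossing it, could flag flows using more than an equal
share of its capacity. A hedged illustration (the equal-share policy and all
names here are mine, not the paper's):

    def over_share_flows(capacity_pps, flow_rates_pps):
        fair_share = capacity_pps / max(1, len(flow_rates_pps))
        return {f: r for f, r in flow_rates_pps.items() if r > fair_share}

    print(over_share_flows(100, {"a": 60, "b": 25, "c": 15}))  # {'a': 60}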

I found this paper pretty easy to follow. My only slight confusion was over
how "slow start" and "adapting to the path: congestion avoidance" related to
each other. However, the paper carefully explained the differences.