Review: Integrated Congestion Management Architecture

From: Robert Danek <rdanek_at_sympatico.ca>
Date: Mon, 9 Oct 2006 19:07:52 -0400

Paper: An Integrated Congestion Management Architecture for Internet Hosts

Name: Robert Danek.
Course: CS2209, Fall '06.

    This paper starts by discussing how traffic patterns on the
internet have been changing recently, and how these changes threaten
the long-term stability of the internet. In the past, traffic has
consisted mainly of long-running flows. However, with web workloads on
the rise, more short-lived concurrent TCP flows are being generated.
Besides this, more and more applications bypass TCP's congestion
control mechanism entirely by using UDP and building their own
user-level protocols on top of it.

    The authors propose a congestion management architecture with the
following goals: to efficiently multiplex concurrent flows, and to
give applications and transport protocols the ability to react to
congestion. The benefit of multiplexing concurrent flows is shared
state learning: when a new flow starts, it can take advantage of what
currently established flows have already learned about the network
path. This is particularly valuable for short-lived flows using
protocols like TCP, which have a slow-start phase. Without shared
state learning, such flows terminate before TCP's congestion control
mechanism can reach a steady state.
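
    To make the idea of shared state learning concrete, here is a
minimal sketch of how per-destination congestion state might be kept
and inherited by new flows. The names (CongestionState, FlowTable,
open_flow) are illustrative assumptions, not the paper's actual API:

```python
class CongestionState:
    """Congestion parameters shared by all flows to one destination."""
    def __init__(self, cwnd=1.0, srtt=None):
        self.cwnd = cwnd    # congestion window, in packets
        self.srtt = srtt    # smoothed round-trip time, in seconds


class FlowTable:
    """Maps a destination host to its shared congestion state."""
    def __init__(self):
        self._table = {}

    def open_flow(self, dest):
        # A new flow to a known destination inherits the learned
        # state instead of restarting from slow start; an unknown
        # destination gets conservative defaults.
        return self._table.setdefault(dest, CongestionState())

    def update(self, dest, cwnd, srtt):
        # Established flows report back what they learn about the path.
        state = self._table.setdefault(dest, CongestionState())
        state.cwnd, state.srtt = cwnd, srtt
```

With this structure, a short web transfer to a host that another flow
has already probed starts with a realistic window rather than the
slow-start minimum.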

   
    The congestion manager consists of two modules, one at each end of
the network flow. The module at the receiving end of the flow is
optional and the manager can work without it. It is used for eliciting
feedback from the receiver so that the sender can adjust its congestion
window. The API provided on the sender side "puts the application in
control" by providing a callback mechanism that informs the application
when it can send data and how much it can send; the application can then
decide what to send and how much bandwidth to allocate to each of its
flows.
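
    The sender-side callback pattern described above can be sketched
roughly as follows. This is a toy illustration under my own
assumptions (the real congestion manager is a kernel-level API; the
names CongestionManager, register, and feedback are hypothetical):

```python
class CongestionManager:
    """Toy sender-side manager that tells registered applications
    when and how much they may send."""
    def __init__(self):
        self._apps = []     # registered application callbacks

    def register(self, app_send_cb):
        # The application registers a callback; the manager invokes
        # it with the number of bytes the application may send.
        self._apps.append(app_send_cb)

    def feedback(self, budget_bytes):
        # Feedback (e.g., from the optional receiver module) sets the
        # current sending budget; the manager grants each registered
        # application an equal share, leaving the application to
        # decide what data to send within that allowance.
        share = budget_bytes // max(len(self._apps), 1)
        for cb in self._apps:
            cb(share)


sent = []
cm = CongestionManager()
cm.register(lambda n: sent.append(("flow-a", n)))
cm.register(lambda n: sent.append(("flow-b", n)))
cm.feedback(3000)   # each flow is told it may send 1500 bytes
```

The inversion of control is the point of contention in my critique
below: the application must structure itself around these callbacks
rather than simply writing to a socket.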
   
    I didn't think this was a very good paper. It requires too much
reengineering of existing applications and protocols. The callback
mechanism that requires applications to register for notifications is
particularly ugly, and I doubt anyone will want to make use of it.
Developers want a clean interface that they don't have to think about.
In addition to this, the architecture doesn't solve the problem of
malicious hosts that just want to grab as much bandwidth as possible.
The authors' own experiments show that by not using their mechanism, a
malicious host could achieve greater aggregate throughput with TCP
NewReno.