Review: End-to-End Internet Packet Dynamics

From: Robert Danek <rdanek_at_sympatico.ca>
Date: Thu, 16 Nov 2006 01:56:26 -0500

This paper investigates the dynamics of packets on the Internet. In
particular, it studies unusual network behaviour, such as out-of-order
delivery, packet replication, and packet corruption; it also presents
a new algorithm for estimating bottleneck bandwidth and, using this
algorithm, goes on to analyze end-to-end Internet packet loss and
delay.

In order to perform the measurements, the paper makes use of a
framework of measurement daemons called Network Probe Daemons (NPDs).
The study involves 35 NPD sites and measures TCP bulk transfers
conducted between these sites. The problem with using TCP, however,
is that the effects of the protocol and of the network are
intertwined. The author uses a special analysis program to separate
these effects. There is also another problem: since the time scales
over which TCP packets are sent can vary widely, correlational and
frequency-domain analysis can be difficult. For this reason, the
latter type of analysis is not performed.

The paper makes some interesting observations about the prevalence of
packet reordering. Based on the data collected in the paper, this
phenomenon is quite common on the Internet. The author goes on to
observe that better knowledge of when reordering occurs could help
improve the fast retransmit algorithm used in TCP. In particular, the
paper examines two different ways that the fast retransmit algorithm
could be modified: the first involves delaying the generation of
duplicate acks so that reordering can be better distinguished from
loss; the second involves altering the retransmit threshold to avoid
unneeded retransmissions.
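
As a rough sketch of the mechanism being tuned (illustrative code,
not the paper's; the function name and threshold parameter are my
own): standard TCP triggers fast retransmit after three duplicate
acks, and changing that threshold trades unneeded retransmissions
against slower reaction to real loss.

```python
# Sketch of the standard fast retransmit trigger with a configurable
# duplicate-ack threshold (standard TCP uses 3). Illustrative only.

def should_fast_retransmit(acks, dup_threshold=3):
    """Return True once `dup_threshold` duplicate acks (acks that
    repeat the previous ack's sequence number) have been observed."""
    last_ack = None
    dup_count = 0
    for ack in acks:
        if ack == last_ack:
            dup_count += 1
            if dup_count >= dup_threshold:
                return True
        else:
            last_ack = ack
            dup_count = 0
    return False

# Reordering often produces only one or two duplicate acks, so a
# higher threshold avoids retransmitting for mere reordering, at the
# cost of responding to genuine loss more slowly.
```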

Other network pathologies that the paper investigates include packet
replication, which occurs rarely, and packet corruption. The results
that the author obtains for packet corruption are inconclusive. Some
of his results suggest that the corruption rate is high enough that
it may be beneficial for TCP to use a 32-bit checksum instead of its
16-bit one. The author suggests that further studies are required in
this area.
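
For context, TCP's checksum is the standard 16-bit ones'-complement
Internet checksum (RFC 1071); with only 16 bits, some corruption
patterns leave the checksum unchanged and go undetected, which is why
a wider check could help. A minimal sketch of the 16-bit computation:

```python
def internet_checksum(data: bytes) -> int:
    """16-bit ones'-complement Internet checksum (RFC 1071), as used
    by TCP. Corruption that happens to preserve this 16-bit value
    passes undetected."""
    if len(data) % 2:            # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carry back in
    return ~total & 0xFFFF
```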

The paper then goes on to discuss a method for determining the
bottleneck bandwidth of a network path, i.e., the maximum rate at
which a connection can possibly transmit data. An existing method for
calculating the bottleneck bandwidth uses packet pairs: if two
packets are transmitted with a small enough interval separating them,
then after they have passed through the bottleneck link, the interval
separating their arrivals at the destination can be used to calculate
the bottleneck bandwidth. This technique suffers from a number of
potential problems, including out-of-order delivery, limitations due
to clock resolution, changes in bottleneck bandwidth, and
multi-channel bottleneck links. The paper suggests an alternative
method for performing the calculation, called "packet bunch modes"
(PBM).
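
The core packet-pair arithmetic is simple; the following is a minimal
sketch (my own illustration, not the paper's PBM implementation,
which generalizes the idea to bunches of more than two packets):

```python
def packet_pair_estimate(packet_size_bytes: int, arrival_gap_s: float) -> float:
    """Estimate bottleneck bandwidth in bits/s from one packet pair.

    If two packets leave the sender back to back, the second queues
    behind the first at the bottleneck link, so the gap between their
    arrivals approximates the time the bottleneck takes to transmit
    one packet: bandwidth ~= packet_size / gap.
    """
    return packet_size_bytes * 8 / arrival_gap_s

# e.g. a pair of 1500-byte packets arriving 1.2 ms apart suggests a
# bottleneck of about 1500 * 8 / 0.0012 = 10 Mbit/s
```

The pathologies listed above all break the single-pair reading: with
reordering or a multi-channel link the measured gap no longer reflects
one packet's transmission time at a single bottleneck, which is what
motivates a more robust estimator.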

The last sections of the paper discuss packet loss and packet delay.
One of the interesting aspects of packet loss that is discussed is
how often retransmissions are triggered by genuine losses. The author
notes that certain implementations of TCP do not properly calculate
retransmission timeouts and as a result cause unnecessary traffic.
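
For reference, the calculation such implementations get wrong is the
standard Jacobson-style retransmission timeout (later codified in
RFC 6298): exponentially weighted estimates of the round-trip time
and its variation set the timer. A sketch, assuming times in seconds
(illustrative, not the paper's code):

```python
class RtoEstimator:
    """Jacobson-style RTO estimation (see RFC 6298). Implementations
    that miscompute this tend to time out too early and retransmit
    packets that were never lost, creating unnecessary traffic."""

    def __init__(self, first_rtt: float):
        self.srtt = first_rtt            # smoothed RTT
        self.rttvar = first_rtt / 2      # smoothed RTT variation

    def update(self, rtt_sample: float) -> float:
        # RFC 6298 update order: RTTVAR first, then SRTT
        self.rttvar = 0.75 * self.rttvar + 0.25 * abs(self.srtt - rtt_sample)
        self.srtt = 0.875 * self.srtt + 0.125 * rtt_sample
        return self.rto()

    def rto(self) -> float:
        # RFC 6298 recommends a 1-second minimum on the timeout
        return max(1.0, self.srtt + 4 * self.rttvar)
```

With a steady RTT the variation term decays toward zero and the
timeout settles at its floor; a timer computed without the variation
term (or with too small a floor) fires before slightly delayed acks
arrive, which matches the unnecessary retransmissions described above.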