In this paper, the authors show that in the current Internet, for about
70-80% of the paths that routers select between pairs of hosts, there
exists an alternate path with significantly better quality in terms of
round-trip time, loss rate, and bandwidth.
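To make the claim concrete, a rough sketch of the comparison as I understand
it (my own illustration, not the authors' code; the rtt table and the 10 ms
threshold are assumptions) would, for each host pair, check whether relaying
through some intermediate host gives a significantly better round-trip time
than the default path:

# Sketch only: for each pair of hosts, compare the default round-trip time
# with the best one-hop alternate path through an intermediate host.
# The rtt dict and the 10 ms threshold are illustrative assumptions.
def fraction_with_better_alternate(rtt, hosts, threshold_ms=10.0):
    better, total = 0, 0
    for a in hosts:
        for b in hosts:
            if a == b or (a, b) not in rtt:
                continue
            total += 1
            # Best alternate: relay the traffic through some other host k.
            alternates = [rtt[(a, k)] + rtt[(k, b)]
                          for k in hosts
                          if k not in (a, b)
                          and (a, k) in rtt and (k, b) in rtt]
            if alternates and min(alternates) + threshold_ms < rtt[(a, b)]:
                better += 1
    return better / total if total else 0.0

The paper's claim corresponds to this fraction coming out around 0.7-0.8 on
their datasets (for round-trip time; the loss-rate and bandwidth comparisons
would be analogous).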
The authors use four datasets and refine them to reduce structural and
statistical biases in their measurements as much as possible, for example by
eliminating hosts that do not respond reliably to ICMP probes. The authors
also argue that their measurements are robust to many of the factors
involved in the measurement process, such as the length of the time period
over which performance is averaged.
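As a sketch of the kind of filtering and windowing I have in mind (again my
own illustration, with an assumed data layout and thresholds, not the
authors' pipeline):

# Illustrative only: drop hosts whose probes fail too often (e.g. because
# they filter or rate-limit ICMP), then average RTT samples over windows
# whose length can be varied to test the robustness of the results.
def filter_hosts(samples, min_reply_rate=0.9):
    # samples: {host: [(timestamp, rtt_or_None), ...]}, None = no reply
    kept = {}
    for host, probes in samples.items():
        replies = [r for _, r in probes if r is not None]
        if probes and len(replies) / len(probes) >= min_reply_rate:
            kept[host] = probes
    return kept

def windowed_average(probes, window_s=300):
    # Average RTT over consecutive windows of window_s seconds; rerunning
    # with different window_s values is one way to probe the robustness
    # claim about the averaging period.
    buckets = {}
    for t, r in probes:
        if r is not None:
            buckets.setdefault(int(t // window_s), []).append(r)
    return {w: sum(v) / len(v) for w, v in buckets.items()}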
In Section 7.1, the authors use a heuristic to find the top ten hosts
contributing to the CDF of the difference between the default path's quality
and the best alternate path's quality. I think this heuristic can perform
arbitrarily badly, especially if the aim is to find a larger number of
contributing hosts (say, the top 10% rather than the top ten). However, an
interesting point of this section of the work is the authors' ability to
decouple the propagation-delay contribution from the congestion contribution
to the CDF, shown nicely in Figure 16.
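My reading of the heuristic is a greedy selection along the following lines
(a guess at the idea, not the authors' exact procedure; the data layout is
assumed), which also illustrates why it can go wrong: each pick is made once
and never revisited, so early choices can steer the selection far from the
true top set as the number of hosts requested grows.

# A guess at the style of heuristic, not the authors' exact procedure:
# greedily pick the host whose removal most reduces the number of host
# pairs whose best alternate path beats the default path.
def top_contributing_hosts(gaps, k=10):
    # gaps: {(src, dst): default_quality - best_alternate_quality}
    remaining = dict(gaps)
    chosen = []
    for _ in range(k):
        hosts = {h for pair in remaining for h in pair}
        if not hosts:
            break
        def mass_without(h):
            # Pairs that would still show a positive gap if h were removed.
            return sum(1 for (s, d), g in remaining.items()
                       if g > 0 and h not in (s, d))
        best = min(hosts, key=mass_without)  # removing `best` helps the most
        chosen.append(best)
        remaining = {p: g for p, g in remaining.items() if best not in p}
    return chosen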
I was also expecting the paper to discuss the stability of the computed
default paths, and whether a default path could have converged to a better
one had the network parameters remained fixed. In other words, is the
phenomenon the authors describe an inherent drawback of the protocols that
select the default paths, or is it due to the instability of the Internet?
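Concretely, the check I have in mind (a sketch under my own assumptions about
what a measurement snapshot contains) would look at successive snapshots and
ask how often the default path changes, and whether it ever settles on the
best known alternate:

# Sketch only: `snapshots` is a list of {pair: (default_path, best_alt_path)}
# dicts taken at successive times. Count how often the default route changes
# between snapshots and how often it coincides with the best alternate.
def path_stability(snapshots):
    changes, converged, pairs = 0, 0, set()
    for prev, curr in zip(snapshots, snapshots[1:]):
        for pair, (default, best_alt) in curr.items():
            pairs.add(pair)
            if pair in prev and prev[pair][0] != default:
                changes += 1    # default path flapped between snapshots
            if default == best_alt:
                converged += 1  # default currently sits on the best alternate
    return changes, converged, len(pairs)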