Review - The End-to-End Effects of Internet Path Selection

From: Ian Sin <ian.sinkwokwong_REMOVE_THIS_FROM_EMAIL_FIRST_at_utoronto.ca>
Date: Thu, 3 Nov 2005 09:13:36 -0500

This paper shows, through measurement and analysis of real traces, that
for a large number of Internet paths there exist superior alternative
paths. The authors argue that because current routing protocols do not
propagate information such as RTT, bandwidth, and loss rate, the
default routing paths are often suboptimal.

Although, in my opinion, this work rests on many approximations, the
authors argue at great length that these approximations are reasonable
and support their conclusions. I liked that they presented the biases
in their measurements and defended their assumptions as conservative,
since the approximations only underestimate the inefficiency of current
routing protocols. The other strong point of this paper is that it
considers factors such as node popularity on alternate paths, as well
as other variations in the data set (mean vs. median, etc.), as
potential sources of bias, but shows that they really are not.
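
The mean-vs-median point matters because RTT samples are heavily
skewed: a handful of congestion spikes can pull the mean far from the
typical case. A quick illustration, with invented numbers:

    import statistics

    # Invented RTT samples in ms: mostly ~30 ms plus two spikes.
    samples = [29, 30, 31, 30, 28, 32, 30, 400, 650]
    print(statistics.mean(samples))    # 140.0 ms, dominated by spikes
    print(statistics.median(samples))  # 30 ms, the typical case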

However, after reading this paper, I did not learn anything beyond what
the introduction already stated. OK, 30-80% of the default routing
paths are inefficient, and much of the paper is devoted to supporting
this claim, which I agree is important. But now what? The authors
present no solution or insight into what could be done about this
situation. Can we even contemplate deploying an improved BGP?

Since this problem seems important (up to 80% of paths are
inefficient), should we be concentrating more research effort on
routing? Or maybe the current approximate routing is good enough, and
the extra overhead and complexity of passing this metadata (bandwidth,
RTT, etc.) among routers is not worth it? After all, networks are
already very fast, and alternative paths that are better by only
1-10 us will not make a noticeable difference.
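
Taking my 1-10 us figure at face value, a quick back-of-envelope check
(assuming a typical wide-area RTT of around 50 ms, my number, not the
paper's) shows why such a gain would be imperceptible:

    # Assumed numbers: relative gain of a 10 us improvement over a
    # 50 ms baseline RTT.
    default_rtt_ms = 50.0   # assumed typical wide-area RTT
    improvement_ms = 0.010  # 10 us expressed in ms
    print(100 * improvement_ms / default_rtt_ms)  # 0.02 (percent)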