Review: An Architecture for Content Routing Support in the Internet

From: Waqas ur Rehman <waqas_at_cs.toronto.edu>
Date: Thu, 30 Nov 2006 10:15:22 -0500

Timely content delivery can significantly affect the traffic a web site
receives, which is why content providers rely on content delivery networks
(CDNs) to reduce latency. Providers lower delivery latency by placing
replica servers at multiple geographic sites; the idea is to serve each
user from the nearest replica, which in turn requires that a client's
request be routed to that replica. The author argues that current content
routing mechanisms suffer significant delays because they rely on DNS as
the basic means of directing clients to the nearest replica, so the goal
of content routing, namely reducing the time to access content, is not
fully achieved.

To address this problem, the author proposes integrating content routing
into the network and treating it as a simple routing problem. Specialized
content routers distribute and maintain information about content
reachability. The author introduces two protocols, the Internet Name
Resolution Protocol (INRP) and the Name-Based Routing Protocol (NBRP), to
support these functions. A client requiring particular content forwards
its request to a local content router using INRP; the request is then
forwarded using the name-to-next-hop mapping in each content router until
it reaches the router adjacent to the best content server, which generates
a response containing that server's address. NBRP, on the other hand, is
used to distribute routing information among the content routers. To
evaluate the solution, the author carried out small-scale experiments
delivering content using INRP and NBRP. The results are encouraging and
show a significant reduction in the latency of resolving the content
server's address.
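
To make the forwarding step concrete, here is a minimal sketch, in Python,
of the kind of name-to-next-hop lookup an INRP content router might
perform. The dotted name format, the table entries, and the
longest-match rule are my own illustrative assumptions, not details taken
from the paper.

    # Hypothetical routing table: content name prefix -> next-hop content router.
    ROUTING_TABLE = {
        "com.example.www": "router-A",       # most specific entry
        "com.example":     "router-B",
        "com":             "router-C",
        "":                "default-router",  # default route
    }

    def next_hop(content_name: str) -> str:
        """Return the next hop for a content name using longest-match
        on the name's hierarchical components (assumed matching rule)."""
        parts = content_name.split(".")
        # Try progressively shorter prefixes of the name hierarchy.
        for i in range(len(parts), -1, -1):
            key = ".".join(parts[:i])
            if key in ROUTING_TABLE:
                return ROUTING_TABLE[key]
        raise LookupError("no route for " + content_name)

    # A request for "com.example.www.index" is forwarded to router-A;
    # "com.other.site" falls back to the shorter "com" entry, router-C.
    print(next_hop("com.example.www.index"))  # router-A
    print(next_hop("com.other.site"))         # router-C

Each router along the path would repeat such a lookup until the request
reaches the router adjacent to the best content server; NBRP would be the
protocol that populates these tables.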

I believe the idea presented in this paper is attractive, but I am not
sure whether it actually addresses a real problem. First, the author
assumes that 80% of the traffic is HTTP, which I believe is no longer
valid, as most traffic today is peer-to-peer. Second, the author shows
that the latency of the traditional approach is about 100 ms on average,
which I do not consider very large. Introducing extra complexity into the
current Internet architecture to address these relatively minor problems
does not make sense to me. However, I do like the author's goal of having
open standards, rather than proprietary ones, controlling the Internet.