[CSC2231] Paper Review: Globally Distributed Content Delivery

From: Kenneth Po <kpo_REMOVE_THIS_FROM_EMAIL_FIRST_at_eecg.toronto.edu>
Date: Thu, 6 Oct 2005 10:06:23 -0400

This article describes Akamai, a scalable and reliable web-content
delivery network. Akamai distributes replicas of web content globally
at the edge of the Internet to provide fast performance and to
eliminate single points of failure.

The Akamai approach depends heavily on DNS for load balancing. However,
there is one point in the article that I doubt: the name server is said
to understand service and content requests and to direct them to edge
servers that can provide the requested content. To my understanding,
the DNS protocol does not carry any information about the type of
service or content being requested. Does this mean that Akamai's DNS
servers can infer the service or content a client will request simply
by looking at the name itself?
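
To make the question concrete, here is a minimal Python sketch (standard
library only) that resolves a hostname and prints the CNAME aliases the
resolver reports. The hostname below is a placeholder; for an Akamai
customer, the chain would typically end in a name inside Akamai's own
namespace, so the name itself could identify whose content is wanted:

    import socket

    # Minimal sketch: resolve a hostname and print any CNAME aliases the
    # resolver reports. "www.example.com" is a placeholder; an Akamai
    # customer's hostname typically CNAMEs into a name under Akamai's own
    # domain, so the Akamai-side name itself can encode which customer's
    # content is being requested.
    def show_cname_chain(hostname):
        canonical, aliases, addresses = socket.gethostbyname_ex(hostname)
        print("queried:  ", hostname)
        print("aliases:  ", aliases)    # intermediate CNAME names, if any
        print("canonical:", canonical)  # final name the DNS answer maps to
        print("addresses:", addresses)  # edge-server IPs chosen by the CDN

    show_cname_chain("www.example.com")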

Given that the Akamai network provides many replicas, they could be
exploited to download large content faster. Common download tools open
multiple connections to a server so that each connection performs a
partial download; the same technique could be applied to Akamai-hosted
content. The difference is that each connection could connect to a
different edge server to exploit even more parallelism.
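
A rough Python sketch of this idea, using HTTP Range requests over
parallel connections, is below. The URL and file size are made up; a
real client would learn the size first (e.g., via a HEAD request) and
could resolve the CDN hostname once per connection in the hope of
landing on different edge servers:

    import concurrent.futures
    import urllib.request

    URL = "http://cdn.example.com/large-file.bin"   # placeholder URL
    SIZE = 4 * 1024 * 1024                          # assumed file size
    PARTS = 4                                       # parallel connections

    def fetch_range(start, end):
        # HTTP Range requests use inclusive byte offsets.
        req = urllib.request.Request(
            URL, headers={"Range": "bytes=%d-%d" % (start, end)})
        with urllib.request.urlopen(req) as resp:
            return start, resp.read()

    chunk = SIZE // PARTS
    ranges = [(i * chunk, SIZE - 1 if i == PARTS - 1 else (i + 1) * chunk - 1)
              for i in range(PARTS)]
    with concurrent.futures.ThreadPoolExecutor(max_workers=PARTS) as pool:
        pieces = sorted(pool.map(lambda r: fetch_range(*r), ranges))
    data = b"".join(body for _, body in pieces)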

I am wondering whether the strategy used by Google has any advantage
over Akamai's, or vice versa. Although both primarily serve read-only
content, Google uses huge clusters at two data centers, whereas Akamai
places a dozen or more servers at data centers distributed globally.
From a casual Internet user's perspective, the availability of both
Google and Akamai's content providers (such as Yahoo!) is good. Perhaps
there would be interesting discoveries if we could use PlanetLab
machines to study the availability of these two systems over the long
term.
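
If we were to try this, the measurement itself could be as simple as the
following Python sketch, run from each PlanetLab node. The target URLs
and the 60-second probing interval are arbitrary placeholders:

    import time
    import urllib.request

    # Sketch of a long-term availability probe: periodically issue an
    # HTTP request to each target and log success or failure. Placeholder
    # targets; a real study would probe many sites from many vantage
    # points and record the results for later analysis.
    TARGETS = ["http://www.google.com/", "http://www.yahoo.com/"]

    def probe(url, timeout=10):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status == 200
        except OSError:          # covers URLError, timeouts, DNS failures
            return False

    while True:
        for url in TARGETS:
            print(time.time(), url, "up" if probe(url) else "down")
        time.sleep(60)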