Globally Distributed Content Delivery
-------------------------------------
J. Dilley, B. Maggs, J. Parikh, H. Prokop, R. Sitaram, B. Weihl

The paper describes Akamai, a content delivery network that (as of 2002) ran about 12,000 servers in over 1,000 networks. After an analysis of existing approaches, the authors present the system's main features (DNS-based load balancing, real-time network monitoring, the types of content served) and the technical challenges of maintaining and enhancing the system.

The paper's main strength is the system itself. Akamai delivers static content, dynamic content, and streaming media. From the user's point of view, content arrives quickly because it is served from a nearby Akamai cluster. From the customer's point of view, Akamai offloads traffic from the origin server (important at peak times) and also provides content-access logs. For load balancing the system relies on DNS redirection (a small resolution sketch at the end of this review illustrates the mechanism); to improve availability it uses practices such as online testing and testing software before deployment.

The comparison with existing approaches is weak. For example, Google successfully serves its workload from centralized clusters of thousands of commodity-class PCs; which architecture is preferable really depends on the system and its workload. Another questionable point is the replication-versus-partitioning discussion: disk capacity is not a real constraint, since disks are cheap. Also, although the authors describe a successful CDN, they fail to back their claims up with data. A further weakness is that the paper does not address the security of the Akamai system in any depth; an attack on Akamai's DNS can be a real problem (it actually happened in 2004, when Google was using Akamai's DNS servers). Although not central, I also think there is an error in the DNS resolution part: the low-level DNS servers do not appear in Figure 1.

Interesting issues: how hard is it to manage a system like Akamai? What is Akamai's availability? How much does Akamai speed up a site at peak and off-peak times (see the timing sketch below)? And how good is their load-balancing strategy (DNS redirection) in practice?
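To make the DNS-redirection mechanism concrete, here is a minimal sketch of how one could observe it from a client. This is my illustration, not code or hostnames from the paper: "www.example.com" is a placeholder, and the sketch uses only the Python standard library.

```python
import socket

# Placeholder hostname; in practice one would query a site whose DNS is
# delegated (via CNAME) to a CDN such as Akamai.
hostname = "www.example.com"

# gethostbyname_ex returns the canonical name, the alias (CNAME) chain as
# seen by the local resolver, and the addresses the authoritative DNS chose
# for this resolver. Running the same query from resolvers in different
# networks would typically return different addresses, which is how DNS
# redirection steers clients toward a nearby server cluster.
canonical, aliases, addresses = socket.gethostbyname_ex(hostname)

print("canonical name:", canonical)
print("alias chain   :", aliases)
print("edge addresses:", addresses)
```

Because the mapping is computed at resolution time, it can fold in the real-time network and load measurements the paper describes; the trade-off is that clients behind a distant shared resolver may be mapped to a cluster that is close to the resolver rather than to the client.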
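One of the open issues above, the actual speed-up with and without Akamai, could at least be probed with a simple client-side measurement. The sketch below is mine, not from the paper; both URLs are placeholders, and a real study would need many samples, several vantage points, and care to avoid caching effects.

```python
import time
import urllib.request

# Hypothetical URLs for the same object served directly from the origin and
# through a CDN-accelerated hostname; neither comes from the paper.
ORIGIN_URL = "http://origin.example.com/logo.png"
CDN_URL = "http://www.example.com/logo.png"

def fetch_seconds(url: str) -> float:
    """Time one full download of `url` (DNS lookup + connect + transfer)."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as response:
        response.read()
    return time.perf_counter() - start

for label, url in [("origin", ORIGIN_URL), ("cdn", CDN_URL)]:
    # Repeating this at peak and off-peak hours and averaging several runs
    # would give a rough picture of the speed-up the review asks about.
    print(label, round(fetch_seconds(url), 3), "s")
```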