review of Akamai

From: Guoli Li <gli_REMOVE_THIS_FROM_EMAIL_FIRST_at_cs.toronto.edu>
Date: Wed, 5 Oct 2005 18:42:02 -0400

 

This paper introduces Akamai's solution for distributed content delivery over the Internet. The Akamai system speeds up content delivery by caching content at the edges of the Internet, enabling enterprises to improve the global performance, reliability, and scalability of their Web-based content. As a result, Akamai's customers see higher end-user satisfaction.

 

Akamai uses a dynamic, fault-tolerant DNS system to direct each client request to an appropriate server that is likely to hold the requested objects. The DNS system is also responsible for load balancing and for handling failures in the network. The name servers map a client request to a server based on server workload, the requested content, network conditions, client location, and so on. The goal of the DNS system is to find the most efficient server to handle client requests under certain quality-of-service metrics. The DNS system also shields end users from network failures: when a failure is detected, the name servers adjust their resource records accordingly and return different server IP addresses to users. Content is cached at edge servers of the Internet, which shortens the response time of client requests. The success of Akamai lies in professional services designed from the users' point of view; many success case studies are presented on their website (www.akamai.com).
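To make the request-routing idea concrete, here is a rough Python sketch of the kind of decision the DNS system has to make: pick a nearby, lightly loaded server that ideally already caches the object, and skip servers that look failed or overloaded. This is not Akamai's actual algorithm; the server fields, weights, and thresholds below are made up for illustration.

from dataclasses import dataclass

@dataclass
class EdgeServer:
    ip: str
    load: float          # 0.0 (idle) .. 1.0 (saturated)
    latency_ms: float    # measured latency to the client's name server
    has_content: bool    # object already cached at this server

def pick_server(servers, max_load=0.9):
    # Exclude overloaded or failed servers, mirroring how the DNS stops
    # handing out an IP once a failure or overload is detected.
    candidates = [s for s in servers if s.load < max_load]
    if not candidates:
        return None
    # Lower score is better: latency dominates, load breaks ties,
    # and a cache hit earns a small bonus.
    def score(s):
        return s.latency_ms * (1.0 + s.load) - (20.0 if s.has_content else 0.0)
    return min(candidates, key=score)

if __name__ == "__main__":
    servers = [
        EdgeServer("192.0.2.10", load=0.30, latency_ms=25, has_content=True),
        EdgeServer("192.0.2.11", load=0.80, latency_ms=10, has_content=False),
        EdgeServer("192.0.2.12", load=0.95, latency_ms=5,  has_content=True),
    ]
    best = pick_server(servers)
    print("answer DNS query with:", best.ip if best else "no healthy server")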

 

Cache consistency is a big challenge, especially for a large-scale distributed system such as Akamai. Multiple copies of content may be cached across the edge servers in response to user requests, so inconsistency can arise between a cache and the content provider, and at the same time among the multiple caches holding the same content. TTLs and versioned content are used to address the cache consistency problem. However, both may deliver stale information to clients, which may be unavoidable in a distributed system; the best we can do is serve content that is as fresh as possible. A possible improvement is to assign priorities to cacheable objects: the most important objects are requested directly from the provider, as is done for uncacheable objects, while TTLs and versioning are applied to the rest. Of course, there is a tradeoff between freshness and delivery performance.
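The priority idea could look roughly like the following Python sketch. The EdgeCache class, the fetch_from_origin stub, and the TTL values are all illustrative assumptions, not part of the paper: high-priority objects always go back to the origin, as uncacheable objects do, while ordinary objects are served from the edge cache until their TTL expires.

import time

def fetch_from_origin(url):
    # Placeholder for a real request to the content provider (origin).
    return f"<content of {url} fetched at {time.time():.0f}>"

class EdgeCache:
    def __init__(self, default_ttl=60):
        self.default_ttl = default_ttl
        self.store = {}   # url -> (content, expiry time)

    def get(self, url, priority="normal"):
        now = time.time()
        # Highest-priority objects bypass the cache entirely, trading
        # delivery performance for freshness.
        if priority == "high":
            return fetch_from_origin(url)
        entry = self.store.get(url)
        if entry and entry[1] > now:
            return entry[0]               # fresh cached copy
        content = fetch_from_origin(url)  # expired or missing: refetch
        self.store[url] = (content, now + self.default_ttl)
        return content

if __name__ == "__main__":
    cache = EdgeCache(default_ttl=30)
    print(cache.get("http://example.com/logo.gif"))             # cacheable
    print(cache.get("http://example.com/quotes.html", "high"))  # always origin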

 

The monitoring used for load balancing in Akamai is centralized to some extent. A monitoring application collects workload data from the content servers and generates load reports for the local DNS servers and the top-level DNS server, and the information at the top level is consumed by the traffic analyzer application. It is not efficient for a centralized analyzer to process all of the raw reports. Intuitively, local traffic analyzers should process the traffic data first and report much more condensed information to the top-level server; distributed monitoring and analysis would be more efficient in terms of both network traffic and analysis performance.
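The suggested division of labour could look roughly like this Python sketch (the record formats and region names are assumptions, not Akamai's actual reporting protocol): each local analyzer condenses the raw per-server load samples into a small summary, and only the summaries travel to the top-level analyzer.

from statistics import mean

def summarize_region(region, samples):
    # samples: list of (server_ip, load) pairs collected locally.
    loads = [load for _, load in samples]
    return {
        "region": region,
        "servers": len(samples),
        "avg_load": round(mean(loads), 2),
        "max_load": max(loads),
    }

def top_level_view(summaries):
    # The top-level analyzer now works with a few summaries instead of
    # every raw report, which is the efficiency argument made above.
    return sorted(summaries, key=lambda s: s["avg_load"], reverse=True)

if __name__ == "__main__":
    toronto = summarize_region("toronto", [("10.0.0.1", 0.4), ("10.0.0.2", 0.7)])
    boston  = summarize_region("boston",  [("10.1.0.1", 0.9), ("10.1.0.2", 0.8)])
    for s in top_level_view([toronto, boston]):
        print(s)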

 

In industry, Akamai is now the leading global service provider for accelerating content and business processes online.