Review for Paper: The Design Philosophy of the DARPA Internet Protocols

From: Iqbal I Mohomed <iimohome_at_us.ibm.com>
Date: Thu, 14 Sep 2006 10:41:07 -0400

Reviewer: Iqbal Mohomed
Paper: The Design Philosophy of the DARPA Internet Protocols
D. Clark
ACM SIGCOMM 1988.

This paper describes the various design goals of the Internet, their
relative importance, and how considerations for these goals led to the
evolution of its architecture. It was an interesting read, particularly
because the author had an authoritative role in the architectural design of
TCP/IP.
One interesting point made by the paper is that the designers were working
with a specific set and ordering of goals in mind. If the goals or their
ordering had been different, the Internet would have been designed quite
differently. The overarching goal was the need to interconnect existing
heterogeneous networks. A key ramification of this goal was that the
designers could make only minimal assumptions about the functionality
provided by these networks. If this had not been the overarching goal, a
more homogeneous architecture could have been imagined, permitting other
goals (such as differentiated services) to be met better. Another point
that I found interesting was the fate-sharing argument. This is somewhat
related to the end-to-end argument made in Saltzer et al.'s paper. The
idea is that, for the Internet to be flexible and resilient to failures of
intermediate nodes, the end hosts must take on responsibilities that could
conceivably be pushed into the network itself.
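
To make the fate-sharing point concrete for myself, here is a minimal
stop-and-wait sketch in Python. The peer address, chunk size, and one-bit
sequence number are my own illustrative assumptions, not anything from the
paper. The point is that all of the state the transfer depends on (the
unacknowledged data, the sequence bit, the timer) lives at the sending end
host, so the transfer can survive the failure of any intermediate router.

    import socket

    # Stop-and-wait sender: the end host holds the recovery state
    # (unacknowledged data, sequence bit, timer), so losing an
    # intermediate router loses no state this transfer depends on.
    def send_reliably(payload: bytes, peer=("198.51.100.7", 9999),
                      timeout=1.0, retries=8):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout)
        seq = 0
        for start in range(0, len(payload), 512):
            packet = bytes([seq]) + payload[start:start + 512]
            for attempt in range(retries):
                sock.sendto(packet, peer)        # (re)transmit
                try:
                    ack, _ = sock.recvfrom(16)
                    if ack and ack[0] == seq:    # correct ACK: next chunk
                        break
                except socket.timeout:
                    pass                         # lost data or ACK: retry
            else:
                sock.close()
                raise ConnectionError("peer unreachable after retries")
            seq ^= 1                             # alternate 0/1 sequence bit
        sock.close()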

The paper mentioned two areas of weakness for the Internet in the 1980s
that interestingly still exist. The first is differentiated services and
the other is detailed accounting. Reading this paper from the 1980s
suggests that the existing architecture of the Internet simply does not
afford idealized solutions to these two problem areas.

A point in the paper that I found enjoyable to read was the survivability
argument. The professor of this class repeatedly argues that if a rare
glitch in the Internet can be corrected by pressing the refresh button in a
browser, we don't really need to engineer a complex, heavy-weight solution.
This actually agrees with the end-to-end arguments made in the Saltzer et
al. paper (for example, human-level retransmission in VoIP). However, I
found it interesting to see the old-school view in this paper: the goal of
the reliable transport layer is to make sure that applications do not have
to deal with timeouts and retransmissions. It is also interesting that the
original ARPANET was designed so that it would not have a single point of
failure. I believe that today's Internet has some weak spots. For example,
a speaker at the UofT networks seminar mentioned that a large chunk of the
network would go down if a certain nameserver in Berkeley were taken down.
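
The "press refresh" point can also be made in code. Below is a tiny
application-level retry loop in Python; the URL, retry count, and backoff
values are illustrative assumptions on my part. The resilience lives
entirely in the application (or, equivalently, in the human who retries)
rather than in any heavier in-network mechanism.

    import time
    import urllib.request

    # Application-level "refresh button": just retry the whole request a
    # few times instead of relying on heavyweight in-network recovery.
    def fetch_with_retries(url="http://example.com/",
                           retries=3, backoff=2.0):
        for attempt in range(retries):
            try:
                with urllib.request.urlopen(url, timeout=5) as resp:
                    return resp.read()
            except OSError:  # urllib errors and timeouts subclass OSError
                time.sleep(backoff * (attempt + 1))
        raise ConnectionError(f"gave up on {url} after {retries} attempts")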