Review for Paper: End-to-End Arguments in System Design

From: Iqbal I Mohomed <iimohome_at_us.ibm.com>
Date: Thu, 14 Sep 2006 10:15:16 -0400

Reviewer: Iqbal Mohomed
Paper:
J.H. Saltzer, D.P. Reed, and D.D. Clark, "End-to-End Arguments in System
Design", ACM TOCS, Vol. 2, No. 4, pp. 277-288, November 1984.

This paper argues that higher-level functionality for network communication
should be implemented at end hosts, rather than at intermediate nodes. An
example of a reliable file transfer application is presented at the outset,
and referred back to throughout the paper. The key idea is that the
functionality of interest has to be implemented by the end hosts anyway, so
providing the same functionality at intermediate hops in the network is
redundant and inefficient. The authors offer a number of examples of
higher-level functionality to which the end-to-end argument applies:
reliable data transmission, guaranteed delivery, secure transmission,
elimination of duplicates, in-order delivery of packets, and transaction
management. The authors make an interesting point that identifying the
appropriate end points in a given application may be subtle. They offer the
example of voice data being transferred across a packet network. If the
application is a real-time conversation, the end points are people: if some
part of a voice transmission is garbled, the recipient can request a
higher-level retransmission with something akin to "could you please repeat
that?". However, if the application is the transmission of a voice message
to a person's voice mail, there is no opportunity for this high-level
retransmission, and the lower layers have a greater responsibility to
provide accurate data transmission. The paper ends with a discussion of how
the end-to-end argument applies in areas other than network communication,
such as OS and CPU design.
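To make the core argument concrete, here is a minimal sketch (my own, not
from the paper) of the reliable file transfer example in Python. The
function and parameter names are hypothetical; the point is that the
checksum comparison performed at the end hosts is what guarantees a correct
copy, while any per-hop reliability merely reduces how often a retry fires.

    import hashlib

    def transfer_with_end_to_end_check(data, channel, max_retries=3):
        # Sender side: compute a checksum over the original bytes before
        # handing them to the (possibly unreliable) transfer mechanism.
        digest = hashlib.sha256(data).hexdigest()
        for attempt in range(max_retries):
            # 'channel' stands in for the gateways, buffers, and disks in
            # between, any of which may corrupt the data.
            received = channel(data)
            # Receiver side: the end-to-end check. Only this comparison
            # confirms correctness; reliability at intermediate hops just
            # makes reaching this retry less likely.
            if hashlib.sha256(received).hexdigest() == digest:
                return received
        raise IOError("transfer failed the end-to-end check")

    # Usage: an identity function stands in for a well-behaved network.
    result = transfer_with_end_to_end_check(b"example file contents",
                                            lambda b: b)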

My concern with the paper is that the writing sometimes suggests that
application writers need to take on some of this higher-level functionality
themselves. I presume the authors mean writers of network stacks, not end
users of network interfaces such as socket programmers. The latter reading
is quite unworkable, because abstraction is a fundamental principle in
computer science that allows us to make progress without constantly
reinventing the wheel. The former reading makes more sense, but it is still
non-trivial to re-implement the same functionality over and over again on a
variety of end hosts. In fact, this poses significant problems for
pervasive devices, which are designed to be cheap and are thus typically
resource constrained.

Also, I'd like to know exactly what fault caused one in a million bytes to
get flipped. I buy the overall message, but I have reservations about
anecdotes in scientific publications.