This paper describes the history of constructing the basics of the Internet. At the core of the Internet lies a minimal set of basic principles that allows for flexibility in the services the Internet supports. These principles include transmission of data in datagram form, reliable but not perfect delivery, and an addressing system. Clark describes several goals of the Internet in the order of importance they had as it was developed, since its primary purpose was military. The top three goals, described in detail, are: the Internet continues to perform despite local failures in gateways, it supports all types of services, and it works over a variety of networks.
The success of the Internet was due to the simplicity of its core model. To support all services, TCP alone was not enough, since in some applications guaranteed sequential delivery is less important than fast, real-time delivery. Thus TCP and IP were separated: TCP provides reliable, in-order packet delivery, while IP provides the basic building block for all other services that could run on the Internet. It was up to the designer of a service to architect it out of those building blocks, with the datagram at the core. "The hope was that multiple types of service could be constructed out of the basic datagram building block using algorithms within the host and the gateway."
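To make the TCP/IP separation concrete, here is a minimal sketch (my own illustration, not code from the paper) of how an application designer chooses between the two kinds of service using standard sockets; the host, port, and function names are just placeholders.

    import socket

    def reliable_send(host, port, payload):
        # TCP: the transport provides reliable, in-order delivery on top of IP.
        with socket.create_connection((host, port)) as conn:
            conn.sendall(payload)

    def best_effort_send(host, port, payload):
        # UDP: a thin wrapper over the IP datagram; the packet is sent with no
        # delivery or ordering guarantee, which suits real-time traffic where
        # timeliness matters more than completeness.
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.sendto(payload, (host, port))

A real-time audio sender, for example, would pick best_effort_send and tolerate the occasional lost datagram rather than wait for a retransmission, while a file transfer would pick reliable_send.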
The datagram provided several good features: it eliminated the need for connection state within the intermediate switching nodes, so that the Internet could be reconstituted after a failure without concern for that state, and it represented the minimum network service assumption. However, building everything on the datagram made it hard to account for the resources used in the Internet, which was one of the goals when architecting the network. Since the datagram carries no notion of a "flow," it is hard to attribute resource usage to a network connection. Clark suggests that, to address this problem, there must be another building block besides the datagram that captures this "flow" concept.
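As a toy illustration of this point (my own, not from the paper): forwarding a datagram needs nothing beyond the packet itself and a routing table, while accounting forces a gateway to keep state that groups packets into flows.

    from collections import defaultdict

    ROUTING_TABLE = {"10.": "if0", "": "if1"}  # made-up prefixes and interfaces

    def forward(datagram):
        # Stateless: the decision depends only on this one datagram, so a
        # gateway can crash and restart without losing any connection state.
        for prefix, interface in ROUTING_TABLE.items():
            if datagram["dst"].startswith(prefix):
                return interface

    flow_bytes = defaultdict(int)

    def account(datagram):
        # Accounting needs state that persists across datagrams: packets are
        # grouped into a "flow" (here crudely keyed by the source/destination
        # pair) and their usage accumulated, which is exactly the per-flow
        # state that pure datagram forwarding avoids.
        flow_bytes[(datagram["src"], datagram["dst"])] += datagram["length"]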
This paper provided great historic context on the architecting of the early Internet. It gave a very good summary of the goals, the means, and the results, and illustrated the reasons for this architecture's success. It also showed a few weaknesses in the design, such as the routing problem and the resource accounting problem, which were not solved by the original TCP/IP implementation.
Sunday, August 31, 2008
Summary of 'End-to-End Arguments in System Design' by Saltzer, Reed and Clark
This paper describes the end-to-end argument, which states that the functionality of a network system should be pushed as close to the endpoints as possible, i.e. to the applications that use it, instead of placing much functionality at the lower levels of the system. If functions are placed at the lower levels of the network, all applications that use the network pay for those functions redundantly, even if they don't need them.
A simple example application Saltzer uses is careful file transfer. The task is simple: transfer file x from machine A to machine B correctly. There are several things that can go wrong, such as machine B receiving an incorrect file due to hardware faults along the way, host failure, mistakes in buffering on either end, or dropped or duplicated packets. These problems can be addressed by implementing the checksum functionality at the application level. This is more efficient, since the application knows best how to check for correctness, and it also reduces redundancy in the network's lower levels.
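Here is a minimal sketch of that end-to-end check (my own illustration; the paper describes the idea, not this code): the sending application computes a checksum over the file, and the receiving application recomputes it over what actually arrived, so corruption introduced anywhere along the way is caught at the endpoints.

    import hashlib

    def file_digest(path):
        # Checksum computed by the application over the file as it sits on
        # disk, so it covers disk errors, buffering mistakes, and network
        # corruption alike.
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def verify_transfer(received_path, expected_digest):
        # Receiver-side end-to-end check against the digest the sender
        # supplied alongside the file; on mismatch the remedy is simply to
        # request the transfer again.
        return file_digest(received_path) == expected_digest

Lower layers may still perform their own checks, but then only as a performance optimization, which is exactly the paper's point.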
There are a few other applications Saltzer cites as examples of the end-to-end argument, such as duplicate suppression, delivery guarantees, secure data transmission, and guaranteeing FIFO order in packet delivery.
This paper sets the basis for TCP/IP networks. Instead of the traditional encapsulation of functionality within the network itself, the authors argue for leaving the network as simple as possible and pushing the functionality to the ends (or as close to them as possible). In most cases this greatly reduces complexity and redundancy of operations. However, this principle does not work equally well for all applications. One example they use is voice data transfer: if the data needs to be transferred correctly and the application can tolerate some delay, then it does make sense to place the functionality of checking for correct packets lower in the system.
The question then arises of where to draw the line between simple networks with no built-in functionality and complex networks with functionality encapsulated at every level. Much experimentation, in particular applying the principle to various scenarios, would shed light on this issue. This is a very good paper; it makes a well-argued case for the basics of the network systems used today, and it is interesting to read from the primary source why and how these principles were invented and became widespread.