
Master Dissertation

Initial Remarks

The following computers were used for the tests:

The tested Linux kernel version was 2.6.10-rc1. The base distribution was Conectiva Linux version 10, which ships GCC 3.3.3 as its C compiler.

Applications used in the raw tests were entirely written by the author. Applications that implement real protocols (HTTP etc.) are third-party; only the SCTP adaptations were made by the author.

The following network simulation tools were used: the netem traffic discipline and the iptables firewall. Netem had significant bugs and limitations when the tests were run (November 2004), so iptables was needed to supply the missing features. The tc filter command (from the Linux traffic control and shaping subsystem) was also used to simulate several bandwidths.

Network bandwidth simulation was done by dropping packets on the receiving side. This technique was adopted instead of traffic shaping at transmission because a lost packet still consumes network bandwidth, which better simulates a real network, where congested intermediate routers drop packets.

The raw tests are essentially measures of throughput and latency.

Throughput is data volume per time unit. The throughput test consists of transmitting a fixed data volume (100 MBytes by default) in both directions simultaneously. The listed results are for one direction (for example, if the result was 200 Kbps, the total rate in both directions was 400 Kbps). We chose to express one-direction values because most network technologies are symmetric full-duplex (for instance, no one tries to sell Fast Ethernet as having 200 Mbps speed).

Latency is the average time elapsed between a client request and the corresponding server response in a client/server transaction. In some charts, this measured latency is called "transaction latency", to distinguish it from network latency, which is just the datagram transmission lapse.

The protocols "TCPM" and "UnixM" will appear in the tests. These are application protocols capable of message separation. SCTP can separate messages by itself, but TCP and UNIX sockets cannot. The message separation done by TCPM/UnixM is the simplest possible, just enough to meet the test requirements; any real application protocol would use a more robust and complex scheme.

The TCPM/UnixM protocols were created to improve fairness when comparing TCP and SCTP, since SCTP always separates messages. Moreover, in the latency tests, the server side must know when a client message has been completely received, so it can answer.

The SCTP test programs were created in three versions: TCP-style API, UDP-style API, and UDP-style API version 2 (this last one uses C++ and STL for multiple-client support, and is somewhat different in its internal logic).

No tuning of TCP or SCTP sysctl parameters was done by default. The tests that needed tuning specify which parameters were touched.

Except in the RTP tests, all SCTP tests used ordered and confirmed message delivery. It is possible to extract more performance using partial reliability, but that would be unfair to TCP, and unusable by most application protocols in use today, since they demand a perfectly confirmed and ordered transport.

Another technique that can deliver more performance on lossy channels is to use several streams, to avoid head-of-line ("HOL") blocking. It can be shown that this tends to be equivalent to allowing unordered delivery of messages, and again most application protocols cannot cope with it. We will use this technique only with HTTP, which can use one stream per file without changes to the core protocol.