Network congestion control
In a shared network such as the Internet, end systems should react to congestion by adapting their transmission rates, both to avoid congestion collapse and to keep network utilization high. The robustness of the current Internet is due in large part to the end-to-end congestion control mechanisms of TCP. Although TCP congestion control is appropriate for applications such as bulk data transfer, many new applications would find TCP's behavior of halving a flow's sending rate too severe a response to a single congestion indication, since it can noticeably reduce the flow's user-perceived quality. However, because the dominant Internet traffic is TCP-based, it is important that new congestion control schemes be TCP-friendly: the sending rate of a non-TCP flow should be approximately the same as that of a TCP flow under the same conditions of round-trip time and packet loss.

The first problem I investigate is a more general version of AIMD (GAIMD) than the one implemented in TCP: the sender's window size is increased by alpha if there is no packet loss in a round-trip time, and decreased to beta of its current value upon a triple-duplicate loss indication, where alpha and beta are parameters. Using the relationship between the sending rate of a GAIMD flow and these two parameters, I derive a simple relationship between alpha and beta under which a GAIMD flow is TCP-friendly. This relationship offers a wide selection of alpha and beta values for achieving desired transient behaviors.

The second problem I investigate is the fairness, smoothness, responsiveness, and aggressiveness of TCP, GAIMD, and two other representative TCP-friendly congestion control protocols. These properties are evaluated both analytically and via simulation by studying the protocols' responses to three network changes.
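The GAIMD window adjustment described for the first problem can be sketched as follows. The closed-form alpha-beta relationship used here is an assumption for illustration (the form published for GAIMD, which recovers TCP's alpha = 1 at beta = 1/2); the abstract itself only states that such a relationship is derived.

```python
def tcp_friendly_alpha(beta):
    """Increase parameter alpha that keeps a GAIMD flow TCP-friendly.

    Assumption for illustration: alpha = 4(1 - beta^2) / 3, which yields
    TCP's alpha = 1 at beta = 1/2. This closed form is not quoted from
    the abstract.
    """
    return 4.0 * (1.0 - beta ** 2) / 3.0


class GaimdWindow:
    """Per-flow congestion window under the GAIMD rules described above."""

    def __init__(self, alpha, beta, cwnd=1.0):
        self.alpha = alpha  # additive increase per loss-free round-trip time
        self.beta = beta    # multiplicative decrease factor
        self.cwnd = cwnd    # congestion window, in packets

    def on_rtt_without_loss(self):
        # No loss indication in this RTT: grow the window by alpha.
        self.cwnd += self.alpha

    def on_triple_duplicate_ack(self):
        # Triple-duplicate loss indication: cut the window to beta of itself.
        self.cwnd *= self.beta
```

With (alpha, beta) = (1, 1/2) this reduces to TCP's own AIMD; a smoother decrease such as beta = 7/8 calls for alpha = 0.3125 under the assumed relationship, trading responsiveness for a steadier user-perceived rate.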
Considering the inherent rate fluctuations in a stationary network environment, I define three types of sending rate variation and derive an analytical expression for the coefficient of variation (CoV) of each of the four protocols. I also study protocol responsiveness and aggressiveness by evaluating each protocol's response to a step increase in network congestion and a step increase in available bandwidth.

The third problem I investigate is congestion control in multicast environments. A multicast session may have a large number of receivers with heterogeneous reception capacities, determined by the fairness requirements of the network or by device capabilities. To accommodate this heterogeneity, various multi-rate schemes based on layering or replication have been proposed. For a general class of receiver utility functions, I show that there exists an optimal partition of the receivers that is ordered, which gives rise to efficient dynamic-programming algorithms for finding an optimal partition. I also give algorithms to efficiently determine the optimal sending rate for each group of the partition. Furthermore, for several typical distributions of receiver capacities, I show that the majority of the benefit of a multi-rate scheme can be obtained with a small number of layers (or groups).
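The ordered-partition result makes the optimization tractable: once receivers are sorted by capacity, only contiguous groupings need to be considered, so dynamic programming applies. The sketch below assumes, for illustration only, a linear utility (each receiver's utility equals the rate it receives) and a group rate limited by the group's slowest member; the abstract treats a general class of utility functions.

```python
def best_ordered_partition(capacities, max_groups):
    """Max total utility of splitting receivers into at most max_groups groups.

    Illustrative assumptions (not fixed by the abstract): a receiver's
    utility is the rate it receives, and a group's sending rate is the
    capacity of its slowest member. Since an ordered (contiguous in sorted
    order) optimal partition exists, a DP over sorted capacities suffices.
    """
    c = sorted(capacities)
    n = len(c)
    NEG = float("-inf")
    # best[g][j] = max utility covering the j slowest receivers with g groups.
    best = [[NEG] * (n + 1) for _ in range(max_groups + 1)]
    best[0][0] = 0.0
    for g in range(1, max_groups + 1):
        for j in range(1, n + 1):
            for i in range(g - 1, j):
                if best[g - 1][i] == NEG:
                    continue
                # Last group covers receivers i..j-1 in sorted order;
                # its rate is c[i], the smallest capacity in the group.
                cand = best[g - 1][i] + c[i] * (j - i)
                if cand > best[g][j]:
                    best[g][j] = cand
    return max(best[g][n] for g in range(1, max_groups + 1))
```

For capacities [1, 2, 4, 4] and two groups, the best ordered partition is {1, 2} at rate 1 and {4, 4} at rate 4, for total utility 10; a single group is forced down to rate 1 and utility 4, illustrating how a few extra layers capture most of the multi-rate benefit.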