
    Three states for a binary bit (was Re: TCP (and SCTP) sucks on highspeed networks)



    
    
    I think we are all struggling with an obvious limitation. We want a simple
    network architecture, one that either delivers ACKs or loses them, to tell
    us which of three possible events has occurred: "random error", "congestion
    error", or "all is good". Until we are willing to accept more refined
    algorithms in the network (and accept the complexity as a necessary
    medicine), we should not hope for anything MUCH better than TCP's
    traditional merging of the first two states, no matter how much we tweak
    the additive increase/multiplicative decrease constants or make other
    slight protocol changes.
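
    To make this concrete, here is a minimal sketch (plain Python, purely
    illustrative; the helper name and the random-loss model are my own
    assumptions, not anything from a real stack): the sender's feedback is
    effectively one bit per window, so a packet corrupted on a noisy link and
    a packet dropped at a congested queue trigger exactly the same
    multiplicative cut.

        import random

        def aimd_step(cwnd, loss, a=1.0, b=0.5):
            """One RTT of additive increase / multiplicative decrease.

            'loss' is a single bit: the sender cannot tell a random bit
            error from a congestion drop, so both collapse into the same
            multiplicative decrease.
            """
            if loss:
                return max(1.0, cwnd * b)   # cut the window, whatever the cause
            return cwnd + a                 # otherwise probe upward by a per RTT

        # Toy run: random per-packet corruption only, no congestion at all.
        random.seed(1)
        p, cwnd, samples = 1e-5, 1.0, []
        for _ in range(200_000):                            # RTTs
            lost = random.random() < 1.0 - (1.0 - p) ** cwnd
            cwnd = aimd_step(cwnd, lost)
            samples.append(cwnd)
        print(sum(samples) / len(samples))   # settles in the hundreds of packets,
                                             # far below a 10 Gb/s x 100 ms pipe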
    
    I guess we should accept the fact that times change, and that the need for
    a more refined TCP (and more refined routers) is long overdue. Discussions
    of TCP's proven stability in the past do not shed any light on its
    efficiency in the future. More importantly, that history should not become
    an evolutionary obstacle.
    
    A humble opinion of an Internet newbie!
    
    Hussein.
    
    
    On Mon, 4 Dec 2000, Panos GEVROS wrote:
    
    > 
    > "Dr. Carsten Bormann" writes:
    >  |> TCP's "congestion avoidance" algorithms are not compatible with
    >  |> high speed,
    >  |> long distance networks.  The "cut transmit rate in half on packet loss and
    >  |> increase the rate additively" algorithm will simply not work.
    >  |
    >  |I don't believe this is just a matter of algorithm.
    >  |The problem is really the dynamic range of the rate adaptation equation.
    > 
    > 
    > It is not a matter of algorithm, but of operating range / choice of
    > control parameters.
    > We always take b=0.5 for granted. The choice of 0.5 for the AIMD
    > multiplicative decrease factor b is a wise one (a compromise driven by
    > social considerations) for TCP, since it allows trade-offs across a wide
    > range of traffic conditions (number of flows sharing the congested link).
    > However, for a given degree of statistical multiplexing on a link (and
    > reasonably long-lived flows), the AIMD "b" is the factor that governs
    > utilisation/performance; the additive increase factor "a" simply is not
    > strong enough to make a difference (compared with a less
    > congestion-sensitive implementation, i.e. a larger b).
    > 
    > For transfers on the same subnet, when congestion avoidance/control is
    > "turned off", one gets the highest possible throughput.
    > It would be interesting to see the limiting factors on TCP performance in
    > such a case: disk access, copy operations, transfer to the network
    > interface memory, etc. (any pointer to reports/papers discussing TCP
    > performance decoupled from congestion control :-) would be much
    > appreciated)
    > 
    > regards,
    > Panos
    > 
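
    On the point above that the decrease factor "b", not the increase factor
    "a", governs utilisation for long-lived flows: a quick back-of-the-envelope
    sketch. The helper functions and numbers below are illustrative assumptions
    only (a single flow, exactly one loss per sawtooth cycle, 10 Gb/s and
    100 ms RTT), not anything measured or taken from the thread.

        def utilisation(b):
            """Average window over one sawtooth cycle is W*(1+b)/2, so a single
            flow fills roughly (1+b)/2 of the bottleneck, independent of a."""
            return (1.0 + b) / 2.0

        def rtts_to_recover(bdp_packets, a=1.0, b=0.5):
            """RTTs needed to climb back from b*W to W at +a packets per RTT."""
            return (1.0 - b) * bdp_packets / a

        bdp = 10e9 * 0.100 / (1500 * 8)            # 10 Gb/s, 100 ms, 1500-byte packets
        print(utilisation(0.5))                    # 0.75: b caps the utilisation
        print(utilisation(0.875))                  # ~0.94: a gentler decrease helps
        print(rtts_to_recover(bdp) * 0.100 / 60)   # ~69 minutes to refill after one loss

    The (1+b)/2 bound does not involve "a" at all, while the recovery time
    after a single loss grows linearly with the bandwidth-delay product for any
    fixed "a"; that is exactly why tweaking the additive increase alone cannot
    rescue TCP on long, fast paths.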
    
    

