
    Re: Three states for a binary bit (was Re: TCP (and SCTP) sucks on high speed networks)



    At 03:02 PM 12/5/00 -0800, Alhussein Abouzeid wrote:
    >The shared resources are in the network (capacity, buffers, etc.), not the
    >sources. Of course, the sources can go a long way in relieving network
    >congestion. But first, they need to know about it. In the absence of any
    >signaling from the network (other than dropping packets when congestion
    >happens) and in the presence of random packet errors, I don't see how
    >"intelligent" source algorithms can make clearly "intelligent" decisions.
    
    Well, this is almost in agreement with my point that the network can detect 
    congestion.  However, it misses my point in two ways:
    
    1. Saying that the overcommitted resources are "in the network" has nothing 
    to do with where the causation is or where the remedy should be.  Might 
    just as well say that a highway causes traffic jams, and require a freeway 
    interchange to unclog itself by interrogating drivers about their home and 
    work addresses.
    
    2. Congestion is only one of several possible symptoms that arise as a 
    result of overloading capacity or misbehaving capacity managers.  The 
    problem is to get the sources and the resource provisioning back into 
    balance.  Congestion can arise from such things as transients in the 
    routing tables that persist for minutes (in which case the sources are 
    "blameless", but the sources need to respond anyway for their own 
    good).  Congestion can be transient, even though the sources have already 
    adjusted their behavior to compensate.  Congestion can be observed on flows 
    that have nothing to do with the flows that "caused" the congestion, which 
    may be distant in space or time.
    
    
    > > It's lazy and arrogant to presume that the network designer knows what the
    > > users need in a resource allocation algorithm, such as managing 
    > "congestion".
    >
    >In my humble opinion, it's not a good practice to call the designers of
    >existing network algorithms, or anybody for that matter, lazy or arrogant.
    
    I'm sorry if the words "lazy" and "arrogant" were felt as personal attacks. 
    I wasn't attacking designers of any existing network algorithms, but 
    instead a line of reasoning being espoused in this thread to incorporate a 
    variety of proposed features into the network.  In particular, I have no 
    problem with ECN as a means of detecting and notifying the existence of 
    congestion, as I have said a number of times on this list.
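    To make the ECN point concrete, here is a minimal sketch (hypothetical 
    names and simplified arithmetic, not from this thread) of the division of 
    labor ECN implies: the network only marks packets to signal congestion, 
    and the endpoint decides how to respond - here, a TCP-like sender treating 
    an echoed mark the way it would treat a loss, with no router ever guessing 
    the application's intent:

    ```python
    def react_to_ack(cwnd, ece_flag, min_cwnd=1):
        """Return the new congestion window after processing one ACK.

        ece_flag -- True if the ACK carries an ECN-Echo, i.e. the
        network marked a packet to report congestion.
        """
        if ece_flag:
            # Multiplicative decrease, just as a packet loss would cause.
            return max(min_cwnd, cwnd // 2)
        # Otherwise grow additively (congestion avoidance).
        return cwnd + 1

    # A short run: steady growth, then one congestion mark.
    cwnd = 10
    for mark in [False, False, True, False]:
        cwnd = react_to_ack(cwnd, mark)
    print(cwnd)  # 10 -> 11 -> 12 -> 6 -> 7
    ```

    The network's role ends at the mark; everything about *how* to back off 
    stays at the edge, which is the separation of concerns argued for above.
    
    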
    
    I do, however, strongly insist that network designers should not assume in 
    their designs that they can know the intentions of applications using the 
    network, absent explicit information being provided into the network.  For 
    resource allocation among competing uses, that means that the competing 
    resource requirements must be inferable from information actually provided 
    to the network.  Similarly, I think designers should be required to work 
    hard to demonstrate that edge-based (or at-the-endpoints, 
    at-the-application-level) means cannot be used to resolve resource 
    conflicts that are ultimately caused at the edges.  A common definition of 
    "arrogance" is "presuming to know others' intentions", and "laziness" is 
    "not working hard".
    
    
    >It is well known by now that one cannot rely on the
    >sources ALONE to avoid congestion - it might have worked in the old days
    >when there were a dozen "nice" people using the
    >Internet. Non-cooperative game theory has not yet
    >provided a solution for this problem (if it has, please provide
    >a pointer).
    
    The idea that I am invoking a "good old days" of benevolent users is a 
    straw man of your creation, not mine.  Bad actors have always existed, but 
    so what?  The sources' benevolence or malevolence of purpose has nothing 
    to do with my point.  The decision to allocate resources to distinct uses 
    (flows, sources, or whatever) is the means to control overload.  
    "Malevolent" uses could be denied access to originate packets, or just 
    charged for the costs they impose.  But it is only at the level of the 
    application that one can distinguish malevolence from legitimate demands 
    for resources.  (Unless you propose mind-reading routers - but if routers 
    could know what traffic is legitimate, you might as well not have a 
    network at all, and let the destinations use precognition to bypass the 
    need for transport entirely.)
    
    I suppose that the existence of evil hackers also implies that routers 
    should be the primary defense against viruses, worms, and trojan horses. :-)
    
    
    - David
    --------------------------------------------
    WWW Page: http://www.reed.com/dpr.html
    
    
    

