
    Re: Three states for a binary bit (was Re: TCP (and SCTP) sucks on high speed networks)



    
    
    "David P. Reed" wrote:
    
    > Guys - the real problem is that you keep acting like congestion is a
    > property of the network.  It isn't.  It is a property of the sources'
    > behavior (trying to put 10 lbs of s*** in a 5 lb bag).  Temporary queueing
    > delay is only a symptom, or as a more precise term, an epiphenomenon.
    
    No.  Congestion is a state in one (or more) nodes, where offered load exceeds
    the resources available to service it.  Those are resources ***in the
    network***.  They belong to the network.  The hosts have no visibility into
    what's going on with those resources, other than whatever semantics are carried
    in any feedback signal the network sends.  And hosts have an incentive to
    cooperate only if they're made to (by some elaborate system of conformance
    testing and tamper-proof software -- not very likely), if consumers accept
    David's congestion pricing idea (my mind's eye can see the angry mob armed with
    torches and pitchforks), or if the understanding is that the network will police
    cooperation and drop non-conforming packets.
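
    To put that in concrete terms, here is a minimal sketch (Python, purely
    illustrative; the function, its names, and its units are mine) of congestion
    as a node-local state.  The backlog at a node grows exactly when offered
    load exceeds the service rate, whatever the sources believe about their own
    behavior:

        def queue_backlog(offered_load_pps, service_rate_pps, seconds):
            # Congestion as a node-local state: the backlog grows exactly
            # when arrivals outpace the node's capacity to service them.
            backlog = 0.0
            for _ in range(seconds):
                backlog += offered_load_pps                # arrivals this second
                backlog -= min(backlog, service_rate_pps)  # departures this second
            return backlog

        # Ten pounds into a five-pound bag: the backlog grows by 5 each
        # second, no matter how the sources rationalize their sending.
        print(queue_backlog(10, 5, 60))    # -> 300.0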
    
    >
    >
    > So before you call for "more complexity in the net", try thinking about
    > "more intelligence at the endpoints".  Only if you have given that serious
    > consideration, AND tried to deploy end-to-end solutions (which take several
    > years), should you dare to try to impose centralized and
    > application-ignorant solutions.
    
    The great strength of the End-to-End argument is that it made a case that
    functions that belong to the hosts should be done by the hosts (unless there's a
    good reason to do otherwise).  The great danger of the End-to-End argument is
    that it is used to make a case that functions that belong to the network should
    be done by the hosts (unless there's an overwhelming reason to do otherwise).
    
    I believe that what's needed is a rate-based congestion feedback signal that
    applications can use to reduce their rate by whatever means (dynamic shaping,
    variable-rate coding, timeshifting, whatever).  But if the application doesn't
    adjust its rate, the network should drop packets in proportion to the difference
    between the rate in the feedback signal and the actual rate.  The rate in the
    feedback signal should be established by whatever policy the network operator
    sets... fairness, price-weighted fairness, some kind of auction, whatever....
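
    For concreteness, here is a minimal sketch (Python, purely illustrative; the
    names are mine, and normalizing the excess by the actual rate to get a drop
    probability is one obvious choice, not the only one) of that kind of
    policing:

        import random

        def drop_probability(actual_rate, signalled_rate):
            # Drop in proportion to the difference between the actual rate
            # and the rate in the feedback signal.  Conforming flows pass.
            if actual_rate <= signalled_rate:
                return 0.0
            return (actual_rate - signalled_rate) / actual_rate

        def police_at_ingress(packet, actual_rate, signalled_rate):
            # Returns the packet if forwarded, None if dropped.
            if random.random() < drop_probability(actual_rate, signalled_rate):
                return None
            return packet

        # A source sending at 10 against a signalled 5 loses, on average,
        # half its packets -- exactly the non-conforming share.
        # drop_probability(10.0, 5.0) -> 0.5

    Note that the network stays application-ignorant here: it neither knows nor
    cares how the source gets back under the signalled rate, only that the
    excess gets dropped.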
    
    
    >
    >
    > It's lazy and arrogant to presume that the network designer knows what the
    > users need in a resource allocation algorithm, such as managing "congestion".
    > At most, the network can detect congestion.
    
    And it's [no, he says through clenched teeth, I'm not going to rise to the
    bait...] to presume that application designers know or care about allocation of
    resources ***that belong to the network***.  At most, applications can adjust
    their rates based on feedback signals from the network.
    
    And for a vivid understanding of why I'm so adamant that networks be able to
    police compliance by dropping at the ingress, read the "Security Considerations"
    section of RFC2309 (?) and think "Visual Basic Script".
    
    
    
    
    

