
    RE: ISCSI: draft-wakeley-iscsi-msgbndry-00.txt



    This appears to have crossed onto somewhat shaky ground.
    RFC 793 does specify the coalescing, and hence fiddling
    with that should not be the first choice option.  Somesh
    is right when he recommends "very careful investigation";
    figure out what the common scenarios are that are important
    to optimize and see how well the urgent pointer works
    without modification, even if its value is not always
    optimal for the situation.  In general, TCP implementations
    don't react well to multiple drops, so single drop
    scenarios are the right place to start looking.
     
    --David
    
    ---------------------------------------------------
    David L. Black, Senior Technologist
    EMC Corporation, 42 South St., Hopkinton, MA  01748
    +1 (508) 435-1000 x75140     FAX: +1 (508) 497-8500
    black_david@emc.com       Mobile: +1 (978) 394-7754
    ---------------------------------------------------
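The coalescing that RFC 793 specifies can be sketched roughly as follows. This is a minimal illustration, not a real stack: the class and variable names (`UrgentTracker`, `rcv_up`) are made up for the example, modeled on the connection-wide RCV.UP variable in RFC 793's receive processing, where the stack keeps only one urgent pointer value per connection and updates it to the maximum seen.

```python
# Illustrative sketch only: how a receive-side TCP tracks ONE
# connection-wide urgent pointer, so that when several segments with
# the URG bit set arrive before the application reads, the earlier
# boundaries are "coalesced" away and only the last value survives.

class UrgentTracker:
    """Tracks the receive-side urgent pointer for one connection."""

    def __init__(self):
        self.rcv_up = None  # connection-wide: a single value, not a queue

    def on_segment(self, seg_seq, seg_up):
        """Process a segment with URG set.

        seg_seq is the segment's sequence number; seg_up is the urgent
        pointer offset carried in its header.
        """
        absolute = seg_seq + seg_up
        # RFC 793 receive processing updates RCV.UP to the maximum of
        # its current value and the incoming pointer.
        if self.rcv_up is None or absolute > self.rcv_up:
            self.rcv_up = absolute
        return self.rcv_up


t = UrgentTracker()
t.on_segment(1000, 48)   # first boundary, at sequence number 1048
t.on_segment(2000, 48)   # second boundary arrives before the app reads
print(t.rcv_up)          # 2048: the earlier boundary (1048) is lost
```

This is why a single drop per window is the interesting case: with at most one outstanding urgent pointer at a time, nothing gets coalesced.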
    
    > -----Original Message-----
    > From:	Matt Wakeley [SMTP:matt_wakeley@agilent.com]
    > Sent:	Friday, September 29, 2000 9:17 PM
    > To:	GUPTA,SOMESH (HP-Cupertino,ex1); IPS Reflector
    > Subject:	Re: ISCSI: draft-wakeley-iscsi-msgbndry-00.txt
    > 
    > Somesh,
    > 
    > What you describe is how current TCP stacks typically operate when
    > they know nothing about the protocol running on top of them.  A
    > customized TCP stack that was closely coupled with the iSCSI
    > protocol running on top of it could provide the appropriate urgent
    > pointer in each segment that is (re)transmitted, without
    > "coalescing" the urgent data.  Likewise, a customized TCP inbound
    > path that was closely coupled with the iSCSI layer could pass the
    > urgent information up without "coalescing" the urgent data.  This
    > could be done without any modification to, or violation of, the
    > current TCP specifications.
    > 
    > -Matt
    > 
    > "GUPTA,SOMESH (HP-Cupertino,ex1)" wrote:
    > 
    > > Matt,
    > >
    > > Use of the urgent pointer requires very careful investigation. If you
    > > look at the TCP RFC, it mentions that the urgent pointer
    > > location is a connection-wide pointer, on both the receiver side
    > > and the sender side.
    > >
    > > From the sender side, even if TCP emits a TCP segment for every
    > > user send, you are not guaranteed the preservation of the urgent
    > > pointer - e.g.
    > >
    > > 1. Let us say that a retransmission timeout occurs and a number
    > >    of segments have to be retransmitted. In this case, based on
    > >    (my interpretation) of RFC 793, only the last value of the
    > >    urgent pointer is meaningful.
    > >
    > > 2. Let us say that user data gets queued up due to flow control
    > >    issues. Again this will cause the last value of the urgent
    > >    pointer to take effect.
    > >
    > > The second part will have an impact in the slow-start case.
    > > Further analysis is required, but it seems that in cases where
    > > packets are being dropped, the urgent pointer setting will not
    > > be optimal (just about when it is needed the most).
    > > The urgent pointer proposal may have a benefit when you are
    > > losing exactly one packet per window size (maybe even two
    > > with SACK).
    > >
    > > On the receive side, again the same thing happens when multiple
    > > segments with urg pointer are received. Since it is a connection-
    > > wide pointer, only the last one should be provided to the user.
    > >
    > > If the data passes through "intermediate TCP devices" then
    > > the urgent pointer setting will again be suboptimal.
    > >
    > > I would urge some TCP expert to jump in here.
    > >
    > > Somesh
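The customized-stack idea discussed above can be sketched with a small helper. This is hypothetical code, not part of any draft or stack: the function name and parameters are invented for illustration. It computes the urgent pointer field a tightly coupled sender might place in a segment, given the iSCSI PDU boundaries in the byte stream, and shows the limit Somesh raises: one pointer per segment, so a retransmitted span covering two boundaries can signal only the last.

```python
# Hypothetical helper: for the segment covering sequence numbers
# [seg_seq, seg_seq + seg_len), return the URG flag and the 16-bit
# urgent pointer offset a protocol-aware sender could emit.  Multiple
# PDU boundaries inside one segment coalesce to the last one, since
# the header carries only a single pointer.

def urgent_pointer_for_segment(seg_seq, seg_len, pdu_boundaries):
    """Return (urg_flag, urg_ptr) for one outgoing segment.

    pdu_boundaries are absolute sequence numbers of iSCSI PDU
    boundaries in the send stream.
    """
    in_segment = [b for b in pdu_boundaries
                  if seg_seq <= b < seg_seq + seg_len]
    if not in_segment:
        return (False, 0)
    last = max(in_segment)          # earlier boundaries in-span are lost
    return (True, last - seg_seq)   # offset of the boundary in this segment

# One boundary per segment: the pointer is preserved exactly.
print(urgent_pointer_for_segment(1000, 1460, [1048]))        # (True, 48)
# A retransmitted span covering two boundaries: only the last is signaled.
print(urgent_pointer_for_segment(1000, 2920, [1048, 2048]))  # (True, 1048)
```

The second call is the retransmission-timeout case from point 1 above: once several segments are resent as one span, per-PDU boundary information cannot be recovered from the urgent pointer alone.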
    

