RE: Urgent as Framing Hint?

Matt,

How is the Urgent Pointer defined if the OS wishes to update across rows in a RAID, where each update is a 64 KB block of data, causing more than 64 KB to be placed within the send buffer? In that case, overflow of the urgent pointer is not defined. The urgent pointer could simply point somewhere into the next segment, where that next segment redefines the pointer into the next, and the next. Because the urgent pointer is specified to coalesce at both the send and the receive side, you could not be sure of the urgent pointer even when it points within the segment in which it is defined. As your concern is with high levels of bandwidth, I expect the sender will very seldom place an urgent pointer within the segment that carries it, since several segments will normally be pending on the send. The TCP specification requires the urgent flag to be set even if the pointer is out of range.

Pages 41-42 of TCP (RFC 793):

   "This mechanism permits a point in the data stream to be designated as
   the end of urgent information.  Whenever this point is in advance of
   the receive sequence number (RCV.NXT) at the receiving TCP, that TCP
   must tell the user to go into "urgent mode"; when the receive sequence
   number catches up to the urgent pointer, the TCP must tell user to go
   into "normal mode".  If the urgent pointer is updated while the user
   is in "urgent mode", the update will be invisible to the user.

   The method employs a urgent field which is carried in all segments
   transmitted.  The URG control flag indicates that the urgent field is
   meaningful and must be added to the segment sequence number to yield
   the urgent pointer.  The absence of this flag indicates that there is
   no urgent data outstanding."

Page 48:

   "Note that data following the urgent pointer (non-urgent data) cannot
   be delivered to the user in the same buffer with preceding urgent
   data unless the boundary is clearly marked for the user."
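A minimal sketch of the arithmetic behind the overflow concern above, assuming a 1460-byte MSS and the 64 KB update described (the sequence numbers and helper names are illustrative, not from RFC 793): the urgent field is a 16-bit offset added to the segment's sequence number, so with 64 KB of pending data it can both point past the segment that carries it and exceed the field's range.

```python
# Sketch of RFC 793 urgent-field arithmetic (illustrative values).
# The 16-bit urgent field is an offset added to the segment's sequence
# number to yield the urgent pointer, so it may point well beyond the
# payload of the segment that carries it.

MSS = 1460  # assumed maximum segment size

def urgent_field(seg_seq: int, urgent_seq: int) -> int:
    """Value carried in a segment's urgent field (URG flag set)."""
    offset = urgent_seq - seg_seq
    if not 0 <= offset <= 0xFFFF:
        raise OverflowError("urgent offset does not fit the 16-bit field")
    return offset

def points_within_segment(seg_seq: int, seg_len: int, urgent_seq: int) -> bool:
    """Does the urgent pointer fall inside this segment's payload?"""
    return seg_seq <= urgent_seq < seg_seq + seg_len

# 64 KB of pending data; the urgent point marks the end of the update.
send_una = 1000                      # first unacknowledged sequence number
urgent_seq = send_una + 64 * 1024    # urgent point at end of the 64 KB block

# The urgent point is far past the first MSS-sized segment's payload:
print(points_within_segment(send_una, MSS, urgent_seq))   # False

# And the required offset, 65536, exceeds the 16-bit field (max 65535):
try:
    urgent_field(send_una, urgent_seq)
except OverflowError as e:
    print(e)
```

With more than 65535 bytes of data queued past the segment's sequence number, the offset simply cannot be represented, which is the undefined overflow case raised above.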
Page 48 of the specification indicates that special data handling is required at the transition between urgent and non-urgent data. For a standard TCP stack, these exceptions are likely handled outside of the normal path. So this scheme is not good for a standard TCP stack, not defined for typical use, and only useful for modified versions of TCP.

Doug

> "Randall R. Stewart" wrote:
>
> > I am not sure what you mean by "work to a degree". I am quite sure,
> > after looking at TCP and hearing feedback from David Reed, that in
> > ALL TCP implementations your idea will work a lot of the time. But
> > I am also just as sure that your idea will NOT work when faced with
> > more than one packet loss.
> >
> > If working with only single packet losses is what you had in mind,
> > then I am sure it will "work to a degree" right now.
>
> This is exactly the case I wanted to handle.
>
> > The real question is: do you want a solution that will break under
> > heavy load with multiple packet losses?
>
> In this case, performance is going to suffer from all the packet losses
> anyway, so it would be OK in such circumstances to switch back to the
> standard TCP recovery algorithm.
>
> > I currently prefer the "magic sequence" proposal where you have
> > a special escape sequence you can look for inside the data stream.
> > I am not sure that this is manageable for 10Gb data streams, since
> > it will involve a lot of horsepower to do it.. but so far it is the
> > only solution I can see that works reliably (if you have enough CPU)...
>
> This would be real easy to do in hardware. But it will really kill the
> software implementations.
>
> -Matt
>
> > R
> > --
> > Randall R. Stewart
> > randall@stewart.chicago.il.us or rrs@cisco.com
> > 815-342-5222 (cell)  815-477-2127 (work)
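The "magic sequence" proposal quoted above can be sketched as a stream scanner. This is an illustration of the general technique, not anyone's actual implementation: the marker bytes (`MAGIC`) and class name are hypothetical, and the key detail is carrying the tail of each segment forward so a marker split across segment boundaries is still found, which is part of why it costs CPU in software.

```python
# Sketch of the "magic sequence" framing idea: scan a TCP byte stream
# for a marker that may straddle segment boundaries.  The scanner keeps
# the last len(MAGIC)-1 bytes of each segment as carry-over so a split
# marker is still detected on the next segment.

MAGIC = b"\xde\xad\xbe\xef"  # hypothetical escape sequence

class MagicScanner:
    def __init__(self, magic: bytes = MAGIC):
        self.magic = magic
        self.carry = b""      # tail of the previous segment
        self.offset = 0       # stream offset of self.carry[0]

    def feed(self, segment: bytes) -> list:
        """Return stream offsets where a complete marker begins."""
        buf = self.carry + segment
        hits, start = [], 0
        while True:
            i = buf.find(self.magic, start)
            if i == -1:
                break
            hits.append(self.offset + i)
            start = i + 1
        # Keep only the tail that could begin a split marker.
        keep = len(self.magic) - 1
        self.offset += max(0, len(buf) - keep)
        self.carry = buf[-keep:] if len(buf) > keep else buf
        return hits

s = MagicScanner()
print(s.feed(b"data...\xde\xad"))     # [] -- marker split across segments
print(s.feed(b"\xbe\xefmore data"))   # [7] -- found once completed
```

Because the carry is always shorter than the marker, a marker is reported exactly once, by the segment that completes it. Every payload byte still has to pass through the search, which is the per-byte cost Randall questions for 10 Gb streams and Matt notes is cheap in hardware but painful in software.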
Last updated: Tue Sep 04 01:06:13 2001