Re: iSCSI Urgent Pointer

John,

It seems that the focus of these discussions is entirely on implementing the whole of iSCSI/TCP in silicon. While there are some who undoubtedly wish to do so, even in your groups labelled E and T I expect more implementations to be decoupled than integrated. Before I get a horde jumping down my throat, let me state that in these environments I do fully expect TCP to be implemented in silicon, either as an offload NIC or perhaps VI, and that in not too long a time frame this will also be cheap enough for group L. All I am saying is that there will be many more software implementations of the iSCSI layer than hardware implementations.

Up until now, the focus of discussion has been on using the URG flag as a record marker that tightly coupled hardware iSCSI/TCP implementations can synchronize on for incoming data. Maybe I am being simplistic, but look at the impact this has on the sending side in order to achieve it. It means that for every iSCSI PDU I have to call send twice, that is:

    void send_iSCSI(int sock, char *buf, int len)
    {
        send(sock, buf, 1, MSG_OOB);
        send(sock, buf + 1, len - 1, 0);
    }

I may be wrong, but I thought that a send with the URG flag set would go out immediately; there will be no coalescing with the rest of the data about to come down the stack. Thus I will generally have a 1-byte TCP packet which, because it is marked as urgent, will get individually ACKed.

Even if I am wrong about the URG behaviour, I fail to see why the majority of implementations must be saddled with the extra inefficiency in order to meet the wishes of a minority, if vocal, set of implementors. Please reword that paragraph and strike the word MUST. (A fuller sender-side sketch of this pattern appears after the quoted message below.)

Mark

At 02:16 AM 11/15/2000 -0800, John Hufferd/San Jose/IBM wrote:
>I have been watching this discussion for some time now. The following is
>what I glean from the discussions.
>
>First, we must talk about many sessions that come to the same target. Many
>of the messages on the reflector take a SINGLE-session view, and make
>arguments that say things like "it will slow down anyway during a
>recovery, so why do the Urgent Ptr". Instead, you need to talk about
>the case where the NIC is handling many sessions. That is what tends to
>show the need for something to limit the impact on the NIC cost while
>permitting the multi-session 1-10 Gigabit links to operate at high speed
>and at long distances.
>
>Second, in a many-session environment, if there is recovery being done
>for one session -- in cases where a high-speed link is used over a
>significant distance -- then the impact on that one session will greatly
>affect others. This is because the NIC must queue the partial data while
>it is being recovered, and this can be a lot of data.
>
>Third, assuming that an Urgent Pointer can be found, it means that only the
>data before the Urgent Pointer needs to be queued on the NIC, and
>processing can continue with the other work, without reducing the TCP/IP
>window.
>
>Fourth, I do not think that the implementations envisioned will be dropping
>the data; instead they will be queueing it. And so it will not be making
>the congestion any worse in the network. The sooner the Urgent Ptr can be
>found, the less memory is needed to hold the data on the card (while recovery
>for missing data is being done). This means that there will be a lower
>probability that a slowdown will be needed for all the Sessions.
>
>If what I have stated in the four points above is true, then the need and
>the solution seem to be a compelling match. Matt, please check the above
>and let me know if I got it right.
>
>Now to the point about Must, Should, Will, Might..... I think the word
>Should is appropriate. I think we have three groups of folks to worry
>about: Group L, Group E, and Group T. Group L is the low-end (but
>probably high-volume) desktop and laptop systems that will be using first
>10/100 and over time 1000baseT connections. Many of these systems will use
>normal SW TCP/IP stacks and will not be able to take advantage of the
>Urgent Pointer. I do not think the extra overhead will be missed at all
>in this environment. On the other hand, the E group (the Enterprise-class
>servers) will need the fastest cards possible. Likewise the T group
>(targets) will need to have as little overhead as possible, yet handle
>requests at the very highest speed. All of these groups (L, E, and T)
>need to have their requirements met in order for iSCSI to be
>successful. And if we think that iSCSI might have a chance to be used
>instead of FC, you must be concerned with the E and T groups being happy.
>
>Therefore, every implementation SHOULD use the Urgent Pointer. Everyone
>wins when this works, even the desktops, because even their 10/100
>connections get trunked together with others into a very high-bandwidth
>fiber that needs to enter the target and be serviced at very high speed.
>Hence the HW, its optimization, and the lowest appropriate cost are KEY
>to everyone.
>
>.
>.
>.
>John L. Hufferd
>Senior Technical Staff Member (STSM)
>IBM/SSG San Jose Ca
>(408) 256-0403, Tie: 276-0403
>Internet address: hufferd@us.ibm.com
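[Editorial note: the following is a minimal sketch of the sender-side pattern discussed above, not code from either message. The iscsi_send_pdu() helper, the socket handling, and the assumption that the first byte of every PDU carries the urgent mark are illustrative only; whether the 1-byte MSG_OOB segment is pushed immediately or coalesced varies by TCP stack, which is exactly the behaviour Mark is asking about.]

    /* Sketch only: one possible shape of the "URG as record marker"
     * scheme.  Assumes a connected TCP socket and that the first byte
     * of each iSCSI PDU is flagged urgent so a receiving NIC could
     * resynchronize on PDU boundaries. */
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <errno.h>
    #include <stddef.h>

    /* Hypothetical helper: send one PDU, marking its first byte urgent. */
    static int iscsi_send_pdu(int sock, const char *pdu, size_t len)
    {
        if (len == 0)
            return 0;

        /* First byte goes out as TCP urgent (out-of-band) data; this is
         * the extra send() call, and possibly an extra 1-byte segment on
         * the wire, that every sender pays per PDU under this scheme. */
        if (send(sock, pdu, 1, MSG_OOB) != 1)
            return -1;

        /* Remainder of the PDU is sent as ordinary in-band data. */
        size_t off = 1;
        while (off < len) {
            ssize_t n = send(sock, pdu + off, len - off, 0);
            if (n < 0) {
                if (errno == EINTR)
                    continue;
                return -1;
            }
            off += (size_t)n;
        }
        return 0;
    }

A software receiver can locate the urgent mark with the standard sockatmark()/SIOCATMARK facilities, but as Mark argues, a software iSCSI layer gains little from the marker while every sender still pays the per-PDU cost shown above.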