RE: iSCSI: Flow Control

Jim,

> -----Original Message-----
> From: owner-ips@ece.cmu.edu [mailto:owner-ips@ece.cmu.edu]On Behalf Of
> Jim McGrath
> Sent: Thursday, September 21, 2000 4:57 PM
> To: 'David Robinson'; ips@ece.cmu.edu
> Subject: RE: iSCSI: Flow Control
>
> While memory may be getting cheaper, latency and transfer rates are
> getting higher. We have gone from 25 m parallel SCSI buses to
> transcontinental TCP/IP connections; from 1 MB/s to 100 MB/s (and
> greater) transfer rates. These combine to make the maximum amount of
> data in flight that keeps the connection full grow much faster than
> memory cost is declining. (Exponential growth rates apply to both
> memory cost and transmission speed; distance also appears to be
> growing very fast, although perhaps not exponentially.)
>
> So while your argument works if you keep the fabric size the same and
> increase the transfer rate (as it has been with the ATA interface -
> buffer costs have declined over the years), it does not work if the
> fabric keeps on growing as well.
>
> If a fabric introduces 1 ms (two orders of magnitude less than the
> worst cases I have heard) at Gbit speed, then we need 100 Kbytes of
> buffer space for a connection. We don't have enough buffer to reserve
> this for all possible connections we could get (Fibre Channel designs
> could not reserve 4 KByte for a smaller number of potential
> connections until recently).
>
> Jim
>
> PS If we actually are starting to need windows greater than 64 KBytes,
> is this a problem? My understanding is that deployed TCP/IP products
> do not easily support extremely large windows. This argues for
> spreading a single SCSI command across multiple TCP/IP connections for
> pipelining to overcome latency, not for bandwidth.

Realize that regardless of the number of connections, data does not move
any faster unless you intend to crowd out other traffic. If that was
your reasoning, it would not seem wise to sit on a single transfer until
completion before moving to the next. Transfers could be broken down
into 2k units, as with FC, using overlapping R2Ts and reads.

As a safe estimate, I would expect 5 ms for TCP on a MAN and 100+ ms for
anything farther (8 us per mile), and your packets may get there by way
of Canada. I doubt a process will keep buffers lean, and TCP adds
retries and head-of-line blocking. An agent translating for legacy
drives extends the buffer space needed. Imagine placing a RAID
controller behind this horrendous latency. No number of connections
would save you.

Doug
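The buffer figures above follow from the bandwidth-delay product. A
minimal sketch of that arithmetic (assumptions: 1 Gbit/s stands in for
"Gbit speed", and each delay is treated as the full round trip; the
delay values themselves are the ones Jim and Doug quote):

    # Bandwidth-delay product: bytes that must be in flight to keep the
    # link full, versus the classic 64 KB TCP window (no RFC 1323 scaling).
    LINK_BYTES_PER_SEC = 1_000_000_000 / 8   # 1 Gbit/s in bytes per second
    WINDOW_LIMIT = 64 * 1024                 # unscaled TCP window maximum

    scenarios = {
        "1 ms fabric (Jim)":       0.001,
        "5 ms MAN (Doug)":         0.005,
        "100 ms long haul (Doug)": 0.100,
    }

    for name, delay in scenarios.items():
        in_flight = LINK_BYTES_PER_SEC * delay   # bytes needed in flight
        verdict = "exceeds" if in_flight > WINDOW_LIMIT else "fits in"
        print(f"{name:24s} {in_flight / 1024:8.0f} KB  ({verdict} a 64 KB window)")

Even the 1 ms case needs roughly 122 KB in flight, about twice the
unscaled 64 KB window, which is exactly the concern in Jim's PS; the
100 ms case needs over 12 MB.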
> -----Original Message-----
> From: David Robinson [mailto:David.Robinson@EBay.Sun.COM]
> Sent: Wednesday, September 20, 2000 11:00 PM
> To: ips@ece.cmu.edu
> Subject: Re: iSCSI: Flow Control
>
>
> Joshua Tseng wrote:
> > Not doing RTT means each write command must be completed atomically
> > before proceeding on to the next command. There will be some very
> > large data PDU's hogging the single connection. How about task
> > management functions which the initiator may want to deliver
> > asynchronously? With a large data PDU stuck in the connection,
> > (and commands stuck in the pipeline) it may require a more
> > catastrophic abort/reset of the entire TCP connection, which might
> > not have been originally necessary.
> >
> > Somehow, it seems to me that the SCSI folks put RTT in there for
> > a purpose, and iSCSI would be losing something by eliminating it.
> > I don't know exactly what this is, but I would think it includes
> > AT LEAST some performance impact.
>
> I understand the basis of RTT is in the SCSI legacy of targets
> with very small buffers where limits might be in the order of
> single digits. With modern buffers and cheap memory I question
> its need.
>
> -David
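Doug's suggestion of breaking transfers into small overlapped units can
be quantified the same way. A sketch, assuming an illustrative
3000-mile transcontinental path (the 8 us/mile and 2 KB figures are
from Doug's message; the distance is not from the thread):

    # How many FC-style 2 KB units must be outstanding (overlapping R2Ts)
    # to keep a 1 Gbit/s link full over a long-haul path.
    import math

    US_PER_MILE = 8                         # one-way propagation delay, per Doug
    MILES = 3000                            # assumed coast-to-coast path length
    LINK_BYTES_PER_SEC = 1_000_000_000 / 8  # 1 Gbit/s in bytes per second
    UNIT_SIZE = 2 * 1024                    # 2 KB transfer unit

    rtt = 2 * MILES * US_PER_MILE / 1_000_000   # round trip in seconds
    pipe = LINK_BYTES_PER_SEC * rtt             # bytes needed in flight
    outstanding = math.ceil(pipe / UNIT_SIZE)   # units that must overlap

    print(f"RTT {rtt * 1000:.0f} ms, pipe holds {pipe / 1024:.0f} KB, "
          f"so about {outstanding} overlapping 2 KB units")

At a 48 ms round trip the pipe holds almost 6 MB, so close to three
thousand 2 KB units would have to be in flight at once, which
illustrates why Doug doubts any number of connections would save a RAID
controller placed behind that latency.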