
    RE: Towards Consensus on TCP Connections



    John,
    
    Hard to imagine a Storage Controller unrelated to disk drives.  Are we
    discussing a protocol or your product spec?  Even the lowly IDE
    interface started out as a controller.  The quagmire is the mess
    created by the stateful mid-stream bottleneck you call a controller.
    Here you are mixing protocols, which you abhor.  You are not using a
    network or protocol effectively if you refuse to allow this interface
    to be suitable at the end-point, the device.  The fact that your
    architecture cannot scale speaks loudly.  If you wish to use this
    interface at a controller, it should assume a role identical to that
    of a device.
    
    Doug
    
    > -----Original Message-----
    > From: hufferd@us.ibm.com [mailto:hufferd@us.ibm.com]
    > Sent: Friday, August 11, 2000 12:07 PM
    > To: ips@ece.cmu.edu
    > Cc: Stephen Byan; Douglas Otis
    > Subject: RE: Towards Consensus on TCP Connections
    >
    > I agree with most of what Stephen says.
    >
    > Now, do you think we can shelve the disk drive discussion and focus
    > more on the Storage Controller functions?
    >
    > When we started all this, I thought that we wanted to FIRST focus on
    > the interface to Storage Controllers, in order to avoid the quagmire
    > that Fibre Channel got into when it tried to handle both controllers
    > (with switched interfaces) and disk interfaces (with loops).
    >
    > I would like to continue to focus on the Host to Storage Controller
    > approach which we were following.  If there is some important item
    > that will prevent its operation on disks, we should be aware of it,
    > but our focus should be FIRST on Storage Controllers.  The current
    > disk-interface discussion seems to be distracting from our goal to
    > ensure that the proposal works well at the Storage Controller level.
    >
    > I think it has been stated several times that the use of a Session
    > per LUN etc. is within the current iSCSI proposal, and it has been
    > stated a number of times by folks who make their living creating
    > Storage Controllers that they do not think they can support a
    > protocol that REQUIRES a Session per LUN.
    >
    > So I would like to propose, again, that we get back on track and work
    > issues which apply first to Storage Controllers.
    >
    >
    >
    > .
    > .
    > .
    > John L. Hufferd
    >
    > Internet address: hufferd@us.ibm.com
    >
    >
    >
    > "Douglas Otis" <dotis@sanlight.net>@ece.cmu.edu on 08/11/2000 11:14:13 AM
    >
    > Sent by:  owner-ips@ece.cmu.edu
    >
    >
    > To:   "Stephen Byan" <Stephen.Byan@quantum.com>, <ips@ece.cmu.edu>
    > cc:
    > Subject:  RE: Towards Consensus on TCP Connections
    >
    >
    >
    > Today's drives can deliver 320 Mbit/second of data on the outside
    > cylinders.  Improvement of the mechanics comes at a high price with
    > respect to power and cost.  The cost/volume trend takes us to a
    > single disk, which increases access time as read-channel data rate
    > increases.  Offering scaled throughput using more drives, where each
    > drive's interface bandwidth is restricted relative to the
    > read-channel data rate, provides a system with uniform and superior
    > performance.  The advantage of such an approach shows up with
    > smaller random traffic.  With more devices, redundancy is easily
    > achieved, and parallel access offers a means of performance
    > improvement by spreading activity over more devices.  The switch
    > provides bandwidth aggregation that is not found in the individual
    > device.
    >
    > An 8 ms access-plus-latency figure in the high-cost drives restricts
    > the number of 'independent' operations averaging 64 Kbytes to about
    > 100 per second, or 52 Mbit per second.  Such an architecture of
    > 'restricted' drives would scale, whereas the solicited-burst
    > approach does not.  An independent nexus at the LUN is the only
    > design that offers the required scaling and configuration
    > flexibility.  Keeping up with the read channel is a wasted effort.
    > In time, 1 Gbit Ethernet will be the practical solution, about the
    > time drives are 1 inch in size.  Several Fast Ethernet disks
    > combined at a 1 Gbit Ethernet client make sense in cost,
    > performance, capacity, reliability, and scalability at this point in
    > time.  The protocol overhead should be addressed.  There are
    > substantial improvements to be made to allow this innovation using
    > standard adapters.
    >
    > The power cost to use copper 1 Gbit is high.  Firewire does not
    > scale and has a limited reach.  Firewire also places scatter/gather
    > on the drive together with direct access; doing so over a WAN will
    > impose significant changes.  Serial ATA is nothing more than IDE
    > through a SERDES.  The read-channel data rate is like a drug: just
    > say no.  It is hard not to buy enough DRAM to allow a proper buffer
    > these days, yet Serial ATA removes all buffers.  Intel is just
    > usurping any remaining electronics at the cost of sensitivity to a
    > nearby cell phone.  Fewer drives with less electronics.  What a good
    > idea?
    >
    > Doug
    >
    > > -----Original Message-----
    > > From: owner-ips@ece.cmu.edu [mailto:owner-ips@ece.cmu.edu]On Behalf Of
    > > Stephen Byan
    > > Sent: Friday, August 11, 2000 7:07 AM
    > > To: 'ips@ece.cmu.edu'
    > > Subject: RE: Towards Consensus on TCP Connections
    > >
    > >
    > > Stephen Bailey [mailto:steph@cs.uchicago.edu] wrote:
    > >
    > > > The gating factor for whether iSCSI succeeds is not going to be 200
    > > > MB/s instead of 100 MB/s out of a single LUN.
    > >
    > > In general, I agree.  iSCSI can succeed in the high and midrange
    > > storage market without link aggregation for a single LUN.  These
    > > markets can afford 10 Gb/s links.
    > >
    > > As a disk device level interface, iSCSI will not succeed unless it
    > > offers at least 2 Gb/s by around 2002, at very low cost for the
    > > link.  Note that even Serial ATA starts at 1.5 Gb/s in 2001.  Take
    > > a look at the Serial ATA speed roadmap on slide 16 of Intel's
    > > Serial ATA presentation at WinHEC: http://serialata.org/F9pp.pdf.
    > >
    > > One can argue the technical merits, but from a marketing
    > > viewpoint, the disk industry (both suppliers and customers) has
    > > long held the view that interface speeds need to match the media
    > > data rate.  iSCSI can try to make an argument that slower speeds
    > > are technically adequate, but this will increase the barriers to
    > > establishing iSCSI as a device interface.
    > >
    > > > If iSCSI works at ALL in a cost effective way that can be implemented
    > > > in a disk, there'll be wild dancing in the streets and you'll all (or
    > > > maybe your companies will) be rich beyond the dreams of avarice.
    > > >
    > > > The easier you can make it for the implementors, the more likely it
    > > > will succeed.
    > >
    > > Disk drive companies have implemented much more complex interfaces
    > > than iSCSI and TCP - e.g., Fibre Channel Arbitrated Loop.  And
    > > multiple TCP connections don't look very hard to implement.  They
    > > just look like a wart.  But I think a necessary one.
    > >
    > > Regards,
    > > -Steve
    > >
    > > Steve Byan
    > > <stephen.byan@quantum.com>
    > > Design Engineer
    > > MS 1-3/E23
    > > 333 South Street
    > > Shrewsbury, MA 01545
    > > (508)770-3414
    > > fax: (508)770-2604
    > >
    >
    >
    >
    >
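    The random-I/O arithmetic in Doug's message above (8 ms access plus
    latency, 64 Kbyte transfers, a 320 Mbit/s read channel, Fast Ethernet
    drives behind a 1 Gbit link) can be sanity-checked with a short
    sketch.  All input figures below are taken from the emails; the exact
    totals depend on how the transfer-time contribution is rounded, which
    is why the message quotes 100 ops/s and 52 Mbit/s rather than the
    slightly higher values computed here.

```python
# Rough check of the random-I/O arithmetic in Doug's message.
# Assumed inputs (from the emails): ~8 ms average access + latency,
# 64 KB transfers, 320 Mbit/s outer-cylinder read-channel rate,
# 100 Mbit/s Fast Ethernet drive links feeding a 1 Gbit client link.
access_s = 8e-3                       # seek + rotational latency, seconds
xfer_bits = 64 * 1024 * 8             # one 64 KB request, in bits
media_rate = 320e6                    # read-channel rate, bits/s

op_time = access_s + xfer_bits / media_rate   # seconds per independent op
iops = 1.0 / op_time                          # ~104 ops/s (message rounds to 100)
throughput_mbit = iops * xfer_bits / 1e6      # ~54 Mbit/s (message says 52)

# Aggregation side of the argument: a 1 Gbit Ethernet client link has the
# raw capacity of ten 100 Mbit/s Fast Ethernet drive links.
drives_per_gbit_link = 1000e6 / 100e6         # 10 links fill the client side

print(f"{iops:.0f} ops/s, {throughput_mbit:.0f} Mbit/s per drive, "
      f"{drives_per_gbit_link:.0f} Fast Ethernet links per Gbit link")
```

    Note that the per-drive random throughput (~52 Mbit/s) is well below
    even Fast Ethernet, which is the sense in which each drive's
    interface is "restricted" and the switch, not the device, provides
    the aggregate bandwidth.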
    
    


