
    RE: Towards Consensus on TCP Connections



    
    
    Doug,
    
    You can use the standard at the end-user device, if you so wish.
    If you choose a "transparent" controller - i.e., one not holding any state
    or ordering - and view every LU as an independent unit - you ignore the
    numbering (which you are free to do anyway for the commands) and ship the
    commands to the device for execution.
    
    Initiator tags are unique per initiator and it should not be a problem to
    keep them unique per session.
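
    A minimal sketch of what session-wide initiator task tag allocation could
    look like, assuming a 32-bit tag space shared by every LU in the session
    (the names here are illustrative, not taken from any draft):

        #include <stdint.h>

        /* One tag counter per session, shared across all LUs.  Tags stay
         * unique as long as fewer than 2^32 tasks pass through one wrap
         * of the counter. */
        struct session {
            uint32_t next_itt;
        };

        static uint32_t alloc_initiator_task_tag(struct session *s)
        {
            /* same counter regardless of which LU the command targets */
            return s->next_itt++;
        }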
    
    Target task tags are a different issue. We decided to remove the LUN field
    in both the incoming RTT and the outgoing data to avoid the need to test
    for inconsistencies.
    But as this is the only thing that implies unique target task tags, we
    might as well "reinstate" the LUN (the field is still unused) in the DATA
    packet (in the RTT it is implied by the initiator task tag).
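
    As a rough illustration of that change - carrying the LUN in the Data PDU
    while the RTT stays as it is - a hypothetical header layout might look as
    follows (field names and widths are assumptions for illustration, not the
    draft's actual layout):

        #include <stdint.h>

        /* Hypothetical Data PDU header sketch; not any draft's real layout. */
        struct data_pdu_hdr {
            uint8_t  opcode;
            uint8_t  flags;
            uint16_t reserved;
            uint32_t data_length;
            uint64_t lun;               /* "reinstated" in the unused field */
            uint32_t initiator_task_tag;
            uint32_t target_task_tag;   /* unique per LU is now sufficient */
            uint32_t buffer_offset;
        };

    With the LUN present, the target can scope its task-tag lookup per LU; in
    the RTT no LUN field is needed because the initiator task tag implies it.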
    
    Relaxing the requirement of uniqueness for the initiator tag is possible
    too (you can have it unique per LU) but that might unnecessarily
    complicate the initiator.
    
    If this (minor) change does not annoy target controller implementors and
    nobody sees anything else wrong with it, we might introduce it in the next
    version.
    
    As for your FC claims - I still can't follow your argument. Are you talking
    about FC (the network protocol) or FCP-2 (SCSI-over-FC)?
    
    Julo
    
    
    "Douglas Otis" <dotis@sanlight.net> on 14/08/2000 20:24:29
    
    Please respond to "Douglas Otis" <dotis@sanlight.net>
    
    To:   Julian Satran/Haifa/IBM@IBMIL, ips@ece.cmu.edu
    cc:
    Subject:  RE: Towards Consensus on TCP Connections
    
    
    
    
    Julo,
    
    Keep the interface suitable for the end device.  A controller in the middle
    does not justify a unique interface for that scenario.  In trying to
    optimize "bandwidth utilization", you are binding thousands of logical
    units together into the same session.  This is an added burden within the
    standard, if used at the device.  In doing so, session tags must be kept
    unique across all logical units.  Reset all drives to recover from even a
    single tag error?  If you wish to cache information at the gateway, it
    should still use the interface suitable for the end device.  At least this
    isolates error handling.
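
    To make that burden concrete, here is a minimal sketch (all names invented
    for illustration) of why a tag error in a session shared by many LUs has
    session-wide recovery scope, while a per-LU scope confines it:

        #include <stdbool.h>
        #include <stdint.h>

        #define TAG_TABLE_SIZE 4096

        /* One table per scope: either per session (shared by every LU)
         * or per logical unit. */
        struct tag_table {
            bool in_use[TAG_TABLE_SIZE];
        };

        /* Returns false on a duplicate tag.  If the table is shared by
         * the whole session, the duplicate cannot be attributed to one
         * LU, so recovery means resetting every drive in the session;
         * with a per-LU table the same failure is confined to one LU. */
        static bool register_tag(struct tag_table *t, uint32_t tag)
        {
            uint32_t slot = tag % TAG_TABLE_SIZE;
            if (t->in_use[slot])
                return false;
            t->in_use[slot] = true;
            return true;
        }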
    
    By creating a simple FC tunnel, the controller remains bolted into the
    back-plane where it is known to work effectively.  Error handling is
    understood with a stateless translation, and you have not created a
    standard that will set back efforts at getting a solution without myopic
    compromises.
    There WILL be errors not seen by this "In the Middle Controller"
    translating Fibre-Channel into iSCSI, as FC does not have the same
    transport.  States within this ITMC add to complexity and error migration
    when you intertwine logical units.  ITMC is a poor starting point and only
    makes for a bad interface as seen by this specification.
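
    The "simple FC tunnel" amounts to framing FC frames over TCP verbatim,
    without interpreting the FCP exchanges inside.  A minimal encapsulation
    sketch - the header here is invented for illustration and is not any
    standardized format:

        #include <stddef.h>
        #include <stdint.h>
        #include <string.h>

        /* Invented tunnel header: just enough to delimit frames in the
         * TCP byte stream. */
        struct fc_tunnel_hdr {
            uint32_t frame_len;   /* length of the encapsulated FC frame */
            uint32_t reserved;
        };

        /* The tunnel keeps no task tags, no ordering state and no per-LU
         * bookkeeping; errors inside the FCP exchange are handled end to
         * end by the FC endpoints. */
        static size_t encapsulate(uint8_t *out,
                                  const uint8_t *fc_frame, uint32_t len)
        {
            struct fc_tunnel_hdr h = { len, 0 };
            memcpy(out, &h, sizeof h);
            memcpy(out + sizeof h, fc_frame, len);
            return sizeof h + len;
        }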
    
    Doug
    
    
    > -----Original Message-----
    > From: owner-ips@ece.cmu.edu [mailto:owner-ips@ece.cmu.edu] On Behalf Of
    > julian_satran@il.ibm.com
    > Sent: Saturday, August 12, 2000 12:24 AM
    > To: ips@ece.cmu.edu
    > Subject: RE: Towards Consensus on TCP Connections
    >
    >
    >
    >
    > Well - I understand the argument about a connection/LU and I did even
    > implement it (as many others - see Paul's reply). You can aggregate at
    > the TCP level but you have to aggregate somewhere
    > to use the bandwidth effectively. Eliminating the CU is not a concern of
    > this group and not a suggestion to be accepted lightly by the community
    > (who will do caching, storage virtualization and management etc.) and I
    > am confident most of the participants on this list do not want to
    > discuss this subject. FC over IP as a replacement for iSCSI is a poor
    > suggestion - FC is a networking protocol and so is IP. FCP over IP is a
    > gateway solution.
    >
    > Julo
    >
    > "Douglas Otis" <dotis@sanlight.net> on 12/08/2000 02:47:37
    >
    > Please respond to "Douglas Otis" <dotis@sanlight.net>
    >
    > To:   Julian Satran/Haifa/IBM@IBMIL, ips@ece.cmu.edu
    > cc:
    > Subject:  RE: Towards Consensus on TCP Connections
    >
    >
    >
    >
    > Julo,
    >
    > An architecture that scales does not need to provide the entire
    > bandwidth from a single device.  If these devices are each handling 25
    > Mbytes per second, then 2 of them exceed Fibre-Channel.  Next week you
    > could use 4, and the week after that 8.  The advantage is in using a
    > network without any practical architectural restriction, with
    > intelligence at the client.  On the other hand, you want to design a
    > single mid-point gateway to handle the entire bandwidth.  To what end?
    > You argue it cannot process a TCP session per device.  You will not be
    > tracking mid-stream errors as this is more costly.  You add to the
    > burden of tracking the state of the individual device with this
    > mid-stream state machine requiring additional sorting due to this
    > merged protocol without taking advantage of TCP to aid this process.
    > For your type of solution, a simple hardware-based tunnel would be
    > better.
    > Do not include handling of the encapsulated protocol, and at least it
    > becomes understandable and the task more likely to be manageable.  In
    > other words, make it Fibre-Channel over IP and you have a chance with
    > your architecture.
    >
    > The point of my statements was to indicate the device is able to handle
    > an IP interface today using Fast Ethernet, as the rate from the drive is
    > relatively low.  Not to warm the cockles of marketing pointing to
    > latency, but at least they sell more drives overcoming this problem
    > using scale.
    >
    > Doug
    >
    > > -----Original Message-----
    > > From: owner-ips@ece.cmu.edu [mailto:owner-ips@ece.cmu.edu] On Behalf Of
    > > julian_satran@il.ibm.com
    > > Sent: Friday, August 11, 2000 2:17 PM
    > > To: ips@ece.cmu.edu
    > > Subject: RE: Towards Consensus on TCP Connections
    > >
    > >
    > >
    > >
    > > Doug,
    > >
    > > I am not sure that I agree with your architecture statements but I
    > > like to play with numbers (as most of the fellow engineers on this
    > > list probably do). What would be, in your opinion, reasonable
    > > requirements for command and data transfer rates for the next 3-7
    > > years?
    > >
    > > I would like to decouple that discussion from architecture - data
    > > rates can scale even in a shared architecture, as mainframe channels
    > > have shown for years.
    > >
    > > Rather, I would like to understand whether we can meet the data rates
    > > with reasonable latency.
    > >
    > > Julo
    > >
    > > "Douglas Otis" <dotis@sanlight.net> on 11/08/2000 21:14:13
    > >
    > > Please respond to "Douglas Otis" <dotis@sanlight.net>
    > >
    > > To:   "Stephen Byan" <Stephen.Byan@quantum.com>, ips@ece.cmu.edu
    > > cc:    (bcc: Julian Satran/Haifa/IBM)
    > > Subject:  RE: Towards Consensus on TCP Connections
    > >
    > >
    > >
    > >
    > > Today's drives can deliver 320 Mbits/second of data on the outside
    > > cylinders.  Improvement of the mechanics comes at a high price with
    > > respect to power and cost.  The cost/volume trend takes us to a single
    > > disk, which increases access time as read channel data rate increases.
    > > Offering scaled throughput using more drives, where each drive's
    > > interface bandwidth is restricted with respect to read channel data
    > > rates, provides a system with uniform and superior performance.  The
    > > advantage of such an approach is found with respect to smaller random
    > > traffic.  With more devices, redundancy is easily achieved and
    > > parallel access offers a means of performance improvement by spreading
    > > activity over more devices.  The switch provides bandwidth
    > > aggregation; it is not found in the individual device.
    > >
    > > An 8ms access + latency figure in the high-cost drives restricts the
    > > number of 'independent' operations that average 64k bytes to 100 per
    > > second, or 52 Mbit per second.  Such an architecture of 'restricted'
    > > drives would scale whereas the solicited burst approach does not.  An
    > > independent nexus at the LUN is the only design that offers the
    > > required scaling and configuration flexibility.  Keeping up with the
    > > read channel is a wasted effort.  In time, 1 Gbit Ethernet will be the
    > > practical solution, about the time drives are 1 inch in size.  Several
    > > Fast Ethernet disks combined at a 1 Gbit Ethernet client make sense in
    > > cost, performance, capacity, reliability, and scalability at this
    > > point in time.  The protocol overhead should be addressed.  There are
    > > substantial improvements to be made to allow this innovation using
    > > standard adapters.
    > >
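    A quick check of the 52 Mbit figure above, under its stated assumptions
    (64 KB average transfer and roughly 100 independent operations per second
    once access time plus latency is charged to each one):

        #include <stdio.h>

        /* Back-of-the-envelope check: ~100 independent 64 KB operations
         * per second, the rate implied by the ~8 ms access + latency
         * figure once transfer time is included. */
        int main(void)
        {
            double ops_per_sec  = 100.0;          /* assumed, per the text */
            double bytes_per_op = 64.0 * 1024.0;  /* 64 KB per operation  */
            double mbit_per_sec = ops_per_sec * bytes_per_op * 8.0 / 1e6;
            printf("%.1f Mbit/s\n", mbit_per_sec);  /* prints 52.4 Mbit/s */
            return 0;
        }
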
    > > The power cost to use copper 1 Gbit is high.  Firewire does not scale
    > > and has a limited reach.  Firewire also places scatter/gather on the
    > > drive together with direct access.  Doing such over a WAN will impose
    > > significant changes.  Serial ATA is nothing more than IDE through a
    > > SERDES.  The read channel data rate is like a drug; just say no.  It
    > > is hard not to buy enough DRAM to allow a proper buffer these days.
    > > Serial ATA removes all buffers.  Intel is just usurping any remaining
    > > electronics at the cost of sensitivity to a nearby cell phone.  Fewer
    > > drives with less electronics.  What a good idea?
    > >
    > > Doug
    > >
    > > > -----Original Message-----
    > > > From: owner-ips@ece.cmu.edu [mailto:owner-ips@ece.cmu.edu] On Behalf Of
    > > > Stephen Byan
    > > > Sent: Friday, August 11, 2000 7:07 AM
    > > > To: 'ips@ece.cmu.edu'
    > > > Subject: RE: Towards Consensus on TCP Connections
    > > >
    > > >
    > > > Stephen Bailey [mailto:steph@cs.uchicago.edu] wrote:
    > > >
    > > > > The gating factor for whether iSCSI succeeds is not going to be 200
    > > > > MB/s instead of 100 MB/s out of a single LUN.
    > > >
    > > > In general, I agree. iSCSI can succeed in the high and midrange
    > > > storage market without link aggregation for a single LUN. These
    > > > markets can afford 10 Gb/s links.
    > > >
    > > > As a disk device level interface, iSCSI will not succeed unless it
    > > > offers at least 2 Gb/s by around 2002, at very low cost for the
    > > > link. Note that even Serial ATA starts at 1.5 Gb/s in 2001. Take a
    > > > look at the Serial ATA speed roadmap on slide 16 of Intel's Serial
    > > > ATA presentation at WinHEC: http://serialata.org/F9pp.pdf.
    > > >
    > > > One can argue the technical merits, but from a marketing viewpoint,
    > > > the disk industry (both suppliers and customers) has long held the
    > > > view that interface speeds need to match the media data rate. iSCSI
    > > > can try to make an argument that slower speeds are technically
    > > > adequate, but this will increase the barriers to establishing iSCSI
    > > > as a device interface.
    > > >
    > > > > If iSCSI works at ALL in a cost effective way that can be
    > > > > implemented in a disk, there'll be wild dancing in the streets
    > > > > and you'll all (or maybe your companies will) be rich beyond the
    > > > > dreams of avarice.
    > > > >
    > > > > The easier you can make it for the implementors, the more likely it
    > > > > will succeed.
    > > >
    > > > Disk drive companies have implemented much more complex interfaces
    > > > than iSCSI and TCP - e.g. fibre channel arbitrated loop. And
    > > > multiple TCP connections don't look very hard to implement. They
    > > > just look like a wart. But I think a necessary one.
    > > >
    > > > Regards,
    > > > -Steve
    > > >
    > > > Steve Byan
    > > > <stephen.byan@quantum.com>
    > > > Design Engineer
    > > > MS 1-3/E23
    > > > 333 South Street
    > > > Shrewsbury, MA 01545
    > > > (508)770-3414
    > > > fax: (508)770-2604
    > > >
    > >
    > >
    > >
    > >
    >
    >
    >
    >
    
    
    
    
    

