
    RE: RE: RE: Towards Consensus on TCP Connections



    Jim,
    
    Some people have already pointed out vendors with silicon-based
    TCP/IP implementations. There are others at various stages of
    development, but they will have to speak up for themselves.
    
    The capability of silicon implementations for networking has been
    increasing dramatically of late (and being very richly rewarded).
    Reasonably inexpensive switches implement very complex state
    machines for flow classification & prioritization, layer 3
    forwarding, layer 2 forwarding, and so on. Though not the same as
    managing TCP connection state, it demonstrates rapidly
    increasing capability. Since TCP is the transport protocol
    for virtually all web traffic, it is becoming as important a
    protocol as IP at the network layer and Ethernet at the link level.
    
    I don't think anyone believes that iSCSI will succeed if it
    consumes a lot more CPU resources or if the adapters cost a
    lot more. The competition is FC, and iSCSI has to do at least
    as well.
    
    Regarding alternate transports, once you decide to operate
    in the internet, you have to handle retransmission/packet loss
    and congestion control. (BTW, where can I find a specification
    for SCTP?) Other protocols may have framing advantages over
    TCP (which can be solved), but TCP is a protocol that is
    proven to work in the internet from years of experience. It is
    better to commit TCP to hardware than to commit some new
    protocol to h/w and find that it causes instability under
    some scenarios.
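
    A minimal sketch (in Python, with a purely simulated loss model;
    the timer values and loss rate are illustrative, not TCP's actual
    algorithm) of the retransmit-with-backoff loop that any transport
    operating in the internet must provide:

        import random

        def send_reliably(packets, loss_rate=0.3, max_tries=8):
            # Toy stop-and-wait sender: retransmit on (simulated)
            # loss, backing off the timer each time, as any
            # internet transport must.
            for seq, _pkt in enumerate(packets):
                timeout, tries = 1.0, 0
                while True:
                    tries += 1
                    if random.random() > loss_rate:  # simulated lossy net
                        print(f"seq {seq} acked after {tries} try(ies)")
                        break
                    if tries == max_tries:
                        raise RuntimeError(f"seq {seq} lost; giving up")
                    timeout *= 2  # exponential backoff before resend
                    print(f"seq {seq} timed out ({timeout:.0f}s); resending")

        send_reliably([b"cmd", b"data", b"status"])

    Any alternate transport must carry equivalent machinery (plus
    congestion control) before it is safe in the internet.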
    
    Also, it would be a mistake to let an intermediate objective of
    FCP <-> iSCSI bridging drive the end goal. As others have pointed
    out, bridging will - to paraphrase from the switch book by Rich
    Seifert - give you the intersection of the capabilities while
    giving a superset of the problems.
    
    To summarize,
    
    1. Operating in the internet requires congestion
    control and recovery.
    
    2. TCP is the most (only?) proven protocol
    for congestion control and recovery in the internet,
    with strong momentum and investment (see the sketch
    after this list).
    
    3. Putting FC on top will require an additional
    convergence layer.
    
    4. The set of inter-operable management tools for IP
    networks & systems is very comprehensive.
    
    5. Security and authentication tools (if used) are much
    stronger. I don't even know what the equivalent is in
    Fibre Channel (my ignorance) other than "verifying" the
    WWN (World Wide Name).
    
    6. Initial implementations of iSCSI may be bridges or native
    arrays. But as silicon implementations become cheap and
    Gigabit Ethernet becomes "very cheap" (already at low hundreds
    of dollars per port), you will see iSCSI in disk drives.
    
    (was that a summary?)
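
    To illustrate point 2, a toy trace (my sketch; real TCP adds slow
    start, fast retransmit, and more) of the AIMD rule at the heart of
    TCP congestion control -- grow the window by one segment per round
    trip of acks, halve it on loss:

        def aimd(events, cwnd=1.0):
            # Additive increase / multiplicative decrease.
            trace = []
            for ev in events:
                if ev == "ack":
                    cwnd += 1.0              # congestion avoidance
                else:                        # "loss"
                    cwnd = max(1.0, cwnd / 2)
                trace.append(cwnd)
            return trace

        print(aimd(["ack"] * 5 + ["loss"] + ["ack"] * 3))
        # [2.0, 3.0, 4.0, 5.0, 6.0, 3.0, 4.0, 5.0, 6.0]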
    
    Somesh
    
    
    
    > -----Original Message-----
    > From: Jim.McGrath@quantum.com [mailto:Jim.McGrath@quantum.com]
    > Sent: Thursday, August 17, 2000 9:07 PM
    > To: somesh_gupta@hp.com; dotis@sanlight.net; ips@ece.cmu.edu;
    > julian_satran@il.ibm.com
    > Cc: Jim.McGrath@quantum.com
    > Subject: FW: RE: RE: Towards Consensus on TCP Connections
    > 
    > 
    > 
    > Observation spanning many emails:
    > 
    > I think part of the problem here is that people have some different
    > experiences.  For instance, you mentioned silicon-based TCP/IP
    > implementations, and yet I personally (and I think a number of
    > other storage-focused people) am not familiar with them or
    > their characteristics (e.g. availability, cost, etc.).
    > 
    > Storage vendors have had to automate a lot of protocol in
    > silicon for years, and this has had a pronounced influence on
    > the evolution of that protocol.  For instance, in both SCSI and
    > Fibre Channel standards activities you have a lot of discussion
    > as to whether something can be easily and inexpensively
    > implemented in silicon - this is often a consideration that
    > weighs heavily in the decision making.  This is due to the
    > industry's combined focus on low cost and high performance.
    > Indeed, this focus on "simple" protocols (relying on good
    > links, etc.) is I think a reason why some folks in the
    > networking community have doubts concerning issues like
    > congestion control for storage protocols or anything based on
    > them.
    > 
    > The networking industry is, as seen from a storage vendor's
    > perspective, almost the exact opposite.  Its history has been
    > dominated by software implementations of (comparatively) slow
    > transports on (comparatively) expensive machines (normally
    > computers of some sort).  I think you are seeing some doubts
    > being expressed about the speed and cost of these approaches,
    > and whether they can be transplanted to the storage world
    > (which is the focus for iSCSI).
    > 
    > Personally, I'd like to hear more about cheap, fast TCP/IP
    > implementations.  After all, if it really was cheap and fast to
    > send FC out one end on top of TCP/IP and get it back on the
    > other end, then I for one would certainly support that approach
    > given its other benefits.
    > 
    > Jim
    > 
    > 
    > -----Original Message-----
    > From: somesh_gupta@hp.com [mailto:somesh_gupta@hp.com]
    > Sent: Thursday, August 17, 2000 5:41 PM
    > To: dotis@sanlight.net; ips@ece.cmu.edu; julian_satran@il.ibm.com
    > Subject: RE: RE: Towards Consensus on TCP Connections
    > 
    > 
    > Doug,
    > 
    > Tunneling FC through IP to connect FC islands together
    > is a good application and has its place, and an iSCSI <-> FCP
    > gateway will be an important app to connect the two worlds
    > together. However, neither of these is a replacement for
    > using TCP/IP as a native transport for SCSI commands.
    > 
    > FC networks appear to me to be limited-scale networks providing
    > a very reliable link (channel) using buffer-to-buffer credits
    > to avoid dropping packets. Therefore any protocol using an FC
    > network makes those assumptions. However this limits the scale,
    > and other factors are also driving the use of IP-based networks.
    > When FC goes over an IP-based network, either the assumptions
    > (of a reliable channel) are no longer true, or a convergence
    > layer has to be built to provide the same level of service
    > (using TCP and another sub-layer on top, perhaps).
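    >
    > To make the contrast concrete, a toy model (my sketch; the names
    > and credit count are illustrative) of the buffer-to-buffer
    > credit scheme: the sender may transmit only while it holds
    > credits, so frames stall rather than drop:
    >
    >     class BBCreditLink:
    >         def __init__(self, credits=2):
    >             self.credits = credits  # receive buffers advertised
    >         def send(self, frame):
    >             if self.credits == 0:
    >                 return False        # wait for R_RDY; stall, no drop
    >             self.credits -= 1
    >             return True
    >         def r_rdy(self):
    >             self.credits += 1       # receiver freed a buffer
    >
    >     link = BBCreditLink()
    >     print([link.send(f) for f in ("f0", "f1", "f2")])
    >     # [True, True, False] -- third frame must wait
    >     link.r_rdy()
    >     print(link.send("f2"))          # True
    >
    > TCP replaces this hop-by-hop credit loop with end-to-end windows
    > that tolerate loss, which is what lets it scale past a single,
    > well-behaved link.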
    > 
    > Even in FC, you have SCSI states etc. and FC states (a fairly
    > involved protocol, and it does not have all the capabilities
    > that IP-based networks provide - or at least not yet at the
    > same level of interoperability and robustness).
    > 
    > You have made a comment multiple times about the complexity of
    > SCSI over TCP and the complexity of states and demuxing etc. I
    > somehow don't get it (when I compare it to the fact that systems
    > have been managing loads of apps running on top of TCP, and that
    > switches and NICs are implementing TCP in silicon). Perhaps you
    > can give some details of the state and demux complexity and
    > error recovery issues, and we can explore them to see if there
    > is something we are missing, or perhaps some of us can help
    > explain why they may not be issues.
    > 
    > SCSI over TCP is probably more complex than FCP/FC. However, it
    > should provide more capabilities and a converged network. It
    > should also be simpler than encapsulating FC over IP (TCP). Also
    > there is far more expertise in this area, and it uses existing
    > components except for the iSCSI encapsulation, which is fairly
    > simple (see the sketch below). I don't think there are any
    > unknowns in this area. So let us explore the complexity issues
    > in more detail.
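    >
    > As a taste of how simple the framing is, a sketch of PDU
    > encapsulation over a byte stream (a hypothetical fixed header
    > invented here for illustration -- NOT the layout of the actual
    > iSCSI drafts):
    >
    >     import struct
    >
    >     # 12 bytes: opcode, flags, LUN, task tag, data length.
    >     HDR = struct.Struct("!BBHLL")
    >
    >     def encap(opcode, lun, tag, payload, flags=0):
    >         return HDR.pack(opcode, flags, lun, tag, len(payload)) + payload
    >
    >     def decap(stream):
    >         opcode, flags, lun, tag, length = HDR.unpack(stream[:HDR.size])
    >         return opcode, lun, tag, stream[HDR.size:HDR.size + length]
    >
    >     pdu = encap(0x01, lun=3, tag=0x1234, payload=b"\x28\x00")
    >     print(decap(pdu))   # (1, 3, 4660, b'(\x00')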
    > 
    > Somesh
    > 
    > > -----Original Message-----
    > > From: dotis@sanlight.net [mailto:dotis@sanlight.net]
    > > Sent: Wednesday, August 16, 2000 8:04 AM
    > > To: julian_satran@il.ibm.com; ips@ece.cmu.edu
    > > Subject: FW: RE: Towards Consensus on TCP Connections
    > > 
    > > 
    > > > -----Original Message-----
    > > > From: owner-ips@ece.cmu.edu 
    > > [mailto:owner-ips@ece.cmu.edu]On Behalf Of
    > > > julian_satran@il.ibm.com
    > > > Sent: Tuesday, August 15, 2000 10:06 PM
    > > > To: ips@ece.cmu.edu
    > > > Subject: RE: Towards Consensus on TCP Connections
    > > >
    > > > Doug,
    > > >
    > > > Please cool down and keep the discussion technical. I
    > > > personally do not enjoy bringing press quotes to a technical
    > > > discussion - all the more so when they are inaccurate.
    > > 
    > > You indicated confusion as to the comparison being made.  I
    > > tried to enlighten you as to the alternative.  I did it with
    > > humor and with technical details.  I see nothing inaccurate,
    > > nor have you indicated errors.
    > > 
    > > > Encapsulating FC over IP is not SCSI. Your basic assumption is
    > > > that FCP will be encapsulated on IP. FCP is tightly tied to FC
    > > > (as TCP is to IP) and already has a complex (and not yet closed)
    > > > recovery mechanism based on the assumption it runs over FC.
    > > 
    > > An IP datagram is nothing more than a vehicle for FC
    > > encapsulation, and indeed involves only the fragment
    > > reassembly process.  There is no need to manage reassembly or
    > > to be concerned about possible failure at the IP level.  It is
    > > transparent and self-healing, as FCP-2 on FC handles these
    > > errors and would be no less reliable in doing so.  There are
    > > few uses for FC beyond SCSI, so your point escapes me,
    > > especially when you are advocating a protocol wholly
    > > unsupported.  Are you suggesting FC over IP will not work?
    > > 
    > > > Moreover - like SCSI - FCP has the notion of a target distinct
    > > > from the notion of a LUN.  It could be an acceptable solution to
    > > > extend the range of an FC island, but even for this it has to go
    > > > some way in solving the congestion and security issues.
    > > 
    > > There are already provisions for security for FC over IP.  In
    > > many applications, bandwidth is controlled by a dedicated
    > > channel, which solves both issues.  There are many means of
    > > flow control, of which TCP is but one.  Reliability should be
    > > the differentiating factor.
    > > 
    > > > IMHO even for this application an FCP-iSCSI gateway could
    > > > provide a better solution, as the mapping is straightforward.
    > > >
    > > > Julo
    > > 
    > > This provides few technical details as to how you base this
    > > opinion.  If I had the choice of purchasing a blade for a
    > > switch, or new system adapters with high overhead and new SAN
    > > controllers with limited connectivity where I could also
    > > expect massive faults, I would not see that as
    > > straightforward.  But then again, maybe I would.
    > > 
    > > Doug
    > > 
    > > 
    > > > "Douglas Otis" <dotis@sanlight.net> on 15/08/2000 19:24:34
    > > >
    > > > Please respond to "Douglas Otis" <dotis@sanlight.net>
    > > >
    > > > To:   Julian Satran/Haifa/IBM@IBMIL, ips@ece.cmu.edu
    > > > cc:
    > > > Subject:  RE: Towards Consensus on TCP Connections
    > > >
    > > > Julo,
    > > >
    > > > For details on proposed Fibre-Channel over IP, which does
    > > > not impose additional catastrophic error states as does
    > > > iSCSI, please see:
    > > >
    > > > http://search.ietf.org/internet-drafts/draft-ietf-ipfc-fcoverip-02.txt
    > > >
    > > > "While the FC over IP specification is independent of the 
    > link level
    > > > transport protocol, it assumes a high bandwidth, high
    > > > reliability, low loss
    > > > link level transport such as Gigabit Ethernet, SONET, ATM, 
    > > or DWDM. This
    > > > specification treats all classes of FC frames the same -- 
    > > as  datagrams."
    > > > ipfc
    > > >
    > > > "Fibre Channel over IP's main advantage is it needs no
    > > > modifications to the
    > > > storage subsystem or the server operating system. SCSI over 
    > > TCP/IP, on the
    > > > other hand, requires filter drivers so the server operating 
    > > system can do
    > > > IP-network-to-SAN emulation (for which I humbly propose 
    > the acronym
    > > > INSANE)." Wayne Rickard
    > > > Network World, 06/12/00 wayne@gadzoox.com
    > > >
    > > > To see who is doing Fibre-Channel encapsulation, see:
    > > > http://www.cisco.com/warp/public/146/pressroom/2000/jun00/sp_061300b.htm
    > > >
    > > > By keeping the same architecture as the present
    > > > Fibre-Channel configuration, you have not improved
    > > > reliability.  By making compromises that combine states, you
    > > > are reducing reliability.  It would appear that, in efforts
    > > > to reduce overhead in TCP transport, you have added
    > > > dangerous overhead at the SCSI emulation layer.  This is
    > > > taking advantage of TCP in hardware?  Allow a session per
    > > > LUN (remove the LUN field) and there is no danger of
    > > > inducing an error across multiple devices when sorting
    > > > millions of tags for a LUN value (see the sketch below).
    > > > Care about reliability.
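    > > >
    > > > To show the isolation argument, a sketch (hypothetical
    > > > names; just the bookkeeping, not a protocol) of tag tables
    > > > under the two designs:
    > > >
    > > >     shared = {}                   # one table, all LUNs intermixed
    > > >     per_lun = {lun: {} for lun in range(4)}
    > > >
    > > >     def post(table, tag, task):
    > > >         if tag in table:          # collision forces recovery
    > > >             raise RuntimeError("duplicate tag: recovery needed")
    > > >         table[tag] = task
    > > >
    > > >     post(shared, 7, "read@lun0")
    > > >     try:
    > > >         post(shared, 7, "write@lun3")  # every LUN's state is suspect
    > > >     except RuntimeError as e:
    > > >         print("shared session:", e)
    > > >
    > > >     post(per_lun[0], 7, "read@lun0")
    > > >     post(per_lun[3], 7, "write@lun3")  # same tag, other LUN: fine
    > > >
    > > > With one session per LUN, a duplicate tag is a single-device
    > > > event; with one shared session, it is not.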
    > > >
    > > > Doug
    > > >
    > > > > Doug,
    > > > >
    > > > > You can use the standard at the end user device, if you so
    > > > > wish.  If you choose a "transparent" controller - i.e. one
    > > > > not holding any state or ordering - and view every LU as an
    > > > > independent unit - you ignore the numbering (that you are
    > > > > free to do anyway for the commands) and ship the commands
    > > > > to the device for execution.
    > > > >
    > > > > Initiator tags are unique per initiator, and it should not
    > > > > be a problem to keep them unique per session.
    > > > >
    > > > > Target task tags are a different issue. We decided to
    > > > > remove the LUN field in both the incoming RTT and the
    > > > > outgoing data to avoid the need to test for
    > > > > inconsistencies.  But as this is the only thing that
    > > > > implies unique target task tags, we might as well
    > > > > "reinstate" the LUN (the field is still unused) in the
    > > > > DATA packet (in the RTT it is implied by the initiator
    > > > > task tag).
    > > > >
    > > > > Relaxing the requirement of uniqueness for the initiator
    > > > > tag is possible too (you can have them unique per LU), but
    > > > > that might complicate the initiator unnecessarily.
    > > > >
    > > > > If this (minor) change does not annoy target controller
    > > > > implementors, and nobody sees something else wrong with
    > > > > it, we might introduce it in the next version.
    > > > >
    > > > > As for your FC claims - I still can't follow your
    > > > > argument. Are you talking about FC (the network protocol)
    > > > > or FCP-2 (SCSI-over-FC)?
    > > > >
    > > > > Julo
    > > > >
    > > > >
    > > > > "Douglas Otis" <dotis@sanlight.net> on 14/08/2000 20:24:29
    > > > >
    > > > > Please respond to "Douglas Otis" <dotis@sanlight.net>
    > > > >
    > > > > To:   Julian Satran/Haifa/IBM@IBMIL, ips@ece.cmu.edu
    > > > > cc:
    > > > > Subject:  RE: Towards Consensus on TCP Connections
    > > > >
    > > > >
    > > > >
    > > > >
    > > > > Julo,
    > > > >
    > > > > Keep the interface suitable for the end device.  A
    > > > > controller in the middle does not justify a unique
    > > > > interface for that scenario.  In trying to optimize
    > > > > "bandwidth utilization", you are binding thousands of
    > > > > logical units together into the same session.  This is an
    > > > > added burden within the standard, if used at the device.
    > > > > In doing so, session tags must be kept unique across all
    > > > > logical units.  Reset all drives to recover from even a
    > > > > single tag error?  If you wish to cache information at the
    > > > > gateway, it should still use the interface suitable for
    > > > > the end device.  At least this isolates error handling.
    > > > >
    > > > > By creating a simple FC tunnel, the controller remains
    > > > > bolted into the back-plane where it is known to work
    > > > > effectively.  Error handling is understood with a
    > > > > stateless translation, and you have not created a standard
    > > > > that will set back efforts at getting a solution without
    > > > > myopic compromises.  There WILL be errors not seen by this
    > > > > "In the Middle Controller" translating Fibre-Channel into
    > > > > iSCSI, as FC does not have the same transport.  States
    > > > > within this ITMC add to complexity and error migration
    > > > > when you intertwine logical units.  ITMC is a poor
    > > > > starting point and only makes for a bad interface as seen
    > > > > by this specification.
    > > > >
    > > > > Doug
    > > > >
    > > > >
    > > > > > -----Original Message-----
    > > > > > From: owner-ips@ece.cmu.edu 
    > > [mailto:owner-ips@ece.cmu.edu]On Behalf Of
    > > > > > julian_satran@il.ibm.com
    > > > > > Sent: Saturday, August 12, 2000 12:24 AM
    > > > > > To: ips@ece.cmu.edu
    > > > > > Subject: RE: Towards Consensus on TCP Connections
    > > > > >
    > > > > >
    > > > > >
    > > > > >
    > > > > > Well - I understand the argument about a connection/LU,
    > > > > > and I did even implement it (as many others - see Paul's
    > > > > > reply). You can aggregate at the TCP level, but you have
    > > > > > to aggregate somewhere to use the bandwidth effectively.
    > > > > > Eliminating the CU is not a concern of this group and
    > > > > > not a suggestion to be accepted lightly by the community
    > > > > > (who will do caching, storage virtualization, management
    > > > > > etc.), and I am confident most of the participants on
    > > > > > this list do not want to discuss this subject. FC over
    > > > > > IP as a replacement for iSCSI is a poor suggestion - FC
    > > > > > is a networking protocol and IP too. FCP over IP is a
    > > > > > gateway solution.
    > > > > >
    > > > > > Julo
    > > > > >
    > > > > > "Douglas Otis" <dotis@sanlight.net> on 12/08/2000 02:47:37
    > > > > >
    > > > > > Please respond to "Douglas Otis" <dotis@sanlight.net>
    > > > > >
    > > > > > To:   Julian Satran/Haifa/IBM@IBMIL, ips@ece.cmu.edu
    > > > > > cc:
    > > > > > Subject:  RE: Towards Consensus on TCP Connections
    > > > > >
    > > > > >
    > > > > >
    > > > > >
    > > > > > Julo,
    > > > > >
    > > > > > An architecture that scales does not need to provide
    > > > > > the entire bandwidth from a single device.  If these
    > > > > > devices are each handling 25 Mbytes per second, then 2
    > > > > > of them exceed Fibre-Channel.  Next week you could use
    > > > > > 4, and the week after that 8.  The advantage lies in
    > > > > > using a network without any practical architectural
    > > > > > restriction, with intelligence at the client.  On the
    > > > > > other hand, you want to design a single mid-point
    > > > > > gateway to handle the entire bandwidth.  To what end?
    > > > > > You argue it cannot process a TCP session per device.
    > > > > > You will not be tracking mid-stream errors, as this is
    > > > > > more costly.  You add to the burden of tracking the
    > > > > > state of the individual device with this mid-stream
    > > > > > state machine, requiring additional sorting due to this
    > > > > > merged protocol, without taking advantage of TCP to aid
    > > > > > this process.  For your type of solution, a simple
    > > > > > hardware-based tunnel would be better.  Do not include
    > > > > > handling of the encapsulated protocol, and at least it
    > > > > > becomes understandable and the task more likely to be
    > > > > > manageable.  In other words, make it Fibre-Channel over
    > > > > > IP and you have a chance with your architecture.
    > > > > >
    > > > > > The point of my statements was to indicate the device
    > > > > > is able to handle an IP interface today using Fast
    > > > > > Ethernet, as the rate from the drive is relatively low.
    > > > > > Not to warm the cockles of marketing by pointing to
    > > > > > latency, but at least they sell more drives, overcoming
    > > > > > this problem using scale.
    > > > > >
    > > > > > > -----Original Message-----
    > > > > > > From: owner-ips@ece.cmu.edu 
    > > [mailto:owner-ips@ece.cmu.edu]On Behalf
    > > > Of
    > > > > > > julian_satran@il.ibm.com
    > > > > > > Sent: Friday, August 11, 2000 2:17 PM
    > > > > > > To: ips@ece.cmu.edu
    > > > > > > Subject: RE: Towards Consensus on TCP Connections
    > > > > > >
    > > > > > >
    > > > > > >
    > > > > > >
    > > > > > > Doug,
    > > > > > >
    > > > > > > I am not sure that I agree with your architecture
    > > > > > > statements, but I like to play with numbers (as most
    > > > > > > of the fellow engineers on this list probably do).
    > > > > > > What would be, in your opinion, reasonable
    > > > > > > requirements for command and data transfer rates for
    > > > > > > the next 3-7 years?
    > > > > > >
    > > > > > > I would like to decouple that discussion from
    > > > > > > architecture - data rates can scale even in a shared
    > > > > > > architecture, as mainframe channels have shown for
    > > > > > > years.
    > > > > > >
    > > > > > > I would rather like to understand whether we can meet
    > > > > > > the data rates with reasonable latency.
    > > > > > >
    > > > > > > Julo
    > > > > > >
    > > > > > > "Douglas Otis" <dotis@sanlight.net> on 11/08/2000 21:14:13
    > > > > > >
    > > > > > > Please respond to "Douglas Otis" <dotis@sanlight.net>
    > > > > > >
    > > > > > > To:   "Stephen Byan" <Stephen.Byan@quantum.com>, 
    > > ips@ece.cmu.edu
    > > > > > > cc:    (bcc: Julian Satran/Haifa/IBM)
    > > > > > > Subject:  RE: Towards Consensus on TCP Connections
    > > > > > >
    > > > > > >
    > > > > > >
    > > > > > >
    > > > > > > Today's drives can deliver 320 Mbits/second of data
    > > > > > > on the outside cylinders.  Improvement of the
    > > > > > > mechanics comes at a high price with respect to power
    > > > > > > and cost.  The cost/volume trend takes us to a single
    > > > > > > disk, which increases access time as read channel
    > > > > > > data rate increases.  Offering scaled throughput
    > > > > > > using more drives, where each drive's interface
    > > > > > > bandwidth is restricted with respect to read channel
    > > > > > > data rates, provides a system with uniform and
    > > > > > > superior performance.  The advantage of such an
    > > > > > > approach is found with respect to smaller random
    > > > > > > traffic.  With more devices, redundancy is easily
    > > > > > > achieved, and parallel access offers a means of
    > > > > > > performance improvement by spreading activity over
    > > > > > > more devices.  The switch provides bandwidth
    > > > > > > aggregation, which is not found in the individual
    > > > > > > device.
    > > > > > >
    > > > > > > An 8ms access + latency figure in the high-cost
    > > > > > > drives restricts the number of 'independent'
    > > > > > > operations that average 64k bytes to 100 per second,
    > > > > > > or 52 Mbit per second (100 x 64 Kbytes x 8 bits =
    > > > > > > 52.4 Mbit).  Such an architecture of 'restricted'
    > > > > > > drives would scale, whereas the solicited burst
    > > > > > > approach does not.  An independent nexus at the LUN
    > > > > > > is the only design that offers the required scaling
    > > > > > > and configuration flexibility.  Keeping up with the
    > > > > > > read channel is a wasted effort.  In time, 1 Gbit
    > > > > > > Ethernet will be the practical solution, about the
    > > > > > > time drives are 1 inch in size.  Several Fast
    > > > > > > Ethernet disks combined at a 1 Gbit Ethernet client
    > > > > > > make sense in cost, performance, capacity,
    > > > > > > reliability, and scalability at this point in time.
    > > > > > > The protocol overhead should be addressed.  There are
    > > > > > > substantial improvements to be made to allow this
    > > > > > > innovation using standard adapters.
    > > > > > >
    > > > > > > The power cost of copper 1 Gbit is high.  Firewire
    > > > > > > does not scale and has a limited reach.  Firewire
    > > > > > > also places scatter/gather on the drive together with
    > > > > > > direct access.  Doing such over a WAN will impose
    > > > > > > significant changes.  Serial ATA is nothing more than
    > > > > > > IDE through a SERDES.  The read channel data rate is
    > > > > > > like a drug; just say no.  It is hard not to buy
    > > > > > > enough DRAM to allow a proper buffer these days.
    > > > > > > Serial ATA removes all buffers.  Intel is just
    > > > > > > usurping any remaining electronics at the cost of
    > > > > > > sensitivity to a nearby cell phone.  Fewer drives
    > > > > > > with less electronics.  What a good idea?
    > > > > > >
    > > > > > > Doug
    > > > > > >
    > > > > > > > -----Original Message-----
    > > > > > > > From: owner-ips@ece.cmu.edu
    > > > [mailto:owner-ips@ece.cmu.edu]On Behalf
    > > > > Of
    > > > > > > > Stephen Byan
    > > > > > > > Sent: Friday, August 11, 2000 7:07 AM
    > > > > > > > To: 'ips@ece.cmu.edu'
    > > > > > > > Subject: RE: Towards Consensus on TCP Connections
    > > > > > > >
    > > > > > > >
    > > > > > > > Stephen Bailey [mailto:steph@cs.uchicago.edu] wrote:
    > > > > > > >
    > > > > > > > > The gating factor for whether iSCSI succeeds is
    > > > > > > > > not going to be 200 MB/s instead of 100 MB/s out
    > > > > > > > > of a single LUN.
    > > > > > > >
    > > > > > > > In general, I agree. iSCSI can succeed in the high
    > > > > > > > and midrange storage market without link
    > > > > > > > aggregation for a single LUN. These markets can
    > > > > > > > afford 10 Gb/s links.
    > > > > > > >
    > > > > > > > As a disk device level interface, iSCSI will not
    > > > > > > > succeed unless it offers at least 2 Gb/s by around
    > > > > > > > 2002, at very low cost for the link.  Note that
    > > > > > > > even Serial ATA starts at 1.5 Gb/s in 2001. Take a
    > > > > > > > look at the Serial ATA speed roadmap on slide 16 of
    > > > > > > > Intel's Serial ATA presentation at WinHEC:
    > > > > > > > http://serialata.org/F9pp.pdf
    > > > > > > >
    > > > > > > > One can argue the technical merits, but from a
    > > > > > > > marketing viewpoint, the disk industry (both
    > > > > > > > suppliers and customers) has long held the view
    > > > > > > > that interface speeds need to match the media data
    > > > > > > > rate.  iSCSI can try to make an argument that
    > > > > > > > slower speeds are technically adequate, but this
    > > > > > > > will increase the barriers to establishing iSCSI as
    > > > > > > > a device interface.
    > > > > > > >
    > > > > > > > > If iSCSI works at ALL in a cost-effective way
    > > > > > > > > that can be implemented in a disk, there'll be
    > > > > > > > > wild dancing in the streets and you'll all (or
    > > > > > > > > maybe your companies will) be rich beyond the
    > > > > > > > > dreams of avarice.
    > > > > > > > >
    > > > > > > > > The easier you can make it for the implementors,
    > > > > > > > > the more likely it will succeed.
    > > > > > > >
    > > > > > > > Disk drive companies have implemented much more
    > > > > > > > complex interfaces than iSCSI and TCP - e.g. Fibre
    > > > > > > > Channel arbitrated loop.  And multiple TCP
    > > > > > > > connections don't look very hard to implement.
    > > > > > > > They just look like a wart.  But I think a
    > > > > > > > necessary one.
    > > > > > > >
    > > > > > > > Regards,
    > > > > > > > -Steve
    > > > > > > >
    > > > > > > > Steve Byan
    > > > > > > > <stephen.byan@quantum.com>
    > > > > > > > Design Engineer
    > > > > > > > MS 1-3/E23
    > > > > > > > 333 South Street
    > > > > > > > Shrewsbury, MA 01545
    > > > > > > > (508)770-3414
    > > > > > > > fax: (508)770-2604
    > > > > > > >
    > > > > > >
    > > > > > >
    > > > > > >
    > > > > > >
    > > > > >
    > > > > >
    > > > > >
    > > > > >
    > > > >
    > > > >
    > > > >
    > > > >
    > > >
    > > >
    > > >
    > > >
    > > 
    > 
    
    

