Re: Connection Consensus Progress

Julian,

If one of the reasons for supporting multiple TCP connections is to allow
load balancing across HBAs for fault tolerance, that reason will be lost as
TCP processing moves down into the HBA. My understanding was that
TCP-on-a-chip technology is desired for high-performance iSCSI. If this is
the case, then multiple connections for tolerance of HBA faults is a
misfired bullet.

-Chris

> From: julian_satran@il.ibm.com
> To: ips@ece.cmu.edu
> Subject: Re: Connection Consensus Progress
>
> David,
>
> I understand and share your concerns about how well we understand the
> requirements for recovery and balancing.
>
> But as I have stated repeatedly, we can't wait for somebody else to solve
> our problem, and the field requirement is there, as witnessed by the
> products that attempt to solve it in a proprietary fashion (and BTW, a
> TCP connection failure could also be repaired simply by TCP, but TCP
> does not do it).
>
> But if they are there, both recovery and balancing have to be solved at
> the SCSI level, since several links, if not handled properly, increase
> the failure probability.
>
> However, your last point about multiple HBAs is lost on me. We attempted
> to make iSCSI work with several HBAs and went to some length to keep the
> requirements on the HBA hardware such that the HBAs act independently
> (counters can be shared). Is there something we missed?
>
> Julo
>
> David Robinson <David.Robinson@EBay.Sun.COM> on 25/08/2000 23:32:23
>
> To: ips@ece.cmu.edu
> Subject: Re: Connection Consensus Progress
>
> > I agree with you up to a point. I know of customers that always need
> > multiple physical paths to the storage controller. Regardless of how
> > fast the link is, they need a faster link, and these hosts need to be
> > able to spread the load across several different HBAs. (Some are on
> > one PCI bus, and some on another, etc.) When this happens, as it does
> > today with Fibre Channel, we are required, as are a number of other
> > vendors, to come up with a multi-HBA balancer. We call our Fibre
> > Channel version "DPO" (Dynamic Path Optimizer); EMC has another
> > version (I do not know what they call theirs). This code sits as a
> > "wedge" driver above the FC device drivers and balances the work
> > across the different FC HBAs. I think this same thing will be required
> > in the iSCSI situation. Note: I think the FC versions only work with
> > IBM's or EMC's etc. controllers. (Sun probably has a similar one
> > also.)
>
> I understand this scenario; it is often used as a high-availability
> feature. The key question is whether this should be handled above, at
> the SCSI layer, as it is most often done now, or in the iSCSI transport.
> While I like the goal of unifying this into one architecture, I have
> serious doubts that we have the understanding of the requirements and
> needs necessary to get it right. Thus we will ultimately end up in the
> same situation we are in today with a SCSI-layer solution. In addition,
> if the promise of a hardware iSCSI NIC/HBA is achieved, allowing
> multiple paths using different NIC/HBAs will still require "wedge"
> software.
>
>  -David
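
[Editor's note: to make the "wedge" driver idea above concrete, here is a
minimal sketch, in Python, of the kind of path balancer that DPO-style
software implements: round-robin dispatch of commands across HBA paths,
with failover to a surviving path when one dies. All names here (Path,
WedgeBalancer, submit) are hypothetical illustrations, not DPO or any real
driver API.]

    # Hypothetical sketch of a "wedge" multipath balancer. Not a real
    # driver; names and behavior are invented for illustration only.
    import itertools

    class Path:
        """One HBA path to the storage controller (hypothetical)."""
        def __init__(self, name):
            self.name = name
            self.alive = True

        def submit(self, command):
            # A real driver would queue the command to this HBA's
            # hardware; here we just succeed, or raise on a dead path.
            if not self.alive:
                raise IOError(f"path {self.name} is down")
            return f"{command} completed via {self.name}"

    class WedgeBalancer:
        """Sits above the per-HBA drivers, spreads commands across paths
        round-robin, and retries on a surviving path when one fails."""
        def __init__(self, paths):
            self.paths = paths
            self._rr = itertools.cycle(paths)

        def submit(self, command):
            # Try each path at most once before giving up.
            for _ in range(len(self.paths)):
                path = next(self._rr)
                if not path.alive:
                    continue
                try:
                    return path.submit(command)
                except IOError:
                    path.alive = False  # mark failed, try the next path
            raise IOError("all paths failed")

    # Usage: two HBAs, one of which dies mid-stream.
    hba0, hba1 = Path("hba0"), Path("hba1")
    balancer = WedgeBalancer([hba0, hba1])
    print(balancer.submit("READ lba=0"))    # dispatched to hba0
    hba1.alive = False
    print(balancer.submit("WRITE lba=8"))   # hba1 skipped, served by hba0

[The retry loop is the point of Julian's remark about failure probability:
with two independent links each failing with probability p, the chance
that at least one fails is 1 - (1 - p)^2, roughly 2p. Without the failover
step, each extra path is simply an extra way for a command to fail; with
it, all paths must fail before the command does.]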