RE: iSCSI: "Wedge" drivers

Julo wrote:
>I understand 3 and I somehow expected it. If I would have to
>reimplement this under iSCSI I would choose a plug-in in
>iSCSI to do it (a policy module) and iSCSI would clearly
>support such a construct (and make it also future-proof).
>I have some trouble with 1 and 2. In whatever design you
>choose at the array level you have to have some sharing as
>some commands are targeted to a LU and all its queues
>(i.e. separation per initiator is never complete if you
>take SAM seriously). However some design might choose to
>share only for those commands - and talking to you
>with your chair hat off - I don't think that you are with
>one of those (:- I am not sure I understand 4 either - if
>the array does not "sense" the fail over.

I have been reading about the "wedge drivers" and their "load balance and fail over recovery" with great interest. I am not familiar with mainframe I/O design, but I have worked on SCSI and fibre channel host bus adapters (HBAs) for a long time, and I would like to apply those discussions to the adapters I know. Understanding the HBAs greatly simplifies the discussion of the wedge driver and its load balancing and recovery.

Today's SCSI and fibre channel HBAs execute a SCSI command atomically, without any intervention from the device driver. When there are multiple HBAs in one server, they do not share command execution state; each HBA executes its own SCSI commands atomically.

The HBA and its driver perform discovery of SCSI or FCP devices at power-up. The driver numbers the devices sequentially and creates a table that maps each device number to a SCSI ID or a fibre channel 24-bit address. When there are multiple HBAs, it is the responsibility of the upper-level drivers to direct SCSI commands through the appropriate HBA; the HBA itself could not care less.

For an iSCSI HBA, there will be no discovery at power-up. Instead, the HBA will monitor ARP and FARP requests and responses to create an IP-to-FC-address translation table. (This discussion does not exclude an Ethernet NIC.) The HBA relies on the FCP/SCTP driver to create endpoints and associations for connection-oriented exchanges. The iSCSI driver, running above the FCP/SCTP driver and below the SCSI class driver, will be responsible for the SCSI-over-IP messages. In fact, the iSCSI HBA is no different from a NIC except for the FCP/RDMA support.

When there are multiple iSCSI HBAs, the FCP/SCTP driver will be responsible for determining whether an IP device can be reached through a different HBA. (Whether an IP device can be reached at another IP address is beyond the scope of this discussion.) We may choose to include a subset of the SCTP function in the iSCSI driver if we do not want to wait for an SCTP implementation; in the discussion below, the iSCSI driver is assumed to include such a subset.

Knowing that there is an alternative HBA reaching the same IP device, the iSCSI driver is free to send SCSI-over-IP messages to the HBA of its choice. A response must come back on the same HBA, because it is difficult for two HBAs to share execution state. For example, if we set up one HBA to do the RDMA function, it is not nice to have the data coming back on the other HBA. Both fail over and load balance can be done in the iSCSI driver relatively easily, because it is fully aware of which alternative HBAs reach the same IP device.
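To make the last point concrete, here is a minimal C sketch of how an iSCSI driver might track the HBAs that reach one IP device and pick a path per command. This is an illustration only, not code from any real driver; the structure and function names (iscsi_device, iscsi_select_path, iscsi_path_failed, and so on) are assumptions made for this example.

/*
 * Hypothetical sketch: per-device path table kept by the iSCSI driver.
 * All names here are invented for illustration.
 */
#include <stddef.h>
#include <stdint.h>

#define MAX_PATHS 4

struct hba;                         /* opaque handle for one adapter */

struct iscsi_path {
    struct hba *hba;                /* HBA that can reach the device   */
    int         alive;              /* cleared when the HBA fails      */
    uint32_t    inflight;           /* commands issued, not completed  */
};

struct iscsi_device {
    uint32_t          ip_addr;      /* IP address of the target        */
    unsigned          npaths;
    struct iscsi_path path[MAX_PATHS];
};

/*
 * Load balance: pick the live path with the fewest outstanding commands.
 * Because HBAs do not share execution state, the response (and any RDMA
 * data) must come back on the HBA the command was issued through, so the
 * caller records the chosen path with the command.
 */
static struct iscsi_path *iscsi_select_path(struct iscsi_device *dev)
{
    struct iscsi_path *best = NULL;
    unsigned i;

    for (i = 0; i < dev->npaths; i++) {
        struct iscsi_path *p = &dev->path[i];
        if (!p->alive)
            continue;
        if (best == NULL || p->inflight < best->inflight)
            best = p;
    }
    return best;                    /* NULL: no HBA reaches the device */
}

/*
 * Fail over: when an HBA dies, mark its paths down; later calls to
 * iscsi_select_path() route new commands through a surviving HBA.
 */
static void iscsi_path_failed(struct iscsi_device *dev, struct hba *dead)
{
    unsigned i;

    for (i = 0; i < dev->npaths; i++)
        if (dev->path[i].hba == dead)
            dev->path[i].alive = 0;
}

The point of the sketch is simply that the path choice lives entirely in the iSCSI driver, which is why both policies come relatively cheaply there.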
In theory, we could break up the iSCSI driver by moving the SCSI-to-IP packet conversion function into the HBA and keeping all the remaining functions in the driver. If we do so, the SCSI command will be executed atomically inside the HBA.
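As a rough illustration of where that split would fall, the following C sketch shows the kind of interface the HBA might export to a thinner iSCSI driver. It is an assumption-laden outline, not any real HBA API; the names (struct hba_ops, queue_cmd, struct scsi_cmd) are invented here.

/*
 * Hypothetical interface sketch: if the SCSI-to-IP conversion moves into
 * the HBA, the iSCSI driver hands the HBA a plain SCSI command and a
 * destination, and the HBA builds and sends the SCSI-over-IP message
 * itself, completing the command atomically.
 */
#include <stddef.h>
#include <stdint.h>

struct hba;

struct scsi_cmd {
    uint8_t  cdb[16];           /* SCSI command descriptor block        */
    uint8_t  cdb_len;
    void    *data;              /* data buffer for the transfer, if any */
    size_t   data_len;
    void   (*done)(struct scsi_cmd *cmd, int status); /* completion     */
};

/*
 * Operations the HBA would export to the iSCSI driver.  Path selection,
 * fail over, and load balance stay above this line; SCSI-to-IP packet
 * conversion and transmission move below it.
 */
struct hba_ops {
    /* queue one SCSI command to the IP device; the HBA performs the
     * conversion and calls cmd->done() when the command completes */
    int (*queue_cmd)(struct hba *hba, uint32_t dest_ip,
                     struct scsi_cmd *cmd);
};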