
    RE: Consensus Call on Urgent Pointer.



    At 01:10 PM 11/20/00 +0000, Dick Gahan wrote:
    
    
    >client----------------switch---------------------switch------------------server
    
    Too simplistic for the majority of data centers.  The one sitting right 
    next to me has several hundred servers with multiple switches (varying from 
    4 to 8) between any two; the same is true of many customer environments.  In 
    any case, it isn't really the configuration that is important but the 
    packet drop rate, due either to corruption (very low rates should be 
    assumed - on the order of 10^-9 to 10^-12) or to congestion, which ranges 
    from 1-3% in the Internet (depending upon which set of measurements you go 
    by) and will vary within the data center depending upon topology.  With 
    dark fiber providers, one can provision around the congestion problem, but 
    at a higher cost - the same applies to data centers.
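    
    To make those drop rates concrete, here is a minimal sketch (my arithmetic, 
    not part of the original exchange) applying the well-known Mathis et al. 
    steady-state TCP throughput bound, BW <= (MSS/RTT) * 1.22/sqrt(p), with an 
    assumed 1460-byte MSS and the ~150 us LAN round trip derived below:
    
        # Sketch: TCP throughput ceiling under packet loss (Mathis et al.).
        # The MSS and RTT values are illustrative assumptions.
        import math
        
        MSS = 1460 * 8   # maximum segment size, in bits
        RTT = 150e-6     # assumed LAN round-trip time, in seconds
        
        for p in (1e-12, 1e-9, 0.01, 0.03):   # corruption vs. congestion rates
            bw = (MSS / RTT) * 1.22 / math.sqrt(p)
            print(f"p = {p:g}: ceiling ~ {bw / 1e9:,.2f} Gb/s")
    
    At the corruption-only rates the ceiling is effectively unbounded, while 
    Internet-style congestion loss of 1-3% caps a single connection well below 
    GbE line rate - which is why loss rate matters more than the specific 
    configuration.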
    
    
    >Assume they are connected with copper at the max allowable distance, i.e. 100m
    >between devices.
    >
    >Speed of light is 300m/us and I'll assume speed of signal on wire at 0.5c, i.e.
    >150m/us: 300m => 2us delay.
    >
    >Assuming max-sized Ethernet frames, 1500 bytes => 12us to transmit at Gigabit
    >speed.
    >=> 12us from endstation to switch, 12us from switch to switch and 12us to the
    >other endstation => 36us.
    >Let's assume each switch takes 6us internally to switch the segment => 12us
    >in total.
    >Let's also assume that each endstation takes 25us to process the TCP segment.
    >This is acceptable for hardware-based TCP engines (100's of ms for
    >software-based solutions).
    
    I expect hardware implementations to be in the sub-10 usec range (most 
    likely 4-5 usec) for their operations.
    
    We have software implementations today in the 35-45 usec range for request 
    / response workloads on small packets, with all processing in 
    software.  The processing "time" would be slightly higher due to the longer 
    DMA times.
    
    >So in the LAN the memory requirements seem acceptable and not too costly.
    >Most iSCSI deployments will be in the LAN.
    
    Agreed - the majority will be in the LAN, but that does not negate the 
    need to be robust in other topologies.
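    
    As a quick check on that memory claim, here is a sketch (my arithmetic, 
    using the figures quoted above) of the LAN round-trip time and the 
    resulting bandwidth-delay product, i.e. the unacknowledged data a GbE 
    sender must buffer for retransmission:
    
        # Sketch: LAN RTT and bandwidth-delay product from the quoted figures.
        prop      = 2e-6         # 300m of copper at 150 m/us
        serialize = 3 * 12e-6    # 1500-byte frame serialized on 3 GbE hops
        switching = 2 * 6e-6     # two switches at 6 us each
        endpoint  = 25e-6        # TCP processing per endstation (hardware)
        
        one_way = prop + serialize + switching + endpoint   # 75 us
        rtt     = 2 * one_way                               # 150 us
        bdp     = 1e9 * rtt / 8                             # bytes in flight
        
        print(f"RTT: {rtt * 1e6:.0f} us")
        print(f"GbE bandwidth-delay product: {bdp / 1024:.1f} KiB per connection")
    
    Roughly 18 KiB of in-flight data per connection, which supports the 
    "acceptable and not too costly" conclusion for the LAN case.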
    
    >MAN Environment.
    >
    >Assume a 10km (max FC distance?) fibre optic connection
    
    40 km should be the base value (an FC protocol limitation), with up to 100 km 
    being a better value given customer requirements for most MAN environments.
    
    For all of these configurations, one should do the calculations for 10 GbE 
    as a design point comparison.  Most will agree that storage devices will be 
    able to use 10 GbE bandwidth by 2002-2003 (or at least an aggregation of 
    GbE in 2-4 port configurations).  I'd also recommend adding 40 Gb (already 
    deployed by some today) and 100 Gb (prototype optics have been 
    demonstrated) design points to the above and redoing all of the calcs just 
    to show the entire data rate range; a sketch of that follows.
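    
    One way to redo those calcs (my sketch, assuming ~5 us/km propagation in 
    fibre, roughly 0.67c, and ignoring switch and host latency, which is 
    negligible over these distances) is to tabulate the bandwidth-delay 
    product across the distances and rates mentioned above:
    
        # Sketch: per-connection buffering (bandwidth-delay product) for the
        # MAN distances and link rates under discussion.  Propagation speed
        # in fibre is an assumed ~5 us/km.
        rates_gbps = (1, 10, 40, 100)
        dists_km   = (10, 40, 100)   # max FC distance, base value, customer req
        
        for d in dists_km:
            rtt = 2 * d * 5e-6       # round-trip time, in seconds
            row = ", ".join(f"{r} Gb/s: {r * 1e9 * rtt / 8 / 1024:,.0f} KiB"
                            for r in rates_gbps)
            print(f"{d:>3} km (RTT {rtt * 1e6:,.0f} us) -> {row}")
    
    At 100 km and 100 Gb/s the in-flight data grows to roughly 12.5 MB per 
    connection, which is why the MAN cases, not the LAN, drive the memory 
    sizing discussion.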
    
    Mike
    
    

