|
Re: TCP (and SCTP) sucks on high speed networks

> Matt Wakeley:
> Consider a 10 Gb/s link to a destination halfway around the world.
> A packet drop due to link errors (not congestion or infrastructure
> products) can be expected about every 20 seconds.  However, with
> an RTT of 100 ms (not even across the continent), if a TCP connection
> is operating at 10 Gb/s, the packet drop (due to link error) will
> drop the rate to 5 Gb/s.  It will take 4 *MINUTES* for TCP to ramp
> back up to 10 Gb/s.

Further analysis could make this case stronger.  For example:

I suspect your figure of one link-corrupted packet every 20 seconds
assumes some particular encoding method.  What sort of coding would be
required to reduce the link's uncorrected error rate by a factor of 10
or 100?

Are you envisioning that the entire 10 Gb/s link will be occupied by a
single TCP connection?  That is, are you thinking about a 10 Gb/s
*link* or a 10 Gb/s *flow*?

When do we expect 10 Gb/s cross-country paths where link losses
dominate congestion losses?

How does the bandwidth-delay product compare with the amount of
outstanding data that SCSI can have in flight for a single device?
Similarly, how much data will a typical application be able to transfer
before needing to stop for a filesystem-level synchronization operation
(such as locking, waiting for data to commit to disk, doing a directory
update, or allocating more disk quota)?

Dave Eckhardt
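The two figures being debated above can be checked with back-of-envelope arithmetic.  A short sketch, using my own illustrative parameters (a 1e-12 bit-error rate and standard 1500-byte segments, neither of which appears in the original message): the first function estimates how often a given bit-error rate corrupts a packet, and the second estimates how long standard TCP congestion avoidance (one MSS of window growth per RTT) takes to climb back from half the bandwidth-delay product to the full window.  Note that with 1500-byte segments the recovery time works out to roughly 70 minutes, so the quoted 4-minute figure presumably assumes much larger segments.

```python
def drop_interval_s(link_bps, ber):
    """Mean seconds between corrupted packets on a fully loaded link,
    assuming independent bit errors (and at most one error per packet,
    so packet size cancels out)."""
    errors_per_s = link_bps * ber
    return 1.0 / errors_per_s

def aimd_recovery_s(link_bps, rtt_s, mss_bytes):
    """Seconds for congestion avoidance (+1 MSS of cwnd per RTT) to
    grow from half the bandwidth-delay product back to the full BDP."""
    bdp_bytes = link_bps * rtt_s / 8          # window needed to fill the pipe
    rtts_needed = (bdp_bytes / 2) / mss_bytes # halved window must regrow
    return rtts_needed * rtt_s

if __name__ == "__main__":
    link = 10e9   # 10 Gb/s link
    rtt = 0.1     # 100 ms round-trip time
    # Assumed BER of 1e-12 (a common spec target, not Wakeley's figure):
    print("seconds between corrupted packets:", drop_interval_s(link, 1e-12))
    # Recovery time with standard 1500-byte segments, in minutes:
    print("minutes to re-fill the pipe:", aimd_recovery_s(link, rtt, 1500) / 60)
```

The key scaling is visible in the formula: recovery time grows with (bandwidth x RTT) / MSS, which is why both larger segments and shorter paths shrink the penalty, and why a 10 Gb/s *flow* is so much worse off than a 10 Gb/s *link* shared by many connections.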
Last updated: Tue Sep 04 01:06:13 2001