Re: Three states for a binary bit

I think it has quite often been asked why over-engineering cannot remove congestion. Over-engineering can never be the ultimate solution to the congestion problem. As mentioned earlier, congestion is transient behaviour, and it can occur even in an over-provisioned network, since the Internet is open to all. A higher-bandwidth network is like a wide road: it handles heavy traffic most of the time, but a sudden surge of traffic from anywhere can still cause a jam. Hence a solution very close to ideal would combine greater control at the edges with improved algorithms at the individual sources to handle congestion and corruption.

Taking this into consideration, TCP certainly needs improved algorithms for times of congestion, but that alone won't help it or the network. The effort to make TCP less benevolent would certainly help TCP to some extent, but it would also mean pushing a greater amount of data into the network, which in turn could be detrimental to TCP itself, since it increases congestion further. Hence there is a greater need for a congruent effort by both the network, represented by the edge gateways (as they have a better picture of the network as a whole), and the sources, whether using TCP or other protocols, to reduce congestion and in turn benefit from it. Concentrating on only one of these is not going to solve the problem; over-provisioning certainly has not, as can be seen from the past. The "need for more" is an inherent quality of man which can never be fulfilled, but control over it would certainly help to make things better.

Renjish.

Wise men talk because they have something to say; fools talk because they have to say something.
	Plato

On Tue, 5 Dec 2000, RJ Atkinson wrote:

> At 19:52 05/12/00, George Michaelson wrote:
>
> > If the marginal cost of 'overengineering' dropped radically,
> > so we could get to 100x in front of current end-to-end requirements,
> > would people stop dicking with protocols to make them more efficient?
>
> In my experience, this is possible today in many countries.
> One lights long-haul glass with WDM that connects routers, then
> builds MANs and data centres using GigE (soon to be 10GigE) over
> glass at up to ~70km between sites. And it is a lot more affordable
> to over-engineer using Ethernet technology or something else simple,
> than to build something more complex and watch operations folks fail.
>
> Other folks in the ISP business will say roughly the same
> thing. In many countries, there is a ton of unlit fibre in the
> ground or in the process of being laid. 1/10 GigE is a very
> cost-effective way to over-engineer.
>
> It doesn't seem to stop people messing with micro-optimisations
> of protocols, however, to answer your other question.
>
> Cheers,
>
> Ran
> rja@inet.org
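[A toy model may make the first message's point concrete. The sketch below is not from the thread; it is a hypothetical AIMD (additive-increase, multiplicative-decrease) simulation, with made-up parameters (`capacity`, `flows`, `backoff`), illustrating why a "less benevolent" sender that backs off less on congestion tends to drive the shared link into loss events more often.]

```python
def aimd_losses(capacity, flows, backoff, rounds=1000):
    """Count congestion events for `flows` AIMD senders sharing a link.

    Each round every sender grows its window by 1 (additive increase).
    Whenever the total offered load exceeds `capacity`, one congestion
    event is counted and every sender scales its window by `backoff`
    (multiplicative decrease).
    """
    windows = [1.0] * flows
    losses = 0
    for _ in range(rounds):
        windows = [w + 1.0 for w in windows]          # additive increase
        if sum(windows) > capacity:                   # congestion signal
            losses += 1
            windows = [w * backoff for w in windows]  # back off

    return losses

# A gentler backoff (0.9 instead of the conventional 0.5) keeps the link
# near saturation, so congestion events recur far more frequently.
standard = aimd_losses(capacity=100, flows=4, backoff=0.5)
greedy = aimd_losses(capacity=100, flows=4, backoff=0.9)
print(standard, greedy)
```

Under these toy assumptions the aggressive senders see several times more congestion events than the standard ones, which is the email's point: pushing harder into a shared network ultimately hurts the sender too.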