Friday, 22 February 2013 13:07
Network skeptics get a hearing. Martin Geddes is making waves with the contention that many networks have unidentified problems. Most of us assume that a network with consistent speed and reasonable latency is fine, but that's not always true. "Bufferbloat" occurs when oversized buffers hold so many queued packets that latency balloons and TCP's acknowledgement-based flow control loses its timely feedback. http://bit.ly/XQW4No There's related work by John Day at http://rina.tssg.org/docs/JohnDay-LostLayer120306.pdf
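The arithmetic behind bufferbloat is simple: queuing delay grows linearly with the number of packets sitting in a buffer ahead of yours, so a large buffer on a modest link can add whole seconds of latency. A minimal sketch, using illustrative figures (a 1500-byte MTU and an assumed 10 Mbps uplink, not measurements from any real network):

```python
PACKET_BYTES = 1500          # typical Ethernet MTU
LINK_MBPS = 10               # assumed uplink speed, for illustration only

def queuing_delay_ms(queued_packets: int, link_mbps: float = LINK_MBPS) -> float:
    """Time the last queued packet waits before transmission, in ms."""
    bits_queued = queued_packets * PACKET_BYTES * 8
    return bits_queued / (link_mbps * 1_000_000) * 1000

for packets in (10, 100, 1000):
    print(f"{packets:5d} packets queued -> {queuing_delay_ms(packets):8.1f} ms delay")
# 1000 queued packets on this link mean over a second of added delay,
# even though throughput ("bandwidth") looks perfectly healthy.
```

This is why a speed test can report full bandwidth while interactive traffic crawls: the bloat shows up as delay, not as lost throughput.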
In my experience broadband networks tend to be excellent, with few congestion problems on major networks. Martin sees occasional exceptions and suggests that a better way to manage networks is to measure and report congestion problems and delays. He writes about "quality attenuation" and sent me this description of the issue.
"There are two ways of thinking about networks, depending on which side of the mirror you stand. The dominant framing is that networks do something positive: they deliver packets. The more packets they deliver, the better. What you need in order to do this is to have lots of 'bandwidth'.
The alternative (and paradoxical) framing is to see them as machines that do something negative: they cause loss and delay to data in transmission. In this model, the job of a network is to supply 'fresh information' between computation processes, and ensure that each flow of information is not delivered with more mouldy delay and rotten loss than the process can tolerate.
This 'freshness' is about quality rather than bandwidth. What networks do is to impair flows, i.e. to create and allocate quality attenuation. It turns out that this model is far more congruent with the reality of networks than the bandwidth model.
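To make the "impairment" framing concrete, here is a toy sketch of how one might characterize a flow the way Geddes describes: instead of counting delivered bytes, summarize the loss and delay the flow experienced, with loss modelled as infinite delay so both impairments sit on a single scale. The sample delays are invented for illustration; this is my sketch of the idea, not the actual method of Geddes' colleagues.

```python
import math
import statistics

# Invented per-packet one-way delays for one flow, in milliseconds.
# math.inf marks a lost packet: loss treated as unbounded delay.
samples_ms = [12.1, 13.4, 11.9, 55.0, 12.7, math.inf, 12.3, 14.0]

delivered = [d for d in samples_ms if math.isfinite(d)]
loss_rate = (len(samples_ms) - len(delivered)) / len(samples_ms)

print(f"loss rate:       {loss_rate:.1%}")
print(f"median delay:    {statistics.median(delivered):.1f} ms")
print(f"95th pct. delay: {sorted(delivered)[int(0.95 * len(delivered))]:.1f} ms")
```

A bandwidth test over the same flow would report it as healthy; the impairment summary exposes the outlier delay and the dropped packet, which is exactly the information a latency-sensitive application needs.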
This is rather like having a geocentric vs heliocentric model of the solar system. You are looking at the same bodies and relationships, but one vantage point both makes their relationships much simpler and enables a predictive theory ("gravity") that lets you model the motion of heavenly bodies you have not even observed yet. Likewise, the quality attenuation model of networking gives you a predictive power that was previously thought impossible. We can finally move from an alchemist craftsman-tinkerer model to a full-blown networking science as a result.
For more information, see this essay or this presentation. The work of my colleagues Neil Davies and Peter Thompson is comparable to that of Turing and Church. Instead of a theory of computability, they have a viable theory of translocatability.
The problems of application failure due to contention and network failure due to chaotic behaviour are not new or novel. The Internet is more like a cult than a technology - a system of belief and belonging that is sealed against outside heretical viewpoints, however well-anchored in reality and observation they may be."
My take is that the problems are fewer than Geddes believes, but I don't have enough empirical data to confirm that. This is definitely worth far more research.
Last Updated on Friday, 22 February 2013 15:08