Imagine sending a packet in the mail. You are in a hurry and want it to reach the receiver in less than a second.
The packet first waits in a queue in your room. Then it passes through a security appliance that checks what kind of content the letter carries. Then a messenger accepts your envelope for delivery: it reads the address on the packet and decides in which direction to send it off. Anywhere between 4 and 30 messengers handle your package along the way, each one taking it and forwarding it in the right direction.
On the receiving end the envelope has to pass through another security check.
Your package will, however, reach the destination in only 0.060 seconds! That's 60 milliseconds. Pretty good? This is the Internet. In fact, if your path is short (meaning the data passes through only two or three of the special devices called network routers), you might see as little as 6-7 ms of lag. Your packets travel from one part of the network to another in, essentially, a flash.
The TCP/IP protocol pair works approximately this way. It is a standardized way of propagating signals through a connected network in which, at least ideally, any IP address can reach any other IP address. IP knows the numbers and directions, whereas the higher-level TCP knows how reliable data streams are constructed on top of them.
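That division of labor can be sketched with Python's standard socket module: a numeric address and port identify the endpoint (IP's job), while the connected socket delivers a reliable byte stream (TCP's job). A minimal loopback echo, with illustrative names:

```python
import socket
import threading

def echo_once(server_sock):
    """Accept one connection and echo back whatever arrives."""
    conn, _ = server_sock.accept()
    conn.sendall(conn.recv(1024))
    conn.close()

# Server side: bind to the loopback IP address, port 0 = "pick any free port"
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=echo_once, args=(srv,), daemon=True).start()

# Client side: IP locates the endpoint, TCP carries the byte stream
cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"hello")
reply = cli.recv(1024)   # the same bytes come back: b"hello"
cli.close()
```

Everything above the `connect` is addressing (the IP layer's concern); everything after it is just reading and writing a stream, which is exactly the abstraction TCP sells to applications.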
Though 60 milliseconds is fast, it is still slow for some very latency-critical applications. There are thousands of different kinds of apps that use TCP/IP, each with slightly different requirements. Gaming and voice transmission are very latency-critical and want very fast lines, even though the width (capacity, bandwidth) does not have to be that large.
What's the naive way to get more speed? Probably to upgrade the gear: put better equipment in place. In fact, companies spend thousands or tens of thousands of dollars per branch office to build an "industry grade" network. But not all of those bucks buy actual capacity (speed, bandwidth). Some of the money goes into making sure that only the correct packets get through into the company's internal network. That's security.
Some of the latency is inevitable in our world. The limiting factor is very fundamental: the speed of light (electrical signals in a wire travel at a comparable, slightly lower speed), because modern telecommunications, in fact any telecom since about the 1860s, has relied on propagating electrical signals.
Now let's take that logic and inspect it a bit more. "Speed of light" sounds like something that only has to do with science-fiction space shuttles. But actually c, the constant that tells us how fast light travels, is very mundane. It is approximately 299,792,458 m/s. That is a large number: roughly 300,000 kilometers per second. But in another context, even that of an Earth-Moon-Earth trip, it is not such a vast distance covered per second.
Think about Earth and the Moon, for example: the Moon is about 1.3 light-seconds away from us. The distance between Earth and the Moon varies a little, but as of 2011 it is approximately 384,467 km. So the round trip from Earth to the Moon (reflecting) and back to Earth would take around 2,560 milliseconds, roughly 2.6 seconds.
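The arithmetic is a one-liner worth checking, since the numbers are the whole point:

```python
C = 299_792_458            # speed of light in vacuum, m/s
EARTH_MOON_M = 384_467_000  # approximate Earth-Moon distance, meters

one_way_s = EARTH_MOON_M / C            # time for light to reach the Moon
round_trip_ms = 2 * one_way_s * 1000    # there and back, in milliseconds
# round_trip_ms is about 2565 ms, i.e. roughly 2.6 seconds
```

The same formula, distance divided by c, gives the hard lower bound on any network path: no amount of better gear can beat it.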
In an ideal world, packets on the Internet, or any global network, would really travel as fast as c allows. But in the real world each packet is like a car driving across half the world: it has to stop at random intersections on the Information Superhighway. These stops are due to queues in routers and switches, two fundamental devices in telecommunications.
A router inspects packets and directs them toward their likely destination. All of this takes time. A router, whether it is ultimately a computer or a system-on-chip (SoC), is hardware nevertheless: a sequential processor of bits, the fundamental units of information.
In addition, if the router already has a workload, it pauses the packet a little and keeps it in a queue. If the queue grows too long, the router simply drops packets, and it does so in the manner deemed fairest: while the queue is full, newly arriving packets are the ones discarded, a policy known as tail drop.
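A toy sketch of that behavior (class and field names are illustrative, not any real router's API): a fixed-capacity queue serves packets in arrival order, and once it is full, each new arrival is simply discarded.

```python
from collections import deque

class DropTailQueue:
    """Toy model of a router's output queue with tail-drop policy."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.queue = deque()
        self.dropped = 0

    def enqueue(self, packet) -> bool:
        if len(self.queue) >= self.capacity:
            self.dropped += 1   # queue full: the newest arrival is dropped
            return False
        self.queue.append(packet)
        return True

    def dequeue(self):
        # Packets that made it in are served first-come, first-served
        return self.queue.popleft() if self.queue else None

q = DropTailQueue(capacity=3)
for i in range(5):
    q.enqueue(i)
# packets 0, 1, 2 are queued; packets 3 and 4 arrive at a full queue and are dropped
```

The queue itself is strictly FIFO; only the drop decision targets the newest arrivals, which is why calling it "LIFO" would be misleading.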