
Byte-Sized Lesson: What Causes Latency?

(Image: Packages delivered via computer. Image credit: Getty Images)

Latency is discussed widely in the AV industry. Here, however, we'll focus on one aspect that is almost never considered: the impact of latency on TCP-based video flows. These include adaptive bitrate streams such as HLS and DASH, as well as NDI, which also typically runs over TCP. Because these flows ride on TCP, their transmission rate, pacing, and retransmission behavior are governed by the TCP algorithm. This is where we need to look more closely.

There are three popular versions of TCP: Reno, Cubic, and Compound. Their essential operation is similar, so for simplicity we will base this discussion on TCP Reno. When a device has a TCP stream to transmit, it negotiates with the proposed receiver and learns the receive window that the partner device will be using. This is the number of bytes the receiver can hold in its receive buffer; common window sizes are 128 KB, 256 KB, or multiples of these. The sender also maintains its own sending limit, the congestion window (cwnd), which typically starts at four TCP segments. When transmission begins, all four segments are sent, and the sender waits for acknowledgement of that block of four. The receiver's behavior is quite different: it acknowledges every other segment. So it will acknowledge the second segment and then the fourth.
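The opening exchange described above can be sketched in a few lines of Python. This is an illustration of the pattern, not a real TCP implementation; the segment size and window values are assumptions chosen to match common figures.

```python
# Illustrative sketch (not real TCP): the receiver advertises a window,
# the sender starts with an initial congestion window (cwnd) of four
# segments, and the receiver acknowledges every other segment
# (the delayed-ACK pattern described in the article).

SEGMENT_SIZE = 1460          # typical TCP payload size in bytes (assumed)
RECEIVE_WINDOW = 128 * 1024  # 128 KB advertised by the receiver (assumed)

def initial_burst(cwnd_segments=4):
    """Return the segment numbers sent and those the receiver ACKs."""
    sent = list(range(1, cwnd_segments + 1))
    # Delayed ACK: the receiver acknowledges every second segment.
    acked = [s for s in sent if s % 2 == 0]
    return sent, acked

sent, acked = initial_burst()
print(sent)   # [1, 2, 3, 4]
print(acked)  # [2, 4]
```

Note that acknowledging segment 4 implicitly covers segments 1 through 3 as well, since TCP acknowledgements are cumulative.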


With the assurance that the receiver has all four segments, the sender raises cwnd to eight, double its previous value. It immediately sends all eight segments and awaits acknowledgement of that group. The receiver again acknowledges every other segment. Following the same pattern, the sender continues to double cwnd and await the acknowledgement of each entire block of data. This rapid escalation slows when cwnd reaches half of the receiver's advertised window; from that point, the sender increases cwnd in increments of one segment. Notice that if cwnd is eight and just one of the segments is lost, dropped, or simply delayed in a busy buffer, the sender must wait for acknowledgement of every segment in the block.
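The growth pattern just described, doubling until cwnd reaches a threshold, then growing by one segment per round trip, can be sketched as follows. The window size of 90 segments is an assumption for illustration; the threshold of half the advertised window follows the article's description.

```python
# Sketch of the cwnd growth pattern described above: exponential doubling
# (slow start) until cwnd reaches a threshold -- taken here as half the
# advertised window -- then linear growth of one segment per round trip.

def cwnd_schedule(initial_cwnd=4, window_segments=90, rounds=10):
    """Return cwnd, in segments, at the start of each round trip."""
    threshold = window_segments // 2   # half the advertised window
    cwnd = initial_cwnd
    history = []
    for _ in range(rounds):
        history.append(cwnd)
        if cwnd < threshold:
            cwnd = min(cwnd * 2, threshold)  # slow start: double each round
        else:
            cwnd += 1                        # linear growth afterward
    return history

print(cwnd_schedule())
# [4, 8, 16, 32, 45, 46, 47, 48, 49, 50]
```

The jump from exponential to linear growth is visible at round five, where cwnd caps at the threshold of 45 segments and then creeps up by one per round trip.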

The purpose of this process is to gradually saturate the link between the sender and the receiver. The sender backs off its transmission rate when it learns that a packet was dropped by the network or the receiver. While that is a topic for another lesson, we now understand enough to assess the impact of latency on the TCP process. Latency limits the transmission rate because the sender must wait for all the packets sent under the current cwnd to be acknowledged. Latency in the return path is just as critical: if acknowledgements are slow to get back, the sender simply keeps waiting for the block to be acknowledged. This aspect of TCP operation is typically overlooked, and it can be a significant problem on the asymmetric bandwidth links sold to consumers. Telco DSL and cable links often offer ten times more bandwidth downstream than upstream. When receiving ABR video, it is the upstream connection that carries the acknowledgements, so heavy traffic on the upstream link, such as a video upload, will significantly slow any download.
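The latency effect described above can be quantified with the classic bound that a window-limited TCP connection cannot exceed window size divided by round-trip time. The RTT figures below are illustrative assumptions, not measurements, but they show how quickly latency erodes throughput even when the link itself has plenty of capacity.

```python
# Upper bound on window-limited TCP throughput: the sender can have at
# most one window of data in flight per round trip, so
#   throughput <= window_bytes * 8 / rtt_seconds  (bits per second).

def max_throughput_mbps(window_bytes, rtt_seconds):
    """Throughput ceiling in megabits per second for a given window and RTT."""
    return (window_bytes * 8) / rtt_seconds / 1e6

window = 128 * 1024  # 128 KB window, one of the common sizes noted earlier

for rtt_ms in (10, 50, 200):  # illustrative round-trip times
    mbps = max_throughput_mbps(window, rtt_ms / 1000)
    print(f"RTT {rtt_ms:3d} ms -> at most {mbps:.1f} Mbps")
```

With a 128 KB window, a 10 ms round trip allows roughly 105 Mbps, but a 200 ms round trip caps the same connection near 5 Mbps. A congested upstream link that delays acknowledgements inflates the effective RTT and drags the download ceiling down with it.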

Phil Hippenstel, EdD, is a regular columnist with AV Technology. He teaches information systems at Penn State Harrisburg.