HTTP/1 is basically a request and response protocol: the browser asks for a resource (be it an HTML page, a CSS file, an image… whatever) and then waits for the response. During this time that connection cannot do anything else – it is blocked waiting on this response.
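To make that concrete, here is a minimal sketch of one HTTP/1 request/response cycle over a raw socket. It uses a throwaway local server thread (invented for this example) so it is self-contained; real servers obviously do much more. The point is that the client sends its request and then the connection sits blocked until the whole response has come back.

```python
# Minimal sketch of HTTP/1's request/response cycle over a raw socket,
# using a throwaway local server so the example is self-contained.
import socket
import threading

def tiny_server(listener: socket.socket) -> None:
    """Accept one connection, read one request, send one canned response."""
    conn, _ = listener.accept()
    conn.recv(4096)  # read (and ignore) the request
    body = b"hello"
    conn.sendall(
        b"HTTP/1.1 200 OK\r\n"
        b"Content-Length: " + str(len(body)).encode() + b"\r\n"
        b"Connection: close\r\n\r\n" + body
    )
    conn.close()

listener = socket.create_server(("127.0.0.1", 0))  # port 0 = any free port
port = listener.getsockname()[1]
threading.Thread(target=tiny_server, args=(listener,), daemon=True).start()

# The client sends one request; the connection is then tied up until the
# full response arrives - it cannot be used for anything else meanwhile.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"GET /index.html HTTP/1.1\r\nHost: localhost\r\n\r\n")
response = b""
while chunk := client.recv(4096):
    response += chunk
client.close()

print(response.split(b"\r\n\r\n", 1)[1])  # → b'hello'
```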
HTTP/1 did introduce the concept of pipelining so you could send more requests while you were waiting. That should improve things, as there is now no delay in sending requests and the server can start processing them earlier. However, responses must still come back in the order requested, so it is not a true multi-request protocol – but it is a good improvement (if it worked – see below).

Pipelining introduced a head-of-line blocking (HOLB) problem on the connection: if the first request takes a long time (e.g. it needs to do a database lookup and then some other intensive processing to create the page), then all the other requests are queued up behind it, even if they are ready to go. In fact, truth be told, HOLB was already a problem even without pipelining, as the browser had to queue up requests anyway until the connection was free to send them – pipelining just made the problem more apparent at the connection level.
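The queuing effect above can be shown with a toy model (the function and numbers are invented for illustration): all requests are pipelined at t=0 and the server processes them concurrently, but responses must be delivered in request order, so one slow response delays everything behind it.

```python
# Toy model of HTTP/1 pipelining head-of-line blocking: responses must be
# delivered in request order, so response i waits for everything ahead of it.

def delivery_times(ready_at):
    """ready_at[i]: when response i is ready at the server (all requests
    pipelined at t=0). Returns when each response is actually delivered."""
    delivered, blocked_until = [], 0.0
    for t in ready_at:
        blocked_until = max(blocked_until, t)  # can't overtake earlier responses
        delivered.append(blocked_until)
    return delivered

# Request 0 needs a slow database lookup; 1 and 2 are ready almost at once,
# yet all three responses arrive only after the slow one completes.
print(delivery_times([5.0, 0.1, 0.2]))  # → [5.0, 5.0, 5.0]

# Put the slow request last and nothing is blocked.
print(delivery_times([0.1, 0.2, 5.0]))  # → [0.1, 0.2, 5.0]
```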
On top of this, pipelining was never well supported in HTTP/1: it was complicated to implement correctly and could cause security issues. So even without the HOLB issue, it still wasn't that useful.
To get around all this, HTTP/1 uses multiple connections to the server (typically 6–8 per host) so it can send requests in parallel. This takes effort and resources on both the client and server side to set up and manage. Also, TCP connections are pretty inefficient for various reasons and take time to get up to peak efficiency – by which point you've probably done the heavy lifting and no longer require multiple connections.
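Extending the earlier toy model, spreading requests across several connections means a slow response only blocks its own connection. This sketch assumes simple round-robin assignment; real browsers schedule requests more cleverly.

```python
# Toy model of HTTP/1's workaround: spread requests round-robin over
# several connections so a slow response only blocks its own connection.

def delivery_times_multi(ready_at, connections=6):
    """ready_at[i]: when response i is ready at the server. Responses on
    each connection are still delivered in order, but connections are
    independent of each other."""
    delivered = [0.0] * len(ready_at)
    for c in range(connections):
        blocked_until = 0.0
        for i in range(c, len(ready_at), connections):  # this conn's requests
            blocked_until = max(blocked_until, ready_at[i])
            delivered[i] = blocked_until
    return delivered

# With 6 connections the slow first request no longer delays the others.
print(delivery_times_multi([5.0, 0.1, 0.2]))  # → [5.0, 0.1, 0.2]
```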
HTTP/2, on the other hand, has the concept of bi-directional, multiplexed streams baked in from the start. I've a detailed explanation of what they are here: What does multiplexing mean in HTTP/2. This removes the blocking nature of HTTP/1 requests, introduces a much better, fully featured, fully supported version of pipelining, and even allows parts of one response to be sent back intermingled with other responses. All this together solves HOLB – or, more accurately, prevents it from being an issue at all.
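The intermingling works roughly like this sketch: each response is chopped into frames tagged with a stream ID, frames from different streams are interleaved on the one connection, and the receiver reassembles each stream independently. The frame layout here is invented for illustration – it is not HTTP/2's real wire format.

```python
# Sketch of HTTP/2-style multiplexing: responses are split into frames
# tagged with a stream ID, interleaved on one connection, and reassembled
# per stream at the receiver. Frame layout is illustrative, not the real one.
from itertools import zip_longest

def frames(stream_id, payload, size=4):
    """Chop a payload into (stream_id, chunk) frames of at most `size` bytes."""
    return [(stream_id, payload[i:i + size]) for i in range(0, len(payload), size)]

# Three responses in flight at once (HTTP/2 client-initiated streams use
# odd IDs).
streams = {1: b"<html>...</html>", 3: b"body{...}", 5: b"\x89PNG..."}

# Interleave the streams' frames round-robin onto "the wire".
wire = [f for group in zip_longest(*(frames(s, p) for s, p in streams.items()))
        for f in group if f is not None]

# The receiver reassembles each stream independently - no stream blocks
# another, because frames carry their stream ID.
reassembled = {}
for stream_id, chunk in wire:
    reassembled[stream_id] = reassembled.get(stream_id, b"") + chunk

print(reassembled == streams)  # → True
```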
The one point that should be noted is that while this solves HTTP HOLB, HTTP/2 is still built on TCP, and TCP has its own HOLB issue – which may actually be worse under HTTP/2, since everything now rides on a single connection! If a single TCP packet is lost, the TCP connection must request it be resent and wait for that packet to be retransmitted successfully before it can process subsequent TCP packets – even if those packets belong to other HTTP/2 streams that could, in theory, be processed during that time (as would happen with truly separate connections under HTTP/1). Google is experimenting with using HTTP/2 over non-guaranteed UDP rather than guaranteed TCP in a protocol called QUIC to resolve this issue, and this is in the process of being set as a web standard too (just like SPDY – initially a Google implementation – was standardised to HTTP/2).
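The TCP-level stall can be modelled the same way as before (function and timings invented for illustration): all HTTP/2 streams share one ordered TCP byte stream, so losing one packet holds back delivery of every later packet until the retransmit lands, even packets carrying other streams' data.

```python
# Toy model of TCP head-of-line blocking under HTTP/2: TCP only delivers
# bytes in order, so one lost packet stalls everything behind it.

def tcp_delivery(arrival, lost=frozenset(), rtt=1.0):
    """arrival[i]: when packet i first reaches the receiver. A lost packet
    effectively arrives one retransmit round-trip late; later packets sit
    in the receive buffer until the stream is in order again."""
    delivered, in_order_until = [], 0.0
    for i, t in enumerate(arrival):
        if i in lost:
            t += rtt  # retransmitted copy arrives an RTT later
        in_order_until = max(in_order_until, t)
        delivered.append(in_order_until)
    return delivered

# Packet 0 is lost: packets 1-3 (which may carry other streams' frames!)
# are buffered until the retransmit arrives at t=1.0.
print(tcp_delivery([0.0, 0.1, 0.2, 0.3], lost={0}))  # → [1.0, 1.0, 1.0, 1.0]
```

QUIC avoids this by doing its own per-stream delivery on top of UDP, so a loss only stalls the stream it belongs to.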