How does HTTP/2 solve the head-of-line blocking (HOL) issue?

HTTP/1 is basically a request-and-response protocol: the browser asks for a resource (be it an HTML page, a CSS file, an image… whatever) and then waits for the response. During that time the connection cannot do anything else – it is blocked waiting on this response.
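
As a rough sketch of what this looks like in code (using Python's standard http.client against a placeholder host and made-up paths), each request on a single HTTP/1.1 connection must be fully answered before the next one can even be sent:

```python
import http.client

# Placeholder host; the paths below are illustrative only.
conn = http.client.HTTPConnection("example.com")

for path in ["/", "/style.css", "/logo.png"]:
    conn.request("GET", path)
    # The connection is unusable until this response is fully read;
    # the next request cannot be sent in the meantime.
    resp = conn.getresponse()
    body = resp.read()
    print(path, resp.status, len(body))

conn.close()
```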

HTTP/1 did introduce the concept of pipelining, so you could send more requests while you were waiting. That should improve things, as there is now no delay in sending requests and the server can start processing them earlier. Responses must still come back in the order they were requested, so it is not a true multi-request protocol, but it is a good improvement (if it worked – see below). This introduced a head-of-line blocking (HOLB) problem on the connection: if the first request takes a long time (e.g. it needs to do a database lookup and then some other intensive processing to create the page), then all the other requests are queued up behind it, even if they are ready to go. In fact, truth be told, HOLB was already a problem even without pipelining, as the browser had to queue up requests anyway until the connection was free to send them – pipelining just made the problem more apparent at the connection level.

On top of this, pipelining in HTTP/1 was never supported all that well: it was complicated to implement correctly and could cause security issues. So even without the HOLB issue, it still wasn’t that useful.
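
For illustration only, here is a minimal pipelining sketch over a raw socket, with a placeholder host and made-up paths. As noted above, many servers and proxies handle pipelined requests poorly, so don’t expect this to behave well against an arbitrary server:

```python
import socket

HOST = "example.com"  # placeholder host

def request(path, close=False):
    conn = "close" if close else "keep-alive"
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {HOST}\r\n"
            f"Connection: {conn}\r\n\r\n").encode()

with socket.create_connection((HOST, 80)) as s:
    # Pipelining: both requests are written before any response is read.
    s.sendall(request("/slow-page") + request("/fast-asset", close=True))
    # Responses must still arrive in request order, so the /fast-asset
    # response is stuck behind /slow-page: HTTP-level head-of-line blocking.
    chunks = []
    while (data := s.recv(4096)):
        chunks.append(data)
    print(b"".join(chunks)[:300].decode(errors="replace"))
```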

To get around all this, browsers open multiple HTTP/1 connections to the server (typically 6–8 per host) so they can send requests in parallel. This takes effort and resources on both the client and server side to set up and manage. TCP connections are also fairly inefficient for various reasons (handshakes, slow start) and take time to reach peak throughput – by which point you’ve probably done the heavy lifting and no longer need the multiple connections.
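
A rough sketch of that workaround, again with a placeholder host and asset paths: open several connections and fetch in parallel, mimicking the browser’s per-host connection limit.

```python
import http.client
from concurrent.futures import ThreadPoolExecutor

HOST = "example.com"  # placeholder host
PATHS = ["/a.css", "/b.js", "/c.png", "/d.png", "/e.woff2", "/f.svg"]

def fetch(path):
    # One TCP connection per in-flight request in this sketch.
    conn = http.client.HTTPConnection(HOST)
    try:
        conn.request("GET", path)
        resp = conn.getresponse()
        return path, resp.status, len(resp.read())
    finally:
        conn.close()

# Roughly mimic the browser's ~6-connection-per-host limit.
with ThreadPoolExecutor(max_workers=6) as pool:
    for result in pool.map(fetch, PATHS):
        print(result)
```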

HTTP/2, on the other hand, has the concept of bi-directional, multiplexed streams baked in from the start. I have a detailed explanation of what they are here: What does multiplexing mean in HTTP/2. This removes the blocking nature of HTTP/1 requests, introduces a much better, fully featured, fully supported version of pipelining, and even allows parts of one response to be sent back interleaved with other responses. All this together solves HOLB – or, more accurately, prevents it from even being an issue.
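
As an illustrative sketch (not part of the original answer), the third-party httpx library can multiplex concurrent requests as separate streams over a single HTTP/2 connection. This assumes `pip install "httpx[http2]"` and uses a placeholder URL:

```python
import asyncio
import httpx

async def main():
    urls = ["https://example.com/" + p for p in ("a", "b", "c")]  # placeholders
    async with httpx.AsyncClient(http2=True) as client:
        # All three requests are in flight at once; against an HTTP/2 server
        # they are multiplexed as separate streams over one connection, so a
        # slow response does not block the others at the HTTP layer.
        responses = await asyncio.gather(*(client.get(u) for u in urls))
    for r in responses:
        print(r.url, r.status_code, r.http_version)

asyncio.run(main())
```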

The one point that should be noted is that while this solves HTTP HOLB, HTTP/2 is still built on TCP, which has its own TCP HOLB issue – and that may be worse under HTTP/2, since everything runs over a single connection! If a single TCP packet is lost, the TCP connection must request that it be resent and wait for it to be retransmitted successfully before it can process subsequent TCP packets – even if those packets belong to other HTTP/2 streams that could, in theory, be processed during that time (as would happen with truly separate connections under HTTP/1). Google experimented with running HTTP/2 over non-guaranteed UDP rather than guaranteed TCP in a protocol called QUIC to resolve this issue; QUIC has since been standardised and forms the basis of HTTP/3 (just as SPDY – initially a Google implementation – was standardised into HTTP/2).
