
New HTTP dropped two months ago: datatracker.ietf.org/doc/rfc9114/ HTTP/3 builds on QUIC, which runs on top of UDP. Kinda interesting. The snarky analysis is just that with sites so crammed with ads, vendors want faster ways to ship them. The solution should be an ad blocker 😎
Less snarky thought: I'm just interested in the trend away from TCP and its serialization. Will need to read more; curious what this offers on top of HTTP/2 / SPDY multiplexed fetches.
TCP provides the semantics of a single stream, so HTTP/2 multiplexing everything over one connection ties it all together: data is received in the order it was sent, as one stream. It depends on extremely good congestion control and properly controlled, minimal buffers.
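The "one stream" point can be sketched with a toy demultiplexer. The frame layout here is a hypothetical simplification, not the real HTTP/2 framing; the point is that the receiver can only consume frames in the exact order the bytes were sent, whatever stream they belong to:

```python
# Toy model: several HTTP/2-style streams multiplexed over one ordered
# byte stream. Frame format is a hypothetical simplification.
sent = [
    ("stream-1", "image chunk 1"),
    ("stream-3", "stylesheet"),
    ("stream-1", "image chunk 2"),
    ("stream-5", "script"),
]

# TCP delivers the single byte stream strictly in send order, so the
# receiver demultiplexes frames in exactly that order; a frame for
# stream-5 cannot be seen before every frame sent ahead of it.
streams = {}
for stream_id, payload in sent:
    streams.setdefault(stream_id, []).append(payload)

print(streams)
```

With independent connections (or QUIC streams), "script" could arrive and be consumed even if an earlier image chunk were still in flight; over one TCP connection it cannot.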
You probably want to set net.ipv4.tcp_notsent_lowat lower than 128k if you actually want HTTP/2 prioritization to work, though. Consider a client starting transfers for a dozen images and then wanting a higher-priority resource: the server will already have filled huge buffers with image data.
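On Linux that knob is a sysctl; lowering it caps how much unsent data the kernel buffers per TCP socket, so the server's prioritization logic gets a chance to run before megabytes are already committed. The 16384 value below is an illustrative assumption, not a recommendation from the thread:

```shell
# Cap unsent data buffered in the kernel per TCP socket (bytes).
# Lower values let userspace re-prioritize sooner; 16384 is illustrative.
sysctl -w net.ipv4.tcp_notsent_lowat=16384

# Persist across reboots (path/filename is a conventional choice):
echo 'net.ipv4.tcp_notsent_lowat = 16384' > /etc/sysctl.d/99-notsent-lowat.conf
```

The same limit can also be set per socket with the TCP_NOTSENT_LOWAT socket option, which is how servers usually tune it without affecting the whole host.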
Now consider what happens if some of those packets are lost and need to be retransmitted after the retransmission timer fires, etc. You're waiting on all of that because you're using a single TCP connection. The high-priority resource may be 200 bytes, but it's going to wait behind megabytes.
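Some illustrative back-of-the-envelope numbers for that scenario (the buffer size, RTT, and RTO below are assumptions, not measurements from the thread):

```python
# Toy head-of-line blocking arithmetic for one TCP connection carrying
# bulk image data ahead of a small high-priority resource.
MSS = 1460                   # bytes per TCP segment (typical Ethernet MSS)
buffered_images = 2_000_000  # bytes of image data queued ahead (assumed)
high_prio = 200              # bytes of the high-priority resource
rtt = 0.05                   # round-trip time in seconds (assumed)
rto = 0.2                    # retransmission timeout in seconds (assumed)

# Segments the 200-byte resource must wait behind (ceiling division).
segments_ahead = -(-buffered_images // MSS)

# If one early image segment is lost, TCP delivers nothing past the hole
# until the retransmission lands: everything behind it stalls, including
# the high-priority resource.
delay_from_loss = rto + rtt / 2

print(f"{segments_ahead} segments queued ahead of a {high_prio}-byte resource")
print(f"extra stall from one lost segment: ~{delay_from_loss * 1000:.0f} ms")
```

With separate connections (or QUIC's independent streams), only the stream containing the lost segment would stall; here the whole multiplexed session does.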
Download managers use 128 connections instead of 1 because most queuing is totally unfair, or at best fair across connections (fq_codel), unlike CAKE, which is fair across the hosts on either end first and then across connections. Browsers were doing the same thing pre-HTTP/2, just not intentionally.
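On Linux the difference shows up in the qdisc configuration; a rough sketch, where the interface name and bandwidth are assumptions:

```shell
# fq_codel: fairness is per flow, so opening 128 connections buys a host
# roughly 128 shares of the link.
tc qdisc replace dev eth0 root fq_codel

# CAKE with triple-isolate: fairness is applied across source and
# destination hosts first, then across flows within each host, so extra
# connections stop paying off. eth0 and 50mbit are illustrative.
tc qdisc replace dev eth0 root cake bandwidth 50mbit triple-isolate
```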
It's nice that browsers now keep a single connection to each site instance, since it causes less congestion and gives better fairness (across sites, across apps, and, given the lack of sophisticated fair queuing, across dst/src IPs too), etc. But it causes head-of-line blocking issues.