New HTTP dropped two months ago:
datatracker.ietf.org/doc/rfc9114/
HTTP/3 builds on QUIC, which runs on top of UDP. Kinda interesting. The snarky analysis is just that with sites so crammed with ads, vendors want faster ways to ship them. The solution should be an ad blocker 😎
Less snarky thoughts: just interested in the trend away from TCP and toward more serialization. Will need to read more; curious what this offers on top of HTTP/2's/SPDY's multiplexed fetching.
TCP provides the semantics of a single ordered stream, so HTTP/2 multiplexing everything over 1 connection ties it all together: data is received in the order it was sent, as one stream. Making that work well depends on having extremely good congestion control and properly controlled, minimal buffers.
There's still a lot of bufferbloat, and it completely breaks HTTP/2 prioritization. Most routers use something far worse than CAKE or even fq_codel as their queuing discipline.
There's a guide at grapheneos.org/articles/serve if you're interested in making it work as well as possible, though.
You probably want to set net.ipv4.tcp_notsent_lowat lower than 128k if you actually want HTTP/2 prioritization to work, though. Consider a client starting up transfers for a dozen images and then wanting a higher-priority resource. The server will already have filled huge buffers with image data.
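For illustration (my sketch, not from the thread): the per-socket equivalent on Linux is the TCP_NOTSENT_LOWAT socket option, which the net.ipv4.tcp_notsent_lowat sysctl only sets the default for. The 16 KiB value below is illustrative, not a recommendation.

/* Sketch: cap how much not-yet-sent data the kernel will buffer for this
 * TCP socket, so a small high-priority HTTP/2 frame isn't stuck behind
 * megabytes of already-queued lower-priority data. */
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

#ifndef TCP_NOTSENT_LOWAT
#define TCP_NOTSENT_LOWAT 25   /* provided by Linux since 3.12 */
#endif

/* Returns 0 on success, -1 on error (errno is set by setsockopt). */
int limit_unsent_buffer(int sock_fd)
{
    int lowat = 16 * 1024;   /* illustrative value, well under 128k */
    return setsockopt(sock_fd, IPPROTO_TCP, TCP_NOTSENT_LOWAT,
                      &lowat, sizeof(lowat));
}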
Now consider what happens if some of those packets are lost and need to be retransmitted after the retransmission timer fires, etc. You're waiting on all of that because you're using a single TCP connection. The high-priority resource may be 200 bytes, but it's going to be waiting behind megabytes.
QUIC provides concurrent streams/messages with different priorities within a single connection, at both the transport and encryption layers, by replacing TCP and TLS. Opening many TCP connections consumes too many resources, is unfair to other applications, and each connection starts slowly.
Download managers use 128 connections instead of 1 because most queuing is totally unfair, or at best fair across connections (fq_codel), unlike CAKE, which is fair across hosts on either end first and then across connections. Browsers were doing the same thing pre-HTTP/2, just not intentionally.
It's nice that browsers now keep a single connection to each site instance, since it causes less congestion and results in better fairness (across sites, across apps, and, given the lack of sophisticated fair queuing, also across dst/src IPs), etc., but it causes head-of-line blocking issues.
A major downside of QUIC as it will be implemented in practice: it's going to be done via a library in applications rather than at the OS layer, so it regresses the OS's awareness of and control over what's happening, and you just have to hope applications play fair and implement it well.