
It will evenly divide the bandwidth between hosts, so a host opening many connections (e.g. a download manager) doesn't end up with more bandwidth. It also evenly divides the per-host bandwidth between the flows to that host. SSH latency will also remain very low even under heavy use.
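As a rough sketch of how that host and flow isolation can be set up, assuming a server shaping its own egress on eth0 with around 100mbit provisioned (the interface name, rate and the dual-dsthost choice are placeholders here, not necessarily the exact setup used):

# hedged example: per-destination-host fairness for outbound server traffic,
# with per-flow fairness within each host's share; adjust dev and bandwidth
tc qdisc replace dev eth0 root cake bandwidth 95mbit besteffort dual-dsthost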
CAKE:
* client A with 1 connection gets 48mbit
* client B with 8 connections gets 6mbit each, adding up to 48mbit
No CAKE:
* client A with 1 connection gets ~6mbit to ~16mbit
* client B with 8 connections gets ~6mbit to ~16mbit each, adding up to ~80-90mbit
Stark difference.
Those results are with an OVH server with 100mbit bandwidth. Stats can be monitored via tc:
watch -n 1 tc -s qdisc show dev eth0
Proper bandwidth configuration is essential: if CAKE isn't the bottleneck, it won't shape traffic. You can see that from the backlog clearing out in the stats.
Making it the bottleneck means setting it to 99.9% of the provisioned bandwidth for this use case. It could probably go even closer, since the limit is enforced right next to the source of the traffic with a high level of precision. The point is to move the bottleneck to somewhere you can shape the traffic.
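As a concrete worked example under the same assumed setup, 99.9% of a 100mbit link is 99.9mbit, which can be given to CAKE in kbit to avoid rounding:

# hypothetical values: 99.9% of 100mbit = 99900kbit; interface is a placeholder
tc qdisc replace dev eth0 root cake bandwidth 99900kbit besteffort dual-dsthost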
Ended up publishing this as an article on the GrapheneOS site:
Server traffic shaping: grapheneos.org/articles/serve. This is the first of many articles we'll be publishing on assorted topics outside the scope of the usual documentation. These are going to be maintained and expanded over time. It's part of our documentation, not a blog post.