All I have to do is compromise one server to (eventually) compromise the contents of all the mirrors, and only one set of keyholders can do the core labor. So, yes, the bandwidth costs are distributed but many other important properties of the system are very much centralized.
Linus makes the mainline releases but not the stable/longterm releases. Nearly everyone is using downstream forks of the stable/longterm kernels. Even Arch no longer ships mainline kernels outside [testing] and applies downstream patches, despite its usual policy of avoiding patching.
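One way to see the downstream-fork point concretely: a distro kernel's release string (what `uname -r` reports) usually consists of an upstream base version plus a distro-specific suffix. A minimal sketch that separates the two (the example strings below are illustrative, not taken from any specific running system):

```python
import re

def split_kernel_release(release: str) -> tuple[str, str]:
    """Split a kernel release string into the upstream base version
    and the distro-specific (downstream) suffix."""
    m = re.match(r"^(\d+\.\d+(?:\.\d+)?)(.*)$", release)
    if not m:
        raise ValueError(f"unrecognized release string: {release!r}")
    return m.group(1), m.group(2)

# Illustrative release strings of the kind distros ship:
print(split_kernel_release("6.6.8-arch1-1"))      # ('6.6.8', '-arch1-1')
print(split_kernel_release("5.15.0-91-generic"))  # ('5.15.0', '-91-generic')
```

A non-empty suffix is the telltale sign of a downstream fork: the base identifies which upstream stable/longterm series the distro tracks, while the suffix identifies the distro's own patched build.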
The mainline releases are largely relevant only to upstream kernel developers, and the longterm releases are relevant to downstream kernel developers. Barely anyone is building and using those directly. Changes are usually made and shipped downstream before being incorporated upstream, too.
It often takes 1-2 years to get support for new hardware upstream even with the vendor doing it. Most things are really developed and shipped downstream, even as out-of-tree modules, before eventually ending up upstream. Changes flow in multiple directions and it's very complex.
At this point, I don't think it's the case that most Linux kernel development is done upstream first. Most people gave up on doing that a long time ago. You get what you need working downstream, ship it, and maybe you try to upstream it, but it will take years before you see the benefit.
Here at AWS, getting changes upstream has recently been a launch-gating milestone for features like Nitro Enclaves and Elastic Fabric Adapter. Elastic Network Adapter, being "just" a NIC driver, was briefly available only out of tree. I think "upstream first" is still needed.