All I have to do is compromise one server to (eventually) compromise the contents of all the mirrors, and only one set of keyholders can do the core labor.
So, yes, the bandwidth costs are distributed, but many other important properties of the system are very much centralized.
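For a concrete sense of where that trust bottoms out: every mirror serves the same tarball, and its integrity traces back to a signature from one of a small set of release keys. A rough sketch of the documented verification flow (the 6.1 release here is just an arbitrary example):

    # Fetch a release tarball and its detached signature from any mirror.
    curl -LO https://cdn.kernel.org/pub/linux/kernel/v6.x/linux-6.1.tar.xz
    curl -LO https://cdn.kernel.org/pub/linux/kernel/v6.x/linux-6.1.tar.sign

    # Import the release keys (Linus Torvalds and Greg Kroah-Hartman).
    gpg --locate-keys torvalds@kernel.org gregkh@kernel.org

    # The signature covers the uncompressed tarball, so decompress on the fly.
    xz -cd linux-6.1.tar.xz | gpg --verify linux-6.1.tar.sign -

However many mirrors you fetch from, that last step only ever checks against those few keys.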
That is literally true of any single-source software, which is... basically all of it? What software exists that is published in multiple places independently?
kernel.org is the canonical distribution location for the Linux kernel source, and its official releases are produced by a single individual (usually Linus).
Linus makes the mainline releases but not the stable/longterm releases.
Nearly everyone is using downstream forks of the stable/longterm kernels. Even Arch no longer ships mainline kernels outside [testing] and applies downstream patches to its kernel, despite usually avoiding downstream patching.
Most users are on a more substantially modified downstream fork of a longterm branch that's at least 1-2 years old. Most distributions take many weeks or months to ship the longterm updates, if they ship them at all. There's near zero quality control for kernel.org releases...
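As a rough illustration (version strings made up, but typical of a Debian-style distro), the running kernel usually identifies itself as a distro revision of a longterm branch rather than a kernel.org build, and the branch itself has often moved on:

    # The distro's downstream revision of the 6.1 longterm branch.
    uname -r                      # e.g. 6.1.0-18-amd64

    # What kernel.org currently publishes for its longterm branches (output format is approximate).
    curl -s https://www.kernel.org/finger_banner | grep longterm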
So now we are talking about gregkh, rather than Linus, cutting the releases. Those details are important, but not to the point I think was being made.
The mainline releases are largely only relevant to upstream kernel developers, and the longterm releases are relevant to downstream kernel developers. Barely anyone is building and using those directly.
Changes are usually made and shipped downstream before being incorporated upstream, too.
It often takes 1-2 years to get support for new hardware upstream, even with the vendor doing the work. Most things are really developed and shipped downstream, even as out-of-tree modules, before eventually ending up upstream. Changes flow in multiple directions and it's very complex.
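"Out-of-tree" here just means a module built against an installed kernel's build tree instead of being merged into the kernel.org sources; a minimal sketch of that workflow, assuming a directory with a hypothetical hello.c and a one-line kbuild Makefile (obj-m += hello.o):

    # Build the module against the build tree of the currently running kernel.
    make -C /lib/modules/"$(uname -r)"/build M="$PWD" modules

    # Load and unload the freshly built module.
    sudo insmod hello.ko
    sudo rmmod hello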
For an example that's important to me, Clang-compiled kernels with LTO and CFI have shipped on Pixel phones since 2018. Support for this still isn't in mainline despite many years of trying to land it. It's strange seeing people talk about it as if it's bleeding edge.
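For reference, in trees that carry the support (the Android common kernels, and the patch series being upstreamed), enabling this boils down to building with the LLVM toolchain and flipping a few Kconfig options; a sketch, with option names that may differ slightly between trees:

    # Build with Clang/LLVM and enable Clang LTO plus the CFI scheme built on top of it.
    make LLVM=1 defconfig
    scripts/config -e LTO_CLANG -e LTO_CLANG_FULL -e CFI_CLANG
    make LLVM=1 olddefconfig
    make LLVM=1 -j"$(nproc)"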
At this point, I don't think most Linux kernel development is done upstream first. Most people gave up on that a long time ago. You get what you need working downstream, ship it, and maybe you try to upstream it, but it'll take years to see the benefits.
Here at AWS, getting changes upstream has recently been a launch-gating milestone for features like Nitro Enclaves and the Elastic Fabric Adapter.
The Elastic Network Adapter, being "just" a NIC driver, was briefly available only out of tree.
I think "upstream first" is still needed.
I'm heavily focused on smartphones, where nearly all the drivers are out-of-tree at launch and only slowly trickle into the upstream kernel. Literally a majority of the code is not from kernel.org, though it gradually makes its way there in the years following the launch of an SoC/device.