Very much still hub-and-spoke, though. And the labor of ftpmasters and other people in the release pipeline has also historically been very centralized (if anything, much more so than in the current dominant language-package-manager model).
Hub-and-spoke is not centralized; it is by definition distributed, with a single reference source of truth (i.e. the software as published). That does not contradict the above at all.
All I have to do is compromise one server to (eventually) compromise the contents of all the mirrors, and only one set of keyholders can do the core labor.
So, yes, the bandwidth costs are distributed, but many other important properties of the system are very much centralized.
That is literally true of any single-source software, which is... basically all of it. What software exists that is published in multiple places independently?
kernel.org is the canonical distribution location for Linux kernel sources, whose official releases are produced by a single individual (usually Linus).
Linus makes the mainline releases but not the stable/longterm releases.
Nearly everyone is using downstream forks of the stable/longterm kernels. Even Arch no longer ships mainline kernels outside [testing] and applies downstream patches, despite not usually doing that for other packages.
Most users are on a more substantially modified downstream fork of a longterm branch that's at least 1-2 years old. Most distributions take many weeks or months to ship the longterm updates, if at all. There's near zero quality control for kernel.org releases...
So now we are talking about gregkh instead of Linus cutting the releases. Those details are important, but not to the point that I was making.
The mainline releases are largely only relevant to upstream kernel developers and the longterm releases are relevant to downstream kernel developers. Barely anyone is building and using those directly.
Changes are usually made and shipped downstream before being incorporated upstream, too.
It often takes 1-2 years to get support for new hardware upstream even with the vendor doing it. Most things are really developed and shipped downstream, even as out-of-tree modules, before eventually ending up upstream. Changes flow in multiple directions and it's very complex.
For an example that's important to me, Clang-compiled kernels with LTO and CFI have been shipped since 2018 on Pixel phones. Support for this still isn't in mainline despite many years of them trying to land it. It's strange seeing people talk about that as if it's bleeding edge.
At this point, I don't think it's the case that most Linux kernel development is done upstream first. Most people gave up on doing that a long time ago. You get what you need working downstream, ship it, and maybe you try to upstream it, but it'll take years to see the benefits.