I feel like this is ahistorical. For the majority of the time I've used OSS (!), distribution was explicitly NOT centralized: (Debian) mirrors were operated by anyone and everyone
But eventually (with GitHub) we moved away from community infra to centralization, mainly for convenience.
Very much still hub and spoke though. And the labor of ftpmasters and other people in the release pipeline was also historically very centralized (if anything, much more than in the current dominant language-package-manager model).
Hub and spoke is not centralized; it is by definition distributed, with a single reference source of truth (i.e. the software as published), which does not contradict the above at all.
All I have to do is compromise one server to (eventually) compromise the contents of all the mirrors, and only one set of keyholders can do the core labor.
So, yes, the bandwidth costs are distributed but many other important properties of the system are very much centralized.
That is literally true of any single-source software, which is... basically all of it. What software exists that is published in multiple places independently?
kernel.org is the canonical distribution location for the Linux kernel source, whose official releases are produced by a single individual (usually Linus).
Linus makes the mainline releases but not the stable/longterm releases.
Nearly everyone is using downstream forks of the stable/longterm kernels. Even Arch never ships mainline kernels outside [testing] anymore, and it applies downstream patches despite not usually doing that for other packages.
Most users are on a more substantially modified downstream fork of a longterm branch that's at least 1-2 years old. Most distributions take many weeks or months to ship the longterm updates, if at all. There's near zero quality control for kernel.org releases...
So now we are talking about gregkh instead of Linus cutting the releases. Those details are important, but not to the point that I was making.
The mainline releases are largely only relevant to upstream kernel developers, and the longterm releases to downstream kernel developers. Barely anyone is building and using those directly.
Changes are usually made and shipped downstream before being incorporated upstream, too.
It often takes 1-2 years to get support for new hardware upstream even with the vendor doing it. Most things are really developed and shipped downstream, even as out-of-tree modules, before eventually ending up upstream. Changes flow in multiple directions and it's very complex.
For an example that's important to me, Clang-compiled kernels with LTO and CFI have been shipped since 2018 on Pixel phones. Support for this still isn't in mainline despite many years of them trying to land it. It's strange seeing people talk about that as if it's bleeding edge.
Anecdotally, I recently had to add some patches from the linux-mediatek patchwork to get my wifi card working.
I think this mode of usage doesn't appear until projects reach truly immense scale (and/or are necessary to enable other software to work on top, like an OS kernel, programming language, or build tool).