"Windows 95 was 30 MB" is such an ignorant, obnoxious, trite take. a triple buffered framebuffer (which you want for smooth scrolling) for my 4K display is 70 MB in *pixels alone*. Obviously a complete webpage with precomposed textures would take more.https://twitter.com/julienPauli/status/1042113172143067138 …
one could argue that a better solution is to return an error from malloc/mmap/fork when a page needs to be reserved but doesn't have anything to back it.
-
I see, but I would argue that returning an error from malloc is not too different from killing the process, so I'm not sure the extra effort is really worth it for desktop applications. (Should mmap do that with a signal? Not entirely sure how that would work.)
-
mmap can return an error too. the benefit of returning an error is that you can actually handle it, and e.g. free some caches and try again.
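A minimal sketch of that "handle it and retry" path; drop_some_cache is a hypothetical hook standing in for whatever application-level cache could be shed, and note that with the default heuristic overcommit malloc rarely fails this way, which is part of the thread's point:
```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical hook: shed some application cache and return how many
 * bytes were released. A stub here; a real program plugs in its own. */
static size_t drop_some_cache(void)
{
    return 0; /* nothing to drop in this sketch */
}

/* If malloc fails, shed cache and retry instead of leaving the kernel
 * to pick an OOM victim later. */
static void *alloc_with_retry(size_t size)
{
    for (;;) {
        void *p = malloc(size);
        if (p != NULL)
            return p;
        if (drop_some_cache() == 0) {
            /* Nothing left to shed: report the failure to the caller. */
            fprintf(stderr, "out of memory allocating %zu bytes\n", size);
            return NULL;
        }
        /* Freed something; retry the allocation. */
    }
}

int main(void)
{
    char *buf = alloc_with_retry(1 << 20);
    if (buf != NULL) {
        puts("allocation succeeded");
        free(buf);
    }
    return 0;
}
```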
-
IIRC there are solutions for these use cases too: https://lwn.net/Articles/590960/ I thought you had in mind signaling the app that accessed an anonymous page w/o a backing physical page. If you're unwilling to virtually allocate without physical pages, isn't that possible with Linux?
-
I'm willing to virtually allocate; I want smart RAM eviction (+swap) rather than arbitrary killing.
-
Check out the link, you can get just that with Linux. And as you know, killing is not arbitrary; it can be improved, but I don't think calling a heuristic people worked on "arbitrary" is constructive.
New conversation -
-
IOW, disabling overcommit. Another alternative is to swap out pages of processes that spread their resident memory too widely, so as to make thrashers pay for their thrashing rather than the entire rest of the system.
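For concreteness, a sketch of what "disabling overcommit" means on Linux via the vm.overcommit_memory knob; normally you'd just run `sysctl vm.overcommit_memory=2`, this only makes the knob explicit, needs root, and applies system-wide:
```c
#include <stdio.h>

/* Switch Linux to strict commit accounting (mode 2): a reservation that
 * can't be backed fails at malloc/mmap/fork time instead of the process
 * being killed later by the OOM killer. */
int main(void)
{
    FILE *f = fopen("/proc/sys/vm/overcommit_memory", "w");
    if (f == NULL) {
        perror("open /proc/sys/vm/overcommit_memory");
        return 1;
    }
    /* 0 = heuristic overcommit (default), 1 = always overcommit,
     * 2 = strict accounting against swap + overcommit_ratio% of RAM. */
    fputs("2\n", f);
    fclose(f);
    return 0;
}
```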
-
And there are probably more alternatives, but once a "solution" is found that kind of works most of the time, it's "good enough", and improvements are an uphill battle against the entrenched solution, as bad as it is. Quality (efficiency and correctness) isn't important enough.
-
I have my own example: a list of gripes with all existing build systems that make them quite terrible, especially reliability-wise. I wrote buildsome to address these gripes, but most people just don't mind the occasional voodoo bug from incorrect builds.
-
While I'm sure buildsome is great, I think there are other correct build systems, and tup-like build systems also have issues with correctness. But I agree with the general spirit. It's better to avoid too much effort on fixing a recoverable bug that happens once in a blue moon.
-
Tup-like build systems have a small subset of the problems that are inherent in free-form execution. If you use file system access only, you will have deterministic builds. And quality not being worth the effort is what the rant is all about. I think quality is very important.
-
And the voodoo bugs aren't that rare; it's just that they're shrugged off without even knowing where they're from. It creates a culture of ignoring bugs. Of distrusting test results. Makes catching rare races so much harder. Pollutes everything with uncertainty.
End of conversation