"Windows 95 was 30 MB" is such an ignorant, obnoxious, trite take. A triple-buffered framebuffer (which you want for smooth scrolling) for my 4K display is 70 MB in *pixels alone*. Obviously a complete webpage with precomposed textures would take more. https://twitter.com/julienPauli/status/1042113172143067138
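The 70 MB figure checks out as back-of-envelope arithmetic if you assume 24-bit color (3 bytes per pixel); with 32-bit RGBA it would be closer to 95 MB. A quick sketch of the math:

```python
# Back-of-envelope framebuffer math for a 4K display.
# Assumes 24-bit color (3 bytes/pixel); 32-bit RGBA would give ~95 MB.
width, height = 3840, 2160
bytes_per_pixel = 3        # 24-bit color
buffers = 3                # triple buffering for tear-free smooth scrolling
total = width * height * bytes_per_pixel * buffers
print(total / 1e6)         # ≈ 74.6 MB, in the ballpark of the quoted 70 MB
```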
-
-
The same tech-nostalgia people often cite the low input latency of 40-year-old emacs (or whatever) as an example of how that tech is good, and I agree! Low input latency is important. Moderate RAM use is how we achieve it for webpages, and that is not inherently wrong.
-
They clearly don't use emacs on a day-to-day basis. I love it, but make a line longer than 250 chars and that gap buffer implementation slows it to a crawl.
-
Gap buffer is a bad design, but there's no excuse for long lines.
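For readers unfamiliar with the data structure under discussion: a gap buffer stores the text in one contiguous array with a movable region of free space ("the gap") at the cursor. Insertion at the cursor is cheap, but moving the cursor by a distance d costs an O(d) copy, which is the usual criticism. A minimal Python sketch (nothing like Emacs's actual C implementation, just the idea):

```python
class GapBuffer:
    """Minimal gap buffer: one array with a gap of free slots at the cursor.
    Inserting at the cursor is O(1) amortized; moving the cursor by d
    characters shifts the gap with an O(d) copy."""

    def __init__(self, text=""):
        self.buf = list(text)
        self.gap_start = len(text)   # cursor position in the buffer
        self.gap_end = len(text)     # gap occupies buf[gap_start:gap_end]

    def insert(self, ch):
        if self.gap_start == self.gap_end:          # gap exhausted: grow it
            grow = max(32, len(self.buf))
            self.buf[self.gap_start:self.gap_start] = [None] * grow
            self.gap_end = self.gap_start + grow
        self.buf[self.gap_start] = ch
        self.gap_start += 1

    def move_cursor(self, pos):
        # Shifting the gap copies every character between old and new cursor.
        while self.gap_start > pos:                 # move left
            self.gap_start -= 1
            self.gap_end -= 1
            self.buf[self.gap_end] = self.buf[self.gap_start]
        while self.gap_end < len(self.buf) and self.gap_start < pos:  # right
            self.buf[self.gap_start] = self.buf[self.gap_end]
            self.gap_start += 1
            self.gap_end += 1

    def text(self):
        return "".join(self.buf[:self.gap_start] + self.buf[self.gap_end:])
```

Example: `gb = GapBuffer("hello"); gb.move_cursor(0); gb.insert(">")` yields `">hello"`, but that `move_cursor(0)` had to walk the gap across all five characters first.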
-
And FYI, I do use emacs on a daily basis. This is nothing like the nonsensical comparisons with Win95, which is unusable. Emacs is a maintained piece of software with ongoing nice developments.
End of conversation
New conversation -
-
-
Maybe I misunderstand; I thought you were talking about loading that much data over the network, not the rendering pipeline on the client. Still not sure how it helps input latency, though.
-
Ideal performance and memory usage should be achieved by compressing all image data and decompressing it on the fly when displaying. (Decompressing JPEG is a lot faster than memcpy if done right.)
-
ideal performance and power usage (which is more important than memory usage for the vast majority of browser users today) is achieved by doing everything on the GPU, which precludes this kind of trick
-
when just scrolling, you do not want to touch the DOM or pixels on the CPU at all unless you need to reflow the webpage. you render precomposed textures into the framebuffer, including any desired animation, all done on the GPU. Servo does this well, WebKit not so much
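The scrolling model described above can be sketched as tile-based compositing. All names here are hypothetical illustration, not any real browser's API: the page is rasterized once into fixed-height tiles, and each scrolled frame only selects which cached tiles intersect the viewport and at what screen offset; nothing is repainted unless the page reflows (in a real browser the final blit happens on the GPU):

```python
TILE_H = 256          # tile height in pixels (arbitrary for the sketch)
tile_cache = {}       # tile index -> "texture"

def rasterize_tile(index):
    """Stand-in for the expensive CPU paint step; returns a fake texture."""
    return f"texture-{index}"

def composite(scroll_y, viewport_h):
    """Return (texture, y_offset_on_screen) pairs for one frame.
    Scrolling just re-selects cached tiles; painting happens only on miss."""
    first = scroll_y // TILE_H
    last = (scroll_y + viewport_h - 1) // TILE_H
    frame = []
    for i in range(first, last + 1):
        if i not in tile_cache:               # paint only on cache miss
            tile_cache[i] = rasterize_tile(i)
        frame.append((tile_cache[i], i * TILE_H - scroll_y))
    return frame
```

After the first frame fills the cache, subsequent calls to `composite` for nearby scroll positions do no painting at all, which is why scrolling stays smooth without touching the DOM.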
-
in theory, if you had non-interleaved RAM dies and the OS could compact physical memory and manage NVM cache cleverly, using less RAM would actually reduce power consumption by powering off unused RAM, but I think it's not significant enough to spend time getting it to work
-
Yeah, an ideal laptop/mobile memory architecture would work like that. With a smart OS, you'd get sleep lifetimes comparable to hibernate, but with instant resume.
-
*makes an indefinite gesture* FeRAM
New conversation -