noticing an interesting feedback loop in the web: all the major engines were built on software renderers, which leads to certain optimizations and perf characteristics, which leads to web pages which rely on that, which pushes web engines to need/want those opts, and so on
-
e.g. drawing things is so expensive that it's worth doing lots of bookkeeping to avoid drawing things twice. web devs then see that incredibly complex but static things are 'free'. now web engines *need* aggressive caching because tons of pages have super complex static elements
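(To make that bookkeeping concrete, here is a minimal dirty-rect invalidation sketch in Rust; the types and names are hypothetical, not taken from Gecko or WebRender. The renderer tracks the region that changed since the last paint and only re-rasterizes display items that intersect it, which is why complex-but-static content ends up effectively free after the first paint.)

```rust
// Hypothetical dirty-rect invalidation; names are illustrative,
// not taken from any real engine.

#[derive(Clone, Copy)]
struct Rect { x: f32, y: f32, w: f32, h: f32 }

impl Rect {
    fn intersects(&self, other: &Rect) -> bool {
        self.x < other.x + other.w && other.x < self.x + self.w &&
        self.y < other.y + other.h && other.y < self.y + self.h
    }
}

struct DisplayItem { bounds: Rect /* plus paint commands, etc. */ }

struct Frame {
    items: Vec<DisplayItem>,
    dirty: Option<Rect>, // region invalidated since the last paint
}

impl Frame {
    /// Software-renderer-style repaint: skip everything outside the
    /// dirty region, so static content is "free" after the first frame.
    fn paint(&mut self) {
        let Some(dirty) = self.dirty.take() else { return }; // nothing changed
        for item in &self.items {
            if item.bounds.intersects(&dirty) {
                // rasterize only this item, clipped to `dirty`
            }
        }
    }
}
```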
-
this feedback loop has been the biggest blow against the original "dream" of webrender, which was that *maybe* the win from gpu rendering was enough that you could just draw a page from scratch at 60fps without async scrolling, cached layers, invalidation, etc. ya can't
-
Replying to @Gankra_
The vast majority of pages do just fine with WebRender when repainting every frame. There are a bunch that don’t, but there are also a bunch that perform badly with the traditional stack.
-
Replying to @pcwalton
The percentages don't really matter; if important/major pages run fine in vanilla gecko but not webrender, but a bunch of oddball pages run great in webrender, I don't think that's a win (and I don't think we could politically sell shipping that either)
-
Replying to @Gankra_
I actually pretty much entirely disagree with your take—the biggest problem is that we don’t control the OS compositor, so we need invalidation and so forth in order to get good energy efficiency. We’ve already proven that you can get good FPS in the repaint-everything case.
-
Replying to @pcwalton
I am certain we haven't? Tons of cases where a page just slaps 5+ text-shadows on something and we fall over completely. *even* if we cache the blurs, just compositing them is too expensive. glenn is heads down working on picture caching because we have so many of these bugs!
-
Replying to @Gankra_
I knew you were going to bring up that case :) That is easy to fix: just cache the blurs together. Much easier than picture caching. The reason why we need picture caching, in my view, is energy efficiency, not to get 60 FPS.
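(A rough sketch of the "cache the blurs together" idea, with made-up types rather than WebRender's actual render-task structures: instead of keeping one cached surface per text-shadow and compositing all of them every frame, the blurred shadows are flattened once into a single cached surface, so the per-frame cost is a single composite no matter how many shadows the page stacks.)

```rust
// Illustrative "cache the blurs together" sketch; types and names are
// hypothetical, not WebRender internals.

struct Surface; // stands in for a cached GPU texture

struct TextShadow { blur_radius: f32, offset: (f32, f32) }

enum ShadowCache {
    /// One cached surface per shadow: N composites every frame.
    PerShadow(Vec<Surface>),
    /// All shadows pre-flattened into one surface: one composite per frame.
    Flattened(Surface),
}

/// Blur one shadow into its own surface (expensive, but cacheable).
fn render_blur(_shadow: &TextShadow) -> Surface { Surface }

/// Draw a set of surfaces to the frame (the per-frame cost being debated).
fn composite(surfaces: &[&Surface]) { let _ = surfaces; }

/// Built once, only when the text or its shadows actually change.
fn build_flattened_cache(shadows: &[TextShadow]) -> ShadowCache {
    let combined = Surface;
    for shadow in shadows {
        let _blurred = render_blur(shadow);
        // draw `_blurred` into `combined` at `shadow.offset`
    }
    ShadowCache::Flattened(combined)
}

/// Per-frame work: this is where "just compositing them is too expensive"
/// shows up if every shadow keeps its own surface.
fn composite_frame(cache: &ShadowCache) {
    match cache {
        ShadowCache::PerShadow(surfaces) => {
            // 5+ text-shadows means 5+ large composites per frame.
            let refs: Vec<&Surface> = surfaces.iter().collect();
            composite(&refs);
        }
        ShadowCache::Flattened(surface) => {
            // One composite, however many shadows were stacked.
            composite(&[surface]);
        }
    }
}
```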
-
Traditional layerization heuristics have the exact same problem, by the way. That is why browsers have this delicate balancing act to avoid creating too many layers.
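(For context on that layerization point, a hypothetical heuristic sketch; the thresholds and names are invented to illustrate the trade-off, not copied from any browser. Promoting an element to its own compositor layer lets it move or animate without repainting, but each layer costs texture memory and per-frame compositing work, hence the balancing act.)

```rust
// Hypothetical layerization heuristic; the thresholds and names are
// invented to illustrate the trade-off, not copied from any browser.

struct Element {
    is_animated: bool,    // e.g. has a transform/opacity animation
    is_scroll_root: bool, // async-scrollable container
    area_px: u64,         // texture memory a dedicated layer would need
}

const MAX_LAYERS: usize = 32;           // too many layers blows up memory/compositing
const MAX_LAYER_AREA_PX: u64 = 1 << 22; // don't give huge elements their own texture

/// Decide which elements get their own compositor layer.
fn assign_layers(elements: &[Element]) -> Vec<bool> {
    let mut layer_count = 0;
    elements
        .iter()
        .map(|el| {
            let wants_layer = el.is_animated || el.is_scroll_root;
            let affordable = layer_count < MAX_LAYERS && el.area_px <= MAX_LAYER_AREA_PX;
            let promoted = wants_layer && affordable;
            if promoted {
                layer_count += 1;
            }
            // Not promoted: moving this element means repainting whatever
            // it overlaps, which is the failure mode on the other side.
            promoted
        })
        .collect()
}
```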