Perhaps surprising: I agree with @slightlylate that the discussion we're having about web performance isn't helping us move forward as a community.
These analyses can be used as input to smart bundling tools, but only once we start focusing on techniques rather than on the number of bytes in libraries.
-
Sounds potentially useful to me. Is there anything browser engineering teams like mine can provide to help framework technique experts (certainly not me) perform and act on this kind of analysis more easily?
-
Telemetry can help quantify the impact of deferred loading, deferred eval, and successful use of lazy parsing in different environments.
-
Deferred loading means waiting until the user interacts to fetch and evaluate code, showing a spinner at that point. Deferred eval means fetching code up front as inert content and evaluating it on demand. Lazy parsing means evaluating everything up front while relying on the engine's lazy-parse heuristics to defer the full parse.
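A minimal sketch of the three strategies side by side, assuming a hypothetical "./widget" module (mount, showSpinner, and the element IDs are illustrative, not real APIs):

```ts
// Three alternative strategies for the same hypothetical "./widget" module.
// showSpinner/hideSpinner/#app/mount are illustrative stand-ins, not real APIs.
const container = document.querySelector<HTMLElement>("#app")!;
const showSpinner = () => container.setAttribute("aria-busy", "true");
const hideSpinner = () => container.removeAttribute("aria-busy");

// 1. Deferred loading: nothing is fetched until the user interacts;
//    the spinner covers network + parse + eval.
async function openWidgetDeferredLoad() {
  showSpinner();
  const { mount } = await import("./widget"); // fetched and evaluated at click time
  hideSpinner();
  mount(container);
}

// 2. Deferred eval: fetch the payload up front as inert text and evaluate
//    it only on demand; the spinner now covers CPU time only.
const inertSource = fetch("/widget.js").then((r) => r.text());
async function openWidgetDeferredEval() {
  showSpinner();
  new Function(await inertSource)(); // indirect eval; assumes the script mounts its own UI
  hideSpinner();
}

// 3. Lazy parsing: everything is imported and evaluated at startup, but
//    engines like V8 only pre-parse most function bodies and do the full
//    parse on first call, provided the code avoids patterns (in V8,
//    parenthesized function expressions) that force eager parsing.
//    No extra app code is involved; the cost surfaces on first invocation.
```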
-
TTI matters a lot, but so do how long users have to wait at spinners and how much jank "deferral" techniques introduce when the deferred work finally runs.
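A rough sketch of field telemetry that could capture both numbers, spinner-visible time and long-task "lazy jank", per deferred load (the metric names and console reporting are stand-ins for a real RUM pipeline):

```ts
// Count long tasks so "lazy jank" caused by deferred work is attributable.
const longTaskDurations: number[] = [];
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) longTaskDurations.push(entry.duration);
}).observe({ type: "longtask", buffered: true });

// Wrap each deferred load to record spinner-visible time and the jank it triggered.
async function withSpinnerTelemetry<T>(label: string, work: () => Promise<T>): Promise<T> {
  const jankBefore = longTaskDurations.length;
  const start = performance.now();
  try {
    return await work(); // e.g. () => import("./widget")
  } finally {
    const spinnerMs = performance.now() - start;
    const lazyJankMs = longTaskDurations.slice(jankBefore).reduce((a, b) => a + b, 0);
    console.log({ metric: label, spinnerMs, lazyJankMs }); // stand-in for a real RUM beacon
  }
}
```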
-
Also, how much does "background fetching" affect these heuristics? Here background fetch means optimizing for TTI but downloading and evaluating the payload in the background (again, with the same matrix of deferral options).
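One possible sketch of that variant, with "/widget.js" and warmWidget as hypothetical names:

```ts
// "Background fetch": optimize first render, then pull the payload down
// during idle time; eval can happen right away in the background or stay
// deferred until interaction. "/widget.js" and warmWidget are made-up names.
let inertWidgetSource: Promise<string> | null = null;

function warmWidget(): Promise<string> {
  inertWidgetSource ??= fetch("/widget.js").then((r) => r.text());
  return inertWidgetSource;
}

// Kick off the warm-up only once the main thread has idle time.
requestIdleCallback(() => {
  void warmWidget().then((src) => {
    // Option A: eval now, in the background (cheaper interaction later,
    // but the eval itself can still cause jank if the payload is big).
    new Function(src)();
    // Option B: keep `src` inert and eval on first interaction instead.
  });
});
```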
-
These techniques have different tradeoffs depending on how expensive the network is relative to CPU. And sometimes "expensive" means literal money.
-
These are the questions I could use data on, far more than "how many bytes is a React hello world?", as we work on the next iteration of Ember tooling.
-
Yep, definitely some good things to study here. Some are things we're starting to track and expose more seriously in the browser - like overall input latency metrics. Also we're studying the effect of lazy loading iframes and images, so that's a related piece...
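For context, a tiny sketch of what page-side code can already observe for input latency via the Event Timing API (the 50 ms threshold is arbitrary):

```ts
// Log input events whose queuing delay (hardware timestamp to handler start)
// exceeds an arbitrary 50 ms budget; a RUM library would aggregate these.
// (Native lazy loading is just loading="lazy" on <img>/<iframe>.)
new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as PerformanceEventTiming[]) {
    const inputDelayMs = entry.processingStart - entry.startTime;
    if (inputDelayMs > 50) console.log(entry.name, "input delay (ms):", inputDelayMs);
  }
}).observe({ type: "event", buffered: true });
```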