Thanks for the subtweet here.
Yes, traditional client side routing where you replace the whole page seems like a problem. I don’t think it’s insurmountable though. One idea is to eg download the component for article text after initial render (ie on subsequent page loads).
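The "download the article component after initial render" idea can be sketched as a deferred, cached dynamic import. Everything here is hypothetical illustration, not React or Next.js API: `loadArticleView` stands in for a dynamic `import()` of the route's component.

```javascript
// Sketch: defer loading a route component's code until after the
// initial render has settled, then reuse the cached module promise on
// subsequent navigations. `loadArticleView` is a hypothetical dynamic
// import, e.g. () => import('./ArticleView.js').
function createDeferredLoader(loadArticleView) {
  let modulePromise = null;

  // Kick off the download once initial work is done (stand-in for
  // requestIdleCallback / an after-hydration hook).
  function prefetchAfterInitialRender() {
    if (!modulePromise) {
      modulePromise = loadArticleView();
    }
    return modulePromise;
  }

  // On a subsequent navigation the module is (usually) already cached,
  // so navigation doesn't pay the download cost again.
  function getForNavigation() {
    return prefetchAfterInitialRender();
  }

  return { prefetchAfterInitialRender, getForNavigation };
}
```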
Another is to download static HTML from the server even on subsequent page loads and not client render the whole thing, just like on initial page loads!
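The second idea, fetching server-rendered HTML on client navigations instead of re-rendering everything, might look roughly like this. The function names and the `<main>`-extraction convention are assumptions for illustration; `fetchHtml` stands in for a `fetch()` of the pre-rendered page.

```javascript
// Sketch of "fetch pre-rendered HTML on client navigations".
// extractMain pulls out the region we'd swap into the current document
// instead of client-rendering the whole page again.
function extractMain(html) {
  const match = html.match(/<main[^>]*>([\s\S]*?)<\/main>/);
  return match ? match[1] : html;
}

async function navigateWithHtml(url, fetchHtml) {
  const html = await fetchHtml(url);   // the server did the rendering
  const fragment = extractMain(html);  // only the content that changed
  // In a browser we'd now swap it in, e.g.:
  //   document.querySelector('main').innerHTML = fragment;
  return fragment;
}
```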
Also I don’t think this is unreasonable. People want to eg write their whole site in React when much of it is static (see eg Gatsby). If that could be optimized into HTML and JS as good as if they’d written it by hand, that’s a win!
Replying to @devongovett @chofter
> One idea is to eg download the component for article text after initial render (ie on subsequent page loads).

One way to do it without compromising is to hydrate some parts of the page before the others! Maybe even based on interaction. That's what we do with Concurrent Mode.
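Hydrating some parts before others, and promoting a part when the user interacts with it first, can be modeled as a small priority queue. This is a toy illustration of the idea only, not how React's Concurrent Mode actually schedules work; all names here are made up.

```javascript
// Toy model of partial hydration: parts register in priority order,
// interaction promotes a part to hydrate immediately, and flush()
// hydrates whatever is left.
function createHydrationQueue() {
  const pending = new Map();   // name -> hydrate function
  const hydrated = [];         // record of hydration order

  function register(name, hydrate) {
    pending.set(name, hydrate);
  }

  // The user interacted with a part: hydrate it right away so the
  // interaction isn't laggy.
  function onInteraction(name) {
    const hydrate = pending.get(name);
    if (hydrate) {
      pending.delete(name);
      hydrate();
      hydrated.push(name);
    }
  }

  // Hydrate the remaining parts in registration order.
  function flush() {
    for (const [name, hydrate] of pending) {
      hydrate();
      hydrated.push(name);
    }
    pending.clear();
  }

  return { register, onInteraction, flush, hydrated };
}
```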
Replying to @dan_abramov @chofter
Yeah that’s neat. But you’re still downloading unnecessary code. So you are compromising overall performance. This is about finding a balance between client and server rendered parts of a page rather than duplicating work.
Replying to @devongovett @chofter
It's not unnecessary if you want future fast interactions. It's "code for fast interactions". If you download it too late, you make interactions laggy, negating the point of JS. I agree JS should be non-blocking, but I think calling it unnecessary is a flawed assumption.
Replying to @dan_abramov @chofter
Depends on the app of course. Is it faster to download an API response and the whole JS to re-render the page client side on each navigation, or download some pre-rendered HTML and only the interactive JS?
Replying to @devongovett @chofter
The assumption here is that JS that makes "static" content interactive (e.g. an article) is large. I think that's rarely the case in practice. As long as it doesn't *block* the initial HTML, I wouldn't worry too much. But!
Replying to @dan_abramov @devongovett
We’ve recently run into a situation that makes this invalid. Our NextJS site got a bad Lighthouse score for a large bundle because of parse+exec time on slow phones. Then Google stopped serving our ads due to this and business tanked, even though the HTML wasn’t blocked.
Is there anything we can do to help surface this? We already show bundle size statistics when `next build` is run. On top of that we're working on better bundle splitting: https://nextjs.org/blog/next-9-1#improved-bundle-splitting Already available: `module.exports = { experimental: { granularChunks: true } }`
Perf budgets with reasonable defaults (100 KB total compressed script: warn; 150 KB: error).
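A budget like this can be enforced with webpack's built-in `performance` hints, as a sketch. Two caveats worth labeling: webpack measures uncompressed asset size, so the numbers below only approximate the compressed targets suggested above, and webpack supports a single threshold per hint level, not separate warn/error limits.

```javascript
// webpack.config.js sketch: fail the build when an entrypoint exceeds
// roughly the "error" budget suggested above. `hints: 'warning'` would
// log instead of failing.
module.exports = {
  performance: {
    hints: 'error',
    maxAssetSize: 150 * 1024,       // bytes, uncompressed
    maxEntrypointSize: 150 * 1024,  // bytes, uncompressed
  },
};
```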
Replying to @slightlylate @timneutkens
Also, maintain data on bundle sizes, notify when increased by a large percentage. We missed that pulling in a Dropdown component from a third party added 80kb of gzipped JS (I’m looking at you AntD!)
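Catching a regression like that 80 KB Dropdown could be as simple as diffing size manifests between builds in CI. The manifest shape here is hypothetical (`{ "main.js": bytes, ... }`, e.g. produced by a build step); the threshold and function name are illustrative.

```javascript
// Sketch: compare two bundle-size manifests and flag any bundle that
// grew by more than a threshold percentage, so a surprise dependency
// (hi, AntD) shows up in review instead of in production.
function findSizeRegressions(before, after, thresholdPct = 10) {
  const regressions = [];
  for (const [name, newSize] of Object.entries(after)) {
    const oldSize = before[name];
    if (oldSize === undefined) continue; // brand-new bundle, skip
    const growthPct = ((newSize - oldSize) / oldSize) * 100;
    if (growthPct > thresholdPct) {
      regressions.push({ name, oldSize, newSize, growthPct });
    }
  }
  return regressions;
}
```

A CI step would run this against the previous build's manifest and fail (or comment on the PR) when the list is non-empty.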