Performance experts don't warn teams away from heavyweight tools because they're impossible to make fast; they warn them away because the discipline required to keep experiences good through development & iteration scales non-linearly with the tool's base cost.
-
Performance experts will also start by asking questions about your baseline scenario -- who are your customers? In which market? What's their demographic and location? -- to avoid giving you bad advice.
-
So can you afford Framework X? Maybe! It's *always* "maybe". Always. And, often, "maybe" == "not given the maturity level of your organisation"
-
Nuance strikes again! There are multiple ways to attack this. You can retool and ensure that your baseline choices are cheaper, allowing you to afford more. But that level of caution only gets teams so far. More often than not, the big fixes are at the management level.
-
There's a reason I give PMs and managers slow phones. Those are the people who need to be convinced that performance matters, not (usually) the engineers. When management minds change, so does team practice and culture.
-
It's super difficult for individual engineers and designers to advocate effectively for users without org support. Making space for that is the big challenge in most teams. Nobody wants to do a bad job, and the definition of "good" is not set individually.
-
These are complex interactions! Tools like Lighthouse provide _visibility_ into what was previously invisible. But they are not talismans. Saying "X gets 100 on LH" is specifically meaningful, but only when quantified (which hardware? which test environment? where in the network?)
-
And, as a statement about success over time, "X gets 100 on LH" is functionally meaningless. The important question is: in projects where X is involved, what's the probability of a good LH score over the project's lifespan (again, situated in network/device/topology).
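One way to situate a score is to run Lighthouse with the test conditions spelled out on the command line, so they travel with the number. A minimal sketch, assuming the Lighthouse CLI is installed (`npm install -g lighthouse`) and using its documented throttling flags; the URL and flag values are placeholders:

```shell
# Sketch: put the environment on the record instead of quoting a bare score.
# SITE is a placeholder; the throttling values below are illustrative, not
# a recommendation for any particular market or device class.
SITE="https://example.com"
LH_FLAGS="--throttling-method=simulate --throttling.rttMs=150 --throttling.throughputKbps=1638 --throttling.cpuSlowdownMultiplier=4"
# Echo the fully-specified invocation so it can be pasted into a report:
echo "lighthouse $SITE $LH_FLAGS --output=json --output-path=./report.json"
```

Capturing the flags alongside the score is what turns "X gets 100 on LH" into a claim someone else can reproduce.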
-
Many a disappointed partner team has enthusiastically told me about their super-fast starting-point. Then we put the real site on the bench, with real hardware, with real link conditioning. Turns out, point-in-time visibility isn't worth very much if you don't keep looking.
-
https://www.webpagetest.org/easy will run a LH analysis if you ask it to (bottom checkbox, "Run Lighthouse Audit").
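The same kind of run can be kicked off without the UI via WebPageTest's REST API, which accepts a `lighthouse=1` parameter to enable the LH audit. A sketch, assuming you have a WPT API key (the key and URL below are placeholders):

```shell
# Sketch: trigger a WebPageTest run with a Lighthouse audit via the REST API.
# WPT_API_KEY and TEST_URL are placeholders.
WPT_API_KEY="YOUR_KEY_HERE"
TEST_URL="https://example.com/"
RUN_URL="https://www.webpagetest.org/runtest.php?url=${TEST_URL}&k=${WPT_API_KEY}&lighthouse=1&f=json"
# Submitting this URL (e.g. with curl) returns JSON with links to poll for results:
echo "$RUN_URL"
```

Scripting the run is what makes "keep looking" cheap enough to do on every build rather than once at kickoff.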
-
As this thread started about Gatsby, here's a public-sector site built on this stack, running through WPT/easy in this config: https://www.webpagetest.org/result/190829_F7_a00e889c76d1c198ec401361ec901904/
-
What's different about this run vs. opening up devtools locally:
1.) the location in the network (this is in a rack in Dulles, VA, thanks to the generosity of @patmeenan)
2.) the quality of the network emulation; WPT uses dummynet: https://github.com/WPO-Foundation/wptagent#traffic-shaping-options-defaults-to-host-based
3.) the hardware