"Expertise" === "nuanced understanding" E.g.: Lighthouse has major throttling challenges today, and that to get a real understanding of real-world perf, must run on actual hardware target... ...which is why http://webpagetest.org/easy is your friend https://twitter.com/aweary/status/1166805797231153153 …
Unless someone sends me a http://webpagetest.org/easy trace, I don't trust their LH score. Today, there's no substitute for high-quality link conditioning (e.g. dummynet) and physical hardware. DevTools "throttling" isn't great. And that's *before* we discuss budgets...
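A minimal sketch of what "send me a webpagetest.org/easy trace" can look like in automation, assuming you have a WebPageTest API key; the runtest.php endpoint and the url/k/f/lighthouse parameters come from WebPageTest's public API, but the location string below is a placeholder — check the instance's available agents for real values:

```ts
// Sketch only: queue a WebPageTest run against real hardware with shaped
// networking, rather than trusting local DevTools throttling.
import { env, exit } from "node:process";

async function queueRealDeviceRun(targetUrl: string): Promise<string> {
  const params = new URLSearchParams({
    url: targetUrl,
    k: env.WPT_API_KEY ?? "",                      // assumption: API key in env
    location: "Dulles_MotoG4:Moto G4 - Chrome.3G", // placeholder device + network
    lighthouse: "1",                               // also capture a Lighthouse report
    f: "json",
  });
  const res = await fetch(`https://www.webpagetest.org/runtest.php?${params}`);
  const body = (await res.json()) as { data?: { userUrl?: string } };
  return body.data?.userUrl ?? "(no results URL returned)";
}

queueRealDeviceRun("https://example.com").then(
  (resultsUrl) => console.log("WebPageTest results:", resultsUrl),
  (err) => { console.error(err); exit(1); }
);
```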
Performance experts don't warn teams away from heavyweight tools because they're impossible to make fast; they warn them away because the amount of discipline required to keep experiences good through development & iteration scales non-linearly as a result of high base cost.
Performance experts will also start by asking questions about your baseline scenario -- who are your customers? In which market? What's their demographic and location? -- to avoid giving you bad advice.
So can you afford Framework X? Maybe! It's *always* "maybe". Always. And, often, "maybe" == "not given the maturity level of your organisation"
Nuance strikes again! There are multiple ways to attack this. You can retool and ensure that your baseline choices are cheaper, allowing you to afford more. But that level of caution only gets teams so far. More often than not, the big fixes are at the management level.
There's a reason I give PMs and managers slow phones. Those are the people who need to be convinced that performance matters, not (usually) the engineers. When management minds change, so does team practice and culture.
It's super difficult for individual engineers and designers to advocate effectively for users without org support. Making space for that is the big challenge in most teams. Nobody wants to do a bad job, and the definition of "good" is not set individually.
These are complex interactions! Tools like Lighthouse provide _visibility_ to what was previously invisible. But they are not talismans. Saying "X gets 100 on LH" is specifically meaningful, but only when quantified (which hardware? which test environment? where in the network?)
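As a rough illustration of "quantified": a sketch of a programmatic Lighthouse run where the claimed hardware class and network shape are pinned in config rather than implied. The lighthouse() and chrome-launcher APIs are real; the specific throttling numbers and the form-factor option name are assumptions to check against your Lighthouse version, and simulated throttling remains weaker than a real device on a conditioned link.

```ts
// Sketch only: make "X gets 100 on LH" a claim with a stated environment.
import lighthouse from "lighthouse";
import * as chromeLauncher from "chrome-launcher";

async function benchOnce(url: string): Promise<number | null> {
  const chrome = await chromeLauncher.launch({ chromeFlags: ["--headless"] });
  try {
    const result = await lighthouse(
      url,
      { port: chrome.port, onlyCategories: ["performance"] },
      {
        extends: "lighthouse:default",
        settings: {
          formFactor: "mobile",         // which device class we claim to model
          throttlingMethod: "simulate", // assumption: still no substitute for real links
          throttling: { rttMs: 150, throughputKbps: 1638, cpuSlowdownMultiplier: 4 },
        },
      }
    );
    return result?.lhr.categories.performance.score ?? null; // 0..1
  } finally {
    await chrome.kill();
  }
}

benchOnce("https://example.com").then((score) =>
  console.log("LH perf score (simulated mid-tier mobile):", score)
);
```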
And, as a statement about success over time, "X gets 100 on LH" is functionally meaningless. The important question is: in projects where X is involved, what's the probability of a good LH score over the project's lifespan (again, situated in network/device/topology).
Many a disappointed partner team has enthusiastically told me about their super-fast starting-point. Then we put the real site on the bench, with real hardware, with real link conditioning. Turns out, point-in-time visibility isn't worth very much if you don't keep looking.
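A sketch of the "keep looking" part, assuming each bench run's date and score get appended to a hypothetical perf-history.json; the file name and thresholds are illustrative. The point is to gate on the pass rate over the project's lifespan, not on a single flattering run.

```ts
// Sketch only: treat "fast" as a property of the project's history.
import { readFileSync } from "node:fs";
import { exit } from "node:process";

interface BenchRun {
  date: string;  // ISO date of the bench run
  score: number; // Lighthouse performance score, 0..1, captured on the bench
}

// Fraction of recorded runs that met the bar.
function passRate(history: BenchRun[], threshold = 0.9): number {
  if (history.length === 0) return 0;
  return history.filter((run) => run.score >= threshold).length / history.length;
}

const history: BenchRun[] = JSON.parse(readFileSync("perf-history.json", "utf8"));
if (passRate(history) < 0.8) {
  console.error("Perf has regressed over time, not just in today's run.");
  exit(1);
}
```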
Replying to @mjackson
https://www.webpagetest.org/easy will run a LH analysis if you ask it to (bottom checkbox, "Run Lighthouse Audit"): pic.twitter.com/YDzlvAlWAF