And, as a statement about success over time, "X gets 100 on LH" is functionally meaningless. The important question is: in projects where X is involved, what's the probability of a good LH score over the project's lifespan (again, situated in network/device/topology).
A 25-point difference (out of 100). If you were a decision maker and your team told you "we score 85!", and then I show up in a meeting with you and say "wow, 60, you've failed pretty badly"... that would be an unwelcome surprise. Hence the value of real-hardware testing = )
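For reference, the "85" side of that gap is what a default lab run produces. Here is a minimal sketch of one, assuming the `lighthouse` and `chrome-launcher` npm packages and a placeholder URL; note that the default run *simulates* a slow CPU and network rather than measuring real hardware:

```ts
// Hypothetical lab run: headless Chrome on a dev machine, with Lighthouse's
// default simulated throttling standing in for a slow device and network.
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

const chrome = await chromeLauncher.launch({chromeFlags: ['--headless']});
const result = await lighthouse('https://example.com', {
  port: chrome.port,
  onlyCategories: ['performance'],
  output: 'json',
});

// Scores come back in the 0-1 range; scale up for the familiar 0-100 number.
console.log('Lab score:', (result?.lhr.categories.performance.score ?? 0) * 100);
chrome.kill();
```

The real-device sketch after the tutorial link below is the other side of that comparison.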
Oh wow, that's a huge difference. Thank you for taking the time to explain :) I don't think I'd ever noticed this little disclaimer on http://web.dev before
pic.twitter.com/1W7iG2LpKL
End of conversation
New conversation
I've written a tutorial on how to run LH on a mobile device: https://www.aymen-loukil.com/en/blog-en/run-lightouse-audits-on-mobile-device/
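A rough sketch of that idea, assuming USB debugging is enabled on the phone, `adb` is on the PATH, and Chrome is already open on the device; this shows the general port-forwarding approach, not necessarily the tutorial's exact steps:

```ts
// Hypothetical real-device run: point Lighthouse at Chrome on an Android
// phone instead of launching a local browser.
import {execSync} from 'node:child_process';
import lighthouse from 'lighthouse';

// Forward the phone's DevTools socket to a local TCP port over USB.
execSync('adb forward tcp:9222 localabstract:chrome_devtools_remote');

const result = await lighthouse('https://example.com', {
  port: 9222,                    // the forwarded port, i.e. the phone's Chrome
  throttlingMethod: 'provided',  // measure the device/network as-is, no simulation
  onlyCategories: ['performance'],
  output: 'json',
});

console.log('On-device score:', (result?.lhr.categories.performance.score ?? 0) * 100);
```

Because `throttlingMethod: 'provided'` turns simulation off, the score reflects the phone and network you actually tested on.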
Location (testing over a real connection from where your users actually are) is also important for testing
End of conversation
New conversation
I’d say that RUM and a statistical look at your users’ experience are the answer anyway. Synthetic is good for digging into waterfall analysis and getting to the bottom of issues. It’s also OK for running rules, but rules aren’t a great approach to begin with.
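On the RUM side, a minimal sketch using the web-vitals library (the `/analytics` endpoint is a placeholder); the statistical look then happens in your analytics backend, across the whole distribution of real sessions:

```ts
// Field measurement: report Core Web Vitals from real users' sessions.
import {onCLS, onINP, onLCP, type Metric} from 'web-vitals';

function sendToAnalytics(metric: Metric): void {
  const body = JSON.stringify(metric);
  // sendBeacon survives page unload; fall back to fetch with keepalive.
  if (!navigator.sendBeacon('/analytics', body)) {
    fetch('/analytics', {body, method: 'POST', keepalive: true});
  }
}

onCLS(sendToAnalytics);
onINP(sendToAnalytics);
onLCP(sendToAnalytics);
```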