it's poetic and illustrates why the original chart is a poor analysis of the situation ;)
-
this is nuanced, but the general point about the cost of parse time stands on its own.
-
not true in practice. https://github.com/nolanlawson/optimize-js is one reason.
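For context on the tool linked above: optimize-js works by wrapping functions that are invoked immediately in parentheses, which engines commonly take as a hint to parse them eagerly rather than pre-parsing lazily and re-parsing at the call site. A minimal before/after sketch, illustrative only rather than the tool's exact output:

    // Before: the engine may lazily pre-parse `init`, then fully
    // re-parse it the moment it is called, paying the parse cost twice.
    var init = function () {
      console.log('startup work');
    };
    init();

    // After an optimize-js-style transform: the wrapping parens signal
    // "this function runs right away", so engines parse it fully on the
    // first pass and skip the double parse.
    var initEager = (function () {
      console.log('startup work');
    });
    initEager();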
-
Replying to @wycats
that's an interesting reading of that result, which is about code that then has to be parsed anyway
@samccone @tbreisacher @tdreyno
-
claiming that JS parse costs are non-trivial is surprising. can u give a better justification?
@samccone @tbreisacher @tdreyno
-
people misunderstood the optimize-js result. lazy parse is bad for code that'll be evalled anyway 1/
-
but it is very effective on code that *won't be*, which is common in real applications 2/
-
because lazy parse can be effective on unused code 3/
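A sketch of the distinction drawn in 1/–3/ (the function names are made up for illustration): lazy parsing loses when code runs immediately anyway, but wins when large parts of a bundle never run during startup.

    // Shipped in the bundle but only needed behind a rarely-hit route:
    // lazy parsing only skims this for syntax and function boundaries,
    // deferring the full parse (possibly forever if it is never called).
    function renderAdminDashboard() {
      // ...imagine thousands of lines the first paint never needs...
      return 'admin ui';
    }

    // Runs during startup: lazy parsing is a loss here, because the body
    // is pre-parsed once and then fully parsed again at the call -- the
    // case the optimize-js result is about.
    (function bootApp() {
      console.log('critical startup path');
    })();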
-
Replying to @wycats
ok, i think we agree on this. but for js on the critical path, @samccone is right, yeah? @tbreisacher @tdreyno
-
another major characteristic is how much your initial eval deopts. For a long time, Ember on v8 1/
-
was spending 1/2 of its eval time GCing deoptimized code (yes, this is real). 2/2
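On the deopt point: one generic way initial eval can end up deoptimizing on V8 is feeding an optimized function object shapes or value types it was not optimized for; the discarded optimized code then becomes garbage the engine has to collect. This is a sketch of that mechanism, not the specific Ember issue, and the names and iteration counts are arbitrary.

    // V8 optimizes `area` based on the object shapes and number types it
    // has observed. Later calls with a different shape or type invalidate
    // those assumptions; the optimized code is thrown away (a deopt) and
    // the abandoned machine code eventually has to be garbage-collected.
    function area(rect) {
      return rect.width * rect.height;
    }

    for (let i = 0; i < 100000; i++) {
      area({ width: 2, height: 3 });        // monomorphic: likely optimized
    }
    area({ height: 3, width: 2, pad: 1 });  // new hidden class: possible deopt
    area({ width: '2', height: 3 });        // string width: likely deopt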