Perf tip: if your web app ships large JSON-like configuration as JavaScript object literals, consider using JSON.parse instead. It’s much faster, especially for cold loads! https://v8.dev/blog/cost-of-javascript-2019#json
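A minimal before/after sketch of the tip (the config shape here is made up for illustration; the win comes from JSON's simpler grammar and grows with payload size, per the linked v8.dev post):

```javascript
// Before: config shipped as a JS object literal — parsed by the full JS parser.
const configLiteral = {
  env: "production",
  features: { darkMode: true },
  locales: ["en", "de"],
};

// After: the same data inside JSON.parse — parsed by the much simpler JSON grammar.
const configParsed = JSON.parse(
  '{"env":"production","features":{"darkMode":true},"locales":["en","de"]}'
);
```

In practice you wouldn't hand-write the stringified form; a build step would emit the JSON.parse version from the same source data.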
That benchmark is testing caching, not JSON.parse vs. object literal. To test, create a JS file containing a large object literal, and a file containing the equivalent JSON.parse("..."). No loops or other boilerplate. Then run in d8 or with RCS https://v8.dev/docs/rcs
You parse the string, then you parse the JSON. Each step may be individually cheaper, but you have to do both. I imagine it also means another large string needs to be allocated/copied/retained, and it likely thwarts any browser attempts at preparsing or streaming parsing.
I don't understand this double-parsing scenario. Wouldn't the JavaScript engine have to lex the object (before parsing it) regardless, just as it would the JS string? So in the worst case, shouldn't both be doing the same amount of work?