Even better if those artifacts are per-file and contain no absolute paths. Shared caches! Critical for scaling to a large company/codebase: a cache shared between all dev machines. Not a capability you see often in open source JS tooling.
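A rough sketch of what such a per-file artifact could look like (the names and shape here are illustrative, not Rome's actual format): keyed by a content hash and tool version, with only project-relative paths, so the same entry is valid on any machine and can live in a shared cache.

```ts
// Illustrative shape for a per-file, machine-independent cache artifact.
// Field names are hypothetical; the point is that nothing here is specific
// to one dev machine, so the entry can be shared between all of them.
interface CacheArtifact {
  // Key inputs: hash of the file contents plus tool version/config,
  // never an absolute path or an mtime.
  inputHash: string;
  toolVersion: string;
  // Paths are stored relative to the project root.
  file: string; // e.g. "src/index.ts"
  // The serialized result of processing that one file.
  output: {
    code: string;
    diagnostics: Array<{ message: string; line: number; column: number }>;
  };
}

// A shared cache then reduces to a key-value lookup any machine can hit.
type SharedCache = {
  get(key: string): Promise<CacheArtifact | undefined>;
  put(key: string, artifact: CacheArtifact): Promise<void>;
};
```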
Here’s my plug: I had to think about this from the beginning with Rome. Having workers was a good constraint because all messages had to be serializable. Good boundary for thinking about these artifacts.
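To illustrate why the worker boundary is a useful constraint (hypothetical message types, not Rome's real protocol): everything that crosses postMessage has to survive structured cloning, so requests and responses are forced to be plain, serializable data, which is exactly the property a cache artifact needs.

```ts
// worker-boundary.ts (illustrative sketch, not Rome's actual protocol)
// Messages sent over postMessage must be structured-clone friendly:
// plain objects, strings, numbers, arrays. No class instances with
// methods, no functions, no OS handles. The same constraint makes the
// response trivially storable in a cache.
import { Worker } from "node:worker_threads";

type LintRequest = { kind: "lint"; file: string; sourceHash: string };
type LintResponse = { kind: "lint-result"; file: string; diagnostics: string[] };

function lintInWorker(worker: Worker, req: LintRequest): Promise<LintResponse> {
  return new Promise((resolve) => {
    worker.once("message", (msg: LintResponse) => resolve(msg));
    worker.postMessage(req); // structured clone: serializability enforced here
  });
}
```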
End of conversation
New conversation
all my money for fast tools that aren’t daemons
Rome was daemon-by-default, then @sebmarkbage convinced me otherwise. Now it's opt-in. I'm focusing all my performance work on making the non-daemon path fast though, minimising initial overhead.
New conversation
If Rome could use decentralized, secure peer-to-peer caches, that would be great.
That’s not really possible. It’s impossible to validate the result without doing all the work anyway. If you base validation on whether multiple p2p nodes confirmed it, then you’re still vulnerable to cache poisoning.
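One way to see the problem (a simplified, hypothetical sketch): a build-cache key is derived purely from the inputs and is cheap to check, but the value is an arbitrary output blob with no independent proof attached. The only real check is re-running the work, and a quorum of untrusted peers only moves the trust problem around rather than removing it.

```ts
import { createHash } from "node:crypto";

// The key is cheap to verify: hash the inputs yourself and compare.
function cacheKey(toolVersion: string, config: string, source: string): string {
  return createHash("sha256")
    .update(toolVersion)
    .update(config)
    .update(source)
    .digest("hex");
}

// The value is not: a peer can return any bytes under a valid key.
// The only way to know `cachedOutput` is correct is to recompute it,
// which means doing all the work the cache was meant to avoid.
function verifyOutput(
  cachedOutput: string,
  recompute: () => string,
): boolean {
  return cachedOutput === recompute();
}
```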
New conversation
This was my gripe with webpack for so long. Serializability wasn't baked into the original design (including the plugin system), which made it very hard to add later.
Yeah, the plugin API became the entire public API. I remember there was some data structure they converted from an Array to a Set... It broke stuff because plugins were expecting an internal (underscores!) property to be an Array. Had to wait for a major...
End of conversation
New conversation
Imagine doing things at Facebook scale without some sort of global cache
caching is important!
To be fair, though, it's also used as a performance hack for slow/bad tooling. "Facebook scale" is a very annoying trope.