Screen recording of running Rome in daemon mode and linting a project. Same as yesterday: 524 files, 40,000 lines of code. The difference here is that subsequent commands pull from a memory cache. Watch the last command closely; it's instantaneous. pic.twitter.com/33dUDS07Ew
-
Also, more context on what is actually being "memory cached": only the formatted code and errors for each file are cached. We are still enumerating all the candidate files, filtering them with globs, talking to the workers to get their cached values, and printing it all.
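For illustration, here is a minimal sketch of that caching shape. The names here (Worker, FileCacheEntry, computeLint) are hypothetical, not Rome's real internals: each worker keeps an in-memory map from file path to its formatted output and diagnostics, keyed on mtime so stale entries are recomputed, while file discovery, glob filtering, and printing still run on every command.

```ts
// Hypothetical sketch of a per-file memory cache inside a daemon worker.
// Only the formatted output and diagnostics are cached; discovering files,
// filtering them with globs, and printing still happen on every run.

type Diagnostic = {message: string; line: number; column: number};

type FileCacheEntry = {
  mtime: number;             // used to detect stale entries
  formatted: string;         // formatted code for this file
  diagnostics: Diagnostic[]; // lint errors for this file
};

class Worker {
  private cache: Map<string, FileCacheEntry> = new Map();

  async lint(path: string, mtime: number): Promise<FileCacheEntry> {
    const cached = this.cache.get(path);
    if (cached !== undefined && cached.mtime === mtime) {
      return cached; // cache hit: skip parsing, formatting, and linting
    }

    const entry = await this.computeLint(path, mtime);
    this.cache.set(path, entry);
    return entry;
  }

  private async computeLint(path: string, mtime: number): Promise<FileCacheEntry> {
    // Placeholder for the expensive work: parse, format, and lint the file.
    return {mtime, formatted: "", diagnostics: []};
  }
}
```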
-
The biggest way I make this so fast is that I've written all the code, so it's pretty easy for me to identify inefficiencies. I can change one piece to fit the performance characteristics of another (i.e. changing APIs, integrating caches, etc.).
-
Replying to @sebmck
The test here is whether performance stays high after someone else takes ownership of the code. The only way I have seen that happen is by treating performance conformance tests as pass/fail gates in presubmit.
-
Yeah, planning on building benchmarking into the testing framework. Benchmarks would act like any other test except they run serially (even across workers) to reduce measurement variance. Maybe even something to save the last time and fail if the delta is >X%.
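As a rough sketch of what that could look like (the helper names, file layout, and 10% threshold here are assumptions, not Rome's actual testing API): time the benchmark body while it runs serially, persist the result, and fail when the new time exceeds the saved one by more than the threshold.

```ts
// Illustrative sketch of the benchmark-as-test idea: time the body, compare
// against the previously saved timing, and fail if the regression exceeds a
// threshold. Names and the 10% threshold are hypothetical.

import {performance} from "perf_hooks";
import * as fs from "fs";

const MAX_REGRESSION = 0.10; // fail if more than 10% slower than the last run

async function benchmark(name: string, fn: () => Promise<void>): Promise<void> {
  // Benchmarks run one at a time (serially), even across workers, so that
  // concurrent work does not add noise to the measurement.
  const start = performance.now();
  await fn();
  const elapsed = performance.now() - start;

  const storePath = `.benchmarks/${name}.json`;
  const last = fs.existsSync(storePath)
    ? (JSON.parse(fs.readFileSync(storePath, "utf8")).elapsed as number)
    : undefined;

  // Save the latest timing so the next run has something to compare against.
  fs.mkdirSync(".benchmarks", {recursive: true});
  fs.writeFileSync(storePath, JSON.stringify({elapsed}));

  if (last !== undefined && elapsed > last * (1 + MAX_REGRESSION)) {
    throw new Error(
      `Benchmark "${name}" regressed: ${elapsed.toFixed(1)}ms vs ${last.toFixed(1)}ms last run`,
    );
  }
}
```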