Ooh, snappy answers: 1. There have been multiple successful automated batching strategies, from "try all combinations" to "optimistic bisect-backoff" and various in-betweens. 2. Test suites typically parallelize perfectly, or should; it's an engineering mis-spend not to exploit that.
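The "optimistic bisect-backoff" batching strategy mentioned above can be sketched roughly like this (the `passes` oracle and the change list are hypothetical; this is an illustration of the idea, not any CI system's actual implementation):

```python
# Sketch of "optimistic bisect-backoff" batching: optimistically build/test a
# whole batch of changes at once; only on failure, bisect to isolate the bad
# ones. `passes` is a hypothetical oracle (e.g. "this batch builds green").

def bisect_bad_changes(changes, passes):
    """Return the changes the `passes` oracle blames, in original order."""
    if not changes or passes(changes):
        return []                 # optimistic fast path: whole batch is green
    if len(changes) == 1:
        return list(changes)      # isolated a single offending change
    mid = len(changes) // 2       # backoff: split the batch and recurse
    return (bisect_bad_changes(changes[:mid], passes)
            + bisect_bad_changes(changes[mid:], passes))

# Hypothetical example: changes 3 and 7 break the build.
bad = bisect_bad_changes(list(range(10)),
                         lambda batch: not (set(batch) & {3, 7}))
# → [3, 7]
```

The all-green case costs a single oracle run; isolating k bad changes out of n costs on the order of k·log n runs, which is why batching pays off when most batches pass.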
You can work on latency instead of throughput (i.e. incremental compilation). For the bootstrap case, second-order incremental *would be* a thing, but that would require a pure language to write the compiler in, and Rust ain't that. I am seriously waiting for advances in linear-dependent types.
As for speedups within rustc (written in Rust): compile type-matching into a bytecode, and use a type representation that makes an interpreter fast, or even JIT it (like CSS selectors ;)). Make LLVM cheaper with optimizations on MIR (superlinear wins, since MIR is still polymorphic).
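The "compile type-matching into a bytecode" idea can be illustrated with a toy sketch. Everything here — the tuple type representation, the three-opcode instruction set — is invented for illustration and reflects nothing about rustc's actual internals:

```python
# Toy sketch: compile a type pattern into flat bytecode, then match it with a
# tight interpreter loop instead of a recursive structural walk.
# Types and patterns are nested tuples like ("Vec", "i32") or string leaves;
# "_" in a pattern is a wildcard.

def compile_pattern(pat):
    """Flatten a pattern into (opcode, ...) instructions by preorder walk."""
    code = []
    def walk(p):
        if p == "_":
            code.append(("ANY",))                    # match any subtree
        elif isinstance(p, str):
            code.append(("LEAF", p))                 # exact leaf name
        else:
            code.append(("CTOR", p[0], len(p) - 1))  # constructor + arity
            for child in p[1:]:
                walk(child)
    walk(pat)
    return code

def matches(code, ty):
    """Interpret compiled pattern bytecode against a type, in preorder."""
    stack = [ty]
    for instr in code:
        node = stack.pop()
        if instr[0] == "ANY":
            continue                                 # consume subtree wholesale
        if instr[0] == "LEAF":
            if node != instr[1]:
                return False
        else:                                        # CTOR: check head + arity
            if isinstance(node, str) or node[0] != instr[1] \
                    or len(node) - 1 != instr[2]:
                return False
            stack.extend(reversed(node[1:]))         # first child ends up on top

    return True

vec_pat = compile_pattern(("Vec", "_"))
matches(vec_pat, ("Vec", "i32"))   # → True
matches(vec_pat, ("Box", "i32"))   # → False
```

The point of the flattening is that matching becomes a linear scan over a compact instruction array — friendly to caches and to a JIT over the same opcodes, as CSS selector engines demonstrated — rather than pointer-chasing through a recursive pattern structure.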
New conversation
This Tweet is unavailable.
The self-hosting nature of the compiler means that there is a strong serial bottleneck. The build time is not something that a caching build system will help with. Believe me, if it were, we’d have fixed that long ago.
I absolutely did not say it's easy to get. I spent the past 3 years squeezing a factor-of-a-few out of Swift, and it was a mix of lots of analysis, hard reorganization work, tradeoffs and even periodic changes to language design to knock out bottlenecks. But it can be a priority.
Well, we did that. It’s 3x faster now: https://news.ycombinator.com/item?id=19638531 … It took a lot of time to get that far. Could it have been faster if we made it more of a priority? Dunno! Hard to argue counterfactuals, as you said.