Rust compiler performance is hard to have meaningful discussions about because: (1) There’s no single part of the compiler which is slow. In fact the compiler is pretty well optimized.
I don't know whether this is relevant to a language like Rust, but back in the 1960s/70s, when there was massive investment in Fortran compiler technology, there were two types of compilers:
1. Highly optimizing compilers - they spent a lot of compile time grinding out the best code they could. All the tricks in the book were fair game.
2. "Student" compilers, like WATFOR - they were optimized for compile speed. They were glorified macro assemblers.
New conversation
Seems like there may be only one path then: be more incremental. First-time compilation is alright, since you can sort of manage that yourself by not snowballing deps. But I'd love the ability to change small bits of code and hot-load them rapidly. Doing that now for shaders, but yeah.
I think the basic architectural decision to make the compiler offline (instead of an online differential/incremental dataflow) is one thing that makes it “slow” (where “slow” is to be understood as something which lengthens the development feedback cycle).
Honest question: is there any evidence that *fine-grained* incrementalization would help *in a compiler for such a complex language*? Embarrassingly parallel tasks parallelize well, but compilation ain't it, even if you ignore monomorphization.
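As an illustration of what "fine-grained incrementalization" usually means in practice, here is a minimal sketch of query-style memoization: results are cached per input and recomputed only when that input changes. The names (SourceId, Db, type_check) are invented for this example and are not rustc's actual query API.

```rust
use std::collections::HashMap;

// A toy "query" cache: results are memoized per source file and
// recomputed only when that file's contents change.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct SourceId(u32);

struct Db {
    sources: HashMap<SourceId, String>,
    // The cache key includes a fingerprint of the input, so an edit
    // invalidates only the queries that read the edited source.
    type_check_cache: HashMap<(SourceId, u64), String>,
}

fn fingerprint(text: &str) -> u64 {
    use std::hash::{Hash, Hasher};
    let mut h = std::collections::hash_map::DefaultHasher::new();
    text.hash(&mut h);
    h.finish()
}

impl Db {
    fn set_source(&mut self, id: SourceId, text: String) {
        self.sources.insert(id, text);
    }

    // A "query": expensive work keyed by its inputs. Unchanged inputs
    // hit the cache; a changed input recomputes just this unit of work.
    fn type_check(&mut self, id: SourceId) -> String {
        let text = self.sources[&id].clone();
        let key = (id, fingerprint(&text));
        self.type_check_cache
            .entry(key)
            .or_insert_with(|| format!("checked {} bytes", text.len()))
            .clone()
    }
}

fn main() {
    let mut db = Db { sources: HashMap::new(), type_check_cache: HashMap::new() };
    db.set_source(SourceId(0), "fn main() {}".to_string());
    println!("{}", db.type_check(SourceId(0))); // computed
    println!("{}", db.type_check(SourceId(0))); // cached: input unchanged
    db.set_source(SourceId(0), "fn main() { () }".to_string());
    println!("{}", db.type_check(SourceId(0))); // recomputed after the edit
}
```

Whether caching at this granularity pays off for a language with as much cross-cutting inference as Rust is exactly the open question above.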
New conversation
I think there are some clear unforced errors, like name resolution (though these matter more for IDE responsiveness than batch throughput).
I guess what you mean is that the compiler's safety checks come at the price of compilation time... that sounds logical.
Is the borrow checker even what's slow? I always thought LLVM was the big bottleneck, even in debug builds.
LLVM is the bottleneck for Servo at the very least.
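Part of why LLVM tends to dominate is that monomorphization hands it a separate copy of each generic instantiation. A hedged sketch of that effect (the function names are made up for illustration), contrasted with dynamic dispatch, which compiles a single body:

```rust
use std::fmt::Display;

// Monomorphization: each concrete T gets its own copy of this function
// in the IR handed to LLVM, so heavy generic use multiplies backend work.
fn describe_generic<T: Display>(value: T) -> String {
    format!("value = {value}")
}

// Dynamic dispatch: one copy is compiled, regardless of how many types
// call it. Pushing large bodies behind dyn is one common way to trim
// the amount of IR the backend has to chew through.
fn describe_dyn(value: &dyn Display) -> String {
    format!("value = {value}")
}

fn main() {
    // Three instantiations of describe_generic: i32, f64, &str.
    println!("{}", describe_generic(1_i32));
    println!("{}", describe_generic(2.5_f64));
    println!("{}", describe_generic("three"));

    // One compiled body serves all three calls.
    println!("{}", describe_dyn(&1_i32));
    println!("{}", describe_dyn(&2.5_f64));
    println!("{}", describe_dyn(&"three"));
}
```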
New conversation
Could borrow check be cached between compilations? E.g., if borrow check passes for a debug build, then don't do it when building for release if the code didn't change. Or is borrow check dependent on which compiler options are set, so caching between debug and release won't work?
The code itself depends on whether you want debug or release, with debug_assert!, cfg, etc. And the borrow checker is not a bottleneck anyway.
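A small sketch of why the two profiles don't see identical code; checked_divide and log_state are hypothetical examples, but debug_assert! and #[cfg(debug_assertions)] behave as shown: the guarded code is simply absent from a default release build.

```rust
// Why the borrow-checked code can differ between debug and release:
// debug_assert! and #[cfg(debug_assertions)] items exist in only one
// of the two builds, so the two profiles do not see identical source.
fn checked_divide(a: u32, b: u32) -> u32 {
    // Compiled only when debug assertions are on (debug builds by default);
    // in release this line, and any borrows inside it, disappear.
    debug_assert!(b != 0, "division by zero");
    a / b
}

#[cfg(debug_assertions)]
fn log_state(state: &Vec<u32>) {
    // This whole function, including its borrow of `state`, is absent
    // from a default release build.
    println!("state = {state:?}");
}

fn main() {
    let state = vec![10, 2];
    #[cfg(debug_assertions)]
    log_state(&state);
    println!("{}", checked_divide(state[0], state[1]));
}
```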