If this produces notably worse optimization, it's the fault of the person who wrote the code that doesn't obviously terminate, not the compiler's fault.
I say this as someone who's been guilty (in musl) of lots of code with non-obvious logic for termination or satisfying of some other critical invariant, and I want to fix it. Penalizing it with bad performance is a feature. :-)
Compilers are extremely bad at proving termination, and the issue with the sideeffect intrinsic is that it acts as a universal side effect and is not handled by optimizations. It is treated as potentially doing global memory writes, etc., and completely breaks all other optimizations.
So for example, you can have a simple loop over an array based on the length, and the compiler may not be able to tell that it terminates, since it can't tell that the length of the array isn't changed (even though a human trivially can). If you insert the intrinsic, it destroys perf.
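A minimal C sketch of that situation (my own illustration, not code from the thread): the loop bound is re-read from the struct on every iteration, and the stores might alias it as far as a conservative compiler can tell.

```c
#include <stddef.h>

struct buf { size_t len; int *data; };

/* Hypothetical example: the bound b->len is re-read each iteration, and
 * the stores through b->data might alias it as far as a conservative
 * compiler can tell, so it cannot prove on its own that the loop
 * terminates, even though a human trivially can. */
void clear_buf(struct buf *b) {
    for (size_t i = 0; i < b->len; i++)
        b->data[i] = 0;
}
```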
For an array, the length may change but it's not infinite, so the loop always terminates. It would be better to evaluate the length only once before the loop, though, if you know it won't change but the compiler can't know that.
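The same loop with the length read once, as suggested above (a sketch, assuming the caller knows nothing mutates the length): the bound becomes a local that nothing in the loop can change, so termination is obvious to the compiler as well as to the reader.

```c
#include <stddef.h>

struct buf { size_t len; int *data; };

/* The length is evaluated once, before the loop; the bound is now a
 * local variable, so the loop is trivially finite. */
long sum_buf_hoisted(const struct buf *b) {
    size_t n = b->len;   /* read once */
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += b->data[i];
    return s;
}
```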
It depends on the simplicity of a condition and how it's written. You could write a binary search so that the compiler can see that it always terminates, but it wouldn't be the normal way to write it, etc. Many loops are not going to be verifiable as trivially terminating.
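As an illustration of how much the phrasing matters (my own sketch, not from the thread): a binary search can be written over a base pointer and a width `n` that strictly shrinks every iteration, which makes termination visible, whereas the usual lo/hi formulation obscures it.

```c
#include <stddef.h>

/* Binary search over a sorted array, phrased so the remaining width n
 * strictly decreases each iteration: n -= half + 1 subtracts at least 1,
 * and n = half sets n to n/2 < n. Termination is structurally evident.
 * Returns the index of key, or -1 if absent. */
ptrdiff_t bsearch_idx(const int *a, size_t n, int key) {
    const int *base = a;
    while (n > 0) {
        size_t half = n / 2;
        if (base[half] < key) {
            base += half + 1;
            n -= half + 1;   /* strictly decreases n */
        } else if (base[half] > key) {
            n = half;        /* strictly decreases n */
        } else {
            return (base + half) - a;
        }
    }
    return -1;
}
```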
Mutually recursive function calls are another case that's very difficult. In order to avoid treating every single function call as needing to have that universal side effect inserted (destroying tons of optimization), the compiler would have to analyze the call graph.
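A hypothetical mutually recursive pair showing why per-function reasoning is not enough: looking at either function alone, a compiler cannot rule out unbounded recursion; only the call graph shows the argument strictly decreasing across the cycle.

```c
#include <stdbool.h>

/* Mutually recursive pair: neither function's termination can be
 * established without looking at the other, so a conservative scheme
 * would have to treat both calls as having the universal side effect. */
bool is_even(unsigned n);
bool is_odd(unsigned n);

bool is_even(unsigned n) { return n == 0 ? true  : is_odd(n - 1); }
bool is_odd (unsigned n) { return n == 0 ? false : is_even(n - 1); }
```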
Even if the recursion isn't via guaranteed tail calls, stack overflow is well-defined and safe in these languages. It CANNOT be replaced with arbitrary memory corruption. They're saying that even remotely safe languages should have significantly worse compile-time and performance.
I'm pretty cool with pessimizing recursion of all forms. Recursion is a bug. :-)
It's difficult for a compiler to know if it happens though. It would need to do whole-program analysis to build a potential call graph in order to avoid inserting a massive number of these performance-killing intrinsics. Indirect calls also make it much harder to figure out.
It penalizes code without any recursion, because the compiler doesn't know that. You really just end up needing to do the same analysis LLVM should be doing yourself in the higher-level compiler: an internal always_returns function attribute, where functions without it get the intrinsic.
Except that in the LLVM layer, functions without always_returns wouldn't get penalized by being forced to pretend they have a universal side effect just because the compiler couldn't prove that they return. Instead, it just couldn't be as aggressive with hoisting / removal around them.
So, what they are doing is forcing a higher level optimization / analysis layer to exist (which hurts compile-time performance significantly since you do the same analysis at different layers) to have it output something to work around the broken lower layer.
If a function is calling another function without a locally-visible definition, it can't be assumed to be pure anyway, no? If definitions are locally visible, potential for recursion is visible.
LLVM will go through the program, mark things as pure, and bubble that up. It has a function-attribute pass responsible for bubbling up internal function attributes like noreturn, etc. It could do that with always_returns too.
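A toy sketch of what such bottom-up bubbling could look like (not LLVM's actual pass; the names and data layout here are invented for illustration): iterate to a fixed point, granting a function the attribute once its own body is known to terminate and every callee already has it. Members of a recursive cycle can never earn it this way, which is the conservative outcome you'd want.

```c
#include <stdbool.h>

#define MAX_CALLEES 4

/* Invented toy model of a call graph: each function records whether its
 * own body provably terminates, and which functions it calls. */
struct fn {
    bool body_terminates;      /* local control flow provably finite? */
    bool always_returns;       /* derived attribute, bubbled up */
    int  callees[MAX_CALLEES]; /* indices into the same array */
    int  ncallees;
};

/* Fixed-point propagation: a function earns always_returns once its
 * body terminates and all of its callees already have the attribute.
 * Functions in a recursive cycle never qualify. */
void propagate_always_returns(struct fn *fns, int n) {
    bool changed = true;
    while (changed) {
        changed = false;
        for (int i = 0; i < n; i++) {
            if (fns[i].always_returns || !fns[i].body_terminates)
                continue;
            bool ok = true;
            for (int c = 0; c < fns[i].ncallees; c++)
                if (!fns[fns[i].callees[c]].always_returns) {
                    ok = false;
                    break;
                }
            if (ok) {
                fns[i].always_returns = true;
                changed = true;
            }
        }
    }
}
```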

