Compilers are extremely bad at proving termination, and the issue with the sideeffect intrinsic is that it acts as a universal side effect that optimizations don't handle. It is treated as potentially doing global memory writes, etc., and completely breaks all other optimizations.
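As a rough Rust illustration of the cost (the real intrinsic isn't reachable from source code, so the opaque barrier function pointer below is just a made-up stand-in for it):

```rust
// `barrier` stands in for the inserted intrinsic: an opaque call the
// optimizer knows nothing about.
fn sum(xs: &[u64], barrier: fn()) -> u64 {
    let mut total = 0;
    for &x in xs {
        // Imagine the frontend inserted this because it couldn't prove the
        // loop terminates. An opaque call like this blocks vectorization of
        // the loop and keeps memory operations the optimizer can't prove
        // unrelated from moving across it.
        barrier();
        total += x;
    }
    total
}
```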
So for example, you can have a simple loop over an array based on its length, and the compiler may not be able to tell that it terminates, since it can't tell that the length of the array isn't changed (even though a human trivially can). If you insert the intrinsic, it destroys perf.
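For instance, a hypothetical sketch like this has a bound the optimizer can't easily prove loop-invariant, even though a human can see the body never changes the length:

```rust
use std::cell::RefCell;

fn sum_all(items: &RefCell<Vec<u64>>) -> u64 {
    let mut total = 0;
    let mut i = 0;
    // The length is re-read through the RefCell every iteration, and interior
    // mutability means the compiler can't simply assume it stays the same.
    while i < items.borrow().len() {
        total += items.borrow()[i];
        i += 1;
    }
    total
}
```

If the frontend has to drop the barrier into that loop because it can't prove termination, whatever cleanup of the repeated borrows and length reloads the optimizer could otherwise attempt is blocked too.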
For an array, the length may change but it's not infinite, so the loop always terminates. It would be better to evaluate the length only once before the loop, though, if you know it won't change but the compiler can't know that.
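Continuing the hypothetical sketch above, the manual fix would look something like this: read the length once, and both the bound and termination become obvious.

```rust
use std::cell::RefCell;

fn sum_all_hoisted(items: &RefCell<Vec<u64>>) -> u64 {
    // Evaluate the length exactly once, before the loop.
    let n = items.borrow().len();
    let mut total = 0;
    for i in 0..n {
        // If other code somehow shrank the Vec mid-loop, indexing would just
        // panic, which is still safe and well-defined.
        total += items.borrow()[i];
    }
    total
}
```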
It depends on the simplicity of the condition and how it's written. You could write a binary search so that the compiler can see that it always terminates, but that wouldn't be the normal way to write it. Many loops are not going to be verifiable as trivially terminating.
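One way to get that kind of visibly terminating shape, as a hedged sketch rather than a recommendation: bound the search by an explicit iteration budget instead of by the shrinking interval, so the outer loop is a plain counted for loop.

```rust
fn lower_bound(slice: &[u64], target: u64) -> usize {
    let (mut lo, mut hi) = (0usize, slice.len());
    // The interval at least halves every round, so word-size + 1 rounds is
    // always enough; the counted loop makes termination structurally obvious.
    for _ in 0..(usize::BITS as usize + 1) {
        if lo >= hi {
            break;
        }
        let mid = lo + (hi - lo) / 2;
        if slice[mid] < target {
            lo = mid + 1;
        } else {
            hi = mid;
        }
    }
    lo
}
```

The usual while lo < hi form instead needs the reader (or the compiler) to argue that hi - lo strictly shrinks every iteration, which is already a small termination proof.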
Mutually recursive function calls are another case that's very difficult. In order to avoid treating every single function call as needing to have that universal side effect inserted (destroying tons of optimization), the compiler would have to analyze the call graph.
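A tiny made-up sketch of why: neither function below is recursive on its own, so showing that a call to either one eventually returns requires looking at the call graph, and the actual termination argument (n strictly decreasing) spans both bodies.

```rust
fn is_even(n: u64) -> bool {
    // Termination depends on n decreasing across *both* functions.
    if n == 0 { true } else { is_odd(n - 1) }
}

fn is_odd(n: u64) -> bool {
    if n == 0 { false } else { is_even(n - 1) }
}
```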
Even if the recursion isn't via guaranteed tail calls, stack overflow is well-defined and safe in these languages. It CANNOT be replaced with arbitrary memory corruption. They're effectively saying that even remotely safe languages should have significantly worse compile times and performance.
I'm pretty cool with pessimizing recursion of all forms. Recursion is a bug. :-)
It's difficult for a compiler to know whether recursion happens, though. It would need to do whole-program analysis to build a potential call graph in order to avoid having to insert a massive number of these performance-killing intrinsics. Indirect calls make it even harder to figure out.
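For the indirect-call part, a minimal made-up sketch: the target of step is only known at run time, so without whole-program information the compiler has to assume it could recurse back into this function or not return at all.

```rust
fn drive(mut state: u64, step: fn(u64) -> u64) -> u64 {
    for _ in 0..1_000 {
        // The callee is only known at run time; it might recurse back into
        // drive, loop forever, or neither. The compiler can't tell.
        state = step(state);
    }
    state
}
```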
It penalizes code without any recursion, because the compiler doesn't know that. You really just end up having to do, yourself in the higher-level compiler, the same analysis LLVM should be doing: an internal always_returns function attribute, where functions without it get the intrinsic.
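A hypothetical sketch of that frontend-side workaround, with every name invented for illustration: track an internal always_returns flag per function, and only pay for the barrier when lowering calls to functions that lack it.

```rust
struct FuncInfo {
    always_returns: bool, // internal frontend attribute, not an LLVM one
}

fn lower_call(callee: &FuncInfo, emit_barrier: &mut dyn FnMut()) {
    if !callee.always_returns {
        // Can't prove the callee comes back, so insert the universal side
        // effect to keep the surrounding loop from being optimized away.
        emit_barrier();
    }
    // ... then emit the actual call ...
}
```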
If a function is calling another function without a locally-visible definition, it can't be assumed to be pure anyway, no? And if the definitions are locally visible, the potential for recursion is visible too.
LLVM will go through the program and mark things as pure, then bubble that up. It has a function-attribute pass responsible for bubbling up inferred function attributes like noreturn, etc. It could do that with always_returns too.
So, LLVM is in a position to do this correctly already, in a way that's essentially ideal. Instead, they are just ignoring non-termination as an effect, and providing a painful and costly workaround for frontends that actually care about correctness.
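To make the idea concrete, here is a very rough sketch of that kind of bottom-up inference for a may_return/always_returns-style attribute, written as a fixed point over the call graph. The Func type, its fields, and infer_always_returns are all invented for illustration and are not LLVM's API.

```rust
use std::collections::HashMap;

struct Func {
    name: String,
    has_unbounded_loop: bool, // loops the analysis couldn't bound
    callees: Vec<String>,     // direct, known call targets
}

fn infer_always_returns(funcs: &[Func]) -> HashMap<&str, bool> {
    // Start with nothing proven; iterate until no more functions can be marked.
    let mut marked: HashMap<&str, bool> =
        funcs.iter().map(|f| (f.name.as_str(), false)).collect();
    let mut changed = true;
    while changed {
        changed = false;
        for f in funcs {
            if marked[f.name.as_str()] || f.has_unbounded_loop {
                continue;
            }
            // Calls to unknown (external) functions keep this one unmarked.
            let all_callees_return = f
                .callees
                .iter()
                .all(|c| marked.get(c.as_str()).copied().unwrap_or(false));
            if all_callees_return {
                marked.insert(f.name.as_str(), true);
                changed = true;
            }
        }
    }
    marked
}
```

Note that a mutually recursive cycle never gets marked here, since neither side can be marked before the other; that's the conservative answer the earlier replies were worried about, and functions left unmarked would still get the intrinsic.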
They do have 'noreturn' and bubble it up, just no 'always_returns' or the opposite, 'may_return', which would accomplish the same thing. Their optimizations already treat 'noreturn' as an effect too... and they aren't going to remove that, since it should be treated as one.

