No language needs to cater to optimization of functions that don't obviously terminate. Writing such functions is a bug: either a behavioral bug (if it really turns out not to terminate) or an analyzability bug (its correctness is non-obvious).
Turning non-termination into memory corruption and other awfulness isn't something that safe languages can accept. It doesn't matter that it's already a bug. It has to remain safe, and that has an impact on language semantics. The sideeffect intrinsic has a huge performance cost.
That's not the consequence of what I said. Rather, unless the safe language has a proof of termination, it marks the function non-pure.
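A minimal C sketch of the stakes, using a GCC-style attribute; the function is hypothetical, and termination of its loop is genuinely unprovable today:

```c
#include <stdio.h>

/* Whether this loop terminates for every input is the open Collatz
   conjecture, and for n == 0 it provably never terminates. Marking the
   function pure would license the optimizer to delete or merge calls,
   silently erasing the non-termination, so without a termination proof
   the marking must stay off. */
/* __attribute__((pure)) */   /* only sound given a proof of termination */
unsigned collatz_steps(unsigned n) {
    unsigned steps = 0;
    while (n != 1) {
        n = (n % 2) ? 3 * n + 1 : n / 2;
        steps++;
    }
    return steps;
}

int main(void) {
    printf("%u\n", collatz_steps(27));   /* prints 111 */
    return 0;
}
```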
If this produces notably worse optimization, it's the fault of the person who wrote the code that doesn't obviously terminate, not the compiler's fault.
I say this as someone who's been guilty (in musl) of lots of code with non-obvious logic for termination or for satisfying some other critical invariant, and I want to fix it. Penalizing it with bad performance is a feature. :-)
Compilers are extremely bad at proving termination, and the issue with the sideeffect intrinsic is that it acts as a universal side effect that optimizations don't handle. It's treated as potentially doing global memory writes, etc., and completely breaks all other optimizations.
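A rough C analogy for that cost, assuming GCC/Clang inline asm; `universal_side_effect` is a made-up stand-in for how the intrinsic is modeled, not the intrinsic itself:

```c
#include <stddef.h>

/* An opaque barrier the optimizer must treat as possibly reading and
   writing any memory, which is roughly how the intrinsic behaves. */
static inline void universal_side_effect(void) {
    __asm__ volatile("" ::: "memory");
}

long sum(const int *a, const size_t *len) {
    long s = 0;
    /* With the barrier inside, *len must be reloaded every iteration and
       the loop can't be vectorized; without it, nothing in the loop
       writes memory, so *len could be hoisted out. */
    for (size_t i = 0; i < *len; i++) {
        universal_side_effect();
        s += a[i];
    }
    return s;
}
```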
So for example, you can have a simple loop over an array based on the length, and the compiler may not be able to tell that it terminates, since it can't tell that the length of the array isn't changed (even though a human trivially can). If you insert the intrinsic, it destroys perf.
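A sketch of such a loop in C; the type and names are invented for illustration:

```c
#include <stddef.h>

struct buf { size_t len; int *data; };

/* The store through out might alias b->len, so the compiler can't prove
   the bound is loop-invariant; a human sees termination immediately. */
void scale(struct buf *b, int *out) {
    for (size_t i = 0; i < b->len; i++)   /* b->len reloaded each pass */
        out[i] = 2 * b->data[i];
}
```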
For an array, the length may change but it's not infinite, so the loop always terminates. It would be better to evaluate the length only once before the loop, though, if you know it won't change but the compiler can't know that.
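Under the same made-up type as above, hoisting the length makes the trip count a fixed local that both the optimizer and the reader can check:

```c
#include <stddef.h>

struct buf { size_t len; int *data; };

/* Reading the length once fixes the trip count up front: the loop now
   obviously terminates and the bound needn't be reloaded. */
void scale(struct buf *b, int *out) {
    size_t n = b->len;                    /* evaluate once, before the loop */
    for (size_t i = 0; i < n; i++)
        out[i] = 2 * b->data[i];
}
```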
It depends on the simplicity of the condition and how it's written. You could write a binary search so that the compiler can see that it always terminates, but it wouldn't be the normal way to write it, etc. Many loops are not going to be verifiable as trivially terminating.
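For instance, here is a sketch of both shapes: the usual form relies on a shrinking-interval invariant that compilers rarely prove, while a variant with an explicit iteration bound terminates structurally (neither is from the thread):

```c
#include <limits.h>
#include <stddef.h>

/* Normal form: terminates because [lo, hi) strictly shrinks each
   iteration, an invariant a compiler rarely proves on its own. */
size_t lower_bound(const int *a, size_t n, int key) {
    size_t lo = 0, hi = n;
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (a[mid] < key) lo = mid + 1;
        else hi = mid;
    }
    return lo;
}

/* Compiler-visible form: the interval at least halves each pass, so a
   counter bounded by the bit width of size_t suffices, and termination
   is now structural. Nobody writes binary search this way. */
size_t lower_bound_bounded(const int *a, size_t n, int key) {
    size_t lo = 0, hi = n;
    for (size_t iter = 0; iter < CHAR_BIT * sizeof(size_t) && lo < hi; iter++) {
        size_t mid = lo + (hi - lo) / 2;
        if (a[mid] < key) lo = mid + 1;
        else hi = mid;
    }
    return lo;
}
```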
Mutually recursive function calls are another case that's very difficult. In order to avoid treating every single function call as needing to have that universal side effect inserted (destroying tons of optimization), the compiler would have to analyze the call graph.
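A toy C illustration of the problem; the functions are hypothetical:

```c
/* Neither function loops by itself, so proving the pair terminates
   requires reasoning across the call graph; locally, every call site
   would otherwise need the universal-side-effect treatment. */
int is_odd(unsigned n);

int is_even(unsigned n) { return n == 0 ? 1 : is_odd(n - 1); }
int is_odd(unsigned n)  { return n == 0 ? 0 : is_even(n - 1); }
```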
Even if the recursion isn't via guaranteed tail calls, stack overflow is well-defined and safe in these languages. It CANNOT be replaced with arbitrary memory corruption. They're saying that even remotely safe languages should have significantly worse compile times and performance.
It's unreasonable, especially since the optimization they're performing is still wrong even with this feature available. So sure, they can decide to needlessly hurt safe languages, but their implementation is still wrong and unsafe. It's wrong for C and C++ too.
C compilers should be guaranteeing that stack overflow is safe and traps, too. They shouldn't be optimizing out infinite recursion, as they will happily do. To be correct, it should either loop forever or crash, not fall through to other code in an indeterminate state.
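The classic C illustration (how any particular compiler treats it varies; the point is what the forward-progress assumption permits):

```c
/* Under the C/C++ forward-progress assumption, optimizers have been
   observed to compile this side-effect-free self-call into code that
   falls straight through into whatever follows, rather than looping
   or overflowing the stack. */
int forever(void) {
    return forever();
}
```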
I'm pretty cool with pessimizing recursion of all forms. Recursion is a bug. :-)
It's difficult for a compiler to know whether that happens, though. It would need to do whole-program analysis to build a potential call graph to avoid having to insert a massive number of these performance-killing intrinsics. Indirect calls also make it much harder to figure out.
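A small C sketch of the indirect-call problem; names are invented:

```c
/* fn could point anywhere, including at a wrapper that calls for_each
   again, so recursion can't be ruled out by looking at this function
   alone. */
void for_each(int *a, int n, void (*fn)(int *)) {
    for (int i = 0; i < n; i++)
        fn(&a[i]);
}
```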

