Compilers are extremely bad at proving termination, and the issue with the sideeffect intrinsic is that it acts as a universal side effect that optimizations don't understand. It's treated as potentially doing global memory writes, etc., and completely breaks all other optimizations.
So for example, you can have a simple loop over an array bounded by its length, and the compiler may not be able to tell that it terminates, since it can't prove the length of the array is never changed (even though a human trivially can). If you insert the intrinsic there, it destroys perf.
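A minimal Rust sketch of that situation, assuming the bound lives behind a raw pointer and the loop body makes an opaque call (record is a hypothetical external function, named only for illustration), so the optimizer can't rule out the length changing:

    extern "C" {
        // Hypothetical opaque function: for all the optimizer knows,
        // it could write through `len`.
        fn record(byte: u8);
    }

    unsafe fn sum(data: *const u8, len: *const usize) -> u64 {
        let mut total = 0u64;
        let mut i = 0usize;
        // A human can see nothing here changes *len, but the compiler
        // must re-read it after every opaque call and cannot prove the
        // loop finite.
        while i < *len {
            total += *data.add(i) as u64;
            record(*data.add(i));
            i += 1;
        }
        total
    }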
For an array, the length may change, but it's always finite, so the loop still terminates. It would be better to evaluate the length only once before the loop, though, if you know it won't change but the compiler can't know that.
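A sketch of that hoisting in Rust (with a slice the length is already fixed, so read this purely as an illustration of evaluating the bound once):

    fn sum(v: &[u8]) -> u64 {
        let n = v.len(); // bound evaluated once, before the loop
        let mut total = 0u64;
        for i in 0..n {
            // The trip count is now a loop-invariant, provably finite bound.
            total += v[i] as u64;
        }
        total
    }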
It depends on the simplicity of the condition and how it's written. You could write a binary search so that the compiler can see it always terminates, but that wouldn't be the normal way to write it, etc. Many loops are not going to be verifiable as trivially terminating.
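For instance, a binary search phrased so the search range strictly shrinks each iteration makes termination structural rather than something the optimizer has to infer (a sketch, not the way it's usually written):

    fn bsearch(s: &[i32], target: i32) -> Option<usize> {
        let mut lo = 0usize;
        let mut hi = s.len();
        // hi - lo strictly decreases on every iteration, so the loop
        // visibly terminates.
        while lo < hi {
            let mid = lo + (hi - lo) / 2;
            if s[mid] < target {
                lo = mid + 1;
            } else if s[mid] > target {
                hi = mid;
            } else {
                return Some(mid);
            }
        }
        None
    }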
Mutually recursive function calls are another case that's very difficult. To avoid treating every single function call as needing that universal side effect inserted (destroying tons of optimization), the compiler would have to analyze the whole call graph.
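Even a toy case shows why: in the pair below, the recursive cycle spans two functions, so no intraprocedural analysis sees the decreasing argument; the compiler has to follow the cycle through the call graph.

    // n strictly decreases only across the *pair* of functions,
    // not within either one alone.
    fn is_even(n: u32) -> bool {
        if n == 0 { true } else { is_odd(n - 1) }
    }

    fn is_odd(n: u32) -> bool {
        if n == 0 { false } else { is_even(n - 1) }
    }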
Even if the recursion isn't via guaranteed tail calls, stack overflow is well-defined and safe in these languages. It CANNOT be replaced with arbitrary memory corruption. They're saying that even remotely safe languages should accept significantly worse compile times and performance.
It's unreasonable, especially since the optimization they're performing is still wrong even with this feature available. So sure, they can decide to needlessly hurt safe languages, but their implementation is still wrong and unsafe. It's wrong for C and C++ too.
C compilers should be guaranteeing that stack overflow is safe and guaranteed to trap too. They shouldn't be optimizing out infinite recursion, as they will happily do. To be correct, it should either loop forever or crash, not fall through into other code in an indeterminate state.
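A sketch of the bug class being described, written in Rust (which was hit by the same LLVM behavior): a call whose only possible outcome is to recurse. A correct lowering either traps on stack overflow or spins forever; the criticized optimization deletes the recursion and lets execution fall through to whatever comes next.

    #[allow(unconditional_recursion)]
    fn never_returns() -> ! {
        // No base case: this must trap on stack overflow or never return.
        // Falling through past this call is the unsafe outcome.
        never_returns()
    }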
Guaranteed-to-trap isn't really safe unless you count process termination as safe. In the presence of optimizations, there's no good way to assess which changes to state have or haven't taken place at the point of the trap, so you can't really continue execution.
I'm saying that it should be guaranteed to trap and lead to the process being killed. That's perfectly safe. It's always a bug and could be exploited for denial of service, but it remains memory and type safe. It can't be exploited for arbitrary code execution.
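That's how Rust behaves today: compiled without optimizations, the sketch below recurses until it hits the stack guard page and the whole process aborts with an overflow message. Worst case is denial of service; no memory is corrupted.

    fn recurse(depth: u64) -> u64 {
        // Large local frame so the stack fills up quickly.
        let frame = [depth; 1024];
        frame[0] + recurse(depth + 1) // not a tail call
    }

    fn main() {
        // Aborts with "thread 'main' has overflowed its stack".
        println!("{}", recurse(0));
    }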
Their wrong implementation of the optimization can lead to arbitrary memory corruption. It very blatantly breaks the safety semantics of safe languages: it lets control flow fall through to somewhere that's supposed to be impossible per the language's control flow and type system.
So for example, Rust has the concept of a function that doesn't return (a diverging function, with return type !). After you call one, the compiler won't force you to write any more code afterwards, since it knows control flow can't get there. It's often explicitly needed to write code that can pass type checking.
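A sketch of such a diverging function and the type-checking role it plays (die and parse are made-up names for illustration):

    // The `!` return type tells the type checker that control
    // never comes back from a call to this function.
    fn die(msg: &str) -> ! {
        eprintln!("fatal: {msg}");
        std::process::exit(1)
    }

    fn parse(input: &str) -> u32 {
        match input.parse() {
            Ok(n) => n,
            // `!` coerces to any type, so this arm type-checks as u32
            // and no further code is required after the call.
            Err(_) => die("not a number"),
        }
    }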
So for example, you may need to handle all possible cases of something, and you know you've done it, but the compiler does not, so you insert a call to a function that is guaranteed to never return rather than unsafe { unreachable() }, which would only save an insignificant amount of space.
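A sketch of that pattern, assuming the invariant is that state is always 0 through 2; the final arm uses the safe, panicking unreachable!() macro (a diverging call) instead of the unsafe unreachable hint:

    fn describe(state: u8) -> &'static str {
        match state {
            0 => "idle",
            1 => "running",
            2 => "done",
            // Diverges safely: if the invariant is ever violated,
            // this traps instead of becoming undefined behavior.
            _ => unreachable!("state is always 0..=2"),
        }
    }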

