Conversation

GC & memory safety vs. overflow checks on arithmetic: which is slower in terms of throughput? I suspect overflow checks are. (These might seem unrelated, but they are not: memory safety makes most [not all] integer overflows unweaponizable.)
Integer overflows in cryptography can often be exploitable. You can also simply have bugs in application logic that are exploitable; plenty of those kinds of bugs in video games, among other areas. Most of them aren't exploitable with memory safety, but a few still are.
Bounds checks are usually cheap when they don't interfere with loop unrolling, vectorization, and other loop optimizations. A bounds check implies you're already doing memory reads/writes, so it usually won't have much impact on its own. It can certainly push you past certain CPU cache/predictor limits, though.
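A minimal sketch of the cost model in the post above, assuming the usual compiler behavior: indexed access pays a bounds check per iteration, which can block vectorization, while a single slice check hoisted up front leaves a check-free loop body. Function names here are illustrative, not from any library.

```rust
// Per-iteration bounds checks vs. one hoisted check (illustrative names).
fn sum_indexed(v: &[u64], n: usize) -> u64 {
    let mut s: u64 = 0;
    for i in 0..n {
        s += v[i]; // bounds check on every iteration; can block vectorization
    }
    s
}

fn sum_hoisted(v: &[u64], n: usize) -> u64 {
    let v = &v[..n]; // one check here; the loop body below needs none
    let mut s: u64 = 0;
    for &x in v {
        s += x;
    }
    s
}
```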
Integer operations are everywhere, and existing compilers are terrible at optimizing out, hoisting, or combining the checks. Bounds + overflow checks everywhere add up to a fairly high cost. Rust's libraries make careful, rare use of unchecked ops to reduce bounds-check overhead.
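To make the overflow-check cost concrete, here is a small sketch using Rust's standard integer methods: the checked form returns an `Option` and implies a branch per operation, while the wrapping form compiles to a plain add (the default in release builds).

```rust
// Checked arithmetic: Option + a branch per add.
fn sum_checked(xs: &[u32]) -> Option<u32> {
    xs.iter().try_fold(0u32, |acc, &x| acc.checked_add(x))
}

// Wrapping arithmetic: plain adds, overflow silently wraps around.
fn sum_wrapping(xs: &[u32]) -> u32 {
    xs.iter().fold(0u32, |acc, &x| acc.wrapping_add(x))
}
```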
By reusing those libraries, including heavy use of iterators, a lot of that overhead gets avoided without depending on unreliable optimizations. LLVM is great at inlining and a few obvious optimizations but ridiculously bad as soon as there are pointers or anything needing range analysis.
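The iterator point above can be sketched like this: an iterator-based loop carries no explicit index, so there is no bounds check for the compiler to eliminate in the first place, and no range analysis is needed to vectorize it.

```rust
// Iterator form: zip bounds the traversal by construction,
// so the body has no per-element bounds check at all.
fn dot(a: &[f32], b: &[f32]) -> f32 {
    a.iter().zip(b.iter()).map(|(x, y)| x * y).sum()
}
```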
What you'd ideally want is for them to propagate overflow errors as long as possible without branching, and only branch at the very end, when you actually read the value out to somewhere beyond the compiler's understanding. That's not how they actually compile things, though.
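A sketch of that deferred-check scheme in today's Rust, under the assumption that you control the whole computation: accumulate the overflow flags branch-free through the loop and branch once at the end, instead of a check-and-branch after every operation.

```rust
// Deferred overflow check: flags accumulate without branching,
// and the single branch happens when the value is consumed.
fn sum_deferred(xs: &[u64]) -> Option<u64> {
    let mut acc: u64 = 0;
    let mut overflowed = false;
    for &x in xs {
        let (next, o) = acc.overflowing_add(x);
        acc = next;
        overflowed |= o; // sticky flag, no branch per operation
    }
    if overflowed { None } else { Some(acc) }
}
```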
Hardware support would look like overflow poison values that trap on read but simply propagate through integer operations: add poison to another integer and you get poison, and so on, so the error surfaces as late as possible.
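No current ISA has this, but the proposed semantics can be emulated in software as a sketch: a value type carrying a sticky poison bit that flows through arithmetic branch-free, with the only branch (the "trap") at the final read. The type and method names are hypothetical.

```rust
// Software emulation of the hypothetical "overflow poison" hardware:
// operations propagate poison branch-free; only read() branches.
#[derive(Clone, Copy)]
struct Poisoned {
    val: u64,
    poison: bool, // sticky overflow flag
}

impl Poisoned {
    fn new(v: u64) -> Self {
        Poisoned { val: v, poison: false }
    }

    fn add(self, other: Poisoned) -> Poisoned {
        let (val, o) = self.val.overflowing_add(other.val);
        // poison simply accumulates through the dataflow, no branch
        Poisoned { val, poison: self.poison | other.poison | o }
    }

    // the one branch: "trap" on read, as late as possible
    fn read(self) -> Result<u64, ()> {
        if self.poison { Err(()) } else { Ok(self.val) }
    }
}
```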