Conversation

I love libdivide. Moving to libdivide-2.0 in hardened_malloc is an easy win. 16 byte malloc microbenchmark on Broadwell-E:

Hardware division: 1s
libdivide-1.1: 0.74s
libdivide-2.0: 0.71s

In a lightweight build:

Hardware division: 1s
libdivide-1.1: 0.62s
libdivide-2.0: 0.59s
I never used to think about it, but I've come to realize that integer division is ridiculously slow, and it stands out to me in code that's supposed to perform well. The trick with libdivide is that it's doing the same kind of division-by-a-constant optimizations as compilers.
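As a rough illustration of the kind of transformation being described (not libdivide's or any compiler's actual code): division by a known constant gets replaced with a multiply by a precomputed reciprocal plus a shift. The sketch below uses the standard magic constant for unsigned 32-bit division by 10; the test values are arbitrary.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Division by a constant rewritten as multiply + shift. 0xCCCCCCCD is
 * ceil(2^35 / 10), so taking the high bits of the 64-bit product recovers
 * n / 10 for every 32-bit n without a division instruction. */
static inline uint32_t div10(uint32_t n) {
    return (uint32_t)(((uint64_t)n * 0xCCCCCCCDu) >> 35);
}

int main(void) {
    const uint32_t samples[] = { 0, 1, 9, 10, 11, 12345, 0x7FFFFFFFu, 0xFFFFFFFFu };
    for (size_t i = 0; i < sizeof samples / sizeof samples[0]; i++)
        assert(div10(samples[i]) == samples[i] / 10);
    return 0;
}
```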
So it does a fair bit of work to figure out the proper shifts/multiplications when you set up the divisor, and then you reuse it many times. In hardened_malloc, it sets up a slab size divisor and a size divisor for each size class, and then uses those to find the metadata.
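A minimal sketch of that precompute-then-reuse pattern using libdivide's C API. The size-class table and the slot-index calculation here are illustrative only, not hardened_malloc's actual code:

```c
#include <stdint.h>
#include "libdivide.h"

/* Illustrative size classes; hardened_malloc's real tables differ. */
static const uint32_t size_classes[] = { 16, 32, 48, 64, 80, 96, 112, 128 };
#define N_SIZE_CLASSES (sizeof size_classes / sizeof size_classes[0])

/* One precomputed divisor per size class, set up once at initialization.
 * This is where libdivide does the expensive work of finding the right
 * multiplier and shift. */
static struct libdivide_u32_t size_dividers[N_SIZE_CLASSES];

static void init_dividers(void) {
    for (unsigned i = 0; i < N_SIZE_CLASSES; i++)
        size_dividers[i] = libdivide_u32_gen(size_classes[i]);
}

/* Hot path: reuse the precomputed divisor instead of a div instruction,
 * e.g. to turn an offset within a slab into a slot index. */
static uint32_t slot_index(uint32_t offset_in_slab, unsigned class_idx) {
    return libdivide_u32_do(offset_in_slab, &size_dividers[class_idx]);
}
```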
This Tweet was deleted by the Tweet author.
This Tweet was deleted by the Tweet author.
For my use case, I could technically try something like a switch covering all possible divisors, with each case performing division by a constant (see the sketch below). I do know the set of divisors in advance, but which one applies is only known at runtime. Not sure Clang/GCC would handle it well.
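A hedged sketch of that idea, with made-up size-class divisors: each case divides by a literal constant, so Clang/GCC can strength-reduce every branch into a multiply and shift, at the cost of a branch or jump table to pick the case.

```c
#include <stddef.h>

/* Hypothetical dispatch over a known divisor set; the values are
 * illustrative, not hardened_malloc's real size classes. Each case uses a
 * compile-time constant, so no division instruction is emitted. */
static inline size_t divide_by_size_class(size_t n, unsigned class_idx) {
    switch (class_idx) {
        case 0: return n / 16;
        case 1: return n / 32;
        case 2: return n / 48;
        case 3: return n / 64;
        case 4: return n / 80;
        /* ...one case per size class... */
        default: __builtin_unreachable();
    }
}
```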
For max speed you would want to specialize the surrounding code (i.e., the loop calling your divide). You could accomplish it with template functions, macro magic or function pointers: essentially creating N versions of the core loop for N divisors, then dispatching at runtime.
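A rough sketch of the macro/function-pointer variant of that idea (the divisors, loop body, and names are made up): generate one copy of the core loop per divisor so each copy divides by a compile-time constant, then pick one once at runtime, outside the hot path.

```c
#include <stddef.h>
#include <stdint.h>

/* Generate one specialized copy of the core loop per divisor; inside each
 * copy the divisor is a compile-time constant, so the compiler turns the
 * division into a multiply + shift. */
#define DEFINE_SUM_OF_QUOTIENTS(d)                                      \
    static uint64_t sum_of_quotients_##d(const uint32_t *v, size_t n) { \
        uint64_t sum = 0;                                               \
        for (size_t i = 0; i < n; i++)                                  \
            sum += v[i] / d;                                            \
        return sum;                                                     \
    }

DEFINE_SUM_OF_QUOTIENTS(16)
DEFINE_SUM_OF_QUOTIENTS(48)
DEFINE_SUM_OF_QUOTIENTS(80)

typedef uint64_t (*sum_fn)(const uint32_t *v, size_t n);

/* Dispatch once, based on the divisor only known at runtime. */
static sum_fn select_loop(uint32_t divisor) {
    switch (divisor) {
        case 16: return sum_of_quotients_16;
        case 48: return sum_of_quotients_48;
        case 80: return sum_of_quotients_80;
        default: return NULL; /* fall back to a generic division path */
    }
}
```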
The reason it wouldn't help, though, is that it already has a convenient place to read them from at runtime. The only way a table would help is if the compiler were clever enough to do optimizations based on it, which I wouldn't expect. It would just move them to a different place.