Conversation

Replying to and
The paxtest application has the proper algorithms to measure ASLR entropy. Note that paxtest cannot properly measure ASR entropy. FreeBSD is implementing ASR. Thus, paxtest cannot be used to measure and compare between FreeBSD and HardenedBSD.
Replying to and
It's fair to use it on FreeBSD to measure what matters most though. Fine-grained heap randomization is a separate feature that's best layered on top, and can't be accomplished well at only the mmap layer. Especially true with jemalloc involved, which aligns the mmap heap, etc.
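As a rough illustration of what "layered on top" can mean here (a sketch with made-up names, not any specific allocator's code): instead of handing out the next free slot in a slab, an allocator can pick a uniformly random free slot, adding per-allocation randomness beyond whatever the mmap base provides.

```c
/* Minimal sketch of fine-grained randomization inside an allocator:
 * pick a random free slot in a slab rather than the lowest one.
 * The slab/SLOTS names and sizes are illustrative only. */
#include <stdlib.h>
#include <string.h>

#define SLOTS 256

struct slab {
    unsigned char free[SLOTS];     /* 1 = slot is free */
    unsigned char data[SLOTS][64]; /* fixed-size slots for one size class */
};

void slab_init(struct slab *s) {
    memset(s->free, 1, SLOTS); /* all slots start out free */
}

/* Return a uniformly random free slot so that an allocation's address
 * is not predictable from allocation order. arc4random_uniform() is
 * used for brevity (available on the BSDs and recent glibc). */
void *slab_alloc(struct slab *s) {
    unsigned free_count = 0;
    for (unsigned i = 0; i < SLOTS; i++)
        free_count += s->free[i];
    if (free_count == 0)
        return NULL;
    unsigned target = arc4random_uniform(free_count);
    for (unsigned i = 0; i < SLOTS; i++) {
        if (s->free[i] && target-- == 0) {
            s->free[i] = 0;
            return s->data[i];
        }
    }
    return NULL; /* not reached */
}
```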
Replying to and
ASLR can be extended with finer-grained bases via userspace features, and paxtest is mostly oblivious to that. It is capable of seeing one extremely tiny aspect of the difference between malloc implementations based on the entropy of one allocation between different executions.
Replying to and
glibc: Heap randomization test (PIE): 32 quality bits (guessed)
jemalloc: Heap randomization test (PIE): 23 quality bits (guessed)
hardened_malloc: Heap randomization test (PIE): 41 quality bits (guessed)

Entropy of a specific allocation is such a tiny aspect of it though.
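For context, here is a rough sketch (not paxtest's actual code) of how this kind of estimate can be produced: re-exec a helper many times so ASLR is applied fresh each run, record where one heap allocation lands, and count the bits that vary across runs.

```c
/* Rough sketch of a paxtest-style "heap randomization" estimate.
 * Child mode (-c): print the address of a fresh malloc() and exit.
 * Parent mode: fork+exec itself repeatedly (exec means new ASLR),
 * OR together the XORs of the sampled addresses against the first
 * one, and report the popcount as an approximate bit count. This
 * only measures where one allocation lands across executions. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/wait.h>

#define RUNS 100

int main(int argc, char **argv) {
    if (argc > 1 && strcmp(argv[1], "-c") == 0) {
        printf("%lx\n", (unsigned long)(uintptr_t)malloc(64));
        return 0;
    }

    unsigned long first = 0, diff = 0;
    for (int i = 0; i < RUNS; i++) {
        int fds[2];
        if (pipe(fds) < 0)
            return 1;
        pid_t pid = fork();
        if (pid == 0) {
            dup2(fds[1], STDOUT_FILENO);
            close(fds[0]);
            execl(argv[0], argv[0], "-c", (char *)NULL); /* exec => fresh ASLR */
            _exit(1);
        }
        close(fds[1]);
        char buf[32] = {0};
        read(fds[0], buf, sizeof(buf) - 1);
        close(fds[0]);
        waitpid(pid, NULL, 0);
        unsigned long addr = strtoul(buf, NULL, 16);
        if (i == 0)
            first = addr;
        else
            diff |= first ^ addr;
    }
    /* __builtin_popcountl is a GCC/Clang builtin */
    printf("~%d bits of variation in one heap allocation's address\n",
           __builtin_popcountl(diff));
    return 0;
}
```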
Replying to and
So it's not even really worth noting or talking about beyond the fact that jemalloc aligning the heap is bad for ASLR and for fine-grained heap randomization; it's one of the least interesting aspects of malloc security. It's not what paxtest is aimed at testing at all.
Replying to and
ASLR friendliness and fine-grained randomization (which it doesn't do) are a tiny aspect of malloc security though. Protected out-of-line metadata, detecting all invalid frees, canaries, quarantines, isolated partitions for different sizes / types, etc. are all more interesting.
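As one concrete example from that list, a minimal sketch of allocation canaries (layout and names are illustrative, not hardened_malloc's actual design): a secret value is placed right after each allocation and verified on free to catch linear heap overflows.

```c
/* Sketch of allocation canaries: a random secret written after the
 * usable region and checked on free. A real allocator derives the
 * allocation size from its metadata rather than taking it as an
 * argument; this is simplified for illustration. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static uint64_t canary_secret;

__attribute__((constructor)) static void canary_init(void) {
    /* getentropy() stands in for whatever CSPRNG the allocator uses */
    getentropy(&canary_secret, sizeof(canary_secret));
}

void *alloc_with_canary(size_t n) {
    unsigned char *p = malloc(n + sizeof(uint64_t));
    if (!p)
        return NULL;
    /* place the canary directly after the usable region */
    memcpy(p + n, &canary_secret, sizeof(canary_secret));
    return p;
}

void free_with_canary(void *p, size_t n) {
    /* a corrupted canary means a linear heap overflow occurred */
    if (memcmp((unsigned char *)p + n, &canary_secret,
               sizeof(canary_secret)) != 0) {
        fprintf(stderr, "heap overflow detected\n");
        abort();
    }
    free(p);
}
```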
Replying to and
It does still impact ASLR via the fundamental low-level approach it takes. The chunk size can be set lower to reduce the entropy loss but it's still an entropy loss. It's hard to offer alternatives with equivalent performance on 32-bit, but on 64-bit there's a better approach.
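To make the entropy loss concrete, here is a small illustration with a hypothetical base address and a 2 MiB chunk size chosen as an example (not jemalloc's exact configuration): chunk alignment zeroes low bits that a merely page-aligned mmap placement would leave random.

```c
/* Illustration of the entropy cost of chunk alignment. */
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uintptr_t base = 0x7f3a9c2d5000;         /* hypothetical page-aligned mmap base */
    uintptr_t chunk_align = 2 * 1024 * 1024; /* example 2 MiB chunk alignment */
    printf("page-aligned base:  %#lx\n", (unsigned long)base);
    printf("chunk-aligned base: %#lx\n",
           (unsigned long)(base & ~(chunk_align - 1)));
    /* 4 KiB pages fix the low 12 bits; 2 MiB chunks fix the low 21,
     * so the alignment requirement gives up roughly 9 bits of placement. */
    return 0;
}
```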
Replying to and
Rather than having aligned chunks in order to find metadata efficiently (chunk header for small allocations, global data structure for large allocations made of spans of chunks), the 64-bit address space is abundant enough to divide up memory into ranges for mapping to metadata.
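A minimal sketch of that range-based approach, with illustrative constants and names rather than any particular allocator's real layout: reserve a large region of address space up front and index a flat, out-of-line metadata array by the slab index derived from the pointer, instead of masking the pointer down to an aligned chunk header.

```c
/* Sketch of 64-bit range-based metadata lookup: no header next to
 * user data, O(1) pointer -> metadata via an offset into a reserved
 * region. SLAB_SHIFT and REGION_SIZE are example values. */
#include <stdint.h>
#include <stddef.h>

#define SLAB_SHIFT 18            /* 256 KiB slabs, as an example */
#define REGION_SIZE (1ULL << 40) /* 1 TiB of reserved address space */

struct slab_metadata {
    uint32_t size_class;
    uint32_t free_slots;
    /* ... free-slot bitmap, canary seed, etc. kept out of line ... */
};

static uintptr_t region_base;          /* set once when the region is reserved */
static struct slab_metadata *metadata; /* one entry per possible slab */

/* Map a user pointer to its slab's metadata without touching the
 * allocation itself. */
struct slab_metadata *lookup(const void *p) {
    uintptr_t offset = (uintptr_t)p - region_base;
    if (offset >= REGION_SIZE)
        return NULL; /* not one of ours */
    return &metadata[offset >> SLAB_SHIFT];
}
```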
Replying to and
It's a great approach for a performance-oriented allocator, even if it's oblivious to security and uses inline metadata including free lists for performance. It offers better performance and memory usage characteristics too. Non-traditional and not an option on 32-bit though.