@RichFelker Relying on inline metadata ends up ruling out good security properties like a guaranteed abort for free(any_invalid_address).
@CopperheadSec @musllibc I just don't accept that multi-MB heap granularity, heavy per-thread overhead, etc. are reasonable.
@RichFelker @musllibc The multi-megabyte granularity comes from the alignment-based metadata. Lowering it means higher metadata overhead.
@CopperheadSec % overhead is fairly inconsequential. Large O(1) overhead sucks. n->0 asymptotics matter as much as n->∞. (e.g. 4 lg nprocs).
@RichFelker The region alignment design provides cheap, low-overhead metadata. There's pressure for larger regions for various reasons.
@RichFelker Allocations smaller than the region size are never aligned to the region size and have their metadata in the region header.
@RichFelker Other allocations are a multiple of the region size, so their minimum alignment is the region size. Those need real data structures.
@RichFelker So there's pressure to raise the region size, because managing allocations within regions is inherently faster and more parallel.
@CopperheadSec There's no sense in optimizing for speed of allocations larger than small_const*memset_rate*malloc_time.
@CopperheadSec This is because you can assume any reasonable caller actually _uses_ the amount of mem it allocs, and use takes > memset_time.