Let's start with the definition of "hard realtime" here. "Hard realtime" means that there is a guaranteed upper bound on the execution time of an operation. It says nothing about the absolute latency.
malloc() in C is a good example. It's usually pretty quick, but it is not realtime.
You can easily get hard bounds on malloc latency of something like 4 µs in practice, independent of the currently allocated amount, assuming a working realtime scheduler, etc.
Which malloc implementation was that again that doesn't traverse linked lists?
None of the typical ones "traverse" lists. They simply add or remove an entry in a list, which is constant-time independent of the current size of the list.
They all traverse a list of free chunks to find the next one that's big enough to satisfy the allocation.
I was once paid to write the realtime control software for an atomic force microscope. I know that stuff well enough.
No, they keep separate lists per size class, so there is no need to traverse: the first entry is always valid. This is somewhat size-inefficient if you allow a range of sizes in each class and split/merge free zones, but it's usually a good tradeoff, and it's perfect with fixed-size classes.
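As a rough illustration (a sketch, not the actual code of any real malloc), segregated free lists look roughly like this in C: one LIFO list per size class, so allocation pops the head and freeing pushes onto it, both in constant time regardless of how many blocks are free. All names and size constants here are made up for the example, and there is no locking or slab refill.

```c
#include <stddef.h>

#define NUM_CLASSES    8
#define MIN_CLASS_SIZE 16           /* class i holds blocks of (16 << i) bytes */

typedef struct free_block {
    struct free_block *next;        /* link stored inside the free block itself */
} free_block;

static free_block *free_lists[NUM_CLASSES];   /* one LIFO list per size class */

/* Smallest class whose block size covers `size`; the loop bound is a
 * compile-time constant, so this is still bounded-time. */
static int class_for(size_t size) {
    size_t block = MIN_CLASS_SIZE;
    for (int i = 0; i < NUM_CLASSES; i++, block <<= 1)
        if (size <= block)
            return i;
    return -1;                      /* too large for this toy allocator */
}

void *class_alloc(size_t size) {
    int c = class_for(size);
    if (c < 0 || !free_lists[c])
        return NULL;                /* class empty: a real allocator would refill from a slab */
    free_block *b = free_lists[c];  /* first entry is always usable, no search */
    free_lists[c] = b->next;        /* O(1) pop, independent of list length */
    return b;
}

void class_free(void *p, size_t size) {
    int c = class_for(size);
    if (c < 0)
        return;
    free_block *b = p;
    b->next = free_lists[c];        /* O(1) push at the head */
    free_lists[c] = b;
}
```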
For most realtime embedded work, you absolutely want fixed-size object pools for each size, with no split/merge, so that you have a trivial proof of zero fragmentation.
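A minimal sketch of that kind of fixed-size pool, assuming a static backing array and invented names and sizes: every slot is identical and nothing is ever split or merged, so "out of memory" simply means "all POOL_SLOTS slots in use", which is what makes the zero-fragmentation argument trivial.

```c
#include <stddef.h>

#define POOL_SLOTS 64
#define SLOT_SIZE  128              /* every object in this pool is 128 bytes */

typedef union slot {
    union slot *next;               /* valid only while the slot is free */
    unsigned char payload[SLOT_SIZE];
} slot;

static slot pool[POOL_SLOTS];       /* static backing store: no sbrk/mmap at runtime */
static slot *free_head;

void pool_init(void) {
    for (int i = 0; i < POOL_SLOTS - 1; i++)
        pool[i].next = &pool[i + 1];
    pool[POOL_SLOTS - 1].next = NULL;
    free_head = &pool[0];
}

void *pool_alloc(void) {            /* O(1): a handful of instructions, worst case */
    slot *s = free_head;
    if (!s)
        return NULL;                /* pool exhausted: a provable, testable limit */
    free_head = s->next;
    return s;
}

void pool_free(void *p) {           /* O(1): no coalescing, no size lookup */
    slot *s = p;
    s->next = free_head;
    free_head = s;
}
```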
Size class rounding can be considered 'internal fragmentation', but it can be kept quite limited without many size classes. The approach jemalloc uses is 4 size classes per doubling in size, which bounds internal fragmentation at 20%. Size class usage can also shift around over time, leading to waste.
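To make the 20% figure concrete, here is a toy rounding function in the spirit of that scheme (4 classes per doubling, i.e. classes spaced a quarter of the group base apart); jemalloc's real tables handle small and huge sizes differently, so treat this as an illustration rather than its actual code. The worst case is a request just above a group base, e.g. 1025 bytes rounding up to 1280, which wastes just under 20% of the allocation.

```c
#include <stdio.h>
#include <stdint.h>

/* Round `size` up to its class: sizes in (2^k, 2^(k+1)] use classes spaced
 * 2^(k-2) apart, i.e. four per doubling. Uses the GCC/Clang __builtin_clzll. */
static uint64_t size_class(uint64_t size) {
    if (size <= 16)
        return 16;                          /* toy cutoff for small sizes */
    int k = 63 - __builtin_clzll(size - 1); /* size is in (2^k, 2^(k+1)] */
    uint64_t delta = 1ULL << (k - 2);       /* class spacing within this group */
    return (size + delta - 1) & ~(delta - 1);
}

int main(void) {
    uint64_t req = 1025;                    /* just above the 1024 group base */
    uint64_t got = size_class(req);
    printf("request %llu -> class %llu, waste %.1f%%\n",
           (unsigned long long)req, (unsigned long long)got,
           100.0 * (double)(got - req) / (double)got);   /* ~19.9% */
    return 0;
}
```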
The size class approach with dedicated slabs works well in practice, and extending it to larger sizes via the jemalloc class scheme is quite cheap. Segregated fit is okay for larger allocations, but O(log n) precise address-ordered best fit causes significantly less fragmentation.
LIFO lists of partially filled and empty slabs, with LIFO free lists within them, are also just one design choice. Bitmaps have advantages, like naturally packing in address order and not needing to touch as much often-cold memory on free. Lots of allocators use a mix of these.
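For comparison, a small sketch of the bitmap variant (hypothetical names, one 64-slot slab): a set bit marks a free slot, allocation takes the lowest set bit, which packs in address order for free, and freeing only flips a bit in the slab header instead of writing a next pointer into the slot's own, possibly cold, memory.

```c
#include <stddef.h>
#include <stdint.h>

#define SLAB_SLOTS 64
#define SLOT_SIZE  256

typedef struct slab {
    uint64_t free_bits;                     /* bit i set => slot i is free */
    unsigned char slots[SLAB_SLOTS][SLOT_SIZE];
} slab;

void slab_init(slab *s) {
    s->free_bits = ~0ULL;                   /* all 64 slots start out free */
}

void *slab_alloc(slab *s) {
    if (s->free_bits == 0)
        return NULL;                        /* slab full: caller moves to another slab */
    int i = __builtin_ctzll(s->free_bits);  /* lowest free slot = lowest address (GCC/Clang builtin) */
    s->free_bits &= ~(1ULL << i);
    return s->slots[i];
}

void slab_free(slab *s, void *p) {
    size_t i = (size_t)((unsigned char *)p - s->slots[0]) / SLOT_SIZE;
    s->free_bits |= 1ULL << i;              /* only the slab header is written, not the slot */
}
```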
In a slab allocator, free slots for one size class aren't used for another, which is a form of fragmentation. It's bounded by the number of slots per slab, though, and there's almost always quite consistent reuse of the same size classes rather than major shifts from some to others.


