Conversation

Important but little-noticed effect of having a GC'd language: It forces the language devs to optimize memory allocation, because the language users can't do it themselves (at least, not nearly as easily). Lots of C++ users don't realize how fast malloc is in e.g. Java.
If I may add, malloc, no matter how fast, has other disadvantages, e.g. locking, cache thrashing, and NUMA/core unawareness. The side effects of the GC traversing random areas of memory are well known too.
A malloc implementation can approximate per-core arenas by creating an arena for each core and using sched_getcpu to choose the right one. It's an amazing approach when the threads are pinned to cores, but otherwise they move around and it's not necessarily an optimization.
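A minimal sketch of that idea, assuming Linux/glibc (sched_getcpu) and a toy fixed-size bump region standing in for a real arena; the names percore_malloc and arena_alloc are hypothetical, not from any existing allocator:

```c
/* Sketch: approximate per-core arenas with sched_getcpu().
 * Each "arena" is just a fixed bump region, purely to illustrate
 * arena selection; this is not a real malloc replacement. */
#define _GNU_SOURCE
#include <sched.h>      /* sched_getcpu */
#include <pthread.h>
#include <stddef.h>
#include <stdio.h>

#define MAX_ARENAS  64
#define ARENA_SIZE  (1 << 20)          /* 1 MiB per arena for the demo */

typedef struct {
    pthread_mutex_t lock;              /* contended only by threads that share a core */
    size_t          used;
    unsigned char   mem[ARENA_SIZE];
} arena_t;

static arena_t arenas[MAX_ARENAS];
static pthread_once_t arenas_once = PTHREAD_ONCE_INIT;

static void arenas_init(void)
{
    for (int i = 0; i < MAX_ARENAS; i++)
        pthread_mutex_init(&arenas[i].lock, NULL);
}

static void *arena_alloc(arena_t *a, size_t size)
{
    size = (size + 15) & ~(size_t)15;  /* keep 16-byte alignment */
    if (a->used + size > ARENA_SIZE)
        return NULL;                   /* toy arena is full */
    void *p = a->mem + a->used;
    a->used += size;
    return p;
}

void *percore_malloc(size_t size)
{
    pthread_once(&arenas_once, arenas_init);

    /* sched_getcpu() reports the core we are on *right now*; an unpinned
     * thread may migrate immediately afterwards, which is why this only
     * approximates per-core arenas. */
    int cpu = sched_getcpu();
    if (cpu < 0)
        cpu = 0;                       /* fall back if the call fails */

    arena_t *a = &arenas[cpu % MAX_ARENAS];
    pthread_mutex_lock(&a->lock);
    void *p = arena_alloc(a, size);
    pthread_mutex_unlock(&a->lock);
    return p;
}

int main(void)
{
    int *x = percore_malloc(sizeof *x);
    *x = 42;
    printf("allocated %d from the arena for cpu %d\n", *x, sched_getcpu());
    return 0;
}
```

With pinned threads each arena's lock is effectively uncontended; without pinning, a thread can read one CPU number and then run on another core, so the scheme degrades to "roughly spread across arenas" rather than true per-core isolation.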
Entirely possible to write a malloc implementation with lock-free algorithms. Not necessarily better than locks. Can use tiny thread caches to amortize the cost of the atomic ops by doing operations in batches. The OS can support doing it per-core instead of per-thread to save memory.
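One way the batching can look, as a sketch only, assuming C11 atomics, a single fixed size class, and hypothetical names tc_alloc/tc_free: a lock-free global free list is touched only when a small thread-local cache runs dry or overflows, so each atomic operation is amortized over a batch of blocks.

```c
/* Sketch: lock-free global free list of fixed-size blocks plus a small
 * thread-local cache. Threads allocate and free from their cache and touch
 * the shared list only in batches. Illustrative only, not production code. */
#include <stdatomic.h>
#include <stdlib.h>
#include <stddef.h>

#define BLOCK_SIZE 64     /* one fixed size class for the demo */
#define BATCH      32     /* thread cache target size */

typedef struct block {
    struct block *next;
} block_t;

/* Global lock-free stack of free blocks. We only push with a CAS loop or
 * take the *entire* list with an exchange; both avoid the classic ABA
 * hazard of lock-free single-node pops. */
static _Atomic(block_t *) global_free = NULL;

static _Thread_local block_t *cache_head = NULL;
static _Thread_local int      cache_len  = 0;

static void push_batch(block_t *first, block_t *last)
{
    block_t *old = atomic_load(&global_free);
    do {
        last->next = old;
    } while (!atomic_compare_exchange_weak(&global_free, &old, first));
}

void *tc_alloc(void)
{
    if (!cache_head) {
        /* Refill: grab the whole global list with one atomic exchange. */
        cache_head = atomic_exchange(&global_free, NULL);
        cache_len = 0;
        for (block_t *b = cache_head; b; b = b->next)
            cache_len++;
        if (!cache_head)
            return malloc(BLOCK_SIZE);   /* fall back to the system heap */
    }
    block_t *b = cache_head;
    cache_head = b->next;
    cache_len--;
    return b;
}

void tc_free(void *p)
{
    block_t *b = p;
    b->next = cache_head;
    cache_head = b;
    if (++cache_len >= 2 * BATCH) {
        /* Cache overflowed: hand half of it back to the global list with a
         * single CAS push, keeping the other half for local reuse. */
        block_t *keep_tail = cache_head;
        for (int i = 1; i < BATCH; i++)
            keep_tail = keep_tail->next;
        block_t *spill = keep_tail->next;
        keep_tail->next = NULL;
        cache_len = BATCH;

        block_t *spill_tail = spill;
        while (spill_tail->next)
            spill_tail = spill_tail->next;
        push_batch(spill, spill_tail);
    }
}
```

The per-core variant mentioned above would replace _Thread_local caches with per-CPU caches, which needs OS help (e.g. restartable sequences) to be done safely; that part is not shown here.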
Yes, but the generation-0 bump allocator of a GC runtime can benefit from that, too! The main problem with GC runtimes is that they all have some performance limit on offered load, beyond which you enter a death spiral, and the application has no visibility into this.
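For context, a generation-0 ("nursery") allocation is just a pointer bump plus a limit check, which is why allocation in a GC'd runtime can beat a general-purpose malloc. A minimal single-threaded sketch, with the hypothetical names gen0_alloc/gen0_reset; real runtimes give each thread its own nursery chunk (a TLAB) so the fast path needs no synchronization:

```c
/* Sketch of a generation-0 bump allocator: the fast path is one add and
 * one compare. When the nursery is exhausted a real runtime would run a
 * minor GC, evacuate survivors, and reset the bump pointer; here we just
 * report failure. */
#include <stddef.h>

#define NURSERY_SIZE (4u << 20)        /* 4 MiB nursery for the demo */

static unsigned char  nursery[NURSERY_SIZE];
static unsigned char *bump  = nursery;
static unsigned char *limit = nursery + NURSERY_SIZE;

void *gen0_alloc(size_t size)
{
    size = (size + 7) & ~(size_t)7;    /* 8-byte alignment */
    if ((size_t)(limit - bump) < size)
        return NULL;                   /* nursery full: trigger a minor GC here */
    void *p = bump;
    bump += size;                      /* the entire fast path */
    return p;
}

/* After a minor collection has copied out the survivors, the nursery is
 * reused by simply resetting the bump pointer. */
void gen0_reset(void)
{
    bump = nursery;
}
```

The death-spiral point stands regardless: once the allocation rate exceeds what the collector can reclaim, pause time and promotion pressure feed on each other, and the application has no standard way to observe or shed that load.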