
Important but little-noticed effect of having a GC'd language: it forces the language devs to optimize memory allocation, because the language users can't do it themselves (at least, not nearly as easily). Lots of C++ users don't realize how fast allocation is in, e.g., Java.
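
The reason allocation can be so cheap in a GC'd runtime is that the young generation is usually a bump-pointer allocator: an allocation is a pointer increment plus a bounds check, and reclamation happens in bulk. A minimal single-threaded sketch in C (hypothetical names; real runtimes add per-thread buffers, object headers, and GC triggers):

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Toy nursery: allocation is just "bump the pointer", which is why
 * allocation in a GC'd runtime can be far cheaper than a general-purpose
 * malloc that manages free lists, size classes, and coalescing.
 * (Hypothetical sketch; not any particular runtime's implementation.) */
#define NURSERY_SIZE (1 << 20)

static uint8_t nursery[NURSERY_SIZE];
static size_t  bump;                    /* offset of the next free byte */

static void *bump_alloc(size_t n)
{
    n = (n + 7) & ~(size_t)7;           /* 8-byte alignment */
    if (bump + n > NURSERY_SIZE)
        return NULL;                    /* a real runtime would trigger a minor GC here */
    void *p = &nursery[bump];
    bump += n;
    return p;
}

int main(void)
{
    int *x = bump_alloc(sizeof *x);
    if (x) { *x = 42; printf("%d\n", *x); }
    return 0;
}
```
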
If I may add, malloc, no matter how fast, has other disadvantages, e.g. locking, cache thrashing, and NUMA/core unawareness. The side effects of the GC traversing random areas of memory are well known too.
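
One rough way to see the locking/contention point is to hammer malloc/free from several threads and compare against a single thread. This is only an illustrative sketch, not a benchmark; the thread and iteration counts are arbitrary, and arena-based allocators (glibc's per-thread arenas, jemalloc, tcmalloc) will blunt the effect:

```c
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Each thread repeatedly allocates and frees small blocks.  Allocators
 * with coarse locking scale poorly here; arena-based ones fare better.
 * Compile with -pthread.  Numbers are arbitrary. */
#define ITERS    1000000
#define NTHREADS 8

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < ITERS; i++) {
        void *p = malloc(64);
        free(p);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];
    struct timespec a, b;

    clock_gettime(CLOCK_MONOTONIC, &a);
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    clock_gettime(CLOCK_MONOTONIC, &b);

    printf("%d threads: %.3f s\n", NTHREADS,
           (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9);
    return 0;
}
```
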
A malloc implementation can approximate per-core allocation by making an arena for each core and using sched_getcpu to choose the right arena. It's an amazing approach when the threads are pinned to cores, but otherwise threads move around and it's not necessarily an optimization.
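
A minimal sketch of that per-core-arena idea, assuming a fixed maximum CPU count and using sched_getcpu() to pick an arena. All names here are made up, and each arena still needs its own lock, because the thread can migrate between the sched_getcpu() call and the allocation:

```c
#define _GNU_SOURCE
#include <sched.h>      /* sched_getcpu */
#include <pthread.h>
#include <stdlib.h>
#include <stdio.h>

/* One arena per possible CPU.  sched_getcpu() is only a hint: without
 * pinning, the thread may migrate right after the call, so each arena
 * still needs a lock -- the win is that contention is spread across
 * arenas rather than eliminated.  (Hypothetical sketch.) */
#define MAX_CPUS 256

struct arena {
    pthread_mutex_t lock;
    /* real allocators keep per-arena free lists / size classes here */
};

static struct arena arenas[MAX_CPUS];

static struct arena *pick_arena(void)
{
    int cpu = sched_getcpu();
    if (cpu < 0) cpu = 0;               /* fallback if unsupported */
    return &arenas[cpu % MAX_CPUS];
}

static void *arena_alloc(size_t n)
{
    struct arena *a = pick_arena();
    pthread_mutex_lock(&a->lock);
    void *p = malloc(n);                /* stand-in for per-arena allocation */
    pthread_mutex_unlock(&a->lock);
    return p;
}

int main(void)
{
    for (int i = 0; i < MAX_CPUS; i++)
        pthread_mutex_init(&arenas[i].lock, NULL);
    void *p = arena_alloc(64);
    printf("allocated %p from arena for cpu %d\n", p, sched_getcpu());
    free(p);
    return 0;
}
```
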
Linux has support for restartable sequences (rseq), which let an in-progress operation be aborted and retried after a context switch or CPU migration, and that enables features like per-core rather than per-thread caching. A modern allocator can be a whole lot different from a traditional malloc implementation.
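
A heavily simplified sketch of the per-CPU cache shape that rseq enables: one free-object list per CPU. The real rseq ABI (see librseq, or tcmalloc's per-CPU mode) commits with a single store inside an assembly-defined restartable region that the kernel aborts on preemption or migration; here a compare-and-swap stands in for that commit so the sketch stays in plain C, and all names are hypothetical:

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdatomic.h>
#include <stdlib.h>
#include <stdio.h>

/* Shape of a per-CPU free-object cache, the kind of structure rseq makes
 * cheap.  With real rseq, the pop would be a restartable sequence: read
 * head, compute next, one committing store, all retried if the kernel
 * preempts or migrates the thread.  The CAS below is only a stand-in for
 * that commit; it is NOT how rseq works internally, and a production
 * lock-free pop would also need ABA protection (omitted here). */
#define MAX_CPUS 256

struct node { struct node *next; };

struct cpu_cache { _Atomic(struct node *) head; };

static struct cpu_cache caches[MAX_CPUS];

static struct cpu_cache *my_cache(void)
{
    int cpu = sched_getcpu();
    return &caches[(cpu < 0 ? 0 : cpu) % MAX_CPUS];
}

static void cache_push(struct node *n)
{
    struct cpu_cache *c = my_cache();
    struct node *old = atomic_load(&c->head);
    do {
        n->next = old;
    } while (!atomic_compare_exchange_weak(&c->head, &old, n));
}

static struct node *cache_pop(void)
{
    struct cpu_cache *c = my_cache();
    struct node *old = atomic_load(&c->head);
    while (old && !atomic_compare_exchange_weak(&c->head, &old, old->next))
        ;
    return old;
}

int main(void)
{
    struct node *n = malloc(sizeof *n);
    cache_push(n);
    printf("popped %p\n", (void *)cache_pop());
    free(n);
    return 0;
}
```
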
Yes, but the generation 0 bump allocator of the GC runtime can benefit from that, too! The main problem with GC runtimes is that they all have some limit on offered load beyond which you enter a death spiral, and the application has no visibility into this.
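
A sketch of how a gen-0 allocator already combines bump allocation with per-thread (TLAB-style) buffers, so per-core techniques like rseq slot in naturally. Sizes and names are invented, and no actual GC happens here:

```c
#include <stdatomic.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* TLAB-style sketch: each thread bump-allocates from a private chunk and
 * only touches the shared nursery cursor (one atomic add) when its chunk
 * runs out.  That keeps the fast path free of cross-core traffic; the
 * same idea could be driven per-CPU with rseq instead of per-thread.
 * Hypothetical names and sizes; a real runtime would trigger a minor GC
 * when the nursery is exhausted. */
#define NURSERY_SIZE (8u << 20)
#define TLAB_SIZE    (64u << 10)

static uint8_t        nursery[NURSERY_SIZE];
static _Atomic size_t nursery_cursor;             /* shared, atomic */

static _Thread_local size_t tlab_pos, tlab_end;   /* private, no atomics */

static void *tlab_alloc(size_t n)
{
    n = (n + 7) & ~(size_t)7;
    if (tlab_pos + n > tlab_end) {                /* slow path: refill TLAB */
        size_t start = atomic_fetch_add(&nursery_cursor, TLAB_SIZE);
        if (start + TLAB_SIZE > NURSERY_SIZE)
            return NULL;                          /* real runtime: minor GC */
        tlab_pos = start;
        tlab_end = start + TLAB_SIZE;
    }
    void *p = &nursery[tlab_pos];
    tlab_pos += n;
    return p;
}

int main(void)
{
    int *x = tlab_alloc(sizeof *x);
    if (x) { *x = 7; printf("%d\n", *x); }
    return 0;
}
```
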