Conversation

People ask me where my aversion to GC'ed languages in production comes from. One data point: I recently spoke with a company with a very large cloud footprint. Quote: "We spend ~30% of our CPU time in GC, so the way we reduce CPU time is usually by profiling heap allocations." :-)
In a world where your memory hierarchy has many layers with vastly different latencies, the idea of periodically traversing a graph of *all of memory* does not seem like the right design any more.
I do wonder what a design for a programming language and garbage collector would be like that is more cache- and prediction-friendly. Separating out pointer data from other data on the heap sounds like a good step (e.g. don't store pointers in the same object, have a "shadow"...
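A minimal sketch of what that separation could look like, assuming a hypothetical heap layout where each object's pointer fields live in a dense side array (the "shadow") rather than interleaved with its payload. The names (`ShadowHeap`, `ObjRef`) are illustrative only; the point is that the mark phase touches only the contiguous pointer array and never pulls payload cache lines in.

```rust
/// Handle into the heap: an index, not a raw pointer.
#[derive(Clone, Copy, PartialEq, Eq)]
struct ObjRef(u32);

struct ShadowHeap {
    // All pointer slots of all objects, packed together ("shadow" data).
    pointers: Vec<Option<ObjRef>>,
    // Payload bytes for all objects, never touched while tracing.
    payload: Vec<u8>,
    // Per-object metadata: (first pointer slot, pointer count, payload offset).
    objects: Vec<(u32, u32, u32)>,
}

impl ShadowHeap {
    /// Mark phase over the shadow arrays only; `payload` is never read,
    /// so tracing walks a dense, prefetch-friendly region of memory.
    fn mark(&self, roots: &[ObjRef], marked: &mut Vec<bool>) {
        let mut stack: Vec<ObjRef> = roots.to_vec();
        while let Some(ObjRef(i)) = stack.pop() {
            if marked[i as usize] {
                continue;
            }
            marked[i as usize] = true;
            let (first, count, _payload_off) = self.objects[i as usize];
            for slot in first..first + count {
                if let Some(child) = self.pointers[slot as usize] {
                    stack.push(child);
                }
            }
        }
    }
}
```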
A type system with Rust's concept of Send, but with non-Send as the default (unlike Rust), could use task-local memory allocation by default. This would help a lot for both automatic reference counting and tracing GC. Combine that with an application built around short-lived tasks.
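A rough sketch of the allocation side of that idea, assuming a hypothetical per-task bump arena whose handle is deliberately !Send, so allocations cannot migrate across tasks; the arena is freed wholesale when the task ends. `TaskArena` and its API are made up for illustration, not an existing library.

```rust
use std::cell::RefCell;
use std::marker::PhantomData;

/// A per-task bump arena. The raw-pointer PhantomData makes the type !Send,
/// which is the "non-Send by default" discipline described above.
struct TaskArena {
    chunk: RefCell<Vec<u8>>,
    _not_send: PhantomData<*mut u8>,
}

thread_local! {
    // One arena per task/thread; freed wholesale when the task ends, so this
    // data needs neither tracing nor atomic reference counting.
    static ARENA: TaskArena = TaskArena {
        chunk: RefCell::new(Vec::with_capacity(64 * 1024)),
        _not_send: PhantomData,
    };
}

impl TaskArena {
    /// Bump-allocate `n` bytes and return their offset within the chunk.
    fn alloc(&self, n: usize) -> usize {
        let mut chunk = self.chunk.borrow_mut();
        let offset = chunk.len();
        chunk.resize(offset + n, 0);
        offset
    }
}

fn main() {
    ARENA.with(|a| {
        let off = a.alloc(128);
        println!("allocated 128 task-local bytes at offset {off}");
    });
}
```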
If they use automatic reference counting, it could be almost entirely non-atomic, much like non-atomic reference counting in Rust. If they use tracing GC, they can use task-local heaps for nearly all data. They could also skip scanning most task stacks by keeping a flag for whether a task may hold Send references.
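For concreteness, here is the reference-counting distinction in today's Rust: `Rc` uses plain (non-atomic) counter updates and is !Send, while `Arc` pays for atomic updates so it can cross task boundaries. A language with non-Send as the default could use the cheap form for almost all data.

```rust
use std::rc::Rc;
use std::sync::Arc;

fn main() {
    // Non-atomic refcount: a plain increment; this value cannot leave the task.
    let local = Rc::new(vec![1, 2, 3]);
    let local2 = Rc::clone(&local); // no atomic instruction involved

    // Atomic refcount: only needed for data actually shared across tasks.
    let shared = Arc::new(vec![1, 2, 3]);
    let handle = {
        let shared = Arc::clone(&shared); // atomic increment
        std::thread::spawn(move || shared.len())
    };

    assert_eq!(local2.len(), handle.join().unwrap());
}
```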
Beyond that, Top Byte Ignore is a standard baseline arm64 feature. It can be used to tag pointers with up to 8 bits of arbitrary data ignored during address translation. I think it counts as hardware support for tracing GC and it could go a long way to improving performance.
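A small sketch of the kind of pointer tagging Top Byte Ignore enables, assuming a GC that stashes a few bits of metadata (e.g. mark or age bits) in the top byte of each pointer. On arm64 with TBI enabled the tagged pointer could be dereferenced directly; this portable sketch masks the tag off before dereferencing, and the helper names are hypothetical.

```rust
const TAG_SHIFT: u32 = 56;
const TAG_MASK: usize = 0xff << TAG_SHIFT;

/// Pack an 8-bit tag into the top byte of a pointer's address.
fn tag_ptr<T>(p: *const T, tag: u8) -> usize {
    (p as usize & !TAG_MASK) | ((tag as usize) << TAG_SHIFT)
}

/// Recover the tag and the untagged pointer.
fn untag_ptr<T>(tagged: usize) -> (u8, *const T) {
    let tag = (tagged >> TAG_SHIFT) as u8;
    let ptr = (tagged & !TAG_MASK) as *const T;
    (tag, ptr)
}

fn main() {
    let value = 42u64;
    let tagged = tag_ptr(&value, 0b0000_0011); // e.g. two GC mark bits
    let (tag, ptr) = untag_ptr::<u64>(tagged);
    // With hardware TBI, `tagged` could be used as an address unchanged;
    // here we strip the tag first so the example runs on any 64-bit target.
    assert_eq!(tag, 0b0000_0011);
    assert_eq!(unsafe { *ptr }, 42);
}
```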