The most common complaint about GC I hear isn't the small delta in performance, but the large delta in worst-case latency that makes even soft real-time applications hard to develop.
I thought that was fixed in recent years with more latency-sensitive algorithms (which some modern languages use by default, and which you can opt into in Java)?
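For context, the opt-in is just a launch flag on OpenJDK. Here's a minimal sketch (class name is mine) you can run under ZGC or Shenandoah and have it report which collectors the JVM actually picked:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Launch with a low-pause collector, e.g. (standard HotSpot flags):
//   java -XX:+UseZGC GcReport            // ZGC
//   java -XX:+UseShenandoahGC GcReport   // Shenandoah, where the build includes it
public class GcReport {
    public static void main(String[] args) {
        // Print which collectors the JVM actually selected, plus their
        // cumulative collection counts and times.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: %d collections, %d ms total%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```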
I've never seen them be effective in practice. In the real world it seems like with GC you can *maybe* pick one of memory usage, CPU usage, or performance consistency.
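To make that trade-off concrete, here's a toy allocation load (entirely illustrative; the numbers are arbitrary) you can run with different heap sizes and pause targets and watch one axis improve at the expense of the others:

```java
import java.util.ArrayDeque;

// Toy churn workload for playing with the GC trade-off knobs.
// Example runs (real HotSpot flags, illustrative values):
//   java -Xmx512m -XX:+UseG1GC -XX:MaxGCPauseMillis=10 AllocChurn   // favor consistency
//   java -Xmx512m -XX:+UseG1GC -XX:GCTimeRatio=99 AllocChurn        // favor throughput
//   java -Xmx64m  -XX:+UseG1GC AllocChurn                           // favor footprint
public class AllocChurn {
    public static void main(String[] args) {
        ArrayDeque<byte[]> window = new ArrayDeque<>();
        long worstStallNs = 0;
        long prev = System.nanoTime();
        for (long i = 0; i < 20_000_000L; i++) {
            window.addLast(new byte[512]);                    // steady allocation
            if (window.size() > 10_000) window.removeFirst(); // keep ~5 MB live
            long now = System.nanoTime();
            worstStallNs = Math.max(worstStallNs, now - prev); // crude pause proxy
            prev = now;
        }
        System.out.println("worst observed stall: " + worstStallNs / 1_000_000 + " ms");
    }
}
```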
Some of this is hard to separate out because GC implementations generally come attached to VMs, etc.
Pretty sure Java's GC outperforms Go's on many metrics, but it's hard to tell because of JIT overheads.
OpenJDK design choices are very focused on long-running server applications. Design choices for the Android Runtime are far different. It has an equally modern concurrent compacting GC, with a focus on latency for apps in the foreground and memory usage in the background, not throughput.
Android's JIT compiler is extremely primitive compared to v8 or OpenJDK, but that's intentional, both to preserve battery life and because the focus is on AOT compilation driven by profiles produced by the JIT / interpreter. They distribute those profiles now too.
Android 5 switched to full AOT compilation for everything other than one-time initialization code because JIT compilation is horrible for memory usage and battery life. Android 7 moved to the current hybrid JIT/interpreter/AOT approach, where most code used at runtime is AOT compiled.
Code run with the interpreter / JIT compiler gets recorded in JIT profiles, which then guide AOT compilation. Over time, you end up with AOT-compiled code based on how you use apps. Interpreting the one-time init and cold code makes the hot code faster and reduces memory usage.
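For a sense of what those profiles contain: the human-readable rule format used for baseline profiles (the kind developers can ship and app stores can distribute) looks roughly like this, where the H/S/P prefixes mark hot, startup, and post-startup methods; the class and method names below are made up:

```
HSPLcom/example/app/MainActivity;->onCreate(Landroid/os/Bundle;)V
PLcom/example/app/FeedAdapter;->onBindViewHolder(Lcom/example/app/FeedViewHolder;I)V
Lcom/example/app/FeedAdapter;
```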
JIT compilation in v8 and OpenJDK uses a ton of CPU and a ton of memory. v8 has started focusing more on having a fast interpreter, and they do cache compiled code for reuse later on, but they can't do that aggressively since web sites are so much less predictable than apps.
If you look at an Android app's memory, the GC heap is always in the lower 32 bits of the address space in order to use 32-bit pointers, and apps have a small heap unless they're badly written and need to enable a large one. There's also special concurrent GC handling for bitmaps, etc. Special cases help a lot.
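To make the "small heap unless you enable a large one" part concrete, here's a hypothetical activity (names are mine) that logs the heap budgets the platform reports; the larger number only applies if the app opts in with android:largeHeap="true" in its manifest:

```java
import android.app.Activity;
import android.app.ActivityManager;
import android.os.Bundle;
import android.util.Log;

public class HeapBudgetActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        ActivityManager am = (ActivityManager) getSystemService(ACTIVITY_SERVICE);
        // Per-app Java heap limit in MB for a normal app.
        Log.i("HeapBudget", "normal heap limit: " + am.getMemoryClass() + " MB");
        // Limit after opting into android:largeHeap="true".
        Log.i("HeapBudget", "largeHeap limit: " + am.getLargeMemoryClass() + " MB");
        // What this process actually got.
        Log.i("HeapBudget", "current max heap: "
                + Runtime.getRuntime().maxMemory() / (1024 * 1024) + " MB");
    }
}
```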
Simply knowing which apps are in the foreground vs. foreground services vs. background vs. idle/frozen helps a lot with GC. It can be latency-focused for the foreground, more throughput-focused in the background, and can heavily compact the heaps of idle apps with frozen threads.
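The app-side counterpart of that state awareness is the trim-memory callback. A sketch (class and helper names are hypothetical) of dropping caches when the UI leaves the foreground, so the background and idle-time GC passes have less to keep around:

```java
import android.app.Application;
import android.content.ComponentCallbacks2;

public class MyApp extends Application {
    @Override
    public void onTrimMemory(int level) {
        super.onTrimMemory(level);
        if (level >= ComponentCallbacks2.TRIM_MEMORY_UI_HIDDEN) {
            // The UI just went to the background: evict caches so the heap
            // the runtime later compacts is as small as possible.
            dropInMemoryCaches();
        }
    }

    private void dropInMemoryCaches() {
        // Hypothetical helper: clear bitmap/LRU caches, close pooled resources, etc.
    }
}
```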
I'd expect heap compaction makes GC into a memory-usage win on Android at this point, given how quickly apps are frozen and compacted in the background on Android 12. The runtime does ask malloc to drop cached memory alongside compaction, but that's just not comparable to GC heap compaction.