I've really warmed up to 's suggestion of using template instantiation for hot generic functions and an interpreter for cold generic functions. You often want an interpreter for your language anyway, so it's an elegant way to reduce code bloat.
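A rough Rust sketch of the split being suggested, just to make it concrete. The hot/cold assignment and names are invented, and Rust's `dyn` erasure merely stands in for the "compile once, share one body" side, since Rust has no bytecode fallback today:

```rust
// Rough analogue only: Rust has no "interpret the cold generics" mode, so
// `dyn` type erasure stands in for the shared, non-instantiated path here.
use std::fmt::Display;

// Hot generic path: worth one machine-code instantiation per element type,
// so the optimizer can inline and specialize it.
#[inline]
fn hot_sum<T: Copy + Into<u64>>(xs: &[T]) -> u64 {
    let mut total: u64 = 0;
    for &x in xs {
        let v: u64 = x.into(); // monomorphized per T
        total += v;
    }
    total
}

// Cold path (error reporting, run rarely): a single erased body is shared
// by every caller, so it costs the binary one copy instead of one per type.
#[cold]
fn cold_report(value: &dyn Display) {
    eprintln!("unexpected value: {value}");
}

fn main() {
    let data: Vec<u32> = vec![1, 2, 3];
    let total = hot_sum(&data); // instantiated for u32
    if total > 100 {
        cold_report(&total); // one shared, non-generic body
    }
}
```

The idea is that only the paths worth the instruction-cache cost get their own instantiations; everything else shares one body.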
Fully polymorphic code is pretty large in machine code too, and it's already full of indirect branches. Bytecode has the opportunity to be a lot smaller, and an interpreter could be nearly as fast while using less instruction cache space.
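To illustrate why the interpreted side can stay small, here is a minimal stack-machine interpreter sketch in Rust; the opcode set and encoding are made up for illustration, but the dispatch loop is one shared piece of machine code regardless of how many "programs" it runs:

```rust
// Opcodes and encoding are invented for this sketch; a real design would
// pack these into bytes, but the shape of the dispatch loop is the point.
#[derive(Clone, Copy)]
enum Op {
    Push(i64), // push an immediate onto the value stack
    Add,       // pop two values, push their sum
    Mul,       // pop two values, push their product
    Ret,       // pop the top value and return it
}

// One small interpreter loop is shared by every "program" it runs, instead
// of each generic instantiation getting its own slab of machine code.
fn interpret(code: &[Op]) -> i64 {
    let mut stack: Vec<i64> = Vec::new();
    for &op in code {
        match op {
            Op::Push(v) => stack.push(v),
            Op::Add => {
                let b = stack.pop().expect("stack underflow");
                let a = stack.pop().expect("stack underflow");
                stack.push(a + b);
            }
            Op::Mul => {
                let b = stack.pop().expect("stack underflow");
                let a = stack.pop().expect("stack underflow");
                stack.push(a * b);
            }
            Op::Ret => return stack.pop().expect("stack underflow"),
        }
    }
    0
}

fn main() {
    // (2 + 3) * 4
    let prog = [
        Op::Push(2),
        Op::Push(3),
        Op::Add,
        Op::Push(4),
        Op::Mul,
        Op::Ret,
    ];
    assert_eq!(interpret(&prog), 20);
}
```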
I rely heavily on LTO and dead code elimination in the generic Rust code I write on microcontrollers (<100 kB ROM, <10 kB RAM). I don't think my code would appreciate embedding a full interpreter just because a few cold code paths couldn't be optimized out LOL.
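For context, that reliance on LTO and dead code elimination typically shows up as a few standard Cargo release-profile settings; the values below are a common size-focused starting point, not taken from the poster's project:

```toml
[profile.release]
lto = true           # whole-program view lets unused generic instantiations drop out
codegen-units = 1    # a single codegen unit gives the optimizer the full picture
opt-level = "z"      # optimize for size rather than speed
panic = "abort"      # no unwinding machinery in the binary
strip = true         # drop symbols from the final artifact
```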
Sure, there are definitely places where you don't want to introduce dependencies on a runtime. Across a full-blown OS on a desktop CPU, being able to make better use of memory and cache by sharing code is a big systemic optimization; for a single-purpose controller, not so much.
Cool! I like your idea/think it's worth trying out. Just want an option to back out of embedding a runtime in the cases where "I know what I'm doing" :P.
On modern Android versions, apps start out running under a multi-tier JIT compiler with an interpreter as the baseline. The runtime saves the JIT profile information persistently, and that's used to perform AOT compilation when the device is idle and during background installation of OS updates.
Cold code never ends up being compiled to native code, since it's never identified as hot enough to warrant being JIT-compiled in memory, and the AOT compilation is based on an aggregation of those JIT profiles. The AOT compilation is more aggressive but still skips lots of code.
So it goes bytecode, then JITted native code, and Android decides _some_ of the JITted code warrants further optimizations that wouldn't be practical during JITting?