Conversation

The "ML" really means "multi level". This might make it easier to build compilers with tailored IRs and deeper optimization pipelines, similar to what Rust and Swift have
Quote Tweet
The Google MLIR team is happy to release MLIR Core as open source: a new multi-level IR compiler framework! Check it out: github.com/tensorflow/mlir More information with two talks at EuroLLVM next Monday, stay tuned!
The inner contrarian in me wonders whether language design is due for a NetBurst moment, when we all realize deep pipelines are bad and go back to simpler, single-pass-compilable languages
if we're talkin' strange timelines: ship everything at -O1 and have a JIT infrastructure that can build and drop in the -O3 version using runtime feedback on where dynamic calls land etc. (kinda like .NET NGEN, but starting from a language made for AOT)
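A minimal sketch of that tiered idea, assuming nothing about .NET or any real JIT (the names, the call-count threshold, and the "optimized" function below are all made up): the baseline build dispatches the dynamic call through a patchable entry point, counts how often the site is hit, and swaps in a specialized version once the runtime feedback looks stable.

    // Hypothetical sketch of feedback-driven tier-up; not any real runtime's machinery.
    #include <atomic>
    #include <cstdio>

    using Handler = int (*)(int);

    int generic_handler(int x) { return x * 2; }       // stand-in for the cheap "-O1" code
    int specialized_handler(int x) { return x << 1; }  // stand-in for the "-O3" version

    std::atomic<Handler> current{nullptr};  // patchable entry point for the dynamic call
    std::atomic<int> hits{0};               // runtime feedback: how hot is this call site?

    // Baseline tier: does the work, records feedback, and "recompiles" by swapping the pointer.
    int baseline_tier(int x) {
        if (hits.fetch_add(1) == 1000) {
            // A real system would run the optimizing compiler here, using the recorded
            // profile; this sketch just installs a prebuilt fast version.
            current.store(specialized_handler);
        }
        return generic_handler(x);
    }

    int main() {
        current.store(baseline_tier);
        long sum = 0;
        for (int i = 0; i < 10000; ++i)
            sum += current.load()(i);  // callers always go through the patchable pointer
        std::printf("%ld\n", sum);
    }

A real runtime would also need a deoptimization path back to the baseline code for when the speculation stops holding.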
Android has this kind of compilation stack for code targeting the Android Runtime (ART). Historically they moved from an interpreter, to interpreter + JIT, to near-full AOT compilation (the interpreter still used for one-time init and dynamic code) before arriving at the current model.
The JIT compiler can do some low-level tricks and optimizations that an AOT compiler cannot, so it still has a purpose once code has been AOT compiled. There are also a lot of options for configuring how this works. You can still use near-full AOT compilation or full AOT compilation.
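One flavor of such a trick, sketched loosely (the class names are invented, and real JITs typically guard on the exact class pointer rather than using dynamic_cast): speculative devirtualization, where the compiler inlines the receiver type it has actually observed at runtime and keeps the ordinary virtual call only as a fallback. An AOT compiler without that runtime feedback generally has to leave the indirect call in place.

    // Hypothetical illustration of speculative devirtualization, not ART's actual output.
    #include <cstdio>

    struct Animal { virtual int speak() const { return 0; } virtual ~Animal() = default; };
    struct Dog : Animal { int speak() const override { return 1; } };

    // What the caller effectively looks like after the JIT speculates that the
    // receiver is (almost) always a Dog.
    int speak_speculative(const Animal* a) {
        if (const Dog* d = dynamic_cast<const Dog*>(a)) {
            return d->Dog::speak();  // direct, devirtualized call: inlinable, no vtable load
        }
        return a->speak();           // fallback ("deoptimization"): the normal virtual call
    }

    int main() {
        Dog d;
        Animal other;
        std::printf("%d %d\n", speak_speculative(&d), speak_speculative(&other));  // prints 1 0
    }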
There's partial documentation on this here: source.android.com/devices/tech/d. For example, I use the near-full AOT compilation mode ('speed') without JIT or profiling: github.com/GrapheneOS/pla. Full AOT compilation mode (disabling the heuristics for interpreting cold code) is 'everything'.
They also have some weird optimizations like shared RELRO sections and pre-generating the heaps for libraries, not just the code. I think the way it works is they load them up in a deterministic environment and then write out the heap in a way that's quick to verify on boot.
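If that guess is right, the general shape might look like this toy sketch (not the real boot-image format; the struct and the checksum are invented): run the expensive initialization once in a deterministic setting, dump the resulting bytes plus a checksum, and at boot verify the checksum and adopt the bytes instead of redoing the work.

    // Toy sketch of "pre-generate a heap, verify it cheaply at boot"; purely illustrative.
    #include <cstdint>
    #include <cstdio>
    #include <cstring>
    #include <vector>

    struct PrebuiltHeap { uint32_t table[256]; };  // stand-in for pre-initialized runtime data

    // Toy FNV-1a checksum; a real system would use a stronger hash or a signature.
    uint32_t checksum(const void* p, std::size_t n) {
        uint32_t h = 2166136261u;
        for (std::size_t i = 0; i < n; ++i)
            h = (h ^ static_cast<const uint8_t*>(p)[i]) * 16777619u;
        return h;
    }

    int main() {
        // "Build time": deterministic (and possibly expensive) initialization.
        PrebuiltHeap built{};
        for (uint32_t i = 0; i < 256; ++i) built.table[i] = i * i;
        std::vector<uint8_t> image(sizeof built);
        std::memcpy(image.data(), &built, sizeof built);
        uint32_t expected = checksum(image.data(), image.size());

        // "Boot time": cheap verification, then reuse the bytes instead of re-initializing.
        if (checksum(image.data(), image.size()) != expected) return 1;  // fall back to slow init
        PrebuiltHeap booted{};
        std::memcpy(&booted, image.data(), image.size());
        std::printf("%u\n", booted.table[16]);  // 256, read straight from the prebuilt image
    }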
That sounds a lot like what Darwin does with the shared cache, to prelink the system dylibs into one image, pre-bind ObjC and Swift runtime data structures, etc., though we're starting from already native code