It doesn't generate any ninja files when you start most builds, because it's very rare that you need to do a clean build. It's something done for production releases, but not so much by developers on their workstations, especially now that incremental builds work so much better.
Building a production release of Chromium, where LTO is enabled for CFI, uses far more memory for the linking step than anything in AOSP. AOSP has far more than a single binary to link, though, so if you choose to use 8 jobs, you'll sometimes end up linking 8 binaries at once.
Which build system would go out of its way to serialize the linking phase? I'm not sure what that has to do with the build system chosen for Android. If you only have 8GB of memory, you're simply not going to be able to run 8 parallel jobs for builds using LTO. That's not AOSP-specific.
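One middle ground between fully serializing the link phase and running every link at once is ninja's job pools. This is a hand-written illustrative fragment, not AOSP's or Chromium's actual build rules; the pool and rule names are made up:

```ninja
# Cap concurrent LTO link jobs at 2 even when ninja runs with -j8,
# so peak memory stays bounded while compile jobs stay fully parallel.
pool lto_link_pool
  depth = 2

rule link_lto
  command = clang++ -flto=thin $in -o $out
  description = LINK $out
  pool = lto_link_pool
```

Any build step assigned to the pool competes for its `depth` slots instead of the global `-j` count, which is how memory-hungry steps can be throttled without slowing the rest of the build.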
It takes me 40 seconds to do an incremental build of AOSP with no changes, which is still terrible, but at least 10x faster than it used to be before ninja. I save a lot of time having separate devices for testing signed production builds and test key signed development builds.
The current bottleneck is largely the incomplete transition away from GNU make. There's still a huge amount of build logic in makefiles, which has to be converted by kati into ninja files; that conversion takes a long time and generates far worse ninja files than blueprint / soong does.
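For contrast, modules already migrated to soong are described in declarative Android.bp blueprint files, which soong turns into ninja files directly, with no kati conversion step. A minimal sketch of one module (the module name and sources are hypothetical):

```
// Hypothetical module in an Android.bp file. Soong parses these
// declarative blueprints and emits ninja files directly, skipping
// the slow makefile-to-ninja translation that kati performs.
cc_binary {
    name: "hello_world",
    srcs: ["hello_world.cpp"],
    cflags: ["-Wall"],
}
```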
Even the old system was much more usable for the hardening work I do than typical build systems. It was a unified build system for the whole OS with a declarative code style, and its centralized logic meant I could edit it and reliably make changes I couldn't make elsewhere.