Can also save tons of space with `cp --reflink` if your filesystem supports it (XFS and Btrfs, but not ext4). It's essentially fork(...) for files: block-based copy-on-write at the destination. You can copy an identical file over another to deduplicate them without sharing writes.
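A rough sketch of what that looks like in practice (filenames are hypothetical; `--reflink=always` fails instead of silently falling back to a full copy):

```
# Instant copy-on-write copy: both files share extents on disk until
# one of them is written to, at which point only changed blocks diverge.
cp --reflink=always big-image.img big-image.img.bak

# Deduplicate: overwrite an identical file with a reflink copy so the
# two files share storage again (contents must already be identical).
cp --reflink=always original.img duplicate.img
```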
I moved from ext4 to XFS for better parallel NVMe performance and started using reflinks a lot. XFS doesn't have transparent compression support yet, but I found compression hurt build performance too much anyway, and most of my space is used by already-compressed Git object/pack data.
actually reducing the underlying disk space used by git wasn't implemented but would be possible by lazily fetching git history and otherwise maintaining a shallow clone. it would be a very large project but it would be fun to work on
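a minimal sketch of the building block that would use, since git's partial clones already do lazy history fetching (repo URL is a placeholder):

```
# blobless clone: commits and trees come down up front, file contents
# are fetched on demand when a checkout or diff actually needs them
git clone --filter=blob:none https://example.com/big-repo.git

# treeless clone goes further (trees fetched on demand too), trading
# slower history operations for even less local disk usage
git clone --filter=tree:0 https://example.com/big-repo.git
```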
XFS delayed allocation + delayed logging combined with 128GB of memory and a fast NVMe SSD means that barely anything ends up being I/O bound. You end up with huge sequential writes as build outputs are flushed out, and reads mostly hit the page cache. SSD stats show a 10:1 write:read ratio.
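For reference, that ratio can be read straight from the kernel's per-device I/O counters (device name is an assumption):

```
# Fields 3 and 7 of /sys/block/<dev>/stat are cumulative sectors read
# and written since boot; these counters always use 512-byte sectors.
awk '{ printf "read: %d MiB  written: %d MiB\n", $3*512/2^20, $7*512/2^20 }' \
    /sys/block/nvme0n1/stat
```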
Generating delta updates for GrapheneOS has parts that are extremely I/O bound unless it's done in tmpfs, and 128GB of memory is not nearly enough without using fewer jobs than CPU cores. It'd already go OOM even without tmpfs at 1 job per thread (32) instead of 1 job per core (16).
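A minimal tmpfs setup for that kind of scratch space (mount point and size cap are assumptions; tmpfs pages come out of RAM and swap, which is exactly why the job count matters):

```
# Back the delta-generation scratch directory with RAM instead of the SSD.
# size= is a cap, not a reservation: pages are only allocated on demand.
sudo mount -t tmpfs -o size=96G tmpfs /mnt/scratch
```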
this is also where the use case for big corps vs individual devs comes into play; at twitter we offloaded stuff onto remote process executions via the build tool, but that's not available to most people + it requires rewriting the entire build for a remote-enabled build tool
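the tweet doesn't name the tool, but as one concrete example this is roughly what pointing Bazel at a remote execution cluster looks like (endpoints are placeholders):

```
# .bazelrc: run actions on a remote execution service and cache results
# keyed by the hashes of their inputs, instead of running them locally.
build --remote_executor=grpc://remote-exec.example.com:8980
build --remote_cache=grpc://remote-cache.example.com:8980
```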
I need to do official builds, signing and delta generation locally for security and trust reasons so that's inherently bottlenecked by my local compute resources.
I wanted to build a 64-core Zen 3 Threadripper machine to help with this, but Zen 3 Threadripper was never released.
oooh ok yeah hadn't considered signing, can't see a workaround for that. much sadness re threadripper
The official builds themselves also need to be done locally, not just the signing part of it.
Development builds tend to take 30 seconds to a couple minutes due to ninja. Release builds are 45 minutes on Ryzen 5950X with PBO2 and 90 minutes on the aging overclocked i7-6950X.
Signing takes longer than you'd expect because it has to re-sign all the APKs/APEXes, regenerate filesystem images, generate dm-verity metadata, generate vbmeta and sign it, generate and sign OTA updates, etc., but it can all be done in parallel at the end. Then generate deltas.
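For the curious, this is roughly that flow expressed with the stock AOSP release tools; the exact GrapheneOS scripts and key paths are assumptions:

```
# Re-sign every APK/APEX with release keys and regenerate the filesystem
# images plus dm-verity/vbmeta metadata inside the target-files package.
sign_target_files_apks -o -d ~/keys target-files.zip signed-target-files.zip

# Full OTA package from the signed artifacts.
ota_from_target_files -k ~/keys/releasekey signed-target-files.zip ota.zip

# Incremental (delta) OTA against the previous release's target files.
ota_from_target_files -k ~/keys/releasekey -i previous-target-files.zip \
    signed-target-files.zip ota-delta.zip
```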
git clone --depth=1 is my default when checking out others' projects
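and a shallow clone isn't a dead end; history can be deepened in place later if needed (URL is a placeholder):

```
git clone --depth=1 https://example.com/project.git
cd project
git fetch --deepen=100   # pull in 100 more commits of history
git fetch --unshallow    # or convert to a full clone
```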
yes, i was thinking that the remote process executions can have inputs and outputs cryptographically hashed, but that's no guarantee they haven't been tampered with unless you do the builds locally and confirm the hashes match, which means doing it all locally anyway
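in its simplest form, the verification step being described is just a bit-for-bit comparison, which only works if the build is reproducible (directory names are hypothetical):

```
# hash the remotely built artifacts, rebuild locally, then verify the
# local outputs match the remote ones bit for bit
(cd remote-out && sha256sum * > ../remote.sha256)
(cd local-out && sha256sum -c ../remote.sha256)
```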


