I “learned” cryptography back in the 1990s from Schneier’s Applied Cryptography, and what I remember most — all I remember really — from the cipher modes chapter was “some cipher modes give you error recovery and resynchronize after errors”.
Sounds like a good thing! Like, you don’t want a single bit error in your ciphertext to totally blow up your message?
Turns out, though, no, that’s exactly what you want.
The property we’re looking for, where any error (1 bit, 10 bits, 1000 bits) blows up decryption, is called “authenticated encryption”. The original bits you produced from encryption _and only those bits_ can be decrypted with the right key.
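Here's a minimal sketch of that property, using AES-GCM from Python's `cryptography` package as the AEAD (the library and cipher choice are mine, not something from the thread): flip a single bit of the ciphertext and decryption refuses to hand you anything back.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.exceptions import InvalidTag

key = AESGCM.generate_key(bit_length=128)
aead = AESGCM(key)
nonce = os.urandom(12)  # never reuse a nonce under the same key

ciphertext = aead.encrypt(nonce, b"attack at dawn", None)

# Corrupt a single bit of the ciphertext.
tampered = bytearray(ciphertext)
tampered[3] ^= 0x01

try:
    aead.decrypt(nonce, bytes(tampered), None)
except InvalidTag:
    # The whole message is rejected: no partial, "resynchronized" plaintext.
    print("decryption failed, as it should")
```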
There are a bunch of theatrical examples of why this is the case. The best known is Vaudenay’s CBC Padding Oracle, where attackers could decrypt whole messages by inducing errors in CBC-mode ciphertext blocks and watching how the decryptor responded to repeated trial decryptions. Broke _TONS_ of real-world systems.
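And here's the "error recovery" behavior the padding oracle leans on, sketched as a toy with the same `cryptography` package (again, my example, not from the thread): in unauthenticated CBC, flipping one bit in a ciphertext block garbles that block, flips the same bit in the next block's plaintext, and then decryption happily resynchronizes and carries on.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)
iv = os.urandom(16)
plaintext = b"block one.......block two.......block three....."  # 3 x 16 bytes, no padding needed

enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
ciphertext = enc.update(plaintext) + enc.finalize()

# Flip one bit in the first ciphertext block.
tampered = bytearray(ciphertext)
tampered[0] ^= 0x01

dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
recovered = dec.update(bytes(tampered)) + dec.finalize()

print(recovered[:16])    # first block decrypts to garbage
print(recovered[16:32])  # second block has exactly one bit flipped: b"clock two......."
print(recovered[32:])    # third block is untouched; the mode "recovered" from the error
```

No error is reported anywhere, which is exactly the malleability a padding oracle turns into full plaintext recovery.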
A point of clarification you might want to add to the thread is that it's ok to apply error recovery (e.g. ECC) on top of the ciphertext, as long as you're properly authenticating the ciphertext and the decryption process itself does not tolerate bit errors.
If a block fails to verify, the system can try to repair it with the error-correction data and then verify it again. That avoids soft-bricking the device due to storage corruption.
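A sketch of that verify-then-repair-then-verify flow, with HMAC standing in for whatever the real integrity check is and a hypothetical `fec_repair()` helper for the error-correction step (both are placeholders I've made up for illustration):

```python
import hmac
import hashlib

def fec_repair(block: bytes, parity: bytes) -> bytes:
    """Hypothetical forward-error-correction repair (e.g. Reed-Solomon over the block)."""
    raise NotImplementedError

def read_verified_block(block: bytes, parity: bytes, expected_mac: bytes, key: bytes) -> bytes:
    def mac_ok(data: bytes) -> bool:
        tag = hmac.new(key, data, hashlib.sha256).digest()
        return hmac.compare_digest(tag, expected_mac)

    # First, check the block exactly as it was read from storage.
    if mac_ok(block):
        return block

    # Verification failed: attempt repair with the ECC data, then verify again.
    repaired = fec_repair(block, parity)
    if mac_ok(repaired):
        return repaired

    # Still failing: refuse to use the block rather than accept corrupted data.
    raise IOError("block corrupt beyond repair")
```

The key point is that error correction only ever runs on the ciphertext/stored data, and nothing is accepted until it passes the authentication check.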
It only supports this for the high-level images, but those make up nearly all of the OS, so chances are that's where any corruption happens.