I just had a brief exchange on Reddit about DNA damage and failure modes, and it got me thinking about how most people don't have a good sense for how robust software is in the face of random damage.
Binaries are different. You can do an impressive amount of damage to a binary and still have it mostly or fully work. Try it: grab a binary (the more complex the better), flip some random bits (avoid the file headers, focus on the code), and see if you can spot what breaks.
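A minimal sketch of that experiment in Python. The flip count and the fixed-offset header guard are arbitrary choices for illustration; a more careful version would parse the ELF/PE headers and target only the code sections.

```python
import random

def flip_random_bits(path_in, path_out, n_flips=16, skip_bytes=4096, seed=None):
    """Copy a binary, flipping n_flips random single bits.

    skip_bytes crudely avoids the file headers by leaving the start
    of the file untouched (4096 is an arbitrary guess, not derived
    from any real header size).
    """
    rng = random.Random(seed)
    data = bytearray(open(path_in, "rb").read())
    for _ in range(n_flips):
        offset = rng.randrange(skip_bytes, len(data))
        data[offset] ^= 1 << rng.randrange(8)  # flip one random bit
    with open(path_out, "wb") as f:
        f.write(data)
```

Run the damaged copy and see how far it gets before anything visibly misbehaves.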
Even when the hardware is totally broken things can still work. I was messing with Dolphin, trying to patch the JIT to better work with PIE. I managed to break the sign extension instruction, IIRC (or sign extension for some cases, I forget).
You'd think that having such a basic feature broken would cause most software to crash and burn, but actually the game I was testing with, Wind Waker, booted fine. Ish.
It got to the title screen, but instead of loading the title screen cinematic behind the logo, it loaded the first map, with a playable Link. You could move around, but the Y axis of the stick was "folded" so you could only ever move backwards, not forwards.
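One plausible mechanism for a "folded" axis (my speculation, not an analysis of Dolphin's actual JIT bug): if a signed 8-bit stick value gets zero-extended instead of sign-extended, every deflection in the negative direction turns into a large positive value, so both directions of the stick read as the same direction.

```python
def sign_extend_8(v):
    """Correct sign extension of a byte (0..255) to -128..127."""
    return v - 256 if v & 0x80 else v

def zero_extend_8(v):
    """The buggy path: the sign bit is ignored, value stays unsigned."""
    return v

# A signed stick axis is stored as a byte: 0x60 is +96 one way,
# 0xA0 is -96 the other way.
for raw in (0x60, 0xA0):
    print(hex(raw), sign_extend_8(raw), zero_extend_8(raw))
# With the bug, 0xA0 reads as +160 instead of -96: both deflections
# come out positive, and the axis is "folded" onto one direction.
```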
So yeah, software damage is far from guaranteed to segfault. This is how you get really odd "undebuggable" problems from memory corruption. And how Intel gets away with still crippling desktop CPUs by disabling ECC RAM support.
Your desktop probably flips a few bits in RAM weekly, but you just don't notice. RAM sizes are too huge for memory to be close to 100% reliable. It just isn't possible without error correction support.
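A back-of-envelope check on the "few bits weekly" claim. Published DRAM soft-error rates span several orders of magnitude, so the FIT figure below (failures per billion device-hours, per Mbit) is an illustrative assumption I picked, not a measured value.

```python
# Assumed soft-error rate: purely illustrative, within the wide
# range of published DRAM measurements.
FIT_PER_MBIT = 100
RAM_GIB = 16                # a typical desktop

mbits = RAM_GIB * 1024 * 8              # 16 GiB = 131072 Mbit
errors_per_hour = mbits * FIT_PER_MBIT / 1e9
errors_per_week = errors_per_hour * 24 * 7
print(f"{errors_per_week:.1f} expected bit flips per week")  # ~2.2
```

Under that assumed rate, a 16 GiB machine sees a couple of flips a week; with the higher rates some field studies report, it would be far more.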