One way to catch these errors is to use a protocol fuzzer: something that sends messages in random orders and tries to jam things up. But those are hard to write ... the messages need to be somewhat valid to get that far, and the combinatorial space to explore can be HUGE.
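To make that concrete, here's a rough sketch of a protocol fuzzer's core loop. It is not s2n code and every name in it (the message list, send_message) is hypothetical: build a handful of plausible handshake-style messages so basic framing checks pass, then replay them in a shuffled order and see whether the peer's state machine lets an out-of-order transition slip through.

```c
/* Hypothetical sketch of a protocol fuzzer's core loop -- not s2n code.
 * The message list and send_message() are stand-ins for real framing and I/O. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

typedef struct {
    const char *name;
    unsigned char type;   /* handshake message type byte */
} message_t;

/* Plausible-looking messages, so we get past basic parsing and actually
 * reach the state machine logic we want to stress. */
static message_t messages[] = {
    { "ClientHello", 0x01 },
    { "ServerHello", 0x02 },
    { "Certificate", 0x0b },
    { "Finished",    0x14 },
};

static void send_message(const message_t *m)
{
    /* Stand-in for framing the message and writing it to a connection. */
    printf("sending %s (type 0x%02x)\n", m->name, m->type);
}

int main(void)
{
    size_t n = sizeof(messages) / sizeof(messages[0]);
    srand((unsigned)time(NULL));

    /* Fisher-Yates shuffle: the interesting bugs are out-of-order
     * transitions that the state machine should reject but might not. */
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = (size_t)rand() % (i + 1);
        message_t tmp = messages[i];
        messages[i] = messages[j];
        messages[j] = tmp;
    }

    for (size_t i = 0; i < n; i++) {
        send_message(&messages[i]);
    }
    return 0;
}
```

Even this toy version shows why the space blows up: every extra message type, and every point where you can inject, drop, or duplicate a message, multiplies the orderings you'd need to cover.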
Now my favorite part ... how do we know that the verification itself actually works? We run it on every build, but what does that say?
We *also* "verify the verifier" by forcing some negative test cases. We actually patch the code with known errors: https://github.com/awslabs/s2n/tree/master/tests/saw/failure_tests and check that the formal verification fails! This is super important for checking formal verification btw, and often overlooked.
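For flavor, here's the *kind* of one-line "known error" such a negative test might patch in; the function and the bug below are made up for illustration, the real patches live in the failure_tests directory linked above. The idea: apply the patch, re-run the proof, and expect it to fail. If the proof still passes over code we know is wrong, it isn't checking what we think it is.

```c
/* Illustration only: a hypothetical "known error" of the sort a negative
 * test might patch in before re-running the formal verification.
 * Neither function is taken from s2n. */
#include <stddef.h>
#include <stdio.h>

/* Original: compare two equal-length MACs, accumulating the difference. */
int mac_equal(const unsigned char *a, const unsigned char *b, size_t len)
{
    unsigned char diff = 0;
    for (size_t i = 0; i < len; i++) {
        diff |= a[i] ^ b[i];
    }
    return diff == 0;
}

/* Patched-in bug: the loop skips the last byte, so MACs that differ only
 * there compare as equal. A proof of equivalence against the spec should
 * now fail, and the negative test asserts that it does. */
int mac_equal_patched(const unsigned char *a, const unsigned char *b, size_t len)
{
    unsigned char diff = 0;
    for (size_t i = 0; i + 1 < len; i++) {
        diff |= a[i] ^ b[i];
    }
    return diff == 0;
}

int main(void)
{
    unsigned char good[4] = { 1, 2, 3, 4 };
    unsigned char bad[4]  = { 1, 2, 3, 9 };   /* differs only in the last byte */

    /* correct version rejects the tampered MAC, patched version accepts it */
    printf("correct: %d, patched: %d\n",
           mac_equal(good, bad, 4), mac_equal_patched(good, bad, 4));
    return 0;
}
```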
We also do fuzz tests (https://github.com/awslabs/s2n/tree/master/tests/fuzz) and we have integration tests with other implementations, to check for interop.
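If you haven't seen one, a fuzz harness is pleasantly small. Here's a minimal libFuzzer-style sketch; parse_record() is a hypothetical stand-in for the code under test, while the real harnesses in tests/fuzz drive actual s2n entry points.

```c
/* Minimal libFuzzer-style harness sketch (build with
 * clang -fsanitize=fuzzer,address). parse_record() is a hypothetical
 * stand-in for the code under test, not an s2n function. */
#include <stdint.h>
#include <stddef.h>

/* Toy parser: expects a 1-byte type followed by a 2-byte big-endian length
 * that must match the remaining payload. Returns 0 on success, -1 otherwise. */
static int parse_record(const uint8_t *buf, size_t len)
{
    if (len < 3) {
        return -1;
    }
    size_t body_len = ((size_t)buf[1] << 8) | buf[2];
    if (body_len != len - 3) {
        return -1;
    }
    return 0;
}

int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
    /* The fuzzer mutates `data`; we only care that the parser never crashes,
     * overreads, or trips the sanitizers, whatever bytes it's fed. */
    (void)parse_record(data, size);
    return 0;
}
```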
Our goal with tooling is to ensure that we have failsafes beyond code review. For these state-machine-type bugs, it actually takes quite a lot! There's a lot more code verifying it than implementing it. It's not surprising that these issues crop up in real-world software.
Anyway, that's it unless you want to AMA. I'll just ask @threadreaderapp to please unroll this thread!