TIL that some fuzzers use code coverage metrics to drive their strategies. That's really clever! If some input can hit a line that was not previously executed, chances are that variations on it will hit more lines that haven't been covered yet.
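A minimal sketch of that feedback loop (toy hand-rolled instrumentation and a tiny target function, nothing like a real fuzzer such as AFL): mutate an input, run it, and keep the mutant in the corpus only if it touched a branch nothing else has reached yet.

```rust
use std::collections::HashSet;

// Toy target: records which branch IDs it hits into `cov`.
// Real fuzzers get this from compiler instrumentation instead.
fn target(input: &[u8], cov: &mut HashSet<u32>) {
    cov.insert(0);
    if input.first() == Some(&b'F') {
        cov.insert(1);
        if input.get(1) == Some(&b'U') {
            cov.insert(2);
            if input.get(2) == Some(&b'Z') {
                cov.insert(3);
            }
        }
    }
}

// Keep a mutated input only if it exercises a branch we haven't seen before.
fn fuzz(seed: Vec<u8>, rounds: usize) -> HashSet<u32> {
    let mut global: HashSet<u32> = HashSet::new();
    let mut corpus = vec![seed];
    // Cheap deterministic pseudo-randomness (an LCG) so the sketch
    // needs no external crates.
    let mut rng: u32 = 0x9e37_79b9;
    for _ in 0..rounds {
        rng = rng.wrapping_mul(1_664_525).wrapping_add(1_013_904_223);
        // Mutation: overwrite one byte of one corpus entry.
        let mut cand = corpus[(rng as usize) % corpus.len()].clone();
        let pos = ((rng >> 8) as usize) % cand.len();
        cand[pos] = (rng >> 16) as u8;
        let mut cov = HashSet::new();
        target(&cand, &mut cov);
        if cov.iter().any(|b| !global.contains(b)) {
            global.extend(cov);
            corpus.push(cand); // new coverage: keep the input for further mutation
        }
    }
    global
}

fn main() {
    let covered = fuzz(b"AAA".to_vec(), 50_000);
    println!("branches covered: {}", covered.len());
}
```

Random mutation alone would almost never stumble onto "FUZ"; the coverage feedback lets the fuzzer reach it incrementally, one byte at a time.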
-
Gahh, Mutagen is so cool in this regard. It randomly mutates source code operations and checks whether the tests still pass; if they do, that points to missing tests. It can do this because it operates on ASTs.
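The idea in miniature (a hand-written mutant pair, not Mutagen's actual machinery, which injects the mutations automatically at the AST level): flip one operator in a function, and if the test suite still passes against the mutant, the mutant "survives", which signals a gap in the tests.

```rust
fn discount_original(total: u32) -> u32 {
    if total > 100 { total - 10 } else { total }
}

// Mutant: `>` flipped to `>=`, the kind of single-operation tweak a
// mutation testing tool would generate from the AST.
fn discount_mutant(total: u32) -> u32 {
    if total >= 100 { total - 10 } else { total }
}

// A weak suite that never probes the boundary value.
fn weak_suite(f: fn(u32) -> u32) -> bool {
    f(50) == 50 && f(200) == 190
}

// A stronger suite that checks the boundary, total == 100, exactly.
fn strong_suite(f: fn(u32) -> u32) -> bool {
    weak_suite(f) && f(100) == 100
}

fn main() {
    assert!(weak_suite(discount_original));
    assert!(weak_suite(discount_mutant));    // mutant survives: missing test
    assert!(strong_suite(discount_original));
    assert!(!strong_suite(discount_mutant)); // stronger suite kills it
    println!("mutant killed by the boundary test");
}
```

A surviving mutant doesn't prove the code is wrong, only that no test would notice if it were.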
-
Riffing here: could see some fuzzer come along for, say, HTML parsing in Rust. It could generate HTML test cases from an AST so the inputs are always valid tokens. But during execution it could instrument the Rust source to try and cover as many branches as possible.
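The generation half could look something like this sketch (a made-up two-variant `Node` AST, not any real HTML fuzzer's types): because inputs are rendered from a tree, every tag is automatically matched, so the fuzzer spends its time in the parser's deeper branches rather than bouncing off tokenizer errors.

```rust
// Minimal HTML AST. Rendering from it guarantees structurally valid
// markup: every open tag gets a matching close tag.
enum Node {
    Text(String),
    Element { tag: &'static str, children: Vec<Node> },
}

fn render(node: &Node, out: &mut String) {
    match node {
        Node::Text(t) => out.push_str(t),
        Node::Element { tag, children } => {
            out.push('<'); out.push_str(tag); out.push('>');
            for c in children { render(c, out); }
            out.push_str("</"); out.push_str(tag); out.push('>');
        }
    }
}

// Deterministic "generator": derive tag choice, fan-out, and nesting
// depth from a seed. A real fuzzer would mutate these decisions.
fn generate(seed: u64, depth: u32) -> Node {
    const TAGS: [&str; 4] = ["div", "p", "span", "em"];
    if depth == 0 {
        return Node::Text(format!("t{}", seed % 10));
    }
    let tag = TAGS[(seed % 4) as usize];
    let kids = (1 + seed % 3) as usize;
    let children: Vec<Node> = (0..kids)
        .map(|i| generate(seed.wrapping_mul(31).wrapping_add(i as u64), depth - 1))
        .collect();
    Node::Element { tag, children }
}

fn main() {
    let mut html = String::new();
    render(&generate(42, 3), &mut html);
    println!("{}", html);
}
```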
-
Also what if we put these approaches together? What if we could use coverage information to generate inputs that haven't yet been covered by existing unit tests? Rustc has lots of "valid weird input" tests; having tools that can help expand the unit test corpus would be great.
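A sketch of what that combination might mean in practice (the `parse` stub and its branch IDs are entirely made up for illustration): first record the coverage baseline from the existing unit tests, then keep only the generated inputs that reach branches the tests missed. Each survivor is a candidate new unit test.

```rust
use std::collections::HashSet;

// Parser stub that reports which branch IDs an input touches.
// In reality this would come from coverage instrumentation.
fn parse(input: &str) -> HashSet<u32> {
    let mut cov = HashSet::new();
    cov.insert(0);
    if input.starts_with('<') { cov.insert(1); }
    if input.contains("</") { cov.insert(2); }
    if input.contains('&') { cov.insert(3); } // entity-handling path
    cov
}

fn main() {
    // Coverage already exercised by the existing unit tests.
    let baseline: HashSet<u32> = ["<p>hi</p>", "plain text"]
        .iter()
        .flat_map(|t| parse(t))
        .collect();

    // Generated candidates: keep only those that reach uncovered branches.
    let candidates = ["<div>", "a &amp; b", "<p></p>"];
    let new_tests: Vec<&str> = candidates
        .iter()
        .copied()
        .filter(|c| parse(c).iter().any(|b| !baseline.contains(b)))
        .collect();

    // Only the entity input adds coverage the unit tests lack.
    println!("{:?}", new_tests);
}
```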
-
Replying to @yoshuawuyts
Have you seen https://lcamtuf.blogspot.com/2014/11/pulling-jpegs-out-of-thin-air.html?m=1 ? It talks about pretty much this approach... maybe a thin wrapper around the parser could auto-generate all these test cases?
-
I hadn't yet, thanks for sharing!