Corollary: An exponential increase in the time budget available to your favourite fuzzer yields a linear increase in coverage achieved (or # bugs found) given a fixed # cores. Maybe less. Not entirely unrelated: https://mboehme.github.io/paper/TSE15.pdf
-
Alright. Let the probability that a random input exposes a bug be θ. Then the probability that the bug is revealed after n inputs is 1-(1-θ)^n, and 1-(1-θ)^(nx) for a fuzzer with x times more cores. The factor improvement is ((1-θ)^n - (1-θ)^(nx)) / (1-θ)^n. For θ=10^-4 and n=10^3, pic.twitter.com/uboLmqz7L1
-
Now it gets REALLY interesting. If the probability θ is *really* low relative to the available time budget (i.e., the practical case), putting in 10x more resources makes perfect sense --- but only up until a certain point when factor improvement plateaus. For θ=10^-8 and n=10^3, pic.twitter.com/OkSlVkLCIz
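The arithmetic in the two tweets above can be reproduced with a short script (a sketch; the function names are mine, the formula is the one from the thread):

```python
def p_bug(theta, n):
    """Probability that at least one of n random inputs exposes the bug."""
    return 1 - (1 - theta) ** n

def factor_improvement(theta, n, x):
    """Relative gain of running x times more cores for the same wall-clock
    budget: ((1-θ)^n - (1-θ)^(nx)) / (1-θ)^n, as stated in the thread."""
    return ((1 - theta) ** n - (1 - theta) ** (n * x)) / (1 - theta) ** n

n = 10 ** 3
for theta in (1e-4, 1e-8):
    for x in (10, 100, 1_000, 100_000):
        print(f"theta={theta:g} x={x:>6}: {factor_improvement(theta, n, x):.6f}")
```

For θ=10^-4 the gain already saturates near 1 around x=100; for θ=10^-8 even x=1000 yields a gain of only about 0.01, and saturation kicks in several orders of magnitude later, matching the "plateaus after a certain point" observation.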
-
New conversation -
-
-
For LibFuzzer on FTS, an 𝗲𝘅𝗽𝗼𝗻𝗲𝗻𝘁𝗶𝗮𝗹 increase in # cores gives a 𝗹𝗶𝗻𝗲𝗮𝗿 increase in # features covered that are not covered by a same-length single-core campaign. Expecting the same for # bugs. Thoughts? Linking here for completeness: https://twitter.com/mboehme_/status/1220908919079358466
-
-
-
I think you're on the right path. Sounds like we need tuned fuzzers with prior experience to avoid the raw scaling factor.
-
Tuned fuzzers?
-
New conversation -
-
-
Based on your experience, does it depend on the type of fuzzer used, e.g. coverage vs grammar-based?
-
A grammar-based fuzzer samples from a restricted input space, i.e., it does not generate inputs that do not adhere to the grammar. A coverage-based greybox fuzzer is generally more efficient than a blackbox fuzzer but (with a shared queue) maybe less scalable (in # cores).
-
New conversation -
-
-
That holds if the time interval is continuous and IPC is instantaneous. Spinning up cores and handling and deduping the initial burst of thousands of coverage entries (× number of cores) is a real infrastructure problem in fuzzing. Without enough time it's not an issue, but then plateaus are.
-
But if I'm _not_ nitpicking: yeah, I pretty much agree. Coverage is pretty much always logarithmic, and I always use a log-scale X (time) axis when plotting it. Anything else just looks like a vertical line and a flat top.
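The "coverage is logarithmic" intuition can be illustrated with a toy model (my assumption, not from the thread): if branch i is hit by a random input with probability 2^-i, rare branches need exponentially more inputs, so expected coverage grows roughly linearly in log(t):

```python
# Toy model (hypothetical): geometrically decaying branch hit probabilities.
def expected_coverage(t, num_branches=40):
    """Expected number of branches hit at least once after t random inputs,
    where branch i is hit with probability 2^-i."""
    return sum(1 - (1 - 2.0 ** -i) ** t for i in range(1, num_branches + 1))

for t in (10, 100, 1_000, 10_000):
    print(f"t={t:>6}: expected coverage ~ {expected_coverage(t):.2f}")
```

Each 10x increase in t adds roughly log2(10) ≈ 3.3 branches, so on a log-scale time axis the curve is a straight line; on a linear axis it looks like exactly the "vertical line and flat top" described above.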
-