The report we got contained a "strange" list: customer IPs running unknown software, load balancers in FIPS mode, load balancers running on old hardware. There didn't seem to be much in common.
At this point, a lot of factors have to be combined: the TLS software would have to be coded in an uncommon way, using OpenSSL, negotiating older cipher suites, on older hardware, with clients that send 0-byte records and can be made to repeat the same data over and over, with an active MITM.
But that makes it more interesting! How do we find and prevent even these kinds of rarefied cases? Automation, like the scanning tool, is clearly critical - but can we do more at the point of code?
One thing I'm grateful for is that in s2n we kill connections on any error, and we do it in a way where s2n will completely refuse to interact with the connection after the error has happened. Just with a closed flag ... https://github.com/awslabs/s2n/blob/master/tls/s2n_connection.c#L1031 …
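For illustration, here's a minimal sketch of that pattern in C. The struct and function names (conn, conn_kill) are hypothetical, not s2n's actual internals: the idea is just that every error path sets a closed flag so nothing else will touch the connection afterwards.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical connection state, for illustration only. */
struct conn {
    int fd;
    uint8_t closed;   /* once set, the connection is dead to us */
    /* ... keys, buffers, sequence numbers ... */
};

/* Called on *any* error: mark the connection closed so that every
 * later entry point refuses to operate on it. Sensitive state could
 * also be wiped here. */
static void conn_kill(struct conn *c)
{
    c->closed = 1;
}
```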
s2n uses OpenSSL's libcrypto for the underlying cryptography, and the same issue in that code /could/ have caused impact within s2n were it not for that practice. Basically this check .... https://github.com/awslabs/s2n/blob/master/tls/s2n_send.c#L94 …
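The guard it refers to would look roughly like this in the send path (again a sketch built on the hypothetical conn/conn_kill names above, not the literal s2n_send() code): bail out before encrypting or writing anything if the connection has already been killed.

```c
/* Sketch of a send-path guard, assuming the closed-flag pattern above. */
static int conn_send(struct conn *c, const uint8_t *buf, size_t len)
{
    /* Refuse to interact with a connection after any prior error. */
    if (c->closed) {
        return -1;   /* caller sees a hard failure, no bytes sent */
    }

    /* ... encrypt and write the record, calling conn_kill(c) on any
     *     failure along the way ... */
    (void)buf;
    (void)len;
    return 0;
}
```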
Of course the impact still would have been small, because of the other factors, but I'm glad we have that check! Anyway, thanks again to the issue reporters, read their paper when it comes out! And thanks to Andrew and Steven from the TLS team. That's it, unless you want to AMA.