It boils down to two things: the server has to know how to decrypt the client's early conversation, which means it needs the conversation key that's being used. Most servers do this by encrypting the conversation key under another key and asking the client to remember the resulting ticket.
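That key-in-a-key ticket idea can be sketched in a few lines. This is a toy illustration of the concept, not the real TLS 1.3 ticket format, and the names (`STEK`, `seal_ticket`, `open_ticket`) are hypothetical; a real server would use a proper AEAD, not this HMAC-keystream construction.

```python
import hmac
import hashlib
import secrets

# Long-lived server-side ticket key (a "STEK"). While it lives, every
# ticket sealed under it can be opened -- that's the forward-secrecy cost.
STEK = secrets.token_bytes(32)

def seal_ticket(conversation_key: bytes) -> bytes:
    """Encrypt a 32-byte conversation key under the STEK; the client stores the result."""
    nonce = secrets.token_bytes(16)
    # Derive a one-shot keystream from the STEK and XOR it over the key.
    stream = hmac.new(STEK, b"enc" + nonce, hashlib.sha256).digest()
    ct = bytes(a ^ b for a, b in zip(conversation_key, stream))
    tag = hmac.new(STEK, b"mac" + nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def open_ticket(ticket: bytes) -> bytes:
    """Recover the conversation key from a ticket the client sent back."""
    nonce, ct, tag = ticket[:16], ticket[16:48], ticket[48:]
    expect = hmac.new(STEK, b"mac" + nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):
        raise ValueError("bad ticket")
    stream = hmac.new(STEK, b"enc" + nonce, hashlib.sha256).digest()
    return bytes(a ^ b for a, b in zip(ct, stream))

key = secrets.token_bytes(32)
ticket = seal_ticket(key)
assert open_ticket(ticket) == key   # server recovers the key statelessly
assert open_ticket(ticket) == key   # ...and nothing stops a second use: replayable
```

Note the last line: because the server keeps no per-session state, nothing in this scheme stops the same ticket (and the early data sent under it) from being presented twice.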
The answer is for the server not to use key-in-a-key BS. Instead, if the server just remembers the key, lets a client use it ONCE, and deletes it when it's done ... we get FORWARD SECRECY and ANTI-REPLAY. REJOICE!!!
.... except it costs the server money. It has to cache more keys, it's not easy to distribute across wide geographic areas, and it comes with its own distributed-systems challenges. But guess what? THAT'S ALL THE TLS SERVER'S PROBLEM.
... no need to modify thousands of applications, no need to teach PHP and RubyOnRails developers the intricacies of idempotency edge cases. Nope, just one slightly costly change within the TLS1.3 servers. So that's my plan, and REJOICE again, because TLS1.3 can have secure 0-RTT
.... unless some TLS servers would cut corners, and just want the fast benchmarks, and you know .... deploy TLS1.3 0-RTT without built-in SAFETY mechanisms. That would be INSANE, I mean, why risk bugs and side-channels, right?
Oh wait, no, that's exactly what's happening. So here's my advice: if you see a server supporting 0-RTT and that server doesn't give you an iron-clad guarantee that the key is deleted when it's used, and that your EARLY CONVERSATION can't be replayed ... don't use it.
Last message in the thread: no 0-RTT is not some NSA backdoor (Dear HN: grow up), there are no intentional back doors in TLS1.3, and it is still overall AWESOME AND EXCITING and we'll be adding it to s2n ... VERY SOON. EOF.