I've been getting a few questions about the recent "PortSmash" vulnerability announcement. Short answer: This is not something you need to worry about. If your code is vulnerable to it, you were already vulnerable to other (easier) attacks.
While it's great to see that someone put together exploit code for this, it's not a new discovery: I described this attack in 2005 when I first exposed the dangers of shared resources in Intel Hyperthreading.
I didn't write exploit code at the time because this attack only works when the sequence of instructions being executed depends on sensitive information; and if that's the case, you're already leaking information in many other ways (code cache, data cache, branch prediction...).
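To make that concrete, here is a hypothetical sketch (in C; the names are illustrative, not taken from any particular library) of the vulnerable pattern: a textbook square-and-multiply loop whose branch, and therefore whose instruction stream, depends on each secret exponent bit.

```c
#include <stdint.h>

/* Textbook right-to-left square-and-multiply: the "if" below executes a
 * different instruction sequence depending on each secret exponent bit,
 * which is exactly what port-contention, instruction-cache and
 * branch-prediction side channels can observe. */
uint64_t modexp_leaky(uint64_t base, uint64_t exp, uint64_t mod)
{
    uint64_t result = 1 % mod;
    base %= mod;
    while (exp > 0) {
        if (exp & 1)    /* secret-dependent branch */
            result = (uint64_t)(((__uint128_t)result * base) % mod);
        base = (uint64_t)(((__uint128_t)base * base) % mod);
        exp >>= 1;
    }
    return result;
}
```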
The defence against PortSmash is exactly the same as the defence against microarchitectural side channel attacks from 2005: Make sure that the cryptographic key you're using does not affect the sequence of instructions or memory accesses performed by your code.
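A minimal sketch of that advice, assuming the same toy modular exponentiation as above (and GCC/Clang's __uint128_t): run a fixed number of iterations, always compute both the square and the multiply, and pick the result with an arithmetic mask instead of a branch, so the instruction stream and memory accesses no longer depend on the exponent.

```c
#include <stdint.h>

/* Product modulo mod, using a 128-bit intermediate to avoid overflow. */
static uint64_t mulmod(uint64_t a, uint64_t b, uint64_t mod)
{
    return (uint64_t)(((__uint128_t)a * b) % mod);
}

uint64_t modexp_ct(uint64_t base, uint64_t exp, uint64_t mod)
{
    uint64_t result = 1 % mod;
    base %= mod;
    for (int i = 0; i < 64; i++) {                   /* fixed iteration count */
        uint64_t bit  = (exp >> i) & 1;
        uint64_t mask = 0 - bit;                     /* all-ones iff bit is set */
        uint64_t mult = mulmod(result, base, mod);   /* always computed */
        result = (mult & mask) | (result & ~mask);   /* branch-free select */
        base   = mulmod(base, base, mod);
    }
    return result;
}
```

A caveat: the % in mulmod compiles to a division, which may itself be variable-time on some CPUs, which is one reason production code reaches for Montgomery or Barrett reduction instead.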
Replying to @cperciva
Not just the key, the secret data too! This is the right advice, but it's very unsatisfying. It's still too hard to write and verify this kind of code, and sometimes the coding mistakes introduced in the process are bigger problems than the side channels the code was meant to fix.
Replying to @colmmacc
Keys are generally more sensitive than data; but yes, it should all be kept secure. I've been asking compiler authors for 13 years to give us better tools, e.g. to mark variables as "cannot be used in control flow or address computation". Alas, no progress yet...
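Absent that kind of annotation, a common workaround, sketched here assuming GCC/Clang inline asm (the helper names are illustrative), is a "value barrier" that hides the mask from the optimizer, so the compiler cannot prove a branch-free select is equivalent to a branch and rewrite it as one:

```c
#include <stdint.h>

/* Empty asm with the value as an in/out operand: the optimizer must treat
 * x as unknown afterwards, so it cannot specialize on its value. */
static inline uint64_t value_barrier(uint64_t x)
{
    __asm__ volatile ("" : "+r" (x));
    return x;
}

/* mask must be all-ones or all-zeros; returns a or b without branching. */
static inline uint64_t ct_select(uint64_t mask, uint64_t a, uint64_t b)
{
    mask = value_barrier(mask);
    return (a & mask) | (b & ~mask);
}
```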
Replying to @cperciva
Keys are strictly less sensitive than data! Keys can be changed, but if the data leaks ... that's it. Keys are often more highly leveraged ... they can unlock a large volume of data, and are more feasible to attack due to frequent use, but it's worth keeping in mind the data we're protecting.
This is a personal nit of mine. It really bugs me when people put the keys in secret memory, for example, or exclude them from core dumps, but not the actual data! Especially when it's short-lived keys and long-lived data.
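For illustration, the same treatment can be applied to the data buffers as to the keys. A Linux-specific sketch (the function names are illustrative and error handling is minimal): mlock() keeps the pages out of swap, madvise(MADV_DONTDUMP) keeps them out of core dumps, and explicit_bzero() wipes them before release.

```c
#define _GNU_SOURCE
#include <string.h>
#include <sys/mman.h>

/* Allocate len bytes that stay out of swap and out of core dumps. */
void *alloc_secret(size_t len)
{
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return NULL;
    if (mlock(p, len) != 0 || madvise(p, len, MADV_DONTDUMP) != 0) {
        munmap(p, len);
        return NULL;
    }
    return p;
}

/* Wipe and release; explicit_bzero is not optimized away like memset. */
void free_secret(void *p, size_t len)
{
    explicit_bzero(p, len);
    munlock(p, len);
    munmap(p, len);
}
```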
Replying to @colmmacc
True -- what I see more often is long-lived keys and short-lived data, though. For example, TLS secret keys (tied to certificates with months of validity remaining) vs. a single user's session.