That post blames Intel, but it's not just them. NaCl forcibly crashed (and Google refused our trivial fix!!!), random drivers wouldn't work, you had to get a patched Windows installer for multiple releases of Windows (difficult in the days of CD installers), etc.
-
The Google Chromium team banning our CPUs is especially ironic in retrospect since they cited security concerns. At the time, we were mostly shipping in-order CPUs, not vulnerable to Meltdown/Spectre/etc., and of course Intel is the most vulnerable to these. https://twitter.com/danluu/status/779746231287328768
-
The security of NaCl required accurately predicting control flow, and confidence that it would work in adversarial conditions (e.g. someone trying to induce faults, undocumented opcodes, etc.). I said it seemed prudent to whitelist CPUs we had tested, and I stand by that.
-
If someone overclocks their CPU, and an attacker does some x87 operation in a tight loop for a few minutes, how confident are you that branches won't be miscalculated? There are usually no security consequences for this, so vendors didn't test for it. It worried me.
e.g. https://devblogs.microsoft.com/oldnewthing/20050412-47/?p=35923
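To make the concern above concrete, here is a minimal sketch (in C, not from the thread; the loop bound, the 0xFF mask, and the threshold are arbitrary choices) of the kind of fault-hunting stress test it implies: keep the floating-point unit busy with transcendental math in a tight loop while repeatedly evaluating a branch whose outcome is known in advance, and count any iteration where the impossible side is taken.

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    /* Floating-point work to keep the FPU busy and the core under load. */
    volatile double heat = 1.0;

    /* volatile so the compiler cannot fold the comparison below away:
       (i & 0xFF) can never reach 256 on a correctly functioning CPU. */
    volatile unsigned long long threshold = 256;
    unsigned long long anomalies = 0;

    for (unsigned long long i = 0; i < 2000000000ULL; i++) {
        heat = sin(heat) * cos(heat) + 1.0001;  /* tight transcendental loop */

        /* This branch should never be taken; the worry is that, under
           induced stress, a comparison like this occasionally resolves
           the wrong way. */
        if ((i & 0xFFULL) >= threshold)
            anomalies++;
    }

    printf("anomalies: %llu (heat=%f)\n", anomalies, (double)heat);
    return anomalies != 0;
}
```

On x86-64 a compiler will normally emit SSE rather than x87 code for this, so reproducing the literal x87 scenario would take something like GCC's -mfpmath=387; the sketch only shows the shape of such a test, not a claim that it would catch a marginal part.
-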
Are you saying this couldn't happen on whitelisted CPUs? As a hardware engineer, I don't see how you could expect that. You couldn't possibly guarantee this without information the vendor won't give you (detailed timing models w/manufacturing variation).
-
You can maybe get a vendor to tell you they tested for this, but most likely they'll just lie to you and roll their eyes internally. Even without overclocking, I've seen this kind of thing happen on server chips (which have more rigorous testing done on them than consumer chips)
-
You can't reasonably test for this yourself in any way. Even if you sample 1M chips uniformly across wafers, fabs, etc., a new stepping (which could be a simple 1-layer fix) could completely change timing and invalidate all of your testing; you'd have to test another 1M sampled uniformly.
-
You misunderstand: we check the whitelisted subset of functionality we rely on. Of course we can check that it works as we expect?
-
You specifically said you're concerned about things like overclocking causing incorrect behavior on branches. How exactly do you check that doesn't happen and that branches are executed properly on overclocked CPUs?
-
We test it. I think you're asking how I can be certain that it will work on every unit of that model ever produced. Obviously the answer is that we can't, but we have higher confidence after checking?
-
Since you're whitelisting based on vendor (?), no. Testing one uarch shouldn't give you any more confidence that another uarch works than that a Transmeta chip works. And within one uarch, how exactly are you testing? I suspect the answer is technically yes, but only in a meaningless way.
-
It's a compromise; I would prefer a more specific check. Not sure what alternative you're proposing: just start depending on how obscure parts of the spec work under attack, without even testing?
-
IMO, this answer sounds like: 1. Something must be done. 2. This is something. 3. We should do this. I think almost any competent CPU engineer would tell you that this actually has no meaningful impact. You're pointing out a real problem, but that doesn't mean this helps at all.