Of course we can never be sure. But finite resources and the need to prioritize require that we go with the best evidence we have. I think that memory safety work should take priority over Site Isolation, for example.
Because when attackers attack browsers, in practice they go after memory safety issues (which often lead to sandbox escapes), not Spectre.
(And to reiterate I think we *should* do Site Isolation…just that we should be clear about what the real-world benefits are going to be.)
You’d rather have a memory-unsafe renderer with Site Isolation than a memory-safe one without it? That’s putting a *lot* of faith in your sandbox mechanisms.
Just due to the amount of effort it would take?
A sandbox is only as safe as its interface to the trusted process. On the Web, the surface area of that interface is so broad and so deep that I have a hard time imagining having confidence in it, any more than I have confidence in Linux kernel syscalls.
To me the issue isn’t so much that a few parts of the platform are problematic as that the platform as a whole is enormous.
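To make the interface-surface point concrete, here is a minimal sketch (not from the thread) of the kind of bug that sidesteps a sandbox entirely: a hypothetical broker-process handler servicing file-read requests from a sandboxed renderer. The PROFILE_DIR path and the read_profile_file helper are invented for illustration; the point is that one missing check on the trusted side of the IPC surface hands a compromised renderer the broker's privileges, however strong the sandbox itself is.

// Hypothetical broker-side handler for a renderer's "read a file from my
// profile" request. The sandbox confines the renderer; this code runs with
// the broker's full privileges.
use std::fs;
use std::path::{Path, PathBuf};

const PROFILE_DIR: &str = "/home/user/.browser/profile"; // assumed layout

fn read_profile_file(requested: &str) -> std::io::Result<Vec<u8>> {
    let full: PathBuf = Path::new(PROFILE_DIR).join(requested);
    // Resolve symlinks and "..", then confirm the result stays inside the
    // profile directory. Omit this check and a renderer that sends
    // "../../../../etc/passwd" reads whatever the broker can read.
    let canonical = full.canonicalize()?;
    if !canonical.starts_with(PROFILE_DIR) {
        return Err(std::io::Error::new(
            std::io::ErrorKind::PermissionDenied,
            "request escapes profile directory",
        ));
    }
    fs::read(&canonical)
}

fn main() {
    // Simulate a compromised renderer probing the interface.
    match read_profile_file("../../../../etc/passwd") {
        Ok(_) => println!("broker leaked a file outside the profile"),
        Err(e) => println!("broker refused: {e}"),
    }
}

A real broker exposes many such request types (files, GPU, networking, clipboard, printing, and so on), which is the "broad and deep" surface described above: every one of them has to get its validation right.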