Just killed Signal Desktop because it was using 2GB of memory. 🙃
(The blame for this, like many things, lies entirely with Electron)
what really prevents me from installing Signal Desktop is not the risk of RCE (there are ways to fix that like sandboxes and VMs!) but that a bug would still lead to accessing other chats, etc. and it's non-trivial to fix that
i think they added a CSP, etc. now so XSS probably isn't a huge risk
and yet 😰
i am perpetually tempted to try and build something using their new Rust client
Electron tends to cripple Content-Security-Policy. IIRC, it essentially breaks 'self'. Since you only need to deal with Chromium, you can strictly use hash-source as the only way scripts are permitted. Other browsers don't yet support hash-source for external scripts, only inline scripts.
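as a sketch of what that looks like (the file name and script contents here are illustrative), the hash-source value is just the base64 SHA-256 of the exact script bytes:

```shell
# Illustrative: derive a CSP hash-source value for a bundled script.
printf 'console.log("hi");' > renderer.js
HASH=$(openssl dgst -sha256 -binary renderer.js | openssl base64 -A)
echo "Content-Security-Policy: script-src 'sha256-${HASH}'"
```

with a policy like that, Chromium refuses to run any script whose hash isn't listed, including anything injected at runtime.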
I very much doubt they have a strict enough CSP to do much to prevent XSS. They also probably aren't using Trusted Types. If you entirely avoid XSS sinks by working with the DOM via structured APIs, you can enforce Trusted Types with a strict policy that doesn't trust any sanitizer.
Only 'self' without 'unsafe-eval' or 'unsafe-inline' is still unsafe if you have dynamic content considered to be 'self', which is a major issue with Electron.
See microsoftedge.github.io/edgevr/posts/e for Trusted Types. Strictest is `require-trusted-types-for 'script'; trusted-types 'none'`.
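putting the two together, a locked-down set of headers for a Chromium-only app might look like this (the sha256 value is a placeholder, not a real hash):

```
Content-Security-Policy: script-src 'sha256-<hash-of-your-script>'; object-src 'none'; require-trusted-types-for 'script'; trusted-types 'none'
```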
For a website, using 'self' means you permit everything from your server. You can be very careful to only have static content and APIs not serving JavaScript. You can't really do that with Electron. The local JavaScript has the ability to create files considered to be 'self'...
`X-Content-Type-Options: nosniff` is an important security header for making CSP and other security features robust combined with careful control over how you serve things.
Electron really messes with how these things are supposed to work. Android WebView is much better...
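for example, an API response that must never be interpreted as a script could be served like this (values illustrative):

```
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
X-Content-Type-Options: nosniff
Content-Security-Policy: default-src 'none'
```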
I have no clue how to write a Trusted Types policy, and I don't need to learn in order to use the strictest mode.
I had already written my JS following the required rules, such as never using innerHTML and instead using the structured APIs even when that's verbose/annoying. Trusted Types simply enforces it now.
Can't make the mistake of using some API where dynamic JS/HTML injection can happen. There are some particularly subtle ones.
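a minimal sketch of that style (renderMessage and its arguments are illustrative, not Signal's code): building DOM nodes directly means untrusted text is never parsed as HTML, so there's no sink for Trusted Types to block.

```javascript
// Sketch: build nodes with structured APIs instead of HTML strings.
function renderMessage(doc, text) {
  const p = doc.createElement('p');
  p.className = 'message';
  // textContent creates a text node; '<script>' here is just characters.
  p.textContent = text;
  return p;
}
```

in a browser you'd call `renderMessage(document, untrustedText)` and append the returned node.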
Signal presumably uses some bloated frameworks/libraries where you have no choice but to set up actual policies, though, to either sanitize their output or trust them.
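the shape of such a policy might be something like this (a browser-only sketch, guarded so it's a no-op elsewhere; DOMPurify is my assumption about the sanitizer, and the CSP would need to allow the policy name, e.g. `trusted-types sanitize-html` rather than `trusted-types 'none'`):

```javascript
// Sketch: a named Trusted Types policy wrapping a sanitizer.
function makeSanitizingPolicy() {
  if (typeof trustedTypes === 'undefined') return null; // no Trusted Types here
  return trustedTypes.createPolicy('sanitize-html', {
    // DOMPurify is illustrative; any sanitizer returning a safe HTML string works.
    createHTML: (input) => DOMPurify.sanitize(input),
  });
}
```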
i think Signal uses React and maybe something else too. one of their XSS bugs was due to using React's aptly named "dangerouslySetInnerHTML"


