... up until you see the implementation. This involves creating SSH host keys on a different computer, and then shipping the keys to the Terraform resource (an AWS EC2 VM) via AWS and cloud-init. (7/n)
Now the SSH host keys are in at least four potential places:
1. The user's computer (and backups?)
2. The Terraform state file (and wherever that's stored)
3. Maybe cloud-init's logs / files saved on the VM (I would need to test this)
4. The EC2 instance
(8/n)
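For context on the mechanism being criticized, here is a rough sketch of how such keys typically get shipped (my assumption of the general pattern, not the exact setup from the thread; file names, the ssh_genkeytypes setting, and the use of PyYAML are illustrative). The point it demonstrates: the pre-generated private key ends up as plain text inside the cloud-init user-data, which Terraform keeps in its state file and AWS keeps as instance metadata.

```python
# Minimal sketch of the pattern described above: pre-generated SSH host keys
# embedded in cloud-init user-data. Key paths are illustrative placeholders.
# Consequence: the private key is plain text in the user-data, which Terraform
# stores in its state file and AWS stores as instance user data.
from pathlib import Path

import yaml  # PyYAML


def build_user_data(key_dir: str) -> str:
    """Build a #cloud-config document carrying pre-generated host keys."""
    keys = Path(key_dir)
    cloud_config = {
        "ssh_keys": {
            # cloud-init's ssh module accepts <type>_private / <type>_public
            "ed25519_private": (keys / "ssh_host_ed25519_key").read_text(),
            "ed25519_public": (keys / "ssh_host_ed25519_key.pub").read_text(),
        },
        # Avoid generating additional key types on first boot.
        "ssh_genkeytypes": ["ed25519"],
    }
    return "#cloud-config\n" + yaml.safe_dump(cloud_config, default_flow_style=False)


if __name__ == "__main__":
    print(build_user_data("./generated-host-keys"))
```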
IMO there is a significant cultural problem here. Chris could maybe talk about this... But how are these existing, long-standing systems that should be well understood by now being so completely misused? Everyone else is building on top of these bad assumptions - (9/n)
- and the cultural message is: "just iterate, get to the next step, don't focus on details".
I don't see how that is compatible with designing secure and reliable systems. Sorry for the rant. (10/10... maybe, I might have fucked up counting)
SSHFP works well. I set it up for every A/AAAA record, with a bogus value if it's something like a DNS round-robin record that's not supposed to be used for login.
Consider it part of the same thing as setting up DANE TLSA records for every TLS service, i.e. every non-SSH service.
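For anyone wanting to try the same thing, a minimal sketch of deriving an SSHFP record from a host public key (stdlib only; the path and hostname are placeholders, and clients still need VerifyHostKeyDNS plus DNSSEC for the record to be useful):

```python
# Sketch: derive an SSHFP resource record from an OpenSSH host public key line.
# Uses only the standard library; the input path and hostname are illustrative.
import base64
import hashlib

# SSHFP algorithm numbers (RFC 4255 / RFC 7479): 1=RSA, 2=DSA, 3=ECDSA, 4=Ed25519
SSHFP_ALGO = {"ssh-rsa": 1, "ssh-dss": 2,
              "ecdsa-sha2-nistp256": 3, "ssh-ed25519": 4}


def sshfp_record(host: str, pubkey_line: str) -> str:
    """Return an 'SSHFP <algo> 2 <sha256-hex>' record for an OpenSSH public key line."""
    key_type, b64_blob = pubkey_line.split()[:2]
    blob = base64.b64decode(b64_blob)          # raw public key blob
    digest = hashlib.sha256(blob).hexdigest()  # fingerprint type 2 = SHA-256
    return f"{host}. IN SSHFP {SSHFP_ALGO[key_type]} 2 {digest}"


if __name__ == "__main__":
    with open("/etc/ssh/ssh_host_ed25519_key.pub") as f:
        print(sshfp_record("host.example.org", f.read()))
```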
WebPKI offers no real value over DANE TLSA records beyond Certificate Transparency, and only Chromium fully enforces CT. That only became fully effective this month, when backdating certificates to bypass it stopped being possible (3-year issuance was still allowed when they started requiring CT). It just started working.
Firefox doesn't enforce it, and Safari only just started their own policy, so they can't fully enforce it for almost another year (one year now being the max lifetime, largely thanks to Apple's unilateral decision to force it).
I've never seen any non-browser client bother doing real WebPKI support.
Nearly everything else just uses the WebPKI CAs without actually enforcing WebPKI rules beyond that. Some things hard-wire CA pins, which is very risky without an out-of-band update mechanism. Few things do leaf pinning, and those that do often fail to support rotation properly.
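To make the comparison concrete, this is roughly what the DANE side looks like: a "DANE-EE, SPKI, SHA-256" (3 1 1) TLSA record derived from the server certificate. A sketch only, assuming the pyca/cryptography package and illustrative file/domain names:

```python
# Sketch: compute a DANE-EE (usage 3), SPKI (selector 1), SHA-256 (matching 1)
# TLSA record from a PEM certificate. Assumes the pyca/cryptography package;
# the certificate path and domain name are illustrative.
import hashlib

from cryptography import x509
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat


def tlsa_3_1_1(name: str, port: int, cert_pem: bytes) -> str:
    cert = x509.load_pem_x509_certificate(cert_pem)
    # Selector 1 = SubjectPublicKeyInfo: key rotation forces a record update,
    # but renewing the certificate with the same key does not.
    spki = cert.public_key().public_bytes(Encoding.DER, PublicFormat.SubjectPublicKeyInfo)
    return f"_{port}._tcp.{name}. IN TLSA 3 1 1 {hashlib.sha256(spki).hexdigest()}"


if __name__ == "__main__":
    with open("cert.pem", "rb") as f:
        print(tlsa_3_1_1("www.example.org", 443, f.read()))
```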
The reality is that if you control DNS, or at least can appear to control DNS, you can trivially obtain a valid certificate from Let's Encrypt or another CA. It's not really misissuance if you don't have DNSSEC + CAA. Hardly anyone checks CT meaningfully beyond making sure it's the right CA.
I'm looking forward to Let's Encrypt deploying accounturi support to production. They have it in staging (used for dry runs by certbot, etc.) but I can't get information on when it's going to be deployed to production. We use that for grapheneos.org, etc. alongside TLSA.
Look at the CAA records for grapheneos.org and then one of the subdomains like matrix.grapheneos.org. They each have their staging and production accounturi pinned for Let's Encrypt. It'll be nice when they start enforcing in prod. TLSA in browsers would ofc be nicer.
accounturi at least makes certificate issuance secure for the CA you choose to trust. It doesn't stop other CAs from ignoring DNSSEC/CAA, being compromised, etc., but at least in our case users can assume a certificate is invalid if it's not from Let's Encrypt, which is easy to check.
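A quick way to see what that looks like in practice is to dump the CAA records yourself. A sketch using dnspython (the domains mirror the ones mentioned above; the accounturi and validationmethods parameters it would show are the RFC 8657 ones, and the example value in the comment is illustrative):

```python
# Sketch: dump CAA records to see issue/issuewild entries and any RFC 8657
# accounturi/validationmethods parameters pinned for a CA. Assumes dnspython.
import dns.resolver  # pip install dnspython


def dump_caa(name: str) -> None:
    try:
        answers = dns.resolver.resolve(name, "CAA")
    except dns.resolver.NoAnswer:
        print(f"{name}: no CAA records")
        return
    for rdata in answers:
        # e.g. 0 issue "letsencrypt.org; accounturi=https://acme-v02.api.letsencrypt.org/acme/acct/..."
        print(f'{name}. CAA {rdata.flags} {rdata.tag.decode()} "{rdata.value.decode()}"')


for domain in ("grapheneos.org", "matrix.grapheneos.org"):
    dump_caa(domain)
```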
You can test it with the Let's Encrypt staging server by using the wrong accounturi or not including the staging one. Using certbot --dry-run does that. Works fine already. I just wish they'd get that and the related validationmethods (same RFC) moved along to production already.

