It's still a reactive rather than a preventive mechanism, and most organizations don't and won't monitor CT logs. It only works for organizations like Google that track all of their valid certificates and watch the logs for any invalid ones.
It's still only able to catch something after the fact. That doesn't do much good when it's so hard to remove a CA. It's an incredibly broken system, and WebPKI relies on DNS security regardless: there's nothing meaningful beyond DV, which is based on DNS security anyway.
TLSA records (DANE) + DNSSEC work well. They remove CAs as trusted parties without adding a new one, and they're already being heavily adopted for non-Gmail SMTP federation. Microsoft is adopting DANE for its mail services. Google is adopting an insecure mechanism (MTA-STS) instead.
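To make that concrete, here's a minimal sketch of the DANE check an SMTP client performs, assuming dnspython and the cryptography package, a placeholder MX host, and a trusted validating resolver handling the DNSSEC side:

```python
# Minimal DANE-EE(3)/SPKI(1)/SHA-256(1) check for an SMTP server.
# mail.example.com is a placeholder; DNSSEC validation of the TLSA answer
# is assumed to happen in a trusted validating resolver.
import hashlib
import smtplib
import ssl

import dns.resolver
from cryptography import x509
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

host = "mail.example.com"

# TLSA records for SMTP live at _25._tcp.<mx-host>.
tlsa_records = dns.resolver.resolve(f"_25._tcp.{host}", "TLSA")

# STARTTLS and grab the certificate the server actually presented.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # DANE-EE does not depend on WebPKI validation
with smtplib.SMTP(host, 25, timeout=10) as smtp:
    smtp.starttls(context=ctx)
    der_cert = smtp.sock.getpeercert(binary_form=True)

# Hash the SubjectPublicKeyInfo and compare against the published records.
cert = x509.load_der_x509_certificate(der_cert)
spki = cert.public_key().public_bytes(Encoding.DER, PublicFormat.SubjectPublicKeyInfo)
digest = hashlib.sha256(spki).digest()

matched = any(
    r.usage == 3 and r.selector == 1 and r.mtype == 1 and r.cert == digest
    for r in tlsa_records
)
print("DANE match" if matched else "no matching TLSA record")
```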
MTA-STS doesn't even provide WebPKI-level security. It only provides security comparable to using plain http:// URLs with dynamic HSTS, with no HSTS preloading and no Certificate Transparency. DANE works well for web sites too, but browsers have chosen not to provide that security to users.
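For comparison, this is roughly what MTA-STS policy discovery (RFC 8461) looks like: an unsigned TXT record plus a WebPKI-authenticated HTTPS fetch, which is exactly why a first-contact attacker can strip it like unpreloaded HSTS. A rough sketch, assuming dnspython and requests, with example.com as a placeholder:

```python
# Rough sketch of MTA-STS policy discovery; example.com is a placeholder.
import dns.resolver
import requests

domain = "example.com"

# Step 1: plain, unauthenticated DNS lookup of the policy indicator record.
txt = dns.resolver.resolve(f"_mta-sts.{domain}", "TXT")
indicator = b"".join(txt[0].strings).decode()
print("indicator:", indicator)  # e.g. "v=STSv1; id=20240101000000Z"

# Step 2: fetch the policy itself over HTTPS, trusted via WebPKI.
url = f"https://mta-sts.{domain}/.well-known/mta-sts.txt"
resp = requests.get(url, timeout=10)
resp.raise_for_status()

# Note: a real parser must allow repeated "mx" keys; this keeps only the last.
policy = {}
for line in resp.text.splitlines():
    if ":" in line:
        key, value = line.split(":", 1)
        policy[key.strip()] = value.strip()
print(policy)  # e.g. {"version": "STSv1", "mode": "enforce", ...}
```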
A significant number of client-side networks and resolvers have compatibility issues with DNSSEC. DNS-over-TLS / DNS-over-HTTPS resolve this when they're in active use. There's no technical reason Chrome can't do DNSSEC + TLSA record validation when using DoT/DoH. Why not even mention it?
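As a sketch of how little plumbing that would take, here's a TLSA query with the DNSSEC-OK bit over DoH using dnspython (dns.google is just one public endpoint, and the record name is a placeholder); actually validating the RRSIG chain up to the root is omitted:

```python
# Fetch a TLSA RRset and its RRSIGs over DNS-over-HTTPS, bypassing the
# local resolver path entirely. Full DNSSEC validation (walking DS/DNSKEY
# up to the root) is left out of this sketch.
import dns.message
import dns.query
import dns.rdatatype

name = "_25._tcp.mail.example.com"

query = dns.message.make_query(name, dns.rdatatype.TLSA, want_dnssec=True)
response = dns.query.https(query, "https://dns.google/dns-query")

for rrset in response.answer:
    print(rrset)  # expect the TLSA RRset plus covering RRSIG records
```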
The past reasoning for the path taken by Chrome no longer holds up. I'd like to hear why DNSSEC+DANE is good enough for Microsoft but not for Google. I'd like to see Google employees stop going out of their way to avoid acknowledging that it exists, and give their reasoning for things like MTA-STS.
It's not true that only organizations "like Google" monitor logs. Organizations big and small use Cert Spotter (both the open-source and commercial versions) or other tools to monitor CT. Meanwhile, there is no way to detect whether a malicious registry signs unwanted TLSA records.
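For example, a small organization can watch for unexpected issuances with a few lines against the public Cert Spotter API. The endpoint and fields below reflect SSLMate's documented v1 issuances API as I understand it; the domain and the issuer allow-list are placeholders:

```python
# Flag CT-logged certificates for a domain that weren't issued by a CA we use.
import requests

DOMAIN = "example.com"                 # placeholder
EXPECTED_ISSUERS = ("Let's Encrypt",)  # substrings of issuer DNs we expect

resp = requests.get(
    "https://api.certspotter.com/v1/issuances",
    params={
        "domain": DOMAIN,
        "include_subdomains": "true",
        "expand": ["dns_names", "issuer"],
    },
    timeout=30,
)
resp.raise_for_status()

for issuance in resp.json():
    issuer_dn = issuance.get("issuer", {}).get("name", "")
    if not any(expected in issuer_dn for expected in EXPECTED_ISSUERS):
        print("Unexpected issuer:", issuer_dn, "for", issuance.get("dns_names"))
```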
A registry can sign TLSA records all it likes, but it would also have to sign alternative DS (and suitable NS) records or their non-existence, and somehow serve this to the victim alone, and remain undetected. Registry malice would in most cases also easily compromise WebPKI.
Supposedly CT makes this harder, but we could and should have a CT analogue for DNSSEC. One good stepping stone would be requiring CAs to include the RRSIG chains for the RRs they used in DV as part of the CT entry.
For the time being they could just enforce DANE + WebPKI instead of having standalone DANE support. It's not as if websites could stop using WebPKI even if browsers fully supported DANE, because not all browsers and other clients would have DANE. We already don't disable TLS < 1.3 because of bots.
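A sketch of what "both must pass" could look like for a website, assuming dnspython and the cryptography package, a placeholder hostname, and a validating resolver handling DNSSEC:

```python
# Require the normal WebPKI validation *and* a TLSA match for an HTTPS host.
import hashlib
import socket
import ssl

import dns.resolver
from cryptography import x509
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

host = "example.com"  # placeholder

# WebPKI: the default context keeps chain and hostname verification enabled.
ctx = ssl.create_default_context()
with socket.create_connection((host, 443), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        der_cert = tls.getpeercert(binary_form=True)

# DANE: match the presented SPKI against _443._tcp TLSA records (selector 1).
cert = x509.load_der_x509_certificate(der_cert)
spki = cert.public_key().public_bytes(Encoding.DER, PublicFormat.SubjectPublicKeyInfo)
digests = {1: hashlib.sha256(spki).digest(), 2: hashlib.sha512(spki).digest()}

records = dns.resolver.resolve(f"_443._tcp.{host}", "TLSA")
dane_ok = any(r.selector == 1 and r.cert == digests.get(r.mtype) for r in records)
print("WebPKI OK; DANE", "OK" if dane_ok else "FAILED")
```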
Disabling TLS < 1.2 is fine, but disabling TLS < 1.3 for websites is horrible because you break all kinds of crawlers and hurt search engines, etc. Google uses modern Chromium and other tooling for its crawling, but it still fetches some things over TLS 1.2. It's really annoying.
Other search engines are still largely TLS 1.2 only. Lots of other bots too. Even if browsers deployed full DANE next month you'd probably have to wait 10+ years to be able to actually drop WebPKI if you didn't want to break lots of important crawlers.
Yes, I'm not arguing against any of that. I'm just putting forward proposals that would both strengthen WebPKI (programmatic detection of badly behaved CAs issuing fraudulent certs for secure zones) and shut down claims that DANE lacks some of WebPKI's benefits.