yeah, seems like there's a fundamental problem in a lot of security stuff, which is just:
A wants to authenticate itself to B, but is using "user agent" C to interact w/ B
C *must* be trustworthy/authenticated to A for A to trust any requests for auth that C claims came from B
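rough toy model of that trust chain- everything below (names, flags) is made up purely for illustration:

```python
# Toy model of the A/B/C trust chain above (all names hypothetical).
# A: the user/authenticator, B: the relying party, C: the user agent
# relaying B's auth requests to A.

def a_should_answer(claimed_source: str, a_trusts_c: bool) -> bool:
    """A can only believe 'this challenge came from B' if A has already
    authenticated C; otherwise C's claim about the origin is unverifiable."""
    if not a_trusts_c:
        return False  # C could be relaying a phisher's challenge
    return claimed_source == "B"

# an untrusted user agent makes every claimed origin worthless:
assert a_should_answer("B", a_trusts_c=True) is True
assert a_should_answer("B", a_trusts_c=False) is False
```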
I don't think Google should actually have that hard-wired list, because people are not strictly using their FIDO2 security key with Android, and therefore Android trying to impose a security model of only trusted browsers doesn't work. It also isn't a viable approach anyway.
yeah- h/w tokens are a fundamentally pragmatic "better than not having it, i guess" approach but without an OOB, direct-to-human (but phish/relay-resistant, somehow?) channel...
there's def. better designs and it sounds like graphene is once again doing neat stuff in this space
shades of the dTPM/fTPM split, actually
a dTPM can in theory be "more hardened" in some very specific ways than an fTPM might (especially around stuff like anti-hammering)
but the need for *physical separation* between the Secure Hardware Root-of-Trust and the software TCB cuts both ways-
you can "pair" the Secure Hardware (dongle/chip/whatever) w/ the software TCB, and at least *encrypt that link*... but you *still need to trust the software TCB*, AND now you have to trust the Secure Hardware
biggest advantage is you can't exfiltrate keys (in theory) out of the Secure Hardware from the software TCB- you can give the Secure Hardware "policies"/programmable bits to release/unseal secrets only if (arbitrary boolean expression evaluates to TRUE) or whatever, but-
outside of rate limiting/anti-hammering, that doesn't get you much of anything useful *unless the Secure Hardware can actually verify the software TCB*
google's Titan seems to at least have gotten this right, or at least have *reasonably defined the problem*
Titan M is paired with the SoC (via TEE) and the link has authenticated encryption, and there's key attestation which is what Auditor uses for hardware-based attestation of the OS including verified boot key fingerprint, patch level and so on since it includes all of that stuff.
The main thing it's missing is a secure display and higher level concepts than simply keys with low-level purposes (sign, encrypt, etc.) and requirements for their usage like the user being authenticated, device being unlocked, physical confirmation, etc. as it's too low-level.
Bitcoin hardware wallets have a display in addition to confirmation, and show the transaction with the destination, amount and fee to be confirmed on the hardware wallet, along with being able to display receive addresses. Much different than a typical HSM for key storage with no UI.
Trezor has a FIDO implementation and they show the public key fingerprint or the organization logo + name (Google, GitHub, etc.) if it's on their hard-wired list. It can't display anything about the client from the OS or verify the OS though, and it can't do attestation.
Can't improve FIDO much because it's designed a certain way and has a specific interface. Can tack some stuff on that it wasn't really intended to support.
Google Play requiring most apps not on their approved list to be approved by domains is really annoying though... horrible.
in theory (n.b. this is just a "might be interesting design space here", not a "oh the solution is trivial it's just an engineering problem" statement), a soft attestation-based fido2 implementation could still involve a hardware token for key storage and final approval-
trezor can't attest to anything on your phone because it's not *part* of your phone
but it can *validate* attestations from your phone and use that to display trustworthy human-readable information about the app making a request

