Concerns about Passkeys (micahrl.com)
66 points by mrled 3 months ago | 46 comments



I’m still not sold on passkeys, and I don’t like to see KeePassXC singled out like this.

As someone who uses Mac, Linux, iOS, Android and Windows, I want something that lets me sync my authentication methods across all of them, and the KeePass ecosystem (even with 2-3 different apps) is the only game in town. I absolutely do not want to use a cloud-based or vendor-owned password manager, period.


These are valid concerns, and I can absolutely see situations where I would want to do both of those things that KeePassXC is doing (skipping user verification and exporting private keys). But security isn't one-size-fits-all.

Requiring user verification gets in the way of what the user wants to do. Skipping it lets an assertion be made in the background - possibly by a malicious script. Is that tradeoff worth it?
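
For concreteness, here is where that knob lives in WebAuthn: the site requests verification, and the authenticator reports what it actually did in a flags byte. A sketch in TypeScript (the challenge would really come from the server; it's generated locally only to keep the example self-contained):

    // Browser side: the site asks the authenticator for user verification.
    const challenge = crypto.getRandomValues(new Uint8Array(32));

    const assertion = (await navigator.credentials.get({
      publicKey: {
        challenge,
        userVerification: "required", // "required" | "preferred" | "discouraged"
      },
    })) as PublicKeyCredential;

    // Server side: the authenticator reports what actually happened in the
    // flags byte of authenticatorData (byte 32, after the 32-byte rpIdHash).
    const authData = new Uint8Array(
      (assertion.response as AuthenticatorAssertionResponse).authenticatorData,
    );
    const flags = authData[32];
    const userPresent = (flags & 0x01) !== 0;  // UP: someone touched the device
    const userVerified = (flags & 0x04) !== 0; // UV: a PIN or biometric was checked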

Letting the user export private keys is absolutely important for backup and for transferring between devices and services. But if you can easily export a private key, then cloning it becomes significantly easier. Are trivially cloned keys a risk we're willing to take?

The answers to these depend on the user, the provider, the application, and their combined threat model. Sometimes those risks are totally fine. Other times, they're totally not. The standards could open up more options and let users or sites negotiate what they can and can't do. But the cost in that direction is that the overall concept becomes more complicated, and it requires both site operators and users to learn what those tradeoffs involve - with a near certainty that security will be weaker as a result.

This isn't a cut-and-dried issue with clear 'right answers' and villains. Tradeoffs exist in every direction, and there just aren't any security free lunches to be had here.


My bank currently reprompts me for password whenever I make an important transaction (e.g., transferring out lots of $$$, or adding an account). Should they drop that security feature when they switch to passkeys?


Yes, because it's both annoying and adds no extra security if you're using a password manager. While the database is unlocked, the password is in memory, and reprompting the user to enter the unlock code for an already-unlocked database is just security theatre.


This assumes the attacker has unrestricted access to memory. If a malicious actor has that level of access, you've already lost all security guarantees, regardless of the auth mechanism.

A more realistic scenario is where the user has installed a malicious extension that can exfiltrate the cookies. Requiring reauthentication makes an exfiltrated cookie less valuable. While the extra auth step can be annoying, it also provides an opportunity for additional safety checks (like validating that the IP of a request matches that of the recent auth).


GitHub calls this sudo mode, and it's a good idea that more sites should adopt.
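
A minimal sketch of the idea, assuming a session store that records the last explicit re-auth and the IP it came from (names here are illustrative, not GitHub's actual implementation):

    interface Session {
      userId: string;
      lastAuthAt: number; // epoch ms of the last explicit re-authentication
      lastAuthIp: string; // IP observed at that re-authentication
    }

    const SUDO_WINDOW_MS = 5 * 60 * 1000; // a re-auth is good for 5 minutes

    function sudoAllowed(session: Session, requestIp: string): boolean {
      const fresh = Date.now() - session.lastAuthAt < SUDO_WINDOW_MS;
      // Tying the sensitive action to the network context of the re-auth
      // makes an exfiltrated cookie replayed from elsewhere fail the check.
      const sameIp = session.lastAuthIp === requestIp;
      return fresh && sameIp;
    }

    // Before a sensitive action (large transfer, adding an account):
    // if (!sudoAllowed(session, requestIp)) { /* re-prompt for password/passkey */ }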


My bank requires me to use a chip-and-PIN card reader in such situations, which I like.

But they want to get rid of it and use passkeys instead.


> Is that tradeoff worth it?

> Are trivially cloned keys a risk we're willing to take?

The point is that for my passkey stored on my device, I should be the one who gets to answer those questions.


Everything old is new again: https://www.netrek.org/about/netrekFAQ.html#10

""" I compiled the client source, but every time I try to connect to a server it kicks me out or tells me to get a 'blessed' binary. What gives?

It's possible to modify the client source to do lots of tedious tasks (like aiming, dodging, that sort of thing) for you. Since this gives you a big advantage over a mere human, netrek has a way of knowing whether you have a client that was compiled by the netrek Gods or by you. If you compiled it, netrek will assume it's a cyborg, and will kick you out if it's not cyborg hours. """


Has anyone managed to reverse engineer how "blessing" works yet?


Wasn't it some private key signature...? Man, that's digging back a bit.


It's all open source. No need to reverse engineer.


I can't believe the thing that passkey defenders swore up and down wouldn't happen is happening!


Don't worry, the passkey defenders have a perfectly good explanation for this. They'll tell you all about it just as soon as you email them privately to set up a phone call.


But not before you join their Alliance and sign some papers...


https://github.com/keepassxreboot/keepassxc/issues/10407#iss...

> You absolutely should be preventing users from being able to copy a private key!

Huh? This is dumb. Users should be able to do whatever they want with their private keys. Looks like the post is on point about the push to take away control from the user. This is an anti-feature that should not be sneakily accepted as a security feature.

When DRM-like stuff is shoved on the user in the name of security, it turns into the means to control the users by whoever makes those decisions for them. This should always be opposed.

Having requirements like "users should not be allowed to do X" stinks to the extreme.


Further down the thread, this bit honestly reads like a threat.

> The unfortunate piece is that your product choices can have both positive and negative impacts on the ecosystem as a whole. I've already heard rumblings that KeepassXC is likely to be featured in a few industry presentations that highlight security challenges with passkey providers, the need for functional and security certification, and the lack of identifying passkey provider attestation (which would allow RPs to block you, and something that I have previously rallied against but rethinking as of late because of these situations).


I disagree. Not necessarily in principle, but because there is no good way for a passkey app to distinguish between a competent user and a malicious actor pretending to be one. Passkeys are, in a sense, very, very dangerous — with passwords, everyone knows that a password can be compromised, and any competent security system needs to tolerate passwords getting compromised. But passkeys (and TOTP secrets and such) are treated as long-term secrets. If a website enrolls a passkey supposedly attached to an Apple keychain, then that website would like to be able to trust that an Apple-account compromise which is later recovered from will not result in persistent compromise of a passkey that predates the compromise and recovery.

If a passkey can have its private key exported, by anyone at all, then this property is lost. I do not want access even to my own passkey private keys!

I certainly don’t love having a few gatekeepers in charge, but the protocol (currently?) does not really support a good alternative. And doing better is hard!


> If a passkey can have its private key exported, by anyone at all

But it's not by anyone at all. It's only by users that have unlocked their database. I really don't see the attack vector here.

It's not like the Apple Keychain at all. Your interaction with KeePassXC is very different: the locked vs. unlocked state is explicit (and you're almost always auto-locking anyway), whereas Keychain is something happening in the background that sometimes prompts me for my password or fingerprint. I have no idea what its state is, and I would be very annoyed if someone could leak all my secrets just by accessing my computer.

With KeePassXC I'm always aware if it's open or not, because I can't use it without knowing that, and I had to make a very explicit opening of it. Because it uses local files and not the cloud, it's very important to me to be able to import and export the contents. Without that ability, I will lose access to my passwords.


Passkeys are advertised as a more convenient form of the private/public key approach. So how is that any different from other uses of private keys, which are considered long-term secrets? Incompetent users can compromise those now too.

It doesn't mean it should be easy to do, but it's also completely unacceptable to make a requirement like "users are forbidden to access their private keys".

> by anyone at all

What do you mean by anyone at all? By the owner of the private key. Not by anyone.


> What do you mean by anyone at all? By the owner of the private key. Not by anyone.

If I log into my computer and turn my private key into a plaintext blob, as a file or a Python object or something on a USB stick or a QR code that I photograph, then anyone who happens to have compromised my computer at the time has my private key, too. Even if I subsequently fix the compromise, they still have my private key.

I do not want this to happen.


IMO one of the biggest issues with the Passkey spec is that it doesn't provide a way to automatically rotate credentials. The entire security model relies on Apple/Google/[insert name of nonprofit they end up allowing through the DRM gates to avoid antitrust suits] being completely infallible, forever.


That's why private keys (as used with SSH, for example) are normally paired with something like a passphrase that offers an additional layer of protection. But still, you (the owner of the private key) can access it. You should keep both your key and your passphrase secret. Not sure what passkeys are doing about it, but I still don't see any valid reason for the owner not to have full access to the key.
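
That layering is easy to see in code. A sketch with Node's built-in crypto, where the key stays exportable but the export is wrapped under a passphrase (the passphrase literal is a placeholder; in practice the owner supplies it):

    import { generateKeyPairSync } from "node:crypto";

    const { privateKey } = generateKeyPairSync("ed25519", {
      publicKeyEncoding: { type: "spki", format: "pem" },
      privateKeyEncoding: {
        type: "pkcs8",
        format: "pem",
        cipher: "aes-256-cbc",       // encrypt the exported key at rest
        passphrase: "correct horse", // placeholder
      },
    });

    // A PEM blob that is useless without the passphrase, but the owner
    // still holds it and can move it between machines.
    console.log(privateKey.split("\n")[0]); // -----BEGIN ENCRYPTED PRIVATE KEY-----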

If someone has sufficient access to your computer (like being able to keylog and stuff) - it's somewhat late to worry about keys being compromised.


In general, anyone can make a passkey app. KeePassXC chooses to be out of spec. No one is gatekeeping them. If an nginx server receives bad data, it spits out a 400 error instead of processing the request. One of the reasons browsers are still effed up is that they refused to be standards compliant, and we're still paying for quirks mode. I would like to see this article complain about an HTTP server handling a bad actor.

Otherwise, create multiple passkeys. Create a passkey in your iOS keychain and in your KeePassXC app. This walled garden has a gate; walk through it.


In the hypothetical scenario where websites block Keepass, Keepass would not be sending “bad data” to the website. Its interaction with the website would not be noncompliant in any way. Rather, the website would be punishing Keepass for a separate interaction between Keepass and the user.

A more apt analogy would be if the HTTP server sent a 400 to all requests from browsers known to support ad blocking.


Not so hypothetical. PayPal supports passkeys, but does browser sniffing to enable it only in Safari on Mac. I could tell my browser to fake its UA to use 1Password's passkeys, but to what end?


It is potentially bad data, since authentication data is supposed to show that a valid user wants to log in to the service. If the client makes it easy for anyone to pretend to be that user, then the authentication data is bad data.


The better analogy is a web server blocking a web client because, even though it's standards compliant, it does something with the data it receives that the server's owner doesn't like. For example, yt-dlp.


Really, the example would be refusing to serve browsers that are standards compliant but aren't blessed by some certification authority, kind of like what they tried to pull with WEI. I'm sure FOSS developers will be able to keep up the passkey certification regime as well as the big boys, right?


According to this list, the majority of clients are out of spec: https://passkeys.dev/docs/reference/known-issues/


The list has been filtered to include only non-compliant clients:

“The following list of passkey providers have not implemented User Verification in a spec-compliant manner.”


I think this is more like a webserver sniffing the user agent and choosing not to serve the request, not like sending a webserver bad data such that it isn't able to serve the request. I'm concerned that passkeys end up in a "This site is best viewed in Internet Explorer" mindset, where passkey providers that would work fine are detected and prohibited because the website operators want them to enforce user behavior.


In the sense of "I refuse to support browsers that only support tls 1.0", definitely. "Just let the user turn off TLS, why do you hate choice" isn't the instant win you might hope it is.


No, again, the protocol between the site and the authenticator is unchanged. It's much more like DRM that doesn't let 4K media play on systems that allow the user to do whatever they want, but in this case instead of the DRM preventing the user from copying someone else's copyrighted work, it's preventing the user from copying their own data.


I agree that it's not an unqualified win. If sites block passkey apps that allow exporting unencrypted passkeys, that probably will prevent some accidental passkey leaks.

It's just that it's not an unqualified win to allow sites to block passkey apps either. If we allow that, we can get to a place where sites block apps for the wrong reason, or it becomes more expensive to develop passkey apps so there is less competition for secure passkey apps.

It's not just whether it's a good idea to allow unencrypted exports. It's whether it's a good idea to give websites a say in how we manage credentials.


This article is very poorly informed, and is likely written by someone who has never had to secure a site or work in a large enterprise. The author seems to be upset that site owners also have some authority to make decisions. They’re users of the technology too, you know.

Based on this article, I assume the author is also raging about companies using “do not copy” physical keys, or dictating the use of a key card to enter.


Bitwarden also supports passkeys, and works on iOS, Android, Mac, Windows etc.

Mind, I’ve no idea how well it does so. Every so often, my passkeys fail in some incomprehensible way, so I’m not very comfortable with the concept.


Attestation is pure evil and is the only reason that passkeys aren't great. It's only useful for things like blocking authenticators that refuse to DRM the user, exactly as Okta is threatening to do to KeePassXC.

To be clear, the only thing KeePassXC is "out of spec" about is that where the spec says "you must not let the user do X, Y, and Z with their own data", KeePassXC will let you do those things, after a warning.
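
Concretely, the AAGUID in the registration data is the hook that would let an RP do the blocking. A sketch of what that looks like server-side, assuming the npm "cbor" package for decoding (the blocklisted AAGUID below is a placeholder, not any real provider's):

    import cbor from "cbor";

    function extractAaguid(attestationObject: Buffer): string {
      const { authData } = cbor.decodeFirstSync(attestationObject);
      // authData layout: rpIdHash (32) | flags (1) | signCount (4) | AAGUID (16) | ...
      const hex = (authData as Buffer).subarray(37, 53).toString("hex");
      return [hex.slice(0, 8), hex.slice(8, 12), hex.slice(12, 16),
              hex.slice(16, 20), hex.slice(20)].join("-");
    }

    const BLOCKED_PROVIDERS = new Set([
      "00000000-0000-0000-0000-000000000000", // placeholder AAGUID
    ]);

    function providerBlocked(attestationObject: Buffer): boolean {
      return BLOCKED_PROVIDERS.has(extractAaguid(attestationObject));
    }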


Poppycock.

The credential is not only the user's data. The credential is an agreement for access between the user and the service provider.

The service provider has every right in the world to demand the user prove that they are securely storing the credential in a way that can't be extracted.


> The service provider has every right in the world to demand the user prove that they are securely storing the credential in a way that can't be extracted.

Wait, really? Does this work both ways? Do I get to demand that the service provider store the data it collects about me in a way that can't be extracted? Oh, apparently not[1]...

[1] https://www.technologyreview.com/2023/07/17/1076365/how-tech...


> The service provider has every right in the world to demand the user prove that they are securely storing the credential in a way that can't be extracted.

I'm so glad people never crammed that into the TOTP protocol. You have recovery codes you can save (which are arguably just as sensitive as the TOTP secret) and a lot of apps let you export the secret entirely.

I used an app on iOS that didn't let you export them, and it took hours to migrate each entry one by one to my new Android device. Even with recovery codes, it was a pain to log in to each site and drill through its menus to disable and set up 2FA again. I should have been wary of that app.
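
For what it's worth, TOTP is small enough that you can see why the secret is naturally portable. A sketch of RFC 6238 with the usual defaults (SHA-1, 30-second step, 6 digits), using Node's built-in crypto:

    import { createHmac } from "node:crypto";

    function totp(secret: Buffer, now = Date.now()): string {
      const counter = Math.floor(now / 1000 / 30); // 30-second time steps
      const msg = Buffer.alloc(8);
      msg.writeBigUInt64BE(BigInt(counter));
      const mac = createHmac("sha1", secret).update(msg).digest();
      const offset = mac[mac.length - 1] & 0x0f; // dynamic truncation
      const code = (mac.readUInt32BE(offset) & 0x7fffffff) % 1_000_000;
      return code.toString().padStart(6, "0");
    }

    // Apps usually hand you the key base32-encoded inside an otpauth:// URI,
    // e.g. otpauth://totp/Example:me?secret=JBSWY3DPEHPK3PXP&issuer=Example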


> The credential is not only the user's data. The credential is an agreement for access between the user and the service provider.

The credential is, in fact, only the user's data. How does it even make sense that a credential could be an agreement?

> The service provider has every right in the world to demand the user prove that they are securely storing the credential in a way that can't be extracted.

No, nobody has any right to dictate, or even know, how my device stores my data.


You're dictating to your bank that they shall not let your money be stolen, right? Perhaps not dictating, but if you thought that was a possibility, you would go to another bank. So that they can honour that agreement, they dictate how you store your passkeys, so they can be reasonably sure nobody can use them to steal your deposits. And again, not dictating in an absolute sense - you are free to find another way to safeguard your money.


They might demand whatever they want, but it translates into "I want to control what you can do on your system". Which is basically another DRM-like idea. This should not be viewed as an acceptable approach. Because there is no end to it once they get to tell you what you can or can't do with your own system.


Seems like this hot take comes with a very specific use case in mind. I can see a company wanting fine-grained control over how its employees access their privileged employee accounts. I'm not sure attestation needs to be in the spec for that, but I can see why some companies might want it there. Ideally they would just have the right mix of policies, incentives, and culture to make sure none of their employees are grossly negligent about security.

Their customers' accounts, on the other hand, are a different story. Customers should have the freedom to choose. Companies that try to restrict that freedom should be punished in the market, or, in cases of monopoly, by the FTC. I suppose that doesn't mean it definitely shouldn't be an option in the spec, though.


Even for companies, attestation isn't necessary. If your employer wants to make sure that your VPN passkey is really on a YubiKey, they should generate it on the key for you before they hand it over.
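
And that provisioning flow needs nothing special from the protocol: IT runs the ordinary enrollment ceremony on the key before handing it over. A sketch (all identifiers are placeholders):

    const credential = await navigator.credentials.create({
      publicKey: {
        rp: { id: "vpn.example.com", name: "Example Corp VPN" },
        user: {
          id: new TextEncoder().encode("employee-1234"),
          name: "employee-1234",
          displayName: "Employee 1234",
        },
        challenge: crypto.getRandomValues(new Uint8Array(32)),
        pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
        authenticatorSelection: {
          authenticatorAttachment: "cross-platform", // an external key, not a phone
          residentKey: "required",
          userVerification: "required",
        },
        // No attestation needed: IT physically controls which key gets enrolled.
        attestation: "none",
      },
    });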



