Advancing Our Bet on Asymmetric Cryptography (chromium.org)
181 points by HieronymusBosch 6 months ago | 66 comments



Interesting. It does seem that being more agile in PKI deployment is going to be a requirement in the next few years as we grapple with rolling out a potentially interesting variety of PQ signatures and hybrids.

Especially considering exploding PQ signature and key sizes, this looks increasingly like a data synchronization problem between the server and clients. I wonder if we could kill two birds with one stone by using trust expressions consisting of a set of certificate indexes against a trust store database, instead of trust store versions and exclusion labels. In that model, a trust store is just a centrally managed list where each certificate is assigned a unique 64-bit index.

For example, a client says "I use trust store database XYZ with certificate indexes: <ordered, integer compressed 64-bit index list, maybe a couple hundred bytes>". The server constructs (or pulls a cached copy of) a trust chain from one of the listed roots and sends it to the client. Intermediate certificates may also be stored in the trust store database - and cached on the client. In subsequent requests, the client may include those intermediate indexes in their request, allowing the server to respond with a shorter chain. Clients with an old, long trust chain might have a long first exchange, but after caching intermediates can have a much faster/shorter negotiation. As certificates expire, they are removed from both the trust store database, as well as the client's cache - naturally moving the 'working window' of certificates forward over time.

This shifts a bit of work onto the server, but dramatically reduces complexity on the client. The client just states which certificates it has and which algorithms it supports, and the onus is on the server to return the shortest chain the client can use as a proof.
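
To make that concrete, here's a minimal sketch of the server-side selection step. All names, structures, and the 64-bit index scheme are just my own illustration of the idea above, not anything from the article:

    from __future__ import annotations
    from dataclasses import dataclass

    @dataclass
    class Chain:
        certs: list[int]   # certificate indexes in the shared trust store DB, leaf first
        anchor: int        # 64-bit index of the root/intermediate this chain terminates in

    def select_chain(candidates: list[Chain], client_indexes: set[int]) -> Chain | None:
        """Pick the shortest candidate chain the client can already validate."""
        usable = [c for c in candidates if c.anchor in client_indexes]
        return min(usable, key=lambda c: len(c.certs), default=None)

    # Client trusts root 17 and has cached intermediate 901 from an earlier visit.
    client = {17, 901}
    candidates = [
        Chain(certs=[4001, 901], anchor=17),   # leaf + intermediate, chains up to root 17
        Chain(certs=[4001], anchor=901),       # leaf only, chains to the cached intermediate
    ]
    print(select_chain(candidates, client))    # -> Chain(certs=[4001], anchor=901)

On a first visit the client would only list roots and get the longer chain; once it has cached 901, the next negotiation shrinks, which is the 'working window' effect described above.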


I'm far from being versed enough here, but from an adversarial standpoint, does having the client offer up the trust stores and certificate IDs it holds increase the server's knowledge of aspects of the client we don't want it to know? I.e., could this contribute to fingerprinting or help reconstruct the browsing history of the device?


Yep. If the client gives up its full list of currently cached intermediate certificates it certainly reveals information about its connection history. Clients would probably need to use per-site certificate caching.

But in general your point about fingerprinting is well made. The more negotiation that happens between the client and server, the more data that is available for client fingerprinting and tracking from the server side.


One perpetual source of concern I have is how this will work in practice. NIST has not standardized the algorithms as of yet. NSA has come out in opposition to hybrid schemes (note that NSA is also a big fan of CSfC, which uses two entirely separate layers of crypto, which could be how they end up with a hybrid scheme - one layer classical, one layer PQC. Will they? I have no clue). But this protocol is still a draft.

OpenSSH has chosen its own algorithm - one that, afaik, was on the NIST shortlist for PQC but not a final candidate - and incorporated it. That's not standardized either.

Given that government (which mandates encryption requirements via blunt tools, like saying it will only purchase things that meet its requirements) and industry are pulling in different directions, with industry doing whatever it thinks best without waiting for standardization, it feels like this is going to be a source of headaches to support properly in the future due to the diversity of schemes.

I am actually in favor of what Google / OpenSSH are doing - enabling new things shouldn't break stuff and should be a net positive in their own bubbles - but the government opposition and foot-dragging make this harder.


Everybody is going to use hybrid schemes for the foreseeable future.


Unlikely that signatures will ever be hybrid.

Fairly likely we move off of hybrid for key exchange once NIST finishes standardization.


Geez, turbolaser X-Wing before it even gets in the air. Can y'all just give us a SCW death-match debate on this?

Jokes aside, it would be interesting to have your optimistic take on the current PQ security trajectory. Do you think that it has proven comparably secure to ECC? Or just that by the time PQ primitives are ready to be rolled out they'll be load bearing enough that it is better to use them solo rather than the added overhead/complexity of a hybrid?


I don't see TLS adopting anything other than the current hybrid or a pure Kyber, but Bas would know better than me.

Signatures are very difficult to do hybrid in a way that's not strippable.
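
For anyone wondering what "strippable" means here, a toy sketch of the failure mode (not any real combiner, just an illustration):

    # If the hybrid verifier accepts EITHER component, an attacker who can forge
    # just one algorithm can strip or garbage the other and still pass.
    def verify_hybrid_or(msg, sig_classical, sig_pq, verify_classical, verify_pq):
        return verify_classical(msg, sig_classical) or verify_pq(msg, sig_pq)

    # Requiring BOTH is non-strippable, but then every relying party must support
    # both algorithms, which is exactly the deployment problem for signatures.
    def verify_hybrid_and(msg, sig_classical, sig_pq, verify_classical, verify_pq):
        return verify_classical(msg, sig_classical) and verify_pq(msg, sig_pq)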

I think lattices are in the realm of boring crypto these days, but I ask the actual mathematical cryptographers when I need real opinions.


We're gonna fight about this!


Doubling down on my bet that nobody reading this will live to see a quantum computer that breaks year-2000-era crypto.


It occurs to me that by the year 2000 we had already invented Kerberos, Merkle trees, AES, and McEliece. Kerberos, for one, is built on symmetric cryptography and is, as far as I'm aware, not vulnerable to the sorts of attacks that make RSA, Diffie-Hellman, and EC solvable by a sufficiently advanced quantum computer.
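
Roughly, the standard back-of-the-envelope view (widely quoted figures, not from the article) is that Shor's algorithm breaks the public-key schemes outright, while Grover's only halves the effective strength of symmetric primitives, which is why Kerberos/AES-era designs hold up:

    classical_security_bits = {"RSA-2048": 112, "ECDH P-256": 128, "AES-128": 128, "AES-256": 256}

    def quantum_security_bits(name: str, bits: int) -> int:
        if name.startswith(("RSA", "ECDH")):
            return 0            # Shor's algorithm: broken in polynomial time
        return bits // 2        # Grover's algorithm: roughly a square-root speedup

    for name, bits in classical_security_bits.items():
        print(f"{name}: ~{bits} bits classically, ~{quantum_security_bits(name, bits)} bits vs a quantum attacker")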

How do you define your bet such that you don't just win by default?


Is there a way to put money on this? I'd be willing to put down a fair amount against your prediction.


If they win the bet against you, you'll both be dead by definition of the bet. Therefore the bet doesn't make sense and shouldn't be taken seriously.


Something like https://longbets.org/ maybe? The funds get held by a third party, and go to the winner's choice of charity rather than to the winner.


It could go to the estate.


You could bet in Bitcoin. If the quantum computer doesn't exist, one person gets it. If a quantum computer appears that can break Bitcoin's digital signatures, well, then the Bitcoin is worthless.


For some reason, this reminds me of the ding dong who bet bitcoin would hit a million dollars, immediately lost, then claimed he was the real winner because it proved people were interested in bitcoin.


That bet was the least interesting thing about John McAfee.


I believe he said he would eat a ding dong too.


I'm sure someone smarter than me can come up with a way of making bitcoin quantum-safe with a soft fork.

Of course, the addresses created right now would still be at risk.


Encrypt a file with “2000s era cryptography” (not too sure what that would be). The file contains instructions on how to unlock a sum of money.
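
A minimal sketch of such a challenge using the pyca/cryptography package, taking 1024-bit RSA as a stand-in for 2000s-era crypto (message and key choices are just illustrative): publish the public key and ciphertext, destroy the private key, and whoever recovers the plaintext has, by construction, broken the crypto.

    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=1024)
    public_key = private_key.public_key()

    # OAEP-SHA256 with a 1024-bit key fits a plaintext of up to ~62 bytes.
    ciphertext = public_key.encrypt(
        b"instructions for claiming the escrowed funds go here",
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )

    # Publish these two artifacts, then securely delete private_key.
    print(public_key.public_bytes(serialization.Encoding.PEM,
                                  serialization.PublicFormat.SubjectPublicKeyInfo).decode())
    print(ciphertext.hex())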


Ok, but how do I make money from that?


One of the University of Iowa B schools runs a futures market that'll let you bet on predicted event outcomes

https://iem.uiowa.edu/

https://en.m.wikipedia.org/wiki/Iowa_Electronic_Markets


Problem with long-term bets is that time value eats away at the escrow. But sure, come back in 20 years and we'll settle up at whatever a week's worth of US median salary is then. (For specifics, 2000-era crypto is 1024-bit RSA. I'm not betting against a conventional supercomputer factoring such a modulus, though.)


This ends up being a bet mostly about medical life extension, since drastically longer human lifespans are more likely than quantum computers going anywhere soon.


Hmm... this is about PQ cryptography, while I was expecting a status update on Ed25519 in WebCrypto which, sadly, is still available only via an experimental platform flag: https://caniuse.com/mdn-api_subtlecrypto_verify_ed25519


I just finished the Security Cryptography Whatever episode [0] about this and when Eric Rescorla is going on about how they almost threw the Web PKI overboard for a blockchain but it was too slow I was like, "just use a layer 2 like Lightning! It's fast, like Lightning!" But then they described SCTs and I was like, "well OK, way worse name, but they got there"

[0]: https://securitycryptographywhatever.com/2024/05/25/ekr/


This is a really well written article.


This has been causing a number of issues with proxies. We use nginx, and we have started to see problems with Chrome users and handshakes not completing properly.


Is there an issue tracking entry anywhere for this?



If this lowers performance, can we just turn it off and forget about it?


Inside Google, "Advancing our bet" is a euphemism for shutting something down (hat tip to Fiber). I'm deeply surprised that an article came out with that title where it's actually true, given how negatively that phrase is seen.


It's a joke, because it's about migrating off of pre-quantum asymmetric cryptography.


I remember there was a great 'translation' of that fibre post on Hacker News at the time: https://news.ycombinator.com/item?id=12793033


To tl;dr for people:

- As we've known for years, cryptographically-relevant quantum computers (CRQCs) could wreck digital security pretty massively

- For HTTPS, 2 out of its 3 uses of cryptography are vulnerable to CRQC

- The currently accepted algorithms that fix these vulnerabilities transmit 30+ times the data of current solutions (rough size numbers in the sketch after this list), which under less reliable network conditions (like mobile) can increase latency by as much as 40%

- Because attackers could store data now and decrypt it later with a CRQC, some applications need to deploy a solution now, so Chromium has enabled Kyber (aka ML-KEM) for those willing to accept that cost

- Other algorithms are being worked on to reduce that data size, but at the moment server operators can generally only deploy one certificate, and older clients like smart TVs, kiosks, etc. are unlikely to support the newer algorithms

- So they're advocating for "trust anchor negotiation": letting clients and servers negotiate which certificate to use, allowing servers to be provisioned with multiple at the same time
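
To put rough numbers on the "30+ times" bullet above, here are the published parameter sizes in bytes (standard figures, not taken from the article, so treat them as ballpark):

    sizes = {
        # key exchange: (public key, ciphertext)
        "X25519":     (32, 32),
        "ML-KEM-768": (1184, 1088),
        # signatures: (public key, signature)
        "Ed25519":    (32, 64),
        "ML-DSA-44":  (1312, 2420),
    }
    print("signature blow-up:",    sizes["ML-DSA-44"][1] / sizes["Ed25519"][1])      # ~38x
    print("key-exchange blow-up:", sum(sizes["ML-KEM-768"]) / sum(sizes["X25519"]))  # ~36x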

Honestly, a really impressively written article. I've understood for years the risk that a cryptographically-relevant quantum computer would pose, but I didn't really know/understand what was being done about it, or the current state of things.


Is ”advancing our amazing bet” a nod to the Google Fiber turndown?


[flagged]


Asymmetric cryptography


More specifically, post-quantum cryptography.


That’s a “bet” on something that everyone is sure about, so not much of a bet!


Most people don't believe a cryptographically-relevant quantum computer will ever exist as a real-world engineering artifact.

If the NSA believed it were possible, they would not publicly promote research on post-quantum cryptography; they would quietly crack pre-quantum cryptography instead.


Wikipedia:

> NIST calls its draft standard Module-Lattice-Based Key-Encapsulation Mechanism (ML-KEM). However, at least for Kyber512, there are claims that NIST's security calculations were amiss.


[flagged]


It works without JavaScript for me. Maybe your ad-blocker or uMatrix configuration has a rule that is blocking something unintentionally.


The mobile Firefox Focus tracker blocker basically breaks every Google blog because of the tracking, not the ads. So it's blank for me as well.


The text is in a noscript tag, but noscript tags get blocked because of massive abuse by everyone, including Google - this very page has another noscript tag with a DoubleClick tracker inside it.



[flagged]


That translation makes no sense. There's nothing in there that allows Google to deploy their own private keys and certificate on people's servers.

> This [well protected] certificate must both be issued from a trust hierarchy [which isn't Google and]

You should check out the issuer on the google.com certificate :)

Also, certificates are public data. They're not meant to be "well protected"; they're sent to everybody connecting to the website. Maybe that's the misunderstanding? The private key is not issued by the certificate authority; it's generated by the server's operator. The CA only ever sees the public key.


This seems overengineered. There must be a simpler way to monitor you than this multi-certificate boondoggle, especially since Google already controls the browser endpoint.


>This conflict, in turn, limits [our ability to monitor you] ... We propose to solve this by moving to a multi-certificate deployment model, where servers may be provisioned with multiple certificates [including those Google controls], and automatically send the correct one [i.e. the one Google controls] to each client.

How did you go from "chromium wants servers to support multiple certificates" to "chromium wants servers to support multiple certificates so they can put in their own certificates and MITM you"? Don't cloudflare and other CDNs already have the ability to MITM people given that they control the certificates? Why does google need this elaborate conspiracy to MITM people?


Admittedly it's a stretch, but I can't help but wonder why "Apple policies prevent the Chrome Root Store and corresponding Chrome Certificate Verifier from being used on Chrome for iOS." https://www.chromium.org/Home/chromium-security/root-ca-poli...

I also find it hypocritical that Google wants a multi-certificate architecture when they won't allow sysadmins their own certificate: https://issuetracker.google.com/issues/168169729?pli=1


>but I can't help but wonder why "Apple policies prevent the Chrome Root Store and corresponding Chrome Certificate Verifier from being used on Chrome for iOS."

It's probably a downstream effect of iOS not allowing third-party rendering engines, combined with the Safari WebView API not letting developers swap out the TLS stack.

>I also find it hypocritical that Google wants a multi-certificate architecture when they won't allow sysadmins their own certificate: https://issuetracker.google.com/issues/168169729?pli=1

It really isn't. The linked issue is about allowing root CAs to be added without user involvement, which presents privacy/security issues. Meanwhile, multi-certificate support in this context is motivated by being able to use newer cryptographic algorithms while still supporting legacy devices, e.g. presenting an ECDSA certificate to modern devices but an RSA certificate to decades-old IoT devices that only support RSA. I don't see how that contradicts "we don't want to allow installing CAs behind users' backs because it might be abused to spy on them".
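
As a hypothetical sketch (names invented, not the actual proposal's wire format), the multi-certificate idea boils down to the server matching what the client advertises in its handshake against whatever leaf certificates it has been provisioned with:

    def pick_certificate(client_sig_algs: set, provisioned: dict) -> str:
        # Preference order: the newest algorithm the client supports wins.
        for alg in ("pq-signature", "ecdsa-p256", "rsa-2048"):
            if alg in client_sig_algs and alg in provisioned:
                return provisioned[alg]
        raise ValueError("no mutually supported certificate")

    provisioned = {"ecdsa-p256": "leaf-ecdsa.pem", "rsa-2048": "leaf-rsa.pem"}
    print(pick_certificate({"ecdsa-p256", "rsa-2048"}, provisioned))  # modern client -> ECDSA cert
    print(pick_certificate({"rsa-2048"}, provisioned))                # old IoT device -> RSA cert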


1. Multi-certs: a benefit to all. No reason all browsers should not adopt it. W3C gets behind it.

2. Oh, you don't have a Google cert store cert in your bundle? Why would you not want the best and free cert? Seems suspicious; going to lower your Google rank.

3. Now that everyone has a Google cert, Chrome switches to using only that.

Convoluted, yes, but then you have the exact same setup as today, with Google making all the decisions.


1. Can't they already do this (i.e. favoring sites that have Google-issued certificates, or are hosted on Google's CDN) today? What does supporting multiple certificates have to do with it?

2. Google isn't a transit provider, so they aren't really in a position to MITM traffic.

3. Google already controls the Chrome browser itself. If they want your data they don't have to MITM you; they can just upload it after it's been decrypted by TLS.


1. They do, and get backlash when they do it more openly, e.g. AMP.

2. They already do things like hitting THEIR DNS servers for all links on a page you're reading, "for performance". You have no idea what is in play here if you think "omg Google is going to phish my bank password with a fake login page". You're way out of your depth here.

3. They can't, because the backlash would move people to other browsers. They have to slow-boil you. See point 1.


Thank you for accurately reflecting the typical cynicism on HN.

"If you handed Professor Quirrell a glass that was 90% full, he'd tell you that the 10% empty part proved that no one really cared about water." -- HPMOR


I mean, this mechanism wouldn't make it any harder for a server operator to keep using entirely non-Google cert authorities. I'm pretty cynical about Google these days, but I don't see how this wouldn't be a boon to everybody pretty equally.


As far as I understood it, the tech is already there (Lattice-based algorithms, etc.) but nobody has bothered to deploy it yet.

Probably a similar issue as IPv6.


The tech is only being developed and standardized now. Some of the post-quantum algorithms (like SIKE) have fallen. NIST standardization is ongoing. "nobody has bothered" ignores the fact that this is probably the biggest thing going on in cryptography right now. We're in the comments on a post about how Google is working towards adding support!

And it's not really ready yet, unfortunately: The current post-quantum signature algorithms are too big for our current TLS/TCP/MTU packet sizes, and are going to be a big performance hit.

One of the above post's authors has previously written about the size problem on his own blog: https://dadrian.io/blog/posts/pqc-signatures-2024/ - with comments at https://news.ycombinator.com/item?id=39796349


so any post-quantum crypto requires packets bigger than TCP can handle in a single message?

I just super cynically see Google pushing for QUIC and their other post-TCP visions for an internet even more theirs than it already is.


No that would be silly - TCP handles “infinite” length streams. The problem is that there’s so much extra data for the cryptographic handshake that latency of establishing the connection is meaningfully higher by a lot and performance degrades by a meaningful amount. That’s all.

I don't know why OP brought in MTU and packet sizes, since that doesn't really apply here. The most you could say is that the size exceeds the TCP window, requiring an explicit ACK, but that's unlikely (windows are quite big), and everything I've read only talks about the latency of the handshake being caused by the much larger data exchange needed (i.e., if a TLS handshake requires 32 bytes each way, Kyber and friends need to send about 1 KiB each way [1]).

[1] https://blog.cloudflare.com/post-quantum-for-all/
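
For the curious, the arithmetic behind the "1 KiB each way" figure, using the published parameter sizes (my numbers, not from the linked post):

    X25519_SHARE = 32                                       # bytes, classical share
    MLKEM768_PUBKEY, MLKEM768_CIPHERTEXT = 1184, 1088

    client_keyshare = X25519_SHARE + MLKEM768_PUBKEY        # 1216 bytes in the ClientHello
    server_keyshare = X25519_SHARE + MLKEM768_CIPHERTEXT    # 1120 bytes in the ServerHello
    print(client_keyshare, server_keyshare)

    # TCP carries this fine as a stream, but a ~1.2 KB key share plus the rest of
    # the ClientHello can spill past a ~1500-byte MTU into a second packet, which
    # is presumably why MTU keeps coming up next to the latency numbers.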


> I don’t know why OP brought in MTU and packet sizes since that doesn’t really apply here.

It does apply. TCP exposes streams as the API, but the underlying data is still sliced into packets of size up to the MTU.


regardless of all that, I must now think about the correlation between cryptographic strengths and amount of information necessarily transmitted

is there such a correlation? why? how does it work? i don't even....


I'm not enough of an expert to cut through your confusion. If I recall correctly, the cryptographic signature size has always been tied to the size of the key, which determines the strength: the larger the key, the larger the signature and the more "cryptographic strength". What's new here, aside from the signatures being an order of magnitude larger than before and having different growth factors with increasing key size?


the involvement of quantum computing devices?

which is something else I must admit I cannot really fit together with what I imagine I understand about "classical" computing

but in information theoretic terms does it matter whether you use quantum or typical computers??? I would think that it does not matter but I may be wrong and I couldn't really explain why


The involvement of the quantum computer is only that it's an adversary that can break asymmetric encryption under different complexity constraints than a classical computer. For example, take two random prime numbers as a secret and publish the result of multiplying them. Recovering the two primes from only that product is a hard problem known as integer factorization: if you double the size of the primes, discovering them classically takes exponentially longer. A theoretical quantum computer, though, can do it in polynomial time, so growing the number only increases the work modestly rather than exponentially. The algorithm for doing this is known as Shor's algorithm [0], and because of how complexity works in computer science, it applies mechanically to many, many problems, including the ones underpinning RSA, DSA, and ECDH key exchanges.
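
As a toy illustration of that last step - the classical half of Shor's algorithm, with the period found by brute force here because N is tiny; the quantum computer is only needed to find the period efficiently at real key sizes:

    from math import gcd

    N, a = 15, 7
    r = next(x for x in range(1, N) if pow(a, x, N) == 1)   # period/order of a mod N -> 4
    assert r % 2 == 0
    half = pow(a, r // 2, N)                                # 7^2 mod 15 = 4
    print(gcd(half - 1, N), gcd(half + 1, N))               # 3 5 -- the prime factors of 15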

These quantum-resistant algorithms are based on mathematical problems believed to be exponentially difficult even for a theoretical quantum computer: when you double the size of the problem, it again takes exponentially more time, even on a quantum computer. These are of course unproven beliefs, but that's true of classical algorithms too. So no, it doesn't matter where you run the cryptographic algorithm; it remains computationally difficult to solve the problem without knowing the secret. The quantum computer is critically important, though, for the attacker's ability to crack the classical problems - without it, all of this post-quantum cryptography is unnecessary.

[0] https://en.wikipedia.org/wiki/Shor%27s_algorithm

[1] https://arxiv.org/pdf/2212.12372


Interesting tidbit from the paper: in one of their time-complexity proofs they suggest "ignoring" an O(log(n)^(3/2)) term and treating it as equivalent to O(log(n)), even though strictly speaking this is not correct.



