Digital signatures and how to avoid them (neilmadden.blog)
278 points by _ikke_ 19 days ago | 82 comments



The author mentions HMAC at the end. I think HMAC is really an underrated technique. I remember reading Colin Percival's classic Cryptographic Right Answers[0] and saw a section about "symmetric signatures." I pondered to myself what scheme I could use for that before I looked at the answer: of course it's just HMAC. I feel like this is another perspective that ought to be more widely known: if you want something to be like a signature, but the two parties (or just a single party at different times) can share a key, HMAC really is the right answer. Things like, a server needs to cryptographically sign a cookie to prevent tampering: that's HMAC. Or a server needs to know an API request is coming from an expected client: that's also HMAC.

[0]: https://www.daemonology.net/blog/2009-06-11-cryptographic-ri...
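
As a minimal sketch of that "symmetric signature" idea in Python's standard library (the key handling and message format here are made up for the example):

  import hashlib, hmac, secrets

  shared_key = secrets.token_bytes(32)   # known to server and client (or one party at two points in time)

  def tag(message: bytes) -> str:
      return hmac.new(shared_key, message, hashlib.sha256).hexdigest()

  def verify(message: bytes, received_tag: str) -> bool:
      # compare_digest does a constant-time comparison
      return hmac.compare_digest(tag(message), received_tag)

  t = tag(b"user_id=42")
  assert verify(b"user_id=42", t)
  assert not verify(b"user_id=1;admin=true", t)   # any tampering invalidates the tag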


More generally, a MAC. You don't necessarily need one based on a hash.

(Unrelated) see also the more recent https://www.latacora.com/blog/2018/04/03/cryptographic-right...


I'd also throw in that HMAC is overrated. It's a workaround for bad hash algorithms that are vulnerable to length-extension attacks.

If you're using a "good" hash algorithm, then MAC-ing is simple: hash over your key and message.
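
For example, with hashes that aren't length-extendable, Python's standard library already gives you both flavours (a sketch; the key is a placeholder):

  import hashlib, hmac

  key = b"\x00" * 32            # placeholder key for the example
  msg = b"some message"

  # SHA-3 is not length-extendable, so a simple prefix MAC is sound:
  tag = hashlib.sha3_256(key + msg).digest()

  # BLAKE2 goes a step further and has a keyed mode built in:
  tag2 = hashlib.blake2b(msg, key=key, digest_size=32).digest()

  # either way, compare tags in constant time
  assert hmac.compare_digest(tag, hashlib.sha3_256(key + msg).digest())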

It's pretty weird that SHA-256 has been king for so long, when SHA-512/256 (which, as I've noticed people don't understand, means SHA-512 truncated to 256 bits) was there from the beginning and is immune from this attack.

Anyway, in general it's a pet peeve of mine that many people so often say "HMAC" when really they just mean MAC.


> It's pretty weird that SHA-256 has been king for so long, when SHA-512/256 (which, as I've noticed people don't understand, means SHA-512 truncated to 256 bits) was there from the beginning and is immune from this attack.

A bit of a tangent, but I didn't know this, so thanks for pointing this out. It's insane to me that there are two SHA hash algorithms that result in a 256-bit string, named nearly identically, where one is vulnerable to a length-extension attack and the other isn't. I had simply assumed that SHA-256 and SHA-512 are the exact same thing except the length of the result. Wouldn't anyone? The length of the result is right there in the name! I mean why does SHA-256 even exist when SHA-512/256 is what we should all use? Why does a single library implement an algorithm that everybody in crypto land, apparently (if you're right), already knew was broken from the start? Give the good one the short name and keep the bad one out of codebases! Come on! Crypto is hard but crypto people keep making it harder and I hate it.


> why does SHA-256 even exist when SHA-512/256 is what we should all use?

SHA-512 is more computationally costly so running that and truncating the result is slower than just running SHA-256. Where performance is key¹ and you have other protection in your protocol that mitigates extension issues, that could be a significant benefit.

IIRC SHA-512 uses 64-bit words throughout rather than the 32-bit words used in SHA-256, so it might actually be faster in software on modern 64-bit architectures, nullifying the above consideration on such platforms; but back when the SHA-2 family was formally specified, 64-bit processing was far, far less common. Also, if you have acceleration for SHA-256 in hardware but not for SHA-512, that flips things back. Hardware support for SHA-256 will be cheaper in silicon than SHA-512.

----

¹ very low CPU power systems, or hashing en masse, even on powerful arrangements


>SHA-512 is more computationally costly

In fact, as you suggested later, SHA-512 is actually much less computationally expensive on 64-bit machines: it has 25% more rounds per block, but each block covers twice as many bytes.

All other things being equal (which they seldom are), you will often see a significant speed improvement with SHA-512 vs. SHA-256 on larger payloads.
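
A quick, rough way to see this on your own machine (hashlib is OpenSSL-backed, so results depend on the hardware and on whether SHA extensions kick in):

  import hashlib, time

  data = b"\x00" * (64 * 1024 * 1024)   # 64 MiB of input

  for name in ("sha256", "sha512"):
      start = time.perf_counter()
      getattr(hashlib, name)(data).hexdigest()
      elapsed = time.perf_counter() - start
      print(f"{name}: {len(data) / elapsed / 1e6:.0f} MB/s")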

Of course, I immediately tried to test this with "openssl speed" on my M1 Mac and SHA-512 is 70% slower, so I guess there's some architectural optimization there.


The answer is: dedicated CPU instructions for SHA-256 vs. a software implementation of SHA-512. For amd64 there's SHA-NI, for Arm there are the crypto extensions, but both only provide SHA-256 (at least when I last looked at their specs).


Can the algorithm benefit from SIMD/AVX-512? Not helpful for ARM Macs (I have one too), but it might be a contributing factor to lower adoption since those instructions aren't as widespread: first consumer chips in ~2017 and first AMD chips in ~2022.


The 32 bit variants are accelerated via SHA-NI on most CPUs, which inverts the performance ranking again, making SHA-256 the fastest common cryptographic hash by far.


I did a quick check on a 2016-era Xeon E5 v4 (AVX2), and sha512 is much faster per openssl speed.


Being "vulnerable" to hash length extension is not a problem for a hash function. It is a problem for a MAC, hence HMAC exists. People confuse both, so SHA-3 competition explicitly requested functions resistant against hash length extension. SHA-256 is a perfectly fine hash function.

And, I don't know how to say it, if you don't know what are the difference between SHA-256 and SHA-512/256 you shouldn't use either. Cryptography really is hard.


Keyed SHA-512/256 would be a design smell. Just use HMAC.


Yes and no. HMAC is very inefficient for short messages, but that inefficiency quickly vanishes into noise for anything over a kB or two. (HKDF and HMAC-DRBG are probably the worst offenders as they are always running HMAC on small inputs).

But, on the other hand, HMAC has repeatedly proven itself to be resilient to all kinds of attacks. I definitely didn’t mean any MAC when I recommended HMAC: eg I don’t think Poly1305 is a good general purpose MAC. PRF maybe, but sometimes you need the MAC to be committing too. Yes, some hash functions can be used with a simple prefix MAC, but then you need to list which specific hash functions to use (and most of those are not yet widely available).


You're pointing out that SOTA hashes like SHA3 and Blake2 aren't length-extendable, which is true, but KMAC is more than simply keyed SHA3; it's also domain-separated.


Ah yes of course in 2018 it's still HMAC.


They published a followup to that article two months ago, and the correct answer in 2024 is still HMAC.

https://www.latacora.com/blog/2024/07/29/crypto-right-answer...


Who's "they"? This "right answers" thing is a meme (I ruefully share responsibility for it) that needs to die; Colin Percival has nothing to do with anything but the first one.


I linked to the older Latacora one upthread and this comment is linking to the newer Latacora one. So I think it's reasonable to read "they" as "Latacora" here.


Yes, I wrote the older Latacora one, which was based on thing I wrote under my own name before I founded Latacora; I'm pretty sure I'm on solid ground saying Colin Percival had nothing to do with anything I wrote, since I wrote the first one as a rebuttal to Colin. (Did I misread you? Maybe we just agree.)


I think we agree. I was only responding to the "Who's 'they'?" bit.


Sorry. I'm touchy about the cursed meme I helped create and also flinching at the idea that anything I wrote might get attributed to Colin. Definitely don't mean to jump down your throat.


Yeah, just in the interests of clarity, somebody linked to the Latacora article “Cryptographic Right Answers” and I’d happened to read the updated Latacora article “Cryptographic Right Answers: Post Quantum Edition” a few hours beforehand, so I linked to it. “They” means Latacora, not Colin Percival.


No problem.


One question I always wondered about with cookie signing is: Why not store the user and the cookie in a database and check against that when they try to present it to you? Performance reasons?


It's mostly about performance. If you can store all the required info about the user inside the cookie then you can avoid a DB query roundtrip before sending a response.

Now that your cookie looks like this (probably also base64 encoded):

  {"id": 42, "display_name": "John", "is_admin": false, "session_end_at":1726819411}
you don't have to hit the DB to display "Hi John" to the user and hide the juicy "Admin" panel. Without HMAC, an attacker could flip the "is_admin" boolean in the cookie.

You could also create a cookie that is just random bytes

  F2x8V0hExbWNMhYMCUqtMrdpSNQb9dwiSiUBId6T3jg
and then store it in a DB table with similar info, but now you would have to query that table for each request. For small sites it doesn't matter much, and if it becomes a problem you can quite easily move that info into a faster key-value store like Redis. And when Redis also becomes too slow, you are forced to move to JSON Web Tokens (JWT), which are just more standardized base64-encoded JSON wrapped with an HMAC, to avoid querying a database for each request.

But even if you are using random bytes as your session identifier, you should still wrap it in an HMAC so that you can drop invalid sessions early, just to make it harder for someone to DDoS your DB.
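
A rough sketch of that last pattern (token and cookie format invented for the example): the HMAC on the session identifier gets checked before the database is ever touched.

  import base64, hashlib, hmac, secrets

  COOKIE_KEY = secrets.token_bytes(32)   # placeholder; a persistent server-side key in practice

  def new_session_cookie() -> str:
      session_id = secrets.token_urlsafe(32)   # the random identifier that goes into the DB
      mac = hmac.new(COOKIE_KEY, session_id.encode(), hashlib.sha256).digest()
      return session_id + "." + base64.urlsafe_b64encode(mac).decode()

  def session_id_if_valid(cookie: str) -> str | None:
      session_id, _, mac_b64 = cookie.rpartition(".")
      expected = hmac.new(COOKIE_KEY, session_id.encode(), hashlib.sha256).digest()
      if not hmac.compare_digest(base64.urlsafe_b64decode(mac_b64), expected):
          return None          # forged or garbage cookie: rejected without a DB query
      return session_id        # only now is it worth looking up in the DB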


Back in the Justin.tv days, we used this for some messages that were passed by the client between two of our systems: The main authentication was done by the web stack which gave the client an HMAC-signed viewing authorization. That was then passed on to the video servers which knew how to check the authorization but weren’t hooked up to the main DB.

Setting things up this way meant that we didn't need to muck about with the video server code whenever we made policy changes, as well as isolating the video system from web stack failures: if the web servers or DB went down, no new viewers could start up a stream, but anyone already watching could continue uninterrupted.


Thanks for the clear explanation; I suspected as much. I wasn't sure, however, if that was all there was to it.


Premature optimisation. We have a diverse set of clients, but of all the ones I've audited with JWT and similar crypto solutions, every one (of those that used sessions at all, unlike e.g. a CDN) could have run on a single database server. Some more comfortably than others, but even embedded devices with a handful of users at most will use cryptographic sessions nowadays. Some also choose to pack a session cookie into the JWT data, and now you've got two things to validate instead of one.

I understand it's nice to never have to worry about it regardless of scale, but generating sessions with an established CSPRNG and being able to invalidate them at will is an order of magnitude simpler. It's also standard and abstracted away for you already if you use any framework



This is how many (most?) session cookies work. Track the data on the backend, only send an identifier to the frontend.

The JWT and similar cookies exist for when you want to do scaling and such. You don't need much more than a user ID and a user name for many pages of a web application, and your database may be on another continent, so you may as well store some variables on the client side. This has the added benefit of letting you put down as many frontends as you need, integrating nicely with technologies like Kubernetes that can spawn more workers if the existing ones get overloaded.

By also encrypting the cookie, you can get rid of most of the backend state management, even for variables that should be hidden from the user, and simply decrypt+mutate+encrypt the cookie passed back and forth with every request, stuffing as many encrypted variables in there as you can make fit.
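
A rough sketch of that round trip, using the Fernet recipe from the Python cryptography package (it encrypts and authenticates; key distribution to the frontends is elided):

  import json
  from cryptography.fernet import Fernet

  key = Fernet.generate_key()     # in practice a long-lived key shared by all frontends
  box = Fernet(key)

  def to_cookie(state: dict) -> bytes:
      return box.encrypt(json.dumps(state).encode())

  def from_cookie(cookie: bytes) -> dict:
      # raises InvalidToken if the cookie was tampered with
      return json.loads(box.decrypt(cookie))

  cookie = to_cookie({"user_id": 42, "cart": ["sku-1", "sku-2"]})
  state = from_cookie(cookie)     # decrypt, mutate, re-encrypt on the next response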

They're also useful for signing in to other websites without the backend needing to do a bunch of callbacks. If a user of website A wants to authenticate with website B, and website B trusts website A, simply verifying the cookie with the public key (and a timestamp, maybe a nonce, etc.) of website A can be enough to prove that the user is logged into website A. You can stuff that cookie into a GET request through a simple redirect, saving you the trouble of setting up security headers on both ends to permit cross-website cookie exchanges.

In most cases, signed cookies are kind of overkill. If all your application has is a single backend, a single database, and a single frontend, just use session cookies. This also helps protect against pitfalls in many common signed cookie variants and their frameworks.


Originally it was about scalability - signed/encrypted cookies are stateless, and hence (in theory) allow easy horizontal elastic scaling: just share the key with the new nodes. But I suspect that in a lot of cases now it is because it is easier initially to throw a key into an environment variable than to stand up a database, sort out caching, etc. It's only later, when you start thinking about revocation and idle timeouts and key rotation and all the other stuff, that it becomes clear it's not that simple to do well.


A bit of a tangent. This isn't a dig on HMAC itself, but using HTTP request body or query string as the HMAC "message" is the worst. My employer provides some APIs with that sort of scheme and it's a very common source of technical customer support tickets.

The problem is that many people are using web frameworks that automatically turn the body and query into some kind of hash map data structure. So when you tell them "use the request body as the HMAC message", they go "OK, message = JSON.stringify(request.body)", and then it's up to fate whether or not their runtime produces the exact same JSON as yours. Adding "YOU MUST USE THE RAW REQUEST BODY" to the docs doesn't seem to work. We've even had customers outright refuse to do so after we ask them to in the "why are my verifications failing" ticket. And good luck if it's a large/enterprise customer. Get ready to have two different serialization routines: one for the general populace, and one for the very large customer that wrote their integration years ago, where you only now found out that their runtime preserves "&" inside JSON strings but yours escapes it.
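
For what it's worth, the only verification that holds up is the one computed over the exact bytes that arrived on the wire, before any framework parses them (a sketch; the header name and key handling are invented):

  import hashlib, hmac

  SHARED_SECRET = b"per-customer shared secret"   # placeholder

  def verify_request(raw_body: bytes, signature_header: str) -> bool:
      # raw_body must be the received bytes, NOT json.dumps(parsed_body)
      expected = hmac.new(SHARED_SECRET, raw_body, hashlib.sha256).hexdigest()
      return hmac.compare_digest(expected, signature_header)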

Rant over...


> "&" inside JSON strings but yours escapes it

What escaping of "&" inside JSON are you talking about? Some unholy mix of JSON and urlencode?


Ruby on Rails turns "&" into "\u0026".

See rails/rails, activesupport/lib/active_support/json/encoding.rb.


Shitty software developers will always find ways to screw things up unfortunately


I think AWS SigV4 tries (and succeeds?) at solving this issue.


Are things like Diffie Hellman generally available such that you can always get a symmetric key? Or is that a special case?


I'm no cryptographer, but I would say that it is indeed the case that you can assume that two parties can derive a shared key over an untrusted channel. The post Cryptography Right Answers PQ [1], linked in another comment, addresses this in the section "Key Exchange". Rather than thinking about Diffie-Hellman directly, you would turn to a Key Encapsulation Mechanism (KEM).

Before post-quantum cryptography concerns, KEMs were indeed mostly built on top of Diffie-Hellman key agreement, but you could also build one on top of RSA, or on top of some lattice constructs. But you wouldn't build one yourself; there are good constructions to choose from! The OP actually has a 3-part series on KEMs, although I don't think it addresses post-quantum issues [2].
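
As a concrete (pre-quantum) sketch of the "derive a shared key over an untrusted channel" part, using the Python cryptography package; note that this on its own is unauthenticated, so an active attacker in the middle can substitute keys:

  from cryptography.hazmat.primitives import hashes
  from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
  from cryptography.hazmat.primitives.kdf.hkdf import HKDF

  # Each side generates a key pair and sends the public half over the untrusted channel.
  alice_priv = X25519PrivateKey.generate()
  bob_priv = X25519PrivateKey.generate()

  # Each side combines its own private key with the other's public key...
  alice_shared = alice_priv.exchange(bob_priv.public_key())
  bob_shared = bob_priv.exchange(alice_priv.public_key())

  # ...and feeds the raw DH output through a KDF to get the symmetric key.
  def derive(shared: bytes) -> bytes:
      return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                  info=b"example symmetric key").derive(shared)

  assert derive(alice_shared) == derive(bob_shared)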

[1]: https://www.latacora.com/blog/2024/07/29/crypto-right-answer...
[2]: https://neilmadden.blog/2021/01/22/hybrid-encryption-and-the...


Just want to point out that the article specifically says to use an authenticated KEM (AKEM). A normal, unauthenticated KEM would not work as it provides no authentication. There are no post-quantum authenticated KEMs as yet.


There are post-quantum KEMs, though, that authenticate with a classical mechanism, which limits quantum attacks to interactive ones, rather than the previous total breakage of recorded ciphertext exchanges (e.g. a Wireshark capture at a router that sees both directions of the traffic flow).


Are there? I’ve advocated for such constructions in the past, but I’ve never seen an actual proposal. Do you have a link?


Google's post-quantum TLS experiments, done in public via Android Chrome, are such. Basically you do a normal TLS handshake but stack the key derivation from the traditional DH-type perfect-forward-secrecy exchange with a post-quantum perfect-forward-secrecy exchange, all sealed under the same handshake authentication, and you make sure to only use post-quantum symmetric primitives to fuse the traditional session key material with the PQ session key material, so that you don't rely on either one's resistance to keep your secrets secret.

Sorry I don't have a link quite on hand right now.


OK, sure. As far as I'm aware, nobody's actually turned that into an AKEM proposal though. (I wish they would, as I think many applications would be fine with pre-quantum authentication and post-quantum confidentiality).

One thing to note about authentication in DH-like systems is that you can derive a symmetric key without authenticating the parties, establish a secure (but unauthenticated) channel with the resulting symmetric key(s), and then do authentication inside that channel in a way that will only succeed if the symmetric key used by both parties is the same (this is called channel binding). For example SSH2 and many Active Directory related protocols do this.


DH + HMAC on its own doesn't give you authentication, anyone can establish a symmetric key. It's possible to build authentication on top but it requires pre-shared data or PKI.


The way DH is used typically for encryption (ECIES) or in TLS doesn’t give you authentication. But you can get authentication from DH alone, without PSK or PKI. See https://neilmadden.blog/2021/04/08/from-kems-to-protocols/ for some details on the security properties of various types of DH.


I meant that some data still needs to be distributed securely, just that it's the sender's public key rather than a PSK. I reckon "pre-shared data" was not the best choice of words...

(Still love the blog post!)


Ok, makes sense.


Yeah, if you have a shared secret, HMAC is the way to go.

It's also super simple: it's almost literally just concatenating the secret and the message you want to authenticate together and taking an ordinary hash (like SHA-256) of that; the rest of it is just to deal with padding.

It's super intuitive how HMAC works: if you just mash secret and message together on your side and get the same answer as what the other side told you, then you know that the other side had the secret key (and exactly this message), because there's obviously no way to go from the SHA-256 output back to the input.

HMAC is also useful if you want to derive new secret keys from other secret keys. Take an HMAC with the secret key and an arbitrary string, you get a new secret key. The other side can do the same thing. Here's the kicker, the arbitrary string does not have to be secret to anyone, it can be completely public!

Why would you do that? Well, maybe you want the derived key to have a different lifetime and scope. A "less trusted" component could be given this derived key to do its job without having to know the super-secret key it was derived from (which could be used to derive other keys for other components, or directly HMAC or decrypt other stuff).
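
In code, the derivation step is a one-liner per component (the labels are made up; HKDF is essentially a standardized, multi-step version of this idea):

  import hashlib, hmac

  master_key = b"\x01" * 32     # the super-secret key (placeholder)

  # The "arbitrary string" can be public; different labels give independent subkeys.
  cookie_key = hmac.new(master_key, b"cookie-signing/v1", hashlib.sha256).digest()
  upload_key = hmac.new(master_key, b"upload-service/v1", hashlib.sha256).digest()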


> It's also super simple: It's almost literally just concatenating the secret and the message you want to authenticate together, and take an ordinary hash (like SHA256) of that, the rest of it is just to deal with padding.

It's not quite as simple as that. The output of the first hash is hashed a second time (to prevent length extension attacks).
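
Spelled out (for illustration only; in practice use the hmac module from your standard library), HMAC-SHA-256 looks like this:

  import hashlib, hmac

  def hmac_sha256(key: bytes, msg: bytes) -> bytes:
      block = 64                                    # SHA-256 block size in bytes
      if len(key) > block:
          key = hashlib.sha256(key).digest()        # long keys are hashed down first
      key = key.ljust(block, b"\x00")               # then zero-padded to one block
      ipad = bytes(b ^ 0x36 for b in key)
      opad = bytes(b ^ 0x5C for b in key)
      inner = hashlib.sha256(ipad + msg).digest()   # first hash, over padded key + message
      return hashlib.sha256(opad + inner).digest()  # second hash blocks length extension

  assert hmac_sha256(b"k", b"m") == hmac.new(b"k", b"m", hashlib.sha256).digest()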


Thanks, forgot to mention that. Needless to say, I always consult real cryptographers when working on stuff like that.


Do you ever need to implement an HMAC from scratch? I'd look for an off-the-shelf solution before trying to find a cryptographer.


I don't, and I absolutely did not mean to imply that anyone should implement HMAC themselves. I was addressing people who want to potentially use HMAC (after proper consultation with cryptographers), for which a general understanding of HMAC is prerequisite. Hence why my original comment only described implementation on a surface level, but elaborated over potential uses for HMAC.

Only cryptographers should implement crypto primitives. Even if I got the algorithm itself right, I might not know how to make it run in constant time (which also depends on what the CPU is able to guarantee), and thus might inadvertently leak secrets through side channels.

But even if I just use HMAC, I still consult with cryptographers to make sure my use is correct, that there is no better solution, and that I am not missing any attack vectors.

Even in simple cases it can be a grave mistake to use seemingly simple crypto primitives without proper consultation, see for example some of the very prominent problems that were the result of improper IV usage with AES.


And this is why I have come to love AWS SigV4.


This article is very relevant in the context of the EU Digital Identity Wallet, and digital credentials in general, such as ISO/IEC 18013-5 mobile driver licenses and other mdocs.

We may accidentally end up with non-repudiation of attribute presentation, thinking that this increases assurance for the parties involved in a transaction. The legal framework is not designed for this and insufficiently protects the credential subject, for example.

Instead, the high assurance use cases should complement digital credentials (with plausible deniability of past presentations) with qualified e-signatures and e-seals. For these, the EU for example does provide a legal framework that protects both the relying party and the signer.


Isn't non-repudiation something we want for cases like this? If e.g. a car rental place checks your driving license before renting you a car, and then you get into a crash, no-one wants you to be able to claim that you never showed them your driving license and they never checked.


To prove that the car rental company has seen the driver licence, they just need to show the judge a copy of the licence which is e-sealed by its issuing authority. No need to include a non-repudiable proof-of-possession signature of the holder. Having that in addition would just introduce legal ambiguity and information asymmetry to the disadvantage of the holder.

The opponent may still claim that the car rental place is showing a copy that was obtained illegally, and not in holder presentation. To avoid such a claim, the car rental company should ask for a qualified e-signature before providing the car key. The signed data can include any relevant claims that both parties confirm as part of the transaction. To provide similar assurance to the customer, the company should counter-sign that document, or provide it pre-sealed if it is an automated process.

Note that with the EU Digital Identity, creating qualified e-signatures is just as easy as presenting digital credentials.


Getting the parties at the desk, or the people commissioning enterprise IT systems, to understand this is going to be a serious uphill struggle. Especially in places that are used to photocopying your ID.


The liquor store owner that scans the barcode on your driver's license "does not understand" and does not care to understand. Yet an opening to your entire life has just been handed to some low-level cog. The correct answer, for the wonks out there, is: scan the ID to answer the question "is this person over the legal drinking age", and store only the YES or NO. Similar situations, in different contexts, abound.


I mean it's not a super big deal if the EU identity private key leaks in some arcane attack or if someone steals it the normal way, you can just cancel it and order a new one like a credit card. It expires every two years I think anyway.

This reminds me of a specific number that Americans have to give in plain text as proof of digital identity that they only get one of and can't change it ever. Lol.


That doesn’t matter. The claim being made by the grandparent post is that the legal system isn’t well-equipped to deal with scenarios like, “yes the digital signature is valid but it was improperly authorized.”


The legal system also isn't well equipped to deal with the conceptually roughly equal case of someone stealing your car and running people over with it, but it deals with it anyway.


> This reminds me of a specific number that Americans have to give in plain text as proof of digital identity that they only get one of and can't change it ever. Lol.

You can get up to ten replacements of your card in your lifetime. They do all have the same number though.

[1] https://secure.ssa.gov/poms.nsf/lnx/0110205400


Well, at least you can laminate it


Can you go into a bit more detail on what you see as the problem in non-repudiation of presentation?


Non-repudiation of commitments to a transaction can be good when both parties want to avoid later disputes about the authenticity of these commitments. It requires careful design of the data to be signed, the public key certificates, the policies governing the signature creation and validation processes, and the signature formats to enable validation as long as needed.

Attribute presentation is not designed for this feature. When attribute presentation becomes non-repudiable, it creates legal uncertainty:

1. In court, the verifier may now present the proof of possession as evidence. But this is, at least in the EU, not recognised by default as an e-signature. It is yet unknown if it would be interpreted as such by a court. So the verifier keeps a risk that will be difficult for them to assess.

2. Even if it would be recognised as evidence, the holder may argue that it is a replay of a presentation made in another transaction. Presentation protocols are not designed for timestamp assurance towards third parties, and generally do not include verifiable transaction information.

3. The verifier may protect itself by audit-logging attribute presentation input and output along with publicly verifiable timestamps and verifiable transaction information, and by editing its terms and conditions to claim a priori non-repudiation of any presentation. Typically such a solution would not create the same evidence files at the holder’s side. So the holder would not be able to present as strong evidence in court as the verifier. (This asymmetry aspect needs some more elaboration.)

Non-repudiation is well arranged in EU law for e-signatures. If anyone would want the same for attribute presentation, this should involve changes in law. As far as I can see, non-repudiation is now opportunistically being considered in mDL/EUDI just from an isolated technical perspective.


Another issue with non-repudiation of presentation is that it encourages relying parties to log full transcripts of presentation interactions. It could also encourage supervisory bodies to request such over-complete logging. Instead, relying parties should log the minimum necessary, to avoid developing a honeypot of personal data.


Can you make the comparison to the German eID (notable for its unusually extensive privacy-preserving tactics)?


I’m not sure what the German eID uses today, but the German architecture team has explored KEM+MAC for the EU Digital Identity Wallet. Maybe its eID is similar. You can apply KEM+MAC at either or both of two points:

1. plausible deniability of the document’s issuer seal

2. plausible deniability of having presented the document

The second is great for legal certainty for the user. The first has problems. It would be incompatible with qualified e-sealing; stakeholders have no evidence if issuer integrity was compromised.

Also, it would mean that issuance happens under user control, during presentation to a relying party. In a fully decentralised wallet architecture, this means including the trusted issuer KEM key pair on the user’s phone. Compromising the issuance process, for example by extracting the trusted issuer KEM key pair, could enable the attacker to impersonate all German natural persons online.

The advantage would have been that the authenticity of the content of stolen documents could be denied. This potentially makes it less interesting to steal a pile of issued documents and sell them illegally. But how much would illegal buyers really value qualified authenticity e-seals on leaked personal data?


To me, DKIM doesn't prove that the user john.smith@gmail.com sent that email. It proves that gmail.com sent it.

I'd avoid trusting FAANGs in courts when the fate of political leaders is at stake.


This is exactly what DKIM means, and this is why it has wide adoption, while S/MIME and PGP-signed mail remain relegated to niche uses.

The entire purpose of DKIM is not to prove that the individual behind john.smith@gmail.com sent the message, but that a legitimate server owned and operated by the entity behind gmail.com sent the message. It's mostly there to reduce spam and phishing, not to ensure end-to-end communication integrity.

This has nothing to do with the particular companies involved nor their particular trustworthiness.


Your last sentence kinda contradicts the fact that the company Google operates the server behind gmail.com.

If Google was evil (but in reality it's not), it could have forged and signed an email from john.smith@gmail.com with valid DKIM, sent on other mail servers or not (since we're talking about leaked emails, we just need a file), when in reality the Google user john.smith@gmail.com never sent that email. To me, John Smith could have plausible deniability in court, depending on whether everyone trusts Google to be 100% reliable. If the stakes are higher than what the company would risk losing if found to have forged the email, what's stopping them?


Google could forge the email without DKIM (DMARC only requires one of SPF or DKIM to succeed, not both). While DKIM gives high confidence that the email came from Google, neither its presence nor absence says anything about John Smith.


Yes, the "D" is for domain.


I am a user of, but not an expert in, cryptography, and I find the title of the article to be bait and switch. A more accurate title would be "Pitfalls of Using Digital Signatures and Possible Alternatives".


> As well as authenticating a message, they also provide third-party verifiability and (part of) non-repudiation.

I think digital signatures and third-party verification are an incredibly useful feature. The ability to prove you received some data from some third party lets you prove things about yourself, and enables better data privacy long-term, especially when you have selective disclosure combined with zero-knowledge proofs. See: https://www.andrewclu.com/sign-everything -- the ability to make all your data self-sovereign and selectively prove data to the outside world (i.e. prove I'm over 18 without showing my whole passport) can be extremely beneficial, especially as we move towards a world of AI-generated content where provenance proofs can prove content origin to third parties. You're right that post-quantum signature research is still in progress, but I suspect that until quantum supremacy it's still useful (and by then I hope we'll have fast and small post-quantum signature schemes).

EU's digital signatures let you do this for your IDs and https://www.openpassport.app/ lets you do this for any country passport, but imagine you could do this for all your social media data, personal info, and login details. we could have full selective privacy online, but only if everyone uses digital signatures instead of HMACs.


I already successfully used EU digital signatures through lex [1], but neither openpassport [2] nor withpersona / linkedin [3] supports the EU's new (2019+) identity cards, only passports.

[1] https://lex.community/
[2] https://github.com/zk-passport/openpassport/issues/126
[3] https://www.linkedin.com/help/linkedin/answer/a1631613


The article's point is that these properties are not always desirable. Imagine someone messages a friend via an app, and mentions that they're gay, in a country where it's illegal. If the app uses signatures they can't deny that they sent the message. If it's based on key agreement (like Signal), then either party could have faked the exchange, so there's at least some deniability.


I have been trying to think of ways we could leverage digital signatures to prove that something isn't AI-generated, and it's really a fascinating topic to think about. It's hard to avoid making the leap from "this isn't AI generated" to "this exact person made this." Then there's the issue of ensuring that a person doesn't make ChatGPT write something, then copy and paste it somewhere else and sign it.

If anything, the hardest part of making an anti-AI proof system is ensuring people don't lie and abuse it.


https://youtube.com/watch?v=1FuNLDVJJ_c

This talk from Real World Cryptography 2024 is probably a good place to start.


Slightly off topic:

In school I only took one cryptography class (it was bundled with networking, at that), and to this day I still think it contained some of the most amazing concepts I've ever learned, public-key cryptography being on the short list along with cryptographic hash functions. Maybe it's my particular bias, or maybe cryptography has just attracted some of the most creative geniuses of the 20th century.


What I find the most mind-blowing is that you can do a Diffie-Hellman-Merkle key exchange in a room full of people where everyone can hear you, yet the two participants are the only ones in possession of the encryption key after exchanging three messages (one to establish the parameters, one to convey a computed number in each direction). The math is simple enough to do by hand, at least as a demonstration (of course, you can't compute numbers in your head that are so large that a computer, doing billions of calculations per second, couldn't simply iterate over all possible values and break it; but the principle works).
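
The toy version, with numbers small enough to do on paper (real parameters are of course vastly larger):

  p, g = 23, 5                 # public parameters that everyone in the room hears
  a, b = 6, 15                 # private exponents, never spoken aloud
  A = pow(g, a, p)             # Alice announces 8
  B = pow(g, b, p)             # Bob announces 19
  assert pow(B, a, p) == pow(A, b, p) == 2   # both arrive at the shared secret, 2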

Didn't get this in school unfortunately. They made us implement DES S-boxes iirc and Caesar cipher breaking... all very relevant and foundational knowledge for non-mathematicians who will never design a secure cipher



