What is the point of a public key fingerprint? (johndcook.com)
114 points by rolph 11 months ago | 146 comments



Beware of having too-small fingerprint hashes though, or not checking enough of the digits.

    $ echo -n retr0id_662d970782071aa7a038dce6 | sha256sum
    307e0e71a409d2bf67e76c676d81bd0ff87ee228cd8f991714589d0564e6ea9a  -
    
    $ echo -n retr0id_430d19a6c51814d895666635 | sha256sum
    307e0e71a4098e7fb7d72c86cd041a006181c6d8e29882b581d69d0564e6ea9a  -


The interesting thing about that is that while a fairly impressive number of the leading and trailing digits match, a trivial side-by-side glance makes it clear they don't match, even if you don't directly compare any of the digits. I think humans are actually pretty good at this task if they approach it more like humans than machines and just give the two values a quick glance with the intent of spotting differences, rather than doing a nibble-by-nibble comparison.


On the other hand, I initially thought this was a collision until a second glance :-S


Yup. Me too.


My favorite way of comparing "things" is to put them next to each other and then use the exact same technique one uses for viewing those "Magic Eye" stereograms [1]. With that you basically overlay both sides and where it doesn't match, you just see the difference.

[1] https://en.m.wikipedia.org/wiki/Magic_Eye


Interesting. I use `diff`.


Well yeah, if I can copy and paste stuff I do that too :-)


The best such method is dynamic comparison: keep the position of each string the same and just swap the value. This also works for pictures; it's much easier to spot the difference than having to glance back and forth.

Though for this specific case of only needing to check for identity, no human comparison should be involved: if you have no better tools, copy & search to see whether the other one is a match.


True, if you can do a side-by-side comparison! On a mobile device, switching between apps and manually comparing, it would have probably fooled me.


For sure, I often just look at the beginning and end


> a trivial side-by-side comparison glance makes it clear they don't match

A "trivial, side-by-side comparison glance" would lead me to believe the two number strings are the same. If I'm just glancing I'm not going to take care to read each and every character out and compare them, that's not what glancing means.


This is a good point, and by your example it's obviously feasible to find enough of a "collision" on both ends of the digest to make it look like a match at a careless first glance.

In your particular example, I counted at least 96 bits that match on the ends (in total...from 48 bits on each end), and now I'm curious what kind of hardware you used and how long it took to find this match.


If you have the available RAM/SSD for a birthday attack, you only need 48 bits worth of complexity, which makes this a manageable problem. I suppose the ASICs for bitcoin mining don't have the needed bandwidth to storage devices which would put my guess onto GPUs. Assuming 32 bytes of storage per hash (probably more but you wouldn't store the full hash, instead a hash table with the prefix+postfix up to a certain amount, then of course some hash table management overhead), you'd have to write around 9 PB if you write once, 1 petabyte if you write 64 times, for the raw hashes only.
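As a rough sanity check on the storage figure (just back-of-the-envelope shell arithmetic, assuming exactly 2^48 stored hashes at 32 bytes each):

    $ echo $(( (1 << 48) * 32 ))            # bytes: 2^48 hashes * 32 bytes each
    9007199254740992
    $ echo $(( (1 << 48) * 32 / 10**15 ))   # ~ petabytes
    9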


You don't need to store all the hashes, just the Distinguished Points[1]. It took me a couple of days on a single AMD RX6700XT desktop GPU, using some negligible amount of RAM (it was a few GB of main system memory iirc).

[1] https://www.cs.csi.cuny.edu/~zhangx/papers/P_2018_LISAT_Webe...


One way to deal with this is to hash it again. For demonstration purposes I used CRC8, but a longer hash is probably less prone to further collisions:

    307e0e71a409d2bf67e76c676d81bd0ff87ee228cd8f991714589d0564e6ea9a = 0x7B

    307e0e71a4098e7fb7d72c86cd041a006181c6d8e29882b581d69d0564e6ea9a = 0xFC
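If you don't have a CRC calculator handy, any second, independent digest works as the double-check; for example, a quick sketch in the shell using cksum (CRC-32), where the two outputs will almost certainly differ:

    $ echo -n 307e0e71a409d2bf67e76c676d81bd0ff87ee228cd8f991714589d0564e6ea9a | cksum
    $ echo -n 307e0e71a4098e7fb7d72c86cd041a006181c6d8e29882b581d69d0564e6ea9a | cksum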


Um, not sure what you aim to solve here.

You are effectively increasing collision risk by an arbitrary amount by running the hash output as input to a CRC algorithm.

Kids, don't roll your own security.


Before: two hashes that might pass a manual check

After: four hashes that don't pass a manual check

The short ones are there so you can easily spot the collision in the long one, even if the start and the end are the same. If you read just the short ones your criticism applies, but that was not the idea. You read the long one first, then the short one. This is a logical AND, not a logical OR, so it gets harder to make both collide at once (because there are fewer similar-looking values for the first one that produce the same values in the second one, plus your attacker might not know which one you are using).

Maybe if you read my comment, you will also realize I pointed out that CRC8 is the wrong choice for this, but I was on my phone and this was the hash calculator I found first.


I think you misunderstood:

This is if you compare two hashes and they look really similar but you want to be really sure.

Of course a real equality check is better but visual check of original hash + this is better than visual check alone.


Ah, thanks. Yes, I did not understand this properly.


A similar technique is employed by git to get past SHA-1 collision risk, so I wouldn't say it's entirely without merit.


More structure can help here. My fave example:

    9999 9999 9999
    9999 9999 9999
    9999 9999 9999
Now the users will not always pick the beginning or the end when they fail to compare the whole thing. That is because they can easily identify each group to the other person (combinations of top, middle, bottom, left, right, centre).
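For example, a quick shell sketch (assuming a 64-character hex digest) that reformats a fingerprint into such a grid:

    $ echo 307e0e71a409d2bf67e76c676d81bd0ff87ee228cd8f991714589d0564e6ea9a | fold -w4 | paste -d' ' - - - -
    307e 0e71 a409 d2bf
    67e7 6c67 6d81 bd0f
    f87e e228 cd8f 9917
    1458 9d05 64e6 ea9a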


If I randomly check 4 digits sampled uniformly across all positions you won’t have a higher chance than 1/16 to convince me unless you match 128 digits


1/16 is several orders of magnitude too high for comfort. There are only 64 hex digits total, so I'm not sure where 128 comes from?

If you meant bits, the two example hashes I gave already match 184 out of 256 total bits.


Exactly. A 1 in 16 chance to send money to a scammer instead of the intended party is unacceptably high, for example.


The software should check the hashes


The whole point of fingerprinting is to produce something that's feasible to manually verify, as described in the article. If you're comparing in software there's no need to hash first (unless it's part of a certificate chain etc.)


There is a need... if you compare untrusted files with trusted software. At the very least, it should be an additional check, because I'm pretty sure this example could have fooled me... but diff would have had no issues.


Honestly I really wish I could have my shell recognize this situation and highlight differences for me automatically. The number of extra steps and the low number of negatives you get promotes complacency.
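In the meantime, a rough sketch of a manual workaround with standard tools (not the automatic highlighting I'd actually want):

    $ a=307e0e71a409d2bf67e76c676d81bd0ff87ee228cd8f991714589d0564e6ea9a
    $ b=307e0e71a4098e7fb7d72c86cd041a006181c6d8e29882b581d69d0564e6ea9a
    $ [ "$a" = "$b" ] && echo match || echo MISMATCH
    MISMATCH
    $ diff <(fold -w1 <<<"$a") <(fold -w1 <<<"$b")   # lists the positions that differ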


The first time I got on a telegram call using their Android client, the call had a string of 4 emojis and said something along the lines of "if you and the other person see the same emojis, this call is secure". I thought that was pretty neat!


Matrix clients use a similar way to verify login sessions


but they only show a green check-mark... not sure how it works


How would you know what emojis the other person is seeing?


By asking them


I think you'd need to do it out of band though, otherwise you're vulnerable to MITM? (That's the primary attack this is meant to protect against, I think?)


Depends on your threat analysis. If you're not considering an active attacker who has an ability to seamlessly fake voice and/or face to be a realistic attack scenario (e.g. you're guarding against mass surveillance, but not a prepared targeted attack - i.e. you don't find it plausible that someone has prepared actors - human or machine - with quality deepfakes made specially for you and your peer) and you know the other party personally, you can verify in-band, relying on natural biometrics aka your knowledge of one's voice and face. If you do - certainly the verification must be done out-of-band.


I can't imagine what it would be like to actually have to deal with the scenario you described.


Customized deepfake attacks are definitely happening:

https://www.npr.org/2023/03/22/1165448073/voice-clones-ai-sc...

https://www.forbes.com/sites/thomasbrewster/2021/10/14/huge-...

My thinking is, you might as well get in the habit of defending yourself now. The alternative is to monitor how widespread the attack is and only adjust your policy once it becomes "sufficiently" widespread. But I don't think that's even a labor savings, since defending yourself isn't actually that hard.


The man in the middle would need to spoof the voice (with current microphone+environment etc.) in ~real time, probably both parties as well. And with the awareness to detect which words to replace in the middle of a normal conversation.

With the demos we've seen feels absolutely doable, but for now requires quite some effort.


You're talking about an attacker who wants to tamper with communication. Eavesdropping is far easier. [I agree you'll want deepfakes if you want your MITM eavesdropping to handle emoji codes, for mass surveillance at least.]

But even tampering seems pretty easy if the attacker has a more modest objective, of having you and your buddy each talking to one of the attacker's henchmen using voice changers. The emoji verification won't help here -- each henchman just gives the emoji for their respective conversation.


Yes the whole point was how you'd have to deal with the emojis.

I feel it is implied that the latency is low enough (a few 100s of ms) to not impede the conversation, and that the parties have talked before and would notice if the tone of the conversation was completely different. Or maybe I'm misunderstanding.


Yeah that seems right. I guess an attacker could tamper with the call early on, say "hey what are your emojis?" as soon as the call starts. Deepfaked voices and 2 henchmen should be sufficient here, no need for real-time audio rewriting. Then once the emojis have been verified, switch to a pure-eavesdrop MITM.

To defend, could ask to verify emojis at a random point in the middle of the call to make the attacker's life more difficult. Especially right before discussing sensitive information ;-)

Or drip verify over the course of the call, e.g. "what's your 3rd emoji?", and listen for signs of an attacker cutting in and out.


This is when multifactor includes "what you know they know about you" and specialized in-group slang is the real mark of credit.


That doesn't help in a hypothetical where the attacker is doing a passive audio-forwarding ("pure eavesdrop") MITM for everything except emoji verification.


There are four emojis. I reveal the first one, they reveal the second, etc. How many rounds of exchange need to happen to be safe?


If the process you just described were actually secure, then the computers could verify it on their own. The problem is that with an active MITM attack the attacker knows both sets of emojis and can modify the messages to both parties to look like they received the correct ones.


If this is voice chat, in-band might be good enough. There is no technology (yet?) which can recognize, in real time, a spoken emoji description like "weird cucumber with mouth.. wait I think it's an alligator or maybe even a crocodile?" and then retroactively replace it with a different one while keeping the timing correct.


The OLVID App by the French Government asks users to swap a 4-digit PIN. The app is great for OpSec; check it out.


> OLVID App by the French Government

Recently the French Government required its members to use it, but it's made by a French startup, AFAIK.


I don't get it. Can you explain to me why that is, and what the emojis represent?

Is it the projection of some binary string into the Unicode emoji space? (Then you'd need to chunk it and possibly use many emojis.)


I don't know about Telegram, but it is indeed how Matrix does it. Here is the encoding they use: https://spec.matrix.org/v1.8/client-server-api/#sas-method-e...


So, 64 symbols. That same amount of information could be conveyed with lowercase letters, uppercase letters, digits, and two additional symbols, just like Base64. That seems a lot more straightforward than trying to interpret what each emoji represents. To my old eyes, a lot of the chosen animal emojis look really similar. Or take symbol 34 for example, listed there as "spanner" (wrench). Unless I zoom in pretty far, that one looks like a diagonal line.


I read https://spec.matrix.org/v1.8/client-server-api/#sas-hkdf-cal...

So is it the representation in emojis of a server controlled shared secret?

That'd make 2 clients talking to each other through the server vulnerable to tampering at the server level (e.g. MITM).

Shouldn't the 2 clients not involve the server for the secret? This would require each of them being able to access the other's public key fingerprint without trusting what the server says. But if they see each other's fingerprints projected into the Unicode emoji space, they would see different emojis.

I think I may be missing something obvious. I just don't understand this trick.


It’s not a server-controlled secret; it’s a MAC of a shared secret negotiated via ECDH between the two clients. Diffie-Hellman ftw. See https://www.uhoreg.ca/blog/20190514-1146


As far as I know, if the 2 clients talk directly, their IP addresses are exposed to each other.


Essentially, the fingerprint.

Emojis take up 32 bits each. So 4 emojis would be 128 bits.

Of course, this doesn't account for all the 4-byte Unicode combos that don't result in an emoji, but still.


It's just a fingerprint for their chat. It's almost the same as a random four-letter code to verify that you're both seeing the same thing.


This is for calls, not chat. It's probably ZRTP (hash commitment) with emojis instead of letters.


If you see the same emojis as others, this comment is secure: <HN doesn’t support emoji>


(╯°□°)╯︵ ┻━┻


In case anyone doesn't know: these code points are legit (Japanese) characters, whereas so-called emojis are a range of code points specifically for graphics. That's why I imagine HN software allows this to be posted even though emoji code points are removed


I've informally referred to this as "ASCIImoji", but TIL that its proper name is "Kaomoji": https://en.wikipedia.org/wiki/Kaomoji


I've understood a subset of those, but apparently I've totally misunderstood this bit:

> The emphasis on the eyes in this style is reflected in the common usage of emoticons that use only the eyes, e.g. ^^

I always interpreted "^^" as equivalent to "this" or "ditto" -- arrows pointing at the immediately prior message. In hindsight, it could just as easily have been happy eyes!


> I always interpreted "^^" as equivalent to "this" or "ditto" -- arrows pointing at the immediately prior message.

That's... a good thing to be aware of as a heavy ^^ user myself! Thanks for sharing


these were called emoticons for forever

https://en.wikipedia.org/wiki/Emoticon


Aren't kaomoji a subset of emoticons? Most old Western emoticons are turned 90° CCW, whereas kaomoji are mostly horizontal and often use CJK characters.


if you and the other person see the same emojis, this call is secure.

sincerely,

the man in the middle

ps confirming this out of band will not add to your security


You haven't lived until you've manually typed your ed25519 public key into your Hyper-V VM authorized_keys because "Type clipboard text" still doesn't work out of the box...


By the way, if it's the same pubkey you use for github, you can curl https://github.com/username.keys
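e.g. something like this, with your own GitHub username substituted in:

    $ curl -fsSL https://github.com/<username>.keys >> ~/.ssh/authorized_keys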


I think you have to have WSL for this to work, but this is how I do it on Linux with Xorg:

  sleep 3 ; xdotool type --delay 5 "$(xclip -selection c -o)"  ## Force-paste the secondary paste buffer (Ctrl+Shift+c)


Hey, at least it wasn't an RSA key. Though for this reason I just have my public key publicly accessible (for instance, GitHub exposes all of a user's public keys) and just curl it in.


I wonder if that's a situation where you could use a smartphone to mimic a keyboard (over USB or USB-to-PS2 or whatever), using it to "paste" content that way.


There's gotta be some way to do it with autohotkey?


Where I work we often have to exchange keys with partners. The key fingerprint is often used as a mechanism to make sure the partner really has OUR public key. We email the key first, and then another known employee is expected to verify the fingerprint over the phone.

Another approach is to host the keys on a HTTPS endpoint on our official domain name and their servers can fetch it programmatically and rely on TLS to verify that it is indeed our endpoint.
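For example (a sketch with a hypothetical URL; the printed fingerprint can then be cross-checked against the one confirmed over the phone):

    $ curl -fsSL https://keys.example.com/ourcompany.asc | gpg --show-keys --with-fingerprint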


Upon reading that first sentence, I immediately checked your username to see if you're my colleague. We do that as well and, while I think it sensible, I also feel like it's relatively out there compared to most businesses (even for banks and the like).

Now I'm curious, are you willing to disclose what line of work you do?

For me, it's security consultancy (code reviews, penetration tests, network scanning... occasionally physical security tests or other related things, but those three are the bread-and-butter), so new employees get to verify everyone's fingerprint on chat. I've been trying to get people to use key signing for PGP (email) and about half the people get it, but now that Thunderbird dropped support for the Enigmail plugin, it also stopped supporting the web of trust and you just have to go through and verify everyone manually no matter how many signatures a key has from people that you've already verified. They managed to make the PGP experience even worse, which is honestly something that should grant an award


>Another approach is to host the keys on a HTTPS endpoint on our official domain name and their servers can fetch it programmatically and rely on TLS to verify that it is indeed our endpoint.

That's only as secure as the weakest CA in their trust store though, right? https://en.wikipedia.org/wiki/Certificate_authority#CA_compr...

IMO the best way is to put your key fingerprint on your business card and all your promotional materials. Then you just have to ensure that an adversary doesn't tamper with those :-)

(Of course, use of additional verification for the sake of redundancy is great too)

Spreading your Signal phone number is another approach. There was a recent HN thread discussing the merits of GPG vs Signal:

https://news.ycombinator.com/item?id=38557888

https://news.ycombinator.com/item?id=38558231

https://news.ycombinator.com/item?id=38555803


> Spreading your Signal phone number is another approach.

That's... like putting your username on your business card as though that's key material.

If you want to do fingerprint distribution, you should actually publish your Signal key's fingerprint (they call it 'safety number' to keep everyone on their toes). The phone number is your user identifier (like a unique username); the safety number is the key material you're meaning to publish as an alternative to the CA system.


>If you want to do fingerprint distribution, you should actually publish your Signal key's fingerprint (they call it 'safety number' to keep everyone on their toes).

"Each Signal one-to-one chat has a unique safety number that allows you to verify the security of your messages and calls with specific contacts."

https://support.signal.org/hc/en-us/articles/360007060632-Wh...

I don't see how I could publish my safety number if it's unique to each one-on-one chat?

I've been looking at the Signal website, and I don't actually see a way to distribute a fingerprint...


As I said, safety numbers are how they keep everyone on their toes! Can't have it be easy to verify that the Signal servers are honest :) This is why I joke that moxie must be a double agent (I don't think he is, but I find it funny that many of Signal's principles (see also: alt clients; federation; phone numbers; etc.) can be explained that way).

The key material shown in each chat is a concatenation of your fingerprint and their fingerprint, ordered alphabetically so that you are both shown the same thing. By checking two of your chats, you can find out which half is shared (that's yours) and which is unique (that's theirs).

The QR code contains more data, I think your phone number and perhaps a longer/stronger fingerprint (I looked into it once but forgot the details), so that's marginally more secure/foolproof to compare but also even harder to distribute since it'll only ever be valid for one contact


Thanks, this is valuable information. Is it documented anywhere?


What, the source code isn't documentation enough? (jk)

I vaguely remember critique towards PGP coming from Signal's corner of the internet (probably before it was called Signal) for having long-term stable keys and published fingerprints that make it so you want them to be long-term stable for verification purposes. Problem is, I looked for this critique a few months ago and can't find it anymore, so perhaps I'm putting words in their mouth that, instead, came from Signal supporters in a comment thread or so, though I also can't think of any other reason to hide your public key's fingerprint. Regularly swapping out keys protects from temporary key compromise situations, that's simply a fact, but it trades off being able to publish your key somewhere and people being able to use that to not have to trust "the server" (a central key distribution system) in an E2EE application. I have a different opinion than Signal seems to have on which variant is the lesser evil, but I can see why they've made the choice. (Imagine my surprise when finding out that Signal's public keys are long-term stable with indefinite validity.)

So, given that they seemingly don't want people to use their public key fingerprint the way that you can with PGP (printing it on a business card), I am not surprised if there is no user documentation on how to undo the concatenation. I'm not aware of such documentation myself, and it wouldn't be the first time that I have to dive into Signal's source code to find info on already-pushed-to-users functionality.

Let me know if you find any docs, though, because I seem to type out the explanation of how to use signal key fingerprints somewhat regularly (I should store it somewhere in reusable form, yeah) and sending a link with screenshots will be much nicer


Maybe you can write the O'Reilly book on Signal ;-)


> That's only as secure as the weakest CA in their trust store though, right?

You are absolutely right. Not only the weakest CA, but now you needlessly involve a lot of third parties (or at least one third party).

It is not something I like but some of our partners demand it.


No, public keys are meant to be public; either the key will be correct or it won't. This is why (to my knowledge) package managers like apt still use http endpoints instead of https ones.


>No, public keys are meant to be public; either the key will be correct or it won't.

Yeah but how do you know if the key is correct if you're getting it for the first time?

>This is why (to my knowledge) package managers like apt still check http endpoints instead of https ones.

Your distro ships with a public key that lets you verify package signatures. TLS is redundant because you already have that trust anchor which came with the distro. (I would suggest using TLS anyway though, to force an attacker to break 2 layers of security.)


OpenSSH has built-in support for retrieving a key fingerprint over DNSSec-secured DNS. It's disabled by default.

If you enable it, the first connection to a new host will say "matching host key fingerprint found in DNS" if DNSSec is operational AND the retrieved key matches.
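For reference, the moving parts look roughly like this (hostname is illustrative):

    # on the server: print SSHFP records to publish in the DNS zone
    $ ssh-keygen -r host.example.com
    # on the client: have ssh look up and check those records
    $ ssh -o VerifyHostKeyDNS=yes user@host.example.com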


It's a little baffling that anyone would set this up, as the thing it promises is key integrity backed by government-controlled PKI. The impulse is sound! It's a good thing to want! But you can accomplish the same thing, and get other benefits, without forklifting DNSSEC into your zone configuration, by using SSH's certificate system. SSH certificates aren't X.509; they're much simpler.


First, the DNSSec keys are TOFU so only the first connection is attack-able by someone in control of DNS zone root keys, and any other connections set off alarm bells.

Second, DNS lets you connect to someone else's host and get their public key without needing to find their (not your) CA.

Third, if you're not entering IP addresses by hand, DNS forms some part of your system's trust no matter what you do. You're free to pin a domain's DNSSec KSK if you're very worried about someone in control of the whole Internet using that control to trick your first SSH connections.


All SSH keys are TOFU! You can't mitigate the state-controlled PKI thing by saying it's TOFU; you get that property even without DNSSEC!


I don't deny that, I was just pointing out that the threat model in question is someone in control of every DNS record somehow making a change that only affects new SSH connections. Because they control the record that says which KSK is okay for which domain, but presumably don't have your DNSSec private key.

The attack would be delicate, to say the least.

To repeat, the purpose of DNS is for discovering something you don't administer, like the SSH public key of a host or the mailserver for a domain. CA certificates are for securing something you do administer.


And to mitigate sniffing


Yes that's a great point. I wish distros would force all mirrors to be TLS. Security should be the default. If users want to use a fast insecure mirror, they can enable that option at their discretion.


You get security in the current system, just not privacy.


In this case they're related because of supply chain attacks. If an attacker learns that you're using a small obscure package, they might be able to hack the developer's machine and insert a backdoor or bugdoor.


I suspect you could extend that line of logic to always combine privacy and security. Which, granted, is not exactly wrong, but that's just security through obscurity and you really shouldn't rely on it.

(Of course, if it's free then you should strongly consider taking the obscurity, and privacy is a compelling argument all on its own. I just think blurring the line here is iffy.)


I'd argue that privacy and security are unavoidably related. Is keeping your password secret about privacy, security, or "security through obscurity"? Depends which term you want to use :-)

IMO the concept of "security by obscurity" is overused. Ultimately what matters is the cost for an attacker. If you're trying to design a secure system, your system will be stronger if you put it out there for people to criticize instead of keeping the details secret. This argument doesn't really apply to encrypting the packages you use. Security solely through obscurity isn't ideal, but what really matters is the cost/benefit ratio. It's way easier to encrypt your package downloads than it is to read all the source code changes on every package update. (Does anyone even do that?)

I agree with this article: https://danielmiessler.com/p/security-by-obscurity/


I'd even argue that security is a general concept, and for communication, includes the concepts of confidentiality (privacy), authenticity, and integrity.


I thought it was obvious that a fingerprint allows you to verify that you have the correct public key via some alternative channel. So I read the article to find out why I was wrong; perhaps the article detailed an obscure wetware hole in the verification process, or maybe a dramatically better way of verifying public keys.

Nope: the article was a straight answer to the question in the title. Oh, well: it was short and to the point.


Nice write up. I have had two customers who had me install PGP in Apple email, which was fairly painless.

Until ProtonMail, which is what I use, using PGP was a great option. More people should try to encrypt communication.


> Nice write up. I have had two customers who had me install PGP in Apple email, which was fairly painless.

Mail.app supports PGP? How?


I used to use this plugin: https://gpgtools.org/


The free version of this software is available at https://github.com/Free-GPGMail/Free-GPGMail


Wow, I didn't know that existed. Nice!

When I used GPGMail, I didn't mind giving them a few dollars to support it. It's nice to not have to, though.


I can affirm that GPGTools + Mail.app is close to seamless, once the keys are exchanged.


But only if it works with your OS version. It always takes them a few months to support the update. I'm not complaining. It's not like they're a giant company making megabucks to support it. But if you buy a new Mac today with Sonoma pre-installed, you're gonna have to wait a while before you can encrypt your email.


So I guess Apple Mail + Keychain Access is all it takes https://support.apple.com/guide/mail/sign-or-encrypt-emails-...

Pretty neat!

(Edit: not technically PGP, but PKI)


I like that https://github.com/FiloSottile/age has small public keys.


I'm not sold on this argument. Why is a 40-char-long fingerprint better than verifying the last 40 chars of the public key?


Because there is less entropy in 40 characters of public key than there is in 40 characters of cryptographic hash of a public key.


Great answer, but why? Doesn’t that suggest the public key should be suitable for lossless compression to fewer bits?


I think the two existing replies misunderstood your question, but it's a good one. Which is to say, I don't know the answer but I feel like I should!

I'll take a guess.

There are large gaps between good RSA keys. 100 may be a valid key and 138, but not anything in between. Or, well, they're valid but they're trivially broken by having a divisor other than 1, itself, and the huge prime factors (the example of 100 and 138 are not good keys for exactly that reason; finding secure keys is left as an...). That's why we need RSA keys that are more than 256 bits in length: the key space is sparse and an attacker can, with some amount of efficiency, skip over the gaps. (This is all from years-old memories of how RSA works, don't take this for absolute certainty.)

What I'm guessing the answer to your question is, is this: it must be inefficient to reconstruct the key from an indexed form (e.g.: the first good key (100) has index 1, the second good key (138) index 2, etc.) without spending computational power disproportionate to the amount of extra resources that storing/transmitting the full key takes.

Now that I read the other answers again, maybe that's what Dylan meant, but to me that answer seems wrong because the public key is argued to not be uniformly random and that's precisely what compression algorithms are able/made to deal with. Perhaps not as efficiently as indexing can, but still. You wouldn't need to apply it to the prime factors or private key (doing that would, as they say, leak information), just the public part which people were saying is not fully random.


You have to leak information about the key generation to compress it.


Hashing is not compression; it should not be reversible/inflatable.


It's really easy to generate a public RSA key with desired patterns in it. The 1992-era PGP did use the last few bytes of the public key as the identifier, but later versions moved to using a truncated hash (MD5, and later SHA). At some point someone generated colliding keys for all the keys in the public keyring and uploaded them all, which kinda drove the point home.

(I assume that it's harder to generate a public ECDSA key with a specific pattern, but elliptic curve stuff didn't become common until after hashes were used for key identifiers.)


Because of tampering. If an attacker can produce a pair where the public key's last 40 chars match the victim's public key's last 40 chars, they effectively have a public key to dish out via MITM.

How feasible it is to produce said pair is another story.


It doesn't sound too hard to generate an RSA "vanity key", with any value you want for some of the bytes.

You can't control _all_ of the bytes, because it still needs to have the right structure and for you to have the corresponding private key, but 40 bytes of your choosing seems completely doable.

And if you can do that, you can impersonate someone else whose pubkey has the same 40 bytes. With a hash, any bit difference in any part of the key should result in a completely different fingerprint (hash collisions being extremely hard to find).


Note that this is specific to RSA keys. In no scenario (that doesn't involve extraterrestrial resources or perhaps nuclear fusion) can you create an ECC key, let alone a hash fingerprint, with 40 bytes of vanity. Afaik ECC keys are considered to be half the strength of a symmetric key (pre-quantum), so that's 20 fully random bytes or 160 bits of entropy. The sun simply doesn't hit the earth with enough energy even if you'd capture 100% of that and starve all life for it to do a computation of that magnitude. (The boundary is around the standard 128-bit key size iirc; I keep forgetting if the sun's energy would be sufficient for like ~120 bits or more like ~140 bits... and that's assuming perfectly efficient computational machines.)

Since the person you're responding to didn't specify which public key type, and since it's not obvious that your mention of RSA is to the exclusion of other algorithms such as ECC, I felt like the comment is a bit misleading


Others have answered this question pretty well for large public keys, but I wonder about the same argument for shorter elliptic curve keys where the key length and hashed fingerprint length may be the same. For example, is comparing 8 bytes of a Curve25519 public key as good as comparing 8 bytes of a SHA256 hash of that key? My gut says no, since generating partial cryptographic hash collisions is completely random while a structured public key of any kind is presumably less random, but I'm not sure how much less random (and thus easier) it would be.


One possible public key is zeroes + public fingerprint. If I remember correctly, Diffie-Hellman is based on multiplications, so maybe finding the private key is now equivalent to finding the private key of a 40-char public key, which may be doable.

I'm probably wrong on the details here, but there are probably some math tricks you could use to more easily find some private/public key pairs that end with the fingerprint.


I wonder something every time I download Mullvad’s updated VPN client. ‘Verify this download with the GPG signature’, it tells me.

But surely if I’m in a position where I’ve poisoned Mullvad’s executable, I’m also in a position where I’ve doctored the GPG sig to match? That sig just being something that I download from mullvad.net.

Unless it’s somehow independently verified, I’m not sure that I see the point?

(Or anything else similar. Not picking on Mullvad, it’s just the one that comes to mind.)


Ideally, you'd have downloaded the GPG key on first use and check newer updates against that. It's a version of "Trust on First Use". It doesn't protect you against attacks during the initial installation, but it protects you against fake updates.
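Roughly like this (filenames are illustrative, not Mullvad's actual ones):

    # first install: import the signing key once and note its fingerprint
    $ gpg --import vendor-code-signing.asc
    # every later update: verify the download against the already-imported key
    $ gpg --verify AppInstaller.deb.asc AppInstaller.deb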


You've gotta check if they published their GPG signature somewhere else before, either in their social media pages, internet web cache, or forum.


>What you’d really like is a cryptographic hash of the public key, short enough to conveniently compare, but long enough that it would be infeasible for someone to alter the public key in such a way as to produce the same hash. And that’s exactly what a fingerprint is.

This seems overly specific to PGP. x509 (ie. "SSL") certificates have fingerprints as well, but they're almost always expressed in 128+bit formats, not truncated.


X.509 fingerprints cover entire certificates and public keys. Neither PGP nor X.509 "truncates" keys.


Truncating the hash, I mean.


"If I give you my public key, say I post it on my web site, how can you be sure that it's really my key? Maybe my site has been hacked and I don't even know it. Or maybe someone tampers with the site content between the time it leaves my server and arrives at your browser (though TLS is supposed to address that).

We could get on a phone call and I you could read the key back to me. The problem with this approach is that keys are long. A 4096-bit RSA key, for example, encoded in hexadecimal, is 1024 characters long."

If you trust the "phone" and phone numbers but not the "internet" and IP numbers, then why not just use modems to transfer the public key?

Is the assumption that it would be impossible for both the person's website and his phone to be simultaneously compromised?


My favorite solution to this is probably ENS (Ethereum Name Service) - you can link your public key to a domain name, essentially, and it's extremely verifiable and widely adopted.


DNSCurve was putting public keys into NS records before Ethereum existed. The subdomain for the nameserver contains the public key.

See https://dnscurve.org/integration.html

For example,

example.com. IN NS uz5bcx1nh80x1r17q653jf3guywz7cmyh5jv0qjz0unm56lq7rpj8l.example.com.


What does this bring to the table over having TXT records at, e.g., _key.example.com? Why add a bunch of intermediaries?


Trusting a phone call means a whole lot more than trusting your phone's hardware. Calling somebody lets you recognize their voice, mannerisms, personal knowledge, etc. Spoofing that is many degrees more advanced than getting some spyware onto a phone that can intercept keys.


If it's someone the person has never talked to before, how would the person recognise voice, mannerisms, personal knowledge, etc.?

Why might someone trust a phone number more than trusting an IP address? Is the network qualitatively different?

Is it easier to "steal/take over a domain name or a server" than to "steal/take over a phone number or a phone"? What if someone can do both?


When I read that I thought about the ability to replicate someone’s voice using AI. How would you know it’s me on the other end reading you my key over the phone? Then again, you would need a substantial amount of audio from me to accurately train a model of my voice.



If your public key on your website is hacked and your hash is on the website, well, your hash will be hacked too.

If you're going to transmit the hash via a secure side channel, just transmit the public key.

I think the author is confusing secrecy with authenticity. The only way to prove the public key you received is really the intended public key is with a third party certificate authority that authenticates it. Which is what TLS does.

But I will assume the author is smarter than me.

Am I missing something or is this article useless? Just use the same secure channel you'd use for the hash, but for the public key.


Sure you can. A key's fingerprint is just a shorter and more convenient way to reference the key everywhere, including for manual verification. The article says just that, and it's commonly used in the industry.

And, by the way, the signature in X.509 certificates ("TLS certificates") also hashes the data to be signed - the signed data input has to be smaller than the signature size. Hence it also verifies the pubkey indirectly.


Isn't the main problem to secure the distribution of the key?

> Maybe my site has been hacked and I don’t even know it.

How would the fingerprint help in this case? If the fingerprint is also hosted on the website.


`gpg --auto-key-locate` is a thing, except not many actually use DANE for key distribution...

https://en.wikipedia.org/wiki/DNS-based_Authentication_of_Na...


I think one can verify the fingerprint by other means, e.g. phone


Can someone tell me if I have understood this correctly:

A public key thumbprint is a hash of the public key. It is useful because it is easy to manually validate.

Edit: when would I need to manually validate? I'm guessing it is used as an identifier when configuring a deployment of a system to a particular environment, for example?
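For example, these are the kinds of values being eyeballed (the key file and user ID here are illustrative):

    $ ssh-keygen -lf ~/.ssh/id_ed25519.pub    # OpenSSH: SHA-256 fingerprint of a public key
    $ gpg --fingerprint alice@example.com     # OpenPGP: fingerprint of a key in the local keyring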


My first thought is that RSA is outdated: EdDSA has 256-bit keys, and if you're lazy enough you can probably just read the equivalent of 160 of those bits without security loss (I think?).

But then I remembered post-quantum crypto. Yeah, gonna need those fingerprints.


I always wonder why randomart (or an improved version) never seemed to catch on


It's one of those things that looks kind of useful but the more you think about it the more the use case doesn't actually make sense. A randomart image is actually harder to share than a hash (you can't read it over the phone for example) and any situation where you can share the image you can just as easily share a hash.

It's visually easier to distinguish differences in randomart than with a hex encoded hash value, but there are very few situations where that's actually a useful property in practice. If you actually are able to share the whole value, you probably have a computerized way of sharing information that you trust and you might as well just programmatically compare it to what you expect and that can detect even a single bit difference regardless of the format.

A version of the idea that might be more useful would be something that translates hex values into something that's easy for humans to share in any number of out-of-band ways. Translating a hex value into English words would be great for verbal verification for example. Or if we're being very 2023, maybe use the binary input to feed an AI image generation algorithm and you describe the image to the receiver.
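A minimal sketch of the hex-to-words idea, assuming a hypothetical wordlist.txt with one word per line and at least 256 lines:

    digest=307e0e71a409d2bf67e76c676d81bd0ff87ee228cd8f991714589d0564e6ea9a
    for byte in $(fold -w2 <<<"$digest" | head -8); do   # first 8 bytes -> 8 spoken words
        sed -n "$(( 16#$byte + 1 ))p" wordlist.txt       # print line N+1 of the wordlist
    done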


Isn't it much harder to compare a bunch of random characters scattered across a mostly blank field than some sort of number?


>We could get on a phone call and I you could read the key back to me

Length isn't the only problem. In theory, real-time voice MITM with an AI swap on the verification readback such that it matches the modified key is a real possibility these days (albeit remote).


Way more possible today than in the past, but still would be insanely impressive if executed in reality.


By a criminal or as part of a prank, at least. If security agencies turn out to invest the money needed for doing this, or just have a big enough pool of soundalikes (perhaps additionally trained as voice actors), that wouldn't surprise me much, honestly. Also keep in mind that phone compression degrades the quality quite a bit. Still interesting to know about once it's actually happening, but not unexpected.


> We could get on a phone call and I you could read the key back to me.

This makes no sense; the article is not from the '80s, and copy-pasteable modes of immediate communication have reached even grannies!

> Manually verifying 40 characters is feasible; manually verifying 1024 characters is not.

It's not; no user should ever be exposed to this nonsense of manual verification of such long strings.



