Keybase.io (keybase.io)
409 points by andybons on Feb 13, 2014 | hide | past | favorite | 121 comments

Hi everyone, Chris here, I've been working with Max on Keybase. I can't help but feel this ended up getting scooped a bit early. (Crap!) Not a surprise, because HN is quick.

The alpha site's changing every day, and we're working on the documentation now. I don't use the term "alpha" loosely. There will be extensive security details published, explaining every aspect of the identity proof system, client sessions, etc. They will be on the site before we open general access or turn beta. Right now only a few friends are on there. All that said, Max and I can answer questions here.

My profile on the site is https://keybase.io/chris if anyone wants to look. My profile demonstrates early examples of how identity proofs will work, including both twitter and github. We'll of course be adding other public identities in the future.

The site design is also very iffy at the moment; I'm planning to move on to Firefox bugs tomorrow.

There were multiple questions/comments below about this, so I felt I should clarify one detail about the keybase client's trust of the server. When the keybase client requests maria's key from the keybase server, it does not simply trust the public key because it trusts the server (or uses https - huh?).

Rather, the server replies with links to tweets, gists, etc. -- maria's public identity proofs. The keybase client does not trust that these are honest, so it scrapes them directly and makes sure they were signed by the same public key that the server provided. In other words, the server could reply with a different maria, and simply lie, but not with the real maria's github or twitter account.

The server could also lie by omission, leaving out an identity. But it cannot invent ones that do not exist, without the client knowing.
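The check described above can be sketched roughly like this. This is a minimal illustration, not Keybase's actual protocol: the function and field names are hypothetical, and a hash stands in for a real PGP key fingerprint.

```python
import hashlib

def fingerprint(pubkey: str) -> str:
    """Stand-in for a PGP key fingerprint: a SHA-256 digest of the key material."""
    return hashlib.sha256(pubkey.encode()).hexdigest()

def verify_identity(server_reply: dict, scrape) -> bool:
    """Trust the server's key only if every scraped proof was signed by it.

    `scrape` stands in for fetching the tweet/gist directly from the
    third-party service and extracting the fingerprint it asserts.
    """
    claimed_fp = fingerprint(server_reply["public_key"])
    for proof_url in server_reply["proofs"]:
        if scrape(proof_url) != claimed_fp:
            return False  # server lied: the proof was signed by a different key
    return True

# Simulated world: maria's real gist asserts her real key's fingerprint.
maria_key = "maria-public-key-material"
proofs_on_the_web = {"https://gist.github.com/23423": fingerprint(maria_key)}

honest_reply = {"public_key": maria_key,
                "proofs": ["https://gist.github.com/23423"]}
lying_reply = {"public_key": "attacker-key",
               "proofs": ["https://gist.github.com/23423"]}

print(verify_identity(honest_reply, proofs_on_the_web.get))  # True
print(verify_identity(lying_reply, proofs_on_the_web.get))   # False
```

The key point: the server's answer is cross-checked against proofs it does not control, so substituting a different key is detectable.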

Again, the premise here is that maria is the sum of her online identities.

The website itself is of course a different story. When you look up maria on keybase's website, you are trusting that keybase.io did not lie about her github account. Fortunately you can confirm by following the link to her gist, where she announced her keybase username and posted her key fingerprint.

I don't see why you don't just get the key once, let the user verify it, and store it locally. It seems pointless to make all these extra requests to your server.

There's a reason that gpg does this: Maria's twitter being hacked, Maria's github being hacked, Maria's Keybase being hacked... a lot can go wrong.

There are still weaknesses, like lying about Maria's github, linking to your own github instead, and lying about the public key. And many others.

I don't see how this is any better than a keyserver plus confirming the GPG fingerprint by some other means. Not knowing someone and trusting that their fingerprint is right based on some third party is very sketchy, because it doesn't use a trustworthy, authoritative source (the other person).

Also, WoT works best when people meet other people they trust in person and sign each other's keys as the GNU/Linux community encourages. https://www.kernel.org/signature.html Then it's possible to get other people's keys elsewhere on the planet and know they're probably good given they're signed by someone you trust.

What the GNU/Linux community encourages is clearly not proving useful for getting lots of people to use PGP.

Then let's make apps that i) explain how it works, with pictures, and ii) exchange keys more easily: say, share key IDs with barcodes.

GPG mail plugins that popup a barcode that someone else can scan with their laptop's webcam or some mobile app.

GPG mail plugins should also have a search toolbar that can quickly get a key so it can be verified.

Mobile platforms must support GPG natively; many only support S/MIME.

yes, it does do this; once you're satisfied with maria's identity, that she's the person you want, you sign a statement to that effect, which you can store just locally or post back to the server. (or of course you can just sign her key in GPG!) The latter - posting back to the server - is for portability reasons. A keybase user will likely use keybase on multiple machines.

The point of SKS is signing each other's keys and being distributed. This just fragments things into a SPoF service without making the existing ones better.

Perhaps I don't understand the whole keyserver concept... But how is a keyserver not a centralised "IdP" like construct?

PGP keyservers talk to each other; if you send your key to the GnuPG keyserver, it'll end up on MIT's keyserver pretty soon.

Thanks. The WoT depends on not trusting the keyservers, but trusting that humans on the other end know whom to trust and get them to countersign each other's keys.

SSL CAs are to GnuPG (GPG/PGP) as Subversion is to Git.

The first thing I thought about is a man in the middle attack with homoglyphs. I don't know if I'm paranoid, but look at this

    > keybase id maria
    pgp:     C4B3 15B4 7154 5281 5100 1C58 C2A5 977B 0022
    github:  mаria_leah   ✓ https://gist.github.com/23423
    twitter: mаria_h20    ✓ https://t.co/mаria_h20/523554
    site:    mаriah20.com ✓ https://mаriah20.com/X904F...
I looked up 'maria', all ASCII. The answer, served by a malicious server, contains the first 'a' of maria in Cyrillic (check yourself, you'll see that 'mаria_leah' != 'maria_leah'). This would fool the user.

Maybe the client should apply some logic, as browsers do for IDN homograph attacks, to display characters not in your locale differently, or at least warn you.
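The warning suggested above is cheap to implement. A minimal sketch: flag any character outside a conservative allowed set, which catches the Cyrillic 'а' (U+0430) that looks identical to Latin 'a'.

```python
def suspicious_chars(username: str) -> list[str]:
    """Return every character outside the conservative [A-Za-z0-9_-] set.

    A Cyrillic 'а' (U+0430) renders identically to Latin 'a' but is
    flagged here, which is exactly the homoglyph warning discussed above.
    """
    allowed = set("abcdefghijklmnopqrstuvwxyz"
                  "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
                  "0123456789-_")
    return [c for c in username if c not in allowed]

print(suspicious_chars("maria_leah"))       # [] -- clean
print(suspicious_chars("m\u0430ria_leah"))  # ['а'] -- Cyrillic lookalike
```

A real client would probably use Unicode confusables data rather than a whitelist, but for services whose usernames are ASCII-only this check alone is sufficient.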

Hi riquito - this is a very legitimate concern, and it has to be reviewed individually, in the client, for each type of proof keybase supports. With twitter, keybase, and github, a username can't contain any character other than alphanumerics, dashes, or underscores, which makes this kind of attack impossible.

But for future identity proofs (domains, for example, which we've yet to implement), this kind of attack is real. Our approach here will be that anything outside of normal ascii will be highlighted and addressed to the user, as a serious warning.

This is good news. Thank you and good work!

I wrote "man in the middle" attack, but that's wrong, since the connection is over https. The point is still valid if the server is compromised (or managed by evil people).

Even though there is a disclaimer, I think the "encrypt in your browser" feature (https://keybase.io/encrypt) undermines Keybase's security credibility.

This form has essentially the same level of security as Hushmail. Anybody using it should consider the content exposed to Keybase or anyone compromising Keybase.

I'm not an authority on hushmail, but it seems like they do crypto on the server, and the server is just trusted to throw away the keys and plaintext?

In the keybase Web client, all crypto happens in the browser. The server never sees keys or plaintext data. Of course, you'd have to audit the front-end JS code to believe that claim.

But our intention is that the only way to compromise the Web-based tools would be to insert malicious JavaScript into the client's browser. A read-only compromise of the server yields only encrypted data, and the server never has access to the decryption keys.

Then the only difference between hushmail and your model is exactly what the FBI will get a subpoena to have you type into your server to subvert your users. The models are equivalently insecure.

Incidentally, you can't simply audit the "front-end Javascript"; you have to evaluate everything that influences the Javascript runtime (the DOM, stylesheets, cached resources, &c) every time the page loads. Browsers aren't designed to make content-controlled code "auditable"; it simply isn't a capability of the environment.

FWIW, I found your post "Javascript Cryptography Considered Harmful" very helpful in understanding problems with client side crypto in the browser. I will recommend it to anyone who thinks it is safe: http://www.matasano.com/articles/javascript-cryptography/

And don't forget all extensions which can "Access all your data on all webpages, Access all your tabs and browsing activity".

If this functionality were provided by e.g. a signed extension (so the code can't be changed without the user being told: I think browsers can do this?), then you would worry mostly about how well that extension was sandboxed away from other extensions and various websites, right?

As you just said, users must trust the JS coming from Keybase. It might be compromised at any time.

Next, people usually mumble about auditing it, downloading a copy, signing it, etc. At the end of the day, you arrive at code installed on the client - which you already have.

The web version just weakens your story.

But it's a good idea: sign the js. Even MD5 would be enough; it's just so that when the FBI/NSA subpoenas you, we'll know it.

Yes, you are correct. Unless browser extensions are used (and even then) the web is not a good platform for cryptography. The web is a platform for letting a potential attacker run untrusted code on your computer without all hell breaking loose, not for building trusted cryptography applications:


Hi Chris, a few comments:

1. I like the site design, the story flow on the front page does a great job of explaining what keybase is.

2. I see (from the abovementioned story flow) that keys can be verified by reviewing signed tweets/gists. Is this functionality extendable to arbitrary links; i.e. verifying keys against personal blogs, Tumblr, WordPress or does the third-party site need to implement a recognized API?

Again, thanks so much, and it looks like a terrific site so far.

Good question! There will be no such thing as a general check, because -- for any identity -- the client software has to perform a check that a human would agree means something. For example, what does it mean that you own a certain blog? How would a person confirm it? Well, at first glance it might mean that you have the power to post a message there. But someone else could do that in a comment, so that wouldn't work with Keybase. So any given identity check has to match some human definition of what it means to have that identity. And it has to be publicly auditable.

With twitter, it's the ability to post a tweet under a certain username. With owning a tumblr account, it might be something similar. With your known StackExchange profile it might mean posting a statement in a specific part of your profile. And so on.

The common thread in each case is (1) that you post in a place where only your identity can, and (2) that what you post is a signed statement claiming a connection among three things: (a) your keybase username, (b) your public key, and (c) the identity on that third-party service. (The third one is necessary so the proof can't be moved elsewhere.) Note how twitter's and github's proofs are totally different, but both achieve these three things.
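A signed statement binding those three things together might look something like this. The field names and structure are purely illustrative, not Keybase's actual schema; the point is the canonical serialization that anyone re-checking the proof can hash and verify a signature over.

```python
import hashlib
import json

# Illustrative proof statement binding (a) the keybase username,
# (b) the public key fingerprint, and (c) the third-party identity.
statement = {
    "keybase_username": "maria",
    "key_fingerprint": "C4B3 15B4 7154 5281 ...",  # placeholder value
    "service": {"name": "twitter", "username": "maria_h20"},
}

# Canonical serialization: sorted keys and fixed separators, so every
# verifier hashes exactly the same bytes before checking the signature.
payload = json.dumps(statement, sort_keys=True, separators=(",", ":"))
digest = hashlib.sha256(payload.encode()).hexdigest()

# `digest` is what a PGP signature would actually cover; posting the
# signed statement to twitter under maria_h20 completes the binding.
print(len(digest))  # 64 hex characters of SHA-256
```

Because the statement names the service and username inside the signed payload, copying the proof to a different account changes nothing: the signature would then vouch for an identity that doesn't match where it was posted.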

We will build out this list of identity checks, hopefully making all kinds of them easy to do: everything from proving you own a domain to having a tumblr or reddit account. The definitions of those checks will all be publicly reviewable, both in the spec and in the client, which is what checks them for you.

Seems like you could get around that with a meta tag on the claimed site? Meta tags are basically never commenter-editable and are usually owner-editable; they're basically a perfect fit for this. Alternatively there's a site/.well-known/keybase style URL (I have no idea what the best practices are for .well-known; personally I prefer meta tags).

Obviously Twitter isn't likely to implement either of those, so some high-value custom implementations are still great. But if Maria owns maria.com and can assert it automatically, that's pretty strong supporting evidence.

Well, to follow up, could this be extended to ownership of a domain (via DNS txt record)? Could we use this as a means of authentication of a self-signed certificate for a domain?

Yes to DNS, though we have to be careful here since DNS can be spoofed more easily than github or twitter proofs over https. I was thinking a slightly better way to prove ownership of foo.com would be to post a proof at https://foo.com/_keybase (or something similar). To spoof this, an attacker would have to spoof DNS and also the https certificate.

Authenticating a self-signed domain certificate via keybase is a neat idea, but would probably need some browser support, unless there's a clever hack that I'm not thinking of.
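The /_keybase-style HTTPS proof described above could be sketched like this. The URL path, function names, and stubbed fetch are assumptions for illustration; a real check would perform an HTTPS GET with certificate validation, which is what makes this harder to spoof than a bare DNS answer.

```python
def check_domain_proof(domain: str, fetch_https, expected_fp: str) -> bool:
    """Check for a signed proof at an agreed-upon HTTPS path on the domain.

    `fetch_https` stands in for an HTTPS GET; the TLS layer (server
    certificate validation) is what an attacker would have to defeat
    in addition to spoofing DNS.
    """
    proof = fetch_https(f"https://{domain}/_keybase")
    return proof == expected_fp

# Stubbed web: foo.com publishes maria's fingerprint at /_keybase.
web = {"https://foo.com/_keybase": "FP-OF-MARIA"}

print(check_domain_proof("foo.com", web.get, "FP-OF-MARIA"))  # True
print(check_domain_proof("foo.com", web.get, "FP-OF-EVE"))    # False
```

A DNS TXT variant would be the same shape with a TXT lookup in place of the fetch, with the caveat from the parent comment that plain DNS is easier to spoof unless DNSSEC is in play.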

Have you heard of PKA? https://grepular.com/Publishing_PGP_Keys_in_the_DNS

If you want to encrypt a message to my key, just run the following command:

  gpg --auto-key-locate pka -ea -r mike([-dot-])cardwell([-at-])grepular([-dot-])com
It will automatically look up my PGP key in the DNS, fetch it, and encrypt to it. My DNS is secured using DNSSEC, so if your resolver supports DNSSEC, you can be reasonably sure that the response is trustworthy.

  mike@glue:~$ dig +short txt mike.cardwell._pka.grepular.com

Well, if an attacker is successfully spoofing DNS, she can spoof MX records, thus getting email for the domain, which is the only precondition for acquiring a certificate. You're obviously adding more complexity, but security-wise it doesn't change much.


Not with DNSSEC, and the second part is covered by DANE.

Please remember that some blogs (and possibly other types of sites, such as Tumblrs) can be owned and managed by multiple people.

Chris, a bit OT, but who made the illustrations for the site? They are incredible.

I think https://keybase.io/chadilaksono did. Mentioned near the footer of the main site.

Her portfolio: http://www.hadilaksono.com/

confirmed, yes! Caroline is doing both the artwork and the site design. She's a wonderful artist and we're lucky to work with her. Note the site isn't done yet, so anything that looks funny or imbalanced is not her fault but mine.

Glad to see you're focused on improving, but you're being too hard on yourself. The design is good as is imho (not that there isn't room for improvement). And the idea itself is pretty genius, so I'd say you're ok even if you don't immediately achieve the level of polish you're shooting for.

Great idea and good luck!

Could you do email verification by emailing a challenge to users, and having them reply with a signature of the challenge combined with their email address?

It does demonstrate control of the email account, and you cannot fake it either.
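The challenge-response flow proposed above is simple to model. In this sketch an HMAC stands in for a PGP signature (a real implementation would have the user sign with their private key); all names are illustrative.

```python
import hashlib
import hmac
import secrets

# Toy model of the email verification flow: the server emails a random
# challenge; the user "signs" it together with their address and replies.
# The HMAC here just stands in for "something only the key holder can
# produce" -- a real system would use a detached PGP signature.

def make_challenge() -> str:
    return secrets.token_hex(16)

def sign(user_key: bytes, email: str, challenge: str) -> str:
    return hmac.new(user_key, f"{email}:{challenge}".encode(),
                    hashlib.sha256).hexdigest()

def verify(user_key: bytes, email: str, challenge: str, reply: str) -> bool:
    expected = sign(user_key, email, challenge)
    return hmac.compare_digest(expected, reply)

key = b"maria-signing-key"
chal = make_challenge()
reply = sign(key, "maria@example.com", chal)

print(verify(key, "maria@example.com", chal, reply))    # True
print(verify(key, "mallory@example.com", chal, reply))  # False
```

Binding the email address into the signed payload is what prevents a valid reply from being replayed for a different address.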

Hi Chris - it looks like a neat idea. One question: Is this another global namespace, or do you plan to support pet names?

I really, really want crypto, specifically, safe and secure-by-default crypto, to become much more usable.

Despite this hope, I can't seem to help the fact that the first thing that popped into my head when I read their webpage is "oh, they're wrapping and abstracting important key authentication and critical key trust configuration to make it more user-friendly, and implementing it all in javascript. WHAT COULD POSSIBLY GO WRONG?"

Even if I got whacked on the head one day and suddenly loved javascript, I would not use it for certain projects when I wanted to be taken seriously by, say, cryptographers.

Then again, look at all the success cryptocat has had!

As long as the trust model is that the server is untrusted, it can be written in whatever language they prefer. As for the client, there can be as many implementations in as many languages as needed to make everyone happy, IMHO (they call it a reference client on the home page).

Honest question, which part of writing it in JS makes it less safe? I understand that running in a browser is an inherently unsafe model because of the various mechanisms that make it impossible to track code that's running along side yours. How is it that JS is unsafe in a server environment?

The median programmers it attracts to write it.

Argh. That barely warrants a response.

The average skill of developers using a language is low, therefore it's not possible for anything written in the language to be of high quality?

Wow. Even assuming that the initial assertion is correct, that's horrifically illogical.

True reason: you have an anti-js bias. You're welcome to that, but for goodness sake be rational in your hatred.

LOL. http://tobtu.com/decryptocat.php

On a serious note, javascript (i.e. all browsers) absolutely needs to change to make the web more trustworthy. I agree with some of the Matasano points [0], but here are the minimum changes that would improve browser security:

i. js that can be cryptographically signed and verified, a trust model and a browser security policy to enforce it.

Think ascii armored GPG signature as a comment for the code it encloses.

ii. js native extensions: able to talk to native code that was previously installed

iii. js objects that can be made immutable (can't change them in any way)

iv. js objects that can be made un-prototypable (can't copy or "subclass" them)

v. js properties that can be made read-only

vi. js properties that can be made private (only the object itself can use them)

With folks pushing to make this happen across all browsers, javascript theft of bank passwords and credit card numbers would be much harder. Crypto code like the Stanford library [1] would benefit.


0. http://www.matasano.com/articles/javascript-cryptography/

1. https://crypto.stanford.edu/sjcl/

As an update, I was hacking on https://github.com/moxie0/Convergence to try to get it working on modern FF. FF supports binding system libraries to JS objects. (If anyone knows how to modernize a FF extension, please help: https://github.com/moxie0/Convergence/issues/178)

If this talks to keybase's API over https and any large groups come to rely on this, we've then effectively replaced the decentralized safety of the Web of Trust used for authenticating PGP keys with the PKI that's used in browsers, which is completely and totally fucked.

I cannot support a project that doesn't build and strengthen the underlying WoT. Getting https involved for authenticating unknown keys is a huge step backwards. Madness.

We're not big fans of browser PKI either, but we're using it as scaffolding that hopefully one day can be torn down.

`keybase-installer` needs an initial install over https from npm. We unfortunately saw no way around this.

Assuming that install succeeds with integrity, then all future upgrades of the installer and client are verified with PGP keys stored locally on the client.

Once the client is installed, it speaks HTTPS to the server, but we're not trusting the root CA. Rather, we sign with our own CA that we ship with the client.

The proofs themselves, on twitter and github, can all be verified in the clear, as FiloSottile points out, though of course we rely on the HTTPS certificates of twitter and github to make sure the proofs aren't corrupted in transit between those services and the client.

> `keybase-installer` needs an initial install over https from npm. We unfortunately saw no way around this.

Write it in a language that has a packaging system not designed by amateurs.

I don't understand how using HTTPS for the API has any bearing whatsoever on the WoT built via PGP.

You can still verify the keys with your client's cached copies, or using another PGP client.

Exactly, and moreover: if there is no trust in the server, everything could even go over unencrypted HTTP. CAs have no business here.

Is it really impossible to make browser crypto a reality?

Browser crypto can be scary! Do you have a malicious extension installed? We can't tell. Further, how can you guarantee we haven't been tortured into serving you custom, targeted JavaScript? Hopefully you're not that important.

I realize malicious extensions can currently do as they please, but can't browsers allow extensions to define a security policy that forbids all other extensions from modifying a page? This policy could be specific for a single website: Keybase.

Because if browsers could do that, they could then support proof carrying code, which could be used to verify Keybase hasn't been tortured into serving a custom, targeted JavaScript.


So even if you have a valid crypto implementation baked directly into the browser, and you can call crypto primitives directly from JavaScript, what's the point? I'd just grab whatever you are trying to encrypt before it gets encrypted, or decrypt it myself. Or replace the encryption functions with my own wrappers.

Remember, I can introduce any code I want so long as I control the server which is serving your web page. JavaScript crypto is an attempt to not trust the server serving the data, but if that server, or any other server can inject any code into the web page which is handling the encrypted data, then you have no security. You are still left trusting the server to not screw you. Which is the same as using HTTPS which we already have today.

The point is to make it impossible to do what you just described.

For example, make it impossible for code sent by a server to execute any JavaScript (or other scripting language) at all. The server could instead send a data structure (as opposed to code) describing what to do, without having the power to replace any encryption functions or to execute additional functions that can subvert encryption. I realize a first version of this might sound too restrictive, but the point is that it can be made to work.

If it's possible to reduce what the server sent to the browser down to a fingerprint, it will also be possible for the browser extension to verify this fingerprint with multiple third parties. It can verify the fingerprint of the server code matches a fingerprint published on Twitter, or GitHub or other sources, which is something Keybase tries to do.

An attacker would need to break into all (or at least a majority) of those services to serve you bad code. Which is harder than breaking only into your server.
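The majority check described above can be sketched directly. This is a rough illustration of the idea, not a real distribution mechanism: the published fingerprints stand in for values posted to Twitter, GitHub, and other independent services.

```python
import hashlib

def code_fingerprint(served_code: bytes) -> str:
    """SHA-256 of the exact bytes the server delivered."""
    return hashlib.sha256(served_code).hexdigest()

def majority_verified(served_code: bytes, published: list[str]) -> bool:
    """Accept the served code only if a strict majority of independently
    published fingerprints (Twitter, GitHub, ...) match it."""
    fp = code_fingerprint(served_code)
    matches = sum(1 for p in published if p == fp)
    return matches > len(published) / 2

code = b"function encrypt(msg, key) { /* ... */ }"
good_fp = code_fingerprint(code)

# Two of three sources agree with what was served: accept.
print(majority_verified(code, [good_fp, good_fp, "stale-or-tampered"]))  # True
# Only one of three agrees: reject.
print(majority_verified(code, [good_fp, "x", "y"]))                      # False
```

As the parent comment notes, defeating this requires compromising a majority of the publication channels, not just the one server.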

Forbidding other malicious browser extensions from interfering with a Keybase browser extension would allow the Keybase extension to perform all this fingerprint-checking logic with the guarantee the verification hasn't been tampered with.

I don't get it. We can already do this: just have your server serve up XML, JSON, CSV, etc. Serving raw data with no code attached to it and disabling all extensions is something we already have. It's also not very useful.

The point of web applications is that you can quickly distribute an application that runs on a common platform to everyone at once. It's a very nice idea. It is also insecure to boot.

You are proposing two different changes. First, an enhanced ability of your web server to tell my browser which code is allowed and which code is not. This is good. This way, for example, my bank running on bank.example.com can tell my browser to not load any JavaScript code, or even any external resource from anywhere but bank.example.com. Now nobody can inject a JS file from evil.example.com. Fine-grained control over what the browser should and should not allow is a good thing. Controlling extensions is a bit different. I want my extensions to work. I want my ad blocker and my privacy guard to function even on sites like my bank's. In general, I would not want a site like Google to disable my ad blocker, that would be evil.

Now, if you are saying that you want to sign this application and distribute the signature to other services so that I, as the user, can verify that the application blob I got from your server has not been tampered with, then how do you go about updating your application? If you found a critical bug in your JavaScript code and fixed it, now you have to create a new application blob, a new signature, and distribute that signature to all these other services that are supposed to arbitrate whether you are delivering honest code. Notice that you and you alone still control the signatures. There is no external verification that you are not delivering evil code to me; I still have to trust you, personally. Adding a signature/checksum means that the code you delivered has not been tampered with by a third party, but it says nothing about you. And the point of in-browser crypto is so that I don't have to trust you.

That's where the whole thing breaks down. If someone forces you to change your code, then update all the signatures and launch this code, then I still have no idea that it happened. So at best this might protect me from a malicious third party. But guess what? HTTPS already does that, and is a much simpler and proven solution.

In-browser crypto does not work, and will never work. There is no way to make it work. The web is not a platform where the client can treat the server as untrusted. Every time I see an attempt at this I cringe since someone clearly wasted a whole lot of effort thinking they finally cracked it. Keybase is probably the first place where I am not completely against it as they are using it as a demo of what your actual client would be doing. Then again, they could probably have just scrapped it completely and done the whole thing server-side without so much effort.

The alternative to what you are trying to achieve is this: every website is distributed as an open source application blob, and a number of trusted third parties review the code before it gets published. These third parties each sign the code with their private keys, showing that they believe the code is not evil. The problem with this is that it completely undermines the central promise of the web application: instant deployment to all your users. This system is exactly what you have with Linux distributions' repositories. It works, it's secure, but it's slow.

That sounds like an interesting use case, to have a browser API whereby an extension can disable other extensions from having access to certain domains. Perhaps you should submit a proposal and/or start a discussion on the appropriate mailing lists or bug trackers.


Advantage: it's distributed

Disadvantage: this site does nothing to make cryptography more accessible / easy to use for the common person.

Accessibility should not introduce a SPoF.

What happens when Snowden uses it and the USG requests access?


Businesses / nonprofits cannot provide privacy-as-a-service unless they're "SWAT proof" (distributed).

Advantage: SKS is "SWAT proof."

Try https://encrypt.to/ to send encrypted messages with one click :)

Disadvantage: it doesn't have a memorable name like keybase.io

I love sks as much as the next GPG user, but this is about as clean as I'd expect from a user friendly keyserver.

Sounds like a datamining goldmine too, but what do you expect from OkCupid founders.

btw had to do it. https://i.imgflip.com/6w8mc.jpg

i'm surprised by the number of comments which attempt to mock SKS because "the UI is not nice looking" or "the url sucks to remember".

I'm like, wow, if that's the only issue, that's great lol. Sounds pretty easy to fix ;-)

Disadvantage: I spilled my coffee.

Cannot tell what this does on a mobile browser.

This, exactly. All I see are two buttons, Join and Login. But no description, no clue, nothing on what I'm being offered to join or log into.

Just want to say cool art work on the landing page

Done by the extra talented Caroline Hadilaksono! http://www.hadilaksono.com/

For me it increased the perceived trustworthiness of the website tenfold. I've seen illustrations drawn in a similar style in Scientific American and subconsciously carried over the trust I have for SA to this site.

It's very charming!

This is totally off-topic, but I really like the graphic at the bottom of the page. It really sums up the challenges developers face in creating secure communication channels. There are so many threats nowadays it seems overwhelming.

Where are the security details published? I think that's what we all want to see...

On top of this....I think this is cool in theory but bad in practice.

The assumption that root CAs are trustworthy is already hard enough to make; how do I know that Maria is actually Maria? How will you verify that "Maria" actually owns that twitter, github, or gmail account? Maybe it is possible to devise some type of scheme for those sites, but what about more obscure services?

One mistake in one single account causes the entire thing to fall apart...

The idea here isn't that you use keybase to find out Maria's twitter, github or gmail identities - it's the opposite. The idea is that you already know who Maria is on one or more of those services, so the fact that the account you know is Maria's at github has posted a signed message from that public key is supposed to testify to you that that is really your Maria's public key.

You could of course manually review and verify Maria's github post that contains her public key - all that keybase is really doing here is providing an easy way of discovering that github post (or tweet, or whatever).


Only if you don't trust your copy of the keybase client (as opposed to the server, which you should not need to trust).

As Chris said, we would like to publish everything, just haven't found the time yet. We have bits and pieces in wikis in our various github repositories (almost all of which are open source and public).

The high bits are: all crypto is with GPG/RSA as per RFC4880. There are of course problems here, but we wanted backwards-compatibility and well-tested, well-used clients.

We encrypt server-stored GPG private keys (if you choose to use that option) with TripleSec (see https://keybase.io/triplesec).

Users use GPG to sign a series of JSON objects, of the form "I'm maxtaco on twitter", or "I checked Chris's proofs as of 2014/2/14 and they look good to me." All JSON objects that a user signs are chained together with SHA-2 hashes. So a user can sign the whole group of JSON statements by just signing the most recent one.

Here's an example (click on "Show the Proof") https://keybase.io/max/sigs/ZnBizHMA8RKSB598TaDtjlPlLKSEu1Wu...
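The hash-chaining described above can be sketched as follows. This is an illustrative structure, not Keybase's actual wire format: each statement embeds the SHA-256 of its predecessor, so a signature over the newest link implicitly covers the whole history.

```python
import hashlib
import json

def link(prev_hash: str, payload: dict) -> dict:
    """Append a statement to the chain by hashing it together with
    the previous statement's hash."""
    stmt = {"prev": prev_hash, "payload": payload}
    raw = json.dumps(stmt, sort_keys=True).encode()
    stmt["hash"] = hashlib.sha256(raw).hexdigest()
    return stmt

def chain_valid(chain: list[dict]) -> bool:
    """Recompute every hash and check each link points at its predecessor."""
    prev = "0" * 64  # genesis marker
    for stmt in chain:
        if stmt["prev"] != prev:
            return False
        raw = json.dumps({"prev": stmt["prev"], "payload": stmt["payload"]},
                         sort_keys=True).encode()
        if hashlib.sha256(raw).hexdigest() != stmt["hash"]:
            return False
        prev = stmt["hash"]
    return True

s1 = link("0" * 64, {"claim": "I'm maxtaco on twitter"})
s2 = link(s1["hash"], {"claim": "I checked Chris's proofs on 2014/2/14"})
print(chain_valid([s1, s2]))  # True

s1["payload"]["claim"] = "I'm someone else"  # tamper with history
print(chain_valid([s1, s2]))  # False -- rewriting any old link breaks the chain
```

This is why signing only the most recent statement suffices: no earlier statement can be altered or dropped without invalidating every hash after it.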

There's a fair amount of engineering that went into the software distribution system. We rely first on npm to get the initial client out there, but after that, exclusively GPG for code-signing. That's documented here:


We hope to have better documentation soon, and we value feedback; we just haven't had the time to put it together yet.

Okay, I'll dig into the details when I have time. I noticed you and malgorithms took the time to write big posts to me. Thanks, I'll make sure I repay you with some of my time too.

> how do I know that Maria is actually Maria? How will you verify that ``Maria'' actually owns that twitter, github, gmail.

> confirmed they're all her, using GnuPG to review a signed tweet and gist she posted.

So it sounds like if you believe in GPG as a viable method of ID, there's no reason not to trust this.

Except the part where you're contacting keybase over https and they're handing you some key that says "this is maria's key, trust us".

That's the part where you trust the PKI, and that part is easily subverted, breaking the trust of the entire system.

If they were using the inherent properties of maria's key (e.g. the fingerprint), then they wouldn't need this whole silly website and username database.

Maybe this should be an offline tool that just goes and fetches tweets and gists so we don't have to trust them. You could add friend mappings with key fingerprint + nickname.

The... the whole point of this is to avoid the "you could add friend mappings with key fingerprint + nickname" part. You do realize that, right?

Sorry if this is a dumb question, but I don't follow crypto goings-on closely. How do you normally access GPG public keys? The way I understood it, you always access keys through http or https from a keyserver.

When I get someone's GPG key I can call them on the telephone or go to their house and make sure I got the right one.

I add it and use it. When I use this, I'm assuming I get that key every time from the server. I can get it and verify it once, or twice, or three times, but what about the 1000th time? What happens when I am important enough that they return a public key that is not Maria's, and I am none the wiser?

boss, I'm glad you answered this question, because it explains the impetus for Keybase.

I think what Keybase is addressing in the status quo is twofold: (1) sadly, almost no one does what you describe; in-person key exchanges and webs of trust may sadly be as unpopular in 20 years as they are now, and as they were 20 years ago. People who attempt them are often confused, even programmers. I wish it were different.

And (2) more important, in 2014, often the person you're dealing with is someone whose digital public identity is what matters, not their face in real life or phone number. If you know me online as github/malgorithms and twitter/malgorithms, to get my key, meeting someone in person or talking on the phone to someone who claims to be me is actually less compelling than a signed statement by malgorithms in all those places you know me.

And if you do know me in real life, then I can tell you my keybase username and fingerprint, exactly as you're used to. So it's still as powerful for meeting in person, with the added benefit that you can confirm my other identities, which you likely know too.

In answer to your scenario about verifying: you only need to review the "maria" the server provides once, and then your private key signs a full summary of maria -- her key and proofs. Cases 2 through 1000 of performing a crypto action on maria involve you only trusting your own signature of what "maria" is. The client can query the server for changes to her identity, and this will be configurable; if maria adds a new proof, you might wish to know.
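A toy sketch of that "review once, then trust your own signature" flow, as I understand it from the comment above. All names here are illustrative, not the actual Keybase client API, and HMAC with a local secret stands in for signing with your own private key:

```python
import hashlib
import hmac
import json

# Stand-in for your private signing key (illustrative only).
LOCAL_SECRET = b"my-private-signing-secret"

def sign_summary(identity):
    """After manually reviewing maria's key and proofs once,
    sign a snapshot of the whole identity."""
    raw = json.dumps(identity, sort_keys=True).encode()
    return hmac.new(LOCAL_SECRET, raw, hashlib.sha256).hexdigest()

def check_against_snapshot(identity, saved_sig):
    """Cases 2 through 1000: verify the server's answer against
    your OWN earlier signature, not against the server's word."""
    raw = json.dumps(identity, sort_keys=True).encode()
    expected = hmac.new(LOCAL_SECRET, raw, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, saved_sig)

maria = {"fingerprint": "ABCD1234EF", "proofs": ["twitter/maria", "github/maria"]}
sig = sign_summary(maria)          # reviewed once, signed once
assert check_against_snapshot(maria, sig)
# A server that later swaps in a different key is caught:
assert not check_against_snapshot(dict(maria, fingerprint="EVIL9999"), sig)
```

The point being: after the first verification, a lying server can no longer substitute a key without the mismatch being detected locally.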

This is what I assumed the answer would be, and at this point it just becomes a difference in opinion. I personally do not believe that the methods you describe are generally acceptable options in the modern age. My phone number and address are much more important to me than the off chance of someone capturing my https traffic, breaking it, and inserting a fake public key. There is a point where the absolute security of exchanging public keys written on pieces of paper in a park are called for, but it's not for everyone or even most.

None of the attacks I thought of rely on an attacker breaking https....

Looks very cool, but one piece of feedback: Let the user know it is in invite-only beta on the homepage.

I downloaded the command line util and tried to login, only to be let down :(

Excited to try it out!

Hi addisonj - sorry about this. The site should be clearer about this limitation. If you request access via the site now (just click join on there) and remind me this happened to you in the comment field, I'll move you forward in the queue. Sound good?

Same boat here! Any sense of how long the queue is?

OK, finally looking at it on a desktop...

So my first question is this: if I know "maria" and I want to look her up to get her GPG key, how does keybase handle that? Does it just do an email address lookup, as in goes to, say, GitHub, grabs her email address, maria@example.com, then goes to a public key server and grabs the key that corresponds to maria@example.com?

If that's the case, there is a security issue: what if Maria never published a GPG key, but Chloe did using Maria's email address? Moreover, what if Chloe has access to Maria's inbox and can read these messages I believe to be only readable by Maria?

Edit: I see from responses below that various online presences of an identity tied to "maria" are checked. Is this not then susceptible to its own attack? For example, if Maria does not have a Twitter account and I create one, or compromise hers and post a different key, will I be able to at least introduce doubt into her identity, if not take it over outright?

No, there are no proofs based on e-mail addresses, because such proofs are not publicly-auditable. We could ask that maria prove to the server that she controls a given gmail account, but there's no way for the server to prove that to you.

We want the server to be untrusted, ideally just a dumb message router.

If Chloe wants to impersonate maria, she'll need to get control of maria's twitter and github accounts. Just claiming maria's email address won't get her anywhere. (Note that GPG keyservers are susceptible to exactly the attack you describe).

Hold on. First, GPG servers are susceptible to the same type of attack, except they would never be used that way. You never look up a person by email, then send them an encrypted message using the key you get. Instead, you verify their key and email address out of band: you meet them, check their credentials, then sign the key. Keybase is trying to get rid of the in-person verification, an effort I applaud, but in favor of a much weaker check: whether a few centralized accounts have been compromised.

The other part, where you check Maria's Twitter and GitHub accounts, means assuming that services like Twitter and GitHub are impervious to Chloe: a tall order, and a centralized one at that.

Once again, is the point here for me to get a tuple of (email address, public GPG key) so I can email Maria securely? If so, then someone somewhere has to prove that this tuple fetched from the public key servers is valid.

If the point is to only communicate via keybase.io, then the service is centralized, and useless once actual sensitive info is exchanged, the US government takes notice, and the site gets shut down at the DNS level.

Cool, I agree no one should use PGP servers the way I described, but you never know what people are doing out there. To do things the proper way, as you described, is difficult in practice for lots of people.

To answer the question, the point isn't to get an (email address, GPG-key) mapping. It's to get a (public-internet-identity, GPG-key) mapping. People sometimes do this today in an ad hoc manner (e.g. tweeting your GPG fingerprint). We want it to be checkable by user-friendly software.
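A simplified sketch of what "checkable by software" means here. The real client verifies GPG signatures on the scraped proofs; this toy version just checks that each publicly posted proof names the same fingerprint, with `fetch_proof` standing in for scraping a real tweet or gist:

```python
# What the client would actually scrape from the public web
# (mocked here for illustration).
PUBLIC_POSTS = {
    "twitter/maria/status/1": "Verifying myself: my key fingerprint is AAAA1111",
    "gist/maria/proof":       "Verifying myself: my key fingerprint is AAAA1111",
}

def fetch_proof(url):
    """Stand-in for fetching a tweet or gist directly, not via the server."""
    return PUBLIC_POSTS.get(url, "")

def verify_mapping(fingerprint, proof_urls):
    """Accept the (identity, key) mapping only if every claimed public
    proof names the same key fingerprint. A server handing out a
    substituted key can't make the real maria's posts match it."""
    return all(fingerprint in fetch_proof(u) for u in proof_urls)

assert verify_mapping("AAAA1111", ["twitter/maria/status/1", "gist/maria/proof"])
assert not verify_mapping("BBBB2222", ["twitter/maria/status/1"])
```

The crucial design point is that the proofs live on services the server doesn't control, so the client can audit them independently.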

I see. That MO makes a bit more sense then, though is it not then limited to just Keybase.io and will no longer work if something happens to this service? Or more importantly, is there a way to make this distributed?

If the site went away tomorrow, you'd still have keys in your GPG keychain. You'd also have a local cache of the server-side data relevant to you.

All public server-side data is available as a dump (https://keybase.io/__/api-docs/1.0#call-dump-all). Private data like encrypted public keys and password hashes we of course will keep under wraps.

We don't have immediate plans to make the system distributed, but if someone did it, we'd find it very cool. It's just too much for us to do right now.

Could you effectively add an online "in person" check by having keybase.io send the person a large, uniquely watermarked placard with a unique OCRable PIN, which the person holds up while taking a photo of themselves mugshot-style, submitted as a lossy image type (where image tampering is detectable)? With this image publicly published online, it would be easy to visually verify the person yet another way. Obviously for people with pseudonymous, online-only identities, these checks would be irrelevant.

This is great. Proper cryptography is the solution to so many of the problems the modern internet is facing right now, but the key problem with cryptography is that it is never user friendly enough and never distributed enough.

This looks like a great step in the right direction.

What do you think of http://invictus.io/keyhotee.php? It aims at user-friendly and distributed identity.

Awesome! Unix `finger` is back! I loved that tool!

This is effectively the same idea as proving your ownership of a domain. For example, if you want to use Google's webmaster tools, you either insert some text in a file on the site or add a DNS TXT record containing the expected text.
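That domain-verification analogy can be sketched the same way. Here `lookup_txt` is a mocked stand-in for a real DNS query, and the token name is made up for illustration:

```python
# Mocked DNS TXT records (a real verifier would query DNS).
DNS_TXT = {"example.com": ["site-verification=tok123"]}

def lookup_txt(domain):
    """Stand-in for a DNS TXT record lookup."""
    return DNS_TXT.get(domain, [])

def owns_domain(domain, expected_token):
    """The verifier hands out a token; only someone who controls the
    domain can publish it, and anyone can check that it's there."""
    return any(expected_token in record for record in lookup_txt(domain))

assert owns_domain("example.com", "tok123")
assert not owns_domain("example.com", "tok999")
```

Keybase's twist is swapping the domain for a social account: only someone who controls @maria can post the proof tweet.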

One thought is vouching and a level of credibility derived from the person's profile. If a lot of people vouched for Maria, or if Maria has a lot of active tweets and/or a lot of GitHub activity, there is a good chance this is the real Maria. However, activity-based credibility is easily forged and defeated, so probably not a good idea to add, but worth thinking about :)

What does this do? The mobile site has zero information, just a form to sign up.

Styles are broken on Firefox, text is flowing off the right side of the screen. Why do lots of sites seem to have forgotten about testing on Firefox recently?

The whole thing seems to be very early alpha. I'm sure that will be taken care of a little further down the road.

As a former Opera user, I certainly sympathize though.

It has the same text flow issue with Chromium so I'm guessing that they didn't do much testing at all. Furthermore, using a narrow window results in absolutely no information on the front page except "Join" and "Login" links.

For reference, I'm using Firefox 25.0.1 and Chromium 30.0.1599.11 in Linux.

Very cool idea. The idea of automatically verifying public keys over publicly accessible and known channels is great. This is more or less the manual process I follow when I want to verify a key remotely. Looking forward to seeing where this goes!

Also, being able to use this with arbitrary crypto software (eg GPG) would be even better!

On iPhone, I only see a graphic and login/registration links; can someone describe/summarize the service?

Am I correct in thinking that this would not prevent a targeted MITM where an attacker generates a "valid" cert that allows them to serve up a modified response for the Twitter and Github public key verification requests (say, providing you with an alternative public key)?

I've got to say this site does 'responsive design' the exact wrong way. On small screens all the words are hidden explaining what it actually is, instead you just get a giant meaningless image and buttons with no context.

Interesting. Another approach is https://encrypt.to/ which loads the public key from key servers and encrypts client-side via JS.

To clarify the difference, it seems encrypt.to is a service which does PGP crypto in the browser, based on keys pulled from keyservers. In contrast, Keybase is an identity-proving service, which proves key X belongs to person with twitter account Y, github account Z, etc. As a convenience, it also does encryption and other crypto actions for its users.

This looks pretty cool! I like the story flow.

You might want to include some links that explain what the keys are, what PGP is, etc - because not everyone who lands on your site will know.

There should be a big sign-up call-to-action button. You're missing out on tons of potential users.

This is awesome. Why didn't anyone do this before?

Is there a way to remove an associated account?

Right, but this isn't just a keyserver. They are allowed to break those rules in their service if they wish.

Sure, just pointing out that it's still basically useless.

Yes, though it might be broken right now. Our plan is to allow this, for sure.

Is it FOSS?

sweet site

but... node? really?
