Hacker News — malgorithms's comments

This was a summer internship project at Keybase, and the whole team is thrilled with how it turned out. The OP of this post is the author of the project and would be happy to answer questions here on HN.

One of the biggest devops pain points for a large team and large infrastructure is updating N servers every single time a team member is added or removed. Of course there are some other solutions to this problem, but the Keybase one is extra slick and just works automatically once it's set up.

It's also entirely powered by an open-source 3rd party bot, so it can be forked for improvement or to build something else triggered by cryptographic team membership changes.
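The bot's core job is conceptually simple: whenever cryptographic team membership changes, re-render server access from the current member list. A rough sketch of that idea in Python (all function names and file paths here are hypothetical illustrations, not the actual bot's API):

```python
def render_authorized_keys(members):
    """Render an authorized_keys file from a team-membership snapshot.

    `members` maps usernames to their public SSH keys. Anyone removed
    from the team simply disappears from the rendered file on the next
    sync, so there is no per-server manual cleanup.
    """
    lines = []
    for user in sorted(members):
        # Tag each key with its owner so audits are easy.
        lines.append(f"{members[user]} keybase-user:{user}")
    return "\n".join(lines) + "\n"


def sync(members, path="/etc/ssh/team_authorized_keys"):
    """Write the rendered file; the bot would call this on every
    membership-change event it observes."""
    with open(path, "w") as f:
        f.write(render_authorized_keys(members))
```

A real deployment would fetch `members` from the team's signed membership chain rather than a local dict, but the point is the same: N servers converge automatically when one membership change lands.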


Further: Keybase is a security product and it wasn't deemed worth the risk for the CEO. And while Keybase isn't made of money, the $5k was roughly irrelevant compared to the other costs mentioned here and the _magnitude of the risk_.

If you haven't been through this kind of thing, it's hard to understand how scary it is to have a break-in of unknown origin. If you use strong, unique passwords as Max did, then you're almost certain it's a server break in (and again, this is why Slack is scary for sensitive info)...but being 99% certain isn't enough. Removing that computer permanently from the team gave peace of mind.


There is giphy integration, and how it works is described here. https://keybase.io/docs/chat/linkpreviews . Similarly we'll be launching a location-sharing (including realtime) feature soon, working on a similar model.

Ah, spell-check in the desktop app. We explored one library and weren't happy. We're revisiting the options soon, but yeah, privacy is critical.


Wonderful. Giphy integration works now. Thanks for the tip on link previews! The TCP tunnel to Keybase.com for giphy requests specifically is so delightful.

NB for others: it may leak metadata, so use with that in mind. One can also whitelist only the giphy domain.


I'm so excited about real time location sharing over Keybase!


Uhh. Sarcasm?


David Mazières (designer of the protocol) has given a number of talks. Here's a good video of his talk at Google, and I believe it's the one I watched a couple years ago: https://www.youtube.com/watch?v=vmwnhZmEZjc


CoinMarketCap's paid API. We're open to considering other data sources.


I would suggest OpenMarketCap instead. CoinMarketCap is known to include exchanges that fake their volume and I wouldn't be surprised if their price data is somewhat manipulated too.

https://openmarketcap.com/exchanges/faq


Correct. The price especially is wrong on CMC when there are high-volume events.

I also invite the OP to check out Blockmodo's API. We deliver this data to developers and institutional investors.

https://blockmodo.com/docs/api

We stream everything which includes the price, trades, news, social media and community posts, and code checkins.

On the price side, we have two channels: a stream channel and a ticker channel. The stream channel derives the price of tokens by looking at raw trades from a select set of exchanges which have been vetted.
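I don't know Blockmodo's actual formula, but "deriving the price from raw trades" usually means something like a volume-weighted average over the vetted venues. A toy sketch of that idea:

```python
def vwap(trades):
    """Volume-weighted average price over a batch of raw trades.

    `trades` is a list of (price, quantity) pairs, already filtered
    down to trades from vetted exchanges, so fake volume from
    unvetted venues never enters the calculation.
    """
    notional = sum(price * qty for price, qty in trades)
    volume = sum(qty for _, qty in trades)
    if volume == 0:
        raise ValueError("no volume in window")
    return notional / volume
```

For example, `vwap([(100.0, 1.0), (102.0, 3.0)])` gives 101.5: the 102 trade dominates because three times as much quantity traded there.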

We are based in SF and happy to help at no charge! Drop us a note.


Good q - this step will likely be automated soon. Still, there will always be one final step of our approving any integration, otherwise there would be 10,000 pr0n sites or ad sites. (We mention this in the FAQ.) But we can automate everything up to turning it on.

For now, we want to talk to everyone working on integrations, so we can see which steps are working, which are confusing, what could be improved, etc.


I still don't get it. You have always been able to get a keybase proof for ANY website/domain without being approved first. Why do you need to whitelist mastodon instances? Why not just let people type in the domain name for their instance and get rolling?


But now they're showing every integration possible (as in, every mastodon instance they approve of) on their UI


Again… why? Who cares? Why is picking from a pre-approved list better than just letting people type in their instance domain name and allowing every instance by default?


Agreed. Not to mention Mastodon could add a link back to Keybase with all data pre-filled (username + instance name). For example, a "Connect with Keybase" link in Settings.


> otherwise there would be 10,000 pr0n sites or ad sites.

There's a middle ground: you can add integration so that it's available from CLI (`keybase prove ...`) but don't show it in GUI ("select integration") so it's not advertising that site.

The proof integration guide looks neat by the way.


CLI integration available to all without a human step, but requiring approval to show up in the UI when adding integrations? I'd like that solution


Oh, wow, this jumped the gun for us. We weren't expecting to announce this until next week! It's also in a bit of a draft form, and we expect to improve this integration guide as we add partners.

Keybase's view: identity on the Internet should not be just about Twitter, Facebook, and the other superpowers. Your membership in any site might be meaningful to other people, whether that membership is in something small like a phpBB forum about motorcycles, or something big like LinkedIn or Etsy. Often, the smaller the community, the more meaningful and close-knit membership is. And the more that community might want access to secure tools such as Keybase. If you're on the forum, you might _really_ need to reach out to another user of that forum, securely.

It might be a good time to mention that Keybase is looking to hire an Identity Evangelist[1]. This would be someone with a tech background (i.e., from the HN crowd), who has great presentation skills and experience, and who wants to help other sites and apps integrate with Keybase.

[1] https://keybase.io/jobs#evangelist


I'm really glad to see opinions like this that are strongly tinged against the centralized web, coupled with engineering work to enable more independence. Thank you.

A highlight point for me is that services should support both those superpowers and smaller entities, as yours does. Rather than isolating the open web from these entities, we need to make bridges between them so people can cross.


Why does Keybase allow a single remote identity to link to multiple Keybase accounts, but does not allow one Keybase account to link to multiple remote identities (on the same service)? If I run multiple Twitter accounts, I don't see why I shouldn't be able to link them all to my Keybase profile.


I think this might be a side effect of the Keybase UI, not the service itself. If you interact with the APIs directly and look at the JSON, you can see that at a high level the proofs are a dict, with each service as the key, and the value is an array aggregating several proofs. You could therefore potentially have several Twitter proofs associated with different accounts ("nametags") that could then be presented all together if someone wanted to tie all those accounts to your identity.

I only skimmed TFA, so I'm not sure what the process would be like for actually validating those proofs, but the backend datastore seems like it wouldn't have a problem, at least.
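The shape the parent comment describes — proofs keyed by service, each value an array — would look roughly like this (a sketch of the structure for illustration, not Keybase's exact JSON schema or field names):

```python
# Proofs keyed by service name, each value an array, so the data
# model itself could hold several Twitter proofs for one account.
proofs = {
    "twitter": [
        {"nametag": "alice_dev", "proof_url": "https://twitter.com/alice_dev/status/1"},
        {"nametag": "alice_art", "proof_url": "https://twitter.com/alice_art/status/2"},
    ],
    "github": [
        {"nametag": "alice", "proof_url": "https://gist.github.com/alice/abc"},
    ],
}


def nametags(proofs, service):
    """All identities proven on one service for this account."""
    return [p["nametag"] for p in proofs.get(service, [])]
```

So the backend datastore could aggregate multiple proofs per service; whether the UI and verification flow allow it is a separate question.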


The new Proof Integration Guide explicitly says that a Keybase account can only be linked to a single identity on your identity service (but that you can link multiple keybase accounts to that same identity if you want to).


a16z - also Chris Dixon - led our round at Keybase. We only have great things to say about both the firm and Chris. Chris sits on our board and has been a class act the whole time.

Also: during our fundraising, we faced a number of the "Monday pitch meetings" -- this is where you've gone through the early crap talking with VCs and are invited in to pitch to the partners. It's typically the last step before an offer. a16z's partner meeting was, by far, the most tech-savvy and aware group. It seems obvious that VCs would understand the technology they're investing in, but honestly, that's often not the case. We faced a lot of brand-name VC firms that couldn't understand what we were working on. We'd get a sense of that and quickly adjust our pitch to focus on what they could understand.

For those asking "Why give them so much credit when they're just doing their job?" -- there are special occasions when a startup's interests and its investors' interests are not aligned. The first big opportunity for a VC to mess with you is the period between a letter of intent and closing the round, when all the smaller details come up and are negotiated. a16z was excellent in the process and we closed quickly without issue.

A later possibility of disagreement is what ar7hur describes here, and here's why it happens: VCs have zero risk aversion and aim to maximize expected value in dollars, which is what you'd want as an investor in the VC. But especially if you're a first-time founder, your dollar-to-utility curve is anything but linear. Most humans wouldn't trade $5 million for a 1-in-10 chance at $100 million. This discrepancy is the source of a lot of possible problems. How VCs behave during both subsequent rounds and possible exits is perhaps the most important measure of them from a founder's perspective.
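The $5M-vs-$100M point is just concave utility. A quick illustration using square-root utility (my choice of utility function, purely for illustration):

```python
import math


def expected_utility(outcomes, u=math.sqrt):
    """Expected utility of (probability, dollars) outcomes under a
    concave utility function u."""
    return sum(p * u(x) for p, x in outcomes)


sure_thing = [(1.0, 5_000_000)]                    # founder takes $5M
vc_gamble = [(0.1, 100_000_000), (0.9, 0)]         # 1-in-10 at $100M

# In raw expected dollars the gamble wins: $10M vs $5M.
ev_gamble = sum(p * x for p, x in vc_gamble)

# But under sqrt utility the sure $5M wins (~2236 vs 1000 utils),
# which is why the founder and the portfolio-holding VC disagree.
```

The VC holds a portfolio of such gambles, so linear expected value is rational for them; the founder holds exactly one, so it isn't.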

tl;dr very happy with a16z and Chris Dixon.


Did you guys ever eventually work with VC firms who didn't understand your product? I get a sense that there might be a big mismatch but wonder how much that really affects things day to day.


I don't know the moderation policy on title changes at HN, but I just changed the title of the post. Internally at Keybase - and thanks to a conversation with a peer - we've been feeling pretty guilty about calling out a specific project that we think is basically the gold standard outside Keybase.

We'd rather focus on the positive solution to the problem (which Keybase has implemented), rather than just pointing a giant finger at any other services which have the problem we're trying to address. I think I personally will sleep better tonight this way.

Still we want this conversation to continue.


From the updated FAQ: How DARE you attack Project XYZ?

This still reads as unnecessarily acidic to me, given the update notice at the top of the article.


Sleep well! We've updated the headline here.


<3


It _seems_ you (someone from the Signal project?) are actively diverting from the point, with what is ultimately a security theater request. Keybase's app - which IS open source - doesn't trust the server at all. We could be running anything server-side, regardless of what we do or don't publish. Meanwhile, Signal's story is "you MUST trust our server, over and over again," as the blog post explains. Unfortunately there's no way to know what's happening on the server. So being like Signal and publishing your server source is strictly worse than being like Keybase and not (yet?) publishing server-side source. At any time, Signal could be throwing in these fake key upgrades, either due to running other source code on purpose, or being forced to, or just plain getting hacked. The most malicious Keybase server could not.

This comment may be of interest (we could release server code at some point, and I will take this as a vote), but I hope people reading this aren't distracted by Signal's flaw here.

[edit: chilled a bit!]


> But to be clear, you're actively diverting from the point.

...

I get why you showed up here, but you're really not addressing the point of the post at all, and in fact you're trying to distract with the suggestion that Signal's publication of that code protects people from this flaw. It doesn't. At all.

Wow, that's a pretty hostile (and accusatory) response to a fair ask. This is one (small) step removed from accusing someone of shilling/astroturfing.

Let your product stand on its own merits. If you have a good reason why you won't open source Keybase's server implementation, own it. Don't undermine requests to open source the code by publicly accusing people of supporting a competing product.

The person you're replying to didn't make an argument in favor of Signal - or any other competing product, for that matter. In my opinion, your response is actively distracting from their request.


Even if they publish their server code there is no way for anyone to verify that it's the code they are actually running and it would be just a PR move. If the client implementation is good there should be no way that the server can compromise any message.


It's a step toward people running their own servers, either federated with Keybase proper, or just as a personal instance. That would be valuable for quite a number of enthusiasts. Federation (like email/XMPP) is a very reasonable feature for any forward-looking communication platform.


Then this is not a request for transparency but for them to change their business plan


Keybase's target is to become a central identity point. Other features (like team chat and git repos) are made to showcase what you could do with that.


Okay, and that's exactly the kind of reasoned response that's appropriate. What's not appropriate is implying the request is simply unfounded because of its source. It's not charitable.


Well, equally, GP could've disclosed their conflict of interest instead of just using a hit-and-run one-line red herring. OP makes a post advocating not having to trust a server, and the most upvoted comment is someone asking them to open their server so they can trust it? Doesn't make much sense...


Agreed, they could disclose a conflict of interest. But I don't think it matters here, because their request could reasonably have been brought up by someone unaffiliated with Signal.

In other words - you don't need to be affiliated with Signal to be in favor of open sourcing the server-side code. It's a fairly common complaint on HN, and I can see why it was the top comment for a while even if I don't ultimately agree with the need to open source the code. Likewise, if you look at the link to the GitHub issue you can see many other people asking for the server code to be open sourced - or reacting to responses to those requests.

Do all those people have conflicts of interest? Is it possible that the affiliation with Signal doesn't matter here? Then be charitable, and let your actual reason for not fulfilling the request stand on its own.


I think it's a pretty reasonable way to combat FUD.

It's tough to compete on security because users struggle to know what's actually better (on top of needing convincing that security is a worthwhile differentiator in the first place). A client that doesn't trust a server is a great improvement, and "show us the server" is a terrible response.


Then the reasonable thing to do is to explain why it's a terrible response. What's unreasonable is to imply that the request is a sideshow in favor of a competing product because of the identity of the person who brought it up.

If you have a good reason not to fulfill the request, charitably responding to the request with that reasoning is an educational opportunity for the audience. There's just no need to bring identities into the mix like this, and I think a dispassionate response outlining why the server need not even be trusted would stand on its own.


Fair enough - I don't want to dilute my point by coming across as too hostile, even though my point is that it seems like a well-crafted diversion. Let me edit it down and your quote of the original can stand.


You don't need to apologize IMO ¯\_(ツ)_/¯


Whoa... I don't work for and have never worked for Signal. Feel free to ask Moxie if I have ever worked for Signal and the answer will be an abject "no".

I think it's worth meditating on the tradeoffs of your system design. Nothing is perfect.

Signal is trying to do the best that it can, and I really think that the starting line in writing secure software is open sourcing the whole thing from top to bottom. Anything less isn't auditable.

Note: I edited this post to make the language more addressable. I love the work Keybase is doing, but I want them to open source their server.


How does open sourcing the server help you audit what their servers are running? There's no way to know if what's open source matches their running code, and if the security of the system depends on the server being open, it's not secure.


Because if you don't trust it you can run the server yourself and see if the behavior is correct AND over time as we move to a world where servers become better able to verify the code they're running, we can improve the trust model.

Given the choice between having the server code open sourced or not, the choice that is higher trust has to be open source.


That's just false. You can get all of the trust information necessary from the client. It's exactly the same amount of trust.

edit: To be clear, even if they open sourced the server right now, I would not even look at the code to determine if running the client was safe. The only time I would care to look at the server code is if the client's correct operation depends on the server running specific code. If it doesn't do that, then the server code doesn't matter. And if it does do that, I wouldn't trust the system, anyway.


As just one example, you have no idea what metadata logging the server is doing. This is just the surface.


So audit the client assuming that all of the traffic it sends is in the clear (i.e. not over https to keybase's servers). If that's not sufficient, then don't use keybase.

Regardless, until you have a way to ensure the server is running the code you expect it to be running (can this even exist? what about hardware level attacks on the servers keybase is running?) the server code is useless from a security perspective.


Releasing the server code allows concerned users to run their own servers, and this way they can ensure that they are using a server that runs the code that they expect.


There is no need to run the code that you expect if the client is designed properly, and you are only sure you're running the code you expect until your server is hacked, which also becomes a non-issue if the client is designed properly. There are other reasons to want to self host: to be in control of your (encrypted) data so that it isn't lost if keybase goes under, for example. But security is not one of the reasons.


Why assume that the server will eventually get hacked? Why shouldn't the server be designed properly, just like the client?

I still think it makes sense in terms of security. If I run a piece of client software X that connects to a server Y, it will always be better in terms of security if I'm in control of what runs on Y. This is independent of how X has been audited. So yeah, I would argue that security is also a reason.


Assuming the server will eventually get hacked is how you design secure systems. You assume the worst, and ensure you are still secure. The server code being designed well has nothing to do with the hardware running the system being hacked. For instance, no amount of good design in my server stops a remote 0-day against Linux.

This is a subtle point, but that thinking is misguided. You, as a client, have no control over what server you're actually talking to. The only way to be sure you are secure is to be secure independently of whether the server has been hacked, or is malicious, or whatever. Thus, you design your system such that the client only discloses information to the server that is allowed to be public (public keys, encrypted messages the server can't decrypt, etc). In that way, you don't care what code the server is running, and auditing the server makes no difference to security.

The whole purpose of cryptography is to reduce the set of things you need to trust. Including the server in the trusted base is not only a worse design but a false sense of security if your trust ends up misplaced (hacked).

So, this is different than how a bank would work, for example. In the case of a bank, you have to trust that their servers are secure, and their software being open would help with auditing that. In this case, keybase is not the endpoint you’re talking to: another keybase client is. The keybase server is just an intermediary, just like any router on the internet.
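The "server is just an intermediary" property can be checked from the client alone: audit exactly what the client puts on the wire. A toy sketch using a one-time pad (real systems use authenticated public-key cryptography, not this; the point is only that the payload reaching the server is ciphertext):

```python
def encrypt(plaintext: bytes, pad: bytes) -> bytes:
    """One-time-pad XOR. The pad is shared out of band between the
    two clients and never sent to the server. XOR is its own inverse,
    so the same function decrypts."""
    assert len(pad) >= len(plaintext), "pad must cover the message"
    return bytes(a ^ b for a, b in zip(plaintext, pad))


def outbound_payload(plaintext: bytes, pad: bytes) -> dict:
    """Everything the client discloses to the server. Auditing the
    client means verifying nothing in this dict lets even a fully
    malicious server recover the plaintext."""
    return {"ciphertext": encrypt(plaintext, pad)}
```

If the audit confirms the client only ever emits payloads like this, then the server (and every router in between) is outside the trusted base, open source or not.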


You’re just pretending the server is outside of scope. In reality you have to consider the security model of the whole system.

The server is not outside of scope.


You can construct a client such that the server IS outside of the scope. You can determine if a client has such a property from the client alone. I would only want to use a client such that the server is outside of the scope. It's strictly better if the client has this property even if the server is open and magically trusted to be running the code you expect. The keybase client is such a client. If it isn't please inform someone with details on why.

edit: Do you audit all of the software running on all of the routers between you and keybase's servers? Why or why not? If not, why does this reasoning not extend to the servers? Why would the routers not part of the whole system, top to bottom?


It would be great to audit all of the software running on all of the routers I use. I hope that we someday move to a model where all routers run open-source attestable software.

Let me phrase this another way: if there's nothing to hide on the server, why isn't it open source?

We can go back and forth on this forever. My position is simple: strictly speaking, it's more rational to trust open source software. Trusting closed source software ultimately boils down to "trust me". I would love to reduce the degrees to which we have to blindly trust the systems we use.


And yet, all of the routers are not audited, and I presume you believe in the security of some applications that use them. In other words, the trusted base of the system does not depend on all of the components in the system (you start with the assumption that any closed components are hostile). The keybase servers are exactly the same.

The answer is that they do have things to hide: anti-spam/anti-scam systems, for example. The question is if they are hiding something that matters for security. You can determine this by auditing only the client.

Sure, open source software is great, and has many uses. In this case, it has no use in ensuring that keybase is secure. Somehow, you don't have to trust a great many components in the secure software you use on a daily basis, and yet you have to trust keybase's servers because.. reasons? And somehow this trust is important even though you'd still be blindly trusting that they're running what you hope.

I won't argue that some people would benefit from the server being open source, but to argue that open sourcing it has anything to do with security is just inane and FUD.


The question boils down to "what can an attacker learn by owning the server" and right now you don't know. You can pretend that doesn't matter, but it's not inane or FUD.

You have to model the whole system to understand the threat model. Anything less is blind trust.


You can know exactly what the attacker can learn because you can see ALL of the information that your client passes to the server by auditing ONLY the client. Your argument applies equally well to every single router or middle box on the internet, and it's just as wrong there.

You prove that it doesn't matter by assuming that keybase is running the most malicious code possible, auditing your client, and deciding that the system is still secure. This is what auditing the client means.

Additionally, to bring up this fact again because it has only been hand-waved away: Even if the server was open source, there is no guarantee they are running that code. Thus, there is no benefit to security until systems exist (somehow?) to prove the server is running the code you expect.

Double additionally: even IF you can prove that the server is running what you expect, how do you know that some box, after https is peeled off, but before the request makes it to the server, is not sending the same request off to some other, malicious, server?


I am going to say this one more time because I think it's a real point, and dismissing it out of hand is unreasonable: there are things that can be learned from the server.

It's one thing to tell people that you aren't logging anything. It's another thing to show everyone you're not logging anything except the account creation date and last access date by open sourcing the software and then show exactly that in a response to a national security letter: https://www.aclu.org/open-whisper-systems-subpoena-documents.


Did you notice how your proof rests entirely on the NSA letter, and not the source code of the server at all? Isn't a world conceivable where they open sourced the server, with no logging, and then sent an NSA letter that contained information that wasn't logged in the open source code? If this is somehow impossible, please explain how.


Did you notice how you can compare the NSA letter to the source code and realize the effect is that they're the same?

If you didn't have the NSA letter, would you be able to verify the source code? If another project got an NSA letter and responded to it, would it tell you anything about the source code?

This is simple: Having the source code means you get to learn more from the other signals, no pun intended, of how that source code is used.

Again, as we move to a world where servers have more verifiable code running on them, the value of having open source code will increase.


I don't understand your points about the NSA letters, which makes me think that my point was missed. I am saying that the NSA letter claiming that only some information was logged is fully independent of the open source code of the server. Assuming the NSA letter reflects the truth, there could be more information or less information than what appears to be collected from the open source server code because, once again, the server does not have to be running the open source code, and even if it were, that does not preclude other systems from running against the same information the server has access to. Hence, open sourcing the server does not affect the security of the system at all. If the system is insecure without knowledge of how the server works, then the system is insecure. Period.

I think you're trying to argue that open source is good, and I agree with you. Open sourcing the server has many benefits. The only point I have consistently been trying to make is that open sourcing it does not help with determining the security of the system, whatsoever.

edit:

> If you didn't have the NSA letter, would you be able to verify the source code?

No, but even if the code was open sourced, you would not be able to verify the code that is running.

> If another project got an NSA letter and responded to it, would it tell you anything about the source code?

It would tell you something about the code they are running, yes, but nothing about the code they open sourced.

> This is simple: Having the source code means you get to learn more from the other signals, no pun intended, of how that source code is used.

This is equally simple: the source code that is open may have nothing to do with the source code that is running, and you must assume that they are not equal when auditing the security of the system.


Just to be extra clear, the chances of someone lying to the NSA in a letter are really, really low. Given that we can compare the response to the NSA to what is expected and it matches, we can make some inferences that the software running on the servers is as presented.

In contrast, if you received an NSA letter for keybase and they delivered similar information, you couldn't make any suppositions about the server's code.

To be extra, extra clear, to me, the future of the private internet is further verifiability of remote systems. That begins with Open Source. I concede that we aren't there for most parts of the systems we use today, but we are getting better (see attested contact discovery in Signal as one example).


Why would I not be able to make inferences about the software the servers are running if the chances of lying in the letter are low? I haven't read Signal's source code, and yet I believe with just as much confidence that they aren't logging extra information as I would if keybase had sent the same NSA letter. To me, Signal's source code is effectively closed, and reading it wouldn't increase my belief. (Have you read all of their server's source code? If not, how do you justify your belief?)

The article on attested contact discovery states "Of course, what if that’s not the source code that’s actually running? After all, we could surreptitiously modify the service to log users’ contact discovery requests. Even if we have no motive to do that, someone who hacks the Signal service could potentially modify the code so that it logs user contact discovery requests, or (although unlikely given present law) some government agency could show up and require us to change the service so that it logs contact discovery requests.", which is exactly the point I'm making. They choose to solve it by signing code and ensuring that exactly that code is running (seems like they just move the trust to Intel. Hopefully SGX never has any bugs like https://github.com/lsds/spectre-attack-sgx or issues with the firmware, as noted by the Intel SGX security model document), which is fine, but an equally valid way to do this is to make it so that the secure operation of the system does not depend on what code the server is running.

Doing that has some tradeoffs: there's usually overhead with cryptography, or an algorithm you need may not even be possible (Signal disliked those tradeoffs for this specific algorithm), but for some algorithms, it's entirely possible to do. For example, one can audit OpenSSL's code base, and determine, regardless of what the middle boxes or routers do, that the entire system is secure. Just replace OpenSSL with keybase's client, and middle boxes with keybase's servers, and do the auditing. Hence, open sourcing the server is not necessary for security. Would it be great if more systems could be audited? Absolutely. Is it always necessary for security? Absolutely not.

edit: Another quote from the article: "Since the enclave attests to the software that’s running remotely, and since the remote server and OS have no visibility into the enclave, the service learns nothing about the contents of the client request. It’s almost as if the client is executing the query locally on the client device." Indeed, open sourcing the code running in the secure enclave is effectively open sourcing more code in the client.


Just to be clear, code running on a remote server is not code running in the client. Just because the server attests to the client doesn’t mean the client is running that code. You still have to do all of the threat modeling for the attested code differently from the threat modeling for the client.

I’m not yet prepared to publicly get into all of the nuances of SGX, but I think it’s worth noting that there’s something very interesting happening there. I look forward to being able to discuss my team’s technical findings on the subject in public.

To summarize why this is so interesting: the attack surface is the whole system. Enclaves let us extend parts of our trust model to systems we don't own. That is a real change and, if it works, it's going to change how systems are designed at a deep level. The problem is that there aren't very many working implementations of SGX in the wild (Signal is the only one I know of).

We’ll see where the wind blows.


Enclaves are interesting, and I also look forward to all of the new things they allow. But none of that bears on whether open-sourcing the server is important for security, given that one can audit the client and the client is not designed to require a cooperating server.

I'm tired of trying to get you to understand this point and having you respond with red herrings and FUD. Please be intellectually honest when asking Keybase to open source their server in the future, and don't claim that it's relevant to the security of the system.


> And yet, all of the routers are not audited, and I presume you believe in the security of some applications that use them.

Why do you presume that? I certainly don't believe in the security of many applications that I use. I generally try to avoid putting any damaging information into them though.


I didn't say you believe in the security of every application; I said you believe in the security of some applications. For example, websites secured by HTTPS do not require the security of routers to be secure.


This looks even less chill than the previous message, which didn't (iirc) accuse "someone from the Signal project?" of making "security theater requests".

Your comment would be much better without any allusions to this person's affiliation. Just answer the question directly without casting aspersions. You seem confident in your answer, so that shouldn't be hard.


Imagine how much harder it would be for your dastardly competitors to "distract" and "divert", if you or someone else from your project actually addressed the obviously legitimate question of keeping the server closed-source?

I mean other than coquettishly dropping a tantalizing hint by saying 'yet?'. That's nice, but insufficient.


This degree of hostility and the borderline doxing is off-putting, to the point that my trust in your organization is degraded.


Borderline doxing? The original poster literally has this information in their bio of this very website.


MobileCoin != Signal.

I literally do not work for Signal.


The only named advisor for MobileCoin is the founder of Signal. "Work" is not the only conflict of interest in the world, and you're dodging by consistently framing it as one of employment.

We know you don't work for Signal. We also know that you very obviously have a close professional relationship (at the very least) with its founder.


I have zero control over the Signal project in any way, shape, or form. The fact that Moxie advises MobileCoin has nothing to do with his work at Signal. I can't force Moxie or anyone at Signal to do anything.

Moxie and I have a close professional relationship but I'm not sure what bearing that has on asking for the code of Keybase's server to be open-sourced. That's not a biased statement, and I would say the same thing in any thread about Telegram, WhatsApp, or FB Messenger's privacy. It's all the same.

If you want trust, you have to be open source. That statement has absolutely nothing to do with Signal.

Yes I use Signal. Yes I'm a fan of the Signal team's work. No, I don't think Signal would be better off with a closed source server. Yes, I do think Keybase should open source their server.

I honestly have no idea why this is even controversial :/.


> Moxie and I have a close professional relationship but I'm not sure what bearing that has on asking for the code of Keybase's server to be open-sourced.

You have a professional relationship with the founder of one of their competitors. It's appropriate in those cases to note that you have a bias. I realize you don't think you have a bias, but that's the whole point.

> If you want trust, you have to be open source.

That's such a confusing statement, and a particularly misleading one coming from someone who works for a crypto company.

The whole point of end to end crypto is that you don't need to trust the server.

> I honestly have no idea why this is even controversial :/.

That's the core of the problem. You have a professional relationship with the founder of Signal. If you comment on a thread about Signal, or its competitors, we shouldn't have to click the link to your site to find that out.

Think of it as building trust...


I see. I will include that disclaimer in all threads I comment on about Signal in the future. Thanks for explaining this to me.

I've been a fan of Signal since it was RedPhone and TextSecure and my professional relationship with Moxie is quite recent in the scheme of watching the rise of his projects. I apologize if my lack of awareness was offensive, it was unintended.

Edit: Just to be clear, I don't think you need to have the server open-sourced to trust the end to end encryption of the messages, but that's just one part of the overall trust model.


I don't think any kind of disclaimer was necessary. Your point about the Keybase server being closed-source has nothing to do with Signal and is completely valid no matter what competing interests you may have.


MobileCoin is not Signal. Fuck, I had a Catholic priest listed as an advisor once. It doesn't mean my startup had a direct line to the Vatican.


Yes, but we’re discussing the pope here not a priest.

Surely you’d have a direct line to the Vatican if you had the pope as an advisor, no?


You're also diverting from the point to argue that Signal is worse than Keybase for server trust. Releasing server-side code comes from the open-source philosophy that Keybase claims to be a part of on its website. Several people in that Github issue would like to self-host Keybase servers - open source is all about that kind of accessibility. No one's making the claim that publishing the code would make them trust Keybase more, though Signal arguably has benefited from releasing its code for public audit. What would be the cost of publishing your server-side code?


Ever worked in licensing large systems that encompass dozens of differently licensed components? That often costs a lot, for many reasons.

