Oh interesting. I don't think we've talked about this decision publicly, so I can write about it for a second. Not letting people re-use a device name is an inconvenience, I admit, but arguably it's not like other cryptography inconveniences, where people are confused, troubled, etc. We figured people would say "huh, weird requirement" and pick a different name and move on.
The goal is a 1-1 mapping between devices (keys) and these names. So whenever we need our UX to talk about a key, it can talk, safely, about it in terms of device names. Once committed to your chain of signatures, "Laptop-Warhol" means a specific device key, and it can't be used again. So, for example, if one of your Keybase installs wants to tell you "oh, Laptop-Warhol just added a new device, iPhone-Vangogh" then it doesn't need to look like this: "Key 34858234589234895897234598734 added key 90123845890230948234234324."
If Laptop-Warhol could mean multiple devices (keys), well then we'd need to start talking about the keys. Which is a nightmare for usability.
A lot of this decision was driven by something we've seen with Apple devices. Every now and then I'd get a popup on my computer - say when updating iOS - that said something like "you just started using iMessage on a new device, 'chris's iphone'. if you don't know what this is, you should freak your shit out." Well - it has basically said that so many times, with the same names over and over, that I can safely assume it's a near-useless warning.
Note I mean unique to you; 2 different users on keybase can name their devices the same.
Generally speaking...it's been a goal from the beginning that names on Keybase are meaningful. Similarly, if you look up "chris" in our merkle tree (which is pinned to Bitcoin), that leads to a deterministic chain of signatures. Inside that chain, where I mention "work-imac-warhol", you're guaranteed to see the same answer as I am. So "chris" is as good as a key fingerprint or safety number. And so is my device name.
I understand where you're coming from, but my irritation comes from the fact that a device can't be revoked from my account and its name recycled. (E.g. after an OS reinstall. I don't really like naming my device MyMainPhone-2, because logically it's not my second phone; it's the same device to me.) If a device that's still active already had that name, I would agree with your decision that duplication shouldn't be allowed.
Maybe some people are more irritated by that than others (I'm certainly the former type); perhaps consider an "advanced" option to remove existing names, for those who know what they are doing?
I'd be curious if people on HN would want a zero knowledge survey and voting system inside Keybase, and if so, what would it look like?
The background: we talk about it sometimes as a solution to a real problem: in certain teams and workplaces, people can be afraid to give honest feedback (who dares to submit an "anonymous" survey to HR?), but Keybase may be in a unique position to let people in a group give written feedback, vote on something important, or rate an experience. Without any risk of exposing identity, short of writing something identifiable in a text field.
I'd be curious, personally, to see management get a yearly vote of [no] confidence, for example. Is that crazy?
Keep in mind we are mostly focused right now on user experience and performance improvements. But we allocate a certain amount of time to cryptographic features that just aren't possible in other software, such as this coin flip thing. We've been talking about voting and surveys, too.
OT: One of the things I find interesting is that "zero knowledge" has become a buzzword. On the one hand it is frustrating, because when cryptographers say "zero knowledge" we mean something very specific and rigorously defined (a survey protocol cannot be zero knowledge because the results of a survey do reveal something about the respondents' inputs). On the other hand, the fact that non-experts are comfortable with the idea of using an interactive protocol to securely compute functions means there is one less mental hurdle to deal with when trying to deploy these technologies.
From the anonize paper [1]: “Our system is constructed in two steps. We first provide an abstract implementation of secure ad-hoc surveys from generic primitives, such as commitment schemes, signatures schemes, pseudo-random functions (PRF) and generic non-interactive zero-knowledge (NIZK) arguments for NP.”
Thank you. I had a client say they were providing a zero-knowledge authentication system, which didn't mean you could prove you're logged in without revealing your username (or something like that), but simply that you could log in using a public/private key pair.
This is absolutely very useful. Definitely within a specific team or company, but generally anywhere, especially when combined with Keybase's proven identities feature. I can imagine a "Vote with Keybase" button ubiquitous on the internet wherever they want to conduct surveys.
Further: it will remove the friction of doing anonymous surveys. I would do them way more often for various things (similar to the coin flips) if they were easy to do.
A ring signature would do. You can be sure that the signature came from one of a set of public keys, without knowing which particular private key was used.
I worry this ends up being a technical solution to what is ultimately a social problem. If the problem is that people feel threatened submitting feedback at their workplace, the issue is the structure of the workplace.
Yes, but it's possible for these structural problems to be invisible to the people who could change them, precisely because of the structure that's set up. There are definitely cases where the structure is there on purpose to create this sort of environment...and this won't do anything to fix those. But there are also cases where people are afraid to give honest feedback, yet if they were able to do so anonymously, management would either be pressured to make a change, or would want to.
That’s a fair point, but even if workplace norms are sane, the technical solution additionally protects against (say) a rogue IT admin gathering info in secret, or against future policy changes.
I recently registered keybase.vote for a related web app idea. Rather than anonymous voting, I wanted the opposite: authenticity in voting, polls, surveys, etc. A common problem in surveys is verifying that the respondents are real and are people you trust. Within small communities, you would have a large enough web of trust that you could rely on who you are following to determine whose responses you pay attention to in the result set.
So my idea was simply to have the survey/poll generate a text field of all the Q/A in a JSON body, kinda like the proofs of keybase, and then have the user copy/paste it and sign it on keybase and then submit their response.
I would have the whole result set downloadable in raw format that anyone could easily verify with keybase commandline tools. But I’d also employ the web of trust created by following on keybase.
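A minimal sketch of that flow (the field names, the survey label, and the exact CLI invocation here are illustrative, not a real API):

```python
import json

def survey_body(questions, answers):
    # Canonical JSON (sorted keys, no whitespace) so every verifier
    # sees and signs the exact same bytes.
    body = {"survey": "keybase.vote demo", "qa": list(zip(questions, answers))}
    return json.dumps(body, sort_keys=True, separators=(",", ":"))

payload = survey_body(["Ship it?"], ["yes"])
# The respondent would then sign the payload with the Keybase CLI, roughly:
#   echo '<payload>' | keybase sign > response.signed
# and anyone could check entries in the raw result set with `keybase verify`.
print(payload)
```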
I thought I’d try it out and see if it works. I like the idea of Keybase being a general way to authenticate without needing any elaborate login process or email account.
imho the more interesting cryptographic proof would be proof of address or bounding box. I feel that if you allowed third parties to pay you for supporting validation of locality via cryptographic material sent on a postcard, it would blow open what's possible with digital systems. Knowing location will be increasingly powerful, imho. Our opinions count most in local spaces, at least with city-building. And I feel that third parties would be willing to fund the main cost of postage if it assured them of certain geographic bounds on users.
In order to cheat that system, people would need to engage in mail fraud or buy a PO box.
Happy to discuss, chris. Sidewalk Labs is setting up camp in Toronto, and I was speaking about the above at a local event, and they were really interested in the concept. I had a call with their head of identity, but was disappointed that he couldn't say anything of substance on _why_ it was relevant to SL efforts, at least not without my signing an NDA. As a community organizer in the civic tech scene, I had no interest in that. More secrecy in the smart city / open gov sector :/ blech
Yes! Each horizontal row in the rectangles represents a participating device. The purple/blue rectangle that comes in first represents all the bytes of the commitments coming in. Since we constrain the size of the rectangle it makes (IMO) a cool visual effect as the rows squeeze to accommodate more data.
Each little square inside it represents a byte, so we map bytes (0..255) to colors ranging from a blue to a purple.
The matching secret is also 32 bytes, and of course those come in in random order, so we line up secret rows with the matching commitments. It sure is fun to watch.
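For the curious, that byte-to-color mapping is easy to sketch; these exact endpoint RGB values are my guesses, not the ones the app actually uses:

```python
def byte_to_color(b):
    """Linearly interpolate each RGB channel between a blue and a purple."""
    blue, purple = (0, 64, 255), (128, 0, 192)  # assumed endpoint colors
    t = b / 255
    return tuple(round(c0 + t * (c1 - c0)) for c0, c1 in zip(blue, purple))

print(byte_to_color(0))    # (0, 64, 255) - the blue end
print(byte_to_color(255))  # (128, 0, 192) - the purple end
```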
We played with some different visualizations. We actually had one version with a 3d sphere getting covered in data, but it felt too gimmicky. This gives a good feeling of people showing up.
Perhaps an even simpler analogy is a light switch. Each person decides randomly either to flip the light switch or leave it where it is. This is basically what random XOR'ing is.
If you're one of 10 people doing this to the light switch, then as long as you choose randomly, it doesn't matter what the other 9 people do. It has a 50% chance of ending up on and a 50% chance of ending up off. Even if the other 9 people are cheating together.
Of course this has the problem that whoever goes last wins, which is why the commitment ceremony is necessary.
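A quick simulation of the light-switch argument (a sketch, not Keybase's actual protocol): even if nine of ten participants collude, one honest random bit keeps the XOR uniform.

```python
import secrets

def flip(num_cheaters=9, trials=10_000):
    heads = 0
    for _ in range(trials):
        result = secrets.randbelow(2)   # the one honest, random switch-flip
        for _ in range(num_cheaters):
            result ^= 1                 # colluders flip deterministically
        heads += result
    return heads / trials

print(flip())  # hovers around 0.5 regardless of what the cheaters do
```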
Author here, thank you for the comment. "flip again" was added at the last minute, after a night at a bar...where some beta testers were making real-world decisions using the app.
I didn't cover some details I find fascinating but which might have been overkill outside of HackerNews. For example, some assume the "one-way"ness of a hash function makes this protocol work. But that's not enough: we can't have Alice generating 2 different secrets with the same hash, even if Barb can't reverse the hash. What we also need is _collision resistance_, so Alice doesn't get to pick and choose what to expose in the final stage.
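Here's a toy hash commitment (a sketch using SHA-256; Keybase's real scheme may differ) showing the binding property that collision resistance buys: once Alice commits, only one value opens the commitment, so she can't pick and choose at reveal time.

```python
import hashlib
import secrets

def commit(bit):
    # Commit to a bit by hashing it together with a fresh random nonce.
    nonce = secrets.token_bytes(32)
    digest = hashlib.sha256(nonce + bytes([bit])).hexdigest()
    return digest, nonce

def opens(digest, nonce, bit):
    # A commitment opens only for the value it was made with.
    return hashlib.sha256(nonce + bytes([bit])).hexdigest() == digest

digest, nonce = commit(0)       # Alice commits before seeing Barb's value
print(opens(digest, nonce, 0))  # True: the honest opening verifies
print(opens(digest, nonce, 1))  # False: she can't claim the other bit
```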
Lately, we've made much bigger, but less blogworthy, improvements to Keybase. It's faster, team on-boarding is getting better, and we'll be launching a very improved UX in the next month or so. I rarely get to stop and write about Keybase, so this was fun.
And for anyone looking to test, I'm `chris` on keybase. You can start a chat with me and do a `/flip cards 5 chris,yourname` and we'll see who gets a better poker hand. If you can deal yourself a flush or better on your first try I'll give a prize or something? Who knows. Anyway, we're having fun with it.
Author here. I'm seeing the same comment in 4 different places on here, worded with various amounts of hostility. I now wish I had addressed this in the FAQ on the post.
There's the suggestion that an exploding feature is worthless, given your partner can just take a screenshot or video of what you sent.
This suggestion misses (1) that your relationship with a partner is disproportionately likely to be okay at the time you send something (i.e., you trust them THEN), and (2) that there's a whole different class of adversary who compromises your or your partner's devices in the future.
SnapChat, as far as I know, has none of the cryptographic implementation of Keybase. And yet it has likely protected hundreds of thousands of kids from severe bullying. Consider the teen girl who sends the goofy sexy pic to her boyfriend. Before the advent of exploding messages, he might've iMessaged or emailed that to a friend, just one friend, his best friend, out of pride. And that friend sent it to a few more, and so on. Not out of malice, but suddenly the whole school has seen her pic of god knows what and she literally wants to die. But with Snapchat, taking a screenshot is knowingly violating a social agreement. It's also violating the trust of his current girlfriend - everyone knows it's not okay to screenshot that shit. And the number of people who would do that is much tinier. Second, consider the far worse scenario: she dumps him a month later and until then he has been NiceGuy. But then he becomes r/niceguy, the guy who will look through the old pictures and spread them around.
Finally, let's not forget that your device can be compromised by loss, theft, or hackers, at any time. Exploding messages are gone when that happens.
People can be tricked, compelled, coerced, blackmailed, and hacked. Or just turn evil. All in the future. Which is what a timed message protects against. This is why Keybase is doing this. Paired with encryption it's quite powerful.
The most important purpose of these exploding message capabilities is destruction of data that doesn’t need to be archived.
The primary threat is compromise of a device. Keybase allows you to revoke keys but that assumes you are aware that the device has been compromised. Which is already too late for sensitive messages.
The average user doesn’t understand data persistence, or secure destruction of data. Manafort is a good example of this. I wish apps just expired messages by default. I don’t understand why WhatsApp doesn’t have this feature.
As a user of messaging services, I nearly never want to delete a message. I want to be able to use my digital memory extension (phone) to store messages so that I can easily recall my conversations. Rarely do I want to delete a message. In fact, I would only want to delete it if it's sensitive: I rarely message such sensitive things. Most people fall into this camp. It's rare for someone to never want any message to be kept.
Why do you want your messages deleted by default when you use one of these secure messaging clients?
Plenty of people feel exactly the opposite, and avoid using messaging services for many purposes because of it. They want the bulk of what they say to fade away, because it is ephemeral, and they don't want to worry about it forever. More and more people are aware that, even if what you say today is perfectly benign, tomorrow it may be a problem. And why create potential problems, when there is absolutely no benefit to you in putting your request to your partner to buy some eggs on the way home on a permanent record?
You might worry about not being able to find something you said. Others worry about being able to find something they said.
I personally chose my defaults appropriately, with work stuff getting archived and everything else not even getting backed up. And realistically, even the work stuff is completely useless after a couple of years; a problem I have is not finding information, but finding current, useful information.
Ephemerality is liberating. A large portion of social media use is not about exchanging information (which would be useful to persist) but about socializing. Just as you probably wouldn’t feel comfortable if every conversation you had with your friends while hanging out were recorded, a lot of users (particularly young users) feel more comfortable expressing themselves when they know with reasonable certainty that their communications are not being recorded online. It’s often for sharing moments and making jokes and hanging out, not for conveying actionable information.
I deliberately don't pay for Slack because of this. The 10,000 message limit is perfect for "enough memory to be useful, not enough to be dangerous". I'd love to see it as a feature in other messaging apps (i.e. "permanently erase all messages over 6 months old")
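That retention policy is simple to sketch (the `ts` field name and the 180-day cutoff here are illustrative, not any app's actual API):

```python
from datetime import datetime, timedelta

def prune(messages, now, max_age=timedelta(days=180)):
    """Keep only messages newer than max_age; everything else is dropped."""
    return [m for m in messages if now - m["ts"] <= max_age]

now = datetime(2019, 1, 1)
msgs = [{"ts": datetime(2018, 12, 1), "text": "recent"},
        {"ts": datetime(2018, 1, 1), "text": "old"}]
print([m["text"] for m in prune(msgs, now)])  # ['recent']
```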
Does Slack actually do that, though? Or just soft-deletes the older messages, hiding them from the UI? (I don't think they make a statement either way)
My understanding is that it just hides them from the UI; if you upgrade your plan, you get access to all your old messages and files that were previously "gone".
Yes this is correct. In fact even without a paid subscription you can access all files that have been added to a Slack through the web UI (myslack.slack.com/files). You can't see the related messages, but all the files (images, snippets, etc) are available as one big list.
Hell, I wish messaging services made conversation much more searchable. I hate having to scroll and scroll to find some past conversation topic that maybe had interesting thoughts/links/shared media.
Any client with proper log files (many IRC clients, Pidgin, etc) is much better than Slack, which uses word indexing rather than full search, meaning it doesn't find the message "helloworld.com" when you search for "world".
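A toy illustration of the difference (a naive inverted index; Slack's real tokenizer is of course not known to me):

```python
import re

def word_index(messages):
    # Build an inverted index from word tokens to message numbers.
    index = {}
    for i, msg in enumerate(messages):
        for token in re.findall(r"\w+", msg.lower()):
            index.setdefault(token, set()).add(i)
    return index

msgs = ["check out helloworld.com", "hello world"]
idx = word_index(msgs)

print(idx.get("world"))                                 # {1}: token search misses helloworld.com
print([i for i, m in enumerate(msgs) if "world" in m])  # [0, 1]: substring search finds both
```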
I have never searched my message text history with the exception of trying to find images sent to me. Never content. Most companies I’ve worked at have a similar policy of not archiving text messages from internal chat. No reason to keep content, minimizing the amount of data you archive is a core element of security and risk mitigation for a number of reasons. Plenty of large organizations don’t archive employees Lync/internal chat messages for similar reasons. And from a threat perspective you don’t know ahead of time what information an attacker will find useful.
Sure, as a recipient I want to keep a history of everything. But as a sender, I might want sometimes to send a message with some guarantees that it will self destruct after a period of time, mission impossible style.
Because WhatsApp is done with the endeavor. The founders would have wanted this kind of feature, but they've since parted ways after selling out to FB, probably because FB isn't interested in such features or privacy.
> SnapChat, as far as I know, has none of the cryptographic implementation of Keybase. And yet it has likely protected hundreds of thousands of kids from severe bullying.
Is this true? (Asking with no implication of criticism or being a leading question - I just genuinely don't know the answer)
I can believe both that these teens were going to sext each other anyway and Snapchat is keeping them safer, or that they weren't going to and Snapchat has convinced them that it can be done more safely than it can actually be done.
Has anyone done studies on this? (Is it even possible to do studies? I suppose you'd either need information from Snapchat itself on how often they detect screenshots, or from high schools on bullying cases over time and whether Snapchat is involved + hope that bullying cases that get escalated to adults at high schools is a meaningful proxy for actual bullying.)
I'm inclined to buy your argument that because of the implementation making stored pictures not the default, and the social pressure not to take screenshots, probably Snapchat's disappearing messages are better than iMessage. But this seems like the sort of thing that's dangerous enough (in either direction! if the technology works and we refuse to deploy it, that's bad too) that hard data would be useful.
Sexting on Snapchat is rampant in high school today (I'm a current high school student). The self-destruction principle has allowed people to feel comfortable sending explicit photos to each other -- in relationships, it's almost ubiquitous.
That isn't saying that Snapchat has removed the potential of spreading explicit content. As someone mentioned in another comment, screenshotting the snap circumvents the system. It's also just as easy to take a photo of the screen with another device -- both an untraceable and permanent record of the photo.
As a whole, Snapchat has had a net positive effect for people my age. I can attest that teenagers make unwise decisions now and again, and Snapchat has helped in that those rash decisions are less likely to bite us in the future. While I don't have the data to back up the claim that hundreds of thousands of kids have been protected due to Snapchat's impermanence, I certainly wouldn't be surprised if it was true. It's the most popular social network in my demographic for a reason -- it oozes the ephemeral teenage spirit.
> "Risk compensation is a theory which suggests that people typically adjust their behavior in response to the perceived level of risk, becoming more careful where they sense greater risk and less careful if they feel more protected. Although usually small in comparison to the fundamental benefits of safety interventions, it may result in a lower net benefit than expected."
> "In the run-up to the crash, consumers and even policymakers had come to believe that smart regulators and forward-thinking bankers had made the world of money a much safer place.
> "The fundamental insight of Ip’s new book, Foolproof, is that this very belief was a key factor in the lead up to the crash. When people believe they are safe, they take more risks – they drive faster, in motoring terms – and “speed makes everything worse”. Or as the economist Hyman Minsky, whose work Ip revisits, put it: “Stability is destabilising.”
There are applications in our field too:
- safety features for users might make them behave less safely (e.g. exploding messages)
- better reliability of systems might lead us to put more trust in them, leading to even bigger outages when they occur (e.g. centralising trust in cloud providers)
It's interesting to see things like Chaos Engineering (https://principlesofchaos.org/) introducing intentional "danger" into a system in order to improve system-wide stability. Of course, maybe Chaos Engineering will give us more trust in our systems which may lead us to take even bigger risks...
Yup, that's basically what I'm getting at, thanks for the links!
So, I'm okay with risk compensation if people are net doing better. I don't think that "if even one person is hurt by this, that's too much" is a meaningful basis for decisions, especially when there's a risk that even one person will be hurt by not doing the thing. So at the risk of reducing people to numbers, if, say, 100 teenagers send sexts when they otherwise wouldn't have and get screenshotted, but 1,000 teenagers send sexts when they otherwise would have sent them to a non-disappearing-by-default client, and now their photos don't get copied because of social pressure / high-but-not-impossible technical barriers, that still seems like a clear win.
That's the sort of data that I think would be very interesting to inform good engineering decisions, and also pretty impossible to get.
I would also expect people's propensity to take screenshots to be correlated to how sensitive the image is. For example, I would expect many people to take a screenshot of a nude pic their partner sent just so they can look at it for longer than the default timeout of a snapchat message; this is even more likely for teenagers who may be less mature about not betraying the other person's trust.
Indeed, that's the digital equivalent of the $5 padlock. Sure, you could pry it open with a crowbar, but most people won't. IMHO the situation is more of "opportunity makes a thief" rather than "keeping honest people honest" - crossing the line is very explicit in both cases, analog and digital.
I don’t think you can turn back the clock and do studies with any sort of control nowadays. The generation using Snapchat is the one prior to mine - they saw the value from my generation getting bit over and over from text logs and pics getting posted. Sexting existed the second the technology was there for it.
> I can believe both that these teens were going to sext each other anyway and Snapchat is keeping them safer, or that they weren't going to and Snapchat has convinced them that it can be done more safely than it can actually be done.
I guess I quoted poorly - I meant "Is it true that Snapchat has likely protected hundreds of thousands of kids from severe bullying," not "Is it true that Snapchat does not use encyption in its implementation of disappearing messages".
I think it is possible that Snapchat has net caused more kids to get bullied as a result of ill-advised sexting, by being the company advising ill. I can see both arguments and I don't know which one is actually true.
Slightly unrelated note, but you're both also talking about how the official Snapchat app chooses to handle snaps (opt-in and notifying the user) when there's a multitude of workarounds and non-official snap apps only a Google search away that make it extremely simple to save a picture someone sent you without the sender knowing.
Preventing phone-screen capture isn't really something you can do completely, but Snapchat could certainly afford to put its money where its mouth is and try to provide its users with a safer experience by cracking down on 3rd-party apps.
> But with Snapchat, taking a screenshot is knowingly violating a social agreement.
My exposure to SnapChat suggests that this is not the case. Screenshoters are treated more like rascals than felons. This may depend on the content of the message though. My incoming messages tend to be more silly faces than nudes.
Edit: Or rather, it is the case, but the social agreement is a lightly enforceable one. Closer to not holding an elevator door than eating a coworker's lunch.
This is what people don't seem to get, exploding messages aren't an airtight solution to the risks of sharing sensitive information with someone. You're always taking a risk when you do that. Exploding messages change the default way that sensitive information is handled, and changing the default can have a profound impact, for all the reasons you lay out.
My issue is with the way they are marketed. I would be cool with just a “don’t retain” flag that does just that.
But making a big deal about “exploding” is dangerously incorrect that many users will make incorrect assumptions.
I’m not worried about screenshots, I’m worried about my plugin that archives all inbound text, which then requires me to respond to subpoenas, etc.
From a security standpoint, this feature should not impact behavior since it is meaningless. If users don’t understand this, then it will cause heartache.
I don’t see your point. If you archive all inbound text, this feature is clearly not for you. This is like saying a door lock isn’t useful for anyone because you keep your window open.
The people I chat with do not know that I archive (nor should they) and will have an inaccurate and misleading expectation of behavior.
To use your door analogy, it’s like telling someone that a door lock keeps people out when there’s an invisible teleporter that also gets installed with the door lock.
It’s a hard analogy to follow because me retaining information you sent me is different than me breaking into your house. If you send me info, it’s mine. The weird mental model is that you still control what you give to me.
A use case I run into often is with people I trust, so I don't fear they will take screenshots, etc, but I don't want to keep that data in the chat history. Most of the time I turn to protonmail using their expire option, now I can use keybase.
Most of the time is when I need to pass a password to coworkers.
I hate to use this adjective, but this feature is cute. I love the little bomb. I love the concept. I love how you've applied it to several types of things. And I love how you've taken something that could be complicated and made it simple.
I love this! And I love the bomb gif. I still miss your original logo, but have come to like the little girl.
Anyway, maybe it's just me, but I never communicate anything to anyone that would be hugely problematic if published. That is, for that persona. Which is carefully compartmentalized from other personas. So Mirimir has rather restrictive limits. My meatspace identity has even more restrictive limits. But some of my personas have no limits, and are basically throw-aways.
Edit: And that's basically how accounts work on HN, right? I mean, throwaway use seems quite common, and accepted.
Well, it has been for me, so far. But then, it's my main hobby these days, and I take extreme care.
If you're interested, I explore that and related issues in one of my series on the IVPN website.[0] There's also an old guide on nesting VPNs and Tor with VMs.[1] And a tribute to Kevin Mitnick, featuring onion SSH hosts for chaining.[2]
The tl;dr is that compartmentalization is the key. At all levels. At physical levels such as hosts and VMs, LANs and vLANs, and uplinks and proxy chains. And at behavioral levels, such as interests, forums and social media, projects, and language and writing style.
Mirimir is my only main persona that writes about privacy issues. He has temporarily had a few secondary personas for particular projects, just for casual deniability. But none of my other personas have written at length in English.
You send a message to someone whom you trust (and therefore won't screenshot). If their device is later compromised, forward secrecy ensures the message can't be retrieved.
Even revoking the compromised device is insufficient, as the attacker could have retrieved your chat history long before you realize you've been pwned.
These are great rationales, but I think they belong in the feature marketing and UI, not just the FAQ.
As publicized (by Keybase and every other platform), exploding messages appear to put control of post-receipt management in the hand of the sender. This is especially credible coming from Keybase, since you guys are educating a lot of people about possibilities with careful crypto (e.g. forward secrecy). This has risks... you mention the Snapchat user who was protected from bullying, but what about the teen who wouldn't have sent that pic in the first place but felt safer because of SnapChat -- only to be bullied over a screenshot anyway?
Your description here is that exploding messages make it easier for both sides to announce and abide by a social contract about deletion. A name like "flag messages for auto-delete" (I'm sure someone can do better) would set the right impression.
Don't forget a major reason for message accumulation: laziness. People often just don't bother to delete private messages. Especially true after long conversations because there might be stuff to keep in there somewhere.
I don't think anyone's being as hostile as you make it out to be. They're just talking about how you can't really guarantee safety, which is true.
And I find it weird that you're comparing yourself with Snapchat. Snapchat is a casual app, targeted at a completely different audience than the people Keybase targets (at least that's the impression I got so far)
Also, Snapchat is a mobile-only product, which makes all the difference. It's much easier to detect screenshotting on mobile than on desktop. And as far as I know, Keybase is a desktop-first app. So it's kind of ridiculous that you're comparing yourself to Snapchat.
I don't know if you are aware of above distinctions or not, but if you're not aware of this, there's something wrong here. You guys are supposed to be completely aware of all these subtle differences. And if you ARE aware of this, why are you trying to make these claims pretending there's nothing wrong?
I have nothing against Keybase, I'm just pointing out the faulty logic in this specific comment you're making (which happens to be hostile towards those who are just pointing out the issue with no trolling intent)
> And I find it weird that you're comparing yourself with Snapchat. Snapchat is a casual app, targeted at a completely different audience than the people Keybase targets (at least that's the impression I got so far)
I don't think they're comparing themself to Snapchat; I think they're using a hypothetical situation that everyone can understand in order to explain the threats that an "exploding message" protects against; Snapchat is used merely because the scenario is easy to understand.
Keybase may have started from the technical community b/c of its foundation with how it handles identity and encryption, but I definitely don't view it as an app targeted at a different audience. It is an app that can be used by the general public and I use the mobile version quite often. I don't find the comparison odd at all.
Fwiw, as a casual user not at any particular security risk, I really enjoy ephemeral chat. I don't like Snapchat as a main chat application (i.e., a Telegram-esque replacement), and aside from that I don't have many options. I think we're going to try Keybase out, assuming it has native desktop clients.
I will suggest that if you add this to the FAQ, you spend more time talking about how your device can be compromised by loss, theft, or hackers at any time (exploding messages are gone when that happens), and less time talking about how people can go from seemingly a Nice Guy to r/niceguy when a relationship ends. Make relationship drama a footnote, not your primary emphasis.
They always push features to their limits and then criticize. Even Telegram's "screenshot taken" notification can be overcome by taking a photo/video of the chat with another phone. But the hassle of doing that is often not worth it, so one can estimate the likelihood of a leak, while remaining completely unsafe against "special forces". We figured this out during one of our in-house intrigues, but didn't do it even with three phones on the table. Boring, unproductive, and shady methods were a high enough barrier to stop us. Do a good thing and don't worry about the pedants.
I agree that a feature doesn't have to be 100% foolproof to be beneficial. I also agree that leaving sensitive things lying around "by default" is a poor approach to security, and think that software should facilitate automated cleanup. However, I fundamentally object to the subversion of my will by my device or any program running on it. In my opinion, DRM in any form is not a solution - it is inherently evil.
I wouldn't mind messages that were flagged for automatic deletion after some time interval, if I were also provided with controls for when and when not to honor such requests. But currently Signal, Snapchat, Keybase, and others don't provide me with such a choice - they do what the sender requested, regardless of whether or not I approve.
It goes without saying that providing such an easily accessible option would almost certainly result in it being used at times in socially inappropriate or distasteful ways. But consider: do you really want to give up control of how your device behaves in an attempt to prevent others from behaving poorly? Perhaps applications should focus on providing practical security (i.e., facilitating, not forcing, automated removal), and leave the social aspects to the humans to sort out.
I would be hesitant to trust a controversial screenshot of text because I know that can be faked so easily. A lot of people don't have that awareness, though.
Another feature of Keybase's exploding messages is that when they expire, the text is replaced by the md5sum of the message. So a faked screenshot can (potentially; I haven't verified this) be proven to be faked by appealing to the md5sum in its place, crucially, without needing to reveal the contents of the original message.
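If it works the way the parent describes, that check is mechanically simple. Here's a sketch in Python (md5 only because that's the hash the parent comment mentions; I haven't verified Keybase's exact format either, so treat the details as illustrative):

```python
import hashlib

def md5_hex(message: str) -> str:
    """Hex MD5 digest of a message's UTF-8 bytes."""
    return hashlib.md5(message.encode("utf-8")).hexdigest()

# If an expired message leaves its digest behind, a claimed transcript
# can be checked against it without revealing the original contents:
displayed_digest = md5_hex("the original exploding message")
print(md5_hex("a doctored version") == displayed_digest)  # False
```

Of course this only disproves a fake; matching digests wouldn't prove much, since md5 collisions are cheap to construct these days.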
That would only work if everything else about the photo was identical - device, resolution, carrier, time, battery level. Seems very unlikely one could substitute even identical text in a screenshot with enough accuracy to get the same hash from an image file.
I'm not a Snapchat user but doesn't that app, at least on Android, alert senders when a receiver takes a screenshot? You can still take a picture with a second device and that functionality isn't totally portable, but interesting feature. Do you think that concept has any utility here?
If someone's determined, they'll root their Android and capture them anyway.
I think it's a great feature if you think of it (exploding messages) not as an assurance against someone who shouldn't be trusted, but that they won't forget to clean the trash.
Can you make that disclaimer obvious in the software? "Keybase exploding messages only work if who you're chatting with doesn't have a hostile client or intent"
That would come uncomfortably close to the toothpick instructions in the Hitchhiker's Guide to the Galaxy series (http://hitchhikers.wikia.com/wiki/Wonko_the_Sane). Does anyone think that technology can stop people from divulging secrets?
Blog author from Keybase here. Always game for a Hacker News discussion!
There's a subtle point I cut from my post for simplicity reasons, but which feels perfect for HN. I've been convinced by Mazières and the Stellar team that the classic "blockchain" works great for native tokens but is extremely dangerous for anything with counterparty redemption. For example, imagine the shitshow after a truly contentious fork, if there are tokens which are supposed to be redeemable with a counterparty.
Let's say Deutsche Bank had put €1 billion into colored coins on Bitcoin. Suddenly, after a fork (e.g. bitcoin vs. bitcoin cash), there would be €2 billion in IOUs in the wild. The people on each side of that fork would not roll over and die, and it's not simple to say "Oh, whoever Deutsche picks wins." Or even "Whoever has the strongest chain wins." I have a hard time imagining a company would ever take that risk. I worry big companies would never dare to put anything real-world redeemable directly onto, say, Bitcoin or Ethereum, for this reason. They'd just get sued over and over again.
The Stellar federated consensus story (HN debates about SCP below [1][2]) has Deutsche Bank as an actual player on the network. If you want DB redemptions then you would include them in your trust lines / quorum slices, and if Stellar fell apart and became partitioned, you would stay on DB's side. All said, it seems significantly faster and more stable for cryptocurrency-to-real-world mappings, both for the consumer and counterparty.
Didn't the "tokens backed by something off-chain, and then a fork happens" already happen with Digix?
I believe they just basically said "we are treating the tokens on Ethereum, not on Ethereum Classic, as being the ones which are redeemable".
I don't see what the problem with this is. It is inconvenient for the people who prefer to use Ethereum Classic, yes, but they didn't lose their tokens. They still have control of those tokens on the Ethereum (ETH(F)) chain, and can sell those if they want to end up only using Ethereum Classic.
This is unfortunate for them, and if this inconvenience could be avoided for free, then that would be better, but I don't think it is unfair to them. They still have the same control over the same tokens that are accepted as legitimate as they did before.
Indeed, I don't think it's a deal breaker either. There is huge potential for scams here, though (people buying invalid tokens for next to nothing on Ethereum Classic, then luring unaware people into buying them at full price by misrepresenting them as the real thing). In this case, forks basically create counterfeits.
I don't understand how the stellar case and the bitcoin case differ in a network partition. You said "if Stellar fell apart and became partitioned, you would stay on DB's side."
How is that different from a network fork happening, and DB saying "We only accept tokens from ETH and not ETH classic".
At the end of the day, DB is deciding on a network partition to support, and you either support the network partition DB is supporting, or you don't do business with the DB tokens.
Note that in general there is no way to name a particular branch of a blockchain fork. In cases with a protocol change coordinated well in advance, a counterparty anticipating the fork could announce that their tokens on one branch will be useless. However, if you just have two competing mining pools duking it out with the same protocol, there will be no way to name the branches ahead of time.
What's worse is that colored coins could distort the incentive structure to make it profitable to bribe miners, because the benefit to an attacker of subverting consensus could far outweigh the value of 12.5 BTC/block.
I'm not familiar with the "trustlines determine partition choice" feature of Stellar, but I am with trustlines in general; they are explicit app-level concepts that you define on your wallet, which in this case would say something like "I trust DB to redeem up to 1m worth of EUR credits".
If the Stellar client's behaviour in the face of a partition takes trustlines into account, that's much safer than the default behaviour in bitcoin, which I believe is "pick a partition at (pseudo)random".
It's possible to manually coax the client to pick a partition, but that requires user interaction, i.e. it's not fail-safe, it's fail-unsafe.
Your justification that Th/s is an issue because of electricity consumption is one I have seen bounced around a lot, but beyond the environmental argument there are other issues. The idea of cryptocurrencies is that without some kind of artificial scarcity you need other incentives. If you are really concerned about electricity consumption, what about using a ledger technology designed for low resource consumption, such as Sawtooth? I'm not sure how Stellar helps you in this regard. Also, why are you seeking more funding in the first place?
> Also, why are you seeking more funding in the first place?
What is the purpose of this question? You want Keybase, a private entity, to convince you, a stranger on the inter-webs, about their plans for said funding? That's giving yourself FAR too much importance. Here's a $5 bet that the company will ignore this question.
I mean that the way artificial scarcity is enforced by a distributed ledger system is through the concept of a miner. The best implementation we have so far is electricity consumption, because it makes people consume electricity, which costs them money. If you don't have this piece, then people will subvert the proof of work some other way.
For example, if you have some type of mathematical problem (say, the unknotting problem) take the place of the proof of work, then you have to worry about whether that problem is actually hard. Hypothetically the problem could turn out to be quite easy, letting someone subvert the system with a shortcut.
But inverting a hash is known to be hard. Through this hardness you get artificial scarcity, by pushing people to design efficient systems for consuming electricity.
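The "inverting a hash is hard" property is easy to demo. Here's a toy proof-of-work loop in Python (the parameters and encoding are illustrative only, nothing like Bitcoin's actual header format or difficulty):

```python
import hashlib

def mine(data: bytes, difficulty_bits: int = 16) -> int:
    """Find a nonce so that sha256(data + nonce) falls below a target,
    i.e. has roughly `difficulty_bits` leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# Finding the nonce costs ~2**16 hash attempts on average;
# verifying it costs exactly one hash. That asymmetry is the scarcity.
nonce = mine(b"block header")
```

The only known way through the loop is brute force, which is what converts electricity into votes.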
i wouldn’t say that’s true at all. proof of work isn’t the only “artificial scarcity” we can have... it’s paying money to compute something to get a vote... why not just put up money for a vote? or better yet, prove that you have the means to pay a lot of money for a vote.
if you have to hold a lot of limited thing X (that’s difficult to get; eg $10m worth of ETH) on the very chain you’re securing, then that’s scarcity too.
another benefit here is that you can make a trusted network; that way you don’t have nodes coming and going, you have a group of trusted parties with their own network, and there’s less potential for a 51% attack
electricity and work isn’t the only thing that can be scarce
I think scarcity is a bit of a red herring. The key is that it has to be more lucrative to play fairly than to cheat (well, it also has to resist attacks that are simply trying to destroy the service). Proof of work by itself is not enough. It's the scaling of the difficulty to the potential gain -- the implication being that more people mining == more money to be gained by cheating.
The other thing that is compelling about Bitcoin's proof of work protocol is the ideal that every participant is equal. The whole point is to avoid the circumstance where more money means more control. Now, I think we can probably all agree that this didn't pan out -- whoever controls the big mining pools controls the system. I think that if anyone is going to go to the next level they probably have to step back and look at the problem with fresh eyes. Substituting X into "Proof of X" is unlikely to provide the solution, IMHO.
Regarding electricity consumption, what do you think of the argument that mining is so competitive that it can only be profitable by subsidized electricity, and that electricity is only subsidized when the local jurisdiction is creating more of it than it needs anyway? In other words, mining doesn't actually create new demand for electricity, it just soaks up the remainder already available.
"electricity is only subsidized when the local jurisdiction is creating more of it than it needs anyway?" is false - there are a few cases of negative spot prices, which nonindustrial customers can't get, but all the rest of the time electricity subsidies are for economic and social reasons.
Normally surplus electricity results in a curtailment of fossil sources.
There is no such thing as "excess electricity". If that power wasn't being wasted on useless cryptocurrencies, then we could have used it for useful purposes, such as processing aluminum.
there absolutely is an imbalance between energy produced and energy consumed. if we're lucky, there will be storage capacity nearby. we're mostly not lucky. ever heard of australia? or tesla? google some.
> wasted on useless cryptocurrencies
yeah, transferring value securely without trusting third parties is not useful at all. gotta run all those banks and employ all those bankers, because that costs no energy nor other resources.
That argument makes no sense to me. It essentially boils down to "people can choose whatever level of security they want, so given that they chose the current level of security that must be the most energy efficient -- and since merchants have to pay for this system they always choose the cheapest fees which must therefore mean they choose the miner with the cheapest energy source". I'm not sure what this argument is meant to prove other than PoW chooses the most energy efficient way of doing PoW, but it doesn't prove the PoW isn't an energy hog.
This argument also explicitly ignores schemes such as PoS that don't have this massive power drain, and also ignores interesting schemes like PoC (Proof of Capacity)[1] that also don't have this power drain (apparently).
I’m not going to spend a bunch of time reading this particular whitepaper, but not one of the three proof-of-capacity “mining” systems I’ve looked at are Sybil resistant and decentralized.
This is a vast oversimplification, which only sounds plausible if you pretend that energy production scales linearly with demand. In the real world, if there's a plant producing more than its consumers demand, the reason for the excess capacity is to accommodate spikes and future growth. If you come along and build a BTC mining rig next door, that "future growth" has arrived sooner than expected, forcing the utility to build new capacity years ahead of schedule (presuming it's feasible to build it at all), resulting in significant price increases, their refusal to sell you as much electricity as you want to buy, or both.
The (amortized) cost of building out new supply is a major[0] part of the cost of electricity - this is why most big energy producers spend millions of dollars per year on energy efficiency incentives. Haven't you ever wondered why your power company will give you a $50 rebate on an EnergyStar dishwasher? Isn't it counter-intuitive that they would pay you to buy less energy, when they have excess? It's not because they're tree-huggers - it's because decreasing demand growth delays the day when they need to build a new plant to meet demand, which increases the profitability of the current plant enough to make those incentives cost-effective[1].
0: I can't find a good estimate and it varies by fuel type, but I remember an environmental engineer at a former job telling me it was about half. Look up "Levelized Cost of Electricity" for more info.
1: If you're not convinced, instead of demanding more details, I urge you to just stop and ask whether the proposition "There is a lot of excess energy production lying around which BTC miners can soak up without impacting everyone else very much" really passes the sniff test.
Hydroelectric is one of the oldest electrical storage systems there is -- you can store more electricity as gravitational energy by pumping the water back uphill, and then letting it flow back down during peak times.
There's a reason why hydroelectric plants and dams go together.
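For a sense of scale, the energy stored this way is just gravitational potential energy, E = m * g * h. A quick back-of-envelope in Python (all figures are illustrative, not from any particular facility):

```python
# Energy stored by pumping water uphill: E = m * g * h
g = 9.81                  # m/s^2
height = 100.0            # metres of head (illustrative)
volume_m3 = 1_000_000     # one million cubic metres of water
mass_kg = volume_m3 * 1000  # water density ~1000 kg/m^3

energy_joules = mass_kg * g * height
energy_mwh = energy_joules / 3.6e9  # 1 MWh = 3.6e9 J
print(f"{energy_mwh:.0f} MWh")  # ~273 MWh, before pump/turbine losses
```

Real pumped-storage round-trip efficiency is typically quoted around 70-80%, so the recoverable energy is somewhat less.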
Very few dams have the capacity to pump water back uphill.
And they don't need it, because power-generating dams are generally only built in places that provide lots of downstream flow naturally, like the U.S. Pacific Northwest, or in places where a reservoir is already desired for other reasons (typically drinking water, navigation, and/or flood control).
Dams have the capacity to route downstream flow around some turbines to lower their generating capacity. It's not really accurate to think of that routed water as "wasted capacity," because that water would flow naturally even if the dam wasn't there.
With massive regional grids like what's present in North America, it's not just as simple as that. Someone on the other side of the continent could use your 10TWh excess and curtail their own fossil fuel use instead.
I was wondering the same: why not add more options, and just make it easy for people to get paid by whatever they want? It's also possible to simply put a .txt in your public shared library, and you can list everything and anything you want there. It's just nice to see all that verification in the public profile, nice and easy for everyone else to see.
Perhaps they'd be worried that their tool would be used primarily for narcotics trade. Oh wait, they already have zcash. Well, perhaps it's the redundancy between those two.
This doesn't exist right now, but theoretically a network like Ethereum whose tokens exist as state within Turing-complete contracts, could solve "counterparty redemption across forks" by simply allowing each contract to react to the fork "event" independently on each resultant forked chain. It'd be a lot like how the actual POSIX fork(3) call works!
Presumably, the default implementation of such a fork-event handler would have all but one of the contracts destroy themselves (and not in the common Ethereum sense of a contract "suicide", with the owner getting returned any held value; but instead with the contract simply blackholing all its value and state.)
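To make the idea concrete, here's a toy Python model of that hypothetical fork-event handler (no such mechanism exists in Ethereum today, as noted above; the class, method names, and behavior are all made up for illustration):

```python
# After a chain split, each branch carries its own copy of every contract.
# The imagined "fork event" lets each copy learn which branch it is on,
# so all but the issuer-honored copy can black-hole their state.

class TokenContract:
    def __init__(self, balances: dict):
        self.balances = dict(balances)
        self.alive = True

    def on_fork(self, this_branch: str, honored_branch: str) -> None:
        """Hypothetical handler: keep state only on the honored branch."""
        if this_branch != honored_branch:
            self.balances = {}   # black-hole all value on the other branch
            self.alive = False

# One contract, duplicated by a split into branches A and B;
# the issuer honors branch A:
branch_a = TokenContract({"alice": 100})
branch_b = TokenContract({"alice": 100})
branch_a.on_fork("A", honored_branch="A")
branch_b.on_fork("B", honored_branch="A")
print(branch_a.alive, branch_b.alive)  # True False
```

The hard part, as the thread goes on to discuss, isn't this logic - it's getting both networks to actually deliver the event and pay for its execution.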
I think this would create a bunch of security concerns, make the state of the contracts not immutable, and result in even more problems. Even though the contracts are deployed on a chain, they also take resources to execute (which is represented by gas), so if this occurred you would basically end up with one master contract holding all the resources, which would be a huge security concern - who is enforcing that people are honest?
Contract state isn't immutable. Or am I misunderstanding what you mean by "state" here? The storage slots in an EVM contract can be freely written to by said contract. That's how ERC20 tokens work—the balances of the token in people's accounts are simply storage slots, that get updated by the contract when the tokens are moved around. (Yes, actual slots. EVM arrays are weird; they actually expand out into the contract's storage-slot keyspace by hashing all the array indices together and storing the result at the slot identified by the hash.)
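That slot-derivation rule can be sketched in Python (using hashlib's sha3_256 as a stand-in; Ethereum actually uses the pre-standard Keccak-256, so real slot values differ, and the 32-byte padding shown here follows my recollection of Solidity's mapping layout):

```python
import hashlib

def mapping_slot(key: int, base_slot: int) -> int:
    """Solidity-style mapping layout: slot(key) = hash(pad32(key) ++ pad32(base_slot))."""
    data = key.to_bytes(32, "big") + base_slot.to_bytes(32, "big")
    return int.from_bytes(hashlib.sha3_256(data).digest(), "big")

# An ERC20 balances mapping declared at storage slot 0: each holder's
# balance lives at a pseudo-random slot derived from their address.
holder = 0xDEADBEEF
print(hex(mapping_slot(holder, 0)))
```

Because the hash scatters keys across the 2**256 slot keyspace, two mappings (or two keys) essentially never collide, which is what makes the "freely writable storage slots" scheme workable.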
> they also take resources to execute (which is represented by gas)
...and so you'd need to pay to fork, proportionally to the number of contracts that wanted to react to your fork. Though keep in mind that the forked network on the "new" side would have a low hashpower, and so a low gasPrice, and so could afford the required gas quite easily (the base-case being one where the forking entity temporarily controls 100% of the hash power of the network, and thereby can just "pay themselves" the gas, just like when bootstrapping a new Ethereum private chain.)
The more questionable aspect is that the network "being forked against" would also need to pay. Somehow, you'd need to make it such that the whole of the network would "want" to execute such transactions. Mind you, gas just prioritizes which transactions go through; if the "right of way" of fork-event contract-input transactions is hardcoded, it doesn't matter how much gas goes along with it—the network will run them. You can even just add some code that means that the network can't make progress until those transactions are in. (I.e. that chain consensus will treat chains that had the same fork-event transactions appear "earlier" in them as better, so it's useless to put work besides inserting a fork-event transaction in, knowing that the branch you'll be creating by doing so will be outcompeted by one that just executed the fork-event transaction first.)
> you would basically have one master contract with all the resources
I don't see how this implies that. The fork-er doesn't get to decide what's happening inside the contract, on either side of the fork. The network, on each side, is just sending an event—"hey, the network forked, you {are/aren't} on the forked side"—to each contract that wants to know, and it's up to the contract to decide what to do with that information. Each contract is still a standalone program with its own private memory-space that nothing else can touch.
> I've been convinced by Mazières and the Stellar team
As someone who's been in the cryptocurrency space for a long time I can guarantee you got swindled. Countless investors I have seen move to centralized "blockchains" due to buzzword powerpoint presentations and floating_nodes.jpg bootstrap landing pages.
This is no different than what current financial institutions do other than a new UI.
Are you saying this after having read up on Stellar[1] in particular or is anything other than "full" decentralization a deal breaker for you?
I think Stellar's model of Internet-style (as in the backbone network infrastructure), organic, federated trust relationships is the most practical and viable approach to decentralization that I have seen.
Organizations decide who to trust based on their relationships with them and individuals decide which organizations to trust for the same reason. That is how trust works in the real world, and for good reason. The problem is that the costs of operating at a global level in the current financial system are very high, which leads to centralized control.
Stellar will potentially allow many different types of new, small, innovative organizations to participate in the same network as larger, more traditional institutions, with no restrictions on joining the network. Of course those small organizations still have to compete for customers in the real world, and the playing field is never fully level, but it certainly opens up the game.
I personally just don't see the Utopian, fully-decentralized, f*ck-the-system future that a lot of crypto-enthusiasts envision. It just doesn't mesh with the reality of how non-technical users operate in the real world. For example, right now the vast majority of US users purchase Bitcoin through Coinbase, because that is who they deem trustworthy. What I do see with Stellar is the possibility for greater innovation and greater individual access to the global payments network.
I have known 'Stellar' since its beginnings as Ripple. I have no expectations of changing your mind or obligation to convince you otherwise. Just thought I'd share my viewpoint.
I don’t understand this comment. It doesn’t seem to address any of the key technical points behind the Stellar project as it is actively used today, but instead tries to imply that the foundation is some group of fly-by-night scammers that have won over keybase by nothing more than a twinkle in their eye.
My impression was that Lumens seemed to be solving one of the legitimate problems of the world—the inefficiency of the SWIFT system for cross-border transactions—and appears to have a viable model for doing so.
I don’t see how whether or not it’s centralized makes it a swindle?
> the foundation is some group of fly-by-night scammers
Never said that. There are plenty of institutionalized and heavily trusted ventures on wall street that rake in billions and buy their way to the top.
This was my implication.
It's possible more than one system could replace SWIFT, but why would it be Stellar/Lumens who are still in the gate with their fork when Ripple/XRP seem to be already round the first bend with seemingly very large momentum in terms of interest, trials and actual production use by financial institutions?
How is Stellar still "in the gate," more than two years after deploying their decentralized Byzantine agreement algorithm?
Ripple has only just now, in 2018, published their decentralized consensus algorithm (Cobalt), which as far as I know is not even in production use yet, and doesn't provide optimal safety. (In settings where Cobalt is guaranteed Safe, SCP would be too, but not vice versa.) Their production network still uses a protocol that, by Ripple's own analysis (https://arxiv.org/pdf/1802.07242), fails to guarantee safety without >90% agreement on the UNL.
Yes, but perhaps your framework for approaching this is a little too old school?
When SWIFT was devised, the idea of having a singular system for resolving these transactions not only made sense but was (probably?) technically necessary. I think given where we are today, multiple competing protocols, each with their own advantages, may be viable.
Lastly, for finance, consumer choice is valuable: I like being able to Venmo my friends, autodeposit my landlord, slow mail my bills, and Apple Pay my retail purchases. I don’t send money overseas but I could imagine a similar bifurcation of solutions in this space, all with their own advantages.
While I generally agree, there is a difference between what bankers are doing atm with their ledgers and SOME projects in the private permissioned ledger space. The difference is that the bankers' ledgers are not Byzantine fault tolerant or distributed, even just between each other, so there isn't the sort of robustness you would have otherwise. A DLT reduces a lot of the overhead they currently have with their ACH system, which works, but is not perfect.
An analogy I can give: banking is double-entry bookkeeping, while DLTs are triple-entry bookkeeping; there are simply many things their current ledger entries can't do. DLT allows their money to become commodity money, and there are also a bunch of contracts that can be added on top of that. So the main thing is that banks could potentially offer many more services than just verifying payments, at a much lower cost, reducing the number of employees they need. Make more offices or w/e else bankers do lol.
Oh cool - wasn't really expecting to see this one on HN! The changes here are all a result of "iterating" on our product. Since we work in cryptography, it's not usually the case we can move fast. But this mini-blog post outlined some quick changes we could make.
Stuff we learned from testers:
(1) In many ways, Keybase's chat is like Slack (except encrypted!), but unlike Slack, our user database is public and connected to known identities. So there was an opportunity we were missing, namely to teach people about teams they might be interested in, run by people they are interested in. Seems obvious now, but we had our blinders on.
(2) A large "open" team still makes sense on Keybase, even though anyone is allowed in. It's worthwhile because sender authenticity is extremely valuable. Protection from phishing attacks has been driving a lot of our team signups/migrations...especially in the cryptocurrency space.
If there are any technical questions about these changes or how teams work on Keybase, happy to answer them here. As you can tell from my HN profile, it can be proven I'm keybase.io/chris .
Actually, heck, to illustrate all this, I just made a team called `hners`, for "anyone who loves Hacker News." It's an open team. You can join straight from my profile in the Keybase app, or by running `keybase team join hners` in your terminal. Come say hi.
It looks like I'm not the only one who visits your blog almost daily in anticipation that a feature as cool as Keybase Filesystem or Encrypted Git gets announced.
While this was an iteration, you nailed a feature I really need with open teams -- I was about to start looking at Slack or Discourse because I'm in need of that right now. And the team management interface will be very pleasant -- I don't have to add people to my private teams often enough that I can ever remember the exact command-line without using "--help" (that's not a knock to your CLI; it's way more intuitive than a lot of the tools I use).
Thanks so much for this product and the support that the team provides. I ran into an issue over the summer that affected Windows 10 Insider Fast Channel builds. It ended up being Dokany, not Keybase, but your team was as involved as the folks over there, and you were the only Dokany down-stream that provided any help in resolving the problem. Having been in this industry longer than I'd care to admit, I've come to expect that support tends to have no correlation to the price paid for the product, but I fully expect "free" to mean "good luck" (though, paid often means the same thing)[0]. Your team treated it with urgency and was friendly to boot (a characteristic that can be rare in software development but is virtually non-existent in the crypto- space). Love it -- and you have the dubious honor of being the first thing that I install after I load/reload a Windows / Linux machine/VM.
[0] And couple that with the fact that I was running Insider Preview builds ... I'd have probably thought "well, what did you expect?!"
Any chance notifications on Android will be grouped anytime soon? I had to uninstall the app as it just got too crazy, showing all the notifications for messages.
A little off topic, but have there been any improvements to the chat experience itself? I tried pushing some of my team to use the chat a few months back but, to be honest, the time it took to view and reply to messages on iOS was too long, and overall reliability was hit or miss.
Not to say your other features aren't great, we are testing out using private git repos for some of our non-essential keys and so far no problem.
I second this. I have tried to move chats with some of my more technical friends to Keybase, but after a few months we all came to the conclusion it was not reliable. Messages not being delivered without any visual feedback, messages received out of order hours late, etc.
I like the fact that keybase has a lot more functionality, but because of these issues in their core product I had to switch to Signal.
Keybase team member here. Interesting fact: git doesn't check the validity of sha-1 hashes in your commit history. Meaning if someone compromises your hosted origin, they can quietly compromise your history. So even the fears about data leaks aside, this is a big win for safety.
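For reference, a git object ID is just the SHA-1 of a typed header plus the object body, so recomputing one is cheap; whether a given client actually re-verifies your whole history is the question the parent raises. A minimal sketch of git's object hashing in Python:

```python
import hashlib

def git_object_hash(obj_type: str, body: bytes) -> str:
    """Compute a git object ID: sha1 of '<type> <len>\\0' + body."""
    header = f"{obj_type} {len(body)}\0".encode()
    return hashlib.sha1(header + body).hexdigest()

# A blob containing "hello\n" hashes to git's well-known ID:
print(git_object_hash("blob", b"hello\n"))
# ce013625030ba8dba906f756967f9e9ca394464a
```

Anyone with a git install can confirm this with `echo hello | git hash-object --stdin`.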
From an entrepreneurial perspective, this is my favorite thing we've done at Keybase. It pushes all the buttons: (1) it's relatively simple, (2) it's filling a void, (3) it's powered by all our existing tech, and (4) it doesn't complicate our product. What I mean by point 4 is that it adds very little extra UX and doesn't change any of the rest of the app. If you don't use git, cool. If you do, it's there for you.
What void does this fill? Previously, I managed some solo repositories of private data in a closet in my apartment. Who does that? It required a mess: uptime of a computer, a good link, and dynamic dns. And even then, I never could break over the hurdle of setting up team repositories with safe credential management...like for any kind of collaboration. With this simple screen, you can grab 5 friends, make a repo in a minute, and all start working on it. With much better data safety than most people can achieve on their own.
So I love Keybase unconditionally and if you guys weren't rolling in physical offices (and not one in Boston) I'd have been beating down your door to come work there--I think what Keybase is doing is important and it's something I'd love to work on. But I have a serious question that maybe you can answer, and it's something everybody who I've showed this to has asked me:
How is Keybase gonna make money? How am I assured that this, and everything else in my Keybase storage, is going to be there in six months? Like, I still have a private server in a closet in my apartment that syncs all the stuff I trust Keybase with because I don't know what the business-side failure case is.
You guys should be taking my money, is what I'm saying. Also probably hiring me. But definitely taking my money.
We believe the right long-term answer for Keybase is finding a way to charge large corporations and offer pretty much everything else for free. Obviously there would have to be some paid tier if you really wanted 10TB of storage or something, but very few people want that right now. We're still just getting started.
Of course, to achieve our goal, we'll also have to find a way to distinguish between communities - which we want to be able to use Keybase for free - and companies.
Many of us on the team have come from ad-supported businesses and we really, really never want to do that again. I personally guarantee I will never be a "publisher" again. Fortunately that just can't work with Keybase, so no fears there.
But charging for anything on Keybase right now would be a big mistake. We only have ~180,000 users, and we want to bring crypto to everyone. That basically means making products we believe are better.
Another way of looking at your concern: I think if we were charging right now, it wouldn't actually decrease the odds we disappeared in a few years. It might distract our attention from working on the best product and cause our bloody demise. So maybe we're not choosing the path that gives you the highest impression of safety, but I think we actually are.
That being said, I think Keybase is one of the most important companies around right now. I would gladly pay $10/month, even if literally all it did was put a "Supporter" badge on my profile. I'm sure hundreds of other people agree.
Crypto is far too important for it to remain locked away behind GPG.
For what it's worth, I think my above comment is my highest upvoted comment of all time. There's a lot of people out there who want Keybase to succeed.
My comment that started this subthread is in my top ten, and I have been here entirely too long, so, yeah. Keybase is good. It staying around is important. People around here, at least, seem to know it, and that's awesome.
Piggybacking off of the original question, I too have a question in this scope:
With all the products you're offering, is there any indication which products will be staples of Keybase? Eg, I'm always hesitant of the "Google Product", where something gets added only to be abandoned ~1yr later after it doesn't gain the traction the company expected.
For example, I'd love to get my wife and I switched to Keybase Chat from Telegram. With that said, I love the features of Telegram, they're killing it for me honestly, but I can't expect Keybase to compete with Telegram unless they're really invested in it.
So which products from Keybase are one-off experiments, and which are long-roadmapped products - expected to have continued development and support for years to come? I'm having trouble understanding what to trust.
Note, none of this is critical to Keybase. I'm wary of startups in general, despite loving you guys, so I'm just seeking understanding. I appreciate whatever information you can give me, even if small :)
Signal only provides chat functionality, doesn't support multiple devices, and loses your history on device change (or upgrade/reset, etc.). And you cannot chat with people you don't trust with your phone number.
My best guess as an uninformed lurker and a Keybase user is that it's too early to know. You would have to know what's the impact of "sunsetting" features and for that you probably need more than 180k early adopters.
In case of chat you can always fallback to Telegram (I've done that after trying to move people to Wire).
In case of git you can always move the repo.
With the setup that's there now I can see how it could be used as the main origin along with a push-to-GitHub hook. Pull requests would even be mergeable (blessed be Torvalds), though I'm not 100% sure whether GitHub would pick up on that and autoclose the PR.
The enterprise would be a valid target, but if you really want them to trust you, you'll need to offer localized hosting (host from EU, Russian, Chinese datacenters) as well as on-premise hosting.
Actually, for that last one you should probably also offer consultancy to set up the servers securely - both software and physical hardware security. Secure software isn't worth much if the systems it runs on are compromised. Consultancy can be worth a lot of money, if your customers think it's worth it.
I'd start working on offering a paid enterprise solution soon tbf. I'd also tweak your landing page, the blurb is "a new and free security app"; the "new and free" doesn't instill much trust, and the "security app" doesn't really describe what it does. The second phrase tries to explain that "it's Slack" or "it's Dropbox", which I guess is fair, but I'd aim towards distancing yourself and describe it as e.g. "End-to-end encrypted communications and file sharing". What makes Keybase unique? I mean Dropbox has a pretty solid security page (https://www.dropbox.com/business/trust/security/architecture), as does Slack (https://slack.com/security).
IIRC it boils down to a new Merkle root and a self-hosted server instance that uses it. Add snapshot pushing to the blockchain and you've got yourself an independent Keybase instance with a fresh and clean database ready to be filled with employees.
I wonder what the identity proof adding would look like. I guess corporations are not interested in public proofs from Twitter.
I'm (unfortunately, at times) intimately familiar with what big corporate IT departments look for in terms of features, authentication, RBAC, auditing, etc., etc. in "Enterprise" products and if you need it I'd be willing to help you understand what we look for and why. Feel free to drop me a line. Either way, I love what you're doing and I hope you nail it.
OK, awesome. I'm glad you wrote this, because this makes me feel a heck of a lot better about using Keybase. This was in a way my hunch, but I figured--this is something good and cool, I want to make sure it stays good and cool. =) Thanks for the reply.
This is a fantastic answer, and I wish more folks were this dedicated to making sure they have something great before trying to hawk it. That said, I do wish I could pay for (at least) a TB of Keybase storage right now. :D
> That means that our highest priority is removing any obstacles to adoption. Anything that people might use as a reason not to use Trello has to be found and eliminated.
In this case I am wary of using something like this that is free, because I have seen so many things in the past that were free, only to shut down rapidly after they grew in size with no way to pay for themselves, and had to pivot or sell out. So being free is actually an obstacle to adoption.
I am intimately aware of this frustration, but what's the alternative? Stable companies also kill or abandon projects. The whole software and consumer product ecosystems are constantly churning.
Personally I'm old enough that I don't have to try every new service, but if something is solving a real problem in the short-term, I will give it a try and hope for the best. Keybase is definitely in this bucket. Worst case they go away and I have to come up with a different solution, but right now it's adding tremendous value.
but what's the alternative? Stable companies also kill
or abandon projects.
The alternative is products which, considered in isolation and with all costs taken into account, produce more revenue than they cost to maintain.
Nobody shuts down a project that costs $500,000 per annum and brings in $1,000,000 per annum.
Of course, 'all costs' there doesn't just mean employee salary - it has to include difficult-to-measure costs like the opportunity costs of the attention it demands from executives, paying a portion of the support costs of any legacy systems it needs, and suchlike.
Throwing my "I <3 Keybase" comment in the ring while doing some brainstorming here.
It seems to me that there's a lot of product opportunities in the corporate world that go beyond what Keybase is providing today. Chat and Git are interesting, but there's already a lot of momentum in both these areas. Been thinking how I use encryption and where things fall short today. One of those areas is build signing and hardware key management for our team.
Everything that goes on our servers gets signed by an official PGP key. Only a couple people can sign builds, and each has a Yubikey with PGP subkeys on it. This is kind of annoying to manage. We use an airgapped computer that houses the private key, can create subkeys and assign them to Yubikeys, can handle expiration management, etc. When we want to deal with this, we have to get the computer, unlock access, and deal with the command line. This is error-prone and annoying. Having a solution that allows for safe storage of a private key and easy management of subkeys on smartcards, without the need for an airgapped computer and a command line, would be really interesting.
(The signing/verification part can probably be handled today by the keybase tool.)
Okay, that's maybe more specialized. Let's move away from paranoid server builds and go toward something similar that's gotten plenty of companies in trouble: Malicious e-mails. How often are we hearing about some poor employee receiving an e-mail that appears to come from a co-worker that contains a finance document with a trojan? Or maybe just a simple document with a form, instructions, and a link that results in information leaked to some third-party?
If there was a dead-simple way to sign and validate documents over Keybase (and I mean dead-simple, built for people who only know Word and Excel), for use in e-mail and document management, with marketing around "For $XX/user/month, you don't have to worry about getting hacked," I bet plenty of companies would bite.
I don't know what that looks like exactly, but just playing around loosely with some thoughts, it would be interesting (particularly for fully IT-managed systems) to have a Keybase Shield product that would automate much of the signing and verification of documents. It could tie into Word, Excel, etc. via their plugin interface and sign on save, and/or provide a big "Sign this document" widget on the side of the screen that a document can be dropped onto (or a Share action on phones). It'd then own the file associations for these documents, intercepting them when opening via e-mail or file servers, and would validate their signature. A document from the outside world (or one not going through the corporate-mandated signature process) would outright fail to open with an error message and instructions to ask the sender to please sign the document.
(Lots of details to work out there, but if this process could be made simple and mostly automatic, you'd help close a major attack vector that companies are susceptible to today.)
Anyway, it's great hearing your thoughts on how Keybase plans to make money. I've been in the same boat of loving Keybase but being uncertain about where it'll be 5 years from now. We'll keep an eye open for some paid products :)
On the document management end of things: that's exactly what the public/yourname/ subdirectory of KBFS is-- every document there gets signed when edited, then they're automatically verified (by the KBFS client) when someone tries to download them (either the original author, or another Keybase user).
There's no explicit signing process involved, but that's part of Keybase's value proposition: automatic and transparent public key cryptography.
If you can tie the shield into KBFS, that's even better. It's not enough to protect a company from attacks, though. People may still click that random document coming in via e-mail that claims to be from a co-worker. A mandatory technical solution on that end, no matter what the actual technology looks like under the hood, would be essential for protecting people from making these kinds of mistakes.
The value proposition of automatic and transparent public key cryptography is strong, and what I love about Keybase. Just thinking of other ways that can be applied transparently.
A team-based 1Password-type service would also be interesting, particularly one allowing heavy use of 2-factor authentication with something like a Yubikey.
> Many of us on the team have come from ad-supported businesses and we really, really never want to do that again. I personally guarantee I will never be a "publisher" again.
So prove it. Provide a way that customers can try to give you money for solving their problems. Even if it is just a dummy static page with a form to contact your "sales" department, really show that you will be here for the longer term.
Putting up a fake sales page isn't a sales strategy and wouldn't prove anything. If anything, it could add to the distraction.
Sales and being around long term are more complicated and won't simply be proven to you because it's what you want. It requires more vision and coherence than that.
It doesn't have to be a fake page. I'm talking about an MVP where they can gauge who is interested in paying and what their problems are, so they can concentrate on those areas. When confident, they could even set up a simple PayPal recurring payment system.
A better (maybe?) idea could be to send out a survey asking what features people are interested in, whether they would be willing to pay for them, and how much if so.
It could be an option when you log in to the UI. I wouldn't mind it, as long as it isn't being e-mailed to me every week/month.
> You guys should be taking my money, is what I'm saying.
Completely agreed. The reason I don't use Keybase more than I do is because I half expect them to be acquired/something else to happen. Would gladly give them my $10/mo. for a 1TB instead of Dropbox.
With that said, I completely understand why they aren't right now -- maybe they're not going after the consumer market, maybe they don't want to box themselves in with customer support obligations, etc. But I really would like to use them.
@malgorithm's answer is fantastic, just wanted to add some side-comments...
> How am I assured [?]
You're not, even if they start making money. Sucks, but true.
> You guys should be taking my money
One way to pay, if you want to help ensure their success & longevity, is to evangelize for them, and get other people hooked on their product. Getting other people hooked on it like you are and seeing the potential and get over the adoption humps... that's valuable! They're not taking money because it raises the barrier to entry, and growth is most important. Pay them by helping them grow.
It's valuable, but not in the capital sense. Each person you get hooked on their product increases their burn rate, and both makes them more attractive as an acquisition (which is scary for users) and more desperate for cash (which makes acquiescing to acquisition more tempting).
Without a road to profitability (or at least a road to revenue) even attracting equity is difficult; investors who enter with that knowledge will be looking to exit through acquisition, since that's basically the only way to exit, other than just getting more capital.
100% agreed. Hosting sensitive git repositories is a problem that companies and people are willing to pay $$$ for, and stuff that is free has a tendency to go away after a few years. Heck, don't bother putting any technical work into it (aka work) and continue being free, but allow me to have a "paying account" or whatever. Pretty much: if you are providing value, let me prove it by giving you some cash.
> Keybase team member here. Interesting fact: git doesn't check the validity of sha-1 hashes in your commit history.
I heard this a couple of times and tried to confirm it a while ago, but was unable to. I wasn't able to forge a repository with faulty hashes in it.
I also heard plenty of people tell me that there exist public repositories with wrong hashes in them, but when I asked them they never could come up with concrete examples in the wild.
I'm seriously curious about this, can you provide any clonable proof of concept repository with wrong hashes?
> git doesn't check the validity of sha-1 hashes in your commit history. Meaning if someone compromises your hosted origin, they can quietly compromise your history.
That second part of the fuller quote makes the first part irrelevant.
Git, sans GPG, does no validation of the given username and email - it is trivial to configure my laptop to stamp commits with hannob@ instead of fragmede. All I need to do to frame hannob, then, is get write access to a repo that they contribute to.
In the centralized world of github, that's a little bit more tricky, but at larger organizations where large groups (eg, all of eng) simply have write access to the repo(s), if git blame says hannob wrote the commit that stole passwords/money/etc, guess who's getting fired?
With GPG, I'm able to configure git so that commits that actually come from me have a GPG-validated signature. Snarkily, the blog post claims "no one" does this but I do. Given that this feature is known to be infrequently used, I'd believe it if git would accept commits with a bad signature.
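For anyone curious, the opt-in setup is small (the key id below is illustrative), and the important caveat is that verification is its own explicit step - git itself won't reject a missing or bad signature unless something actually runs the check:

```shell
# Sketch: opt in to GPG-signed commits (the key id is illustrative).
git config user.signingkey 0xDEADBEEF
git config commit.gpgsign true      # sign every commit by default

# Verification is a separate, explicit step -- git won't refuse
# unsigned or badly signed commits on its own:
git verify-commit HEAD              # exit status reflects the signature
git log --show-signature -1
```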
I may be wrong, but here's my current understanding.
I believe Git CAN check the validity of SHA-1 hashes (I read the source a few years ago and have a very tiny git commit) using git fsck, which I believe kernel.org runs nightly. It just doesn't do so automatically with every commit. But you can set up a check on your server, I believe, if that's important to you, either by watching the files or by checking pushes, which I believe GitHub does. So that's not the issue.
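For a concrete (if artificial) demo of that opt-in check, you can corrupt a loose object by hand and watch git fsck flag it. All file names and paths below are just for the demo:

```shell
set -e
d=$(mktemp -d) && cd "$d" && git init -q
git config user.email a@example.com && git config user.name a
echo one > a.txt && echo two > b.txt
git add . && git commit -qm "initial"

# Swap one loose object's file for another's: still valid zlib data,
# but the contents no longer match the hash in the filename.
obj_a=$(git rev-parse HEAD:a.txt)
obj_b=$(git rev-parse HEAD:b.txt)
path_a=.git/objects/${obj_a:0:2}/${obj_a:2}
path_b=.git/objects/${obj_b:0:2}/${obj_b:2}
chmod u+w "$path_a" && cp "$path_b" "$path_a"

# Everyday commands won't necessarily notice, but fsck recomputes
# every object's hash and flags the mismatch:
git fsck 2>&1 | grep -iE "mismatch|error|corrupt" || true
```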
It's sha-1 collision attacks that are a theoretical issue.
My understanding of the currently known SHA-1 attack is that it requires binary data (hence PDF files for the example) and requires you to control both the original file and the subsequent file. So an attack would have to generate an apparently innocent file and a malicious file both of which have a binary block, insert the innocent file into the repo, and then somehow, most likely outside of a git push given mitigations like github's, replace that innocent file with the malicious file.
Now to your question: checking the PDF files from the proof of the attack into git doesn't work, because git also adds header info. And generating the files requires ~$100,000 worth of EC2 time, or the equivalent, so nobody has gone through the trouble of generating files that enable this specifically for git, just to prove it. But it's definitely possible, and cheap enough for a criminal organization or a state agency to do. Just because someone hasn't done it for git specifically doesn't mean the attack isn't possible, just that security researchers don't have unlimited funds; the existing proof, while not specific to git, shows the issue generally applies.
Last I saw, the git mailing list was debating SHA3-256 and BLAKE vs SHA-256. There's some indication that SHA-256 may get Intel HW support, and that may be useful for speed with really, really big git repos (like Microsoft's, apparently). SHA-256 has no known attack, but unlike SHA3-256 (a sponge construction) and BLAKE (which I believe derives from the ChaCha stream cipher), SHA-256 uses the same Merkle-Damgard block-based construction as SHA-1. That means, while no known attack exists, if one were found you could theoretically corrupt a specific block in a similar manner to SHA-1. But SHA-256 has been much more extensively tested for issues, while SHA3-256 is newer... it was created ostensibly as a backup in case currently-trusted standards like SHA-256 turn out to be attackable.
There are some issues with SHA-256 being used in repos that have signed SHA-1 hashes already, in terms of mapping SHA-256 to SHA-1 hashes without borking the signing. Obviously if you change the underlying structure of signed stuff to store a new hash, it changes the hash.
My personal thought would be to implement SHA-256 and SHA3-256 as options simultaneously, as they are both NIST standards, make SHA-256 the standard so big repos can be as fast as possible.
I am not a crypto expert, or a git expert though, so if I'm wrong, please correct me. Being wrong means I get to learn stuff and that's great!
SHA2-256 has had hardware acceleration instructions on some recent Intel chips and on AMD since Ryzen; even ARM has had SHA2-256 acceleration for a while. Software support is the issue at this point.
How likely/easy would it be to add "know nothing" mirrors of these encrypted repositories? Say that I trust the keybase app (or something that speaks its protocols) possibly indefinitely, but maybe I'm not keen on a single cloud storage backend and want additional secure backup options. (Maybe I'm even unconvinced about the long term guarantee of keybase's storage space offerings due to possibly changing cost/business model factors, as others have pointed out here.)
It would be nice if I could have an encrypted copy in S3 or Dropbox or somewhere, that presumably maybe git couldn't directly make use of, and would be encrypted and those services couldn't touch either, but that the app could still push/pull changes to.
Certainly, I'd still have an unencrypted view of the contents in any local clones of the repository I may have in the case that I couldn't access keybase storage, but it still seems like there may be useful cases where an encrypted backup is somewhere else in the cloud as well, as a safe failover just in case.
I use [Pass](https://www.passwordstore.org/), a password manager, which uses GPG and Git, and I keep an encrypted copy of my Pass Git repo in Dropbox and have that repo copy setup as a remote in all of the local copies of my password repo. So, the contents of the local repos are encrypted, but in the encrypted copy all of the Git data is encrypted too.
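I don't know exactly how everyone wires up an encrypted copy like this, but for others wanting something similar: one tool that gives you a remote where even the git metadata is encrypted at rest is git-remote-gcrypt (a separate install; the path and key id here are illustrative):

```shell
# Sketch using git-remote-gcrypt: everything pushed, including
# commit metadata, is GPG-encrypted at rest on the remote.
git remote add dropbox gcrypt::"$HOME/Dropbox/pass-store.git"
git config remote.dropbox.gcrypt-participants YOUR_KEY_ID
git push dropbox master
```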
Signing tags is not as effective as you'd think. Refs are never actually signed; it's the objects they point at that are signed. This opens up interesting attacks where you can move refs around to point at previous, vulnerable versions.
Git also never checks if the metadata the tag points at is correct!
Yeah, we implemented this paper's proposal (their version has some bugs, gaps, and infinite loop issues) where I work to be able to have higher assurance on the validity of our source repositories.
First version in shell with a fairly robust test suite, and the next version in Rust. Originally started to do it in Rust, but libgit2 was sufficiently obtuse that we opted for getting to a complete, working thing first.
This looks fantastic! I have a couple of questions not answered in the FAQ though:
1. Is there (or will there be) any way to create an encrypted git repo shared between a few users that aren't part of a team? e.g. could I create a repo that belongs to eridius,chris and have us both access it?
2. Can I create a repo that belongs to a subteam?
And on a different note, I want to create a team but the name is currently taken by a user. The user has zero activity (no devices, no proofs, chain is completely empty, literally nothing). Is there any way to recover a name that's being squatted on?
> 1. Is there (or will there be) any way to create an encrypted git repo shared between a few users that aren't part of a team? e.g. could I create a repo that belongs to eridius,chris and have us both access it?
Yep, though it's undocumented and it won't show up in the GUI right now (maybe ever). You can just push/pull directly to repos like "keybase://private/u1,u2,u3/foo" and it will create them on the fly. But be warned: there's currently no way to delete those, and typos in the git URL can cause unintended repos to pop up.
I will pay a LOT of money if you can slap a half decent web interface on it.
Surprisingly, you guys look like a direct clone of the new Bitbucket interface. It's not my favorite (I like GitHub so much better) - but Bitbucket with its inbuilt Pipelines integration is so much better than GitHub.
> Interesting fact: git doesn't check the validity of sha-1 hashes in your commit history.
Isn't the commit sha1 determined, in part, by the sha1 values of the tree it refers to as well as the sha1 of the parent commit? If you fetch a branch from a compromised remote, all the sha1 values of the commits that were compromised would be different.
Ah, so if I were to manually craft a commit in a text editor in the format:
    tree <sha1 of tree>
    parent <sha1 of the parent I want to attach it to>
    author <some string>
    committer <some string>

    <the commit message>
I could add this to the git object store manually under the same sha1 file and a client could just fetch it? Would the client try to fetch the faked objects when it already has the real objects in its copy of the object store?
That is, would it think it has the commit because the sha1 hasn't changed, but the tree sha1 has been updated and it would presumably refer to blobs that the client doesn't already have and try to fetch them. Or would it not proceed because it already has the commit?
It doesn't seem to verify hashes of objects on checkout, but it does when receiving packfiles. So it's difficult to see how this could be an exploit unless the attacker has access to your local .git directory.
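Worth noting how tight that chaining is: a commit object's raw bytes explicitly name its tree and parent hashes, and hashing those bytes reproduces the commit id, so tampering anywhere upstream changes every descendant id. A quick sketch:

```shell
set -e
d=$(mktemp -d) && cd "$d" && git init -q
git config user.email a@example.com && git config user.name a
echo one > f && git add f && git commit -qm "first"
echo two >> f && git add f && git commit -qm "second"

# The raw commit object explicitly names its tree and parent:
git cat-file commit HEAD

# Re-hashing those exact bytes reproduces the commit id, so any
# change to the tree or parent changes every descendant commit id.
recomputed=$(git cat-file commit HEAD | git hash-object -t commit --stdin)
[ "$recomputed" = "$(git rev-parse HEAD)" ] && echo "ids match"
```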
I’m sure there’s a law with someone’s name that states that. But just in case it hasn’t been claimed yet, I’m proposing that we call it the fuck you law. Because the next time someone comes to me to ask me to fix their Trello to Zapier to email to Google Sheets setup they use as a project management tool, I want to be able to say, “Fuck you and there’s a law that says so.”
No it doesn't. I have many of my git repos in Dropbox but I'm not using Dropbox for sharing. Having those in Dropbox means I get automatic backup and that they are available when I switch to a different computer, which I do, but not frequently. As only I use my Dropbox account, I'm aware of the potential sync problem, but it's never been a problem. I do run fsck & gc more frequently than most, but I probably don't need to.
EDIT: I should emphasize that this model is way more convenient than manually having to remember to push and pull all the time. Now push is only for publishing outside as it should be.
For your first point, can't you verify the signature on the commit? In order to compromise the origin, they must also compromise the secret key of whoever is signing commits.
I say that in full realization that 99% of people probably don't even know that you can sign commits, but the first point doesn't seem valid, as you can ensure integrity of commit history.
And even then, I never could get over the hurdle of setting up team repositories with safe credential management...like for any kind of collaboration. With this simple screen, you can grab 5 friends, make a repo in a minute, and all start working on it.
You can already do that with Gogs. It's a single binary, uses git, supports accounts, 2-factor auth, etc.
https://gogs.io/
Really useful for small teams that don't want to use github or gitlab.
Congratulations on the launch. I'm a Keybase user myself and I think you all have done a fantastic job.
When the SHA-1 collision was calculated earlier this year, Linus commented on git and SHA-1. No further questions, just sharing it here if you happened not to see it: https://marc.info/?l=git&m=148787047422954
Again, thanks for all the hard work. Best of luck.
This looks sweet. I bounce between using Bitbucket or Dropbox for private repos depending on my needs. Bitbucket has lots of features but is a little annoying to set up a new project. Dropbox is really easy but doesn't always work well (e.g. git push ends up being effectively async). Your version of it looks to be just as easy as Dropbox, maybe even easier, but without any of the downsides. And it's encrypted!
Does it matter much? If I hose my repo (which I don't think is that easy, since I've been doing this for years and never had an issue) then I can delete it and clone a new one from my local copy. Especially when it's just me, and I'm only pushing to the repo from one machine at a time.
It can hose your local, too. And it can happen more easily than you think--I've seen it happen because a laptop that pushed to Dropbox went to sleep mid-sync and a desktop synced after. Fighting the Dropbox API to unwind it is a huge pain.
git-remote-dropbox works as you would expect a Git remote to work; it's API-driven and actively discourages even syncing the remote repository down to your machine. I would so, so strongly suggest you switch to it if you want to use Dropbox as a store.
Bare-git-repo-on-KBFS is inadvisable for a similar reason, which is why I'm so excited to see what they're doing here.
How would it hose my local? I thought git's design meant that it might possibly pull down new corrupted refs, but whatever I currently had would remain intact, so it's just a matter of reverting. Not so?
I believe it would be Dropbox doing the overwrite. Dropbox will just replace data - it doesn't do anything with respect to the reflog. I suppose it might be safer to work on a local copy and push to a second local copy in dropbox, so your working copy isn't touched by dropbox at all.
Yeah, keeping a local copy outside DB and pushing to a bare repo in DB is what I do. It didn't occur to me that one might work directly in a repository in DB. The hazards there, at least, are quite clear!
OK, so maybe we're using "local" for different things. Are you developing in your local copy of Dropbox, or are you cloning to a local directory using the Dropbox directory as a source (probably bare)? I assumed the former, which is what I meant by "local"; you can end up syncing multiple different instances of the repo and horking the contents of your .git directory (as well as cross-edited files, etc, that bleed changes onto multiple branches).
Both have the possibility of breaking because of concurrent or delayed syncs--like, which is actually HEAD?--but the latter is probably safer than the former. Or you can just use git-remote-dropbox and never have a problem.
If you always, always-always, develop on a single computer, Dropbox-as-normal-file-system can be fine. But if you have a desktop and a laptop, or multiple people partying on it, I get worried. :)
That explains the confusion, I'm talking about keeping a bare repository in Dropbox and cloning it to a non-Dropbox location on each computer where I work. It never occurred to me to keep the working copy itself on DB, that would be silly!
I expect that this could break the bare repository on DB if I ever pushed from two places simultaneously (where "simultaneously" could potentially encompass a period of hours or days if I pushed from an offline computer) but I should be able to repair it by recreating the bare repository.
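For anyone following along, that workflow is roughly this (a temp directory stands in for the Dropbox folder here; in real use the bare repo lives under ~/Dropbox):

```shell
set -e
sync_dir=$(mktemp -d)   # stand-in for the Dropbox-synced folder
work_dir=$(mktemp -d)

# The synced folder holds only a bare repo; no working copy lives there.
git init -q --bare "$sync_dir/myrepo.git"

# Each machine clones to a local, non-synced location.
git clone -q "$sync_dir/myrepo.git" "$work_dir/myrepo"
cd "$work_dir/myrepo"
git config user.email you@example.com && git config user.name you

echo hello > README
git add README && git commit -qm "first commit"
git push -q origin HEAD   # only pushes ever touch the synced folder
```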
Using something like git-remote-dropbox seems like a good idea. But at this point, I can just start using Keybase, hooray!
> ...keep the working copy itself on DB, that would be silly!
I don't think it's necessarily silly; it can be very useful in some scenarios.
I keep all my local working copies in a folder synced across several machines. I use Resilio Sync because it is better[1] than Dropbox for this purpose, but it's basically equivalent.
What this lets me do is stop working suddenly, at any moment (baby crying upstairs, or I lost track of time and have to bike to the office for a meeting) get up from my computer and move to another one (in another room in my house, or across town at my employer's office).
The code doesn't have to be in any finished state, needn't compile, I can literally be right in the middle of a line of code. As long as I've saved my work to disk, it will have synced before I reach the next computer, so I can sit down and resume work.
Before I had kids I didn't need this as much, so I just did git push/pull.
But then you have to do the work of pushing your half-finished junk to a different private repo, or rebasing to avoid polluting the git history with a bunch of crap commits just because you had to move, or not do that and just accept having a git history filled with crap.
Frankly I wish more of my work was capable of being distributed like this, but it's really only suitable for collections of plain files, which are amenable to being synced file-by-file. Luckily that includes almost all my programming work, however.
[1]: Resilio Sync is better than Dropbox for this because: it is much faster to sync than Dropbox, it supports symlinks so it doesn't corrupt your data when syncing folders containing them, and it syncs my data only among computers I control, not to any cloud service.
For sure--I'm going to go poke at the Keybase one this afternoon! (Also, to be clear, the Keybase method is essentially the same as git-remote-dropbox. Both set up git remote helpers.)
It took me about two seconds to create a new repository with Keybase and clone it to my computer, so I'm pretty impressed so far.
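For anyone who wants to try the same thing, the flow was roughly this (commands from memory, so treat it as a sketch; `dotfiles` and the username are made up):

```shell
# create an encrypted repo on Keybase, then clone it locally
keybase git create dotfiles
git clone keybase://private/yourusername/dotfiles

# team repos use the team namespace instead:
#   keybase://team/<teamname>/<repo>
```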
Thanks for the info about git-remote-dropbox and the potential failure modes of going without, even if they don't all apply to the way I've been doing things. It's still not ideal, so here's hoping Keybase makes it obsolete. If not, I'll keep git-remote-dropbox in mind.
What would I need to do to permit someone read-only, clear-text, non-public access to an encrypted repo? Can a combination of existing git/GitHub privileges and the Keybase solution help? If yes, and if you can add 2FA, we might be interested in becoming a customer.
Today your use case can be solved ad hoc: manually sign what you push to Keybase git, and share it with the people you want to have read access.
If you want an encrypted storage solution with integrated read only access capabilities, I recommend using Tahoe-LAFS. You can probably store a git repository in it just fine.
It looks like you guys use React for a lot of your development. How do I know that you won't push compromised code behind the scenes, even unknowingly?
Btw your product is awesome! Multi-platform encrypted team chat that doesn't even need 4 GB of RAM :)
> hurdle of setting up team repositories with safe credential management...like for any kind of collaboration
Identity continues to be the key selling point of keybase. I'm excited by this.
I can keep clones of my private repositories here, things like dotfiles and configurations. That sounds like a good start. And I can also easily share code with people who need to see it.
I'm on the Keybase team and as you can see we jumped to sponsor a small block. We participated in this for two reasons:
(1) shazow (co-author) was also the author of the official Keybase Chrome and Firefox extensions, and he did an amazing job. We'd gamble on anything he does. The space we bought wasn't that expensive from a company perspective, and it was an educational experience. And maybe it will draw us some attention.
(2) Holy crap, it's really impressive a dapp like this is possible with Ethereum. If it hasn't sunk in yet how this thing works, really read the FAQ and stop to think about it. As shazow and ontoillogical said, "there's no backend!" and "It's immortalized!"
* the cost of the area you're trying to buy is calculated as _width * _height * pixelsPerCell * weiPixelPrice
* with weiPixelPrice = 1000000000000000 and pixelsPerCell = 100
* The grid is defined as a double array of 100x100:
bool[100][100] public grid;
* The for loop checks that none of these pixels are already set to true; if any pixel is, `revert` undoes any state changes made during execution. Otherwise the relevant coordinates are all set to true.
* The space is reserved under the address that sent the ether (`msg.sender`). To do that, an `Ad` struct is filled with the information and pushed onto the state:
Ad memory ad = Ad(msg.sender, _x, _y, _width, _height, "", "", "", false, false);
idx = ads.push(ad) - 1;
* It returns an index (`idx`) that you can use to specify the details of your ad later.
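As a quick sanity check of the pricing formula quoted above, here's the cost of a hypothetical 2x3-cell ad (constants from the contract; I'm assuming a cell is a 10x10-pixel block, hence `pixelsPerCell = 100`):

```shell
# weiPixelPrice and pixelsPerCell as defined in the contract
wei_pixel_price=1000000000000000    # 10^15 wei per pixel
pixels_per_cell=100                 # one cell = 10x10 pixels

width=2; height=3                   # ad size in cells (hypothetical)
cost_wei=$(( width * height * pixels_per_cell * wei_pixel_price ))

echo "cost: $cost_wei wei"          # 6 * 10^17 wei = 0.6 ether
```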
In more detail, for step 2:
* You must pass the `index` of your reserved space as argument to the function
* It will check that your address is indeed the address which reserved the space:
require(msg.sender == ad.owner);
* It sets everything to the arguments of the function you're calling.
* An event is triggered; presumably the web app listens for it so the page can be updated live.
* Looks like you can call the `publish` function over and over, modifying your ad each time.
* There is a `forceNSFW` function that the contract owner can use to force the `NSFW` flag on an ad. But the ad owner can flip it back again and again.
* Then again, the contract owner can also remove ownership of an Ad, so there's that.
* I just audited the contract and I couldn't find any vulnerability.
Well said, because the dynamic is the same. Throw money at the house and hope you win. The house does.
This can't be duplicated (maybe once every few years), it's a copy of a previous project, and it is literally a page of ads that won't be worth anything in a month or less. It only has value because of the attention people are giving it right now, so maybe reconsider the investment.
Own a piece of blockchain history!
That doesn't feel the least bit hucksterish to you? They say the same thing about those commemorative coins they sell on late night TV. The only person who gets anything out of this is shazow, so I'm not sure why people are so willingly enthusiastic about supporting it. It has no value except to itself.
I think you're missing what is interesting about this, from a technical point of view.
This is neat, and novel:
> "Ads displayed above are loaded directly from the Ethereum Blockchain. This Decentralized Application (DApp) does not have a traditional backend. No MVC framework, no SQL database. It's just a JavaScript application served statically from Github which speaks to the Ethereum blockchain using Web3.js."
That they're profiting from it is just an amusing side effect, imo.
That is cool! But, if this DApp thing takes off, how will Ethereum nodes be compensated for essentially acting as a free CDN?
It works fine as long as there aren't a lot of users, but once the user count becomes substantial, Ethereum nodes will need compensation in order to serve the amount of data needed. Is there a workable solution for this?
The images are actually hosted over HTTP (or IPFS or Swarm). The former requires you to find a host, and the latter two have ways of compensating for traffic :)