The background: we talk about this sometimes as a solution to a real problem: in certain teams and workplaces, people can be afraid to give honest feedback (who dares to submit an "anonymous" survey to HR?). Keybase may be in a unique position to let people in a group give written feedback, vote on something important, or rate an experience without any risk of exposing their identity, short of writing something identifiable in a text field.
I'd be curious, personally, to see management get a yearly vote of [no] confidence, for example. Is that crazy?
Keep in mind we are mostly focused right now on user experience and performance improvements. But we allocate a certain amount of time to cryptographic features that just aren't possible in other software, such as this coin flip thing. We've been talking about voting and surveys, too.
You can read the basic protocol here: https://aytwit.com/about#technical_details__thoughter_gist
It would be cool to see something like that in Keybase. Feel free to steal the idea. :)
Further: it will remove the friction of doing anonymous surveys. I would do them way more often for various things (similar to the coin flips) if they were easy to do.
I recently registered keybase.vote for a related web app idea. Rather than anonymous voting, I wanted the opposite: authenticity in voting, polls, surveys, etc. A common problem in surveys is verifying that respondents are real and are people you trust. Within small communities, you would have a large enough web of trust that you could rely on who you are following to determine whose responses you individually pay attention to in the result set.
So my idea was simply to have the survey/poll generate a text field of all the Q/A in a JSON body, kinda like the proofs of keybase, and then have the user copy/paste it and sign it on keybase and then submit their response.
I would have the whole result set downloadable in raw format that anyone could easily verify with keybase commandline tools. But I’d also employ the web of trust created by following on keybase.
I thought I’d try it out and see if it works. I like the idea of Keybase being a general way to authenticate without needing any elaborate login process or email account.
In order to cheat that system, people would need to engage in mail fraud or buy a PO box.
Happy to discuss, chris. Sidewalk Labs is setting up camp in Toronto, and I was speaking about the above at a local event, and they were really interested in the concept. I had a call with their head of identity, but was disappointed that he couldn't say anything of substance on _why_ it was relevant to SL efforts, at least not without my signing an NDA. As a community organizer in the civic tech scene, I had no interest in that. More secrecy in the smart city / open gov sector :/ blech
(Sorry if this sounds harsh or rude, but there's no point in sugar coating the truth. Hopefully the Keybase team reads this criticism and does a little soul searching.)
It covers aspects (think Pareto principle) of email, LinkedIn, Slack, GitHub, Dropbox, WhatsApp, online banking, and probably more core use cases that I can't think of off the top of my head.
That is now, today. With a decent user experience that keeps getting better (see recent improvements @ user profiles).
=> all of that end-to-end encrypted and delivered in a way that it is accessible and usable and fun with a long-tail of users in mind.
[I have no idea what an equivalent would be right now, not even as combination of multiple separate projects. Think about that.]
Soooo... A centralized solution for everything? ;)
A lot of trust is rooted in their centralized, proprietary, walled-garden API, and to make matters worse, they actually silently bypass hardware security modules in favor of keys exposed in system memory!
They even encourage users to expose their PGP private keys to their browser, and didn't even bother to isolate them in a service worker so browser plugins can't steal them (or to just support hardware tokens, which GPG already handled fine).
Almost everything they do is non-standard, not interoperable with anything else, and not distributed to keyservers. They are the Internet Explorer of cryptography.
They did this in the name of UX but it turns out you can have super easy PGP UX AND follow standards as OpenKeychain has demonstrated.
Keybase introduced lock-in and their own protocols for problems that did not at all need them. They are 2 steps forward on UX and one huge backwards step for security.
They have been focusing on a per-device key system, and it's not really a GPG front-end. NaCl is a well-known library, and what they do is based on it. Saltpack is an open library they use, and they use other open libraries as well. I happen to like how the Keybase security system works, and I think it has advantages over GPG.
If you don't want to use the evil centralized system, fine, but at least stop spamming the same issue every time Keybase comes up. If Keybase is not the solution you like, then just move on with your life.
There is integration with Stellar in keybase. I can use the app as a stellar wallet and send/receive lumens.
All these little quality-of-life improvements on top of that can only be a good thing, IMHO.
You're an idiot if you believe that.
I want heads to come up. I add a couple of hacked members to the group, so there are 3 honest members and, let's say, 3 coordinated dishonest members.
Everyone shares their commitment hash, and the dishonest members share their actual commitments amongst themselves. Once everyone has the commitment hashes, the 3 honest members broadcast their commitment. The three dishonest members now have everyone's commitments, but honest members only have other honest member commitments. Dishonest members compute the ultimate value - if it turns up heads, then they just share their commitments with everyone, and the final answer is heads.
If it turns up tails, then the dishonest members compute possible permutations of various dishonest members dropping out and never sending their commitments. So maybe if dishonest member 1 drops out, the resultant value from just the group of 5 would be heads. So dishonest member 2 and 3 share their commitments and dishonest member 1 goes offline.
So, this system will work when it is composed of only people you trust, but will not work when it may be composed of people you don't trust. And if you trust everyone in it, why go through this process in the first place? If you decide that when someone drops out without sharing their commitment you just rerun the algorithm, then you have given the dishonest people a very easy way to spike your coin flipper: either no one can ever get a value out of it, or the dishonest members just keep dropping out until they hit a round where the final value comes up heads.
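The selective-abort attack described above can be simulated in a few lines of Python. I'm assuming the final value is simply the parity (XOR) of everyone's revealed bit — the thread doesn't pin down the combine step, but the attack works the same way for any deterministic rule:

```python
import secrets

def flip(bits):
    """Deterministic combine step: XOR (parity) of all revealed bits."""
    out = 0
    for b in bits:
        out ^= b
    return out  # 1 = heads, 0 = tails

def rig(honest_bits, dishonest_bits, want=1):
    """After seeing every honest bit, the colluders pick which subset of
    themselves 'drops out' (never reveals) so the result equals `want`."""
    n = len(dishonest_bits)
    for mask in range(2 ** n):  # every subset of colluders that stays in
        staying = [b for i, b in enumerate(dishonest_bits) if mask >> i & 1]
        if flip(honest_bits + staying) == want:
            return staying
    return None  # unforceable (only when every colluder's bit is 0)

honest = [secrets.randbelow(2) for _ in range(3)]
print(rig(honest, [0, 1, 1], want=1))
```

With even one colluder holding a 1-bit, both outcomes are always reachable, which is exactly why "just rerun on dropout" hands the flip to the attackers.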
1. Each member generates a random value.
2. All members submit a hashed value of that random value (the commitment).
3. All commitments are distributed to all members. At this point, no more commitments may be submitted.
4. All members reveal their random values to each other and validate each value against its hash.
5. All values are then used to deterministically calculate the coin flip.
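The steps above can be sketched end to end in Python. SHA-256 and the "sort the values, take the first digest byte" combine rule are my own choices for illustration, not the poster's actual implementation:

```python
import hashlib
import secrets

def commit():
    """Steps 1-2: pick a random value, publish only its hash."""
    value = secrets.token_bytes(32)
    return value, hashlib.sha256(value).hexdigest()

def verify(value, commitment):
    """Step 4: a revealed value must match its earlier commitment."""
    return hashlib.sha256(value).hexdigest() == commitment

def coin_flip(values):
    """Step 5: derive the flip deterministically from all values
    (sorted first, so every member computes the same result)."""
    digest = hashlib.sha256(b"".join(sorted(values))).digest()
    return "heads" if digest[0] & 1 else "tails"

members = [commit() for _ in range(3)]        # step 3: hashes distributed
assert all(verify(v, c) for v, c in members)  # step 4: reveal and validate
print(coin_flip([v for v, _ in members]))     # step 5
```

A member who tries to swap in a different value after seeing the others' reveals fails the `verify` check, which is the whole point of committing first.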
A dishonest member will not receive any values used in the coin flip calculation before step 4. If a member decides to change their random value, the hashed value / commitment will be invalid.
But what I proposed doesn't require anyone to change their random value. It relies on the fact that step #4 does not happen instantaneously, and honest clients will send their random values as soon as they transition into step #4.
So from my above example, the process reaches step #4 as normal - all honest clients send out their random values, and all dishonest clients wait. Once the dishonest clients have all of the honest random values, they can now determine what the result for #5 will be (because they know all other dishonest client random values, since they are colluding, along with all honest client random values), whereas the honest clients cannot do so, because they don't know the random values from the dishonest clients.
So at that point (halfway through step #4), dishonest clients can then determine if they want that value to be the end result, or if some of them should drop out (never broadcast their random value), in order to modify what everyone determines to be the final value from step #5.
In my implementation, step 4 happens in two phases:
4a. All members submit their secret values to the server.
4b. Once all secret values are submitted, they are all sent to each member at once.
This ensures that members cannot determine the outcome before anyone else.
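A minimal sketch of that two-phase server; the class and method names here are my own invention, not the poster's implementation:

```python
class RevealServer:
    """Buffers step-4 secrets until everyone has submitted (4a), then
    releases them all at once (4b), so no member can see anyone else's
    secret before irrevocably sending their own."""

    def __init__(self, members):
        self.members = set(members)
        self.secrets = {}

    def submit(self, member, secret):
        if member not in self.members:
            raise ValueError(f"unknown member: {member}")
        self.secrets[member] = secret  # 4a: server holds the secret

    def reveal(self):
        # 4b: release nothing until every member has submitted
        if set(self.secrets) != self.members:
            return None
        return dict(self.secrets)

server = RevealServer({"alice", "bob"})
server.submit("alice", b"\x01")
print(server.reveal())  # nothing yet: bob hasn't submitted
server.submit("bob", b"\x02")
print(server.reveal())
```

The trade-off, of course, is that this reintroduces a trusted party, which is exactly what the p2p commit-reveal design tries to avoid.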
What you are describing could actually be an issue if there is no server involved; however, something like commitment encryption (rather than hashing) utilizing secret sharing could help resolve this issue in a p2p environment.
Another option would be that all possible outcomes are randomly shuffled with their positions encrypted and distributed to all members prior to the game beginning.
My protocol implements the latter... all possible outcomes are encrypted and publicly available on IPFS to members before and after the game.
One could argue that the dishonest clients could use this to basically abort the procedure before the value is determined (by not broadcasting their values). But once the procedure is complete, being dishonest does not mean anything.
> ~My wife~ Not sure.
Seriously? You work professionally in the crypto space and don't know where this is from? Or don't feel it's important to attribute such fundamental ideas to the appropriate people? If you really don't know, a quick google would have educated you. But what I fear to be more likely is that you apparently just don't give a damn.
For anybody remotely interested, look up Manuel Blum's work, e.g. "Coin flipping by telephone" presented at CRYPTO 1981. ACM Turing Award.
Or Rivest, Shamir, Adleman, "Mental Poker". Oh, those guys also got the ACM Turing Award.
None of the folks you mentioned, I believe, invented commitment schemes. Whit Diffie has a good history here: https://ee.stanford.edu/~hellman/publications/24.pdf. So these innovations would have been ~1 academic generation prior.
Of course, the GCHQ probably invented them even earlier ;)
I didn't actually find the joke that bad, but not acknowledging the giants on whose shoulders his little business stands is a pretty inexcusable faux pas.
My understanding of the chronology is that Whit was a bit earlier than Ron to the party, and Whit's seminal work is what inspired Ron and gang to work out RSA.
Hence I put Whit 1 'academic generation' prior because Ron's work was directly inspired by Whit's work. This is slightly different than what I'd label pure contemporary work (I don't think Ron was doing much cryptographic work before Whit's seminal papers).
Could be wrong, and again, splitting hairs.
We can even think of a few "fun" uses of this new feature.
Since the end of January 2019 I’ve started to notice more and more profiles with suggestive pictures of underage humans and anime characters. I almost never log into my Keybase account, but I frequently check my profile to see “friend” recommendations, then log in and follow people who I think are interesting, mostly tech influencers. But since January-February I randomly get recommendations that include one of these NSFW accounts, which is quite unsettling because other people may think my account is somehow involved with them. One day I clicked around and found that many of these accounts follow each other in a circle comprising around 20-30 profiles. Knowing that Keybase offers encryption as one of its main features, I wouldn’t be surprised if pedophiles are already using the service to share offensive content among themselves.
Edit: I reported 42 accounts a few weeks ago; not knowing what these profiles were doing, I just asked Keybase to check. The Privacy Team at Keybase started an investigation the day after. Today, none of these accounts exist anymore. I’m not sure whether it really was a pedophile network, but I’m glad Keybase did something about it.
That said, I'm getting to a place personally where I think we will have no choice but to either accept the absence of privacy/security, or accept that bad people will be empowered for evil just like we are empowered for good. I hope I'm wrong tho, and we can figure out a way to prevent evil use while permitting (and encouraging) righteous use.
Note: I use the words evil and righteous for illustrative purposes, not for religious reference.
First, you're conflating child porn with regular porn.
Second, certain services are more often used by serious deviants, and it's a huge reputational, moral, and legal risk. If you're running a business, or a user of that service, you can't afford to bury your head in the sand.
What I don't get is the whole "Consider following" thing. But then, I don't use Keybase as social media, per se.
About the suspected pedophile network, you'd think that they'd be more discreet.
The goal is to fairly select some candidate from a set of candidates. Each candidate `Ci` generates a UUID `Ui`. The hash of their UUID `hash(Ui)` is published by each candidate. Once all hashes have been collected, each candidate reveals the verifiable original UUID to all the others.
Each candidate then concatenates these UUIDs together (after normalizing the sequence in some way - e.g. sorting), and produces a selector code: `H = hash(U1 ++ U2 ++ ... ++ Uk)`. Finally, the selected candidate is simply the one whose UUID is closest to `H` under some distance metric.
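A rough Python sketch of the selection step. Truncating the digest to 128 bits and using absolute integer difference as the distance metric are my own assumptions, since the description leaves the metric open:

```python
import hashlib
import uuid

def select_candidate(uuids):
    """Hash the sorted concatenation of all revealed UUIDs, then pick
    the candidate whose UUID is closest to the digest H."""
    normalized = b"".join(sorted(u.bytes for u in uuids))
    # truncate the 256-bit digest to 128 bits to match UUID width
    h = int.from_bytes(hashlib.sha256(normalized).digest()[:16], "big")
    return min(uuids, key=lambda u: abs(u.int - h))

candidates = [uuid.uuid4() for _ in range(5)]
winner = select_candidate(candidates)
assert select_candidate(list(reversed(candidates))) == winner  # order-free
print(winner)
```

Because the input is sorted before hashing, every candidate computes the same `H` and therefore the same winner regardless of the order they received the reveals in.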
I tinkered a bit with adapting it for situations where the candidate set could shrink during the selection process (i.e. a candidate drops out), but didn't really pursue it much.
I'll try to read up and see if I can answer that question.
>Say that all the other participants have revealed their pre-images, and you're the last one left to reveal your pre-image.
The third party isn't any different than the main parties in the mix. If everyone can decide who the most trustworthy party in the mix is, you can have them reveal last.
This doesn't work for large populations because the probability of a drop occurring during the selection procedure approaches 1. There, I was considering a tournament style selection - partition the population into small groups, select one from each, and treat the winners as a new candidate population.
SHA-3 and BLAKE2 use different constructions that don't have these flaws, so they don't need the HMAC wrapper for these uses.
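Concretely: SHA-2's Merkle-Damgård construction is why a naive `hash(key + message)` MAC is open to length-extension attacks and needs the HMAC construction, while BLAKE2 simply takes a key directly. A quick illustration with Python's standard library:

```python
import hashlib
import hmac

key = b"shared-secret-key"
msg = b"attack at dawn"

# SHA-256 is Merkle-Damgard, so sha256(key + msg) alone would allow
# length extension; the HMAC construction works around that flaw.
tag_hmac = hmac.new(key, msg, hashlib.sha256).hexdigest()

# BLAKE2b has no such flaw and accepts a key directly: no wrapper needed.
tag_blake2 = hashlib.blake2b(msg, key=key).hexdigest()

print(tag_hmac)
print(tag_blake2)
```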
also love the details like “flip again”
I didn't cover some details I find fascinating but which might have been overkill outside of HackerNews. For example, some assume the "one-way"ness of a hash function makes this protocol work. But that's not enough: we can't have Alice generating 2 different secrets with the same hash, even if Barb can't reverse the hash. What we also need is _collision resistance_, so Alice doesn't get to pick and choose what to expose in the final stage.
Lately, we've made much bigger, but less blogworthy, improvements to Keybase. It's faster, team on-boarding is getting better, and we'll be launching a very improved UX in the next month or so. I rarely get to stop and write about Keybase, so this was fun.
And for anyone looking to test, I'm `chris` on keybase. You can start a chat with me and do a `/flip cards 5 chris,yourname` and we'll see who gets a better poker hand. If you can deal yourself a flush or better on your first try I'll give a prize or something? Who knows. Anyway, we're having fun with it.
It would give you an office suite play very very quickly - I can only see it as a winner.
>A bad actor can't change the outcome of a flip but could prevent it from resolving.
The post glosses over this but it can get pretty bad. E.g. 99 actors have revealed their seeds and then the 100th decides whether to reveal or not, based on whether they or their confederates will be the winner after that final reveal.
Edit (and meta-edit): I changed the wording in the FAQ accordingly.
Each little square inside it represents a byte, so we map bytes (0..255) to colors ranging from a blue to a purple.
The matching secret is also 32 bytes, and of course those come in in random order, so we line up secret rows with the matching commitments. It sure is fun to watch.
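For what it's worth, a byte-to-color mapping like that is just a linear interpolation. The endpoint colors below are guesses, since the post only says "a blue to a purple":

```python
def byte_to_rgb(b):
    """Map a byte (0..255) onto a blue-to-purple gradient.
    Endpoint colors are illustrative, not Keybase's actual palette."""
    t = b / 255
    blue = (40, 80, 230)
    purple = (150, 60, 200)
    return tuple(round(lo + t * (hi - lo)) for lo, hi in zip(blue, purple))

print(byte_to_rgb(0))    # the blue end
print(byte_to_rgb(255))  # the purple end
```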
We played with some different visualizations. We actually had one version with a 3D sphere getting covered in data, but it felt too gimmicky. This gives a good feeling of people showing up.
If you're one of 10 people doing this to the light switch, then as long as you choose randomly, it doesn't matter what the other 9 people do. It has a 50% chance of ending up on and a 50% chance of ending up off. Even if the other 9 people are cheating together.
Of course this has the problem that whoever goes last wins, which is why the commitment ceremony is necessary.
But when using XOR, even if 9 people conspire, they cannot know whether the 10th votes 1 or 0.
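This is easy to check: with XOR, flipping any single input bit flips the result, so one honest uniformly random bit makes the outcome uniform no matter what the other nine do. A tiny sketch:

```python
import secrets

def xor_flip(bits):
    """Final light-switch state: XOR (parity) of everyone's toggle."""
    out = 0
    for b in bits:
        out ^= b
    return out

colluders = [1, 1, 0, 1, 0, 0, 1, 1, 0]  # 9 members fix their bits together
# One honest uniformly random bit still leaves both outcomes equally
# likely: toggling the honest bit always toggles the result.
honest_bit = secrets.randbelow(2)
print(xor_flip(colluders + [honest_bit]))
```

(This is also why the commitment ceremony matters: without it, whoever reveals last gets to pick their bit after seeing everyone else's.)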
Is he being coy here? I mean - poker, right?
> What if someone loses network before the secret stage?
> A bad actor can't change the outcome of a flip but could prevent it from resolving.
> The Keybase app will highlight this scenario. Odds are it was just a network issue, but if you have such a person disappearing often, you should break up with them.
So someone with a malicious client could force the flip to not resolve until it creates an outcome they're satisfied with, which is... Not great. I can't tell if "The Keybase app will highlight this scenario" means it'll abort the roll or if it'll automatically reroll.
Also see my other comment in this thread where, in a client/server-architected version of this protocol, the reveal step is split into two stages:
> 4a. All members submit their secret values to the server.
> 4b. Once all secret values are submitted, they are all sent to each member at once.
> This ensures that members cannot determine the outcome before anyone else.
Even requiring a deterministic reveal order (without a central, independent server) would reduce the chance an individual could force a reroll after knowing the outcome from 1/1 to 1/n, which would be significant. The 'central server' solution is probably more relevant for Keybase though.