Cryptographic coin flipping, now in Keybase (keybase.io)
212 points by aston 7 days ago | 97 comments

I'd be curious if people on HN would want a zero knowledge survey and voting system inside Keybase, and if so, what would it look like?

The background: we talk about it sometimes as a solution to a real problem. In certain teams and workplaces, people can be afraid to give honest feedback (who dares to submit an "anonymous" survey to HR?), but Keybase may be in a unique position to let people in a group give written feedback, vote on something important, or rate an experience, without any risk of exposing identity, short of writing something identifiable in a text field.

I'd be curious, personally, to see management get a yearly vote of [no] confidence, for example. Is that crazy?

Keep in mind we are mostly focused right now on user experience and performance improvements. But we allocate a certain amount of time to cryptographic features that just aren't possible in other software, such as this coin flip thing. We've been talking about voting and surveys, too.

OT: One of the things I find interesting is that "zero knowledge" has become a buzzword. On the one hand it is frustrating, because when cryptographers say "zero knowledge" we mean something very specific and rigorously defined (a survey protocol cannot be zero knowledge because the results of a survey do reveal something about the respondents' inputs). On the other hand, the fact that non-experts are comfortable with the idea of using an interactive protocol to securely compute functions means there is one less mental hurdle to deal with when trying to deploy these technologies.

Thank you. I had a client say they were providing a "zero knowledge" authentication system, which didn't mean you could prove you were logged in without revealing your username (or something like that), but simply that you could log in using a public/private key pair.

From the anonize paper [1]: “Our system is constructed in two steps. We first provide an abstract implementation of secure ad-hoc surveys from generic primitives, such as commitment schemes, signatures schemes, pseudo-random functions (PRF) and generic non-interactive zero-knowledge (NIZK) arguments for NP.”

[1] https://eprint.iacr.org/2015/681.pdf

This would absolutely be very useful. Definitely within a specific team or company, but really anywhere, especially when combined with Keybase's proven-identities feature. I can imagine a "Vote with Keybase" button ubiquitous across the internet wherever people want to conduct surveys.

I made something that sounds similar to the kinds of ideas you're throwing out: https://aytwit.com/thoughter

You can read the basic protocol here: https://aytwit.com/about#technical_details__thoughter_gist

It would be cool to see something like that in Keybase. Feel free to steal the idea. :)

I think this is a great idea.

Further: it will remove the friction of doing anonymous surveys. I would do them way more often for various things (similar to the coin flips) if they were easy to do.

How do you make sure that anonymous votes are coming from employees and not from some third party?

A ring signature would do. You can be sure that the signature came from one of a set of public keys, without knowing which particular private key was used.
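For concreteness, here is a toy AOS-style (Abe-Ohkubo-Suzuki) ring signature over a tiny Schnorr group, using only the standard library. The group parameters are deliberately small and completely insecure; this is a sketch of the idea, not production crypto:

```python
import hashlib
import secrets

# Toy Schnorr group (INSECURE, demo only): p = 2q + 1, and g = 4
# generates the order-q subgroup of quadratic residues mod p.
P, Q, G = 2039, 1019, 4

def H(msg: bytes, point: int) -> int:
    """Fiat-Shamir challenge: hash the message plus a group element."""
    digest = hashlib.sha256(msg + point.to_bytes(2, "big")).digest()
    return int.from_bytes(digest, "big")

def keygen():
    x = secrets.randbelow(Q - 1) + 1        # private key in [1, Q-1]
    return x, pow(G, x, P)                  # (private, public)

def ring_sign(msg: bytes, pubs: list, s: int, x_s: int):
    """Prove knowledge of ONE private key matching one of `pubs`,
    without revealing which. `s` is the signer's index."""
    n = len(pubs)
    c = [0] * n
    z = [0] * n
    a = secrets.randbelow(Q - 1) + 1
    c[(s + 1) % n] = H(msg, pow(G, a, P))
    i = (s + 1) % n
    while i != s:                           # close the ring of challenges
        z[i] = secrets.randbelow(Q)
        c[(i + 1) % n] = H(msg, pow(G, z[i], P) * pow(pubs[i], c[i], P) % P)
        i = (i + 1) % n
    z[s] = (a - x_s * c[s]) % Q             # only the real signer can do this
    return c[0], z

def ring_verify(msg: bytes, pubs: list, sig) -> bool:
    c0, z = sig
    c = c0
    for i in range(len(pubs)):
        c = H(msg, pow(G, z[i], P) * pow(pubs[i], c, P) % P)
    return c == c0                          # the chain must cycle back
```

A verifier learns only that the signature closes the ring, which requires exactly one of the listed private keys; the signer's index `s` is information-theoretically hidden among the random `z` values.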

We are still very early stages here, but we really like the Anonize system (https://eprint.iacr.org/2015/681.pdf)

What a coincidence, slightly!

I recently registered keybase.vote for a related web app idea. Rather than anonymous voting, I wanted the opposite: authenticity in voting, polls, surveys, etc. A common problem in surveys is verifying that the respondents are real and are people you trust. Within small communities, you would have a large enough web of trust to rely on who you are following to determine whose responses you pay attention to in the result set.

So my idea was simply to have the survey/poll generate a text field of all the Q/A in a JSON body, kinda like the proofs of keybase, and then have the user copy/paste it and sign it on keybase and then submit their response.

I would have the whole result set downloadable in raw format that anyone could easily verify with keybase commandline tools. But I’d also employ the web of trust created by following on keybase.
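The copy/paste-and-sign flow above could be sketched like this. The field names and layout are hypothetical, not a real Keybase proof format; the point is that the JSON body must be serialized canonically so that verifiers can re-derive the exact bytes that were signed:

```python
import json

def survey_body(survey_id, username, answers):
    """Build the canonical JSON body a respondent would sign
    (hypothetical field names, for illustration only)."""
    body = {
        "survey_id": survey_id,
        "keybase_username": username,
        "answers": answers,
    }
    # sort_keys + fixed separators make the serialization canonical,
    # so anyone re-deriving the body gets byte-identical output to
    # check the signature against.
    return json.dumps(body, sort_keys=True, separators=(",", ":"))

# The respondent would then copy/paste this string into something
# like the `keybase sign` CLI command and submit the signed blob
# as their response.
```

Because the serialization is canonical, two clients that collect the same answers in a different order still produce the same signed bytes.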

I thought I’d try it out and see if it works. I like the idea of Keybase being a general way to authenticate without needing any elaborate login process or email account.

I worry this ends up being a technical solution to what is ultimately a social problem. If the problem is that people feel threatened submitting feedback at their workplace, the issue is the structure of the workplace.

Yes, but it's possible for these structural problems to be invisible to the people who could change them, precisely because of the structure that's set up. There are definitely cases where the structure is there on purpose to create this sort of environment, and this won't do anything to fix those. But there are also cases where people are afraid to give honest feedback, and if they were able to do so anonymously, management would either be pressured to make a change or would want to.

That’s a fair point, but even if workplace norms are sane, the technical solution additionally protects against (say) a rogue IT admin gathering info in secret, or against future policy changes.

Eh, not if it's cryptographically impossible (barring a zero-day vuln) for the higher-ups to trace back who submitted what.

imho the more interesting cryptographic proof would be proof of address or bounding box. I feel that if you allowed third parties to pay you for supporting validation of locality via cryptographic material sent on a postcard, it would open up what's possible with digital systems. Knowing location will be increasingly powerful imho. Our opinions count most in local spaces, at least with city-building. And I feel that third parties would be willing to fund the main cost of postage if it assured them of certain geographic bounds on users.

In order to cheat that system, people would need to engage in mail fraud or buy a PO box.

Related: https://github.com/patcon/can-ereg-api#unofficial-national-d...

Happy to discuss, chris. Sidewalk Labs is setting up camp in Toronto, and I was speaking about the above at a local event, and they were really interested in the concept. I had a call with their head of identity, but was disappointed that he couldn't say anything of substance on _why_ it was relevant to SL efforts, at least not without my signing an NDA. As a community organizer in the civic tech scene, I had no interest in that. More secrecy in the smart city / open gov sector :/ blech

I would love this idea. Do you want me to get feedback from my team? Can I ping you ... On Keybase?

Has Keybase lost its way? I thought they were onto something cool and could maybe explore an enterprise play with private chats, file sharing, and git for teams, or something like that. Basically make money by selling to teams. But Keybase seems to have stagnated: existing apps are still quite buggy, and there hasn't been much new development recently. The facepalm moment for me was when they announced they were supported by the Stellar foundation. I lost all hope then and there. I get that you guys are buddies with the Stellar folks, you think Stellar is cool, etc., but an objective analysis leads to only one answer: don't do it, you'll regret it. Maybe add Stellar integration as a feature, but don't go all in. Speaking of Stellar, still no integration after a year?? Focus on what you have and start making money. So, what went wrong, malgorithms?

(Sorry if this sounds harsh or rude; there's no point in sugar-coating the truth. Hopefully the Keybase team reads this criticism and does a little soul searching.)

In a sense, Keybase is one of the most important projects being built right now (that I can think of).

It covers aspects (think pareto principle) of email, linkedin, slack, github, dropbox, whatsapp, online banking and probably more core use cases that I don’t have on top of my mind.

That is now, today. With a decent user experience that keeps getting better (see recent improvements @ user profiles).

=> all of that end-to-end encrypted, and delivered in a way that is accessible, usable, and fun, with the long tail of users in mind.

[I have no idea what an equivalent would be right now, not even as combination of multiple separate projects. Think about that.]

> It covers aspects (think pareto principle) of email, linkedin, slack, github, dropbox, whatsapp, online banking and probably more core use cases that I don’t have on top of my mind.

Soooo... A centralized solution for everything? ;)

Exactly... unless the work is open source, they are just another centralized solution. Reminds me of Telegram.

I thought Keybase was open source? Released under the New BSD (3 Clause) License?

The clients are. The platform they all run on top of is centralized, made up its own irresponsibly insecure key handling and crypto protocols, and is proprietary.

They didn't make any of their own security, they use very well established open-source security libraries for everything. Stop spreading FUD.

I have researched their approach in great detail and found design flaws in it like: https://github.com/keybase/keybase-issues/issues/1946

A lot of trust is rooted in their centralized proprietary walled garden API and to make matters worse they actually silently bypass hardware security modules in favor of keys exposed to system memory!

They even encourage users to expose their PGP private keys to their browser, and didn't even bother to isolate them to a service worker so browser plugins can't steal them (or just support hardware tokens, which GPG already handles fine).

Almost everything they do is non standard, not interoperable with anything else, not distributed to keyservers. They are the internet explorer of cryptography.

They did this in the name of UX but it turns out you can have super easy PGP UX AND follow standards as OpenKeychain has demonstrated.

Keybase introduced lock-in and their own protocols for problems that did not at all need them. They are 2 steps forward on UX and one huge backwards step for security.

You have been on this crusade for a long time now, and you have posted this link often. I eventually realized that Keybase is not just a GPG replacement, and that using it with my smart card is not a good fit and not the problem they want to solve. So I accepted that and actually tried to figure out what Keybase is doing and why, rather than demanding it do what I initially wanted from it.

They have been focusing on a per-device key system, and it's not really a GPG front-end. NaCl is a well-known library, and what they do is based on it. Saltpack is an open library they use, and they use other open libraries as well. I happen to like how the Keybase security system works, and I think it has advantages over GPG.

If you don't want to use the evil centralized system, at least stop spamming the same issue every time Keybase comes up. If Keybase is not the solution you like, then just move on with your life.

> Speaking of Stellar, still no integration after 1 year??

There is integration with Stellar in keybase. I can use the app as a stellar wallet and send/receive lumens.


Keybase's main competitor seems to be Slack. I don't use Slack but I assume that Slack can read all your messages if they want to, so privacy is the main differentiator.

All these little quality-of-life improvements on top of that can only be a good thing, IMHO.

Seems like it, given how shamelessly they are ripping off Slack's UI.

"Sorry if this sounds harsh or rude, there's no point in sugar coating the truth"

You're an idiot if you believe that.

(Still agree?)

I stopped using Keybase because it just takes too long to open the app and send a message on my phone. The desktop client is great, but it's just too slow on mobile.

I wanted to make a more light-weight chat client for desktop but unfortunately their Go library documentation was completely impenetrable as of ~6 months ago.

Try updating your client.

I use Keybase daily and really like it, but of course the more I use it the more I fear it'll go away. Are they actually making any money off it yet, or will they eventually run out and fail to switch over to paid accounts in time before the company evaporates?

Is this a problem with commitment schemes?

I want heads to come up. I add a couple of hacked members to the group, so there are 3 honest members and, let's say, 3 coordinated dishonest members.

Everyone shares their commitment hash, and the dishonest members share their actual commitments amongst themselves. Once everyone has the commitment hashes, the 3 honest members broadcast their commitment. The three dishonest members now have everyone's commitments, but honest members only have other honest member commitments. Dishonest members compute the ultimate value - if it turns up heads, then they just share their commitments with everyone, and the final answer is heads.

If it turns up tails, then the dishonest members compute possible permutations of various dishonest members dropping out and never sending their commitments. So maybe if dishonest member 1 drops out, the resultant value from just the group of 5 would be heads. So dishonest member 2 and 3 share their commitments and dishonest member 1 goes offline.

So, this system works when it is composed only of people you trust, but not when it may include people you don't trust. And if you trust everyone in it, why go through this process in the first place? If you decide that whenever someone drops out without sharing their commitment you simply rerun the algorithm, then you have given the dishonest people a very easy way to spike your coin flipper so that no one can ever get a value out of it, or they can just keep dropping out until they hit a round whose final value is heads.
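The abort-and-rerun attack described above is easy to simulate. The sketch below assumes the weak policy under discussion (re-run the whole flip whenever someone fails to reveal) and a single dishonest last-revealer; "heads" is defined as an even XOR of all revealed values:

```python
import secrets

def flip(values):
    """XOR all revealed values; even => heads, odd => tails."""
    acc = 0
    for v in values:
        acc ^= v
    return "heads" if acc % 2 == 0 else "tails"

def rigged_flip(n_honest):
    """One dishonest last-revealer who 'goes offline' (forcing a
    re-run) whenever the result would be tails."""
    while True:
        honest = [secrets.randbits(64) for _ in range(n_honest)]
        cheat = secrets.randbits(64)
        # The honest parties have already revealed, so the cheater
        # knows the outcome before deciding whether to reveal.
        if flip(honest + [cheat]) == "heads":
            return "heads"
        # Otherwise: claim the connection dropped; protocol re-runs.
```

Under this policy the cheater wins every flip, at the cost of occasionally "crashing" (each round is favorable with probability 1/2, so the expected number of re-runs is small).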

1. All members generate a random value (high entropy).

2. All members submit a hashed value of that random value (the commitment).

3. All commitments are distributed to all members. At this point, no more commitments may be submitted.

4. All members reveal their random values to each other and validate each value against its hash.

5. All values are then used to deterministically calculate the coin flip.

A dishonest member will not receive any values used in the coin flip calculation before step 4. If a member decides to change their random value, the hashed value / commitment will be invalid.
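The five steps above can be sketched in a few lines. This is an illustrative toy, not Keybase's actual code (the blog post says they use HMAC-SHA256 keyed by the game ID rather than a bare hash):

```python
import hashlib
import secrets

def commit(value: bytes) -> bytes:
    """Step 2: the commitment is a hash of the random value."""
    return hashlib.sha256(value).digest()

def coin_flip(revealed: dict) -> str:
    """Steps 4-5: validate each reveal against its commitment, then
    derive the flip deterministically (XOR is order-independent)."""
    acc = 0
    for commitment, value in revealed.items():
        if commit(value) != commitment:
            raise ValueError("reveal does not match commitment")
        acc ^= int.from_bytes(value, "big")
    return "heads" if acc % 2 == 0 else "tails"

# Step 1: each member picks a high-entropy random value...
values = [secrets.token_bytes(32) for _ in range(4)]
# ...and in steps 2-3 all commitments are exchanged before any reveal.
commitments = {commit(v): v for v in values}
print(coin_flip(commitments))
```

A member who tries to swap in a different value after the commitment round is caught by the hash check, which is the property the parent comment relies on.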

Sorry, I think I may have used the wrong terminology in my post above. I was referring to the random value as the 'commitment' and the hash of that as the 'commitment hash'.

But what I proposed doesn't require anyone to change their random value. It relies on the fact that step #4 does not happen instantaneously, and honest clients will send their random values as soon as they transition into step #4.

So from my above example, the process reaches step #4 as normal - all honest clients send out their random values, and all dishonest clients wait. Once the dishonest clients have all of the honest random values, they can now determine what the result for #5 will be (because they know all other dishonest client random values because they are colluding, along with all honest client random values), whereas the honest clients cannot do so, because they don't know the random values from the dishonest clients.

So at that point (halfway through step #4), dishonest clients can determine whether they want that value to be the end result, or whether some of them should drop out (never broadcast their random value) in order to modify what everyone determines to be the final value from step #5.

I have implemented an extremely similar protocol for a side project game.

In my implementation, step 4 happens in two phases:

4a. All members submit their secret values to the server.

4b. Once all secret values are submitted, they are all sent to each member at once.

This ensures that members cannot determine the outcome before anyone else.

What you are describing could actually be an issue if there is no server involved, however something like commitment encryption (rather than hashing) utilizing secret sharing could help resolve this issue in a p2p environment.

Another option would be that all possible outcomes are randomly shuffled with their positions encrypted and distributed to all members prior to the game beginning.

My protocol implements the latter... all possible outcomes are encrypted and publicly available on IPFS to members before and after the game.

Only after step #5 is the value valid, i.e., after everybody has broadcasted their value. This includes everybody who broadcasted anything in previous stages. As long as the dishonest clients have not all broadcasted theirs, the procedure is not done and there is simply no consensus yet.

One could argue that the dishonest clients could use this to basically abort the procedure before the value is determined (by not broadcasting their values). But once the procedure is complete, being dishonest does not mean anything.

Yup. It's pretty gameable using collusion due to the ability to refuse to reveal.

>> Who invented commitment schemes?

> ~My wife~ Not sure.

Seriously? You work professionally in the crypto space and don't know where this is from? Or don't feel it's important to attribute such fundamental ideas to the appropriate people? If you really don't know, a quick google would have educated you. But what I fear to be more likely is that you apparently just don't give a damn.

For anybody remotely interested, look up Manuel Blum's work, e.g. "Coin flipping by telephone" presented at CRYPTO 1981. ACM Turing Award.

Or Rivest, Shamir, Adleman, "Mental Poker". Oh, those guys also got the ACM Turing Award.

I think you're unfairly punishing him for not knowing; it's a bit tough to find. I'll give you that the joke -- pinning it on his wife -- is unfunny.

None of the folks you mentioned, I believe, invented commitment schemes. Whit Diffie has a good history here: https://ee.stanford.edu/~hellman/publications/24.pdf. So these innovations would have been ~1 academic generation prior.

Of course, the GCHQ probably invented them even earlier ;)

Hm, I'd consider Whit and Martin to be the same generation as Blum and the RSA gang. :) I'd say Goldreich, Goldwasser, Micali and friends are the "next generation", if you really want to distinguish generations, no?

I didn't actually find the joke too unfunny, but not acknowledging the giants on whose shoulders his little business stands is a pretty inexcusable faux-pas.

Yeah, I think it's basically splitting hairs.

My understanding of the chronology is that Whit was a bit earlier than Ron to the party, and Whit's seminal work is what inspired Ron and gang to work out RSA.

Hence I put Whit 1 'academic generation' prior because Ron's work was directly inspired by Whit's work. This is slightly different than what I'd label pure contemporary work (I don't think Ron was doing much cryptographic work before Whit's seminal papers).

Could be wrong, and again, splitting hairs.

This may not be the intended use of your application, but I organize a local bdsm group (if unfamiliar, do not Google this at work), and we appreciate the security offered very much.

We can even think of a few "fun" uses of this new feature.

Speaking of NSFW stuff in Keybase…

Since the end of January 2019 I’ve started to notice more and more profiles with suggestive pictures of underage humans and anime characters. I almost never log into my Keybase account, but frequently check my profile to see “friend” recommendations, then I log in and follow people who I think are interesting, mostly tech influencers. But since January-February I randomly get recommendations that include one of these NSFW accounts, which is quite unsettling because other people may think my account is somehow involved with them. One day I clicked around and found that many of these accounts follow each other in a circle that comprises around 20-30 profiles. Knowing that Keybase offers encryption as one of its main features, I wouldn’t be surprised if pedophiles are already using the service to share offensive content among themselves.

Edit: I reported 42 accounts a few weeks ago, but not knowing what these profiles were doing, I just asked them to check. The Privacy Team at Keybase started an investigation the day after. Today, none of these accounts exist anymore, I’m not sure if that was really a pedophile network or not, but I’m glad Keybase did something about it.

Yep, this is the latest incarnation of the evergreen philosophical problem regarding technology: capabilities can be used for good or evil, depending on the designs of the empowered. One of the main arguments that governments use these days (such as the FBI re: smartphone security/crypto) against allowing "common man" to use crypto is that criminals can use it for sex slavery and other crimes. This is a good argument since the vast majority of us find sex slavery and child porn abhorrent, as well as terrorism, etc. We are likely to give up our own rights and capabilities in the name of stopping such a grotesque practice and to protect others.

That said, I'm getting to a place personally where I think we will have no choice but to either accept the absence of privacy/security, or accept that bad people will be empowered for evil just like we are empowered for good. I hope I'm wrong tho, and we can figure out a way to prevent evil use while permitting (and encouraging) righteous use.

Note: I use the words evil and righteous for illustrative purposes, not for religious reference.

I love how the answer to the problem isn't solving the problem. It's to strip everyone of their privacy, instead.

I think pretty much any network with social potential will be used for porn spam eventually. (for example, I've noticed a bunch of Google Hangouts groups I've been added to for that purpose)

I don't think this is a valid dismissal.

First, you're conflating child porn with regular porn.

Second, certain services are more often used by serious deviants, and it's a huge reputational, moral, and legal risk. If you're running a business, or a user of that service, you can't afford to bury your head in the sand.

My apologies if you read my message as a dismissal; I wasn't trying to communicate that message. My point was that all social networks are targeted for abuse (if you don't see it on your favorite network, it's because that network probably puts a lot of time and effort and money into fighting it, not because they aren't being targeted)

I couldn't care less about NSFW stuff, because I work at home. But I do get the concern.

What I don't get is the whole "Consider following" thing. But then, I don't use Keybase as social media, per se.

About the suspected pedophile network, you'd think that they'd be more discreet.

There's a slight variation on this that I had pondered for designing a distributed election algorithm. I'm sure the idea is not novel, but it would be nice to know what work has been done on it.

The goal is to fairly select some candidate from a set of candidates. Each candidate `Ci` generates a UUID `Ui`. The hash of their UUID `hash(Ui)` is published by each candidate. Once all hashes have been collected, each candidate reveals the verifiable original UUID to all the others.

Each candidate then concatenates these UUIDs together (after normalizing the sequence in some way - e.g. sorting) and produces a selector code: `H = hash(U1 ++ U2 ++ ... ++ Uk)`. Finally, the selected candidate is simply the one whose UUID is closest to `H` under some distance metric.
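The selection step could be sketched like this (the absolute-integer-distance metric and SHA-256 are arbitrary choices for illustration, not part of any specific scheme):

```python
import hashlib
import uuid

def select(uuids):
    """Derive a selector from all revealed UUIDs (sorted, so every
    participant computes the same value regardless of arrival order),
    then pick the candidate whose UUID is closest to it."""
    normalized = b"".join(sorted(u.bytes for u in uuids))
    selector = hashlib.sha256(normalized).digest()
    target = int.from_bytes(selector[:16], "big")  # truncate to UUID width
    return min(uuids, key=lambda u: abs(u.int - target))
```

Since the selector depends on every UUID and each candidate contributed theirs before seeing the others' pre-images, no single honest-until-reveal candidate can steer the result; the weakness, as discussed below, is a candidate who withholds their reveal entirely.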

I tinkered a bit with adapting it for situations where the candidate set could shrink during the selection process (i.e. a candidate drops out), but didn't really pursue it much.

This is generally referred to as a RANDAO in cryptoeconomics. The problem is that a candidate can decide not to reveal their preimage `Ui` and affect the outcome of the RANDAO in that way.


Interesting! Thanks for the reference. I'm not sure how not revealing the pre-image would allow them to affect the result with any degree of predictability - it is equivalent to them selecting a different uuid, is it not?

I'll try to read up and see if I can answer that question.

Say that all the other participants have revealed their pre-images, and you're the last one left you reveal your pre-image. Before you reveal it, you realize you alone have all the pre-images and can calculate the result. You calculate it, and realize you don't like the result. You could then decide to throw away your pre-image to force a new roll. If necessary, claim your computer crashed, your connection dropped, etc.

I misworded the first sentence confusingly. Here's what I meant:

>Say that all the other participants have revealed their pre-images, and you're the last one left to reveal your pre-image.

Couldn't this be solved (or the risk of manipulation largely mitigated) by using a trusted third party to witness, and perhaps hold, each candidate's pre-image generation?

If you have the trusted party reveal all of the pre-images, then you're just passing off the ability to decide not to reveal the pre-images to them. You could layer on the ability for all of the parties to reveal their pre-images if the third party fails to reveal them, though now you have the complication that the parties could reveal different pre-images than they gave to the third party. (Maybe one of the parties could realize the third party would defect because of a certain result that party doesn't want either, so that party could change their pre-image to avoid that result.)

The third party isn't any different than the main parties in the mix. If everyone can decide who the most trustworthy party in the mix is, you can have them reveal last.

Thanks for the succinct explanation - that makes sense.

FYI, cryptographic voting protocols have a long history of research:


It is very hard to handle candidate drop on this scenario.

Yeah that was why I sort of let go of that bit. The best solution I could come up with was to keep the population sizes small and re-run the vote if a drop occurred.

This doesn't work for large populations because the probability of a drop occurring during the selection procedure approaches 1. There, I was considering a tournament style selection - partition the population into small groups, select one from each, and treat the winners as a new candidate population.

I like the UI, but I find the blog post example - flip a coin to see who will donate a kidney - a little distasteful.

I don't really understand why they use HMAC-SHA256. Why do many schemes decide to do this needlessly when they can use SHA3 or Blake2b?

HMAC-SHA256 is baked into the WebCrypto API, whereas SHA3 and Blake2b are not (as far as I know). This avoids having to load yet another library into the browser.

You can't use a straight hash in applications where the plaintext might be guessable. In the example given, you might try hashing "heads" and "tails" to see if that matches the SHA you were given. The random padding on an HMAC replaces the bit about lasagna in the example.

The plaintext isn't guessable, though. The post says they use a 32-byte random string.
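Both points can be made concrete. A bare hash of a low-entropy value (like "heads") is trivially guessable by enumeration, while a keyed (HMAC) commitment, or equally a commitment to a 32-byte random string as the sibling comment notes Keybase uses, is not. A sketch:

```python
import hashlib
import hmac
import secrets

# Committing to a low-entropy value with a bare hash is guessable:
bare = hashlib.sha256(b"heads").hexdigest()

def guess(commitment):
    """Enumerate the tiny plaintext space and check each candidate."""
    for candidate in (b"heads", b"tails"):
        if hashlib.sha256(candidate).hexdigest() == commitment:
            return candidate
    return None

# Keying the hash with a random secret defeats the enumeration,
# because the attacker would also have to guess the 256-bit key:
key = secrets.token_bytes(32)
keyed = hmac.new(key, b"heads", hashlib.sha256).hexdigest()
```

Here `guess(bare)` recovers the choice immediately, while the same enumeration against `keyed` finds nothing.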

Agreed we could have used either of those. We needed a PRF keyed by the Game ID (so the revealed secrets of this game can't be replayed in the next). Blake2b and SHA3 would also have worked fine.
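The "keyed by the Game ID" point is a domain-separation trick that can be sketched in a couple of lines. The exact key/message layout Keybase uses is not shown in the post, so this is only an illustration of the idea:

```python
import hashlib
import hmac
import secrets

def commitment(game_id: bytes, secret: bytes) -> bytes:
    # Keying the PRF with the game ID means a secret revealed in one
    # game produces a different (useless) commitment in any other game,
    # so reveals can't be replayed. Layout here is an assumption.
    return hmac.new(game_id, secret, hashlib.sha256).digest()

secret = secrets.token_bytes(32)
c1 = commitment(b"flip-2019-001", secret)
c2 = commitment(b"flip-2019-002", secret)  # same secret, different game
```

Even with an identical secret, `c1` and `c2` differ, so knowing the revealed secret from game 1 gives an attacker no valid commitment for game 2.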

Why is it needless? SHA3 was designed to complement, not replace SHA2. Blake2* are preferred by some but they are not the national standard and have their own quirks.

HMAC is useful because it fixes some flaws with the Merkle-Damgard construction that SHA2 uses.

SHA3 and Blake2* use different constructions that don't have these flaws, thus they don't require HMACs for their needs.

Ok, your original comment wasn’t clear on intent.

this is brilliant AND super useful.

also love the details like “flip again”

Author here, thank you for the comment. "flip again" was added at the last minute, after a night at a bar...where some beta testers were making real-world decisions using the app.

I didn't cover some details I find fascinating but which might have been overkill outside of HackerNews. For example, some assume the "one-way"ness of a hash function makes this protocol work. But that's not enough: we can't have Alice generating 2 different secrets with the same hash, even if Barb can't reverse the hash. What we also need is _collision resistance_, so Alice doesn't get to pick and choose what to expose in the final stage.


Lately, we've made much bigger, but less blogworthy, improvements to Keybase. It's faster, team on-boarding is getting better, and we'll be launching a very improved UX in the next month or so. I rarely get to stop and write about Keybase, so this was fun.

And for anyone looking to test, I'm `chris` on keybase. You can start a chat with me and do a `/flip cards 5 chris,yourname` and we'll see who gets a better poker hand. If you can deal yourself a flush or better on your first try I'll give a prize or something? Who knows. Anyway, we're having fun with it.

It seems like a VRF might be a more natural choice than a commitment scheme for verifiable randomness, since it doesn't require any honesty assumption for participants, and Keybase already manages keys (though maybe it would be a problem if participants could change keys midway through the ceremony).

@malgorithms - Could you consider buying or integrating Cryptpad? [1]

It would give you an office suite play very very quickly - I can only see it as a winner.

[1] https://github.com/xwiki-labs/cryptpad

I've thought about trustless lotteries quite a bit, and haven't really came up with a solution that works without using head-to-head brackets.

>A bad actor can't change the outcome of a flip but could prevent it from resolving.

The post glosses over this but it can get pretty bad. E.g. 99 actors have revealed their seeds and then the 100th decides whether to reveal or not, based on whether they or their confederates will be the winner after that final reveal.

I see a flaw with that prng scheme. Since AES is reversible, the 128-bit blocks that make up the output cannot repeat. The output is a permutation of distinct 128-bit blocks. Early in the sequence that only matters a tiny bit, but the longer it goes, the more that tells you about possible upcoming values.

Seems like it's safe for 2^(64) blocks. That should suffice.
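The permutation-vs-random distinction is easy to see at a toy block size. Below, a keyed permutation of 0..255 stands in for AES's permutation of 128-bit blocks (an assumed toy setup, not the actual PRNG): the "CTR-mode" outputs never repeat, while a truly random stream of the same length collides at the birthday bound.

```python
import random

rng = random.Random(1234)            # fixed seed for reproducibility

# Toy 8-bit "block cipher": a random permutation of 0..255.
perm = list(range(256))
rng.shuffle(perm)

# "CTR mode": encrypt counters 0, 1, 2, ... A permutation is
# injective, so these outputs can never repeat.
ctr_stream = [perm[i] for i in range(256)]

# A truly random stream of the same length collides quickly,
# which is the observable difference between the two.
rand_stream = [rng.randrange(256) for _ in range(256)]
```

Scaled up to AES's 128-bit blocks, this difference only becomes statistically visible after around 2^64 blocks, which matches the parent comment's bound.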


In theory you are right, but in practice, flips take fewer than 10 blocks of AES output, so the sequence should look random unless AES is very broken.

Edit (and meta-edit): I changed the wording in the FAQ accordingly.

@malgorithms, what are the colored bars from each participant? Is it a colored representation of the hashes?

Yes! Each horizontal row in the rectangles represents a participating device. The purple/blue rectangle that comes in first represents all the bytes of the commitments coming in. Since we constrain the size of the rectangle it makes (IMO) a cool visual effect as the rows squeeze to accommodate more data.

Each little square inside it represents a byte, so we map bytes (0..255) to colors ranging from a blue to a purple.

The matching secret is also 32 bytes, and of course those come in in random order, so we line up secret rows with the matching commitments. It sure is fun to watch.

We played with some different visualizations. We actually had one version with a 3D sphere getting covered in data, but it felt too gimmicky. This gives a good feeling of people showing up.

I really liked the animation! Could you share the other visualizations you were considering somewhere?

I got lost on the line "If the final answer is odd, the flip is TAILS." For example: Alice flips 1 for tails. Barb/Charlie/Danika flip 0. Why is the answer tails when most of the people flipped 0 for heads? Why use XOR instead of just taking the most common answer?

Perhaps an even simpler analogy is a light switch. Each person decides randomly either to flip the light switch or leave it where it is. This is basically what random XOR'ing is.

If you're one of 10 people doing this to the light switch, then as long as you choose randomly, it doesn't matter what the other 9 people do. It has a 50% chance of ending up on and a 50% chance of ending up off. Even if the other 9 people are cheating together.

Of course this has the problem that whoever goes last wins, which is why the commitment ceremony is necessary.

So you only have to trust one person to play fair. If you used most common vote, then if 10 people play, 6 could conspire to always achieve tails.

But when using XOR, even if 9 people conspire they cannot know whether the 10th submits a 1 or a 0.

Think of it this way: even if everyone else is cheating and fixes their answers to be tails, one fair coin flip still randomly determines the outcome, because that single fair flip swaps the whole group's result between heads and tails.
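A quick simulation (a hypothetical sketch) makes the point: with nine colluders fixing their bits and one honest participant choosing randomly, the XOR of all ten bits is still uniform.

```python
import random

random.seed(1)
cheater_bits = [1] * 9  # nine colluders all fix their bit to 1 (tails)

def flip() -> int:
    """XOR one honest random bit with all nine fixed cheater bits."""
    result = random.randrange(2)  # the lone honest participant
    for b in cheater_bits:
        result ^= b
    return result

# Over many trials the outcome stays ~50/50 despite the collusion.
trials = 100_000
tails = sum(flip() for _ in range(trials))
print(abs(tails / trials - 0.5) < 0.01)  # close to fair
```

A majority vote under the same collusion would instead yield tails every single time.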

Seems like overkill. Just declare your guesses and use a third party to generate coin flip/dice roll. In fact this functionality is built-in in a number of chat clients.

How is this different from provably fair? https://dicesites.com/provably-fair

> The Keybase app can deal M cards into N labeled hands. I don't know what you would do with that, but enjoy.

Is he being coy here? I mean - poker, right?

I'll add my usual reminder here that keybase is a proprietary walled garden that made up its own crypto standards and protocols on many levels leading to dangerous design flaws like this one: https://github.com/keybase/keybase-issues/issues/1946

Also to cut off the usual arguments: no you don't need to be closed source or violate standards to provide secure yet user friendly crypto tools. See: OpenKeychain

How does this address the last revealer problem?

Assuming you mean the attack where one person can wait to be the last to reveal their secret and then influence the game by dropping out, it doesn't appear to.

> What if someone loses network before the secret stage?

> A bad actor can't change the outcome of a flip but could prevent it from resolving.

> The Keybase app will highlight this scenario. Odds are it was just a network issue, but if you have such a person disappearing often, you should break up with them.

So someone with a malicious client could force the flip to not resolve until it creates an outcome they're satisfied with, which is... Not great. I can't tell if "The Keybase app will highlight this scenario" means it'll abort the roll or if it'll automatically reroll.

One way to solve this issue is to shuffle the possible outcomes and encrypt or hash their positions along with a nonce. This info would be distributed to clients during the game initialization. At reveal time, the server presents these outcome positions to the clients along with the other client secrets.

Also see my other comment in this thread where, in a client/server-architected version of this protocol, the reveal step is split into two stages:

> 4a. All members submit their secret values to the server.

> 4b. Once all secret values are submitted, they are all sent to each member at once.

> This ensures that members cannot determine the outcome before anyone else.
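A minimal commit-and-reveal round in that client/server shape might look like this (a hypothetical sketch, not Keybase's actual protocol; member names and the 32-byte secret size are assumptions):

```python
import hashlib
import secrets

def commit(secret: bytes) -> bytes:
    """Commitment is just a hash of the secret."""
    return hashlib.sha256(secret).digest()

# Commit stage: every member picks a random secret and publishes its hash.
members = ["alice", "bob", "carol"]
secrets_by_member = {m: secrets.token_bytes(32) for m in members}
commitments = {m: commit(s) for m, s in secrets_by_member.items()}

# Stage 4a: all members submit their secrets to the server, which
# verifies each one against its commitment before releasing anything.
for m, s in secrets_by_member.items():
    assert commit(s) == commitments[m]

# Stage 4b: only once every secret is in does the server broadcast them
# all at once, so no member learns the outcome before the others.
combined = bytes(a ^ b ^ c for a, b, c in zip(*secrets_by_member.values()))
coin = "TAILS" if combined[-1] & 1 else "HEADS"
print(coin)
```

The key property is that stage 4b never starts until stage 4a is complete, so nobody gets an early peek at the combined value.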

Makes sense to me!

Even requiring a deterministic reveal order (without a central, independent server) would reduce the chance an individual could force a reroll after knowing the outcome from 1/1 to 1/n, which would be significant. The 'central server' solution is probably more relevant for Keybase though.

echo -n
