
CHURP: Dynamic-Committee Proactive Secret Sharing - momentmaker
https://eprint.iacr.org/2019/017.pdf
======
nullc
In industry, "secret sharing" has essentially become a reliable indicator of
snake oil.

Virtually all applications for which people argue for the direct use of secret
sharing make limited sense under plausible threat models. The implementations
are usually flawed, almost universally containing at least certificational
issues like timing/EMI side channels (which are likely worse threats than the
things they protect against in most cases), and it's not uncommon to find
implementations that are broken in ways that completely undermine their
security.

An intuitive way to understand why this is so: secret sharing can fairly be
seen as a generalization of one-time-pad encryption to support thresholds, so
the same (and worse) fragility that is widely understood to exist for OTP
encryption exists for SSS.
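The analogy is exact in the 2-of-2 case: additive (XOR) secret sharing is
literally a one-time pad, with the pad as one share and the "ciphertext" as
the other. A minimal illustrative sketch in Go:

```go
package main

import (
	"crypto/rand"
	"fmt"
)

// split2of2 does 2-of-2 XOR secret sharing: share1 is a one-time pad,
// share2 is the secret XORed with it. Either share alone is uniformly
// random; both together recover the secret. As with OTP, a reused or
// biased pad destroys all security.
func split2of2(secret []byte) (share1, share2 []byte) {
	share1 = make([]byte, len(secret))
	if _, err := rand.Read(share1); err != nil { // pad must be fresh CSPRNG output
		panic(err)
	}
	share2 = make([]byte, len(secret))
	for i := range secret {
		share2[i] = secret[i] ^ share1[i]
	}
	return
}

// combine XORs the two shares back together to recover the secret.
func combine(share1, share2 []byte) []byte {
	out := make([]byte, len(share1))
	for i := range share1 {
		out[i] = share1[i] ^ share2[i]
	}
	return out
}

func main() {
	s1, s2 := split2of2([]byte("attack at dawn"))
	fmt.Println(string(combine(s1, s2))) // attack at dawn
}
```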

To be useful, secret sharing needs to take place as part of a greater
protocol-- and it does, but then it doesn't get called secret sharing-- it
gets called whatever the greater thing is, like "multiparty threshold
signatures".

Academic work on building blocks is useful, but it's another thing to promote
something as directly useful industrially.

------
dkoston
Not sure if OP is the author but thanks for sharing. Secret management is a
challenge for a lot of folks.

For the authors, here are a couple of items that made it hard for me to
evaluate the project:

1. The project doesn't build for me: some of the dependencies don't exist or
are private repos, which prevents me from building it
([https://github.com/CHURPTeam/CHURP/blob/master/src/cmd/bb.go...](https://github.com/CHURPTeam/CHURP/blob/master/src/cmd/bb.go#L5))

2. It lacks godoc documentation, which makes it hard to quickly see how the
API works. As such, some of the API methods seem less than useful.

For example, `(Optional) storeSecret(SK)`. What does "optional" mean? What's
the return value? What's SK?

`(Optional) retrieveSecret() -> SK`: What? Can I only store a single secret?
Without passing params to this, it seems so.

3. The project structure is unconventional (cmd/ inside src/)

This may not seem like a big deal, but am I really going to trust my secrets
to someone who didn't learn enough about Go to use godoc and a conventional
project structure?

4. No tests

Again, I'm doubtful I'm going to trust my secrets to a distributed network
that isn't tested.

~~~
mskd12
One of the authors of the work here.

1. Thank you for your comments. The project is an early-stage research
prototype, and we will soon add documentation and tests to the code.

2. In fact, not all functions in the API are fully implemented; development is
still in progress. At a high level, the functionality provided is to store and
retrieve a secret key (SK). The function is marked optional because in
practice the secret might not be supplied as input; instead, it might be
generated randomly.

3, 4. We appreciate the comments. At this point, we’re still adding more
features and improving the documentation. We plan to release the code with
tests in the future.

PS: Just added a disclaimer stating that the code is under development!

~~~
dkoston
For #2, it seems unlikely that people would have only a single secret.

I’m not sure if the codebase is more just to prove the concept from the paper
or to potentially get adoption. If you are looking for people to adopt the
project, I’d suggest a way to store and retrieve multiple secrets.
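As a rough illustration of the API shape being suggested - the names and the
in-memory map here are invented for the sketch; a real store would hold shares
across the committee, not local state:

```go
package main

import (
	"errors"
	"fmt"
)

// SecretStore is a hypothetical multi-secret variant of the
// store/retrieve API discussed above, keying each secret by a
// caller-chosen identifier instead of supporting only one secret.
type SecretStore struct {
	secrets map[string][]byte
}

func NewSecretStore() *SecretStore {
	return &SecretStore{secrets: make(map[string][]byte)}
}

// StoreSecret saves sk under the given identifier.
func (s *SecretStore) StoreSecret(id string, sk []byte) {
	s.secrets[id] = sk
}

// RetrieveSecret returns the secret stored under id, or an error.
func (s *SecretStore) RetrieveSecret(id string) ([]byte, error) {
	sk, ok := s.secrets[id]
	if !ok {
		return nil, errors.New("no secret stored under " + id)
	}
	return sk, nil
}

func main() {
	store := NewSecretStore()
	store.StoreSecret("wallet-1", []byte("k1"))
	sk, _ := store.RetrieveSecret("wallet-1")
	fmt.Println(string(sk)) // k1
}
```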

Best of luck with the research and project!

~~~
mskd12
In many scenarios, I'd think that different user-specific secrets can be
derived from the single secret stored, perhaps using a PRF. Why wouldn't this
be enough?

Thanks!

~~~
dkoston
You can use a PRF per domain to derive additional secrets. It simply depends
on the scope of the project and who you think your users are.

For example: if your user base is cryptocurrency wallet holders who have
multiple secrets for each wallet, will they construct a secondary library on
top of yours to manage those additional secrets? Why wouldn’t they choose to
derive each secret independently? If millions of dollars are at stake, would
you risk any shared state from your secret derivation function?

Deriving additional secrets with this library isn't needed to prove the
concept in the paper, so if that's the goal of the code, it's fine as is.
However, if mass adoption of the techniques you've created is your goal, a
more user-friendly API that doesn't require each end user to "roll their own
code" to manage multiple secrets should be a goal.

One of the first major projects I worked on was essentially a wrapper around
open source software that provided “ease of use APIs and UIs”. Because the
average user was not technical, the convenience wrapper became highly valuable
and is used by hundreds of millions of sites today (cPanel).

One of the biggest challenges as a technologist is to understand to what
degree most people are not technologists, even fellow programmers. For
example, I’ve worked with skilled programmers with impressive resumes who had
issues troubleshooting CORS because they never learned how headers are defined
and where to look up the RFCs.

As mentioned above, providing an API for multiple secrets could be out of
scope for a bunch of reasons. If you’re looking for mass adoption by
developers, I’ll wager an Omakase at the sushi place of your choosing that
it’ll be required.

------
baby
This is good timing, as I was just looking into secret splitting/sharing
schemes today. Does anybody know of a good survey? It seems like there are
several types of schemes:

* SSS: (threshold) Shamir Secret Sharing scheme. Splits a key into n shares such that any m of them can reconstruct it.

* PSS: Proactive Secret Sharing. You can rotate the partial keys with new independent ones, and participants can delete their previous partial keys. This is good in case someone's share gets compromised at some point.

* DCPSS: Dynamic-Committee Proactive Secret Sharing. This is CHURP. You can update the partial keys with a different m-of-n.

* VSS: Verifiable Secret Sharing. SSS's guarantees don't hold when participants are not all honest; this fixes that.

* PVSS: Publicly Verifiable Secret Sharing. ?

* APSS: Asynchronous Proactive Secret Sharing. ?

* SVSS: Strong Verifiable Secret Sharing. ?

* SVPSS: Strong Verifiable Proactive Secret Sharing. ?

There is also another branch of secret sharing called DKG, for Distributed Key
Generation. Participants collaborate to generate a key, and to asymmetrically
decrypt a ciphertext or sign a message. At no point in time does the full
private key exist.

~~~
abdullahkhalids
Thanks for listing these out. I worked on quantum secret sharing of the SSS
type during my PhD. In QSS, the secret is not a bit-string but an ordered set
of qubits in some state.

Now I am wondering whether there exist quantum analogs of these generalized
classical secret-sharing tasks you have listed.

------
jMyles
I read the entire website, then spent a few minutes looking at the code, then
a few minutes at the paper. So far, I don't understand the exact nature of the
proposition here.

Specifically: is there any difference in the collusion profile with CHURP vs.
vanilla SSS? If this is not an area of dramatic distinction, then it's a
little hard to understand how this might end up underwriting key management
technology.

Disclaimer: I work at NuCypher; our Ursula Character is probably reasonably
regarded as occupying a similar space to this.

~~~
mskd12
So, in vanilla SSS, the committee is fixed and the adversary is allowed to
corrupt t-out-of-n nodes _over the entire lifetime of the secret_.

In CHURP, committees are dynamic---changing at every epoch---and the adversary
is allowed to corrupt t nodes _in each epoch_.

~~~
jMyles
Sure, got that.

But doesn't it also mean that, as the committee cycles, the likelihood of
compromised nodes eventually being part of the cohort increases?

If you have a minute, maybe evaluate this statement to see if I've got it
right: The attack surface presented by CHURP, in contrast to vanilla SSS, is
more robust against an attacker who, throughout the lifecycle of the secret,
begins with few compromised nodes and increases her circle of influence
throughout? It is, at least in some cases, weaker against an established
attacker who begins the lifecycle already having compromised several nodes and
who can afford to wait until the right combination of nodes is selected for
committee participation.

Do I have that right?

Interesting stuff, either way.

Have we ever met? Were you at EthBerlin last week?

~~~
mskd12
The adversary in your statement seems like a static one. Given enough nodes in
the committee (hundreds), it should be possible to make sure the case you
specify never happens.

And, I'd think fixing the committee is worse, as it presents a static point of
attack from adversarial POV. If you want any amount of decentralization,
handling churn seems essential.

No, I was thinking about Devcon, but I don't want to travel halfway across the
world ;)

~~~
jMyles
We'd love to meet y'all. Devcon is a truly awesome time, but securing tickets
is a non-trivial task.

> And, I'd think fixing the committee is worse, as it presents a static point
> of attack from adversarial POV. If you want any amount of decentralization,
> handling churn seems essential.

Interesting; I have often had the opposite sense: that involving a larger
cohort of nodes generally increases the risk of collusion.

Of course, the consequences of collusion with Ursula are quite a lot less (she
can refuse to revoke, but cannot recover the secret). This comes at a cost of
needing to know Bob's public key prior to access being granted. It's really a
very different formula.

We have often pondered (sometimes out loud) about adding some variety of
Shamir's Character after Ursula hits mainnet.

------
kang
This paper relies on an assumption: at a given time, either there exists a
blockchain, by virtue of which participants can communicate freely, or
everyone knows everyone.

------
javert
Almost every comment here is complaining about the code. This is based on a
misconception. The "product" here is an academic paper, not the code.

The code is just a kind of sanity check for the ideas in the paper.

This is how academia works.

Please comment on the ideas embodied in the paper, not the quality of the
code, which is almost entirely irrelevant at this point.

(No affiliation with the authors, here.)

~~~
dkoston
There’s no misconception. Until I commented, there was no disclaimer stating
that the code was not ready for production use and was for research purposes
only.

Very few people are qualified to review and critique this type of codebase, so
it's dangerous to market a repository backed by multiple professors and PhDs
as "it does X, Y, and Z".

It would be great to live in a world where people did a serious review of
projects before downloading them but that’s not the reality we live in.

OP didn’t provide any context either that this was focused on reviewing the
paper and concepts in it so I think what you mean is that “you think people
should focus on the paper” but “we focused on the code”, that’s not a
misconception, it’s a lack of clarity by both OP and the project website about
what’s important.

If the code was irrelevant, it should be removed from the website so that the
audience could focus solely on the paper.

~~~
javert
> it’s a lack of clarity by both OP and the project website about what’s
> important

As a former academic, I can immediately spot this website as an academic paper
with a bit of code on the side. Academics do this kind of thing constantly
because they get kudos for it in academic circles. Yes, there is a lack of
clarity (if you are not an academic). That's why I pointed this out for what
it is.

> If the code was irrelevant, it should be removed from the website so that
> the audience could focus solely on the paper.

I mean, you're not wrong. But then they wouldn't get as many kudos from their
academic buddies.

~~~
dkoston
I agree that this stuff is commonplace, and I definitely understand that in
today's world of noise you have to market yourself and your papers to get
attention.

I was glad to see them add the disclaimer, as hopefully it'll turn away people
who think "hey, here's this secret management project made by a bunch of
professors and PhDs; I should use it because they're super qualified".

My goal was to provide feedback to the students about how people evaluate code
"in the real world". It's not helpful for them to be applauded for "sub-par"
code, but you raise a good point that it can seem harsh not to provide
feedback on the "meat of the project", which is their paper and research.

------
jfindley
Even a very cursory review of the code turns up giant red flags. As another
comment noted, it lacks tests, uses poor project structure, and has private
deps. Much more seriously, though, the encryption looks very unsafe.

The encryption uses math/rand [1]. This alone clearly shows the authors have
absolutely no clue what they are doing. The encryption itself is also very
unsafe - loads of obvious side-channel attacks and really far too many flaws
to cover here. This isn't just a few mistakes; this is a group of people who
don't know enough to do this safely. Sorry, this stuff is hard, and this
project doesn't come close to meeting even the lowest bar.

1:
[https://github.com/CHURPTeam/CHURP/blob/master/src/utils/enc...](https://github.com/CHURPTeam/CHURP/blob/master/src/utils/encryption/encryption.go)
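For reference, the safe pattern is to draw key material from crypto/rand
(which reads the OS CSPRNG) rather than math/rand, whose deterministic output
makes any "secret" built from it recoverable. A minimal sketch:

```go
package main

import (
	crand "crypto/rand"
	"fmt"
)

// newKey returns n bytes of cryptographically secure randomness.
// math/rand must never be used here: it is a seeded, deterministic
// PRNG, so its output is predictable to an attacker.
func newKey(n int) ([]byte, error) {
	key := make([]byte, n)
	if _, err := crand.Read(key); err != nil {
		return nil, err // never silently fall back to a weaker source
	}
	return key, nil
}

func main() {
	key, err := newKey(32)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(key)) // 32
}
```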

~~~
jrpt
It's clearly research code meant as a proof-of-concept of their paper, not
something to take and use in production.

If you're going to criticize it, maybe do it more respectfully, and also focus
on the paper, ideas, and motivation, not the particular proof-of-concept code.

I'm decently familiar with this space and haven't read the paper yet, but the
problem it's addressing is important.

~~~
jfindley
The link was to www.churp.io, which looks to me much more like a product than
a research paper.

One of the authors helpfully clarified below, but at least to me it was
extremely unclear that this was the case - had it been more obvious, I'd have
phrased my original comment _very_ differently. On second inspection, one of
the blue links does point to "full paper", but there are still lots of words
implying that it's a viable working product, which it really isn't.

~~~
leetrout
We posted at the same time. This is exactly right. It looks like it’s trying
to be a marketed, accepted solution.

------
cjslep
This plus ActivityPub and data that's content addressed would be neat.

------
DoctorOetker
There's a big difference between a brilliant protocol and a safe
implementation. It's clear from the other comments that the implementation
does not live up to expectations, but what about the theoretical algorithm?

------
EGreg
What's wrong with just Shamir's Secret Sharing?

------
earthtolazlo
When’s the ICO?

------
faeyanpiraat
deleted (salty comment not adding anything to the discussion)

~~~
detaro
People upvote things they find interesting, even if they don't (or can't)
evaluate the quality of the specific thing.

