
A censorship resistant deadman's switch - heelhook
https://killcord.io
======
danShumway
This is interesting, but runs contrary to my understanding of how Etherium
works. I'm clearly missing something; any chance you (or anyone else) could
elaborate?

My understanding was that the decentralization of Etherium would mean that
everyone watching the contract would need a copy of the decryption key. If
that's the case, what prevents someone from publishing keys early? Or is it
that the key isn't stored in Etherium, and Etherium is only being used as the
consent to publish?

If the key is being stored somewhere else and just waiting for the contract to
validate, how do we prevent a censor from just attacking that system?

If the key is being stored somewhere else and just waiting for the contract to
validate, why not also store the contract on the same machine and do checkins
directly into that? Would that be significantly less secure/reliable?

~~~
rojoroboto
Killcord treats Ethereum as a project backend API. The smart contract is
deliberately simple in construction. Writes are restricted to one of two
accounts (the owner account and the publisher account), and the publisher
account is further restricted to only allow writes to the publishedKey
variable in the contract. Reads are open to the public.
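
The access-control rules just described can be sketched in a few lines. This is an illustrative Python model, not killcord's actual Solidity contract; the names (`check_in`, `published_key`, etc.) are assumptions:

```python
# Illustrative model of the access rules described above: owner-only
# check-ins, publisher-only writes to one variable, public reads.
# The real killcord contract is Solidity; all names here are assumptions.

class KillcordContract:
    def __init__(self, owner, publisher):
        self.owner = owner
        self.publisher = publisher
        self.published_key = ""   # empty until the publisher triggers
        self.last_checkin = 0     # timestamp of the owner's last check-in

    def check_in(self, sender, timestamp):
        """Only the owner account may check in."""
        if sender != self.owner:
            raise PermissionError("only the owner can check in")
        self.last_checkin = timestamp

    def set_published_key(self, sender, key):
        """Only the publisher may write, and only this one variable."""
        if sender != self.publisher:
            raise PermissionError("only the publisher can publish the key")
        self.published_key = key

    def get_published_key(self):
        """Reads are open to anyone."""
        return self.published_key
```

In Solidity the same restrictions would be `require` checks (or modifiers) on `msg.sender`.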

As stated in other responses, the decryption key is stored on trusted systems
that run the owner or publisher killcord projects.

As for attacking the system, this is something to think about. So why did I
choose Ethereum for this?

Why Ethereum - The contract code (backend API) and variable state are written
to the blockchain, so availability is dictated by the network itself, which is
made up of around 20K nodes (give or take). Of course, as others have
mentioned, the other aspect of this is internet access for the publisher and
project owner.

For the publisher, this can be accommodated by running the publisher on a
geographically distributed set of trusted systems. What do I mean by trusted
systems? These are systems that meet your risk profile. The code can run on
AWS Lambda in multiple regions, or on a Raspberry Pi, or in a datacenter in
Iceland; the more, the merrier.

For the owner... If you are cut off from checking in, the system assumes
something bad is afoot. This is why it's important that anything put in
killcord is something you really want to publicly disclose. Killcord should
really only be a system that runs on your behalf in the case that you go MIA
and you want the data released in that event.

Hope this helps clear things up a bit!

~~~
kolinko
Are you using only a single decryption key?

If so, you could switch to an M-of-N scheme - far more secure, and thanks to
Ethereum, the coordination between key keepers would be really simple.

(Kind of what we did with Orisi.org years ago)

------
rojoroboto
Hey Gang. Author of killcord here. I'm honored and humbled this was submitted
to HN and I'll be reading through the comments to answer questions and respond
to feedback. I started this project after a thought experiment in using newer
decentralized tech for internet activism.

~~~
sterlind
Neat project! I thought up a trustless scheme for this a while back, but it's
beyond my means to implement:

You can encrypt an entire circuit with homomorphic encryption, which users can
run without decrypting its internal state. Construct a device like so:

Inputs:

  1. Ethereum block
  2. Previous run-state (encrypted), or zeros.

Outputs:

  1. Next run-state (encrypted)
  2. Decryption key (if triggered), or zeros (if not).

Internal state:

  0. Hash difficulty range
  1. Hash of previous block seen
  2. Pubkey to scan for
  3. Counter of # blocks seen without a tx signed by pubkey.

If you feed the device more than 1 week of blocks without a tx from pubkey,
the accumulator hits zero and it spits out the secret.

An attacker would have to mine 1 week of blocks at real network difficulty in
order to trick the device into spilling its guts. If you die, and don't send txs for a
week, anyone with the device can play a week of blocks into it and the secret
will pop out.
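
In the clear, the device's transition logic is tiny; the whole point of the construction is that this same logic would run over encrypted state under FHE. A Python sketch, with the block layout and reset constant as assumptions (a real device would also verify each block's proof-of-work difficulty, which is omitted here):

```python
# Plaintext sketch of the state machine described above. Block layout and
# the counter reset value are assumptions; difficulty checks are omitted.

BLOCKS_PER_WEEK = 7 * 24 * 60 * 4  # ~15-second Ethereum blocks (rough)

def step(state, block, secret):
    """Consume one block; return (next_state, output)."""
    counter, pubkey = state
    if any(tx["from"] == pubkey for tx in block["txs"]):
        counter = BLOCKS_PER_WEEK      # a check-in: reset the accumulator
    else:
        counter -= 1                   # another block with no check-in
    output = secret if counter <= 0 else None
    return (counter, pubkey), output
```

Under FHE, `state` and `secret` stay encrypted across runs, so whoever feeds blocks to the device learns nothing until the counter actually hits zero.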

Unfortunately, homomorphic encryption is still too slow for this to be quite
feasible. Food for thought though! And you can build this today with SGX, if
you trust that.

~~~
rojoroboto
Neat. Yeah, I picked symmetric encryption for the payload due to its relative
simplicity, speed, and resiliency.

------
ofcourseianal
Censorship resistant, until someone takes down the “publisher tool meant to
run autonomously on a trusted system”.

~~~
w-m
And a landing page that omits this fact (but contains a download link and
instructions for a command-line tool). If you're thinking "wait, I can't put a
self-publishing secret on the Ethereum blockchain, how does this even work?",
the landing page leaves you hanging.

~~~
rojoroboto
This is true.

I'm working with a friend who is a copywriter to help make the landing page
clearer and more helpful.

------
gnode
Given that the trusted party is required for this to work, is there any point
at all in having it depend on the Etherium blockchain, other than perhaps a
weak form of anonymity network?

~~~
rojoroboto
The purpose of killcord + Ethereum for public disclosures is that leaning on
Ethereum as an API backend ties the project to a network that is very hard to
take down entirely, whereas running your own backend resiliently is hard.

That being said, I'm working on the concept of "providers" so that storage,
payload, and backend are pluggable and you'll be able to use whatever
backend you are comfortable with.

------
s17n
As far as I can tell, Ethereum isn't actually doing anything interesting here
- it's just being used to transmit pings to the server, which could just as
easily be done with, for example, tcp/ip.

------
dogma1138
Anyone thinking of using this needs to consider at least two threat models.

1) The key custodian can decrypt your information, either willingly or through
coercion. If you use the same key to sign and encrypt the message, or if you
do not sign it at all, they may also be able to impersonate you.

2) A third party who would gain from the information being disclosed can force
its release through a denial attack.

Never use a deadman switch as a bargaining chip or as an insurance policy if
you do not intend the information to be released to the public, or if you are
not comfortable with the information being released the moment the switch is
set up rather than when it would be activated.

The only manner in which this or any similar setup does not expose you to
additional risk is if you only use it to ensure the release of said
information in a timely manner, and there is no adversarial motive to release
it sooner.

@the creators you might want to look at the possibility of implementing
[https://en.m.wikipedia.org/wiki/Chaffing_and_winnowing](https://en.m.wikipedia.org/wiki/Chaffing_and_winnowing)
over a blockchain.
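
For reference, chaffing and winnowing is straightforward to sketch: the sender MACs real packets with a shared key and interleaves chaff with bogus MACs, and anyone holding the key "winnows" out the wheat. A minimal Python sketch (the packet layout is an assumption):

```python
# Minimal chaffing-and-winnowing sketch (Rivest's technique). Wheat packets
# carry a valid HMAC under the shared key; chaff carries random bytes, so
# only key holders can separate the two streams.
import hmac, hashlib, os

def mac(key, serial, data):
    return hmac.new(key, serial.to_bytes(4, "big") + data,
                    hashlib.sha256).digest()

def add_chaff(key, packets):
    """packets: list of (serial, bytes). Interleave one chaff per wheat."""
    stream = []
    for serial, data in packets:
        stream.append((serial, data, mac(key, serial, data)))           # wheat
        stream.append((serial, os.urandom(len(data)), os.urandom(32)))  # chaff
    return stream

def winnow(key, stream):
    """Keep only packets whose MAC verifies under the shared key."""
    return [(s, d) for (s, d, m) in stream
            if hmac.compare_digest(m, mac(key, s, d))]
```

Doing this over a blockchain would mean the chaffed stream is publicly replicated, while only key holders can tell wheat from chaff.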

------
XR0CSWV3h3kZWg
There is a lot of hate for the trusted party set up of this, which seems
reasonable.

It seems like you could create a dead man's switch using arbitrary
participants. You distribute a secret exponent to every participant; to
attempt to activate the dead man's switch, each participant in turn raises k
to the power of their secret s mod p and passes the result to the next
participant. As long as you act as a participant each time and raise the
passed value to some invalid s, the final answer won't be the real secret.

As long as you participate every round the wrong answer will be arrived at,
but as soon as you don't participate the right answer will be arrived at.
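
A toy sketch of one such round, assuming a shared prime p and public base k (the parameters here are illustrative toys, not real crypto):

```python
# Toy sketch of the pass-the-value scheme above: each participant raises
# the running value to their secret exponent mod p. Exponentiation commutes,
# so the order of participants doesn't matter. If the owner joins with a
# bogus exponent the result is poisoned; if the owner goes missing, the
# honest participants alone reproduce the agreed value.

p = 2 ** 127 - 1   # a Mersenne prime (toy choice)
k = 5              # public base

def run_round(exponents):
    value = k
    for s in exponents:
        value = pow(value, s, p)   # pass the value along the ring
    return value

participant_secrets = [12345, 67890, 24680]
target = run_round(participant_secrets)              # the release value
poisoned = run_round(participant_secrets + [99991])  # owner still alive
released = run_round(participant_secrets)            # owner missing
```

As the EDIT below notes, the hard part this sketch ignores is hiding the target value itself from the participants along the way.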

Any singular party refusing to cooperate would destroy the deadman's switch so
malicious activation would be tough.

Designing it so it can tolerate failures would be the hard part.

EDIT: I am wrong, this isn't that great. It's really hard to hide information
that can be recovered without a secret being revealed.

~~~
danShumway
So, sort of like a secret-generating linked list, where one node (you) is a
bad actor?

What prevents the participant right before you from simply circumventing you
or secretly passing to the next participant directly?

It also seems that once someone receives the correct answer for their step in
the chain, they no longer need anyone beneath them?

(A) -> (B) -> (C) -> (you) -> (D)

Once C has participated in this one time, why do they need A or B?

~~~
XR0CSWV3h3kZWg
Good point. You'd likely want to also encode something that is opaque as to
who exactly has participated, only really showing whether this is the last
step, plus a way for individuals to tell if they have already added their
secret.

The really bad part would be that if the poisoner happens to be the last step,
then the final step would produce the secret before handing it over to be
poisoned.

~~~
girvo
I built exactly what you’ve described, using semi-homomorphic encryption
(addition of integers, used plainly as we were under the noise threshold of
participants). Luckily for me though, I got to punt on some of the really hard
questions of trust — the nodes that were communicating are adversarial, but
the outside “organising” network was the government and “us” (company I worked
for). It’s a really fun problem. I highly recommend taking a crack at it, or
even just reading the literature regarding digital voting — you need to prove
that one vote was cast for a given person, and no more, without ever tying
back any specific vote to said person, and with a huge range of attack
vectors!

~~~
carver
Was this a traceable ring signature[1], or something different?

[1]
[https://en.wikipedia.org/wiki/Ring_signature#Applications_an...](https://en.wikipedia.org/wiki/Ring_signature#Applications_and_modifications)

------
tshannon
So a lot of these comments seem to be criticisms of potential vulnerabilities
(which is par for the course on Hacker News, really). I'm curious if there are
better alternatives out there that aren't vulnerable to the same issues, like
a single point of failure or attack?

~~~
carussell
You could do secret splitting:

[http://www.moserware.com/2011/11/life-death-and-splitting-se...](http://www.moserware.com/2011/11/life-death-and-splitting-secrets.html)

It's vulnerable in that whichever threshold N you choose allows N
participants to conspire to publish ahead of time, or M - N + 1 to conspire
not to publish after the fact.
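
A minimal sketch of that scheme (Shamir-style N-of-M splitting over a prime field; the field size and share encoding are toy choices):

```python
# Shamir secret sharing sketch: the secret is the constant term of a random
# degree N-1 polynomial; any N of the M shares (points on the polynomial)
# reconstruct it by Lagrange interpolation at x = 0, fewer reveal nothing.
import random

P = 2 ** 127 - 1  # prime field modulus (toy choice)

def split(secret, n, m):
    """Split secret into m shares, any n of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(n - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, m + 1)]

def combine(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

The conspiracy bounds above fall straight out of this: any N share holders can run `combine` early, and if enough holders withhold shares that fewer than N remain, reconstruction fails.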

~~~
alanh
Interesting. I hadn't seen this, although I implemented something effectively
the same, except that all keys (which could be any number ≥ 2) must be
combined to reveal the secret (or any information about it).
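
That all-keys-required variant can be done with plain XOR: the first n-1 shares are uniformly random, so any subset short of all n reveals nothing about the secret. A minimal sketch:

```python
# n-of-n "all keys required" splitting via XOR. Each of the first n-1
# shares is uniform random; the last share is the secret XORed with all of
# them, so XORing all n shares together recovers the secret.
import os

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def split_all(secret, n):
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]
    last = secret
    for s in shares:
        last = xor(last, s)
    return shares + [last]

def combine_all(shares):
    out = bytes(len(shares[0]))
    for s in shares:
        out = xor(out, s)
    return out
```

This is the degenerate N = M case of threshold splitting: maximally resistant to early publication, but a single lost share destroys the secret forever.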

------
robert-wallis
What if the miners deny check-in transactions to force the killcord to
execute?

~~~
VectorLock
You're boned. Though, most systems, including Ethereum, are based on the
assumption that the miners aren't majority-controlled by an adversary. That
may or may not be a sound assumption.

------
everdev
Have any legal systems weighed in on a dead man's switch?

I get the premise, where typically it's illegal to take an action that
releases confidential or censored information.

But, to governments, especially ones that want to keep information secret or
censored, I'm not sure that negating that sequence and failing to stop the
release of information (that you willingly put in a dead man's switch) will
get you out of trouble.

Unless you're dead, of course. But I've seen this process promoted for living
people to release information, and I'm not sure it's any better than just
posting the content anonymously, except with the added risk of accidentally
releasing the information.

------
TekMol
Better and simpler solution: Create a Bitcoin address and send one Satoshi to
yourself every month.

When the transactions stop, people know you are dead.

This way you need no trusted third party, no special software, no special
contract.

~~~
sterlind
You're describing a different problem. Killcord solves the _Insurance Policy
problem:_

Suppose you're a whistleblower, who exfiltrated gigabytes of unredacted data
from the NSA. So far you've leaked only redacted excerpts, but the NSA might
kill you to stop your leaking.

However, the NSA really doesn't want the whole archive leaked, or it would
blow their agents' covers.

So, you put the whole archive up on the net, encrypted, and set up Killcord to
decrypt unless you keep checking in. This keeps you alive, since the NSA knows
it'll leak if you're dead.

~~~
rojoroboto
Yep. This falls in line with my design thinking on this.

------
bowmessage
Why is everyone suddenly spelling Ethereum with an '-ium'?

------
fareesh
Does this take into account network congestion and such?

~~~
rojoroboto
This is left up to the killcord project owner via the publisher threshold. If
the project owner is concerned about congestion, the owner should increase
the time allotted to the threshold.
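
That policy boils down to a simple deadline check on the publisher side. A sketch, where the names, units, and default margin are assumptions rather than killcord's actual configuration:

```python
# Publisher-side liveness check: only publish once the owner's check-in
# threshold, padded with a congestion margin, has lapsed. All names and
# the 6-hour default margin are illustrative assumptions.

def should_publish(last_checkin, now, threshold_hours, margin_hours=6):
    """True if the owner has been silent past the padded deadline.

    last_checkin and now are Unix timestamps in seconds.
    """
    deadline = last_checkin + (threshold_hours + margin_hours) * 3600
    return now > deadline
```

Raising `threshold_hours` (or the margin) trades responsiveness for tolerance of congested or delayed check-in transactions.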

------
arisAlexis
If someone puts a gun to your head and steals your private key, he can
continue checking in after he kills you, right?

~~~
matte_black
No. If you have ordered keys and only you know the order, there is no way to
do it unless you give up the order, and even then there's no way to confirm
the order is correct without trying it.

The way around this is to threaten not to kill the target, but rather to kill
their whole family or those they care about, viciously and painfully, and to
be ready to do it if the order is wrong and there is an automated leak.

~~~
azernik
Well sure, just like you could give them the wrong private key.

I always find these arguments against coercion attacks unconvincing. "Well,
they can force you to give them information A, but for some reason not force
you to give them information B." No, they'll put you in jail and force you to
give them all the information needed to send check-ins, period.

