
Time-lock encryption - kiba
http://www.gwern.net/Self-decrypting%20files?2
======
zellyn
There are also real-world solutions. I spent some time brainstorming this a
while back.

1) The safest solution, albeit somewhat expensive: launch a Voyager-like
probe. It constantly generates private and public key pairs, broadcasting the
public keys immediately and the private ones on a schedule. That way, you can
pick a known timeframe for decryption. Launching more than one probe, for
redundancy, is probably a good idea: you can encrypt with multiple keys.

2) Cheaper but less reliable solutions involve balloon-dropping transmitters
that wake up and divulge decryption keys across a huge geographic area where
it would be infeasible to find them. A variation is sinking modules very deep
in the ocean that wake up and float to the surface.

~~~
xlayn
If the probe is destroyed, so is the possibility of decryption. The same goes for the second option.

In fact the problem is not really about time but about processing time, and it's hard for two reasons. The math has to be so incredibly good/error-free that going the physics route is cheaper, and by cheaper I mean the only way (there is nothing more expensive than something that does not exist). Then come the physics assumptions: which part do you assume is "hardcoded"? (gwern settled on cache speed; I see a problem with specialized hardware for that, since AMD builds CPUs to spec, which is back to physics.) The speed of light, the amount of energy it takes to perform the job, the resistance of circuits near absolute zero and therefore the energy required, and, supposing non-parallelizable tasks and therefore a scalar CPU, a minimum circuit and thus the amount of time under perfect conditions. This works unless Reality(tm) comes into play: how much time will it take humanity to reach those perfect conditions? And which conditions or values do you consider appropriate for measuring the time (for releasing the information once the conditions are met)?

What if you just hide (for example by burying them in the sand of the Sahara) 100 devices, of which 10 are the originals and the rest are backups, and you need ten of them to decrypt? (Scale by 10^x for your paranoia level.)

~~~
anologwintermut
That's why you launch a few of them. Also, the odds of the probe getting destroyed, e.g. by an impact, are really low. They get lower once you leave Earth orbit, and really low once you hit interstellar space. And you can probably get clear of impact hazards quicker if you go perpendicular to the plane of the ecliptic rather than outward.

Your bigger issue is component failure.

So launch enough of them to deal with component failure, wait long enough for them to be out of harm's way, and then you have a time lock.

Sure it's expensive, but time locks with a known time to unlock or even a
decent estimate are not really possible algorithmically.

~~~
xlayn
Excellent, but the point is to find the most secure/cheap solution. He already has a mountain; he could build a room with gigantic 100-meter steel walls containing a piece of paper with the passphrase on it, recorded by 100 webcams on 100 different connections, or send 10,000 probes. The math is so good, and the ideas and intelligence needed to break it so expensive, that the other parts of the problem look more feasible to hack/fix/try.

------
karl_gluck
I like the parallelized hash chain construction idea; I've never seen that
before.

One could improve a chunk of the chain by having checkpoints along the way. As
the author mentioned, it would suck to be 2 years into mining a 35-year
computation, only to make a mistake that you can't detect until the very end.

To add checkpoints, one could release both the original seed of the chain A,
and a number of pairs of hashes (x0,y0) (x1,y1) ...

Let's say you wanted to do 1-month chains. Hash the seed A for a week; call the current chain value B and take x0 = H(B). You know the value of B, since you've been computing the chain. Pick another random value y0, and continue the chain with H(B^y0). Write (x0,y0) in the output, and hash for another week. Do the same for (x1,y1), (x2,y2), and (x3,y3). Each chain then has a seed value and 4 pairs of 'checkpoints'.

When unlocking the crypto puzzle, these checkpoints can't be used to jump
ahead in the computation, but they can tell you that you're on the right
track.

I think that you could even use a secondary hash chain for the y_n values, so y_(n+1) = H(y_n). If you also derived y0 from A (e.g. y0 = H(A^const)), you would
just need to publish the seed value A and each checkpoint hash x_n in order to
have a fully checkpointed crypto puzzle.
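
Roughly, in code (a minimal sketch of the checkpoint idea above; the hash counts and sizes are tiny demo placeholders rather than a week's worth of hashing):

```
import hashlib
import os

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def build_chain(seed: bytes, segment_len: int, segments: int):
    """Hash chain with a checkpoint pair (x_i, y_i) after each segment."""
    checkpoints = []
    v = seed
    for _ in range(segments):
        for _ in range(segment_len):   # "a week" of hashing, tiny here
            v = H(v)
        x = H(v)                       # x_i = H(B): proves the solver reached B
        y = os.urandom(32)             # fresh randomness to resume with
        checkpoints.append((x, y))
        v = H(xor(v, y))               # continue the chain from H(B ^ y_i)
    return v, checkpoints              # v becomes the encryption key

def redo_chain(seed: bytes, segment_len: int, checkpoints):
    """Recompute the chain, verifying each checkpoint along the way."""
    v = seed
    for i, (x, y) in enumerate(checkpoints):
        for _ in range(segment_len):
            v = H(v)
        assert H(v) == x, f"went off the rails before checkpoint {i}"
        v = H(xor(v, y))
    return v

seed = os.urandom(32)
key, cps = build_chain(seed, segment_len=100_000, segments=4)
assert redo_chain(seed, 100_000, cps) == key
```

The (x_i, y_i) pairs let a solver verify progress at each segment boundary without letting them skip ahead, since the next segment still depends on the unknown chain value B.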

~~~
cbhl
One problem I can see with that is you have to be very careful that your
hashes (x_i, y_i) don't make it more attractive to, say, solve the n-1th
computation (similar to brute-forcing a password hash) than to do the first
1..n-1 computations properly.

~~~
karl_gluck
Good point! Still, I think the best hash inversion I've seen is slightly under
1/2 the hash space (2/5ths?). Using a 512-bit hash like SHA3, even at an
ungodly hash rate (1 TH/s), you still get approximately 2^141 seconds = 2^133
years maximum chain length before it becomes more efficient to invert the
hash.

------
bteitelb
If only there were a series of mirrors, each N light-years away, you could
blast a one-time pad out to the mirror of your choice and then announce the
date the reflection is expected to arrive.

~~~
simbolit
Nice idea IF we had such mirrors. AFAIK the farthest away is on the Moon.[0] Also, a one-time pad needs to be as large as or larger than the data it encrypts.[1] So your plan cannot be used for the several-gigabyte WikiLeaks insurance file. But perhaps you could "pad" the key used to encrypt it? Probably.

[0] [http://en.wikipedia.org/wiki/Lunar_Laser_Ranging_experiment](http://en.wikipedia.org/wiki/Lunar_Laser_Ranging_experiment)
[1] [http://en.wikipedia.org/wiki/One-time_pad](http://en.wikipedia.org/wiki/One-time_pad)

------
im3w1l
I think I have a better scheme. Say you have a 10-bit keyspace or something, and then encrypt a very large number of times with random keys. You don't have to perform as much computation as your adversary. By the law of large numbers, the probability of solving all of the puzzles in a much shorter-than-expected time is low. And it is much less parallelizable than just one encryption with a random key.
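
Something like this toy sketch, if I understand the idea (the XOR "cipher", the magic marker, and the tiny parameters are all stand-ins of my own, just to show the peel-one-layer-at-a-time structure):

```
import os
import hashlib

KEY_BITS = 8            # tiny keyspace per layer (demo-sized)
LAYERS = 100            # would be much larger in practice
MAGIC = b"OK-LAYER"     # marker so the brute-forcer can recognize a correct key

def keystream(key: int, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key.to_bytes(8, "big")
                              + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_layer(plaintext: bytes, key: int) -> bytes:
    data = MAGIC + plaintext
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def decrypt_layer(ciphertext: bytes, key: int):
    data = bytes(a ^ b for a, b in zip(ciphertext, keystream(key, len(ciphertext))))
    return data[len(MAGIC):] if data.startswith(MAGIC) else None

def lock(message: bytes) -> bytes:
    for _ in range(LAYERS):
        key = int.from_bytes(os.urandom(2), "big") % (1 << KEY_BITS)
        message = encrypt_layer(message, key)   # keys are thrown away
    return message

def unlock(blob: bytes) -> bytes:
    for _ in range(LAYERS):                 # layers must be peeled in order...
        for key in range(1 << KEY_BITS):    # ...brute-forcing each one
            attempt = decrypt_layer(blob, key)
            if attempt is not None:
                blob = attempt
                break
    return blob

sealed = lock(b"the secret")
assert unlock(sealed) == b"the secret"
```

The locker pays one encryption per layer, while the unlocker pays about half the keyspace per layer, and the total unlocking time concentrates around its mean as the number of layers grows.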

~~~
ColinDabritz
Interesting idea, I like the "as much certainty as you want" with the
probability.

Why can't this be parallelized? Sure you have to work on one key at a time
because they are sequential, but 1000 machines can be cracking that one key.
Every key cracked would be distributed and the cluster would start in on the
next.

It's a little more complicated, but I'm not seeing how it's really any less parallelizable.

~~~
im3w1l
I think, but am not certain, that with such a small keyspace the IO cost would
dominate if you tried to parallelize it. (Assuming each decryption attempt is
rather quick).

~~~
dsl
I/O is inexpensive for even non-state level actors. I have 40,000 spindles on
a research cluster that is hardly used for anything...

~~~
Game_Ender
Yeah, everybody has $4 million in hard drives just sitting around. At 1 TB each that is a 40-petabyte array, which is still huge in this day and age.

------
pbaehr
This is interesting in its own right, but the Assange use case doesn't really
make sense to me. Wikileaks doesn't want the encryption to be broken after a
certain amount of time, they want it broken based on the condition of
assassination.

~~~
arielweisberg
The primitive for this is a dead man's switch. I wonder what the cryptographic
equivalent would be.

Some sort of computational network that will always make progress towards
decrypting the data unless the soon to be dead man injects something using his
private key that sets the network back preventing completion?

The network can't identify that the soon to be dead man is preventing
progress?

Sounds like a fun research project. Maybe tie it to some coin mining network.

~~~
betterunix
The problem with using a secure protocol is that you need to trust the parties not to just instantiate a second version of the protocol without you. If you can trust them that far, you can just give them shares of the secret and trust them not to recombine the shares unless you die.

~~~
arielweisberg
I think what you would do is make it computationally infeasible for them to
create a separate network. The network exists independent of your individual
secret and may be working towards the release of many soon to be dead men's
secrets in a peer to peer fashion.

Sure they could avoid the dead man injecting something, but then they would
get hit with the full workload. If you coupled this with an economic incentive
like bitcoin mining you could get the miners to allow you to drastically
increase the amount of work required and make it keep up with the state of the
art in technology.

I couldn't say whether you can string enough primitives together to build
something like this but it would be really cool if you could. Maybe I should
go file a patent or something :-P

~~~
betterunix
"I think what you would do is make it computationally infeasible for them to
create a separate network."

If the goal of the network is to release my secret when I _fail to
participate_ then there must be a way for the network to operate without my
participation. What stops the parties from _ignoring_ the messages I send,
thus recovering my secret by simply pretending I died?

~~~
arielweisberg
I think that is one of the novel pieces of functionality that needs to be
created. The soon to be dead man looks like any other miner to the network and
submits what appears to be valid work, but he is secretly poisoning the work
going to release his secret, and of course he has to be the only one capable
of doing that. Maybe later on the network realizes the work it was doing was poisoned, rolls back the poisoned change, and resumes processing. Every time a poison pill is introduced, the network must do some amount of work to determine that the pill must be discarded, and that amount is tunable by the soon to be dead man.

"What stops the parties from ignoring the messages I send, thus recovering my
secret by simply pretending I died?" They would have to control enough of the
network and know who you are and how you are interfacing with the network to
stop you. If the network has an incentive like bitcoin mining this could be
infeasible for many adversaries.

Tampering is a problem with real dead man's switches as well such as a script
that you have to ping or an associate.

I think big computational networks with incentives unlock some interesting
doors. If you can assume that the computational majority of the network is
playing ball you can ask it to do some interesting things if you have the
right private key.

~~~
betterunix
""What stops the parties from ignoring the messages I send, thus recovering my
secret by simply pretending I died?" They would have to control enough of the
network and know who you are and how you are interfacing with the network to
stop you."

OK, but if the assumption is that out of the _n_ parties in the network no
more than _k_ parties are malicious, why not just use a _k+1_ out of _n_
secret sharing scheme? You broadcast a signed message once per month, and if
the message does not arrive for some number of months the parties all
broadcast their shares and recover the secret.
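
As a rough sketch of that heartbeat logic (the share handling is stubbed out and an HMAC stands in for the real signed messages; the names and grace period are my own placeholders):

```
import time
import hmac
import hashlib

HEARTBEAT_KEY = b"demo-heartbeat-key"   # in reality: verify a signature instead
GRACE_PERIOD = 3 * 30 * 24 * 3600       # ~3 months without a valid heartbeat

class ShareHolder:
    """One of the n parties holding a share of the secret."""
    def __init__(self, share: bytes):
        self.share = share
        self.last_heartbeat = time.time()

    def receive_heartbeat(self, timestamp: float, tag: bytes) -> None:
        expected = hmac.new(HEARTBEAT_KEY, str(timestamp).encode(),
                            hashlib.sha256).digest()
        if hmac.compare_digest(tag, expected) and timestamp > self.last_heartbeat:
            self.last_heartbeat = timestamp

    def maybe_publish_share(self, now: float):
        # Only if the owner has gone silent is the share revealed;
        # k+1 published shares let anyone recombine the secret.
        if now - self.last_heartbeat > GRACE_PERIOD:
            return self.share
        return None

holder = ShareHolder(share=b"\x01" * 32)
now = time.time()
tag = hmac.new(HEARTBEAT_KEY, str(now).encode(), hashlib.sha256).digest()
holder.receive_heartbeat(now, tag)
assert holder.maybe_publish_share(now) is None   # owner still alive
```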

At best, the role of "proof of work" systems here is in combating sybil
attacks, which is only relevant if you want to remove the requirement that I
know the people I am issuing shares to. If that is truly advantageous, the
system might look like this: first, I broadcast a public key for some non-
malleable encryption scheme. Each party willing to participate will then use
that key to encrypt a randomly generated ID string that they keep secret. Once
I have received the IDs, I broadcast a random string, and each party will use
their chosen ID and the random string as a "starting point" in a proof-of-work
scheme. The output of the proof of work is then used as the seed to generate a key for a symmetric cipher (using an appropriate key derivation function).
The parties encrypt the proof-of-work outputs and send the ciphertext to me; I
check the proofs and generate the keys locally. Then I encrypt each party's
share using the party's symmetric key and send the encrypted share. Then I
proceed as before, sending a periodic message.

I suspect, though, that such a construction is overkill; also I have not
really evaluated the security of it.

"I think big computational networks with incentives unlock some interesting
doors"

Maybe so, but right now I see a solution in search of a problem.

"If you can assume that the computational majority"

Why should I need to assume _anything_ about the computational resources of the participants? We can have threshold secret sharing with unconditional security, and we only need to trust _one_ of the parties for the switch to be secure, _regardless_ of the computing power of the rest of the parties.

~~~
arielweisberg
"At best, the role of "proof of work" systems here is in combating sybil
attacks, which is only relevant if you want to remove the requirement that I
know the people I am issuing shares to."

That seems pretty fundamental to making the mechanism accessible. If we are talking about switches as a service, and there is a "fixed" pool of switches and an exploit is found that allows you to compromise each switch component, you are out of luck, because you didn't actually make materializing the secret difficult.

By requiring actual work to be done and allowing the difficulty of the work to
be tuned based on the capacity of the network you make an adversary go up
against the math instead of against the people.

~~~
betterunix
"If are talking about switches as a service if there is a "fixed" pool of
switches and an exploit is found that allows you to compromise each switch
component you are out of luck because you didn't actually make materializing
the secret difficult."

If an exploit is found that allows you to compromise each component, then the
adversary can just have the components ignore your messages and open your
secret. It makes no difference how the system is structured at that point.

"By requiring actual work to be done and allowing the difficulty of the work
to be tuned based on the capacity of the network you make an adversary go up
against the math instead of against the people."

By using a threshold secret sharing scheme, you ensure that the adversary cannot get the secret regardless of their own computing resources. You
also avoid wasting electricity for the sake of your switch. You also have the
advantage of having a well-defined security model that can actually be
analyzed formally.

The only reason you would ever want to burn through some CPU cycles is to
thwart sybil attacks. Unlike Bitcoin, you do not need to keep doing proofs of
work after that, because once the shares are distributed, there is nothing
more to do. If the adversary increases his computing power after that, he
gains nothing by it, because he will not be given any more shares. Hence the
suggestion in my previous post: have the proof of work be coupled to the
generation of a public key, and just have the public keys be generated when
someone needs to set up a switch.

------
emiliobumachar
Prepare ordered sterile Petri dishes with a nutrient solution. Expose some to bacteria, forming bacterial cultures, but not others. Ones and zeroes.

At first, they will be almost impossible to distinguish (you cannot hurry your culture tests much even if you're willing to spend a lot, right?). After a while, the cultures become obvious.

The timing is heavily dependent on the bacteria's life cycle, but I guess if you strictly control the number of individuals initially put into the dishes, and keep temperature and lighting optimal, you can predict the time-to-observability to within around 40%?

------
Cyranix
Not directly related to this article, but gwern.net seems to be coming up on
the front page quite often. Is there an RSS feed for the site? I was unable to
find one.

~~~
malcolmmcc
Gwern posts articles, but not in blog format. If you look on the sidebar for
this page, for example, you'll see it was originally published in 2011, and
just recently modified.

This is the closest thing to what you're looking for:
[http://www.gwern.net/Changelog](http://www.gwern.net/Changelog)

~~~
Cyranix
Ha, close enough I suppose. Thank you!

------
3pt14159
In reality the easiest solution for this would be to build yourself a time-release cellphone or something. A simple time-release power switch flips, then the phone tweets the key to a strong encryption scheme. Build the phone into the walls of a cafe or something, maybe with a bit of redundancy (multiple cafes) and photosensors to detect discovery and disable/destruct the phone.

If you split the key redundantly between the phones, you should be fine.

~~~
simbolit
And now you don't have to trust the math, but all kinds of things in the
physical world. A photo sensor for example is disabled by darkness. Digging up
the phone by night, anyone? Bad idea, IMHO.

~~~
jlgreco
IR LED for illumination, use the camera to detect movement where there should
be none.

It would burn a ton of power though.

------
chrislipa
Maybe there's still a way to use homomorphic encryption. gwern rightly
suggests that it causes a big problem if the recipient must decrypt the result
of an encrypted computation. However, what if the decrypted result of the
computation is never known to the recipient and instead the recipient must use
the still-encrypted result of the encrypted homomorphic computation?

It would work like this:

The secret sharer creates some random string called 'a' and some computable
function 'f'. The secret sharer also creates an encryption function 'e', and a
homomorphic equivalent to 'f', called 'F' so that the following commutes:
e(f(x)) = F(e(x)). F acts on encrypted data and gives encrypted results, but
is much slower than f.

The secret sharer can comparatively quickly compute e(f(a)), which he or she uses as a key to encrypt a message. However, the recipient is only given the values e(a) and F, and must use exponentially more computational time to go the more laborious route, computing F(e(a)).
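
As a toy illustration of just the commuting property e(f(a)) = F(e(a)) (it deliberately ignores the fast-for-the-sharer, slow-for-the-recipient asymmetry described above), textbook RSA is multiplicatively homomorphic, so squaring commutes with encryption:

```
# Toy parameters only; real RSA needs large primes and padding.
p, q = 61, 53
n = p * q                      # 3233
exp = 17                       # public exponent

def enc(x: int) -> int:        # e(x) = x^exp mod n
    return pow(x, exp, n)

def f(x: int) -> int:          # the plaintext computation
    return (x * x) % n

def F(c: int) -> int:          # the same computation done on ciphertexts
    return (c * c) % n

a = 42                         # the random value 'a' from the comment
assert enc(f(a)) == F(enc(a))  # e(f(a)) == F(e(a))
```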

~~~
dsl
That is effectively a proof of work scheme, which is what most time lock
systems are based on. The fundamental problem is that people with more
resources can decrypt faster than people with less resources, and you're
generally wanting to share with someone who has less not more.

~~~
chrislipa
You make a good point, but that's also a shortcoming of all of the purely
computational techniques. I think the most you can hope for is 'democratizing'
the computational methods, i.e. making them non-parallelizable.

Taking gwern's idea, what if the homomorphic computation is just incrementing
your input repeatedly inside of a loop? It seems like that might be hard to
parallelize. Or at least, I don't know enough to assume otherwise.

------
currymesurprise
gwern, if you're reading, this section is misleading if not wrong ...:

"But that doesn’t seem very true any more. Devices can differ dramatically now
even in the same computers; to take the example of Bitcoin mining, my laptop’s
CPU can search for hashes at 4k/sec, or its GPU can search at 54m/second."

This is an example of parallelism and parallelism only.

~~~
gwern
Are you implying that GPUs execute each hash as slowly as a CPU and are better
at hashing simply because they have more processing elements? I knew GPUs had
a lot of small cores, but I was unaware that mine had 54000000 / 4000 = 13500
cores.

~~~
currymesurprise
More or less, yes, that is my implication. Luckily, my sibling comment
provides some extra information.

For the example of SHA-1 computation, you mention using FPGAs that finish in
400 clock cycles, which is at most an order of magnitude away from a naive CPU
implementation of around 4000 clock cycles. I'm not as familiar with SHA-256.

------
logicallee
Under the third section "hashing" it says:

>For example, one could take a hash like bcrypt, give it a random input, and
hash it for a month. Each hash depends on the previous hash, and there’s no
way to skip from the first hash to the trillionth hash. After a month, you use
the final hash as the encryption key, and then release the encrypted file and
the random input to all the world. The first person who wants to decrypt the
file has no choice but to redo the trillion hashes in order to get the same
encryption key you used.

Then it lists this downside:

>"This is pretty clever. If one has a thousand CPUs handy, one can store up 3
years’ of computation-resistance in just a day. This satisfies a number of
needs. But what about people who only have a normal computer? Fundamentally,
this repeated hashing requires you to put in as much computation as you want
your public to expend reproducing the computation, which is not enough. We
want to force the public to expend more computation - potentially much more -
than we put in. How can we do this?

>It’s hard to see. At least, I haven’t thought of anything clever"

I have! Rather than hashing n as your seed, after picking n (which you will still release) hash (n+m) instead, where m is a random number in the range (for example) 0-100. Discard m; do not retain it after you've started the hashes. Release only n. Now they still have to start at n; when they find that m wasn't 0, they have to start all over again, hashing n+1 a trillion times; when they find that's not a good key either, they have to try hashing n+2 a trillion times, and so on, until they've brute-forced n+m as the correct initial seed for the hash process. I.e., you make them brute-force what m must have been.
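
A tiny sketch of this variant (the chain length and the range of m are demo-sized placeholders, and key_checker stands in for "does the file decrypt?"):

```
import hashlib
import os
import random

CHAIN_LEN = 50_000           # "a trillion" in the real scheme
M_RANGE = 100                # m is drawn from [0, M_RANGE)

def chain(seed: int) -> bytes:
    v = seed.to_bytes(16, "big")
    for _ in range(CHAIN_LEN):
        v = hashlib.sha256(v).digest()
    return v

# Creator: does the chain once, from the single secret offset m.
n = int.from_bytes(os.urandom(8), "big")
m = random.randrange(M_RANGE)        # discarded after use
key = chain(n + m)                   # encrypt the file with this; publish only n

# Unlocker: must redo the whole chain for each candidate offset.
def unlock(n: int, key_checker) -> bytes:
    for candidate in range(M_RANGE):
        k = chain(n + candidate)
        if key_checker(k):
            return k
    raise ValueError("no offset worked")

assert unlock(n, lambda k: k == key) == key
```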

If m can be up to ten, someone would have to repeat your process up to ten times (an average of five times) before they found the seed.

Likewise, if m can be large, like 1,000,000, the force multiplier is 1,000,000x as much work as you had to do in the worst case, and 500,000x as much work on average.

I've emailed the author this suggestion.

~~~
josephg
This has the same drawbacks as using a small random key, which the article
discusses later on:

- They might find the key earlier or later in their search (it adds volatility to the amount of time required).

- Searching for the right value of m is highly parallelizable.

~~~
logicallee
Interesting counterargument. But you can still use the "trillion hashes" as a limit on how parallelizable it is, and then use m to increase the average amount of work done, which, however, can be done in parallel. You are right that this increases unpredictability. You can trade off the figure of a "trillion" hashes against the value of m, to strike a balance between how much work you have to do to compute it and how predictable and non-parallelizable the work will be (by increasing m).

E.g. you can work for an hour and increase it by a factor of 10,000, but the ten-thousandfold extra work will be parallelizable and somewhat unpredictable; or you can work for 1000 hours (41 days) and increase the work by a factor of just 100 in the worst case, but that increase will be parallelizable and the unlocker might get lucky.

So you can really balance how much work you're doing against the level of parallelizability/predictability of the reverse.

------
geedy
Correct me if I am wrong, but this assumes that processor speed will not
increase dramatically over the lock period, no?

Seems to me that if one was to be storing information over an extended period
of time, say 20 years, as time goes on, it becomes more and more likely that
the encryption can still be broken sooner than desired.

~~~
DanBC
Rivest's puzzle (1999) [http://people.csail.mit.edu/rivest/lcs35-puzzle-description.txt](http://people.csail.mit.edu/rivest/lcs35-puzzle-description.txt) assumes that we're all using 10 GHz processors in 2012.

> _The value of t was chosen to take into consideration the growth in
> computational power due to "Moore's Law". Based on the SEMATECH National
> Technology Roadmap for Semiconductors (1997 edition), we can expect internal
> chip speeds to increase by a factor of approximately 13 overall up to 2012,
> when the clock rates reach about 10GHz. After that improvements seem more
> difficult, but we estimate that another factor of five might be achievable
> by 2034. Thus, the overall rate of computation should go through
> approximately six doublings by 2034._

I asked about progress on this puzzle at Stack Exchange here [http://crypto.stackexchange.com/questions/5831/what-is-the-progress-on-the-mit-lcs35-time-capsule-crypto-puzzle](http://crypto.stackexchange.com/questions/5831/what-is-the-progress-on-the-mit-lcs35-time-capsule-crypto-puzzle) and got some nice answers.

~~~
simbolit
Disclaimer: I have no idea what factors are relevant for cryptographic
functions.

In 1999 they had the Coppermine 1133 single-core @ 1.1 GHz (~2 GFLOPS); in 2012 we had the Sandy Bridge 3970 quad-core @ 3.4 GHz (>100 GFLOPS). So at least according to one measure, the factor of increase is more like 50 than 13.

------
pacofvf
Maybe it's just because it's Monday, but besides the Julian Assange example and the time capsule, I can't think of another use case for a time-lock crypto puzzle. Anyone?

~~~
ColinDabritz
Court proceedings that would be sealed for 100 years. Secret material, say
military records, that need to be secure in the present, but are important
historically at some point. You could provide a declassification schedule this
way.

You could do a delayed form of historical whistleblowing or confession, so
that it doesn't cause you problems today, but history can know what really
happened.

Perhaps providing the equivalent of a 'sealed envelope' to prove that
something was known or happened on a given date, without having to be present
or active to prove it.

It's fun to think of as a puzzle, but presuming it truly delivered on the safety promise, I can see quite a few uses. The real schemes, though, depend on a lot of varying factors, so in cases where the secrecy was critical, you can see why people wouldn't use it.

------
atlbeer
How secure is
[http://en.wikipedia.org/wiki/Shamir's_Secret_Sharing](http://en.wikipedia.org/wiki/Shamir's_Secret_Sharing)
?

~~~
chrislipa
It's a really beautiful, easy-to-understand, ingenious algorithm, and it perfectly meets its design goals. It works very analogously to the following:

Pretend your secret is an integer. You want to distribute clues as to your
secret integer to N of your friends such that any K of them can collude to
figure it out, but K-1 of them can't figure out anything about it at all.

So you construct a (K-1)-degree polynomial of one variable, f(x). All of the coefficients of the terms of f(x) are random, except that you choose the y-intercept (i.e. the constant coefficient) to be your secret. Then calculate and distribute the numbers f(1), f(2), ..., f(N) to your N friends.

K points on the 2D co-ordinate plane uniquely identify your single
(K-1)-degree polynomial, which will have your specific secret y-intercept.
However (K-1) points will pick out an entire family of potential (K-1)-degree
polynomials. And, in fact, for every single possible secret you could have
chosen, there's a (K-1)-degree polynomial that goes through (K-1) points and
the possible secret value. So, (K-1) of your friends colluding really don't
have any additional information at all about your secret.

Shamir's secret sharing works just like that, but done with integers modulo a
prime. (And the prime has to be larger than your secret.)
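
A compact sketch of exactly that construction over a prime field (toy parameters, not production code):

```
import random

P = 2**61 - 1                  # prime modulus; must exceed the secret

def make_shares(secret: int, k: int, n: int):
    """Split `secret` into n shares; any k of them recover it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):                  # degree-(k-1) polynomial with f(0) = secret
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 reads off the y-intercept."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(secret=123456789, k=3, n=5)
assert recover(shares[:3]) == 123456789              # any 3 shares suffice
assert recover(random.sample(shares, 3)) == 123456789
assert recover(shares[:2]) != 123456789              # 2 shares interpolate to garbage (w.h.p.)
```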

------
dangayle
This sounds like the ideal opportunity to have a map tattooed to our scalps
like old fashioned pirates. Or to the back of some kid (like on Waterworld).

------
earlz
Possible variant on this scheme to make it harder to estimate the amount of
time required: Don't use a fixed number of hash iterations. Instead, use a
bitcoin-ish scheme like: "the key to this file is given by hashing 'xxxx'
until the hash's bottom 8 bits are 0"

~~~
simias
The problem here is that the amount of work needed to generate the key is not
known beforehand either.

I think a better approach (that would also not leak bits of the key) would be
to say "hash until the hash of the hash is xxx". So basically you hash until
you get xxx and then take the previous iteration's result as a key.
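
A quick sketch of that stopping rule (the difficulty is demo-sized, and the published "xxx" here is just the stop value):

```
import hashlib
import os

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

# Creator: hash until the current hash meets a Bitcoin-ish condition, then
# use the previous value as the key and publish the current hash as "xxx".
def create(seed: bytes, zero_bits: int = 20):
    prev, cur = seed, H(seed)
    while int.from_bytes(cur, "big") % (1 << zero_bits) != 0:
        prev, cur = cur, H(cur)
    return prev, cur          # prev = secret key, cur = published stop value

# Unlocker: rehash from the seed until the chain hits the published value;
# the previous iteration's result is the key, so no key bits are leaked.
def unlock(seed: bytes, stop: bytes) -> bytes:
    prev, cur = seed, H(seed)
    while cur != stop:
        prev, cur = cur, H(cur)
    return prev

seed = os.urandom(32)
key, stop = create(seed)
assert unlock(seed, stop) == key
```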

~~~
zeckalpha
Given a sequence of N hashes, you could select any hash A as the initial hash,
any hash B as the key, and the successor of B (C) as the key to the key.

After choosing B and C, choose an A whose distance to B and C is proportional
to the amount of work you wish to require.

~~~
simias
Your "given a sequence of N hashes" is a bit of a deus ex machina here :)

~~~
zeckalpha
Create it using 50% of your lifetime. Or create one for your children.

------
romaniv
1. Generate a random number.
2. s- or bcrypt it.
3. Encrypt the data with the result of step 2.
4. Give people most of the random number, minus N bits at the end. (And tell them how many bits they are missing.)

They will have to run s/bcrypt 2^N / 2 times on average to get your data. Am I missing anything?
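
A sketch of that scheme (scrypt stands in for "s- or bcrypt"; the withheld-bit count and scrypt parameters are demo-sized assumptions):

```
import hashlib
import os

N_HIDDEN_BITS = 8              # bits withheld from the published number
SCRYPT_PARAMS = dict(salt=b"fixed-public-salt", n=2**14, r=8, p=1, dklen=32)

def slow_kdf(number: int) -> bytes:
    return hashlib.scrypt(number.to_bytes(16, "big"), **SCRYPT_PARAMS)

# Creator: derive the key from the full random number, publish it truncated.
full = int.from_bytes(os.urandom(16), "big")
key = slow_kdf(full)                       # encrypt the data with this
published = full >> N_HIDDEN_BITS          # the low N bits are withheld

# Unlocker: brute-force the missing bits, one slow KDF call per guess.
def unlock(published: int, key_checker) -> bytes:
    for guess in range(1 << N_HIDDEN_BITS):
        k = slow_kdf((published << N_HIDDEN_BITS) | guess)
        if key_checker(k):                 # e.g. "does the file decrypt?"
            return k
    raise ValueError("not found")

assert unlock(published, lambda k: k == key) == key
```

As the reply below notes, nothing here prevents splitting the guess range across many machines.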

~~~
karl_gluck
The issue is that you can parallelize the "unlocking" operation. So, if one
person is attempting to unlock, they solve in 10 years. If two people attempt,
it's solved in 5 years. If the NSA puts all their computers on it, it takes 1
second.

The author is proposing methods that are forcibly sequential.

------
ye
Schedule a google calendar event in the future.

Set up an email alert so it sends the key to an email.

This assumes google isn't going to discontinue its calendar service.

~~~
r00fus
You're also trusting Google here, as your key must be accessible by google
calendar's event in order to send.

~~~
ye
So? To google it looks like a random character sequence. They don't know what
it is. It can even be encrypted, if you're that paranoid.

