1) The safest solution, albeit somewhat expensive: launch a Voyager-like probe. It constantly generates private and public key pairs, broadcasting the public keys immediately and the private ones on a schedule. That way, you can pick a known timeframe for decryption. Launching more than one probe, for redundancy, is probably a good idea: you can encrypt with multiple keys.
2) Cheaper but less reliable solutions involve balloon-dropping transmitters that wake up and divulge decryption keys across a huge geographic area where it would be infeasible to find them. A variation is sinking modules very deep in the ocean that wake up and float to the surface.
In fact the problem is not about time per se, but about processing time, and it's hard for two reasons: the math has to be so incredibly good and error-free that going the physics route is cheaper, and by "cheaper" I mean the only viable option (there is nothing more expensive than a thing that does not exist).
Now, the assumptions related to physics: which part do you take as "hardcoded"? (gwern settled on cache speed; I see the problem as one of specialized hardware, since AMD builds CPUs to a spec, which brings us back to physics.) The candidates are the speed of light, the amount of energy it takes to perform a job, the resistance of circuits near absolute zero and therefore the energy required, and, assuming non-parallelizable tasks and hence a scalar CPU with a minimal circuit scheme, the amount of time required under perfect conditions.
This works until Reality(tm) comes into play:
-how much time will it take humanity to reach those perfect conditions?
-which conditions or values do you consider appropriate for measuring time (for releasing the information once the conditions are met)?
What if you just hide (for example, by burying them in the sand of the Sahara) 100 devices, of which 10 are the originals and the rest are backups of the originals, and you need ten of them to decrypt? (Scale by 10^x for your paranoia level.)
Your bigger issue is component failure.
So launch enough of them to deal with component failure, wait long enough for them to be out of harm's way, and then you have a time lock.
Sure it's expensive, but time locks with a known time to unlock or even a decent estimate are not really possible algorithmically.
One could improve the hash-chain scheme by adding checkpoints along the way. As the author mentioned, it would suck to be 2 years into mining a 35-year computation, only to make a mistake that you can't detect until the very end.
To add checkpoints, one could release both the original seed of the chain A, and a number of pairs of hashes (x0,y0) (x1,y1) ...
Let's say you wanted to do 1-month chains. Hash the seed A for a week, then take the current value x0 such that H(B)=x0. You know the value of B, since you've been computing the chain. Pick another random value y0, and continue the chain with H(B^y0). Write (x0,y0) in the output, and hash for another week. Do the same for (x1,y1) (x2,y2) and (x3,y3). Each chain then has a seed value and 4 pairs of 'checkpoints'.
When unlocking the crypto puzzle, these checkpoints can't be used to jump ahead in the computation, but they can tell you that you're on the right track.
I think that you could even use a secondary hash chain for the y_n values, so y_n+1=H(y_n). If you also derived y0 from A (e.g. y0=H(A^const) ), you would just need to publish the seed value A and each checkpoint hash x_n in order to have a fully checkpointed crypto puzzle.
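A minimal sketch of this checkpointed chain, using SHA-256 as a stand-in for the slow hash and XOR for the '^' operation (the salt constant and iteration counts here are my own choices, not from the comment):

```python
import hashlib

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def build_chain(seed: bytes, segments: int, iters_per_segment: int):
    """Build a hash chain with verifiable checkpoints.
    Returns the final key and the list of checkpoint hashes x_n."""
    checkpoints = []
    y = H(seed + b"checkpoint-salt")    # y0 derived from the seed A
    cur = seed
    for _ in range(segments):
        for _ in range(iters_per_segment):
            cur = H(cur)
        checkpoints.append(H(cur))      # x_n = H(B): proves you reached B
        cur = H(xor(cur, y))            # continue the chain with H(B ^ y_n)
        y = H(y)                        # y_{n+1} = H(y_n)
    return cur, checkpoints

def solve_chain(seed: bytes, segments: int, iters_per_segment: int, checkpoints):
    """Redo the work, verifying each checkpoint along the way."""
    y = H(seed + b"checkpoint-salt")
    cur = seed
    for n in range(segments):
        for _ in range(iters_per_segment):
            cur = H(cur)
        assert H(cur) == checkpoints[n], f"went off the rails before checkpoint {n}"
        cur = H(xor(cur, y))
        y = H(y)
    return cur
```

As described, the checkpoints let the solver detect an error after at most one segment's worth of wasted work, but give no shortcut past it.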
Which, given that it's cryptography we're talking about, probably just means that it's completely insecure. :)
Or maybe some sort of superconductor/magnetic state device.
An active repeater would give you 1/r^2, though. For example, you could ping Voyager I for a 36-hour delay. However, that's likely not feasible at light-year distances, let alone the problem of getting the devices there. :)
Why can't this be parallelized? Sure you have to work on one key at a time because they are sequential, but 1000 machines can be cracking that one key. Every key cracked would be distributed and the cluster would start in on the next.
It's a little more complicated, but I'm not seeing how it's really any less parallelizable.
For example, are you sure that your cipher doesn't have the property that for any text.Encrypt(key1).Encrypt(key2) there's an equivalent text.Encrypt(key3)? Some ciphers have that property (e.g. the one time pad). That would reduce your security from 10^N to... 10.
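The degenerate case is easy to demonstrate with an XOR stream (the one-time-pad construction): two sequential encryptions collapse into a single equivalent key. A toy sketch:

```python
def xor_encrypt(msg: bytes, key: bytes) -> bytes:
    """One-time-pad-style encryption: XOR each message byte with a key byte."""
    return bytes(m ^ k for m, k in zip(msg, key))

msg = b"attack at dawn"
k1 = bytes(range(14))
k2 = bytes(range(100, 114))

double = xor_encrypt(xor_encrypt(msg, k1), k2)  # encrypt twice, sequentially
k3 = xor_encrypt(k1, k2)                        # k3 = k1 XOR k2
single = xor_encrypt(msg, k3)                   # one encryption with k3
assert double == single  # the two-key chain is equivalent to one key
```

An attacker who knows this property never cracks the keys one at a time; they search directly for the single equivalent key, so the sequential layers add nothing.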
Some sort of computational network that will always make progress towards decrypting the data unless the soon to be dead man injects something using his private key that sets the network back preventing completion?
The network can't identify that the soon to be dead man is preventing progress?
Sounds like a fun research project. Maybe tie it to some coin mining network.
Sure they could avoid the dead man injecting something, but then they would get hit with the full workload. If you coupled this with an economic incentive like bitcoin mining you could get the miners to allow you to drastically increase the amount of work required and make it keep up with the state of the art in technology.
I couldn't say whether you can string enough primitives together to build something like this but it would be really cool if you could. Maybe I should go file a patent or something :-P
If the goal of the network is to release my secret when I fail to participate then there must be a way for the network to operate without my participation. What stops the parties from ignoring the messages I send, thus recovering my secret by simply pretending I died?
"What stops the parties from ignoring the messages I send, thus recovering my secret by simply pretending I died?"
They would have to control enough of the network and know who you are and how you are interfacing with the network to stop you. If the network has an incentive like bitcoin mining this could be infeasible for many adversaries.
Tampering is a problem with real dead man's switches as well such as a script that you have to ping or an associate.
I think big computational networks with incentives unlock some interesting doors. If you can assume that the computational majority of the network is playing ball you can ask it to do some interesting things if you have the right private key.
OK, but if the assumption is that out of the n parties in the network no more than k parties are malicious, why not just use a k+1 out of n secret sharing scheme? You broadcast a signed message once per month, and if the message does not arrive for some number of months the parties all broadcast their shares and recover the secret.
At best, the role of "proof of work" systems here is in combating Sybil attacks, which is only relevant if you want to remove the requirement that I know the people I am issuing shares to. If that is truly advantageous, the system might look like this: first, I broadcast a public key for some non-malleable encryption scheme. Each party willing to participate then uses that key to encrypt a randomly generated ID string that they keep secret. Once I have received the IDs, I broadcast a random string, and each party uses their chosen ID and the random string as a "starting point" in a proof-of-work scheme. The output of the proof of work is then used as the seed to generate a key for a symmetric cipher (using an appropriate key derivation function). The parties encrypt the proof-of-work outputs and send the ciphertext to me; I check the proofs and generate the keys locally. Then I encrypt each party's share using that party's symmetric key and send the encrypted share. Then I proceed as before, sending a periodic message.
I suspect, though, that such a construction is overkill; also I have not really evaluated the security of it.
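The proof-of-work-to-key step of that sketch might look roughly like this (the difficulty setting, salt, and function names are my own assumptions, not part of the proposal):

```python
import hashlib

DIFFICULTY = 2  # leading zero bytes required; a toy setting

def proof_of_work(party_id: bytes, challenge: bytes):
    """Grind nonces until H(id || challenge || nonce) meets the difficulty target."""
    nonce = 0
    while True:
        digest = hashlib.sha256(
            party_id + challenge + nonce.to_bytes(8, "big")
        ).digest()
        if digest[:DIFFICULTY] == b"\x00" * DIFFICULTY:
            return nonce, digest
        nonce += 1

def derive_share_key(pow_output: bytes) -> bytes:
    """Feed the proof-of-work output into a KDF to get the symmetric key."""
    return hashlib.pbkdf2_hmac("sha256", pow_output, b"share-key-salt", 10_000)
```

The dealer can verify the proof cheaply (one hash) and derive the same key locally, while a Sybil attacker must pay the grinding cost once per fake identity.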
"I think big computational networks with incentives unlock some interesting doors"
Maybe so, but right now I see a solution in search of a problem.
"If you can assume that the computational majority"
Why should I need to assume anything about the computational resources about the participants? We can have threshold secret sharing with unconditional security, and we only need to trust one of the parties for the switch to be secure regardless of the computing power of the rest of the parties.
That seems pretty fundamental to making the mechanism accessible. If we're talking about switches as a service, and there is a "fixed" pool of switches, and an exploit is found that allows you to compromise each switch component, you are out of luck, because you didn't actually make materializing the secret difficult.
By requiring actual work to be done and allowing the difficulty of the work to be tuned based on the capacity of the network you make an adversary go up against the math instead of against the people.
If an exploit is found that allows you to compromise each component, then the adversary can just have the components ignore your messages and open your secret. It makes no difference how the system is structured at that point.
"By requiring actual work to be done and allowing the difficulty of the work to be tuned based on the capacity of the network you make an adversary go up against the math instead of against the people."
By using a threshold secret sharing scheme, you ensure that the adversary cannot get the secret regardless of their own computing resources. You also avoid wasting electricity for the sake of your switch. You also have the advantage of a well-defined security model that can actually be analyzed formally.
The only reason you would ever want to burn through some CPU cycles is to thwart sybil attacks. Unlike Bitcoin, you do not need to keep doing proofs of work after that, because once the shares are distributed, there is nothing more to do. If the adversary increases his computing power after that, he gains nothing by it, because he will not be given any more shares. Hence the suggestion in my previous post: have the proof of work be coupled to the generation of a public key, and just have the public keys be generated when someone needs to set up a switch.
Even if that were not the case, your scheme relies on your identity being a secret, which essentially is "security through obscurity".
We have to assume that a dedicated attacker knows no greater incentive than breaking the encryption. Anything else is foolish, IMHO.
Having the attacker be better off alone is the point.
First of all, you're assuming that the three-letter-agency server farm is your adversary. This approach has utility even if some adversaries (which don't happen to be yours) might have enough compute power. The approach has utility if it can ratchet up the difficulty enough that the adversary you care about can't trigger early release. It also means you don't have to maintain any infrastructure that is traceable to you. If an attacker can find and disable the switch before it triggers, what good does it do you?
Another advantage of this approach is that you can ratchet up the difficulty to the point that the three-letter server farm can't complete the task in a reasonable amount of time. I suppose there is a hard upper bound in the sense that the network is only going to be a few orders of magnitude more powerful than that server farm.
Another assumption is that an adversary finding out the secret and releasing it is the worst case. It really depends on the adversary, right? One thing you are trying to influence is public distribution, and the people with the incentive to apply the compute power may not be the people with an incentive to act on the information by distributing it publicly. The secret is just going to be an encryption key for decrypting a payload you have put out through other means. It's not black and white whether the utility is zero.
In summation, it's a dead man's switch, and the important part is that it fires if you don't do your thing. There are a lot of other desirable attributes, but real world dead man's switches share many of the same flaws.
"Even if that were not the case, your scheme relies on your identity being a secret, which essentially is "security through obscurity"
I don't actually follow why anonymity is necessary. Dead man's switches usually exist because you fear you are not, or may not remain, anonymous. Anonymity is of course a desirable attribute for a lot of reasons, but if it is blown, this approach would still work. Only the person possessing the secret key can generate the poison pills that delay the release of the secret.
Tor is an example of ways to attempt anonymity while participating in a network. I view that as a separate problem space.
I think the real competitor would be a switch that actively monitors a piece of state that has plausible deniability, but that runs afoul of the whole running infrastructure traceable to you thing.
And I'm surprised no one sees a use-case in recent events: if I were DPR, I'd be happy if usable time-lock crypto existed so I could, say, lock a wallet of 100k btc for 10 years, and give it to a friend as a backup; I don't have to worry about the friend deciding that they'd like to retire to Patagonia, but if I'm 'shot resisting arrest', at least I left my friend a fortune. Or, even if he hands a time-locked wallet over to the FBI, he's delayed them spending it for 10 years and given the Bitcoin economy that much more time to grow and be able to absorb a sudden infusion of 1% of the money supply.
As it is, he's basically in an all-or-nothing position.
Every transaction can have a lock time associated with it. This allows the transaction to be pending and replaceable until an agreed-upon future time, specified either as a block index or as a timestamp (the same field is used for both, but values less than 500 million are interpreted as a block index). If a transaction's lock time has been reached, we say it is final.
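The interpretation rule described there reads roughly like this in code (the helper name is my own; the 500,000,000 threshold and the block-index/timestamp split are the protocol's):

```python
LOCKTIME_THRESHOLD = 500_000_000  # below this, lock_time is a block index

def describe_lock_time(lock_time: int) -> str:
    """Explain how a Bitcoin transaction's lock_time field is interpreted."""
    if lock_time == 0:
        return "final immediately"
    if lock_time < LOCKTIME_THRESHOLD:
        return f"final at block height {lock_time}"
    return f"final at Unix timestamp {lock_time}"
```

So a wallet locked this way needs no trusted third party at all: the network itself refuses to confirm the spend before the chosen height or time.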
At first, they will be almost impossible to distinguish (you cannot hurry a medical culture test much, even if you're willing to spend a lot, right?). After a while, the cultures become obvious.
The timing is heavily dependent on the bacteria's life cycle, but I guess if you strictly control the number of individuals initially put into the dishes, and keep temperature and lighting optimal, you can predict the time-to-observability to within around 40%?
This is the closest thing to what you're looking for:
If you split the key redundantly between the phones, you should be fine.
It would burn a ton of power though.
It would work like this:
The secret sharer creates some random string called 'a' and some computable function 'f'. The secret sharer also creates an encryption function 'e', and a homomorphic equivalent to 'f', called 'F' so that the following commutes: e(f(x)) = F(e(x)). F acts on encrypted data and gives encrypted results, but is much slower than f.
The secret sharer can comparatively quickly compute e(f(x)), which he or she uses as a key to encrypt a message. However the recipient is only given the values e(x) and F and must use exponentially more computational time to go the more laborious route, computing F(e(x)).
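The best-known concrete construction with this fast-path/slow-path shape is the Rivest-Shamir-Wagner repeated-squaring time-lock puzzle (the scheme behind the MIT LCS35 puzzle quoted further down the thread): the setter, knowing the factorization of n, reduces the exponent mod phi(n), while the solver must do t inherently sequential squarings. A toy sketch with insecurely small primes:

```python
# Toy Rivest-Shamir-Wagner time-lock puzzle. The primes here are far
# too small to be secure; real puzzles use a 2048-bit (or larger) modulus.
p, q = 104729, 1299709   # both prime; kept secret by the setter
n = p * q
phi = (p - 1) * (q - 1)
t = 100_000              # number of sequential squarings demanded of the solver

# Fast path (setter only): reduce the exponent 2^t modulo phi(n).
e = pow(2, t, phi)
key_fast = pow(2, e, n)

# Slow path (solver): t modular squarings, one after another.
x = 2
for _ in range(t):
    x = (x * x) % n      # x_{i+1} = x_i^2 mod n, so x_t = 2^(2^t) mod n

assert x == key_fast     # both paths reach the same key
```

The asymmetry is exactly the one described: the setter's work is a couple of modular exponentiations, while the solver's t squarings cannot be parallelized because each depends on the previous result.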
Taking gwern's idea, what if the homomorphic computation is just incrementing your input repeatedly inside of a loop? It seems like that might be hard to parallelize. Or at least, I don't know enough to assume otherwise.
"But that doesn’t seem very true any more. Devices can differ dramatically now even in the same computers; to take the example of Bitcoin mining, my laptop’s CPU can search for hashes at 4k/sec, or its GPU can search at 54m/second."
This is an example of parallelism and parallelism only.
For the example of SHA-1 computation, you mention using FPGAs that finish in 400 clock cycles, which is at most an order of magnitude away from a naive CPU implementation of around 4000 clock cycles. I'm not as familiar with SHA-256.
>For example, one could take a hash like bcrypt, give it a random input, and hash it for a month. Each hash depends on the previous hash, and there’s no way to skip from the first hash to the trillionth hash. After a month, you use the final hash as the encryption key, and then release the encrypted file and the random input to all the world. The first person who wants to decrypt the file has no choice but to redo the trillion hashes in order to get the same encryption key you used.
Then it lists this downside:
>"This is pretty clever. If one has a thousand CPUs handy, one can store up 3 years’ of computation-resistance in just a day. This satisfies a number of needs. But what about people who only have a normal computer? Fundamentally, this repeated hashing requires you to put in as much computation as you want your public to expend reproducing the computation, which is not enough. We want to force the public to expend more computation - potentially much more - than we put in. How can we do this?
>It’s hard to see. At least, I haven’t thought of anything clever"
I have! Rather than hashing n as your seed, after choosing n (which you will still release), hash (n+m) instead, where m is a random number in the range (for example) 0-100. Discard m; do not retain it after you've started the hashes. Release only n. Now the unlocker still has to start at n, and when they find that m wasn't 0, they have to try all over again, hashing n+1 a trillion times, then, when they find that's not a good key either, hashing n+2 a trillion times, and so on until they've brute-forced n+m as the correct initial seed for the hash process. That is, you make them brute-force what m must have been.
If m's range is ten, someone would have to repeat your process up to ten times (five on average) before they found the seed.
Likewise, if the range is large, say 1,000,000, the force multiplier is 1,000,000x as much work as you had to do in the worst case, and 500,000x on average.
I've emailed the author this suggestion.
- They might find the key earlier or later in their search (it adds volatility to the amount of time required).
- Searching for the right value of m is highly parallelizable.
e.g. you can work for an hour and increase it by a factor of 10,000, but the ten-thousandfold work will be parallelizable and slightly unpredictable; or you can work for 1000 hours (41 days) and increase the work by a factor of just 100 in the worst case - but the increase will be parallelizable and the unlocker might get lucky.
so you can really balance how much work you're doing with the level of parallelizability/predictability of the reverse.
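A sketch of the scheme under those trade-offs, with SHA-256 standing in for the slow hash (function and parameter names are my own):

```python
import hashlib
import secrets

def iterated_hash(seed: int, iterations: int) -> bytes:
    """Sequentially hash a seed; no known shortcut to the final value."""
    h = seed.to_bytes(16, "big")
    for _ in range(iterations):
        h = hashlib.sha256(h).digest()
    return h

def lock(seed: int, m_range: int, iterations: int) -> bytes:
    """Setter: run the chain once from a secret offset m, then discard m.
    Only seed, m_range, and iterations are published."""
    m = secrets.randbelow(m_range)
    return iterated_hash(seed + m, iterations)

def unlock(seed: int, m_range: int, iterations: int, check):
    """Unlocker: brute-force m, redoing the whole chain per candidate.
    `check` tests a candidate key, e.g. by attempting decryption."""
    for m in range(m_range):
        key = iterated_hash(seed + m, iterations)
        if check(key):
            return key
    raise ValueError("no offset in range produced a working key")
```

Note that the loop over m in `unlock` is exactly the part that parallelizes: each candidate chain can run on a separate machine, which is the weakness identified above.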
Seems to me that if one were storing information over an extended period of time, say 20 years, then as time goes on it becomes more and more likely that the encryption can be broken sooner than desired.
> The value of t was chosen to take into consideration the growth in computational power due to "Moore's Law". Based on the SEMATECH National Technology Roadmap for Semiconductors (1997 edition), we can expect internal chip speeds to increase by a factor of approximately 13 overall up to 2012, when the clock rates reach about 10GHz. After that improvements seem more difficult, but we estimate that another factor of five might be achievable by 2034. Thus, the overall rate of computation should go through approximately six doublings by 2034.
I asked about progress on this puzzle at stack exchange here http://crypto.stackexchange.com/questions/5831/what-is-the-p... and got some nice answers.
In 1999 they had the single-core Coppermine 1133 @ 1.1 GHz (~2 GFLOPS); in 2012 we have the quad-core Sandy Bridge 3970 @ 3.4 GHz (>100 GFLOPS). So by at least one measure the factor of increase is more like 50 than 13.
You could do a delayed form of historical whistleblowing or confession, so that it doesn't cause you problems today, but history can know what really happened.
Perhaps providing the equivalent of a 'sealed envelope' to prove that something was known or happened on a given date, without having to be present or active to prove it.
It's fun to think of a puzzle, but presuming it was truly delivering on the safety promise, I can see quite a few uses. The real schemes though depend on a lot of varying factors, so in cases where the secrecy was critical, you can see why people wouldn't use it.
Here is one possible use case: imagine an offline digital cash system, so i.e. the bank will not accept the same token twice. To protect against an unscrupulous seller, the buyer pays by giving the seller a time-lock puzzle with the tokens; if the goods are not delivered by some deadline, the buyer will deposit the tokens at the bank, thus preventing the seller from doing so. Otherwise the seller solves the puzzle and makes the deposit. This is basically an anonymity-preserving escrow service, though in practice there are probably simpler approaches.
Pretend your secret is an integer. You want to distribute clues as to your secret integer to N of your friends such that any K of them can collude to figure it out, but K-1 of them can't figure out anything about it at all.
So you construct a (K-1)-degree polynomial of one variable, f(x). All of the coefficients of f(x) are random, except that you choose the y-intercept (i.e. the constant coefficient) to be your secret. Then calculate and distribute the numbers f(1), f(2), ..., f(N) to your N friends.
K points on the 2D co-ordinate plane uniquely identify your single (K-1)-degree polynomial, which will have your specific secret y-intercept. However (K-1) points will pick out an entire family of potential (K-1)-degree polynomials. And, in fact, for every single possible secret you could have chosen, there's a (K-1)-degree polynomial that goes through (K-1) points and the possible secret value. So, (K-1) of your friends colluding really don't have any additional information at all about your secret.
Shamir's secret sharing works just like that, but done with integers modulo a prime. (And the prime has to be larger than your secret.)
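A compact sketch of exactly this construction, working modulo a prime as described (the choice of prime and the function names are mine):

```python
import secrets

PRIME = 2**127 - 1  # a Mersenne prime; must exceed any secret being shared

def make_shares(secret: int, k: int, n: int):
    """Split `secret` into n shares; any k of them recover it."""
    # (k-1)-degree polynomial with the secret as the constant coefficient
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x=0 recovers the y-intercept (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret
```

Any k shares, in any combination, interpolate back to the same y-intercept; with k-1 shares every possible secret remains equally consistent, which is the unconditional-security property claimed above.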
I think a better approach (that would also not leak bits of the key) would be to say "hash until the hash of the hash is xxx". So basically you hash until you get xxx and then take the previous iteration's result as a key.
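A sketch of that variant: the published stop marker is the hash that comes *after* the key, so releasing it reveals nothing usable about the key itself (the names and iteration count are my own):

```python
import hashlib

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def lock(seed: bytes, iterations: int):
    """Run the chain; publish only the seed and the marker, not the count."""
    x = seed
    for _ in range(iterations - 1):
        x = H(x)
    key = x
    marker = H(key)   # the value one step past the key
    return key, marker

def unlock(seed: bytes, marker: bytes) -> bytes:
    """Hash forward until the *next* hash equals the marker; the current
    value (the previous iteration's result) is the key."""
    x = seed
    while H(x) != marker:
        x = H(x)
    return x
```

Since the marker is H(key), inverting it is as hard as a preimage attack, so the unlocker gains nothing over redoing the chain.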
After choosing B and C, choose an A whose distance to B and C is proportional to the amount of work you wish to require.
You go ahead and take a number of crackers. Each cracker starts with a random hash and rehashes it a fixed number of times, or until the bottom 8 bits of the hash are 0. Once it is done hashing, it goes to a central database and stores its hash sequence. After that, it picks another hash not already contained in a stored sequence and repeats the process.
Finally, you'd probably have one node just doing the boring hash iteration while checking if it can skip according to the generated hash sequences from the database. I haven't done any math how much of an improvement this would be, but I would suspect a pretty decent improvement given enough nodes.
They will have to run s/bcrypt (N^2)/2 time to get your data. Am I missing anything?
The author is proposing methods that are forcibly sequential.
Set up an email alert so it sends the key to an email.
This assumes google isn't going to discontinue its calendar service.