
Cuckoo: a memory-bound graph-theoretic proof-of-work system - k_vi
https://github.com/tromp/cuckoo
======
est
How does the power consumption compare with other PoWs, e.g. SHA1?

~~~
tromp
With Cuckoo Cycle, you pair every siphash evaluation (mapping 64 bits to 64
bits; a hash function an order of magnitude cheaper than SHA1) with a
random memory access. As a result, most of the power consumption will be from
the memory subsystem. I believe a DRAM chip draws on the order of 1W.

So you should be able to run this on a smartphone, while it's charging
overnight.
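To make the pairing above concrete, here is a minimal sketch of how an edge-per-nonce graph can be derived from a keyed hash. It is hypothetical: the real miner uses siphash-2-4 keyed by the block header, which Python's stdlib does not expose, so a keyed blake2b stands in, and `NEDGES` is an illustrative size, not the real parameter.

```python
import hashlib

NEDGES = 1 << 20          # illustrative graph size, not the real parameter
EDGEMASK = NEDGES - 1

def node(key: bytes, nonce: int, uorv: int) -> int:
    """Map an edge index (nonce) to one endpoint of a bipartite edge.
    blake2b is a stand-in for the siphash used by the actual miner."""
    h = hashlib.blake2b(nonce.to_bytes(8, "little") + bytes([uorv]),
                        key=key, digest_size=8)
    return int.from_bytes(h.digest(), "little") & EDGEMASK

def edge(key: bytes, nonce: int) -> tuple[int, int]:
    # Each nonce yields one edge; computing it costs one cheap hash per
    # endpoint, and each endpoint lookup induces a random memory access.
    return node(key, nonce, 0), node(key, nonce, 1)
```

Since every hash output immediately drives a random index, memory latency and power, not hash throughput, dominate the workload.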

------
socrates1024
According to the paper "Asymmetric Proof of Work based on the Generalized
Birthday Problem" (appearing at NDSS 2016, a top security conference)
[http://orbilu.uni.lu/bitstream/10993/22277/1/alex-dmitry-asymmetric-PoW.pdf](http://orbilu.uni.lu/bitstream/10993/22277/1/alex-dmitry-asymmetric-PoW.pdf)
the Cuckoo puzzle is amenable to parallelism, and thus potentially to a
"time-area" tradeoff. What do you think?

~~~
tromp
The project page states that "trading off memory for running time, as
implemented in tomato_miner.h, incurs at least one order of magnitude extra
slowdown"

For instance, to look for a 42-cycle on a billion-node graph, the reference
algorithm uses 128MB.

If you want to get by with only 32MB, then you can run that alternative
algorithm, but it will take about 128/32*25 = 100 times longer. The penalty
factor of about 25 is due to losing the ability to represent edges with a
single bit.
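A one-bit-per-edge representation of the kind referred to above can be sketched as follows. This is illustrative only: the reference miner's actual memory layout is more involved, but the arithmetic shows why a bit per edge is so compact.

```python
class EdgeBitmap:
    """One bit per edge, recording e.g. whether an edge is still alive."""

    def __init__(self, nedges: int):
        self.bits = bytearray((nedges + 7) // 8)

    def set(self, e: int):
        self.bits[e >> 3] |= 1 << (e & 7)

    def test(self, e: int) -> bool:
        return bool(self.bits[e >> 3] & (1 << (e & 7)))

# 2^30 edges at one bit each take 2^27 bytes = 128 MiB; dropping to
# one bit per edge is what a smaller-memory variant has to give up.
bitmap_bytes = (1 << 30) // 8
```

Once edges need more than a bit each (e.g. counters or indices), the same edge set costs several times the memory, which is where the ~25x penalty factor mentioned above comes from.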

You can also parallelize by having more cores share the same memory, but it
only takes so many cores to saturate a memory bank.

That paper sadly misrepresents Cuckoo Cycle by focusing on an outdated version
from the first half of 2014 (and incorrectly describes it as working on
directed graphs).

------
jcoffland
The big problem with an algorithm like this one is that there is a fair chance
that someone will come up with a substantially faster method. This part
bothers me:

>These bounties are to expire at the end of 2016. They are admittedly modest
in size, but then claiming them might only require one or two insightful
tweaks to my existing implementations.

------
zump
Does this solve the centralization problem of Bitcoin (Chinese miners etc.)?

~~~
tromp
Cuckoo Cycle author here. In the recent thread "The resolution of the Bitcoin
experiment" I commented:

Bitcoin mining could be more decentralized if it better resembled a lottery,
where huge numbers of people play for an expected loss. In other words, the
lack of people mining at a loss makes mining profitable and hence subject to
forces of centralization.

There are several reasons why mining as a lottery substitute is rare, a major
one being that commodity hardware is inefficient by many orders of magnitude,
making even a botnet next to useless. Perhaps, if a proof of work, whose
efficiency gap (with custom hardware) is at most an order of magnitude, were
adopted (or slowly phased in), enough lottery players would arise to make
mining unprofitable at scale.

Botnets should then just be welcomed as a modest increase in decentralization.

------
DanWaterworth
Interesting. I wonder how a SAT solver would do on this problem.

~~~
tromp
How would you express a 42-cycle in SAT? How much memory would that take for a
billion-node graph?

The Cuckoo Cycle algorithm needs only 128MB and a few seconds to solve this.

~~~
DanWaterworth
> How would you express a 42-cycle in SAT?

That's easy. You just need to design a combinational circuit that verifies
that a particular list of nodes forms a cycle.
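The predicate such a circuit would verify is simple: consecutive nodes in the list are joined by edges, the list wraps around, and no node repeats. A plain-Python sketch of that check (a hypothetical helper, executed directly rather than unrolled into CNF):

```python
def is_cycle(nodes: list[int], edges: set[frozenset[int]]) -> bool:
    """Check that `nodes` forms a simple cycle in the given edge set.
    A SAT encoding would unroll exactly this predicate into a
    combinational circuit over the candidate node list."""
    if len(nodes) != len(set(nodes)):      # all nodes must be distinct
        return False
    return all(frozenset((nodes[i], nodes[(i + 1) % len(nodes)])) in edges
               for i in range(len(nodes)))
```

The circuit itself is small; the difficulty raised below is that the solver must also account for where the edges come from, which is what blows up the memory.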

> How much memory would that take for a billion-node graph?

Naively, a lot. Thinking about it, way too much memory to be in any way
practical.

However, there may be a good way to solve it with an incremental solver or,
instead of materialising the graph, you could encode the siphash function in
the SAT solver. Then it would use much less memory.

If that works well though, it would compromise the original goal of the
project.

~~~
tromp
I don't see how that could possibly work well with all the overhead. Remember
that the reference algorithm can be viewed as an exceedingly optimized SAT
solver, using exactly one boolean variable to say whether a particular edge is
used in the cycle or not.

------
js8
Umm.. wouldn't it be better (for the environment) if you just donated the CPU
time to folding@home or a similar project, and they certified that you donated?

I think someone suggested there already is a cryptocurrency like that.

~~~
garethrees
There are a couple of cryptocurrencies based on this idea (CureCoin [1] and
FoldingCoin [2]). But as far as I can tell these both rely on a trusted third
party to distribute the folding work and verify that it was done. That's
because unlike hashcash and similar proofs of work via hashing, protein
folding is not cheap to verify.

If you're concerned about energy use and willing to trust a third party to
verify the transactions, then why not use VISA?

(I looked at the CureCoin site and skimmed the FoldingCoin white paper [3],
but I couldn't find any description of how they verify the folding work. Can
someone point me at an explanation?)

[1] [https://www.curecoin.net](https://www.curecoin.net)

[2] [http://foldingcoin.net](http://foldingcoin.net)

[3] [http://foldingcoin.net/the-coin/white-paper/](http://foldingcoin.net/the-coin/white-paper/)

~~~
js8
> If you're concerned about energy use and willing to trust a third party to
> verify the transactions, then why not use VISA?

Well, that's my question.. They are already willing to trust the party that
generated the code, and to give it legal access to their computer for this
purpose. I think we would all be better off if it just required proof of a
monetary donation to some charity or something.

All that was accomplished here was shifting trust onto these individuals, at
some energy expense; it's insane.

It's a nice theoretical exercise for sure, but I sincerely hope that anyone
attempting to build yet another monetary system on wasted energy will
seriously reconsider the idea.

