Hacker News
Snowflake to Avalanche: A Novel Metastable Consensus Protocol Family for Crypto (ipfs.io)
92 points by pors 5 months ago | 49 comments



Initial reactions:

1. Sybil-resistance (faking strong consensus by deploying cheap replica nodes you control) in a protocol like this is crucial. All I could find is this:

To prevent Sybil attacks, it uses a mechanism like proof-of-stake that assigns weights to participants in committee selection based on the money in their accounts.

2. Every non-proof-of-work protocol I've seen, including the Ripple Consensus Process and proof-of-stake, creates a problem of initial coin distribution. PoW systems have a clean distribution mechanism based on external resource consumption. Non-PoW systems produce an airdrop situation. Players start with no funds, and so can't stake. The creator of the network manually assigns ownership, with important long-term political consequences (e.g., Ripple).

3. The lack of an incentive structure around fees in protocols like Ripple creates bizarre economic consequences. For example, Ripple's money stock is guaranteed to shrink because fees are simply burned, rather than given to the consensus leader as in Bitcoin.

4. So far, I haven't seen anything in the paper regarding denial of service attacks on nodes. In other words, I see no negative incentives to deter those who can sign transactions from flooding the network with useless spam, bogging everything down.


Is PoW really a “clean distribution mechanism”? Economically, it’s like proof of stake, but all stake is divided up among the small set of people who have the electrical connections and equipment to mine profitably.


This statement is not always true.

Many deployed PoW implementations are provably worse than most modern Proof of Stake proposals. PoW mining opens the door to selfish mining strategies, whereas Proof of Stake fixes the set of actors and opens the "who gets to mine the next block" mechanism to scrutiny.

This doesn't mean that proof-of-stake is magical, but it's certainly less prone to issues than Proof of Work. It's also less inundated by religious zeal; PoS proposals face healthy skepticism and more vetting BEFORE they tend to be deployed. PoW is axiomatic, the beloved and sacred "Nakamoto consensus," treated (incorrectly, but by many in the space) as a platonic form.


I have two issues with PoS that I've never seen addressed:

(1) Isn't it Plutocratic? PoW can be too in that capital and energy costs money, but PoS seems to directly reward the largest stakeholders with more stake.

(2) Removing money from circulation is not free. It reduces monetary velocity and has other detrimental effects on the currency's economic system.


1. That's a process question. It's quite possible to rotate stakeholders, audit stakeholders, and challenge stakeholders.

2. If a network can hold back some currency, but actually scale to meet global transactional demand, it'll be better in this regard anyways.


> PoW mining opens selfish mining strategies

This isn't really true. It's true of Bitcoin but selfish mining is an artifact of Bitcoin's sloppy way of estimating how much work is being done, not an artifact of PoW. An enhancement like Bobtail [1] eliminates the incentive to selfishly mine by improving the network's ability to estimate the network hash strength.

[1] https://arxiv.org/abs/1709.08750


I've read it, but my concern with this approach is it actually slows Bitcoin down substantially. Bitcoin already can't scale, and moving transactions off-chain to Lightning doesn't really address this concern. What's more, the slower the base blockchain gets, the more prone Lightning networks are to malicious actors.


I'll admit I haven't read it for a few months, but I don't remember anything which would cause Bitcoin to slow down. What do you mean?

It does require a larger block header and more network traffic but Bitcoin's scalability is currently limited by politics, not network bandwidth.


> PoW systems have a clean distribution mechanism based on external resource consumption.

Unless the creators are the only ones mining for a time.


Even without that, it's still not fair, because ASICs aren't widely available to consumers. It's no fairer than simply selling tokens/coins, where your share grows linearly with how much money you have: money you'd otherwise spend on electricity and hardware anyway, so a sale just skips the step of consuming them. That's one of the more popular arguments for PoS.


True, this has been one of the consistent criticisms of Bitcoin - Satoshi solo mined coins and therefore distribution wasn't fair.

OTOH, consider the alternative. Satoshi gave himself all the money in the system, then divvied it up among his friends.

However, given the availability of multiple multi-billion dollar cryptocurrency economies, another option might be possible. Airdrop to the current holders of some other cryptocurrency. Or maybe a basket of cryptocurrencies. Key owners could then claim their money on the newly-created network. This idea really started to take off in 2018 with Bitcoin hard forks.

The problem is that the network creator will face intense pressure to withhold just a little currency to fund a war chest. Yielding to that pressure creates the very political problems I alluded to earlier (e.g., Ethereum).


Add to that - I do not see any mention of the source of timing or "timestamp server" as the Nakamoto paper refers to it. The details of why it is important are in my blog post here https://grisha.org/blog/2018/01/23/explaining-proof-of-work/ (which was once at the top of HN).


I don't think your post shows that any public ledger must provide a timestamp server (which I think is your claim?). For instance, PBFT (with a fixed validator set) works perfectly fine without providing a timestamp server since it's asynchronously safe - you can take as long as you want to complete every PBFT round.

Basically, PoW provides consensus by providing an absolute timestamp (we know that at the difficulty adjustment equilibrium, a certain block header must have taken 10 minutes to produce), but this does not imply that all public ledger consensus algorithms must produce an absolute timestamp (the latter claim is stronger).


My post merely shows that PoW is the mechanism via which this absolute timestamp (I like this terminology, BTW) is provided, and it seems very important. But I think the real question (to which I do not know the answer) is: can it be proven that an absolute timestamp is essential or non-essential? That would answer whether a distributed ledger without PoW (a timelock puzzle) is fundamentally possible.


Yeah, I'll have to think a lot harder about full PoS to maybe answer that question. One thing to note is that there are certain hidden timing assumptions (e.g. in Casper FFG, there is the 4 month unbonding period, and the timescale over which an inactivity leak occurs) that are necessary for security. They seem to correspond to the time scales needed for weak subjectivity social consensus and hard forks, respectively.

I think the most precise name might be something like "affine timestamp": you don't actually need a timestamp relative to the Big Bang, but being able to measure the rate of time is stronger than merely being able to order events.


These days you can just auction the coins in an ICO.


You might even sell ICO financing for a lottery system to help randomize the in-currency return.


Why do you think that's an inadequate Sybil defense? It clearly prevents someone with x% of coins from getting more than x% of the selection by making more accounts.
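As a sanity check on that claim, here's a small Python simulation (illustrative only; the account names and parameters are made up): splitting a stake across many Sybil accounts doesn't change the expected share of stake-weighted committee slots.

```python
import random

def select_committee(balances, k, rng):
    """Pick k committee slots, each slot weighted by account balance."""
    accounts = list(balances)
    weights = [balances[a] for a in accounts]
    return rng.choices(accounts, weights=weights, k=k)

def attacker_share(balances, trials=20_000, k=10):
    """Fraction of committee slots won by non-'honest' accounts."""
    rng = random.Random(0)
    hits = sum(1 for _ in range(trials)
               for a in select_committee(balances, k, rng)
               if a != "honest")
    return hits / (trials * k)

# 30 coins in one account vs. the same 30 coins split across 30
# Sybil accounts: either way the attacker wins ~30% of the slots.
whole = {"attacker": 30, "honest": 70}
split = {f"sybil{i}": 1 for i in range(30)}
split["honest"] = 70
```

Making more accounts only changes which of the attacker's keys win the slots, not how many slots the attacker's stake wins.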


>To prevent Sybil attacks, it uses a mechanism like proof-of-stake that assigns weights to participants in committee selection based on the money in their accounts.

Isn't this already implemented by STEEM?


That line in the paper is actually describing Algorand in the "related works" section. Full quote:

"Algorand [26] uses a verifiable random function to select a committee of nodes that participate in a novel Byzantine consensus protocol. It achieves over 360 tps with 50 second latency on an emulated network of 2000 committee nodes (500K users in total) distributed among 20 cities. To prevent Sybil attacks, it uses a mechanism like proof-of-stake that assigns weights to participants in committee selection based on the money in their accounts."


Agree completely. Nice paper, utterly useless in practice due to Sybil attacks.

edit: not only that, but it will be vulnerable to all kinds of history attacks for bootstrapping nodes


And it's very odd that the paper doesn't acknowledge or anticipate those concerns.


This seems like a thoroughly thought out version of a DAG (directed acyclic graph) based currency. I don't have time to go through it with a fine-toothed comb, but I'm curious about others' reactions.

There are well known problems with DAGs: lack of incentive to run full nodes, tip choice attacks, flooding/spam attacks if there are no fees, and many and varied types of Sybil attacks.

For flooding or spam a transaction proof of work isn't enough. Not only does it "waste" a lot of energy (though at the edge nodes where it's less visible than mining farms) which negates part of the purported benefit of a DAG, but it's vulnerable to ASICs or botnets. If you can short a cryptocurrency on any major exchange that supports short selling then it will get attacked with the goal not of stealing coins or censoring transactions but of just destroying it.
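For concreteness, "transaction proof of work" usually means a hashcash-style stamp like the sketch below (a generic illustration, not the paper's mechanism). The asymmetry is exactly as described above: an ASIC or botnet produces these stamps orders of magnitude more cheaply than the honest edge devices the difficulty is tuned for.

```python
import hashlib

def tx_pow(tx_bytes, difficulty_bits=20):
    """Hashcash-style stamp: grind a nonce until the transaction's
    SHA-256 hash falls below a target with `difficulty_bits` leading
    zero bits. Rate-limits honest edge devices, but an ASIC or botnet
    pays the same cost far more cheaply."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(tx_bytes + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1
```

Raising `difficulty_bits` to price out a botnet also prices out phones and Raspberry Pis, which is why per-transaction PoW alone doesn't settle the spam question.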

Tip choice attacks combined with Sybil attacks can be very sophisticated. Tip choice is "random" but randomness cannot be verified. 3, 18, 593, 3, 3: now prove those were not random numbers modulo 1024. You can't, of course. So I can non-randomly choose the transactions I link to. If I combine this with some sophisticated analysis of the network's transaction structure and physical topology, I might be able to skew the network in some disastrous way over time, in ways that would be completely undetectable since my apparently "random" tip/link choices were not in fact random. Then I can do something like short the coin and do something nasty to the network.
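A toy sketch of that point (hypothetical function names, nothing from the paper): the malicious selector below always approves the attacker's own tips, yet every individual output it produces is also a possible output of the honest uniform sampler, so no single choice can be proven non-random.

```python
import random

def honest_tips(tips, k=2):
    """Uniformly sample k tips to approve."""
    return random.sample(tips, k)

def malicious_tips(tips, my_txs, k=2):
    """Prefer the attacker's own tips, fill the rest at random.
    Every output is a k-subset of tips, i.e. a possible output of
    honest_tips, so an observer cannot prove the bias from any
    single choice; only long-run statistics could reveal it, and
    only if the observer knows which tips are the attacker's."""
    mine = [t for t in tips if t in my_txs]
    picks = mine[:k]
    rest = [t for t in tips if t not in picks]
    picks += random.sample(rest, k - len(picks))
    return picks
```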

Attackers can be very very creative, and attacks only get better.

Last but not least: there is no mining mechanism in a DAG coin, or at least I've never heard of how one could be done. This means DAG coins are "Big Bang" coins that begin with all the money that will ever exist. This is problematic from an economic point of view and opens a huge can of worms around what is done with that money and how it is distributed to initial holders.


Please update the submission title to match the title of the linked paper, i.e. "Cryptocurrencies" and not just "Crypto".


It's probably too long with the entire word.


Then maybe trim the initial catchphrase, or the word "novel."


This sounds an awful lot like each transaction can consume exactly one UTXO, but can have multiple UTXO outputs. This would cause progressive "shattering" of the UTXO set into millions of low-value "dust" UTXOs, a problem that Bitcoin is struggling with (specifically, how best to incentivize "cleaning up" the UTXO set by making transactions with multiple inputs, even though that increases the overall transaction size).

"We adopt what is commonly known as Bitcoin’s unspent transaction output (UTXO) model. In this model, clients are authenticated and issue cryptographically signed transactions that fully consume an existing UTXO and issue new UTXOs."
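Under that single-input reading, a minimal sketch of the model looks like this (illustrative Python, not the paper's actual data structures). Note how every spend replaces one UTXO with several, so the UTXO set can only fragment:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UTXO:
    txid: str
    index: int
    value: int

def spend(utxo, payments, txid):
    """Consume exactly one UTXO and emit one output per payment,
    plus a change output. Under a one-input rule there is no way
    to merge UTXOs back together; value only splits further."""
    assert sum(payments) <= utxo.value, "insufficient funds"
    outputs = [UTXO(txid, i, v) for i, v in enumerate(payments)]
    change = utxo.value - sum(payments)
    if change:
        outputs.append(UTXO(txid, len(payments), change))
    return outputs

coin = UTXO("genesis", 0, 100)
outs = spend(coin, [10, 5], "tx1")  # three UTXOs: 10, 5, and 85 change
```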


Seems like ambiguous wording on that part. However, they do not seem to require single-UTXO inputs; transactions can consume multiple.


It seems like in this protocol, such a cleanup wouldn't even be possible... if transactions can have only one input


Either "using crypto" or "for cryptocurrencies"


It would have been good if the authors had extended figures 20, 21, and 22 to networks of 20,000 or more nodes, or at least described what they expect to happen. I.e., if you have many more nodes, does the throughput remain above 1k tps?

I also always like authors who are willing to acknowledge the limitations of their work. If this work described the limitations I didn't see it; maybe they think there are none :-)


On a cursory reading, they do seem to discuss limitations. They say that the system is not guaranteed to provide liveness for double-spends.


We’re a technical community. Can we not shorten cryptocurrencies to crypto? Admin can you change the title?


Is this Sybil resistant though?


That's my question as well, the introduction says:

> Specifically, the system operates by repeatedly sampling the network at random, and steering the correct nodes towards the same outcome.

Obviously random sampling could be trivially manipulated if anybody can spawn nodes very easily. I expected that the "fix" would be in the "Snowflake" algorithm but I don't see how it prevents that:

> When the protocol is correctly parameterized for a given threshold of Byzantine nodes and a desired guarantee, it can ensure both safety (P1) and liveness (P2).

But isn't that threshold effectively infinite? If you look at something like the Bitcoin network there are very few incentives to maintain full nodes. Meanwhile, if having a majority of nodes lets you cheat and steer the network (which is not the case for BTC thanks to PoW), the incentive to spawn a huge number of Byzantine nodes would be very high.

After that the paper introduces the notion of "confidence" which might be the key to unraveling all that but I haven't yet fully understood that part. I don't have more time to look into it at the moment, hopefully somebody else will.
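For anyone skimming, the core Snowflake loop is simple; here's a rough single-node simulation (my own paraphrase of the paper's description, honest peers only, with parameters k, alpha, beta chosen arbitrarily). The Sybil concern above is exactly that nothing in this loop stops an attacker from being most of the sampled peers:

```python
import random
from collections import Counter

def snowflake(node_colors, node_id, k=10, alpha=8, beta=15, rng=None):
    """One node's Snowflake loop: repeatedly sample k random peers;
    if >= alpha report the same color, adopt it; accept a color after
    beta consecutive successful queries for it. Sketch only; honest,
    static peers, no Byzantine behavior."""
    rng = rng or random.Random()
    color, count = node_colors[node_id], 0
    peers = [n for n in node_colors if n != node_id]
    while count < beta:
        sample = rng.sample(peers, k)
        majority, votes = Counter(node_colors[p] for p in sample).most_common(1)[0]
        if votes >= alpha:
            if majority != color:
                color, count = majority, 1
            else:
                count += 1
        else:
            count = 0
    return color
```

With peers split 90/10 between two colors, a minority-colored node flips to the majority and locks in after beta consecutive alpha-majority samples; an attacker who controls most of the sampled identities steers that outcome just as easily.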


You'd need to pair this with a PoS-style trusted validator set. The internet couldn't scale to an arbitrarily sized fully connected, chatty network anyway.


At this point, are these papers simply to reduce the cost of verification?

It seems that's the only problem in the crypto world, but I don't know if verification will ever be scalable.


What do you mean about "cost of verification"? Verification is not too difficult, it's reaching a consensus among trust-less nodes that is.

PoW solves the problem by giving any node that receives two valid but conflicting versions of the blockchain an objective metric for deciding which one is the "right" one: whichever has the most work put into it. Since you can't fake work, you can't arbitrarily create a new chain that would take over the others (unless you manage to work harder than all the rest of the network combined, hence the 51% attacks).

Without PoW, if you receive two valid but conflicting chains you need another metric to decide which one you select. This paper describes such an approach.
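That "most work" metric is just a fork-choice rule; a simplified sketch follows (real clients derive expected work from each block's compact difficulty target rather than summing a bare difficulty field):

```python
def chain_work(chain):
    """Total expected work: each block contributes its difficulty,
    so a harder chain can outweigh a longer one."""
    return sum(block["difficulty"] for block in chain)

def fork_choice(chains):
    """Nakamoto fork choice: among valid competing chains, follow
    the one with the most cumulative work, not the most blocks."""
    return max(chains, key=chain_work)

# A shorter chain mined at higher difficulty beats a longer easy one.
a = [{"difficulty": 1}] * 5   # 5 easy blocks, work = 5
b = [{"difficulty": 3}] * 2   # 2 hard blocks, work = 6
assert fork_choice([a, b]) is b
```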


Verification becomes difficult when the system gains adoption and processes thousands of transactions per second. The issue then becomes the trade-off between node resource requirements and decentralization. Fundamentally, every node having to process/verify every single transaction in the network is not scalable.


Hence the ongoing debate around whether verification should be less computationally costly than computation (BTC) or whether a system can be architected to successfully scale even when verification == computation (ETH).


Much like hashgraph https://hederahashgraph.com


Do you think it would be possible to build a consensus algorithm for scientific consensus?


Perhaps this is the technology behind https://chia.net ?


Chia is bitcoin but with farming (storage) instead of mining (compute), plus a whole host of cleanups to do it right. So no DAG or any such thing.

One could read "Beyond Hellman's Time-Memory Trade-Offs with Applications to Proofs of Space" https://eprint.iacr.org/2017/893.pdf


Is proof of space actually "greener" than proof of work? Memory and hard drives consume power, and querying them consumes power. Seems like it could be just another kind of work. It would be more ASIC resistant though.


It can be greener if there's a higher ratio of hardware cost to energy consumption (including the energy consumed to produce the hardware).


Yes. Vastly.


I'm curious about this. Are you saying that relying on storage latency and volume is energetically cheaper than relying on compute?

I see a few flaws in this. First of all, all fast storage media consume energy even when idle. Secondly, I think this neglects the embodied energy (energy to manufacture) of storage media. Lastly, if a proof-of-storage mining scheme became popular, you'd probably see ASICs that incorporate onboard fast memory controllers with huge caches, and other approaches that would improve performance to the point that this would just become another proof of work.

Comparing full life cycle energy of different approaches to securing a cryptocurrency is actually pretty tough. It's also pretty hard to compare it to the energy requirements of more conventional approaches to currency since the energy cost of those is so spread out across society.


It's not.



