So far I've only skimmed the white paper, but I didn't see any response to this problem. (If the answer is "good guys can make lots of fake nodes too" then you're creating a computational arms race that will inevitably lead to just as much wasted effort as proof-of-work.)
There's also the minor practical issue that every node has to do an amount of computation and network traffic that's linear in the total number of other nodes, meaning the total workload grows quadratically with the size of the network.
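For a concrete sense of the scaling, here's a toy count (my own illustration, not from the paper), assuming each node exchanges one message with every other node per round:

```python
# Toy message count, assuming one message per peer per round
# (my illustration; real gossip topologies do better than all-to-all).
def messages_per_round(n_nodes: int) -> int:
    # each of n nodes talks to the (n - 1) others
    return n_nodes * (n_nodes - 1)

for n in (10, 100, 1000):
    print(n, messages_per_round(n))  # grows ~n^2: 90, 9900, 999000
```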
Not really answering your question (which is answered elsewhere in the thread), but these things are independent in an interesting way. Sybil attacks, in the simplest sense, depend on running consensus across a group where you don't have trustworthy membership information. If you do, then it turns into an authentication problem, which is different. Sybil attacks are an attack on the protocol that controls jury membership (or authenticates jury members), rather than the consensus protocol itself.
Upon re-reading the paper, I also noticed that it says "In the beginning of the Propose step the [deterministically chosen] designated proposer for that round broadcasts a proposal to its peers via gossip" but it's not obvious to me what happens if that proposer fails to participate. Is the algorithm allowed to skip to the next round? If so, is there some guarantee that the gaps can't later be filled in, and the history retroactively modified? (The DLS algorithm solves this problem by requiring all nodes to propose a value.)
You can prevote or precommit more than once per height (on different rounds), but a commit for a height must be the last signature for that height.
The sudden mass bonding is a problem for light clients doing a certain type of desirable SPV. We're considering options to mitigate this issue. One option is to throttle bonding and unbonding at the protocol level; another is to introduce a second class of coin used for validation and to control its issuance.
Section 6.5 purports to give a game-theoretic justification for why a node should include as many valid signatures from its peers as possible when proposing a block, for fear of retaliation. But nothing stops me from dividing my voting share among, say, a thousand nodes which appear to be unrelated.
If one of those nodes decides to "defect" (steal a larger-than-intended share of fees by excluding some signatures) then the expected time before it is selected again can be made arbitrarily large, by making the node's voting share arbitrarily small. If that expected time is significantly larger than the unbonding period, then there's no way to retaliate, is there? It can just unbond and then transfer its coins to a "new" node.
EDIT: I originally typo'd "arbitrarily large" instead of "arbitrarily small" above, but hopefully the meaning was clear.
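To make the argument concrete, here's a back-of-the-envelope sketch (my own numbers; the unbonding period is a made-up parameter): with voting share s, selection each block is roughly a Bernoulli(s) trial, so the expected wait between selections is about 1/s blocks.

```python
# Sketch of the retaliation-window argument (my own numbers).
def expected_blocks_until_selected(share: float) -> float:
    # selection per block ~ Bernoulli(share), so the mean wait is 1/share
    return 1.0 / share

unbonding_period_blocks = 10_000   # hypothetical protocol parameter
share = 1e-6                       # attacker splits stake into tiny pieces

# The expected wait (~1,000,000 blocks) dwarfs the unbonding period,
# so the defector can unbond and move coins before it is ever punished.
assert expected_blocks_until_selected(share) > unbonding_period_blocks
```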
More bandwidth && less latency && more CPU helps.
1. You assume the initial distribution of tokens is trusted. If someone presents you a consensus without a history that makes you believe the initial distribution isn't full of Sybils, distrust it.
2. Assuming a large trusted initial distribution, operationally you have to believe that more than 2/3 of the validator nodes don't want to double-spend or present an alternate reality to you.
I don't see how your point 2 solves anything, though; it's just failing to acknowledge the problem. Why is it reasonable for me to think that a supermajority of the nodes are honest? If validators are supposed to be computationally cheap to run, compared to bitcoin miners, then what stops an attacker from running lots of them to get a larger voting share?
Personally, I agree with you.
I am writing the "validators contract" for tendermint right now. It will have this functionality.
If it does, I expect that it would reduce the likelihood of a Sybil-style attack during the early days of the network -- at the risk of introducing a different vector: direct coercion of the node owners.
Is the plan to make the hard-coded genesis nodes as geographically diverse as possible?
We're using TM consensus to create something more interesting than just a crypto "currency", so the website and github haven't been updated in a while. But the algorithm is still sound (as far as we know) and we're looking forward to launching soon!
One of the possible improvements to proof-of-stake we're willing to investigate now is a proof-of-stake + proof-of-activity hybrid, where proof-of-activity is implemented as described in the PoW+PoA paper http://eprint.iacr.org/2014/452.pdf. From my personal point of view Tendermint is also a kind of PoS+PoA hybrid, just implemented in another way. Do you Tendermint folks have simulation tools, or maybe even stricter models, around your consensus algo to play with?
What's the best way to get in touch? Would love to work together on some things.
My mail is kushtech [at] yahoo [dot] com.
They're building a securities market?
A large stakeholder could easily control perhaps 10% of all coins. I think that might be enough to cause a fork. The reward is huge, the risk low (declare a hack and go bankrupt).
It would be as if launching a double-spend attack in Bitcoin resulted in the destruction of the attacker's mining equipment (and whatever other fixed cost investments that were spent in setting up the mining equipment). This isn't actually possible with mining because mining power is anonymous. Pubkey identities and traditional quorum-based byzantine consensus allows for this kind of antifragility that Bitcoin cannot have.
"Security difficult to quantify — depends on extrinsic factors"
Nope, the security is very easy to quantify in Bitcoin - you know that there is a certain amount of processing power behind a single block. As your transaction goes into the blockchain, you can be pretty damn sure that it won't be reversed once enough computing power is behind it.
"Blockchain forks often — difficult to build applications upon"
Nope, bitcoin blockchain forks very rarely. Where is this claim pulled from?
But how much did that processing cost in BTC? People come up with widely varying estimates. Cost of security matters because ultimately you want to denominate risk in the same currency that you're transacting.
Blockchain forks often
There are minor forks (usually called orphans) every few days: https://blockchain.info/charts/n-orphaned-blocks?timespan=30... Orphaning requires Bitcoin software to include complex chain reorganization code; a blockchain where orphans are very rare could omit such code.
So, this by itself is not an argument. That amount of computing power was produced once, so you can be pretty damn sure that if the incentives line up it will be produced again, just on a different history.
I have a few slides detailing the rather scary game theory behind PoW here:
Particularly note the "Low incentive to protect against hacks". If the major Bitcoin mining farms get hacked, raided by men wearing ski masks and black suits, etc in order to double spend, large 51% attacks become possible, and a coordinated global attack could earn >$10-50m via double spends, and even more with shorting at 10x leverage. But from the point of view of each individual mining farm, they only lose at most 10-50 blocks' worth of rewards multiplied by their hashpower - a total private incentive of ~$100,000. In fact, if everyone else gets attacked, it's in your interest to ensure that you get attacked too!
Also, the "bribe attack" variants all theoretically cost under $100,000. In fact, the largest countervailing incentive factor against all of those is the idea that each mining farm has a decently sized percentage of mining power and thus captures a substantial portion of the "public-good" incentive of the network not failing. Now, you can argue that Bitcoin has not succumbed to these and works fine in practice, but (i) right now, Bitcoin is still tiny, and (ii) if that's true, then naive proof of stake, with its supposedly fatal nothing-at-stake problems, will also work fine in practice.
> Nope, bitcoin blockchain forks very rarely. Where is this claim pulled from?
It forks back one block ~1-2% of the time and two blocks ~0.01-0.04% of the time. You can determine this from simple math plus network latency statistics, or check blockchain.info: https://blockchain.info/charts/n-orphaned-blocks?showDataPoi...
This might not seem like much, but if you are looking at the blockchain as an application platform then users are really not used to the idea of finality suddenly becoming non-final 1% of the time.
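For the curious, the "simple math" is roughly this toy model (my own back-of-the-envelope, not a rigorous derivation): a 1-block fork happens when a competing block is found while the first is still propagating.

```python
# Toy fork-rate model (my own simplification): a 1-block fork needs a
# competing block found during propagation; a 2-block fork needs that
# to happen twice in a row.
propagation_delay_s = 10     # rough historical figure (assumption)
block_interval_s = 600       # Bitcoin's target block time

p1 = propagation_delay_s / block_interval_s   # ~1.7%
p2 = p1 ** 2                                  # ~0.028%
print(f"one-block forks: {p1:.2%}, two-block forks: {p2:.4%}")
```

The numbers land in the same ballpark as the orphan rates quoted above.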
"If the validator causes the blockchain to fork while its coins are locked in bond, all of its coins are destroyed. This means that as long as you sync your blockchain with the network periodically you never have to trust a central checkpoint."
>They propose a modification to the Bitcoin protocol such that it can tolerate colluding groups that control up to 1/4 of the mining power – less than the previously assumed bound of 1/2 of Byzantine mining power which requires an honest mining majority that follows the prescribed protocol.
Selfish mining is tolerable but unideal. It is a bit misleading to claim that 25% block withholding attacks are as critical as 51% attacks.
>Users keep an account in the system, where the user’s account is identified by the user’s public key or address.
Promoting address reuse isn't a good idea.
>When a validator signs duplicitously, a short evidence transaction can be generated by anyone with the two conflicting commit-vote signatures
The assumption that an attacker will tell you he is attacking isn't a very strong one.
>A user can avoid long-range attacks by syncing their blockchain periodically within the bounds of the unbonding period.
This security assumption means that no one can safely join the network and perform an initial sync. It isn't explained how they prevent long-range attacks, but it can be assumed that they do it by disallowing reorgs.
Disallowing reorgs allows another consensus breaking attack. The entire purpose of reorgs is to maintain consensus, so if an attacker can create a blockchain of length bond-period, they can trick you so you don't achieve consensus with the "main" blockchain.
Another worrying thing about this whole idea is that 2/3 of validators must come to consensus on which block to use. If you choose a block and broadcast your signature, you cannot sign again. So how do the validators solve the Byzantine Generals Problem? How do they determine which block they will sign? They could go with the first block they see, but that leaves you open to sybil attackers running many nodes and sending validators different blocks in order to trick them into splitting their vote. These are the fundamental problems that consensus systems are meant to solve; a consensus system shouldn't need a second consensus system to support itself.
>There are three types of votes: a prevote, a precommit and a commit
It seems the paper is attempting to solve the previously mentioned problem, but the most fundamental explanation of what's wrong with this is the Byzantine Generals Problem itself. The problem essentially involves sending a message of "I'm ready to attack" back and forth: because you cannot know the other side received the message, you cannot attack. If they reply with an "okay, we got the message" message, they don't know that you received the reply, so they don't know whether to attack. Whether there is one back-and-forth, three (a "prevote to attack", a "precommit to attack" and a "commit to attack"), or a hundred, you aren't achieving consensus; you're skirting the issue.
The next few sections explain the mechanism, but not how it actually solves the Byzantine Generals Problem, relying instead on additional assumptions about things that themselves require consensus, including "If there are less than 1/3 in Byzantine voting power and at least one good validator decides on a block B, then no good validator will decide on any block other than B".
>Each round has a designated proposer chosen in round-robin fashion such that validators are chosen with frequency in proportion to their voting power
If the pseudo-random seed for the next round's validators is information that can be decided by "previous" (or not-so-previous) validators, a validator can choose themselves. With a high number of validators it may take a few thousand blocks for, say, a 3% stakeholder to randomly be selected for a sufficient number of votes to choose the next seed, but once they have ground their first block and made themselves a majority of validators, they can repeat the process over and over and control the blockchain.
> Say that some good validators locked on block B and round R, and some good validators locked on block B0 at round R0, where R < R0. In this case the proof-of-lock from R0 included in a proposal by a good validator will eventually unlock those validators locked on R allowing the consensus process to continue.
This "solution" helps prevent 33% deadlocks, I agree, however now you can seed your stakegrinding attack with an incredibly small validator/voter %. Depending on the parameters of the system, you may only need to become a validator once to control the entire blockchain forever.
The system of voters has somewhat obscured it, but stake grinding is still a major problem, in fact, it is probably the simplest and cheapest attack vector on the system defined in this paper.
In conclusion, this system has severe sybil problems, consensus problems, centralization problems (one person could cheaply control the entire blockchain), structure problems (address reuse) and usability/security-assumption problems (no one can safely sync with the network after the first bonding period is up?).
Stake grinding is the process of changing data in a block to create a large number of new seeds (sometimes billions or trillions) until the seed designates you as the "winner", or in this case majority of validators.
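A minimal sketch of that grinding loop (illustrative only; the selection rule and the free "nonce" field are my assumptions, not anything from the paper):

```python
import hashlib

# Toy stake-grinding loop (illustrative; the proposer-selection rule
# below is a hypothetical stand-in, not Tendermint's actual rule).
def next_proposer(seed: bytes, n_validators: int) -> int:
    # hypothetical rule: hash of the seed picks the next proposer index
    return int.from_bytes(hashlib.sha256(seed).digest(), "big") % n_validators

def grind(prev_seed: bytes, my_index: int, n_validators: int) -> int:
    # vary a free field of the candidate block until the derived seed
    # re-elects the attacker; ~n_validators tries expected
    nonce = 0
    while True:
        seed = hashlib.sha256(prev_seed + nonce.to_bytes(8, "big")).digest()
        if next_proposer(seed, n_validators) == my_index:
            return nonce
        nonce += 1

winning_nonce = grind(b"genesis", my_index=7, n_validators=100)
```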
Tendermint consensus is based off the solution to byzantine generals for partial synchronicity explained in this paper: http://groups.csail.mit.edu/tds/papers/Lynch/jacm88.pdf
Tendermint doesn't have a random number generator at all. You cannot have consensus on random numbers, or else they wouldn't be random. It is a fundamental problem that you are determining the next block's winner based on numbers that can be influenced by previous winners.
If the sum is even, I win. If the sum is odd, you win.
This simple 2-person game randomly generates 1 bit. It can be extended to arbitrary numbers of people by replacing addition with XOR.
N people each vote either "0" or "1". They reveal their votes simultaneously. The minority participants are rewarded. The median of the votes is the next random bit.
I prefer (1) over (2).
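For reference, scheme (1) is usually implemented with hash commitments so that the reveals can be sequenced; here's a sketch under that assumption (the commitment step is my addition, the scheme above only says "reveal"):

```python
import hashlib, secrets

# Commit-reveal sketch of scheme (1), generalized with XOR.
def commit(bit: int):
    salt = secrets.token_bytes(16)
    return hashlib.sha256(salt + bytes([bit])).hexdigest(), (salt, bit)

def verify(digest: str, salt: bytes, bit: int) -> bool:
    return hashlib.sha256(salt + bytes([bit])).hexdigest() == digest

# Phase 1: everyone publishes a commitment to a private random bit.
parties = [commit(secrets.randbelow(2)) for _ in range(4)]
# Phase 2: everyone reveals; reveals are checked against commitments.
assert all(verify(d, s, b) for d, (s, b) in parties)
# The XOR of all revealed bits is the shared random bit.
random_bit = 0
for _, (_, b) in parties:
    random_bit ^= b
```

Note this doesn't by itself answer the timing objection: a party that dislikes the pending outcome can simply refuse to reveal.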
"Simultaneous" has no meaning. There are no means of instant communication, there is latency. There also is no proof of time without a consensus system.
So what is the proof that I committed my 0 or 1 value before they revealed? Well you could trust a central authority to maintain that timestamping, or you could even use Bitcoin for your timestamping. In fact, your PoS system could probably work if it piggy backed on Bitcoin completely.
If that's too abstract for you, consider this: I say I'm ready to commit, everyone sends me their commits, and I construct a block with all their commits plus a ton of other possible commits, grinding until I have created a seed that allows me to win and control the network permanently.
Thank you for the important issues you raised.
Depends on the application and the purpose of the blockchain, but the paper isn't explicitly promoting address reuse.
> The assumption that an attacker will tell you he is attacking isn't a very strong one.
Users don't consider blocks committed until they've seen +2/3 of commit signatures, so a fork implies that the victim has evidence of double-signing.
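A sketch of what that evidence check looks like (my own simplification; a real commit-vote would carry a signature over the height and block hash, and the evidence transaction would bundle both signed votes):

```python
from dataclasses import dataclass

# Simplified commit-vote and duplicity check (illustrative only).
@dataclass(frozen=True)
class CommitVote:
    validator: str
    height: int
    block_hash: str

def is_duplicity_evidence(a: CommitVote, b: CommitVote) -> bool:
    # two commits by the same validator at the same height for
    # different blocks prove double-signing
    return (a.validator == b.validator
            and a.height == b.height
            and a.block_hash != b.block_hash)
```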
> This security assumption means that no one can safely join the network and perform an initial sync.
Not sure what you mean -- Maybe an analogy will clarify. Is this a problem that isn't present in Bitcoin? How can a brand new Bitcoin user securely join the network for the first time?
We don't prevent long-range attacks. We assume that they will happen, and the clients should be coded accordingly. For example, if my phone wallet hasn't synced with the network in a while (longer than the unbonding period) then it should warn me the next time I connect, and perhaps ask me to validate a fingerprint of the blockchain via outside channels.
I'm aware of how checkpoints prevent reorgs in other protocols like PeerCoin. That's not how Tendermint works.
> So how do the validators solve the Byzantine Generals Problem?
Perhaps a prerequisite to understanding the Tendermint paper is to first understand the DLS algorithm. It works in a similar but different way. In short it's a round-based 2-phase locking system (prevotes and precommits) with a 3rd phase of signing on top to commit. Without the round-based 2-phase locking system, the protocol would have the problem that you mentioned. But assuming that there are less than 1/3 of byzantine voting power, the validators will come to consensus before they start committing.
> stakegrinding attack with an incredibly small validator/voter %
Stakegrinding is less of an issue in Tendermint because of the deterministic round-robin algo for choosing the proposer, and the fact that the remaining validators need to commit. There are subtle attacks involving the timing-out of validators, but there are also workarounds. These attacks are probably not what you have in mind when you mention stakegrinding.
TLDR: Check out the DLS algorithm first, "Consensus in the Presence of Partial Synchrony (1988)", and then let's continue the discussion. Also check out forum.tendermint.com for some other Q&As.
It defines a system in which addresses are reused.
>Users don't consider blocks committed until they've seen +2/3 of commit signatures, so a fork implies that the victim has evidence of double-signing.
No it doesn't, unless you have a severe misunderstanding of information theory and think everyone knows all information everywhere (including attacking forks that were never published). Your security assumption shouldn't be that the attacker is dumb enough to announce that he's attacking and give up his money by broadcasting forks.
>Not sure what you mean -- Maybe an analogy will clarify. Is this a problem that isn't present in Bitcoin? How can a brand new Bitcoin user securely join the network for the first time?
It worries me that you can't answer this and you're working on a cryptocurrency. They can achieve consensus because any of their peers can supply them a proof of the transaction-output database. Not allowing reorgs allows a peer to give you a proof that prevents you from achieving consensus.
>For example, if my phone wallet hasn't synced with the network in a while (longer than the unbonding period) then it should warn me the next time I connect, and perhaps ask me to validate a fingerprint of the blockchain via outside channels.
The purpose of a blockchain is to achieve consensus without trusting anyone. If you trust that group, you probably would get along just as well with them managing a centralized database of transactions for you.
>I'm aware of how checkpoints prevent reorgs in other protocols like PeerCoin. That's not how Tendermint works.
I never mentioned checkpoints. The paper basically said it wouldn't allow large reorgs when it said it could prevent long range attacks and ignore the fork.
>Stakegrinding is less of an issue in Tendermint because of the deterministic round-robin algo for choosing the proposer, and the fact that the remaining validators need to commit.
The entire reason stake grinding is possible is that the selection of the next block's winner is deterministic. The previous winners can change the seed over and over again until they win.
>and the fact that the remaining validators need to commit.
This is not relevant; you can wait until the end or ignore their validations completely.
>Check out the DLR algorithm first, "Consensus in the Presence of Partial Synchrony (1988)"
This paper is fine, but it assumes fewer than 1/3 of the generals are traitors. Your system makes it trivial for a traitor to pretend to be all of the generals.
> This security assumption, the idea of “getting a block hash from a friend”, may seem unrigorous to many; Bitcoin developers often make the point that if the solution to long-range attacks is some alternative deciding mechanism X, then the security of the blockchain ultimately depends on X, and so the algorithm is in reality no more secure than using X directly – implying that most X, including our social-consensus-driven approach, are insecure.
> However, this logic ignores why consensus algorithms exist in the first place. Consensus is a social process, and human beings are fairly good at engaging in consensus on our own without any help from algorithms; perhaps the best example is the Rai stones, where a tribe in Yap essentially maintained a blockchain recording changes to the ownership of stones (used as a Bitcoin-like zero-intrinsic-value asset) as part of its collective memory. The reason why consensus algorithms are needed is, quite simply, because humans do not have infinite computational power, and prefer to rely on software agents to maintain consensus for us. Software agents are very smart, in the sense that they can maintain consensus on extremely large states with extremely complex rulesets with perfect precision, but they are also very ignorant, in the sense that they have very little social information, and the challenge of consensus algorithms is that of creating an algorithm that requires as little input of social information as possible.
>> Is this a problem that isn't present in Bitcoin? How can a brand new Bitcoin user securely join the network for the first time?
> It worries me that you can't answer this and you're working on a cryptocurrency.
I'm not asking because I don't know the answer. (I know the answer.) I'm asking because your answer will help me clarify the answer for you using the framework and terminology that you're comfortable with. And in this space, each term means different things in different contexts. :)
It's not trivial for a traitor to pretend to be all of the generals. Say there are validators A, B, C, D, and they have equal voting power. Given this configuration, it's impossible for A to be the proposer for 2 blocks in a row unless some of the other validators were absent. It's not just deterministic -- it's round-robin.
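To illustrate, here's one deterministic weighted round-robin along those lines (my simplification, not necessarily the exact Tendermint rule):

```python
# Each round every validator accumulates its voting power; the highest
# accumulator proposes and is debited the total power, so equal-power
# validators alternate and no one proposes twice in a row.
def proposers(powers: dict, rounds: int) -> list:
    accum = {v: 0 for v in powers}
    total = sum(powers.values())
    out = []
    for _ in range(rounds):
        for v in accum:
            accum[v] += powers[v]
        # ties break alphabetically for determinism
        proposer = max(sorted(accum), key=lambda v: accum[v])
        accum[proposer] -= total
        out.append(proposer)
    return out

# Four equal validators: nobody proposes twice in a row.
seq = proposers({"A": 1, "B": 1, "C": 1, "D": 1}, 8)
assert all(seq[i] != seq[i + 1] for i in range(len(seq) - 1))
```

Unequal powers still come out proportional: a validator with 3/4 of the power proposes 3 blocks out of 4.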
That's not at all what I said, but this is besides the point.
>It's not trivial for a traitor to pretend to be all of the generals. Say there are validators A, B, C, D, and they have equal voting power. Given this configuration, it's impossible for A to be the proposer for 2 blocks in a row unless some of the other validators were absent. It's not just deterministic -- it's round-robin.
And your whitepaper allows for block creation with absent validators, which makes it trivial because you have total control over the seed determining the next set of validators. Even if you changed that, it would still be trivial, because you would only need last input into the seed one time to grind out a signature such that you are the exclusive validator.
You continue to assert the attack isn't feasible, but you aren't actually paying attention to the reason I have now given you twice.
"It's not [feasible] for a traitor to pretend to be all the generals <no explanation>, say there are [arbitrary example]. Given this configuration it's infeasible for the attack to be performed <no explanation>. <Repetition of facts that are part of the reason stake grinding works including it's determininsticness>."
If you aren't aware of what stake grinding allows, it is an attack where you calculate the deterministic results of many many signatures (seeds) and finds signatures that result in you winning, allowing you to perform the attack once again.
There are many possible algorithms for determining this seed, some of which do solve the stake grinding problem. I personally like the NXT algo; its basic approach in simplified form is:
seed(genesis) = 0
seed(block) = hash(seed(block.parent) + block.proposer.address)
No grind capabilities at all. No matter how many times you try, you will always generate the exact same result. The randomness basically comes from absentee proposers, so the only way to influence the outcome (not proposing) is costly as you sacrifice your reward and all other influence over the block beyond signing / not signing it.
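A direct transcription of that rule (the hash function and byte encoding are my choices):

```python
import hashlib

# NXT-style seed chain as quoted above: the seed depends only on the
# parent seed and the proposer's address, so there is nothing variable
# for the proposer to grind.
def next_seed(parent_seed: bytes, proposer_address: bytes) -> bytes:
    return hashlib.sha256(parent_seed + proposer_address).digest()

genesis_seed = b"\x00" * 32
# Re-deriving the seed any number of times gives the identical result.
assert next_seed(genesis_seed, b"validator-A") == next_seed(genesis_seed, b"validator-A")
```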
That is trivially grindable. If the seed is your address then you create an additional address such that the next winning address is yours.
That notion makes sense in something like PeerCoin or other preexisting PoS designs, but I don't see how that phrase applies to Tendermint. There is no "sampling" -- all validators must sign every block.
Well now you're defining a new system. You can't expect me to explain the security of an undefined system, so I am sticking to the system defined in your paper which states that not all validators need to sign every block.
This sounds very bogus. I'd like to see some algorithm details. Most likely it's just an excuse for another pyramid scheme.
What would it take, and how would you go about building it? This seems like what Tendermint is doing?