Hacker News
Tendermint – The completely decentralized consensus engine (tendermint.com)
123 points by strzalek on Mar 3, 2015 | hide | past | web | favorite | 61 comments



The obvious question for anything that claims to solve Byzantine consensus is: what about Sybil attacks? If you rely on 2/3 of all nodes to be honest, but you open up the network for anyone to join, what stops me from creating a million fake "nodes" and outvoting everyone else?

So far I've only skimmed the white paper, but I didn't see any response to this problem. (If the answer is "good guys can make lots of fake nodes too" then you're creating a computational arms race that will inevitably lead to just as much wasted effort as proof-of-work.)

There's also the minor practical issue that every node has to do an amount of computation and network traffic that's linear in the total number of other nodes, meaning the total workload grows quadratically with the size of the network.
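To make the scaling concern concrete, here's a back-of-the-envelope sketch (my own illustration, not from the paper) of how all-to-all vote traffic grows:

```python
# Toy model: in an all-to-all voting round, each of n validators sends its
# vote to every other validator, so per-node work is linear in n and the
# total message count grows quadratically.

def messages_per_round(n: int) -> int:
    """Each of n nodes sends a vote to the other n - 1 nodes."""
    return n * (n - 1)

for n in (10, 100, 1000):
    # 10x more nodes -> roughly 100x more total messages
    print(n, messages_per_round(n))
```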


> The obvious question for anything that claims to solve Byzantine consensus is: what about Sybil attacks?

Not really answering your question (which is answered elsewhere in the thread), but these things are independent in an interesting way. Sybil attacks, in the simplest sense, depend on running consensus across a group where you don't have trustworthy membership information. If you do, then it turns into an authentication problem, which is different. Sybil attacks are an attack on the protocol that controls jury membership (or authenticates jury members), rather than the consensus protocol itself.


It's a proof of stake system, so "a validator has voting power equal to the amount of the bonded coins."


I see. I'm still skeptical about how robustly that will turn out to work in practice. Unless the majority of coins are locked up by validators, it seems like there's a constant risk of an attacker amassing a large number of coins and suddenly turning them into voting power with no warning.

Upon re-reading the paper, I also noticed that it says "In the beginning of the Propose step the [deterministically chosen] designated proposer for that round broadcasts a proposal to its peers via gossip" but it's not obvious to me what happens if that proposer fails to participate. Is the algorithm allowed to skip to the next round? If so, is there some guarantee that the gaps can't later be filled in, and the history retroactively modified? (The DLS algorithm solves this problem by requiring all nodes to propose a value.)


It's the same idea as peercoin and others. Once you have amassed enough coins to control the vote, you're now financially dis-incentivized from voting in such a way as to undermine trust in the blockchain.


Yes, the algorithm allows for skipping absent or Byzantine validators. It doesn't skip the height, it just progresses to the next round. In the end it's about getting more than 2/3 of commit votes.

You can prevote or precommit more than once per height (on different rounds), but a commit for a height must be the last signature for that height.
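The "more than 2/3 of commit votes" condition can be sketched as a simple voting-power tally (hypothetical names and structure, not Tendermint's actual code):

```python
# Sketch of the ">2/3 of total voting power" commit check described above.

def block_committed(votes: dict[str, int], total_power: int) -> bool:
    """votes maps validator -> voting power, for validators that signed a
    commit for this block.  Commit requires strictly more than 2/3."""
    committed_power = sum(votes.values())
    return 3 * committed_power > 2 * total_power

# Four equal-power validators; 3 of 4 committed -> 3/4 > 2/3, so committed.
print(block_committed({"A": 1, "B": 1, "C": 1}, total_power=4))  # True
print(block_committed({"A": 1, "B": 1}, total_power=4))          # False
```

Using the integer comparison `3 * committed > 2 * total` avoids floating-point edge cases at exactly 2/3.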

The sudden mass bonding is a problem for light clients doing a certain type of desirable SPV. We're considering options to mitigate this issue. One such option is to throttle the bonding and unbonding at the protocol level, or to introduce a second class of coin for validating and controlling issuance of it.


One more question/critique:

Section 6.5 purports to give a game-theoretic justification for why a node should include as many valid signatures from its peers as possible when proposing a block, for fear of retaliation. But nothing stops me from dividing my voting share among, say, a thousand nodes which appear to be unrelated.

If one of those nodes decides to "defect" (steal a larger-than-intended share of fees by excluding some signatures) then the expected time before it is selected again can be made arbitrarily large, by making the node's voting share arbitrarily small. If that expected time is significantly larger than the unbonding period, then there's no way to retaliate, is there? It can just unbond and then transfer its coins to a "new" node.

EDIT: I originally typo'd "arbitrarily large" instead of "arbitrarily small" above, but hopefully the meaning was clear.


If the blockmaker excludes signatures, then a larger portion of the fees get burned, and the blockmaker gets a smaller reward.
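A toy model of that incentive (my own numbers and function, not from the whitepaper): if the proposer's reward scales with the fraction of signatures it includes and the shortfall is burned, excluding signatures directly costs the proposer.

```python
# Hypothetical fee-burning model: the proposer keeps fees in proportion to
# the fraction of validator signatures included; the rest is burned.

def proposer_reward(total_fees: float, included: int, total_validators: int) -> float:
    return total_fees * included / total_validators

fees = 100.0
print(proposer_reward(fees, included=100, total_validators=100))  # 100.0
print(proposer_reward(fees, included=67, total_validators=100))   # 67.0 (33 burned)
```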


+1 Yes, you raise a valid point. It's a weakness that must be addressed. There need to be additional assumptions in the model, e.g. the respective voting powers should be on the same order of magnitude, and the number of validators should only be as large as the throughput (in blocks/s).

More bandwidth && less latency && more CPU help.


Sybil resistance comes in two forms in Tendermint.

1. You assume the initial distribution of tokens is trusted. If someone presents you with a consensus whose history doesn't convince you that the initial distribution isn't full of sybils, distrust it.

2. Assuming a large trusted initial distribution, operationally you have to believe that fewer than 1/3 of the validator nodes want to double spend / present an alternate reality to you.


For point 1, sure: Bitcoin does the same thing, and "blesses" a particular initial block as the starting point of the chain by hard-coding it into the client. No reason Tendermint wouldn't do the same thing.

I don't see how your point 2 solves anything, though; it's just failing to acknowledge the problem. Why is it reasonable for me to think that a supermajority of the nodes are honest? If validators are supposed to be computationally cheap to run, compared to bitcoin miners, then what stops an attacker from running lots of them to get a larger voting share?


Validators are expensive to run. You need to put down a large security deposit in order to be a validator, and being a cheating validator will cost you the entire deposit.


In the Bitcoin/cryptocurrency world, there's a rarely-voiced but firmly believed assumption that being annoying and expensive to attack makes you secure. So the response that you are saying doesn't solve anything is what is believed to be the solution.

Personally, I agree with you.


The list of nodes will be hard-coded at the genesis with known meatspace identities. It can only be updated by a transaction signed by >2/3 of the existing nodes.

I am writing the "validators contract" for tendermint right now. It will have this functionality.


In this initial allocation of nodes, will one node equal one individual (with a verifiable identity)?

If it does, I expect that it would reduce the likelihood of a Sybil-styled attack during the early days of the network -- with the risk of introducing a different vector, direct coercion of the node owners.

Is the plan to make the hard-coded genesis nodes as geographically diverse as possible?


;) Where are you at? We're in the Bay Area.


If you need someone in Seattle, hit me up.


Hi HackerNews!

We're using TM consensus to create something more interesting than just a crypto "currency", so the website and GitHub haven't been updated in a while. But the algorithm is still sound (as far as we know) and we're looking forward to launching soon!


I have to agree with the whitepaper, "Tendermint is awesome. The future is now." It seems like a marked improvement over proof-of-work style blockchains.


Is this another fork of Ripple?

https://wiki.ripple.com/Forks


No, it's a completely different algorithm based on proof of stake.


I'm a member of ConsensusResearch group https://github.com/ConsensusResearch . We made some simulation tools around (Nxt-like) proof-of-stake consensus algo & published some papers around them.

One of the possible improvements to proof-of-stake we're willing to investigate now is a proof-of-stake + proof-of-activity hybrid, where proof-of-activity is implemented as described in the PoW+PoA paper http://eprint.iacr.org/2014/452.pdf. From my personal point of view Tendermint is also a kind of PoS+PoA hybrid, but implemented in another way. Do you Tendermint folks have simulation tools, or maybe even stricter models, around your consensus algo to play with?


Interesting, at Bitsapphire we've been working on PBFT and economic simulations https://github.com/bitsapphire/hyperledger-simulation.

What's the best way to get in touch? Would love to work together on some things.


Interesting. What does "hyperledger" mean, though?

My mail is kushtech [at] yahoo [dot] com.


"We believe that the future is comprised of many interconnected-but-sovereign blockchains that serve different purposes for a variety of communities."

They're building a securities market?


We're building the groundwork for it, and may tackle the securities market or partner up, but for now our focus is on building out the foundation -- proper consensus logic and great blockchain tooling.


Securities markets have existed on the internet since 1971, and apparently don't need consensus logic and block chain tooling to work, so I doubt the technical details will be the challenging part. Perhaps consulting with the federal regulatory bodies, or the billion-dollar gorillas in the room, might help ;)


I am a bit skeptical of this deposit method.

A large stakeholder could easily control perhaps 10% of all coins. I think that might be enough to cause a fork. The reward is huge, the risk low (declare a hack and go bankrupt).


A fork in general results in evidence of double signing by at least 1/3 of the voting power, so fork attacks (and thus double-spend attacks) are expensive to launch (depending on the market cap and existing owners of the blockchain). After slashing the double signers, the remaining validators can regroup as in a hard-fork and carry on.

It would be as if launching a double-spend attack in Bitcoin resulted in the destruction of the attacker's mining equipment (and whatever other fixed cost investments that were spent in setting up the mining equipment). This isn't actually possible with mining because mining power is anonymous. Pubkey identities and traditional quorum-based byzantine consensus allows for this kind of antifragility that Bitcoin cannot have.
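The "evidence of double signing" idea can be sketched as follows (hypothetical data structures; a real implementation would also verify the cryptographic signatures on both votes):

```python
# Double-sign evidence: two commit votes by the same validator at the same
# height for different blocks.  Anyone holding both votes can submit them
# to slash the offender's bond.

from dataclasses import dataclass

@dataclass(frozen=True)
class Vote:
    validator: str
    height: int
    block_hash: str

def is_double_sign_evidence(a: Vote, b: Vote) -> bool:
    return (a.validator == b.validator
            and a.height == b.height
            and a.block_hash != b.block_hash)

v1 = Vote("val1", 42, "abc")
v2 = Vote("val1", 42, "def")
print(is_double_sign_evidence(v1, v2))  # True -> slash val1's bond
```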


At least the claims about Bitcoin sound like bullshit

"Security difficult to quantify — depends on extrinsic factors"

Nope, the security is very easy to quantify in Bitcoin - you know that there is a certain amount of processing power behind a single block. As your transaction goes into the blockchain, you can be pretty damn sure that it won't be reversed, once enough computing power is behind it.

"Blockchain forks often — difficult to build applications upon"

Nope, bitcoin blockchain forks very rarely. Where is this claim pulled from?


> in Bitcoin you know that there is certain amount of processing power behind a single block

But how much did that processing cost in BTC? People come up with widely varying estimates. Cost of security matters because ultimately you want to denominate risk in the same currency that you're transacting.

> Blockchain forks often

There are minor forks (usually called orphans) every few days: https://blockchain.info/charts/n-orphaned-blocks?timespan=30... Orphaning requires Bitcoin software to include complex chain reorganization code; a blockchain where orphans are very rare could omit such code.


I have to admit that the "difficult to build applications upon" is not the best selling point to put up there. But I stand by the security being difficult to quantify in Bitcoin because the mining reward (plus fees) is necessarily small in comparison to the transactions that are being moved (e.g. 25 bitcoins today vs potentially millions of bitcoins moved per block). The problem is that with a bit of coordination from the miners, forking is easy and easily profitable.


> As your transaction goes into the blockchain, you can be pretty damn sure that won't be reversed, once enough computing power is behind it.

So, this by itself is not an argument. That amount of computing power was produced once, so you can be pretty damn sure that if the incentives line up it will be produced again just on a different history.

I have a few slides detailing the rather scary game theory behind PoW here:

http://vitalik.ca/files/stanford_presentation.pdf

Particularly note the "Low incentive to protect against hacks". If the major Bitcoin mining farms get hacked, raided by men wearing ski masks and black suits, etc in order to double spend, large 51% attacks become possible, and a coordinated global attack could earn >$10-50m via double spends, and even more with shorting at 10x leverage. But from the point of view of each individual mining farm, they only lose at most 10-50 blocks' worth of rewards multiplied by their hashpower - a total private incentive of ~$100,000. In fact, if everyone else gets attacked, it's in your interest to ensure that you get attacked too!

Also, the "bribe attack" variants all theoretically cost under $100,000. In fact, the largest countervailing incentive factor against all of those is the idea that each mining farm has a decently sized percentage of mining power and thus captures a substantial portion of the "public-good" incentive of the network not failing. Now, you can argue that Bitcoin has not succumbed to these and works fine in practice, but (i) right now, Bitcoin is still tiny, and (ii) if that's true, then naive proof of stake, with its supposedly fatal nothing-at-stake problems, will also work fine in practice.

> Nope, bitcoin blockchain forks very rarely. Where is this claim pulled from?

It forks back one block ~1-2% of the time and two blocks ~0.01-0.04% of the time. You can determine this from simple math plus network latency statistics, or check blockchain.info: https://blockchain.info/charts/n-orphaned-blocks?showDataPoi...
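The "simple math plus network latency statistics" can be sketched with a rough Poisson model (the ~10 s propagation delay here is my own illustrative assumption):

```python
# Back-of-the-envelope fork-rate model: with Poisson block production, a
# one-block fork happens roughly when a second block is found within one
# network propagation delay of the first.

import math

def fork_rate(block_interval_s: float, propagation_delay_s: float) -> float:
    """P(another block within the propagation window) = 1 - exp(-d/T)."""
    return 1 - math.exp(-propagation_delay_s / block_interval_s)

# ~10 s propagation vs the 600 s Bitcoin block interval:
print(round(fork_rate(600, 10), 3))  # about 0.017, in line with the ~1-2% figure
```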

This might not seem like much, but if you are looking at the blockchain as an application platform then users are really not used to the idea of finality suddenly becoming non-final 1% of the time.


Can anybody elaborate on the destruction of coins? Sounds dangerous...

"If the validator causes the blockchain to fork while its coins are locked in bond, all of its coins are destroyed. This means that as long as you sync your blockchain with the network periodically you never have to trust a central checkpoint."


Only coins owned by attempted cheaters get destroyed.


What about accidental cheaters?


I don't know how you'd accidentally sign two conflicting forks at the same block height, but it would suck to lose your whole stake due to such a bug.


I think something like that would happen about an order of magnitude less often than accidentally reusing k values or publishing your private key to the internet :)


Why, faulty software could make it easy.


Critique follows

>They propose a modification to the Bitcoin protocol such that it can tolerate colluding groups that control up to 1/4 of the mining power – less than the previously assumed bound of 1/2 of Byzantine mining power which requires an honest mining majority that follows the prescribed protocol.

Selfish mining is tolerable but not ideal. It is a bit misleading to claim that 25% block withholding attacks are as critical as 51% attacks.

>Users keep an account in the system, where the user’s account is identified by the user’s public key or address.

Promoting address reuse isn't a good idea.

>When a validator signs duplicitously, a short evidence transaction can be generated by anyone with the two conflicting commit-vote signatures

The assumption that an attacker will tell you he is attacking isn't a very strong one.

>A user can avoid long-range attacks by syncing their blockchain periodically within the bounds of the unbonding period.

This security assumption means that no one can safely join the network and perform an initial sync. It isn't explained how they prevent long-range attacks, but it can be assumed that they do it through disallowing reorgs.

Disallowing reorgs allows another consensus breaking attack. The entire purpose of reorgs is to maintain consensus, so if an attacker can create a blockchain of length bond-period, they can trick you so you don't achieve consensus with the "main" blockchain.

Another thing that is worrying about this whole idea is that 2/3 of validators must come to consensus on what block to use. If you choose a block and broadcast your signature, you cannot sign again. So how do the validators solve the Byzantine Generals Problem? How do they determine which block they will sign? They could go with the first block they see, but that leaves you open to sybil attackers running many nodes and sending validators different blocks in order to trick them into splitting their vote. These are the fundamental problems that consensus systems are meant to solve; a consensus system shouldn't need a second consensus system to support itself.

>There are three types of votes: a prevote, a precommit and a commit

It seems the paper is attempting to solve the previously mentioned problem; however, the most fundamental problem with this is explained by the Byzantine Generals Problem. The problem essentially involves you sending a message of "I'm ready to attack" back and forth, but because you cannot know they received the message, you cannot attack. If they reply with an "okay, we got the message" message, they don't know that you received the message, so they don't know whether to attack. Whether there is 1 back and forth, 3 as in a "prevote to attack", a "precommit to attack" and a "commit to attack", or 100 back and forths, you aren't achieving consensus, you're skirting the issue.

The next few sections explain the mechanism, but not how it actually solves the Byzantine Generals Problem, instead relying on additional assumptions based on things that themselves require consensus, including "If there are less than 1/3 in Byzantine voting power and at least one good validator decides on a block B, then no good validator will decide on any block other than B".

>Each round has a designated proposer chosen in round-robin fashion such that validators are chosen with frequency in proportion to their voting power

If the pseudo-random seed for the next round validators is information that can be decided by "previous" (or not so previous) validators, a validator can choose themselves. With a high number of validators, it may take a few thousand blocks for, say, a 3% stakeholder to randomly be selected for a sufficient number of votes to choose the next seed, but once they have ground[1] their first block and made themselves a majority of validators, they can repeat over and over and control the blockchain.

> Say that some good validators locked on block B and round R, and some good validators locked on block B0 at round R0, where R < R0. In this case the proof-of-lock from R0 included in a proposal by a good validator will eventually unlock those validators locked on R allowing the consensus process to continue.

This "solution" helps prevent 33% deadlocks, I agree, however now you can seed your stakegrinding attack with an incredibly small validator/voter %. Depending on the parameters of the system, you may only need to become a validator once to control the entire blockchain forever.

The system of voters has somewhat obscured it, but stake grinding[1] is still a major problem, in fact, it is probably the simplest and cheapest attack vector on the system defined in this paper.

In conclusion, this system has severe sybil problems, consensus problems, centralization problems (in that one person could control the entire blockchain cheaply), structure problems (address reuse) and usability/security assumption problems (no one can safely sync with the network after the first bonding period is up?).

[1]Stake grinding is the process of changing data in a block to create a large number of new seeds (sometimes billions or trillions) until the seed designates you as the "winner", or in this case majority of validators.


I agree with you that the random number generator in tendermint isn't good enough. I am going to add one I have put in a different cryptocurrency before. Each validator gives the hash of a secret, and later gives their secret. All the secrets are XORed to find the random seed.
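A minimal sketch of that commit-reveal scheme (illustrative names only; note that in practice the last revealer can bias the result by withholding their secret, which needs separate handling):

```python
# Commit-reveal randomness: each validator first publishes hash(secret),
# later reveals the secret; the XOR of all revealed secrets is the shared
# random seed.  Reveals are checked against the earlier commitments.

import hashlib
from functools import reduce
from secrets import token_bytes

def commit(secret: bytes) -> str:
    return hashlib.sha256(secret).hexdigest()

# Phase 1: each validator picks a secret and publishes its commitment.
secrets_by_val = {v: token_bytes(32) for v in ("A", "B", "C")}
commitments = {v: commit(s) for v, s in secrets_by_val.items()}

# Phase 2: each reveal must match the commitment published earlier.
for v, s in secrets_by_val.items():
    assert commit(s) == commitments[v], f"{v} revealed a different secret"

# The shared seed is the XOR of all revealed secrets.
seed = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)),
              secrets_by_val.values())
print(seed.hex())
```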

Tendermint consensus is based off the solution to byzantine generals for partial synchronicity explained in this paper: http://groups.csail.mit.edu/tds/papers/Lynch/jacm88.pdf


>I agree with you that the random number generator in tendermint isn't good enough. I am going to add one I have put in a different cryptocurrency before. Each validator gives the hash of a secret, and later gives their secret. All the secrets are XORed to find the random seed.

Tendermint doesn't have a random number generator at all. You cannot have consensus on random numbers, or else they wouldn't be random. It is a fundamental problem that you are determining the next block's winner based on numbers that can be influenced by previous winners.


Two examples of multiparty games for choosing random numbers:

1) You and I each write either "1" or "0" onto a piece of paper, and simultaneously reveal to each other.

If the sum is even, I win. If the sum is odd, you win.

This simple 2-person game randomly generates 1 bit. It can be extended to arbitrary numbers of people by replacing addition with XOR.

2) N people each vote either "0" or "1". They reveal their votes simultaneously. The minority participants are rewarded. The median of the votes is the next random bit.

I prefer (1) over (2).
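Game (1) as code, in its n-player XOR extension: the joint bit is uniformly random as long as at least one player's bit is.

```python
# n-player version of game (1): everyone commits to a bit, all bits are
# revealed, and the XOR of the bits is the jointly generated random bit.

from functools import reduce
from operator import xor

def joint_bit(bits: list[int]) -> int:
    return reduce(xor, bits)

print(joint_bit([1, 0, 1]))  # 0
print(joint_bit([1, 1, 1]))  # 1
```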


Yeah, these work great out of consensus systems. Please consider that you're working on a consensus system though.

"Simultaneous" has no meaning. There are no means of instant communication, there is latency. There also is no proof of time without a consensus system.

So what is the proof that I committed my 0 or 1 value before they revealed? Well, you could trust a central authority to maintain that timestamping, or you could even use Bitcoin for your timestamping. In fact, your PoS system could probably work if it piggybacked on Bitcoin completely.

If that's too abstract for you, consider that I say I'm ready to commit, everyone sends me their commits, and I construct a block with all their commits plus a ton of other possible commits; then, once I have found a seed that allows me to win, I control the network permanently.


After reflecting more on the difficulties of a random number generator in a consensus system, we decided to use the round-robin method to choose the next leader from the validators.

Thank you for the important issues you raised.


> Promoting address reuse isn't a good idea.

Depends on the application and the purpose of the blockchain, but the paper isn't explicitly promoting address reuse.

> The assumption that an attacker will tell you he is attacking isn't a very strong one.

Users don't consider blocks committed until they've seen +2/3 of commit signatures, so a fork implies that the victim has evidence of double-signing.
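The arithmetic behind that claim, sketched out (my own worked example): if two conflicting blocks at the same height each carry more than 2/3 of the voting power, the overlap between the two signer sets must exceed 1/3, so someone double-signed.

```python
# Why a fork implies double-signing: by inclusion-exclusion, two signer
# sets covering fractions a and b of the voting power must overlap by at
# least a + b - 1.

def min_overlap(frac_a: float, frac_b: float) -> float:
    """Minimum possible overlap between two sets of voting power."""
    return max(0.0, frac_a + frac_b - 1.0)

# Two conflicting commits, each just over 2/3 of the voting power:
print(round(min_overlap(0.67, 0.67), 2))  # 0.34 -> more than 1/3 double-signed
```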

> This security assumption means that no one can safely join the network and perform an initial sync.

Not sure what you mean -- maybe an analogy will clarify. Is this a problem that isn't present in Bitcoin? How can a brand new Bitcoin user securely join the network for the first time?

We don't prevent long-range attacks. We assume that they will happen, and the clients should be coded accordingly. For example, if my phone wallet hasn't synced with the network in a while (longer than the unbonding period) then it should warn me the next time I connect, and perhaps ask me to validate a fingerprint of the blockchain via outside channels.

I'm aware of how checkpoints prevent reorgs in other protocols like PeerCoin. That's not how Tendermint works.

> So how do the validators solve the Byzantine Generals Problem?

Perhaps a prerequisite to understanding the Tendermint paper is to first understand the DLS algorithm. It works in a similar but different way. In short it's a round-based 2-phase locking system (prevotes and precommits) with a 3rd phase of signing on top to commit. Without the round-based 2-phase locking system, the protocol would have the problem that you mentioned. But assuming that there are less than 1/3 of byzantine voting power, the validators will come to consensus before they start committing.

> stakegrinding attack with an incredibly small validator/voter %

Stakegrinding is less of an issue in Tendermint because of the deterministic round-robin algo for choosing the proposer, and the fact that the remaining validators need to commit. There are subtle attacks involving the timing-out of validators, but there are also workarounds. These attacks are probably not what you have in mind when you mention stakegrinding.

TLDR: Check out the DLS algorithm first, "Consensus in the Presence of Partial Synchrony (1988)", and then let's continue the discussion. Also check out forum.tendermint.com for some other Q&As.


>Depends on the application and the purpose of the blockchain, but the paper isn't explicitly promoting address reuse.

It defines a system in which addresses are reused.

>Users don't consider blocks committed until they've seen +2/3 of commit signatures, so a fork implies that the victim has evidence of double-signing.

No it doesn't, unless you have a severe misunderstanding of information theory and think everyone knows all information everywhere (including attacking forks not published). Your security assumption shouldn't be that the attacker is dumb enough to announce that he's attacking and give up his money by broadcasting forks.

>Not sure what you mean -- Maybe an analogy will clarify. Is this a problem that isn't present in Bitcoin? How can a brand new Bitcoin user securely join the network for the first time?

It worries me that you can't answer this and you're working on a cryptocurrency. They can achieve consensus because they can be supplied a proof by any of their peers of the transaction-output database. Not allowing reorgs allows a peer to give you a proof that prevents you from achieving consensus.

>For example, if my phone wallet hasn't synced with the network in a while (longer than the unbonding period) then it should warn me the next time I connect, and perhaps ask me to validate a fingerprint of the blockchain via outside channels.

The purpose of a blockchain is to achieve consensus without trusting anyone. If you trust that group, you probably would get along just as well with them managing a centralized database of transactions for you.

>I'm aware of how checkpoints prevent reorgs in other protocols like PeerCoin. That's not how Tendermint works.

I never mentioned checkpoints. The paper basically said it wouldn't allow large reorgs when it said it could prevent long range attacks and ignore the fork.

>Stakegrinding is less of an issue in Tendermint because of the deterministic round-robin algo for choosing the proposer, and the fact that the remaining validators need to commit.

The entire reason stake grinding is possible is because the selection of the next block's winner is deterministic. The previous winners can change the seed over and over again until they win.

>and the fact that the remaining validators need to commit.

This is not relevant; you can wait until the end or ignore their validations completely.

>Check out the DLR algorithm first, "Consensus in the Presence of Partial Synchrony (1988)"

This paper is fine, but it assumes that fewer than 1/3 of the generals are traitors. Your system makes it trivial for a traitor to pretend to be all of the generals.


> The purpose of a blockchain is to achieve consensus without trusting anyone. If you trust that group, you probably would get along just as well with them managing a centralized database of transactions for you.

See https://blog.ethereum.org/2014/11/25/proof-stake-learned-lov...

Specifically:

> This security assumption, the idea of “getting a block hash from a friend”, may seem unrigorous to many; Bitcoin developers often make the point that if the solution to long-range attacks is some alternative deciding mechanism X, then the security of the blockchain ultimately depends on X, and so the algorithm is in reality no more secure than using X directly – implying that most X, including our social-consensus-driven approach, are insecure.

> However, this logic ignores why consensus algorithms exist in the first place. Consensus is a social process, and human beings are fairly good at engaging in consensus on our own without any help from algorithms; perhaps the best example is the Rai stones, where a tribe in Yap essentially maintained a blockchain recording changes to the ownership of stones (used as a Bitcoin-like zero-intrinsic-value asset) as part of its collective memory. The reason why consensus algorithms are needed is, quite simply, because humans do not have infinite computational power, and prefer to rely on software agents to maintain consensus for us. Software agents are very smart, in the sense that they can maintain consensus on extremely large states with extremely complex rulesets with perfect precision, but they are also very ignorant, in the sense that they have very little social information, and the challenge of consensus algorithms is that of creating an algorithm that requires as little input of social information as possible.


Sure, the addresses for the validators should be reused. The more they're used the better.

>> Is this a problem that isn't present in Bitcoin? How can a brand new Bitcoin user securely join the network for the first time?

> It worries me that you can't answer this and you're working on a cryptocurrency.

I'm not asking because I don't know the answer. (I know the answer.) I'm asking because your answer will help me clarify the answer for you using the framework and terminology that you're comfortable with. And in this space, each term means different things in different contexts. :)

It's not trivial for a traitor to pretend to be all of the generals. Say there are validators A, B, C, D, and they have equal voting power. Given this configuration, it's impossible for A to be the proposer for 2 blocks in a row unless some of the other validators were absent. It's not just deterministic -- it's round-robin.
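A minimal sketch of deterministic round-robin proposer selection (equal voting power assumed for simplicity; the real algorithm weights the rotation by voting power):

```python
# Round-robin proposer selection: with all validators present and equal
# power, the proposer simply rotates by height, so no validator can
# propose two blocks in a row.

def proposer(validators: list[str], height: int) -> str:
    return validators[height % len(validators)]

vals = ["A", "B", "C", "D"]
print([proposer(vals, h) for h in range(6)])  # ['A', 'B', 'C', 'D', 'A', 'B']
```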


>Sure, the addresses for the validators should be reused. The more it's used the better.

That's not at all what I said, but this is beside the point.

>It's not trivial for a traitor to pretend to be all of the generals. Say there are validators A, B, C, D, and they have equal voting power. Given this configuration, it's impossible for A to be the proposer for 2 blocks in a row unless some of the other validators were absent. It's not just deterministic -- it's round-robin.

And your whitepaper allows for block creation with absent validators, which makes it trivial, because you have total control over the seed determining the next set of validators. Even if you changed that, it would be trivial because you would only need to have last input into the seed one time to grind and generate a signature such that you are the exclusive validator.

You continue to assert the attack isn't feasible, but you aren't actually paying attention to the reason I have given you twice now.

"It's not [feasible] for a traitor to pretend to be all the generals <no explanation>, say there are [arbitrary example]. Given this configuration it's infeasible for the attack to be performed <no explanation>. <Repetition of facts that are part of the reason stake grinding works, including its determinism>."

If you aren't aware of what stake grinding allows, it is an attack where you calculate the deterministic results of many, many signatures (seeds) and find signatures that result in you winning, allowing you to perform the attack once again.


> you have total control over the seed determining the next set of valdators

There are many possible algorithms for determining this seed, some of which do solve the stake grinding problem. I personally like the NXT algo; its basic approach in simplified form is:

seed(genesis) = 0

seed(block) = hash(seed(block.parent) + block.proposer.address)

No grind capabilities at all. No matter how many times you try, you will always generate the exact same result. The randomness basically comes from absentee proposers, so the only way to influence the outcome (not proposing) is costly as you sacrifice your reward and all other influence over the block beyond signing / not signing it.
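The simplified NXT-style rule above, as runnable code (the hashing and byte-encoding details here are my own placeholders):

```python
# NXT-style seed chain: each block's seed depends only on the parent seed
# and the proposer's address, so a proposer has nothing to vary within a
# block -- the same inputs always produce the same seed.

import hashlib

def next_seed(parent_seed: bytes, proposer_address: str) -> bytes:
    return hashlib.sha256(parent_seed + proposer_address.encode()).digest()

seed = b"\x00" * 32  # seed(genesis) = 0
for proposer in ("addr1", "addr2", "addr1"):
    seed = next_seed(seed, proposer)
print(seed.hex())
```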


>seed(block) = hash(seed(block.parent) + block.proposer.address)

>No grind capabilities at all. No matter how many times you try, you will always generate the exact same result. The randomness basically comes from absentee proposers, so the only way to influence the outcome (not proposing) is costly as you sacrifice your reward and all other influence over the block beyond signing / not signing it.

That is trivially grindable. If the seed is your address then you create an additional address such that the next winning address is yours.
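The grinding objection, sketched under my own simplifying assumption that the winner is drawn from a candidate list using a seed derived from an attacker-chosen address: the attacker just retries fresh addresses until the seed favors them.

```python
# Grinding sketch: when the attacker controls an input to the seed (here,
# a freshly generated address), they can retry until the derived seed
# selects an attacker-controlled candidate as the next winner.

import hashlib
from secrets import token_hex
from typing import Optional

def next_seed(parent_seed: bytes, proposer_address: str) -> bytes:
    return hashlib.sha256(parent_seed + proposer_address.encode()).digest()

def winner(seed: bytes, candidates: list[str]) -> str:
    return candidates[int.from_bytes(seed, "big") % len(candidates)]

def grind(parent: bytes, candidates: list[str], tries: int = 10_000) -> Optional[str]:
    """Try fresh attacker addresses until one yields a winning seed."""
    for _ in range(tries):
        addr = "attacker-" + token_hex(8)
        if winner(next_seed(parent, addr), candidates).startswith("attacker"):
            return addr
    return None

candidates = ["victim"] + [f"attacker-{i}" for i in range(9)]
print(grind(b"\x00" * 32, candidates))  # an address that grinds a win
```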


> control over the seed determining the next set of validators

That notion makes sense in something like PeerCoin or other preexisting PoS designs, but I don't see how that phrase applies to Tendermint. There is no "sampling" -- all validators must sign every block.


I think you underemphasize this point. My Slasher algo ( https://blog.ethereum.org/2014/01/15/slasher-a-punitive-proo... ) from way back then does use sampling; Tendermint does not, and thus gains a whole bunch of robustness properties at some cost to efficiency.


>all validators must sign every block.

Well now you're defining a new system. You can't expect me to explain the security of an undefined system, so I am sticking to the system defined in your paper which states that not all validators need to sign every block.


The whitepaper says that +2/3 of the entire validator set (in voting power) must sign every block. In the ideal scenario (in other words, in the stable state) every validator does sign every block.


Yes, it is pointless for me to argue about an undefined system. If you mean to say that 67% of the validator set must sign every block, then you shouldn't argue with the premise that 100% must sign every block.


It would be interesting to see a more well defined attack from you.


I've explained stake grinding attacks. Your response was redefining the system opening a new attack vector.


If the validator causes the blockchain to fork while its coins are locked in bond, all of its coins are destroyed. This means that as long as you sync your blockchain with the network periodically you never have to trust a central checkpoint.

This sounds very bogus. I'd like to see some algorithm details. Most likely it's just an excuse for another pyramid scheme.


Say you want to create a cryptocurrency, but you want miners spending their electricity and computing power on something productive, or on some transferable state which can be consumed by another crowd of people.

What would it take, and how would you go about building it? This seems like what Tendermint is doing?



