Stellar Consensus Protocol: Proof and Code (stellar.org)
274 points by joyce on Apr 8, 2015 | 93 comments



I'm finding the graphic novel explaining federated consensus to be really entertaining: https://www.stellar.org/stories/adventures-in-galactic-conse...


Joyce from Stellar here. Thanks! As we were working on the white paper, we realized how difficult it was to explain complex concepts like federated Byzantine agreement.

We know it’s part of our jobs to make these ideas understandable. That way more people can join the dialogue and think of ways this infrastructure can be used to build services for their communities, which may be really far from the nearest computer science program. So we decided to add a lighter approach in hopes of making it fun for people to learn.


Hi Joyce, thanks for chiming in! I'm glad Stellar is committed to elucidating the ideas behind the technology, and this is a thoughtful and creative approach. Reading the graphic novel first helped me understand the idea behind quorum slices while reading the paper. Can't wait to see more of this!


The hard part is often explaining something. And if you want to change how money works, you need to be very good at explaining.


It's actually AMAZING! I've been pretty deeply embedded in the crypto community for a while and have spent some amount of time with the Stellar folk, and after reading the graphic novel I understand Stellar (and even the blockchain) significantly better! At least in a way that requires a lot less cognitive overhead to mentally tinker with.


I'm excited for the ideas here and have been following Stellar.

But I'm hugely disappointed to see that they went with C and C++ for their new core codebase. This is the kind of code that needs strong safety, security, and correctness guarantees, and here in 2015 we have several mature languages with better safety & correctness guarantees.

C# and Java are both mature and mainstream, and either would have been a sane choice. Go is slightly less mature but also a safe and conservative choice.

(I personally love where Rust is going too, but I could excuse people for not choosing it yet due to immaturity.)


When your software aspires to move billions of dollars of value, it would ideally be written in Ada.

That said, I agree that C# and Java are good options.

What's hilarious is all of the Bitcoin startups that are running on node.js and mongodb. Would you put your kids on a flight if you knew the control system was written with javascript and mongodb? Yikes.


You are partially right. It is completely possible to write systems in JS/Mongo that are as robust as ones in Ada/Oracle|DB2|SQL Server. You just have to know what you are doing. There is no magic in Ada, Oracle, etc.

Node and Mongo are moving hundreds of billions daily in HFT shops.


>There is no magic in Ada, Oracle, etc.

There is no magic but there are strong constraints that shift reliance for correctness from fallible human programmers and peripheral tools to the type system and compiler, providing better integrated, systematic assurance.


Actually, having met a few of them, a surprising number are using PHP, which made me quite angry lol.


C/C++ are going to be tied to some runtime libraries, but that doesn't seem like as big an externality as being tied to a particular version of an entire VM. You may prevent certain classes of programming error with a memory-managed language, but at the cost of fine-grained control of your memory, and when it comes to security software in general and key management in particular, you expose yourself to a whole other set of issues with managed memory. C/C++ are also highly portable, arguably more so than Java (and certainly more so than C#). They seem like the safest choice in many ways.


FWIW, Go has no VM, and neither does Rust. They compile real binaries.

And Rust gives you fine-grained memory control without sacrificing safety -- unlike the others it has no garbage collector, and instead proves allocation safety at compile time. In Rust you can know for sure exactly where your key has been copied and when it will get deallocated (to the extent that any program running in virtual memory on normal hardware can know that).
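
To make the fine-grained control point concrete, here is a minimal Rust sketch (my own illustration, not Stellar code; a real implementation would need volatile writes, since a compiler may elide the plain zeroing loop below):

    struct SecretKey([u8; 32]);

    impl Drop for SecretKey {
        fn drop(&mut self) {
            // Deterministic deallocation point: no GC decides when this runs.
            // (A compiler can elide this plain loop; real code would use volatile writes.)
            for b in self.0.iter_mut() {
                *b = 0;
            }
        }
    }

    fn use_key(k: SecretKey) {
        println!("first byte: {}", k.0[0]);
    } // `k` is dropped (and zeroed) exactly here

    fn main() {
        let key = SecretKey([42u8; 32]);
        // Moved, not implicitly copied, so there is exactly one live copy.
        use_key(key);
        // `key` can no longer be used here; the compiler rejects any further use.
    }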

Although I agree that these are the less mature choices, and it's reasonable to reject them for that reason.


Meh, Bitcoin seems to be doing ok with C++.

It's certainly had a few issues, though I haven't looked closely enough to know which are due to the lack of memory safety, etc: https://en.bitcoin.it/wiki/Common_Vulnerabilities_and_Exposu...


If a program is formally proven to be correct, it doesn't matter what language it's written in.


Agreed, but that's not what they did here. They didn't prove the program is correct, they proved the algorithm is correct. It remains to be seen how closely the program actually implements the algorithm in the paper.


Language choice can be important. If you do the common thing and build a distinct model of your program and prove it correct, your guarantees don't hold for the actual implementation. You need a second method to ensure equivalence between your model and the implementation; this obviously requires much more work and can vary greatly based on the tools and programming languages involved.


Yes--choose the language and development process that makes it easiest to reason about the desired properties of the program.

Example: seL4 is written in C (as in, written by hand by fallible humans--not compiled down from a higher-level language [1]). But, the C is written in such a way that it is feasible to prove its equivalence to a Haskell prototype which had previously been proven correct.

Yes, language choice can be important. But, the degree of its importance depends on what you're building with it and how you're using it.

[1] http://www.nicta.com.au/pub-download/full/7371


I guess you missed the /s at the end. Even a perfect program runs on top of buggy hardware, OS, libraries, VMs, etc.


I'm not sure I'd call Java for systems programming a "sane" choice, but ok!

I think with a very modern C++ approach and very careful coding you can rock out. Yeah, not everyone has this, but the losses from using Java are just so huge!


A consensus algorithm targeted for use in a globally distributed OLTP system is hardly "systems programming".


Shoveling data around at speeds that push the boundaries of hardware performance isn't systems programming?


Well, it's first and foremost a protocol. Feel officially invited to implement it in any language you like. "Language correctness guarantees" sounds to me like a term from MBA programs and business magazines and not something a REAL dev would EVER say.

I've been to at least 200 software conferences in my life and never heard speakers like Linus, Ken Thompson, RMS, Gordon Letwin, DHH, or Anders Hejlsberg mention "correctness guarantees".


"Correctness guarantees" is a whole bucket of things you certainly have heard of, like: bounds checking, integer overflow protection, or statically safe memory allocation.

I'm not talking about a whole-program correctness proof -- although those do exist too.
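
For example, a couple of those guarantees in a minimal Rust sketch (purely illustrative; Java and C# offer comparable checks with different syntax):

    fn main() {
        let balances = [100u64, 250, 75];

        // Bounds checking: an out-of-range access is caught rather than
        // silently reading adjacent memory.
        let idx = 5;
        match balances.get(idx) {
            Some(b) => println!("balance: {}", b),
            None => println!("index {} is out of range", idx),
        }

        // Integer overflow protection: checked arithmetic makes overflow an
        // explicit, handleable case instead of a silent wraparound.
        let a: u64 = u64::MAX;
        match a.checked_add(1) {
            Some(sum) => println!("sum: {}", sum),
            None => println!("overflow detected"),
        }
    }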


I'm getting very frustrated by this presentation; it's all "This is so amazing, it satisfies so many criteria, there's a big problem with financial institutions."

I can only read those lines so much before I get the feeling of being whitewashed. How does it work? Where is the data?

I'm reading the white paper now, but I felt compelled to post this comment after I read through yet another 10 paragraphs of exactly what I described above.

Something that takes on distributed consensus is a fantastically interesting project, this is so frustrating!!!


Did you read the whitepaper? It's much more technical. Link: https://www.stellar.org/papers/stellar-consensus-protocol.pd...



Interesting that Graydon Hoare [1], Rust's (initial) creator is one of the core developers.

1. https://github.com/graydon


Graydon also created monotone, which was a big influence on git.

http://www.monotone.ca/monotone.pdf

https://en.wikipedia.org/wiki/Monotone_%28software%29#Monoto...


I think Monotone was an alternative to BitKeeper and BitKeeper was the inspiration of Git and Mercurial. It even seems to suggest that in the link you posted.


Linus played with Monotone before he started git (and it shows)

He is even credited in the changelog: http://lwn.net/Articles/131744/


BitKeeper not being available anymore was the impetus for starting Git and Mercurial (as replacements for BK), but many of the concepts in Git and Hg come from Monotone, most importantly the use of merkle trees (according to /u/ggherdov Matt Mackall[0] recently mentioned this explicitly on IRC). Linus also mentioned Monotone by name as "the most viable alternative" before starting/publishing git[1] and as pgeorgi noted contributed to the same.

[0] https://www.reddit.com/r/programming/comments/31yi7d/graydon...

[1] https://lkml.org/lkml/2005/4/6/121


He also did a little write-up around this stuff:

http://graydon2.dreamwidth.org/201698.html


"It is the responsibility of each node v to ensure Q(v) does not violate quorum intersection".

::Sigh:: This sounds like it does not even speak to one of the major fundamental issues of their approach, which I pointed out in 2013 (https://bitcointalk.org/index.php?topic=144471.msg1548672#ms...), which appeared to play a critical role in Stellar's spontaneous fault, and which has been avoided in Ripple by using effectively centralized admission to the consensus system to ensure a uniform topology.

The (generalized) ripple "as-advertised"* consensus model can only be safe if the participants' trust is sufficiently overlapping. In spite of requests by myself and several others (e.g. Andrew Miller), Ripple never formalized the topology requirement, much less how users are to go about achieving it. This paper goes further in formalizing it, but still provides no guidance on achieving it; and absent that, the only reliable way I know to achieve it is to have a central authority dictate trust. (*Ripple, as-deployed, centrally administers the "trust"; and Stellar faulted when it failed to do so and switched to a fully centralized approach (at least temporarily))

Consider a trivial example of two fully meshed subgraphs of 100 nodes each with an overlap of a single node. Assuming that each node's behavior is tolerant to at least one ill-behaved node, then both of the subgroups can come to a consensus (achieving at least 99 out of 100) about mutually exclusive state, and this can happen spontaneously without any attacker. More complicated partitionings--ones involving many nodes in the min-cut, or more than two partitions--are possible; to avoid them there must be 'sufficient' overlap.
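
As a toy illustration of that two-cluster scenario (my own sketch, not anything from the paper or from stellar-core):

    use std::collections::HashSet;

    fn main() {
        // Cluster A: nodes 0..100; cluster B: nodes 99..199; they share only node 99.
        let quorum_a: HashSet<u32> = (0..100).collect();
        let quorum_b: HashSet<u32> = (99..199).collect();

        let overlap = quorum_a.intersection(&quorum_b).count();
        println!("overlap size: {}", overlap); // prints 1

        // If each cluster tolerates one ill-behaved node, a single shared node
        // is not enough: that node can equivocate and the two clusters can
        // externalize conflicting state with no other failure at all.
        let tolerated_faults = 1;
        if overlap <= tolerated_faults {
            println!("intersection too small: spontaneous divergence is possible");
        }
    }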

Deciding on what the 'trust' topology must be to achieve safety requires non-local (and presumably private) information about what everyone else in the network trusts. The required minimum set of additional edges to make any particular natural trust topology into a safe one may have no relationship to whom anyone actually finds trustworthy in the real world. As far as I can tell no mechanism is proposed to establish a safe topology; just "the responsibility of each node".

To me that sounds a lot like saying "It is the responsibility of each node to not connect to any faulty nodes." It's a very strong assumption.

Separately, this system proposes a kind of consensus which is weaker with respect to blocking than e.g. Bitcoin's. This is perhaps made most obvious by the point at the end about being unable to use the consensus to safely arbitrate global parameters (like system settings or version upgrades), something we do regularly in Bitcoin. It isn't clear to me why the authors believe that the system is fit for cryptocurrency use when it cannot guarantee eventual agreement about _all_ of the state. In Bitcoin the transaction 'light-cone' from coins splitting and merging appears to grow exponentially for most coins, so a failure to reach consensus on one transaction would eventually block most transactions. It's not clear to me whether all participants could reliably detect stuck statuses and avoid dependence on them (and if they could, why can't consensus continue?). I'll need to read more carefully to understand this point.


You are correct that safety requires overlapping quorums. However, the trust decisions are public, as this is what allows participants to discover quorums. The scenario you describe of two groups of 100 participants overlapping at one node might or might not be a problem. The most likely cause of such a topology is a Sybil attack, in which an attacker with one seat at the table gloms an extra 99 nodes onto the system that nobody trusts. The attackers' 100 nodes might of course diverge if they are so configured, but nobody will care.

A priori, we cannot definitively answer what kind of topology will emerge. But there is certainly precedent for building a robust network out of pairwise relationships, namely inter-domain routing on the Internet.


Hello Dr. Mazieres, thank you for making the Stellar whitepaper available!

While inter-domain route agreements show precedent for building a robust network out of pairwise relationships, might this particular class of agreement also work because it happens to be a "small world" where failures are obvious? That is, the set of major AS operators is small enough that all the major players know one another (since ICANN maintains a centralized list of who owns which AS number), and bogus route announcements are easy for an AS operator to detect since they coincide with floods of angry tech support calls asking why www.foo.com no longer loads (or loads www.bar.com instead). By contrast, it seems that Stellar is geared towards environments with neither of these properties--large worlds with hard-to-notice failure modes.

I ask because I'd love to hear your thoughts on how to select quorum slices when considering the political and economic incentives that might influence which node operators I choose to trust. Specifically, do you foresee the emergence of a small set of big-player node operators that application developers almost universally (and blindly) select for their programs' quorum sets, like how web browsers and OEMs regard CA operators today? How can Stellar help users do better than blindly trusting a small set of operators? I'm assuming that the fact that big-player node operators must nevertheless externalize the same slot values in order to enjoy liveness makes it easy for the application to automatically detect any equivocation? If so, how would nodes be deployed to resist DDoS attacks that try to break the vast majority of users' quorum sets? I'm getting the impression that there's a missing precondition here: for a large-scale Stellar deployment to be successful, there must be a very diverse set of quorum sets.

Thanks again!


So first, a lot of problems with Internet routing do not really affect SCP's safety--for example bogus route announcements. Part of the reason is that SCP depends on transitive reachability. And of course part of the reason is that SCP is built on top of the Internet, so can already assume a basically running network underneath.

I can't predict the future, but I do think it is likely that a bunch of de facto important players will emerge and have fairly complete pairwise dependencies. One reason is that Stellar follows a gateway model, where counterparties issue credits. So for example, down the line people might want to store their money as Chase or Citibank USD credits. So people will already have some notion that some participants are trustworthy enough to hold their USD deposits, and these institutions will emerge as important. If I'm a Citibank customer and you send me money, I obviously won't consider the payment complete until Citibank says it is. And of course Citibank is likely to want to depend on a bunch of institutions they do business with, so even just one bank should give me good transitive reachability.

But the nice thing about safety is that you can reduce any individual party's trust by depending on more people. So for example Stellar will run a validator node for the foreseeable future, and their incentives are different from Citibank's. To gain more trust in the system, I might want to wait for both Stellar and Citibank to agree before considering any payment fully settled.


Why would I ever want to depend on Citibank?


You obviously wouldn't. But lots of people depend on them already because lots of other people do and they don't want to spend any more time thinking about it than they absolutely have to. If you can convince them to change, more power to you.


Let's say there's 3 big "tier 1" operators, which everyone trusts. This means that if 2/3 colluded to defraud the system, they could. Then, a smaller non-profit or more "trustworthy" node comes online and says, hey everybody, add me to your quorum slice. I'll listen to these big operators and make sure they're honest, and you can listen to me to verify that.

Now, regular nodes add the non-profit node to each of their 2-member slices. If one of the bigger guys colludes, the system is only blocked, because the regular Stellar users now depend on the non-profit node in each slice, which has seen the collusion and isn't externalizing bogus transactions. Once the collusion is published by the whistleblower node, regular Stellar nodes can remove the slices that contained the bad node, and the system moves forward.

Now this is a completely contrived example, but you can generalize it as a chain of trust webs, where I listen to someone, someone listens to me and to the person I listen to, and so on.


> The attackers' 100 nodes might of course diverge if they are so configured, but nobody will care.

The person who is being attacked cares, right? Is there a way for a Stellar node to realize that it has been partitioned from the real network?


Yes, and this is kind of the whole point of the protocol. It just hinges on defining the real stellar network. An analogous question is, "Is there a way for a computer to realize it has been partitioned from the real Internet?" Well, sure. Pick 50 web sites you think are really important--maybe a bunch from Alexa, plus your bank, employer, etc.--and make sure you can reach the vast majority of them (using https, of course, so no one can impersonate them).

There's one sense in which FBA is stronger than the Internet analogy, however, in that it is actually testing transitive reachability. So instead of just making sure you can talk to those 50 web sites, you actually make sure all of those 50 web sites can talk to all the sites they consider important, and so on, until you get the transitive closure, which is basically the notion of an FBA quorum.
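
Roughly, in code (an illustrative sketch only; real SCP slices are sets-of-sets with thresholds, not a single dependency list per node):

    use std::collections::{HashMap, HashSet};

    // Starting from the sites I depend on, keep adding everyone they depend on
    // until the set stops growing; the fixed point is (roughly) an FBA quorum
    // containing me.
    fn transitive_closure(start: &str, deps: &HashMap<&str, Vec<&str>>) -> HashSet<String> {
        let mut closure: HashSet<String> = HashSet::new();
        let mut stack = vec![start.to_string()];
        while let Some(node) = stack.pop() {
            if closure.insert(node.clone()) {
                if let Some(ds) = deps.get(node.as_str()) {
                    for d in ds {
                        stack.push(d.to_string());
                    }
                }
            }
        }
        closure
    }

    fn main() {
        let mut deps: HashMap<&str, Vec<&str>> = HashMap::new();
        deps.insert("me", vec!["bank", "employer"]);
        deps.insert("bank", vec!["clearinghouse"]);
        deps.insert("employer", vec!["bank"]);

        // Everyone reachable from "me" through chains of dependencies.
        println!("{:?}", transitive_closure("me", &deps));
    }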


> you actually make sure all of those 50 web sites can talk to all the sites they consider important

This requires you to trust the 50, correct?


Well, for safety it's a logical AND, not an OR. But for liveness you'd most likely have some threshold. For example, you might require that at least 34 of the 50 are honest.


> are public

Ah. What mechanism assures the consistency of this information?

> at one node might or might not be a problem

How could it not be a problem? If that is the totality of the network, and the network is 1-fault tolerant, what prevents spontaneous divergence of the state?

> most likely cause of such a topology is a Sybil attack

"likely"? It's not clear what you mean there. What statistical model have you adopted that allows you to reason about the likelihood of various topologies?

Social networks usually appear to have small-world behavior where it's very easy to draw lines that describe disjoint local majorities (or fairly large local super-majorities).

I agree that a sybil tacking on a bunch of extra 'nodes' and the sybil nodes diverging as a result isn't an interesting case. What I do think is interesting is: what mechanism will prevent users' honestly stated trust (much less politically manipulated trust) from being a bad topology?

What is the procedure that I can follow that, if everyone else follows it, results in the correct global behavior (with high probability)? What are the additional assumptions required to achieve that and make it secure? Why are they plausible? Do they provide decentralization? (I can suggest one procedure which works: Stellar tells everyone who to trust; but it completely fails at decentralization, so I assume that isn't the goal.)

In Bitcoin our security assumption is that the computational majority of participants conform to the protocol ('are honest') and these participants are not completely partitioned from each other. People can then think about-- or debate-- how reasonable those assumptions are.

(There are alternative formulations of Bitcoin's security which also argue about how plausible these assumptions are given economic incentive assumptions; but even this most simple set of assumptions gives people something easy to reason about.)

Can you give a parallel (informally stated, but equally comprehensive) version of the security assumptions for your consensus system?

> But there is certainly precedent for building a robust network out of pairwise relationships, namely inter-domain routing on the Internet.

The Internet is _wildly_ inconsistent. Asymmetric routing is the norm; the Internet frequently suffers small partitions and loops; single malicious parties at the edge can frequently inject bad state that is accepted globally; congestion and blocking happen multiple hops away from users, where they have no recourse. The Internet is not a consensus system, and these issues are not usually hugely problematic; someone else's brokenness generally doesn't affect you unless it involves your traffic, and you can route around problems locally. Ephemeral routing and ledgers are fairly different problems. Ambiguity about the ownership of a coin eventually affects almost everyone. I'm not seeing the connection you're making there.

I certainly agree that useful systems can be built from pairwise relationships: the original ripple design for pure IOUs, without its own cryptocurrency, prior to OpenCoin buying the ripple name, was such a system. It had no need for a global consensus (except perhaps in certain atomic unwind cases)--only the participants in a particular IOU transfer needed to be involved. It is not at all clear to me that a safe global consensus system can be built from pairwise trust.


At its core, the question of whom to trust is of course crucial, as there are clearly at least straw-man answers that have undesirable effects. But the trust topology affects more than safety; it affects the scenarios in which a consensus protocol is useful. E.g., if I issue some scrip and trade it on the Stellar network, I don't necessarily want to depend on mining rigs in other parts of the world for my ledger safety. I want to tell people "trust whomever you want, but be sure to include me as well, because I won't let you redeem the scrip if you don't have it on my ledger."

Part of the goal of SCP is to leave such policy questions up to the market and see what kind of architecture emerges. Our hope is that this flexibility combined with the lower barrier to entry will lead to greater financial inclusion as people build on our platform. But if we add too many policy restrictions, we risk heading off unanticipated innovations. (Heck, someone might literally replicate the Bitcoin policy and configure their quorum slices to trust 67% of whoever mined a Bitcoin block in the past week. That wouldn't really make sense, but it's possible.)

That said, what you're getting at is that with flexibility comes risk. We can't a priori rule out the possibility that organizations will choose bad quorum slices that violate safety. But we need to ask under what circumstances people care about safety and why. People obviously won't care about forks if one of the branches is purely a Sybil attack. But they likely will care if "real organizations" diverge, for some notion of that term. The reason, again, is that at some point the "real organizations" will affect one another in the network, however indirectly--maybe after a chain of five payments. That kind of indirect link is precisely what FBA quorums capture in the transitive closure of slices. So if everyone depends on the financial institutions they expect to do business with, and the whole economy is in fact interconnected, then Stellar will be safe.

I obviously believe such interdependence exists, and fully expect Stellar to be safe, but I can't predict exactly what the network will look like. Nor do I want to, as this could limit innovation. Only time will tell how this plays out.


> I don't necessarily want to depend on mining rigs

Indeed, the security model provided by Bitcoin consensus system may not be fit for any particular purpose. But it has one, and so we can think about it and decide what purposes it may or may not be fit for, and think about under what conditions it will be safe or not safe.

> is to leave such policy questions up to the market and see what kind of architecture emerges

Users of a system take actions. In your system, it seems, the collective actions of all the users result in an effective security model. You refer to this as leaving it up to the market.

The resulting security model--the conditions of success or failure, the invariants which must hold--may be unknown to any of its participants; it may be even unknowable to any one human mind. It may, and almost certainly will, change over time. A user adopts the system today, but finds tomorrow that it is behaving in a way which was previously impossible, including restrictions being sprung on them later--the possibility of which is a kind of restriction in and of itself.

> Heck, someone might literally replicate the Bitcoin policy and configure their quorum slices to trust 67% of whoever mined a Bitcoin block[1] in the past week.

Even your best outcomes with pinning the state to "real organizations" leave me wanting to cite Jo Freeman's "The Tyranny of Structurelessness" as a source for concerns--but I can't, because by failing to state a specific security model, you have a fully general defense against any attack or failure mode: "okay, don't take that risk, the invisible pink hand can choose another set of tradeoffs instead". As you've helpfully demonstrated above (by claiming to generalize the Bitcoin consensus model), there is no conceivable attack for which you couldn't say the system addresses it, as the security is basically external. In some sense you might as well have just shipped a C compiler, pointed out that it was fully general for whatever the market might choose to do (good or bad), and said it was up to the market.

[1] As an aside, Bitcoin mining is not just creating identities via hashcash; blocks commit to the past ledger state--it's effectively a signature itself--and this is integral to the security model; without that, those identities could concurrently create unbounded conflicting states with constant energy usage. See https://download.wpsoftware.net/bitcoin/pos.pdf for a more complete discussion of the subtle details around that.

For a market to choose, there must be a choice and there must be intent and understanding. Participants need to be able to trust that their choices are effective and won't be completely undermined by the choices of others, or at least understand how their choices might be undermined and be confident enough that such an outcome is unlikely. For the market to choose, people would need to understand the global ramifications of their actions and the actions of others, but you've seemingly provided no tools to reason about these.

I'm not complaining that there is risk--there is that aplenty, and in Bitcoin too for sure--but that there is no commitment to a sufficiently complete, concrete security model at all, which makes the risk impossible to assess. Bitcoin users will also sometimes make arguments about the suicide pact of the interconnected economy, but they do that as an answer to "what if the first plausible mechanism fails". It's probably okay that a system has generality and can potentially fully accommodate the whim of man, but the more our systems rest on that, the more opaque they are in practice.

I really think Stellar should develop and transparently state a specific technical 'plan' for how the system should be used--how trust should be configured globally--and defend the plausibility and desirability of that model, describe who will and won't have the power to control the system as a result, how centralized it will be, how people can choose to configure their own systems to bring about that outcome, and how we can tell if it has achieved a configuration which can deliver on that plan (preferably before observing a failure). Maybe even multiple such plans, if it were possible to analyze their interactions.

Without that, I can't shake the impression that what you're actually saying isn't 'leave it up to the market' but that instead what you're actually saying is 'leave it up to chance'.


You're saying "how do we know that the quora will sufficiently overlap?". David seems to be saying "how do we know they won't?". At this point, I think just about everyone can agree that the security depends on a set of empirical assumptions, and we currently simply do not know how they will play out in the real world. So my plan at least is to simply sit back and see how it actually ends up evolving in real life; if it fails, the Stellar Foundation was centralized enough to temporarily take back the reins for itself the last time, so I don't see why it can't do it again.

That said, my _prediction_ is that it will work fine mostly, occasionally there will be concerns about people splitting off into islands, and a resulting second-order consequence is that people will start putting the Stellar equivalent of blockchain.info onto their trust list in order to ensure connectivity to the "main graph" (I had actually cited The Tyranny of Structurelessness in my own responses already :) ), and this will just have to be the social-network-consensus version of the GHash.io scare and we'll be fighting against people's private interests to be lazy to reduce the risk of that happening.


> "how do we know they won't?"

Because the prior version _already_ faulted in production as a result of having a trust graph that didn't meet the prior criteria needed for safety. The prior version also resulted in centralization in practice (in its Ripple instantiation; kind of neat that the ripple->str reboot let us see both of the predicted failure modes play out, even though they were largely mutually exclusive).

The latest work is intended to relax the requirements/consequences but still provides no guidance or tools for achieving the required topology in practice.

By all means, clearly label these efforts as having taken a "you haven't yet proved it's broken" approach to safety/security, and I won't complain about them. Absent that, I just normally expect that when someone produces a cryptographic product they've actually given some care to their security.

> consequence is that people will start putting the Stellar equivalent of blockchain.info onto their trust list in order

Then if they're anything like the Bitcoin world's blockchain.info--which is regularly in a confused state--I may soon find myself the proud owner of infinity STR, I guess? (Screenshot someone else sent me of the actual BC.i site one day a while back: http://people.xiph.org/~greg/21mbtc.png)

"testing is making sure it does what it should, security auditing is making sure that is all it does"


(Jed from stellar here) The fork you are describing occurred in the previous protocol when all the nodes had the same UNL; the failure had everything to do with that previous protocol's response to overloading (it diverges based on timeouts, rather than getting stuck, and discards the losing fork on healing) and nothing to do with trust topology. So it didn't have anything to do with improperly set up trust.

In SCP the topology is public and conveyed with each consensus packet. So people will be able to tell when the graph is vulnerable.

Improving the definition of topology requirements for correct consensus is, far from being 'ignored', exactly what Mazieres has been working on all this time. And as you admit, those requirements have now been formalized and the information to check them is conveyed in the consensus packets; they are just not trivial to check by hand, and we have not yet implemented a check for them in stellar-core (this will be forthcoming, see roadmap).


Hey Jed. I'd be interested in a post mortem on the fork, how the network broke down, and how the new protocol addresses those issues. Reading the paper, I can believe the new protocol works, but it's difficult for me to pinpoint how exactly it differs from the old protocol.

I'm also interested in how the explicit quorum slice data for each node can be used to maintain quorum intersection over the entire network as new nodes join.


I'd be interested in a post mortem on the fork as well. During the split, either one side has a supermajority or neither side does. If one side has a supermajority, that side will win on rejoin, so no problem. If neither side has a supermajority, then one side or the other will win on rejoin, but nobody's been relying on either side. In every case, there should be no problem.


I'm out of my league here technically, but I'm having a hard time seeing how your critiques don't apply to the entire market economy just as well.

The collective actions of all participants result in an effective price model, the causes of which are unknown to any of its participants and likely unknowable to any one human mind, and which changes over time in ways that are highly opaque.

In the market at large, participants certainly don't need to understand the global ramifications of their actions, only the local ones. I don't see why that isn't the case here as well.

(For what it's worth I want to point out that like Walter, I really enjoyed reading this discussion, even if much of it is over my head.)


> In the market at large, participants certainly don't need to understand the global ramifications of their actions, only the local ones. I don't see why that isn't the case here as well.

The primary reason why that does not apply here is because nobody is selling "the market at large" to you as a cryptographically-secure decentralized consensus system. And besides, those ledgers are edited all the time by Authorities; it's irrelevant to this topic.

Edit: you are attempting to reason by analogy about a pricing system, and then trying to apply it to proof-of-work consensus? May I ask why?


No, the market economy is sold as a surplus-maximizing decentralized price system. I was attempting to reason by analogy. You've pointed to a difference between the two, but haven't given an explanation of why that difference is sufficient reason for the analogy to break. I'm happy to believe that it is, but am as yet uneducated as to why.

Edit in response to your edit: It just occurred to me that most of the critiques that nullc was making were in very direct correspondence to critiques one could make of the price system. I have no idea if this analogy is useful, but it seemed awfully coincidental.


Markets are frequently manipulated and distorted, they fail and fault and such.

But they're not keeping consensus ledgers for limited-supply cryptocurrencies. Any of these failures or faults can allow a coin to be spent twice (or some other mutually excluded transaction) with parties that have different views of the system; the result--and any transaction which is causally dependent on the conflict--can never be part of a common system.

So it's like someone manipulates the market economy to convince you to buy a cheeseburger most other parties think they sold to someone else, and now your hand cannot interact with your neighbor's door because your hand contains atoms that-- as far as your neighbor's door is concerned-- aren't part of your hand but are instead part of my foot.

Markets are a tool. They have their applications and limitations; building the security of a cryptocurrency in an adversarial environment out of them sounds like a plan for failure, just as using a cryptosystem in a place where you really needed a market may not give great results.

But you don't have to take my word for it: the prior consensus model in Stellar _already_ faulted, all on its own, when the requirement on the trust topology was violated. This fault wasn't a surprise; I (and others) called it out years before--but the vulnerability of the system was publicly ignored by Stellar's creators while they ran Ripple and by Stellar's advisers (including Mazieres, who was an adviser listed on the Stellar site on day one), even as they facilitated the sale of their ripple-reboot asset to the general public. It was not acknowledged until it knocked their system out. The improved consensus may better confine the failure domain, but it retains the property that the safety of the consensus is largely external and depends on particular topological constraints, without a procedure that provides any assurance the constraints are likely to be met.

Another response to me argues that it will probably be fine, but this flies in the face of reason. The same property existed before, and it demonstrably wasn't fine.

If you look at the old BCT thread, I was a big fan of the original ripple IOU system, before Ripple Labs bought the name, and had recommended it as a potentially fruitful area to many people. The original system could potentially have been implemented without any global consensus at all, which was a large part of why I found it interesting--global consensus is enormously costly. But functionality like a new cryptocurrency for the purpose of funding the company and the integrated non-interactive rippling (which apparently has been largely turned off in Ripple now due to other security vulnerabilities) brought back the global consensus requirement. (The result was that I felt I had to go edit all my old posts to remove my recommendations, since the system was wildly changed into something else.)


Walter from Kraken here - I really enjoyed this thread!

(Bit sleep-deprived right now but) FWIW, here's my take: Bitcoin tries to be too many things to too many people. Original ripple was a great response, though without providing a strong solution to the topology problem (manual processes for trust agility).

Now, both fail to "do one thing and do it well". Fundamentally, the functions 'settlement protocol', 'currency' and 'client program' (ie. implementation of protocol) should be well delimited. Besides these, Bitcoin now effectively attempts to do more (expensive paid public sequential datastore and agent-based computing platform, ill-conceived invoicing system, future revenue-guarantees for early adopters/special nodes, etc.)

This brings me to some points I feel have not been considered in this thread:

(1) People don't put absolute trust in a settlement network or currency. The de-facto means of managing risk is simply to limit exposure: either by sending smaller amounts sequentially over time and validating delivery with your endpoint (out of band) prior to sending the next 'chunk'; or by splitting a transaction across multiple currencies/settlement networks/(anonymous/temporary/unpredictable) points of network connectivity.

(2) No system fits all people. It's great that the Stellar notion addresses one of Bitcoin's most glaring issues: latency for real-time / retail transactions. However, it does not necessarily meet all of Bitcoin's capabilities, nor should it aim to. A real-world user should have a computationally available means of comparing these networks against real-time requirements to select an appropriate path (in terms of risk mitigation strategy, temporal requirements, maximum transaction size limitations, secrecy requirements, and any other execution and routing goals/preferences).

(3, corollary) The key issue in present-era financial systems may be that the business logic around the true properties of a financial transaction, network or settlement partner is very rarely formally defined (i.e. typically many pages of indecipherable legalese that amount to 'we promise nothing and have well paid lawyers', with no computationally usable community metric for SLA enforcement on latency, reliability, instances of failure, etc.)

My thoughts here have remained fairly constant for the last four years: what is really needed is a business-level transaction protocol that disclaims any affinity at all for (a) the currency or currencies in a transaction; (b) the settlement systems used; (c) the endpoint identification system used; (d) allows the discussion/agreement of a realistic range of resolution strategies for common problems in business-level transactions; and (e) facilitates flexible (indeed, multi-path) routing between endpoints across arbitrary financial service providers, potentially using multi-hop/multi-asset pathways shortlisted in real-time through actor and transaction-specific requirements.

A valid settlement path should be 'buy expensive art, get on a plane, deliver to mansion'. Likewise, a valid settlement path should be 'plant a tree and share the GPS location and have it manually validated by a third party'. (These are outlying examples, but an example of the degree of flexibility that should be aimed for in the long tail of latency and use cases. The shorter side is obvious: redundant multi-provider fiscal routing (even across conventional banks), level playing field for emergent financial systems and the conventional ones, etc.)

Some time ago I tried to make proposals in this area at http://ifex-project.org/ (IIBAN perhaps most successfully) but have not been able to dedicate much time to the notion lately. I do however feel it is relevant and am willing to jump at any time to work with others to move the notion forward. If you two are interested I'd love to get together and bash something formal and extensible out of this line of thinking. Tentative proposal: meeting in Europe next month?


> "It is the responsibility of each node v to ensure Q(v) does not violate quorum intersection".

Failing to ensure quorum intersection in my choice of quorum slices will have local repercussions and may befoul nodes that depend on me, but it does not prevent global functioning if the rest of the topology has quorum intersection.

I think your argument here is sane, but I also believe that under reasonable circumstances we can expect the Stellar network to be well structured. Yes, some edge nodes may get befouled, as you would in real life if you trusted an untrustworthy bank or health insurer.


Yes - the benchmark is neither perfection nor omniscience, rather the benchmark is the current state of affairs.


Ripple formalized its topology requirement in its consensus whitepaper (page 5, section 3.3):

https://ripple.com/files/ripple_consensus_whitepaper.pdf

Basically, any two UNLs must have at least 20% overlap to avoid any risk of a fork.
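
Roughly, the overlap condition as a quick check (illustrative only; the exact overlap definition and the 20% figure come from the whitepaper's analysis, not from this sketch):

    use std::collections::HashSet;

    fn overlap_fraction(a: &HashSet<&str>, b: &HashSet<&str>) -> f64 {
        let shared = a.intersection(b).count() as f64;
        shared / a.len().min(b.len()) as f64
    }

    fn main() {
        let unl_a: HashSet<&str> = ["v1", "v2", "v3", "v4", "v5"].iter().copied().collect();
        let unl_b: HashSet<&str> = ["v5", "v6", "v7", "v8", "v9"].iter().copied().collect();

        let f = overlap_fraction(&unl_a, &unl_b);
        println!("overlap: {:.0}%", f * 100.0); // prints 20%
        if f < 0.20 {
            println!("below the ~20% threshold the whitepaper describes: fork risk");
        }
    }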


I find this kind of stuff fascinating, but lack the CS and/or mathematics background to understand the discussion beyond the basics. I think I grasp the concepts outlined in the graphic novel linked elsewhere in these comments, but the whitepaper is too deep for me.

Any pointers for someone looking to gain an amateur understanding of this, or is this a topic of sufficient complexity that it precludes an amateur understanding?


Here's an overview that attempts to explain it in a less CS/math way and a more general and approachable way: https://medium.com/a-stellar-journey/on-worldwide-consensus-...


I'm still reading the paper, but I'm not seeing any discussion of a sybil or eclipse attack defense.


(disclaimer: I had early access to the white paper for review)

Sybil attacks are not really directly applicable here, since each node in the system picks its own quorum slices (basically the sets of nodes that it trusts). There is no notion of global reputation, and nodes do not need to know every other node to participate. Looking at the definition of quorum intersection[0] in section 4.1 should give you a sense of the conditions that are required on the choice of quorum slices for the network to function properly (quorum intersection ensures safety).
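
In code, the quorum definition the paper builds on looks roughly like this (my own illustrative sketch of the FBAS definitions, not the stellar-core data structures):

    use std::collections::{HashMap, HashSet};

    type Node = &'static str;

    // A set U is a quorum if every member of U has at least one of its slices
    // entirely contained in U.
    fn is_quorum(candidate: &HashSet<Node>, slices: &HashMap<Node, Vec<HashSet<Node>>>) -> bool {
        candidate.iter().all(|n| {
            slices
                .get(n)
                .map(|ss| ss.iter().any(|slice| slice.is_subset(candidate)))
                .unwrap_or(false)
        })
    }

    fn main() {
        let mut slices: HashMap<Node, Vec<HashSet<Node>>> = HashMap::new();
        slices.insert("a", vec![["a", "b"].into_iter().collect()]);
        slices.insert("b", vec![["b", "a"].into_iter().collect()]);
        slices.insert("c", vec![["c", "a"].into_iter().collect()]);

        let u: HashSet<Node> = ["a", "b"].into_iter().collect();
        println!("{{a, b}} is a quorum: {}", is_quorum(&u, &slices)); // prints true
    }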

The proof presented in the paper guarantees safety and liveness for the network provided a certain number of reasonable conditions hold. What that means is that an attacker can neither force invalid transactions on intact nodes (definition p. 14) nor prevent the network from making progress.

That being said, (at least in the version I reviewed) there is no guarantee provided with respect to ensuring that every valid transaction will eventually make it into the network. Indeed, a set of highly trusted nodes (present in a lot of quorum slices) could attempt to preempt a specific set of transactions X (originated by edge nodes) by opportunistically broadcasting, for each successive ledger entry i, a valid transaction set V_i that explicitly does not include the targeted set of transactions X. Under raw SCP as described in the paper and for certain topologies this preemption could be real, and this is the closest thing to a Sybil attack I can think of. It's important to note that we still have liveness and safety in that case.

I believe the same kind of attacks to be plausible with the Bitcoin network and I know protection mechanisms against it are currently being evaluated by David, Jed and the rest of the team. I will let them share their progress when they think it's right. I also hope they will correct me if I stated anything inaccurate here!

[0] https://www.stellar.org/papers/stellar-consensus-protocol.pd...


> I believe the same kind of attacks to be plausible with the Bitcoin network

This isn't anyone else's understanding. Can you suggest a mechanism by which it would be possible for a minority conspiracy to perpetually exclude a transaction in Bitcoin?


Well in bitcoin, of course, trust would map to computing power.


In Bitcoin we can make a pretty concrete statement about computing power that one can reason about; the blocking attacker will not be successful without a majority of it.

What's the similar statement for 'trust' which is sufficient for security? Obviously "the attacker is partitioned from the network" is sufficient, but not very plausible. I'm sure there is a better statement possible, but it's not clear to me what it is.


Why wouldn't this be plausible? Let's say one day China had enough of Bitcoin, and used their essentially limitless resources to gain enough hashing power at will, to block transactions or rewrite them or what have you. Entirely plausible with Bitcoin (in this case, China doesn't care about the coin reward and therefore is not a "rational attacker" as the popular game theoretic model of Bitcoin security presupposes).

Now let's look at the Stellar model in this same situation. We've got a bunch of large company nodes that are probably Gateways (for the sake of argument say JCB, Wells Fargo, Barclays, and Bank of Brazil). We've got a ton of other nodes that belong to research universities, and then we have a bunch of "non-profit" or hobbyist or whistle blower nodes. There's a nice graph topology between all of these. Then one day China comes along and decides its had enough. How does it attack the network in this case? By hacking enough organizations to take control of their nodes? Seems a bit more unlikely than it gaining 51% of hashing power on the Bitcoin network...


> Let's say one day China had enough of Bitcoin, and used their essentially limitless resources to gain enough hashing power at will, to block transactions or rewrite them or what have you. Entirely plausible with Bitcoin

That's the Maginot Line attack, as Tim Swanson calls it. The more realistic attack is that China just hacks into five data centers and serves a warrant to another ten. An interesting property of the PoW incentive structure is that there is actually fairly little incentive to protect oneself against hacks, so I would not be surprised if it was fairly easy.

> By hacking enough organizations to take control of their nodes?

The key point in Stellar consensus is that even if enough nodes are hacked, then users can just stop trusting them and switch to other nodes, and so the network would "route around" the damage. With Bitcoin PoW, there's no way to exclude an attacker from participating; you have to accept their work just as much as everyone else's.


Maginot Line attack, I like that. And yep, that's basically the point I've been trying to make in my posts. IMO Bitcoin isn't "trustless" - you need to implicitly trust those with hashing power aren't colluding to screw you.


If you had a prisoner's dilemma game where people were trading and anyone could create currency then all would defect and create currency. By making substantial expenditure of energy the cost of defecting, the game loses its prisoner's dilemma quality. This is what makes bitcoin unique IMHO. Other systems have no structured way to create currency that doesn't rely on a particular party not defecting. There's tit-for-tat, but increasing the price to defect works so much more neatly.


Read sections 3 and 4 of the whitepaper - the whole trust-a-quorum model is directly aimed at Sybil-style attacks. The key argument here is that the way they do group membership prevents these attacks by using a trusted quorum. This seems like a key place to focus attention for outside analysis of the properties of this protocol. If the assertions about this are incorrect, then the whole thing breaks down. As the Sybil paper points out, membership is critical for consensus. As a common-sense thing, "we have agreed" is super dependent on who "we" is.

As for eclipse, the model is so fundamentally different that it's not clear that there is a direct analogy for those network attacks. What do you have in mind? I'm not saying that it's obviously safe, just that attacks on the network protocol would have a very different flavor.


This is kind of explained away implicitly by pages 6-7. Each node has its own quorum slice, and they tend to point upward to a set of trustworthy financial institutions, not entirely unlike DNS.

i.e. FBA relies on a web of trust, not just on showing up and churning out hashes or whatever.


isn't this still sort of pseudo-centralized? my guess is that most people will trust the same handful of nodes.


This kind of "pseudo-centralized" is decentralized. When the "central authority" gets all of its power from individual participants who choose to follow it, and those participants are free to change the authorities at any time, that's a decentralized model. Bitcoin isn't centralized just because everyone has to agree on which transactions are valid.


It's interesting how this suddenly shifts into "journalistic independence" territory - what about who reports on that? What if some influential source would receive some financial aid for proper reporting?


Are they talking about computer-verified proofs? I wonder, are researchers able to prove the correctness of distributed algorithms the same way they would prove sequential algorithms (for instance, using some type of Hoare logic and a SAT solver / proof assistant)?


You may be interested in this[0] paper outlining AWS's use of TLA+ to formally verify its systems. Also previous discussion here[1].

[0] http://research.microsoft.com/en-us/um/people/lamport/tla/fo...

[1] https://news.ycombinator.com/item?id=8096185


If anyone's interested in proving distributed algorithms correct, they should check out the Verdi project (https://github.com/uwplse/verdi), which has proved Raft correct in Coq. I imagine handling Byzantine faults and the full complexity of SCP would be quite a bit harder, but probably doable.

To me, though, it would be more interesting to prove the implementation correct. Rather than trying to prove an existing C++ implementation correct, it's probably more feasible to reimplement the algorithm within Coq and extract to runnable code. Verdi already supports that, but unfortunately it doesn't support disk state.


No. At least for the moment, the proofs are English language only.


Is this protocol isomorphic to BitShares' Delegated Proof of Stake (DPOS) [1]? Seems to have the same qualities.

[1] https://bitshares.org/delegates


Bitshares is more democratic than decentralized. Basically people vote their stakes to elect 100 nodes that ensure consensus, but everyone knows who the 100 nodes are. By contrast, the FBA trust model is completely independent of coin holdings, and just depends on pairwise relationships between the validator nodes. The resulting quorum structure is unlikely to look like a single fully-connected group.


Where should we first see Stellar deployed in a major way?



What significant application of Stellar do they have?


It looks very promising, but I was unable to find the answer to this simple question: How do I get my money from my bank account into the Stellar network?


> How do I get my money from my bank account into the Stellar network?

In addition to using gateways, as diyang suggests, there is another way, but you first need two things:

1) Someone on the network who you trust. This may be your bank, but it could also be something else. I'm not going to tell you who or what you should trust, and to what extent; that is a decision that should always be in your hands.

2) A path between the entity you trust and an entity that either you or your bank has access to.

Until #2 exists, you can do what the poster below said: use a gateway. This is not recommended, as they are probably going to track you and it may not always be possible to deal with them in a humane way.

So really it depends on what your bank is, and whether it makes sense to draw money from your bank into another form/service that is compatible with a service that Stellar can talk to. What bank? What country? These things are going to matter on the global scale.


You can get money in and out of the Stellar network using gateways. You can learn more about them here: https://www.stellar.org/learn/explainers/#Gateways_trust_and...

This post is about keeping everybody's copy of the ledger the same using a process called consensus. Even if there were no gateways, there still is the problem of ledger agreement, so we'd still need consensus. Gateways and consensus are orthogonal. You can learn more about consensus here: https://www.stellar.org/learn/explainers/#Consensus


[deleted]


(disclaimer: I had early access to the whitepaper for review)

You should look in the white paper[0] at the definition of an FBAS, which differs from EPaxos (while EPaxos is egalitarian, SCP is federated). All related proofs are included in the whitepaper, the central one being Theorem 3 in section 4.2. Finally, the protocol specification (Figure 15, p. 28) is also very interesting.

[0] https://www.stellar.org/papers/stellar-consensus-protocol.pd...


Hi. I won't be able to read this for a while. I am not familiar with federated Byzantine agreement. Can you quickly comment on the difference between this and purely distributed consensus? How is fault tolerance maintained? Do we presume that top tiers are free of Byzantine nodes?

Thanks!


I think this page should answer your questions: https://medium.com/a-stellar-journey/on-worldwide-consensus-...


Click View Source on the homepage. Easter egg?


How do they do proof-of-work?


The whole point of Ripple/Stellar is that it doesn't use proof of work. They have an alternative that trades off trust for resource consumption.



