
Anyone who thinks GHOST is a good idea has not understood Bitcoin at all. The whole point of the blocks is that nodes can work on the global state in a chain, so the idea that nodes should work on greedy subtrees is about the worst possible idea. Bitcoin solves not only the Byzantine generals problem but also a latency variance problem, to achieve logical broadcast. Anyway, the author of this paper also believes that "anyone with reasonably high intelligence could have invented Bitcoin by random luck" [1]. Well, no. There are many hidden problems which Bitcoin solves; the literature on quorum systems, distributed applications, etc. is very deep.

[1] http://www.reddit.com/r/Bitcoin/comments/20oyes/brilliant_an...




Bitcoin "solves" the problems behind Byzantine fault tolerance, quorum systems, etc by completely ignoring the past 30 years of research on the topic, and introducing a very simple construction that bypasses all of the issues entirely by using the concept of proof of work. Don't get me wrong, Bitcoin is a brilliant idea, but it's the sort of brilliant idea which is actually more likely to come to you if you were NOT bogged down by existing research on how to do things. Satoshi's primary gift was not deep knowledge, it was a fresh perspective.

And I am not saying these things because Ethereum is a super-magic-brilliant protocol that involves deep knowledge about thirty years of development in multiple fields; it's not. It's ultimately a fairly elementary idea, and I remain surprised that nobody seriously tried to push it before me. In fact, at least two other groups got very close in 2012-2013, and a few weeks ago at the payments innovation conference in Boston I learned that apparently the concept sans blockchain was around in the 1990s; but for some reason they did not take the idea to its logical conclusion.

Also, our implementation of GHOST does not in any way compromise the concept of global state; blocks are required to specifically include uncle headers in order to benefit from them.


I would suggest running a simulation where nodes are globally distributed and have various latencies with high variance, and then studying this problem of non-uniform distribution of information. With GHOST, nodes would cluster in physical locations (say, Iceland). The latency between those heavy nodes would be very low, and the latency to, say, Australia very high. The heavy nodes could then co-opt the network. It does not help if the author proves a bound on latency, because this imbalance would destroy the network. With geographically skewed distribution of information, one can imagine all kinds of weird effects and attacks. Perhaps I'm wrong and will find out this actually works, but even then robustness requires safety over the long term in very unexpected cases.
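A toy version of such a simulation is easy to sketch. The topology and numbers below are entirely hypothetical (three clustered nodes with 50 ms links standing in for "Iceland", one remote node 500 ms away standing in for "Australia"); the point is only that the remote node accumulates far more time mining on a stale tip:

```python
import random

# Hypothetical topology: nodes A, B, C form a low-latency cluster
# ("Iceland", 50 ms links); node D ("Australia") is 500 ms away.
LATENCY = {
    ("A", "B"): 0.05, ("A", "C"): 0.05, ("B", "C"): 0.05,
    ("A", "D"): 0.50, ("B", "D"): 0.50, ("C", "D"): 0.50,
}
NODES = ["A", "B", "C", "D"]

def delay(src, dst):
    if src == dst:
        return 0.0
    return LATENCY.get((src, dst), LATENCY.get((dst, src)))

def simulate(blocks=10_000, seed=1):
    """Estimate seconds each node spends mining on a stale tip."""
    random.seed(seed)
    stale = {n: 0.0 for n in NODES}
    for _ in range(blocks):
        producer = random.choice(NODES)  # equal hashing power assumed
        for n in NODES:
            # Until the new block reaches n, n's work extends a stale tip.
            stale[n] += delay(producer, n)
    return stale
```

With a 10-minute block interval the half-second worst-case delay is noise, but shrink the interval (or reward stale blocks, as GHOST does) and the cluster's head start starts to matter.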


The information distribution should not be any more skewed than Bitcoin because every block can be validated by itself assuming only the validity of its parent as prior knowledge. This differs from the Israeli authors' GHOST, which assumes that nodes already have the uncles; in our system, uncles are included in the block. But we are definitely going to be doing various kinds of network simulations to make sure that every network protocol we push out is stable and convergent.


Completely agree.

Judging from the technical description [1] and the most recent version of the draft white paper [2] alone, I think the project would benefit greatly from more substantial input and expertise from the academic cryptography community. I'm afraid they are spending a lot of effort cargo-culting on the wrong things, while missing where the real challenge lies: How to deal with the inevitable orders-of-magnitude increase in transaction size/complexity while preserving consensus-based distributed verification.

To be more precise: the metered computation mechanism is a very clever solution for dealing with unrestricted computation and unbounded data storage, but their proposed solution does not convincingly address the inevitable increase in space required. Hand-waving about greedy subtree verification without actual numbers/scenarios that show this could work at all (which would be very surprising, to say the least) is not convincing.

In all a great idea, though. It would be great if it works out.

[1] http://gavwood.com/Paper.pdf

[2] https://github.com/ethereum/wiki/wiki/Whitepaper-2-Draft


> I'm afraid they are spending a lot of effort cargo-culting on the wrong things, while missing where the real challenge lies: How to deal with the inevitable orders-of-magnitude increase in transaction size/complexity while preserving consensus-based distributed verification.

Umm... that's exactly one of the issues that we're cargo-culting about the most. There are two general categories of solutions to this problem: technical increments (ie. a better constant factor), and fundamental cryptographic upgrades (eg. the stacktrace challenge-response concept we have been talking about on our blog). The first category we are not yet doing because we are following the well established advice of "don't prematurely optimize". The second category, well, that's why we're thinking of ideas like distributed blockchain storage, clever algorithms to force more people to be full nodes, and challenge-response protocols. There is also another idea I was thinking of, which I'll have a post up over the next week or two.

In the long term, we are already beginning the development of a very widespread collaboration with academic groups to try to tackle the problems in cryptocurrency, and at this point we fully expect we'll end up releasing Ethereum 2 at some point in 2016 which would take a lot of new cryptography into account.


> (eg. the stacktrace challenge-response concept we have been talking about on our blog)

You wrote on your blog:

> Altogether, what this means is that, unlike Bitcoin, Ethereum will likely still be fully secure, including against fraudulent issuance attacks, even if only a small number of full nodes exist; as long as at least one full node is honest, verifying blocks and publishing challenges where appropriate, light clients can rely on it to point out which blocks are flawed

The fact that someone can extract compact proofs of an invalid state transition was pointed out by me years ago (e.g. https://bitcointalk.org/index.php?topic=96644.msg1064601#msg...), and I believe I described it to you personally in Mountain View. It's equally applicable to Bitcoin (though not implemented anywhere for any system yet).

It's a bit irritating to see ideas from Bitcoin recycled as "innovations" in altcoins and incorrectly claimed as not applicable to Bitcoin, especially when they're not even implemented yet.

This one has a bunch of gnarly engineering issues that make it hard to implement. You end up with fraud codepaths that are virtually never executed, so how do you gain confidence that multiple implementations actually implement them consistently? The best proposal I'd had on this (from bitcoin-wizards) was to always produce two versions of a block, committed under a common root, one of which has a random flaw, and then always kill it and select the right block using a proof. But that's kind of complex and indirect.


> It's equally applicable to Bitcoin (though not implemented anywhere for any system yet).

I don't recall you describing it to me, but it's likely you did and I didn't realize it was important at the time. Also, I never claimed that challenge-response protocols are not applicable to Bitcoin; theoretically, we know that any scalability improvement that is applicable to Ethereum would also be applicable to Bitcoin, simply because you can implement Bitcoin as an Ethereum contract. Rather, I made the claim that Bitcoin does _not_ have full support for such protocols. I count myself among those who are skeptical that substantial changes to the Bitcoin protocol will ever be made, primarily because of the "changing an engine on the run" problem and the nasty political issues involved (see: the recent Counterparty spat). Cryptocurrencies are not sets of abstract ideas, they are protocols that are implemented in code today and have to be judged on their merits as they actually are. And Bitcoin, as it actually is, is not fully secure with a light client.


> will ever be made

And yet they've been made in the past. It doesn't require any hard forking or incompatible changes, just some additional messages which can be ignored by old implementations. The Bitcoin community has made one soft-forking protocol change per year for several years and will almost certainly make one, and possibly more, this year.

> Cryptocurrencies are not sets of abstract ideas, they are protocols that are implemented in code today and have to be judged on their merits as they actually are.

As I noted, Ethereum doesn't implement this yet, if you'd implemented and worked out the gnarly engineering issues in actually implementing it I'd have credited you for that.

But right now, it's just an idea. One which is equally applicable to Bitcoin and which was described within the Bitcoin ecosystem as an improvement for Bitcoin years ago. And not just as arm-waving: I at least went as far as enumerating the things we'd need to do before I got mired in the problem of how you make it not an extreme risk in the face of alt implementations, something which isn't solved even absent almost-never-executed anti-fraud code paths. It'll be super awesome to see you implement it, if you do.

But it's hard to respect your good work when it results in a lot of people being misinformed about advantages because you've been sloppy about attribution. The end result is that you produce armies of technically unsophisticated people who believe it's gospel truth that Bitcoin can't do this.


So the actual protocol change that needs to be made in order to make challenge-response protocols fully effective is basically the inclusion of merkle-sum-trees: make each node N = [ H, F ] where H = sha256(N.child0, N.child1) and F = N.child0.F + N.child1.F. Otherwise, there's no way to efficiently prove that a block does not have excessive fees. Unless you do some crazy ugly hack like creating a separate overlay merkle tree with its root being output 1 of the coinbase, that's a hard-forking protocol change.
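A minimal sketch of that node rule in Python follows. The exact serialization is my own assumption (in particular, I hash each child's fee total into H so the sums are tamper-evident, and carry an odd node up a level rather than duplicating it, which would double-count fees):

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class Node:
    """One merkle-sum-tree node: a hash H plus a running fee total F."""
    def __init__(self, h: bytes, f: int):
        self.h = h
        self.f = f

def leaf(tx_hash: bytes, fee: int) -> Node:
    return Node(sha256(tx_hash + fee.to_bytes(8, "big")), fee)

def parent(left: Node, right: Node) -> Node:
    # H = sha256(child0.H || child0.F || child1.H || child1.F)
    # F = child0.F + child1.F
    h = sha256(left.h + left.f.to_bytes(8, "big") +
               right.h + right.f.to_bytes(8, "big"))
    return Node(h, left.f + right.f)

def build_root(leaves):
    level = list(leaves)
    while len(level) > 1:
        nxt = [parent(level[i], level[i + 1])
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:
            nxt.append(level[-1])  # odd node carried up; no fee double-count
        level = nxt
    return level[0]
```

The root's F is then the block's total fee, so a compact proof that a block overstates or understates fees is just one Merkle branch.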

As for changes actually being implemented, to be honest I haven't seen anything actually substantial since P2SH. The one big change that would benefit everyone now, increasing the block size limit, has been on the table for a year now with absolutely no progress toward pushing it through. If it does come close to happening, then I'll publish a Bitcoin Magazine article cheering it on. For now, though, it seems as far away as ever.

What I generally want to say when I make such statements in blog posts is "We _plan_ to do something that others have not yet _put into practice_"; of course, that just means we're equal and not better, but the point is to say that we're moving quickly and we'll get there soon. Bitcoin is currently a slow-moving target, and given the $5 billion of existing capital stored inside of it, it would be irresponsible to do things any other way; so I think it's unlikely that Bitcoin will develop second-layer scalability protocols first. If you wish to wait for actual results then that is a philosophy that I very much respect.

I have realized over time that pretty much nothing in Ethereum is new; Turing-complete contracts were in Ripple and Qixcoin (although I was not _thinking of_ either of those two, and I did not even realize that Ripple contracts were Turing-complete, when I came up with the idea), Patricia tries I got from Alan Reiner back in 2012, all sorts of clever blockchain designs were mulled over on bitcointalk in 2009, and that doesn't even begin to describe the legions of forgotten hackers on cypherpunk mailing lists in the 1990s. A few weeks ago I learned about the concept of "rules engines". And then of course there's Yap stones. Meanwhile, Vertcoin is coming up with a memory-hard proof of work that claims to be revolutionary and powerful but runs into a fundamental scalability issue that I solved months ago with Dagger. So perhaps I do need to tone down my "this is amazing and new" rhetoric; but at the same time I've come to realize that since we are philosophically similar people attacking similar problems, some degree of collision, whether of the "independent discovery" form or the "heard about it, forgot it, reinvented it without realizing" form, is inevitable.


> So the actual protocol change that needs to be made in order to make challenge-response protocols fully effective is basically the inclusion of merkle-sum-trees

Where do you think the words "merkle-sum-trees" came from? :)

> Unless you do some crazy ugly hack like creating a separate overlay merkle tree with its root being output 1 of the coinbase, that's a hard-forking protocol change.

No crazy hack is required. You just include a commitment to a merkle-sum-tree of transaction values, along with the UTXO commitment. It doesn't have to commit to transactions, it's just a tree of values. There is no loss of efficiency, and you don't even have to relay the data normally since all full nodes already have it.
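Checking a fraud proof against such a commitment is then a single Merkle branch verification. A self-contained sketch, with my own encoding assumptions (each sibling's value total is carried alongside its hash, and both are folded into the parent hash):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_value_branch(leaf_hash, leaf_value, branch, root_hash, root_value):
    """Check one (hash, value) leaf against a merkle-sum-tree root.

    `branch` is a list of (sibling_hash, sibling_value, sibling_is_right)
    tuples, from the leaf up to the root.
    """
    cur_h, cur_v = leaf_hash, leaf_value
    for sib_h, sib_v, sib_is_right in branch:
        if sib_is_right:
            cur_h = h(cur_h + cur_v.to_bytes(8, "big") +
                      sib_h + sib_v.to_bytes(8, "big"))
        else:
            cur_h = h(sib_h + sib_v.to_bytes(8, "big") +
                      cur_h + cur_v.to_bytes(8, "big"))
        cur_v += sib_v  # sums accumulate on the way up
    # Both the hash and the total must match the committed root.
    return cur_h == root_hash and cur_v == root_value
```

A light client holding only the committed root can thus check a claim about one transaction's value in O(log n) hashes.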

> As for changes actually being implemented, to be honest I haven't seen anything actually substantial since P2SH

Well, that's a step back from the position you took above that it can't ever change at all. Now the complaint is that the changes haven't been substantial enough or frequent enough. ::shrugs::

> that would benefit everyone

It's highly debatable that it would benefit everyone now; we're certainly not up against the limit. People are still using the blockchain in very inefficient ways, and the ecosystem of tools to increase efficiency hasn't developed yet. At the same time the count of full nodes is falling; increasing the cost of running one right now may not be a good strategy.

> so I think it's unlikely that Bitcoin will develop second-layer scalability protocols first

I don't have any interest in being first. I'd much rather have a well-designed and considered approach. Unfortunately, so far, none of the alt-systems, even ones which raised millions of dollars of funding, have developed anything that turned out to be useful to implement in Bitcoin. Maybe that will change.


I believe a solution would look more like what Hal Finney called transparent servers [1]. This would be a way to solve the issue of scaling (and get rid of mining oligarchs) while also dealing with the necessary privacy issues of contracts. Also, DNS is not really a contract between 2 parties but between N parties, and there is a relationship between DNS and contracts (more on that via PM on request).

[1] http://www.finney.org/~hal/rpow/security.html


Could you please expand on this: "Bitcoin solves also a latency variance problem, to achieve logical broadcast"?


How does one node know what other nodes are doing if latency between messages could be anything between 50ms and 5000ms? If one node has a good position in the network, it would know more than other nodes and outpace the network, which is exactly what would happen with GHOST. The GHOST authors have bounded estimates on latencies, overlooking the possibility of information arbitrage. Somebody would figure out where the most information comes from and arbitrage the network. So besides hashing attacks there are "latency attacks", but they don't even appear in Bitcoin, because blocks solve that issue: latency is negligible vs. the 10-minute block time. This could be shown with a timing attack on an altcoin with < 60 sec block time.

Lamport's work makes this connection obvious. He introduced the Byzantine Generals Problem and wrote this paper: "Time, Clocks, and the Ordering of Events in a Distributed System", Communications of the ACM 21, July 1978. http://research.microsoft.com/en-us/um/people/lamport/pubs/t...

Logical broadcast means every node has the same state (time invariance); the only thing that matters is hashing power.
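As a concrete illustration of the ordering idea from the cited Lamport paper (a generic sketch of a Lamport logical clock, not anything in Bitcoin itself):

```python
class LamportClock:
    """Lamport logical clock: assigns each event a counter such that
    if event a causally precedes event b, then clock(a) < clock(b)."""

    def __init__(self):
        self.time = 0

    def tick(self) -> int:
        # Local event: advance the counter.
        self.time += 1
        return self.time

    def send(self) -> int:
        # Stamp an outgoing message with the current logical time.
        return self.tick()

    def recv(self, msg_time: int) -> int:
        # On receive, jump past the sender's timestamp so the receive
        # event is ordered after the send event.
        self.time = max(self.time, msg_time) + 1
        return self.time
```

Bitcoin replaces these per-node counters with block height: the proof-of-work chain itself imposes a single total order on events, so latency differences between nodes stop mattering at the 10-minute timescale.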



