And I am not saying these things because Ethereum is a super-magic-brilliant protocol that draws on deep knowledge from thirty years of development in multiple fields; it's not. It's ultimately a fairly elementary idea, and I remain surprised that nobody seriously tried to push it before me. In fact, at least two other groups got very close in 2012-2013, and a few weeks ago at the payments innovation conference in Boston I learned that the concept, sans blockchain, was apparently around in the 1990s; but for some reason they did not take the idea to its logical conclusion.
Also, our implementation of GHOST does not in any way compromise the concept of global state; blocks are required to specifically include uncle headers in order to benefit from them.
Judging from the technical description and the most recent version of the draft white paper alone, I think the project would benefit greatly from more substantial input and expertise from the academic cryptography community. I'm afraid they are spending a lot of effort cargo-culting on the wrong things, while missing where the real challenge lies: how to deal with the inevitable orders-of-magnitude increase in transaction size/complexity while preserving consensus-based distributed verification.
To be more precise: the metered computation mechanism is a very clever solution for dealing with unrestricted computation and unbounded data storage, but their proposed solution does not convincingly address the inevitable increase in space required. Hand-waving about greedy subtree verification without actual numbers/scenarios that show this could work at all (which would be very surprising, to say the least) is not convincing.
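For context, the metered-computation idea being debated here can be sketched in a few lines: every VM step costs a fee, so arbitrary programs halt when the sender's allowance runs out, bounding both computation and storage growth. The opcode set and costs below are invented for illustration and are not Ethereum's actual fee schedule:

```python
class OutOfGas(Exception):
    pass

# Illustrative per-operation costs; storage writes are priced far higher
# than arithmetic because they grow the state everyone must keep.
STEP_COST = {"PUSH": 1, "ADD": 1, "MUL": 3, "STORE": 20}

def run_metered(program, gas_limit):
    """Execute a toy stack program, charging gas per step.

    Returns (stack, storage, gas_used); raises OutOfGas once the cumulative
    cost exceeds gas_limit, which is what makes unrestricted computation safe
    to accept from strangers.
    """
    stack, storage, gas = [], {}, 0
    for op, *args in program:
        gas += STEP_COST[op]
        if gas > gas_limit:
            raise OutOfGas("halted at %s after %d units" % (op, gas))
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "STORE":
            storage[args[0]] = stack.pop()
    return stack, storage, gas
```

Note that metering only bounds how fast state can grow per unit of fee; it does not by itself answer the space question raised above.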
All in all a great idea, though. It would be great if it works out.
Umm... that's exactly one of the issues that we're cargo-culting about the most. There are two general categories of solutions to this problem: technical increments (i.e. a better constant factor), and fundamental cryptographic upgrades (e.g. the stacktrace challenge-response concept we have been talking about on our blog). The first category we are not yet pursuing because we are following the well-established advice of "don't prematurely optimize". The second category, well, that's why we're thinking of ideas like distributed blockchain storage, clever algorithms to push more people to be full nodes, and challenge-response protocols. There is also another idea I was thinking of, which I'll have a post up about in the next week or two.
In the long term, we are already beginning to develop a broad collaboration with academic groups to try to tackle the problems in cryptocurrency, and at this point we fully expect we'll end up releasing Ethereum 2 at some point in 2016, which will take a lot of new cryptography into account.
You wrote on your blog:
> Altogether, what this means is that, unlike Bitcoin, Ethereum will likely still be fully secure, including against fraudulent issuance attacks, even if only a small number of full nodes exist; as long as at least one full node is honest, verifying blocks and publishing challenges where appropriate, light clients can rely on it to point out which blocks are flawed
The fact that someone can extract compact proofs of an invalid state transition was pointed out by me years ago (e.g. https://bitcointalk.org/index.php?topic=96644.msg1064601#msg...), and I believe I described it to you personally in Mountain View. It's equally applicable to Bitcoin (though not yet implemented anywhere, for any system).
It's a bit irritating to see ideas from Bitcoin recycled as "innovations" in altcoins and incorrectly claimed as not applicable to Bitcoin, especially when they're not even implemented yet.
This one has a bunch of gnarly engineering issues that make it hard to implement. You end up with fraud codepaths that are virtually never executed, so how do you gain confidence that multiple implementations actually implement them consistently? The best proposal I'd had on this (from bitcoin-wizards) was to always produce two versions of a block, committed under a common root, one of which has a random flaw, and then always kill it and select the right block using a proof. But that's kind of complex and indirect.
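For readers unfamiliar with the compact-fraud-proof idea both comments refer to: a full node that spots an invalid transaction in a block can hand a light client just that transaction plus its Merkle branch, and the client re-checks the one transaction against the committed root. The sketch below is mine; the serialization and the validity rule (a transaction may not create value) are toy assumptions, not anyone's actual design:

```python
import hashlib

def h(b):
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    layer = [h(l) for l in leaves]
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])      # duplicate last hash on odd layers
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

def merkle_branch(leaves, index):
    """Sibling hashes from leaf to root, proving membership of leaves[index]."""
    layer, branch = [h(l) for l in leaves], []
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])
        branch.append(layer[index ^ 1])
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
        index //= 2
    return branch

def verify_branch(leaf, index, branch, root):
    node = h(leaf)
    for sib in branch:
        node = h(node + sib) if index % 2 == 0 else h(sib + node)
        index //= 2
    return node == root

def tx_is_valid(tx):
    # Toy validity rule: a tx serialized as b"in=5,out=9" may not mint value.
    fields = dict(kv.split(b"=") for kv in tx.split(b","))
    return int(fields[b"out"]) <= int(fields[b"in"])

def check_fraud_proof(tx_root, proof):
    """A light client accepts the proof iff the tx is really committed in the
    block AND breaks the validity rule; the proof's size is O(log n)."""
    tx, index, branch = proof
    return verify_branch(tx, index, branch, tx_root) and not tx_is_valid(tx)
```

The rarely-executed-codepath worry above applies directly: `check_fraud_proof` only ever runs when someone misbehaves, so divergent implementations of it can sit undetected for years.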
I don't recall you describing it to me, but it's likely you did and I simply didn't realize its importance at the time. Also, I never claimed that challenge-response protocols are not applicable to Bitcoin; theoretically, we know that any scalability improvement applicable to Ethereum would also be applicable to Bitcoin, simply because you can implement Bitcoin as an Ethereum contract. Rather, I made the claim that Bitcoin does _not_ have full support for such protocols. I count myself among those who are skeptical that substantial changes to the Bitcoin protocol will ever be made, primarily because of the "changing an engine on the run" problem and the nasty political issues involved (see: the recent Counterparty spat). Cryptocurrencies are not sets of abstract ideas, they are protocols that are implemented in code today and have to be judged on their merits as they actually are. And Bitcoin, as it actually is, is not fully secure with a light client.
And yet they've been made in the past. This doesn't require any hard forking or incompatible changes, just some additional messages which can be ignored by old implementations. The Bitcoin community has made one soft-forking protocol change per year for several years and will almost certainly make one, possibly more, this year.
> Cryptocurrencies are not sets of abstract ideas, they are protocols that are implemented in code today and have to be judged on their merits as they actually are.
As I noted, Ethereum doesn't implement this yet; if you'd implemented it and worked out the gnarly engineering issues involved in actually doing so, I'd have credited you for that.
But right now, it's just an idea. One which is equally applicable to Bitcoin and which was described within the Bitcoin ecosystem as an improvement for Bitcoin years ago. And not just as arm-waving: I at least went as far as enumerating the things we'd need to do before I got mired in the problem of how to make it not an extreme risk in the face of alternative implementations (something which isn't solved even absent almost-never-executed anti-fraud code paths). It'll be super awesome to see you implement it, if you do.
But it's hard to respect your good work when it results in a lot of people being misinformed about advantages because you've been sloppy about attribution. The end result is that you produce armies of technically unsophisticated people who believe it's gospel truth that Bitcoin can't do this.
As for changes actually being implemented, to be honest I haven't seen anything truly substantial since P2SH. The one big change that would benefit everyone now, increasing the block size limit, has been on the table for a year with absolutely no progress toward pushing it through. If it does come close to happening, I'll publish a Bitcoin Magazine article cheering it on. For now, though, it seems as far away as ever.
What I generally want to say when I make such statements in blog posts is "We _plan_ to do something that others have not yet _put into practice_"; of course, that just means we're equal and not better, but the point is that we're moving quickly and we'll get there soon. Bitcoin is currently a slow-moving target, and given the $5 billion of existing capital stored inside it, it would be irresponsible to do things any other way; so I think it's unlikely that Bitcoin will develop second-layer scalability protocols first. If you wish to wait for actual results, that is a philosophy I very much respect.
I have realized over time that pretty much nothing in Ethereum is new. Turing-complete contracts were in Ripple and Qixcoin (although I was not _thinking of_ either of those two, and I did not even realize that Ripple contracts were Turing-complete, when I came up with the idea), Patricia tries I got from Alan Reiner back in 2012, all sorts of clever blockchain designs were mulled over on bitcointalk in 2009, and that doesn't even begin to describe the legions of forgotten hackers on cypherpunk mailing lists in the 1990s. A few weeks ago I learned about the concept of "rules engines". And then of course there are Yap stones. Meanwhile, Vertcoin is coming up with a memory-hard proof of work that claims to be revolutionary and powerful but runs into a fundamental scalability issue that I solved months ago with Dagger. So perhaps I do need to tone down my "this is amazing and new" rhetoric; but at the same time I've come to realize that since we are philosophically similar people attacking similar problems, some degree of collision, whether of the "independent discovery" form or the "heard about it, forgot it, reinvented it without realizing" form, is inevitable.
Where do you think the words "merkle-sum-trees" came from? :)
> Unless you do some crazy ugly hack like creating a separate overlay merkle tree with its root being output 1 of the coinbase, that's a hard-forking protocol change.
No crazy hack is required. You just include a commitment to a merkle-sum-tree of transaction values alongside the UTXO commitment. It doesn't have to commit to transactions, it's just a tree of values. There is no loss of efficiency, and you don't even have to relay the data normally, since all full nodes already have it.
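A merkle-sum-tree, for readers following along, is an ordinary Merkle tree where each node commits to the sum of the values beneath it as well as the hashes, so any membership branch also reveals consistent subtotals and the root carries the total supply. A minimal sketch of root construction (encoding details are my own; real designs differ, e.g. in how they pad odd layers):

```python
import hashlib

def h(b):
    return hashlib.sha256(b).digest()

def node(left, right):
    # Each internal node hashes its children's digests AND their combined
    # value, so tampering with any subtotal changes the root.
    lh, lv = left
    rh, rv = right
    total = lv + rv
    return (h(lh + rh + total.to_bytes(8, "big")), total)

def sum_root(values):
    """(root_hash, total_value) of a merkle-sum-tree over output values."""
    layer = [(h(v.to_bytes(8, "big")), v) for v in values]
    while len(layer) > 1:
        if len(layer) % 2:
            # Pad with a zero-value leaf so the sum is not distorted.
            layer.append((h((0).to_bytes(8, "big")), 0))
        layer = [node(layer[i], layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]
```

Because the root's value component is the total of all committed outputs, a light client checking one branch can detect fraudulent issuance without downloading the whole set, which is the point of attaching it to the UTXO commitment.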
> As for changes actually being implemented, to be honest I haven't seen anything actually substantial since P2SH
Well, that's a step back from the position you took above that it can't change at all. Now the complaint is that the changes haven't been substantial enough or frequent enough. ::shrugs::
> that would benefit everyone
It's highly debatable that it would benefit everyone now; we're certainly not up against the limit. People are still using the blockchain in very inefficient ways, and the ecosystem of tools to increase efficiency hasn't developed yet. At the same time, the count of full nodes is falling, so increasing the cost of running one right now may not be a good strategy.
> so I think it's unlikely that Bitcoin will develop second-layer scalability protocols first
I don't have any interest in being first. I'd much rather have a well-designed and considered approach. Unfortunately, so far none of the alt-systems, even ones which raised millions of dollars of funding, have developed anything that turned out to be useful to implement in Bitcoin. Maybe that will change.
Lamport's work makes this connection obvious. He formulated the Byzantine Generals Problem and wrote the paper "Time, Clocks, and the Ordering of Events in a Distributed System", Communications of the ACM 21, July 1978.
Logical broadcast means every node ends up with the same state (time invariance); the only thing that matters is hashing power.
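The core rule from that 1978 paper can be sketched in a few lines: each node keeps a counter that ticks on local events, and on receiving a message it jumps past the sender's timestamp, so any causally related pair of events gets consistently ordered numbers. This is my own minimal rendering of the rule, not code from any cryptocurrency:

```python
class LamportNode:
    """Toy Lamport logical clock. Ticks on local events; on receive,
    takes max(local, message_time) + 1, so if event A can causally
    affect event B then clock(A) < clock(B)."""

    def __init__(self):
        self.clock = 0

    def local_event(self):
        self.clock += 1
        return self.clock

    def send(self):
        # Sending is itself an event; the timestamp travels with the message.
        self.clock += 1
        return self.clock

    def receive(self, msg_time):
        # Jump past the sender's clock so causality is never inverted.
        self.clock = max(self.clock, msg_time) + 1
        return self.clock
```

A blockchain can be read as a brute-force way of agreeing on one such ordering when nodes can lie, with hashing power deciding whose ordering wins.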