
Ethereum White Paper - _superposition_
https://github.com/ethereum/wiki/wiki/%5BEnglish%5D-White-Paper
======
terhechte
The explanation of all the opcodes for the contract script does not list any IO
operations for external (non-storage) data (at least none that I could see).
However, in the list of Ethereum examples, the following is given:

"Crop insurance. One can easily make a financial derivatives contract but
using a data feed of the weather instead of any price index. If a farmer in
Iowa purchases a derivative that pays out inversely based on the precipitation
in Iowa, then if there is a drought, the farmer will automatically receive
money and if there is enough rain the farmer will be happy because their crops
would do well."

Does somebody know how the contract would be able to read a data feed? I did
not find anything in the language spec. Such a feature would be really
interesting. On the one hand it would allow for a ton of interesting options
(binding contracts to stock markets, emails, political events); on the other
hand, how would a contract behave if the required data URL is gone (i.e. the
weather data feed changed from weather.php?v1 to /v1/weather)? What's more,
wouldn't there be strong incentives for man-in-the-middle attacks to deliver
wrong data? This sounds like a very powerful yet dangerous feature, so I'd
love to learn more about the spec and reasoning behind this.

~~~
mquandalle
There will be no IO function like `http_get_content` because we can't securely
rely on one. Instead the method will be to rely on a trusted entity (or a
group of trusted entities) to sign an Ethereum transaction containing some
data that will be used by the contract. For instance you could take a look at
_Reality Keys_, a "trusted data feeds" platform [1].

If you want to reduce the need to trust a single party you can also use a
SchellingCoin system, described as a "decentralized data feed" in the ethereum
white paper v2 [2].

[1]
[http://forum.ethereum.org/discussion/comment/180/#Comment_18...](http://forum.ethereum.org/discussion/comment/180/#Comment_180)

[2] [http://blog.ethereum.org/2014/03/28/schellingcoin-a-minimal-...](http://blog.ethereum.org/2014/03/28/schellingcoin-a-minimal-trust-universal-data-feed/)
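
To make this concrete, here is a minimal sketch in plain Python (not Ethereum's actual contract language; the names `FeedContract` and `CropInsurance` are hypothetical) of the trusted-feed pattern described above: the feed operator pushes values to the chain from its own address, the feed contract rejects updates from anyone else, and the insurance contract settles against whatever the feed reported.

```python
# Illustrative sketch only: plain Python standing in for contract logic.

class FeedContract:
    """Stores values pushed by a single trusted operator (e.g. a Reality
    Keys-style service). The "signature" here is simply the fact that the
    update arrives in a transaction sent from the operator's own address."""

    def __init__(self, operator_address):
        self.operator = operator_address
        self.values = {}  # e.g. {"rainfall_iowa_2014_07": millimetres}

    def update(self, sender, key, value):
        if sender != self.operator:   # ignore data pushed by anyone else
            return False
        self.values[key] = value
        return True


class CropInsurance:
    """Pays the farmer if reported rainfall is below a threshold,
    otherwise the insurer keeps the funds."""

    def __init__(self, feed, farmer, insurer, threshold_mm, payout):
        self.feed, self.farmer, self.insurer = feed, farmer, insurer
        self.threshold_mm, self.payout = threshold_mm, payout

    def settle(self, key):
        rainfall = self.feed.values.get(key)
        if rainfall is None:
            return None               # feed never reported; cannot settle yet
        return self.farmer if rainfall < self.threshold_mm else self.insurer


# Usage: the operator reports drought-level rainfall, so the farmer is paid.
feed = FeedContract(operator_address="0xfeed")
feed.update("0xfeed", "rainfall_iowa_2014_07", 3)
policy = CropInsurance(feed, "farmer", "insurer", threshold_mm=10, payout=100)
assert policy.settle("rainfall_iowa_2014_07") == "farmer"
```

A SchellingCoin-style feed would replace the single operator with many independent reporters and settle on the median of their submissions instead.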

~~~
terhechte
Thanks, that totally makes sense.

------
mquandalle
This version contains some mistakes and approximations. For a more up-to-date
paper you should take a look at the White paper v2 draft [1].

A more formal and technical description is presented in the Yellow paper (also
draft) [2].

[1]
[https://github.com/ethereum/wiki/wiki/Whitepaper-2-Draft](https://github.com/ethereum/wiki/wiki/Whitepaper-2-Draft)

[2] [http://gavwood.com/Paper.pdf](http://gavwood.com/Paper.pdf)

------
adamfeldman
The 2006 book Rainbows End by Vernor Vinge addresses this topic from a science
fiction perspective. A major plot element is the ability of individuals to
enter into "affliances": digital, automatically-escrowed contracts between
individuals providing small services in networks created on-demand to produce
larger-scale goods and services.
[https://en.wikipedia.org/wiki/Rainbows_End](https://en.wikipedia.org/wiki/Rainbows_End)
(copied my past comment at
[https://news.ycombinator.com/item?id=7287584](https://news.ycombinator.com/item?id=7287584))

Also, here's a related past HN thread:
[https://news.ycombinator.com/item?id=7287155](https://news.ycombinator.com/item?id=7287155)

------
bachback
Anyone who thinks GHOST is a good idea has not understood Bitcoin at all. The
whole point of the blocks is that nodes can work on the global state in a
chain, so the idea that nodes should work on greedy subtrees is about the
worst possible idea. Bitcoin solves not only the Byzantine generals problem
but also a latency variance problem, to achieve logical broadcast. Anyway, the
author of this paper also believes that "anyone with reasonably high
intelligence could have invented Bitcoin by random luck" [1]. Well, no. There
are many hidden problems which Bitcoin solves; the literature on quorum
systems, distributed applications, etc. is very deep.

[1]
[http://www.reddit.com/r/Bitcoin/comments/20oyes/brilliant_an...](http://www.reddit.com/r/Bitcoin/comments/20oyes/brilliant_and_comprehensive_smackdown_of_leah/cg5ikzo)

~~~
gone35
Completely agree.

Judging from the technical description [1] and the most recent version of the
draft white paper [2] alone, I think the project would benefit greatly from
more substantial input and expertise from the academic cryptography community.
I'm afraid they are spending a lot of effort cargo-culting on the wrong
things, while missing where the real challenge lies: How to deal with the
inevitable orders-of-magnitude increase in transaction size/complexity while
preserving consensus-based distributed verification.

To be more precise: The metered computation mechanism is a very clever
solution for dealing with unrestricted computation and unbounded data storage,
but it does not convincingly address the inevitable increase in space
required. Hand-waving about greedy subtree verification without actual
numbers/scenarios that show this could work at all (which would be very
surprising, to say the least) is not convincing.

All in all a great idea, though. It would be great if it works out.

[1] [http://gavwood.com/Paper.pdf](http://gavwood.com/Paper.pdf)

[2]
[https://github.com/ethereum/wiki/wiki/Whitepaper-2-Draft](https://github.com/ethereum/wiki/wiki/Whitepaper-2-Draft)

~~~
vbuterin
> I'm afraid they are spending a lot of effort cargo-culting on the wrong
> things, while missing where the real challenge lies: How to deal with the
> inevitable orders-of-magnitude increase in transaction size/complexity while
> preserving consensus-based distributed verification.

Umm... that's exactly one of the issues that we're cargo-culting about the
most. There are two general categories of solutions to this problem: technical
increments (i.e. a better constant factor), and fundamental cryptographic
upgrades (e.g. the stacktrace challenge-response concept we have been talking
about on our blog). The first category we are not yet doing because we are
following the well-established advice of "don't prematurely optimize". The
second category, well, that's why we're thinking of ideas like distributed
blockchain storage, clever algorithms to force more people to be full nodes,
and challenge-response protocols. There is also another idea I was thinking
of, which I'll have a post up about over the next week or two.

In the long term, we are already beginning to develop a very widespread
collaboration with academic groups to try to tackle the problems in
cryptocurrency, and at this point we fully expect we'll end up releasing
Ethereum 2 at some point in 2016, which would take a lot of new cryptography
into account.

~~~
nullc
> (eg. the stacktrace challenge-response concept we have been talking about on
> our blog)

You wrote on your blog:

> Altogether, what this means is that, unlike Bitcoin, Ethereum will likely
> still be fully secure, including against fraudulent issuance attacks, even
> if only a small number of full nodes exist; as long as at least one full
> node is honest, verifying blocks and publishing challenges where
> appropriate, light clients can rely on it to point out which blocks are
> flawed

The fact that someone can extract compact proofs of an invalid state
transition was pointed out by me years ago (e.g.
[https://bitcointalk.org/index.php?topic=96644.msg1064601#msg...](https://bitcointalk.org/index.php?topic=96644.msg1064601#msg1064601)),
and I believe I described it to you personally in Mountain View. It's equally
applicable to Bitcoin (though not implemented anywhere for any system yet).

It's a bit irritating to see ideas from Bitcoin recycled as "innovations" in
altcoins and incorrectly claimed as not applicable to Bitcoin, especially
when they're not even implemented yet.

This one has a bunch of gnarly engineering issues that make it hard to
implement. You end up with fraud codepaths that are virtually never executed,
so how do you gain confidence that multiple implementations actually implement
them consistently? The best proposal I'd had on this (from bitcoin-wizards)
was to always produce two versions of a block, committed under a common root,
one of which has a random flaw, and then always kill it and select the right
block using a proof. But that's kind of complex and indirect.
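
For anyone following along, here is a toy sketch (plain Python, hypothetical names, heavily simplified) of the compact fraud-proof idea under discussion: a full node replays a block, and when it hits an invalid state transition it publishes a small challenge that a light client can check without holding the chain. A real design would supply Merkle branches against the committed state root rather than raw balances.

```python
# Illustrative sketch only; state is a plain dict of balances.

def apply_tx(balances, tx):
    # A transfer is valid only if the sender can afford it.
    frm, to, amount = tx["from"], tx["to"], tx["amount"]
    if balances.get(frm, 0) < amount:
        raise ValueError("invalid transition: overspend")
    out = dict(balances)
    out[frm] = out.get(frm, 0) - amount
    out[to] = out.get(to, 0) + amount
    return out

def make_fraud_proof(pre_balances, block):
    """Full node: replay the block; on an invalid tx, emit a compact
    challenge containing just the offending tx and the balances it touches."""
    balances = pre_balances
    for tx in block["txs"]:
        try:
            balances = apply_tx(balances, tx)
        except ValueError:
            touched = {k: balances.get(k, 0) for k in (tx["from"], tx["to"])}
            return {"tx": tx, "touched_pre": touched}
    return None  # block is valid, nothing to challenge

def light_client_verify(proof):
    """Light client: re-run only the challenged tx against the supplied
    pre-state for the touched accounts; if it fails, reject the block."""
    try:
        apply_tx(proof["touched_pre"], proof["tx"])
        return False  # the tx applies cleanly, so the challenge is bogus
    except ValueError:
        return True   # the tx really is invalid; the block should be rejected

pre = {"alice": 5, "bob": 0}
bad_block = {"txs": [{"from": "alice", "to": "bob", "amount": 50}]}
proof = make_fraud_proof(pre, bad_block)
assert proof is not None and light_client_verify(proof)
```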

~~~
vbuterin
> It's equally applicable to Bitcoin (though not implemented anywhere for any
> system yet).

I don't recall you describing it to me, but it's likely you did and I didn't
realize it was important at the time. Also, I never claimed that challenge-
response protocols are not applicable to Bitcoin; theoretically, we know that
any scalability improvement that is applicable to Ethereum would also be
applicable to Bitcoin, simply because you can implement Bitcoin as an Ethereum
contract. Rather, I made the claim that Bitcoin _does not_ have full support
for such protocols. I count myself among those who are skeptical that
substantial changes to the Bitcoin protocol will ever be made, primarily
because of the "changing an engine on the run" problem and the nasty political
issues involved (see: the recent Counterparty spat). Cryptocurrencies are not
sets of abstract ideas; they are protocols that are implemented in code today
and have to be judged on their merits as they actually are. And Bitcoin, as it
actually is, is not fully secure with a light client.

~~~
nullc
> will ever be made

And yet they've been made in the past. It doesn't require any hard forking or
incompatible changes, just some additional messages which can be ignored by
old implementations. The Bitcoin community has made one soft-forking protocol
change per year for several years and will almost certainly make one, and
possibly more, this year.

> Cryptocurrencies are not sets of abstract ideas, they are protocols that are
> implemented in code today and have to be judged on their merits as they
> actually are.

As I noted, Ethereum doesn't implement this yet; if you'd implemented it and
worked out the gnarly engineering issues in actually doing so, I'd have
credited you for that.

But right now, it's just an idea. One which is equally applicable to Bitcoin
and which was described within the Bitcoin ecosystem as an improvement for
Bitcoin years ago. And not just as armwaving: I at least went as far as
enumerating the things we'd need to do before I got mired in the problem of
how to make it not an extreme risk in the face of alt implementations,
something which isn't solved even absent almost-never-executed anti-fraud
code paths. It'll be super awesome to see you implement it, if you do.

But it's hard to respect your good work when it results in a lot of people
being misinformed about advantages because you've been sloppy about
attribution. The end result is that you produce armies of technically
unsophisticated people who believe it's the gospel truth that Bitcoin can't
do this.

~~~
vbuterin
So the actual protocol change that needs to be made in order to make
challenge-response protocols fully effective is basically the inclusion of
merkle-sum-trees - make each node N = [ H, F ] where H = sha256(N.child0,
N.child1) and F = N.child0.F + N.child1.F. Otherwise, there's no way to
efficiently prove that a block does not have excessive fees. Unless you do
some crazy ugly hack like creating a separate overlay merkle tree with its
root being output 1 of the coinbase, that's a hard-forking protocol change.
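
To spell out that node rule, here is a small Python sketch (the helper names and the exact byte serialization of a node are my own guesses; the rule above only fixes the [ H, F ] structure): every node carries a hash plus a running fee total, so the root commits to the total fees in a block and a single branch can prove any one transaction's contribution to it.

```python
import hashlib

def sha(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaf(fee: int) -> tuple:
    # A leaf commits to a single transaction's fee.
    return (sha(fee.to_bytes(8, "big")), fee)

def parent(left: tuple, right: tuple) -> tuple:
    lh, lf = left
    rh, rf = right
    # H covers both children's hashes and fee totals; F sums the fees.
    return (sha(lh + lf.to_bytes(8, "big") + rh + rf.to_bytes(8, "big")), lf + rf)

def build_root(fees):
    level = [leaf(f) for f in fees]
    while len(level) > 1:
        if len(level) % 2:  # pad odd levels with an empty zero-fee leaf
            level.append((sha(b""), 0))
        level = [parent(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

root_hash, total_fees = build_root([10, 3, 7, 25])
assert total_fees == 45  # the root's F field is the block's total fee
```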

As for changes actually being implemented, to be honest I haven't seen
anything actually substantial since P2SH. The one big change that would
benefit everyone now, increasing the block size limit, has been on the table
for a year now with absolutely no progress toward pushing it through. If it
does come close to happening, then I'll publish a Bitcoin Magazine article
cheering it on. For now, though, it seems as far away as ever.

What I generally want to say when I make such statements in blog posts is "We
_plan_ to do something that others have not yet _put into practice_"; of
course, that just means we're equal and not better, but the point is to say
that we're moving quickly and we'll get there soon. Bitcoin is currently a
slow-moving target, and given the $5 billion of existing capital stored
inside of it, it would be irresponsible to do things any other way; so I think
it's unlikely that Bitcoin will develop second-layer scalability protocols
first. If you wish to wait for actual results then that is a philosophy that I
very much respect.

I have realized over time that pretty much nothing in Ethereum is new; Turing-
complete contracts were in Ripple and Qixcoin (although I was not _thinking
of_ either of those two, and I did not even realize that Ripple contracts
were Turing-complete, when I came up with the idea), Patricia tries I got from
Alan Reiner back in 2012, all sorts of clever blockchain designs were mulled
over on bitcointalk in 2009, and that doesn't even begin to describe the
legions of forgotten hackers on cypherpunk mailing lists in the 1990s. A few
weeks ago I learned about the concept of "rules engines". And then of course
there's Yap stones. Meanwhile, Vertcoin is coming up with a memory-hard proof
of work that claims to be revolutionary and powerful but runs into a
fundamental scalability issue that I solved months ago with Dagger. So perhaps
I do need to tone down my "this is amazing and new" rhetoric; but at the same
time, I've come to realize that since we are philosophically similar people
attacking similar problems, some degree of collision, whether of the
"independent discovery" form or the "heard about it, forgot it, reinvented it
without realizing" form, is inevitable.

~~~
nullc
> So the actual protocol change that needs to be made in order to make
> challenge-response protocols fully effective is basically the inclusion of
> merkle-sum-trees

Where do you think the words "merkle-sum-trees" came from? :)

> Unless you do some crazy ugly hack like creating a separate overlay merkle
> tree with its root being output 1 of the coinbase, that's a hard-forking
> protocol change.

No crazy hack is required. You just include a commitment to a merkle-sum-tree
of transaction values, along with the UTXO commitment. It doesn't have to
commit to transactions; it's just a tree of values. There is no loss of
efficiency, and you don't even have to transmit the data normally since all
full nodes already have it.
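
As a small illustration of what that commitment buys (the field names below are made up for the example, not from any spec): once the fee total from the sum-tree root is committed in a block, a light client can sanity-check the coinbase against subsidy plus committed fees without replaying any transactions; the fraud proofs discussed above are what would keep the committed total itself honest.

```python
# Illustrative only: a light-client check against a committed fee total.

def coinbase_is_honest(block) -> bool:
    # committed_fee_total is the F field of the committed merkle-sum-tree root.
    return block["coinbase_value"] <= block["subsidy"] + block["committed_fee_total"]

assert coinbase_is_honest({"subsidy": 25, "committed_fee_total": 45, "coinbase_value": 70})
assert not coinbase_is_honest({"subsidy": 25, "committed_fee_total": 45, "coinbase_value": 100})
```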

> As for changes actually being implemented, to be honest I haven't seen
> anything actually substantial since P2SH

Well, that's a step back from the position you took above that it can't change
at all, ever. Now it's that the changes haven't been substantial and haven't
come often enough. ::shrugs::

> that would benefit everyone

It's highly debatable that it would benefit everyone now; we're certainly not
up against the limit. People are still using the blockchain in very
inefficient ways, and the ecosystem of tools to increase efficiency hasn't
developed yet. At the same time, the count of full nodes is falling;
increasing the cost of running one right now may not be a good strategy.

> so I think it's unlikely that Bitcoin will develop second-layer scalability
> protocols first

I don't have any interest in being first. I'd much rather have a well-designed
and considered approach. Unfortunately, so far, none of the alt-systems (even
ones which raised millions of dollars of funding) have developed anything that
turned out to be useful to implement in Bitcoin. Maybe that will change.

------
rsync
I can't decide if I want to see ethereum implemented in dwarf fortress ... or
dwarf fortress implemented in ethereum ...

------
higherpurpose
I hope they focus more on their Go client than the C++ one. Something like
this needs to be as secure as it can get. There's no reason to leave it
vulnerable to memory leaks and the like if they don't have to.

~~~
ixmatus
If that's your reason for trusting an implementation in Go over C++ then I
would say they should level up and use Haskell. Or even better, Agda. There's
also ATS but I think the community around Haskell and Agda is more populous.

A garbage-collected language is an improvement over possible memory leaks due
to manual, human-managed memory allocation, but I think a larger problem is
most programmers' (very human) inability to separate pure from impure code.
Languages with very strong type guarantees do it for you and give you the
tools to protect yourself from a lot of problems that are common in any
language from Assembly all the way through to Go (I think Rust is actually
well suited as well since it has a better type system than Go, but I haven't
used the language so I can't speak to that).

[EDIT] minor edits.

~~~
mrec
The Rust folk tend to avoid talking in terms of purity nowadays. Graydon had a
good writeup of the pitfalls:
[http://thread.gmane.org/gmane.comp.lang.rust.devel/3674/focu...](http://thread.gmane.org/gmane.comp.lang.rust.devel/3674/focus=3855)

------
thecopy
Began reading on the tram home... need to read up on basic cryptocurrencies.

