
Why Turing-complete smart contracts are doomed - jarsin
https://www.reddit.com/r/ethereum/comments/4p0um9/why_turingcomplete_smart_contracts_are_doomed/
======
schoen
This article overstates the problem quite a bit by saying, for example, "Only
non-Turing-complete languages support formal reasoning and verification".

The point of Turing's result and related theorems is not that you can't reason
formally about programs, or understand or predict what they do, or prove that
they are correct. It's that no automated method is powerful enough to decide
nontrivial properties for _every_ program; there are always programs for which
the decision procedure will either say it doesn't know, be wrong, or run
forever.

However, there are automated methods that can decide nontrivial properties for
_many_ programs, and the existence of programs where a given property can't be
decided doesn't mean that the answers, when they exist, have to be wrong.
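To make this concrete, here is a toy sketch (the mini-language and function names are invented for illustration): a sound but incomplete checker that decides termination for the program shapes it understands and honestly answers "unknown" for everything else, rather than guessing.

```python
def terminates(program: dict) -> str:
    """Return 'yes', 'no', or 'unknown' for a tiny loop description.

    {"kind": "count_down", "start": n} -> loop decrementing n to 0
    {"kind": "while_true"}             -> unconditional infinite loop
    anything else                      -> beyond this checker's power
    """
    if program["kind"] == "count_down":
        # Decrementing reaches 0 only if we start at or above it.
        return "yes" if program["start"] >= 0 else "no"
    if program["kind"] == "while_true":
        return "no"
    # The halting problem forces this bucket to be nonempty for any
    # checker, but the 'yes'/'no' answers above are still correct.
    return "unknown"

print(terminates({"kind": "count_down", "start": 10}))  # yes
print(terminates({"kind": "while_true"}))               # no
print(terminates({"kind": "collatz", "start": 27}))     # unknown
```

The point is that the mandatory existence of the "unknown" bucket doesn't taint the definite answers: when the checker says "yes" or "no", it is right.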

We _do_ have specific programs in Turing-complete languages whose behavior, or
whose correctness with respect to a specification, has been proven, and formal
methods that apply to them.

I think the article's conclusion might still be right, though: having
environments where you can't always determine correctness may be playing with
fire, so it may be a better choice to avoid that kind of risk entirely. But
that doesn't mean that, given a program, we're always going to be completely
in the dark about what the program does!

~~~
schoen
A thing that you could do for smart contracts is have formal-methods analyzers
that check properties that you care about in the contract. The analyzer can
say, of a particular contract, "good", "bad", or "don't know". Then you can
avoid (or even somehow ban?) contracts that get "bad" or "don't know".

Or, you could simplify things by having "bad" and "don't know" be in the same
bucket! You could call it "can't be proven safe". Because of the halting
problem issue, some safe contracts will always end up "can't be proven safe",
but if the analyzer is good enough to allow for quite a lot of flexibility in
the contracts, you can say that writing something that can't be proven safe is
the contract author's fault or responsibility.
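A minimal sketch of that gating policy (the analyzer here is a stand-in invented for this example, not a real verifier): collapse "bad" and "don't know" into one "can't be proven safe" bucket, and only accept contracts the analyzer positively certifies.

```python
from enum import Enum

class Verdict(Enum):
    SAFE = "safe"
    CANT_PROVE_SAFE = "can't be proven safe"

def analyze(contract: str) -> str:
    """Stand-in analyzer: returns 'good', 'bad', or 'don't know'.

    A real one would run formal-methods checks; this toy just keys
    off keywords to show the three possible outcomes.
    """
    if "reentrancy" in contract:
        return "bad"
    if "assembly" in contract:
        return "don't know"  # too complex for this toy analyzer
    return "good"

def gate(contract: str) -> Verdict:
    # Only "good" passes; "bad" and "don't know" share one bucket,
    # so the gate never has to be right about the hard cases.
    if analyze(contract) == "good":
        return Verdict.SAFE
    return Verdict.CANT_PROVE_SAFE

print(gate("simple transfer"))        # Verdict.SAFE
print(gate("uses inline assembly"))   # Verdict.CANT_PROVE_SAFE
```

The design choice is that the gate stays sound even though the analyzer is incomplete: a safe-but-unprovable contract is rejected, which is an inconvenience for its author, not a security hole.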

Then you can say that you will only use contracts that can be proven safe.
There's no contradiction between this and Turing completeness or the halting
problem, and there's no inherent reason to think that these contracts will be
rare or hard to find. You do have to be careful about what particular
properties you're checking for in your definition of "safety", though!

