The original smart-contract language, E, is almost nothing like Solidity. On the topic of Turing-completeness: E is Turing-complete and has an `eval()` primitive, which would normally be dangerous. However, E comes with tools both to prove that any given use of `eval()` is safe and to limit Turing-complete behavior when needed.
Specifically, in E and Monte, we can write auditors: objects that can structurally prove facts about other objects. A common auditor in both languages is `DeepFrozen`; writing `as DeepFrozen` on an E or Monte object causes the `DeepFrozen` auditor to examine that object's AST and prove that it is transitively immutable.
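A loose Python analogy of what `DeepFrozen` certifies (real Monte auditors inspect the AST at definition time; this sketch, with an invented `deep_frozen` helper, merely walks a value's structure at runtime):

```python
# Types that are immutable on their own; tuples are frozen only if their
# contents are.
FROZEN_TYPES = (int, float, str, bytes, bool, frozenset, type(None))

def deep_frozen(value):
    """Return True if value is transitively immutable."""
    if isinstance(value, FROZEN_TYPES):
        return True
    if isinstance(value, tuple):
        return all(deep_frozen(item) for item in value)
    return False  # lists, dicts, arbitrary objects: not provably frozen

print(deep_frozen((1, "a", (2.0, None))))  # True
print(deep_frozen((1, [2, 3])))            # False: contains a mutable list
```

The point of doing this as an auditor rather than a runtime check is that the proof happens once, structurally, and other code can then rely on the certification.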
There's a Monte community member working on an auditor for primitive recursive arithmetic, inspired IIUC primarily by the EVM's failings.
The DAO hack happened because of a bug class known in the object-capability world as "plan interference"; these bugs happen when two different "plans", or composite actions of code flow, cross paths. In the DAO's case, a plan was allowed to recursively call into itself without first settling its own state. The EVM makes this extremely hard to get right. E and Monte have built-in syntax for separating elements of plans with concurrency: if you write `obj.doStuff()`, it happens now, but `obj<-doStuff()` happens later, on a subsequent turn.
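A minimal Python sketch of the interference (names and the explicit `turns` queue are illustrative, standing in for E/Monte's eventual-send machinery):

```python
class Wallet:
    def __init__(self, balance):
        self.balance = balance

    def withdraw_unsafe(self, amount, notify):
        if self.balance >= amount:
            notify()                 # untrusted callback runs *now*, mid-plan
            self.balance -= amount   # state updated too late

    def withdraw_safe(self, amount, notify, turns):
        if self.balance >= amount:
            self.balance -= amount   # finish this plan's state changes first,
            turns.append(notify)     # then schedule the callback for later

wallet = Wallet(100)
# A re-entrant callback observes the stale balance check:
wallet.withdraw_unsafe(100, lambda: wallet.withdraw_unsafe(100, lambda: None))
print(wallet.balance)  # -100: withdrew 200 from a balance of 100

wallet2 = Wallet(100)
turns = []
wallet2.withdraw_safe(100,
                      lambda: wallet2.withdraw_safe(100, lambda: None, turns),
                      turns)
while turns:
    turns.pop(0)()     # run deferred sends one turn at a time
print(wallet2.balance)  # 0: the re-entrant withdraw sees the updated balance
```

The deferred version is the shape `obj<-doStuff()` gives you for free: the callback cannot run until the current plan's state changes are complete.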
So, uh, yeah. Smart contracts aren't a bad idea, but Ethereum's not very good.
That's a classic form of GUI bug. Some widget calls something, which calls something else, which eventually calls back into the original widget while it's not in a stable state. Classic bugs in this area involve "save file", then "new folder", then renaming folders in a way that invalidates something upstream.
Do you think it would be possible to improve the EVM by adding E's notion of concurrency? One constraint would be the need to have deterministic scheduling, since every execution would need to be run identically by all validating nodes.
[edit] Incidentally, we pointed out several lessons from the ocap community in a report commissioned by the Ethereum Foundation back in 2015. Few of those suggestions were adopted at the EVM level or the higher levels, though.
https://github.com/LeastAuthority/ethereum-analyses/blob/mas...
Turing-completeness has very little to do with this.
- The bug causing the DAO debacle did not involve loops or jumps or weird machines or other behavior associated with Turing-machine complexity, but instead had to do with confusing behavior of storage and inter-contract communication.
- Ethereum is not really Turing-complete, since execution is bounded by gas. It is procedural, though.
- Many expensive errors in the cryptocurrency world (e.g. transactions with excessive fees, exchanges sending malleable transactions, transaction malleability) didn't involve the smart-contract system at all
- Decision-table based systems can hide bugs too. Is there any evidence that decision tables actually lead to fewer bugs, given the same amount of programmer time and attention?
See Wikipedia.[1] Decision tables allow most of the things you really need in a contract. Termination is guaranteed, because there are no loops. Processing is simple; the evaluator goes down the table rows until one evaluates to true. There's a simple tabular way to look at a decision table, so ordinary humans can read them.
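The evaluation rule described above is simple enough to sketch in a few lines of Python (the `run_table` API and the escrow-style rows are hypothetical, just to show the shape):

```python
# A decision table as (condition, action) rows: the evaluator walks the rows
# in order, fires the first true condition, and stops. No loops, so
# termination is guaranteed.

def run_table(table, state):
    for condition, action in table:
        if condition(state):
            return action(state)
    return state  # no row matched; state unchanged

# Toy escrow-like table over a dict:
table = [
    (lambda s: s["released"],
     lambda s: {**s, "payee": s["payee"] + s["amount"], "amount": 0}),
    (lambda s: s["deadline_passed"],
     lambda s: {**s, "payer": s["payer"] + s["amount"], "amount": 0}),
]

state = {"released": True, "deadline_passed": False,
         "amount": 50, "payer": 0, "payee": 0}
result = run_table(table, state)
print(result)  # funds go to the payee
```

Each row reads as one line of the contract, which is what makes the tabular form legible to non-programmers.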
Actions should have database-like transactional properties - either everything in the action happens, or none of it does. If you do a send and an update in an action, both must happen, or neither does.
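One way to sketch those all-or-nothing semantics in Python (assumed design: stage every change against a copy of the state and commit only if the whole action succeeds; the step functions are invented for illustration):

```python
import copy

def run_action(state, steps):
    staged = copy.deepcopy(state)
    try:
        for step in steps:
            step(staged)          # each step mutates only the staged copy
    except Exception:
        return state              # any failure: original state untouched
    return staged                 # every step succeeded: commit atomically

def send(s):   s["sent"] += 10
def update(s): s["balance"] -= 10
def fail(s):   raise RuntimeError("downstream call reverted")

ok = run_action({"sent": 0, "balance": 100}, [send, update])
print(ok)   # both effects applied: {'sent': 10, 'balance': 90}

bad = run_action({"sent": 0, "balance": 100}, [send, fail, update])
print(bad)  # neither effect applied: {'sent': 0, 'balance': 100}
```

This mirrors the requirement in the text: a send and an update inside one action either both land or both vanish.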
The big question is what primitives you're allowed to call from the table. They'll look a lot like the ones for Solidity, but need to support atomic transactions.
Decision tables would be better. Not as general, but understandable.