I just hope that poor guy can still sleep instead of lying awake at night, listening for the soft tapping of the assassins' padded steps on his bedroom floor.
I can imagine someone out there is pretty unhappy with him.
The thing is, developers capable of implementing smart contracts correctly are probably exceptionally rare, if they exist at all.
I don't think it's impossible, but it's clearly in conflict with the "release early, release often" culture. If anything, developing smart contracts is more like building software for NASA spacecraft: you have to get it right before launch. And even NASA, I suspect, still has room for some remote patching. In smart contracts, perfect is not the enemy of good; they are synonymous.
It's not the release culture that is at fault here. It takes ages for a language to mature. Staking millions of dollars on an untested language is asking for trouble.
I recently started writing some Solidity, and there are syntax constructs in the language (e.g. payable functions) that you won't find in other languages, because those languages don't deal with such concepts. Could you have used a modified version of Java, or simply embedded those concepts in a library? Probably. I guess they wanted a specialized language.
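To make the payable point concrete for readers who haven't seen Solidity: a function not marked payable automatically rejects any ether attached to the call. Here is a toy Python model of that behavior (the RevertError, payable decorator, and Tipjar names are all made up for illustration; this is a sketch of the concept, not how the EVM actually works):

```python
class RevertError(Exception):
    """Stand-in for an EVM revert: the call fails and the value is returned."""

def payable(fn):
    """Mark a contract method as allowed to receive attached value."""
    fn._payable = True
    return fn

class Contract:
    def __init__(self):
        self.balance = 0

    def call(self, fn_name, value=0):
        fn = getattr(self, fn_name)
        # Core of the payable rule: attaching value to a non-payable
        # function reverts the whole call.
        if value > 0 and not getattr(fn, "_payable", False):
            raise RevertError(f"{fn_name} is not payable; reverting {value} wei")
        self.balance += value
        return fn()

class Tipjar(Contract):
    @payable
    def deposit(self):        # accepts attached ether
        return self.balance

    def get_balance(self):    # plain function: attaching ether reverts
        return self.balance

jar = Tipjar()
jar.call("deposit", value=100)   # ok: deposit is payable
try:
    jar.call("get_balance", value=1)  # rejected, like a non-payable call
except RevertError as e:
    print(e)
```

The point is that "money can arrive with a function call" is a first-class language concept, which is hard to retrofit onto Java without it becoming a convention the compiler can't enforce.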
The author mentions that Turing-complete contracts will eventually have bugs in them. While that is how things currently go, computer science does have formal verification methods, and that's exactly what people are trying to apply to Ethereum: formally verified smart contracts. https://blog.ethereum.org/2016/06/19/thinking-smart-contract...
Formal methods have been under development for decades, yet their penetration into software development is minuscule. Hardly any professional developers know how to use them. Turning Ethereum and its contracts into a formally verified system would be the largest adoption of formal methods by several orders of magnitude.
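For readers unfamiliar with the distinction: what most developers can do today is bounded checking, i.e. exhaustively testing an invariant over a small state space, whereas formal verification proves it for all states. The sketch below (the transfer function and bounds are invented for illustration) checks a conservation-of-balance invariant the way a bounded model checker might; a proof assistant or SMT solver would establish the same property without the bounds:

```python
from itertools import product

def transfer(balances, src, dst, amount):
    """Toy transfer: returns new balances, or None if the sender can't pay."""
    if balances[src] < amount:
        return None  # insufficient funds: reject rather than go negative
    new = dict(balances)
    new[src] -= amount
    new[dst] += amount
    return new

def check_conservation(max_balance=5, max_amount=5):
    """Exhaustively check that every successful transfer preserves the
    total supply and never produces a negative balance -- but only up to
    the given bounds, which is exactly what a proof would remove."""
    for a, b, amt in product(range(max_balance + 1),
                             range(max_balance + 1),
                             range(max_amount + 1)):
        before = {"alice": a, "bob": b}
        after = transfer(before, "alice", "bob", amt)
        if after is not None:
            assert sum(after.values()) == sum(before.values()), (a, b, amt)
            assert all(v >= 0 for v in after.values()), (a, b, amt)
    return True
```

Even this trivial check requires stating the invariant precisely, which is the part most developers have never been trained to do.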
Ironically, the article you link to does not support your post: it also stresses the difficulty of applying formal methods, and claims instead that various interventions, of a sort that have not been particularly successful in securing software generally, will be sufficient to satisfy the particularly stringent requirements of a cryptocurrency. (Of course, to say that Ethereum needs formal methods in order to be usable would be to admit that it is not close to being so, so that view will always be contested by the Ethereum Foundation. At Cardano [1], however, they do seem to think formal methods will be part of their solution, though I am not sure to what extent.)
This is all mostly moot, however, as the Ethereum we have now is demonstrably insecure by any measure.
Yeah, true. It is used in some industries, such as aerospace and avionics, where you need to be sure your code is correct.
However, it comes with a range of problems, and often it's not worth the effort of formally verifying a program. With ether, though, the potential losses quickly outweigh the costs.
I don't know if it will work; in theory it can. I just wanted to be positive: I see a light at the end of the tunnel.
It just moves the goalposts. A formally verified contract is only as good as the assertions proven. Look at WPA2: it was formally verified against 'most' attacks.
Just to be clear, this is an argument that even formal methods might not be sufficient to secure Ethereum (and certainly not if applied piecemeal), not an argument that it can be secured by other means.
Bank accounts and wire transfers that move trillions are also written by programmers, so should we stick to a cash-only economy? The problem is the immaturity of the cryptocurrency/contract market, which is still in its infancy. I'm optimistic about the future of the crypto economy, as most of the current problems could be fixed (or at least alleviated) by standards, regulations and discipline.
Bank accounts, wire transfers that move trillions are also written by programmers
Sure, but they're written in ways that humans can generally roll back a mistake and if two humans disagree about what should be done (due to a 'bug' or inconsistency in the contract) there are processes where a third human can listen to their arguments and decide what sounds like the most reasonable course of action. At no point is the code written by programmers the final arbiter of what the 'correct' course of action is.
Of course no system is completely bulletproof against a determined attacker. There are however a few differences. First of all what these people did is obviously illegal and people go to jail for doing that sort of thing. As it stands there is no legal consensus on whether exploiting a bug in a smart contract is illegal or not.
Secondly, and more importantly, the fail-safes built into the system largely work. In the Bangladesh case, for example, the vast majority of the money the attackers were attempting to steal was never transferred, because humans intervened and overrode the transfer orders. And even a non-trivial amount of the money that was transferred was later recovered by reversing the transaction.
Exactly. And this is only because there are no regulations making it illegal and no standard protocols to follow, which, as I mentioned in my first post, is the key requirement if the crypto economy is to be anything more than a crypto-anarchist's wet dream.
But if one of our design requirements is that people can override any contract and roll back any transaction, what do we gain from a blockchain/crypto-based solution?
It just goes to show why developing a mature language is a difficult task. There are always situations that were never thought of, or consequences that were truly unintended.
Additionally, for the love of god, why are people experimenting on the main net? My understanding was that Ethereum has a test net for trying out and learning smart contract coding. Is it too different from the main net to learn on?
You lost me halfway there. Experiment and production don't go together. Still, what is "productionized main net code"? Please elaborate.
Though looking at the screenshots, especially the third one, he claims to be learning. He was experimenting by sending kill() and destroy() to contracts on the main net.
I can clarify. My point is that code on the Ethereum main net is immutable and permanent. If it hadn't been this guy, it would have been someone else, malicious or not.
If you deploy code to the Ethereum main net, it is exposed to the world. The issue isn't that the guy was toying with main-net contracts; the issue is that the contract was broken, leaving room for exactly this kind of problem.
Ethereum has been extraordinarily prone to these sorts of errors so far (see the DAO). As of now, the foundation is unstable and the stakes are high.
How about the following solution:
Each contract specifies a time-dependent 'bounty' for code reviewers. A code reviewer can lock some of his ether in the contract for a specified time. If no vulnerability is found within that time, he gets his ether back plus a percentage of the bounty. If a vulnerability is found during that time, his ether goes to the bug finder.
Everybody can see how much ether the reviewers have locked in a contract, which could increase users' confidence in a smart contract and encourage bug hunters to prove the code reviewers wrong.
The bounty could be implemented as a percentage of the transactions/deposits of a smart contract, which would ensure that popular smart contracts have many reviews, with lots of ether locked to prove it. On the other hand, a smart contract that processes only 0.01 ether in its lifetime doesn't need reviews at all.
If a code reviewer finds a bug but doesn't report it before the contract is deployed, he can then become the 'bug finder' himself, taking all the ether in the bounty plus his own stake.
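The escrow mechanics described above can be sketched in a few lines. This is plain Python rather than Solidity, and the class, the 10% bounty rate, and the pro-rata payout rule are my own assumptions filled in where the proposal leaves details open:

```python
class ReviewBounty:
    """Toy model of the proposed review-bounty escrow: reviewers lock a
    stake for a fixed period; a bug report within the window sends all
    stakes plus the bounty to the finder, otherwise each reviewer recovers
    the stake plus a share of the bounty proportional to it."""

    def __init__(self, bounty, lock_period):
        self.bounty = bounty            # ether set aside by the contract author
        self.lock_period = lock_period  # how long stakes stay locked
        self.stakes = {}                # reviewer -> locked ether
        self.bug_found = False

    def lock_stake(self, reviewer, amount):
        self.stakes[reviewer] = self.stakes.get(reviewer, 0) + amount

    def report_bug(self, now):
        """A valid bug report inside the window collects everything."""
        if now < self.lock_period and not self.bug_found:
            self.bug_found = True
            payout = self.bounty + sum(self.stakes.values())
            self.stakes.clear()
            return payout
        return 0  # too late, or already claimed

    def withdraw(self, reviewer, now):
        """After the lock period, an unchallenged reviewer recovers the
        stake plus a pro-rata share of the bounty."""
        if now >= self.lock_period and not self.bug_found:
            total = sum(self.stakes.values())
            stake = self.stakes.pop(reviewer, 0)
            share = self.bounty * stake / total if total else 0
            self.bounty -= share
            return stake + share
        return 0
```

Even in this simplified form, the incentive question is visible: the reviewer's upside is the bounty share, while the downside is the full stake, which is exactly the risk/little-at-stake tension raised below.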
Even if the code reviewer is honest, there are some economic problems:
- A code reviewer will find a balance between time spent, the amount to put into the time-dependent 'bounty', and the probability of a bug that didn't come up during review --> little-at-stake problem
- If you force the code reviewer to put a significant amount of ETH into the time-dependent bounty, you won't find any reviewers willing to work for you, because the risk to them is huge --> risk problem
How would that have worked with the Parity 'hack'?
- Parity deploys their multisig contracts and sets up a bounty with code reviewers. AFAIK it wasn't even a bug but a poorly deployed contract library, so the reviewers would have told Parity to go ahead and deploy the multisig contract. Parity would have deployed it the wrong way (as they did), and the 'hack'/accident would still have happened.
If your time-dependent contract were separate from Parity's multisig, the reviewer would still get his ETH back once the time lock releases. Alternatively, the reviewer's funds would also be frozen.
Hopefully formal proofs of contracts will save us someday. Alternatively, a blockchain with a governance scheme that takes care of this sort of thing would also be useful. Wait a second... am I describing Tezos? Let's wait for them to launch and see if that works better.
Layman here: could we not keep Solidity Turing-complete as it is, and just build a restricted, audited, non-Turing-complete framework on top of it, then encourage regular programmers to use the framework and its predefined audited functions for day-to-day contracts? Occasionally, new tools could be developed in Solidity, carefully audited, and added as safe to the regulated framework.
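One way to read this suggestion: everyday contracts would be straight-line compositions of whitelisted primitives, with no loops or recursion, so every program terminates by construction, while new primitives still get written (and audited) in the full language. A minimal sketch in Python, with the operation names and template format invented for illustration:

```python
def credit(state, acct, amount):
    """Audited primitive: add funds to an account."""
    state[acct] = state.get(acct, 0) + amount

def debit(state, acct, amount):
    """Audited primitive: remove funds, refusing to overdraw."""
    if state.get(acct, 0) < amount:
        raise ValueError("insufficient balance")
    state[acct] -= amount

# The whitelist: only operations that have passed audit are callable.
AUDITED_OPS = {"credit": credit, "debit": debit}

def run_template(program, state):
    """Execute a straight-line list of audited ops. There is deliberately
    no control flow here, so the language is not Turing-complete and every
    program runs in a number of steps equal to its length."""
    for op, *args in program:
        fn = AUDITED_OPS.get(op)
        if fn is None:
            raise ValueError(f"unaudited operation: {op}")
        fn(state, *args)
    return state
```

For example, `run_template([("credit", "alice", 10), ("debit", "alice", 3)], {})` leaves alice with 7, while any operation outside the whitelist is rejected outright. The trade-off is expressiveness: anything the templates can't say still has to go through the full, risky language.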
As far as I remember, Ethereum smart contracts were designed to be Turing-complete because that would make them "more powerful" than Bitcoin Script. It seems that with great power comes great responsibility...