// Forward the delegation as long as
// `to` also delegated.
// In general, such loops are very dangerous,
// because if they run too long, they might
// need more gas than is available in a block.
// In this case, the delegation will not be executed,
// but in other situations, such loops might
// cause a contract to get "stuck" completely.
In general, the easier the code is to read and the harder it is to write, the better. (Force the programmer to think carefully, not the reader!) Anything that gets a comment like that in the Solidity examples should at the very least refuse to compile without the programmer adding some attention-grabbing _UNSAFE annotation. Better, there should be mechanisms to make sure the code is written in a way that everyone understands the consequences of e.g. running out of gas in the middle of a function.
This will be hard. While Solidity certainly has problems unto itself, some of its insecurity comes from the EVM's design, which is almost laughably low level and thus very hard to reason about. It certainly doesn't seem to be informed by modern VMs like LLVM, the JVM or BEAM, which know a great deal more about the semantics of the programs they run and provide features like method dispatch. My guess is the approach was "Bitcoin with a few more opcodes", and the result is more like an '80s-era CPU than a "VM".
As a result, the compiler is tasked with running the whole show. Add to this the coupling of RPC to Solidity's mangle-check-and-jump dispatch approach, and you start to see why there's been so little innovation in this area: Solidity has a tight grip on the Ethereum ecosystem. Also, writing a compiler to this substrate is not easy, and you're penalized for code size (there's a limit on how big a contract can be).
I'm opinionated, as an author of a competing smart contract language (http://kadena.io/pact) that runs in an interpreter, is Turing-incomplete, has single-assignment variables etc etc which we think makes a lot more sense for the kind of stylized computing you're doing on a blockchain. We even have the ability to compile our code to SMT-LIB2 for use with the Z3 theorem prover and will be talking more soon about our DSL for writing proofs. Interestingly though, we find that choosing the right domain for your language goes a long way towards safety AND expressiveness, so that you're not constantly cursing your compiler/interpreter while also worrying less about $50M exploits :)
I was thinking about creating a functional contract DSL that Coq could extract to. Not as familiar with Z3 and SMT solvers, but that's clearly another good approach to safety (I think Ethereum has finally started looking at formal verification after the DAO debacle, not sure what the status is. EDIT: looks like they've made a good bit of progress the past few months).
What about the status of your project? I see you've bootstrapped the contract language and a consensus protocol, do you plan on bootstrapping a P2P network? Or have you done that already and I missed it?
As for a public platform, stay tuned, we will have an announcement very soon. Suffice it to say for now: any work you do using Pact will have a public chain to run on in the very near future; you can use the O/S releases to develop Pact; and if you have an idea about permissioned (think "B2B only") blockchain applications, get in touch!
But how does Kadena's Pact compare to Ivy by Chain? The goals seem quite similar.
Perhaps the main difference, though, is that Ivy, like many other languages in this space, focuses on financial-type transactions. This seems logical for a smart-contract system, but in Pact we identified that many blockchain applications will be totally non-financial (supply chain, healthcare), and that a database metaphor is the most important feature.
Indeed, SQL is Pact's biggest influence. After all, SQL doesn't need Turing-completeness, mutable variables, or unconstrained loops. With Pact, we sought to end the war between SQL and stored procedures (most DBs use a different dialect for SPs than for SQL) with "one lang to rule them all".
Database-orientation also powers one of Pact's most important features: the ability to run a blockchain node writing directly to an RDBMS like PostgreSQL, Oracle, whatever. This way you don't have to write tons of smart-contract code to integrate with legacy systems.
Minor nitpick: standard-compliant SQL, thanks to recursive common table expressions, is actually Turing complete. Whether that's good or bad...
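A minimal illustration of the nitpick, using Python's stdlib sqlite3 (SQLite supports `WITH RECURSIVE`): a recursive CTE expresses general iteration, here generating the first ten Fibonacci numbers entirely in SQL.

```python
# A recursive common table expression doing iterative computation in SQL:
# each recursive step carries the loop state (n, a, b) forward.
import sqlite3

conn = sqlite3.connect(":memory:")
rows = conn.execute("""
    WITH RECURSIVE fib(n, a, b) AS (
        SELECT 1, 0, 1
        UNION ALL
        SELECT n + 1, b, a + b FROM fib WHERE n < 10
    )
    SELECT a FROM fib
""").fetchall()
fibs = [r[0] for r in rows]
print(fibs)  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

Remove the `WHERE n < 10` bound and the query loops forever, which is exactly the unconstrained-iteration property at issue.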
- Pact executes in a runtime environment that at a minimum ensures any ED25519 signatures on the transaction are valid, such that code can then test the validated public keys to enforce authorization rules. So this would need to happen before each transaction, which would assuredly be super-expensive gas-wise.
- Pact modules (i.e., contracts in the Solidity sense) export functions on-chain that can then be imported/invoked by other modules by name, thus allowing safe inter-contract communication, on-chain services, and other nice things. This model is very foreign to Solidity where inter-contract communication is poorly supported; best practices there dictate copy-pasting approved code (c.f. ERC 20 tokens) and hosting all functionality in-contract.
- Pact being interpreted is hugely valuable on-chain, as you can directly inspect human-readable code, as opposed to EVM assembly/opcodes. This is more of a philosophical point though.
- Other things, like supporting direct storage of JSON objects in the database, exporting Pact types as JSON (which you get for free in the Pact runtime), key/value db functionality, transaction history at the db level, support for governance in upgrades -- all of these would need to be coded in as Solidity code, with great computational expense.
The biggest issue facing Solidity developers today is the sheer cost of best practices: ensuring you handle overflows right (ie don't use math primitives but call an approved function), planning for upgrades/data export, you name it: you have to use that code and pay that gas. The environment really needs to provide a lot more "free" functionality than it does today to change this reality.
Not entirely true. In Solidity, you don't need a contract's whole code to call it; you just need its interface (function signatures), and then you can call any contract.
You'll see all the best practices use interfaces these days.
Agree with all other points, especially about the math safety - there needs to be more support for financial math too.
In Pact, when you load a module, all references are aggressively resolved and directly linked to the calling code. In Ethereum, if the contract you're calling doesn't have the interface you thought it did, you won't find out until you actually call the code.
My understanding was you really can only trust your own code in Eth, that you can't rely on a pre-uploaded contract (like a safe math contract) -- and you certainly can't extend one safely.
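The load-time versus call-time difference described above can be sketched in Python. This is a hypothetical illustration (the names `SafeMath`, `load_module` are invented), not either system's real API: eager resolution rejects a missing function when the module loads, while late binding only fails when the call actually runs.

```python
# Hypothetical sketch: eager, load-time resolution (Pact-style) vs.
# call-time lookup (Ethereum-style interface calls).

class SafeMath:
    """A dependency that exposes `add` but is missing `mul`."""
    def add(self, a, b):
        return a + b

def load_module(dependency, required=("add", "mul")):
    # Eager resolution: verify every referenced function at load time.
    for name in required:
        if not hasattr(dependency, name):
            raise TypeError(f"missing function at load time: {name}")
    return dependency

load_error = None
try:
    load_module(SafeMath())          # rejected immediately: no `mul`
except TypeError as e:
    load_error = str(e)

# Call-time lookup: the same problem surfaces only when the call runs.
call_error = None
try:
    SafeMath().mul(2, 3)
except AttributeError:
    call_error = "discovered only at invocation"

print(load_error, "/", call_error)
```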
Pact's philosophy sees a blockchain as a special-purpose database, for distributed transactions, so it's not designed for many "normal" database cases, namely bulk operations, searches, heuristics, etc. The use case of accepting multiple signed payloads sounds suspiciously "batchy" to me. Also, Pact is "success-oriented": we see failures like a bad signature as something that should fail your tx only. This is a way of avoiding dicey control-flow logic.
So, if a single payload is what you need the signatures on, you simply design your contract API/function call to have the specific function to handle that data (store it in the database, whatever), and let the environment check the signature.
EDIT: Pact is actually `([Signature],payload)` -- ie, you can sign the same payload multiple times
Keysets are designed for precisely this; what's more this rule can now be persisted.
> anything more complicated would go outside of the system you have designed.
Always a possibility with any PL, especially one running on a blockchain. Pact makes fewer assumptions about use cases than most, however. It's imperative, allows functions and modules, and offers a database metaphor. That handles a fair number of things.
The idea here is "auth is easy": you don't have to worry about what curve the sigs are (Pact supports ED25519 now but the lang and API support adding whatever you need); you don't have to handle bad signatures (they immediately abort the tx); all you have to do is define a keyset.
Lastly, the reason for the primitive is to have keysets be inviolable data that can be stored in the database, for later use in voting, row-level auth, whatever you can think of.
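A rough Python model of the keyset idea described above (hedged: Pact's real predicates include `keys-all` and `keys-any`, but the semantics here are simplified for illustration). The environment has already verified the signatures, so contract code only compares validated public keys against the keyset.

```python
# Keyset sketch: a set of public keys plus a predicate over which
# of them must have signed. A predicate failure aborts the tx.

def keys_all(keyset, signers):
    return set(keyset) <= set(signers)

def keys_any(keyset, signers):
    return bool(set(keyset) & set(signers))

def enforce_keyset(keyset, pred, signers):
    if not pred(keyset, signers):
        raise PermissionError("keyset failure: transaction aborts")

admins = {"alice-key", "bob-key"}
enforce_keyset(admins, keys_any, {"alice-key"})      # passes: any admin
aborted = False
try:
    enforce_keyset(admins, keys_all, {"alice-key"})  # bob did not sign
except PermissionError:
    aborted = True
print("aborted:", aborted)
```

The keyset itself is plain data, which is what lets it be persisted in the database and reused for row-level auth later.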
This is not some rocket science type verification with a dependently typed theorem prover language, it's fairly simple paper and pencil logic. It should not be hard to adapt it to Solidity specific concepts like running out of gas.
The reason these techniques are mostly ignored is that the techniques don't scale at all to large programs calling APIs with imprecise semantics (e.g. filesystem, network), and most people would rather publish imperfect software and iterate rather than spec everything up front. Well, unlike most software, contracts are not large, their semantics are meant to be 100% precise, and most people would rather take the time to make sure a contract does what it claims to do rather than discover a bug afterwards. I would hope.
The environment software runs in is often scarcely understood at all. Operating systems and web browsers change without notice due to auto-upgrades. Libraries are often used without understanding their implementations, and they're also constantly being upgraded. Users can install plugins that introduce bugs that can't be reproduced in the test environment.
You can't build an accurate mathematical model of an environment you haven't observed. Integration tests (run against many platforms) and production logging help, but there are still plenty of unknowns.
For example, when working with filesystems, people write code that they saw other people using. The code may or may not work as designed depending on the specific filesystem. See, e.g., https://danluu.com/file-consistency/
But the more serious projects I've worked on use analytics and have semi-automated ways for users to send you stack traces and logs when they notice a bug.
Also, it's helpful to have a continuous integration setup that automatically runs integration tests on many platforms.
* Compiler time safety (type safety, capabilities, rust and/or haskell style features).
* Built in formal verification features to ensure a function does what the developer (and reader!) thinks and no cases are missed.
* Explicit language design (so that the compiler isn't even required to do strong safety: every action must be spelled out, even if a compiler could deduce it).
* Paying attention to the fucking history of the field (and not doing things like: insane name mangling, case-sensitive semantics (not just names case sensitive by convention; the fucking semantics are case sensitive), hard-to-reason-about VMs). Like come on people, at least learn from 20+ year old mistakes.
A small sampling of the issues:
Everything is 256 bits wide, including the "byte" type. This means that whilst byte is valid syntax, it will take up 32x more space than you expect. Storage space is extremely limited in Solidity programs. You should use "bytes" instead which is an actual byte array. The native 256-bit wide primitive type is called "bytes32" but the actual 8-bit wide byte type is called "int8".
Strings. What can we say about this. There is a string type. It is useless. There is no support for string manipulation at all. String concatenation must be done by hand after casting to a byte array. Basics like indexOf() must also be written by hand or implementations copied into your program. To even learn the length of a string you must cast it to a byte array, but see above. In some versions of the Solidity compiler passing an empty string to a function would cause all arguments after that string to be silently corrupted.
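For illustration only (in Python, not Solidity), here is the shape of the by-hand work the comment describes: both concatenation and length require casting to a byte array, preallocating, and copying.

```python
# Sketch of Solidity-style by-hand string handling, mimicked in Python:
# cast to byte arrays, preallocate the output buffer, copy bytes over.

def strcat(a: str, b: str) -> str:
    ba, bb = a.encode(), b.encode()      # "cast" each string to bytes
    out = bytearray(len(ba) + len(bb))   # preallocate the result buffer
    out[:len(ba)] = ba
    out[len(ba):] = bb
    return out.decode()

def strlen(s: str) -> int:
    return len(s.encode())               # length only via the byte array

print(strcat("hello, ", "world"), strlen("hello"))  # hello, world 5
```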
There is no garbage collector. Dead allocations are never reclaimed, despite the scarcity of available memory space. There is also no manual memory management.
Solidity looks superficially like an object oriented language. There is a "this" keyword. However there are actually security-critical differences between "this.setX()" and "setX()" that can cause wrong results: https://github.com/ethereum/solidity/issues/583
Numbers. Despite being intended for financial applications like insurance, floating point is not supported. Integer operations can overflow, despite the underlying operation being interpreted and not implemented in hardware. There is no way to do overflow-checked operations: you need constructs like "require((balanceOf[_to] + _value) >= balanceOf[_to]);"
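The wraparound and the quoted `require` idiom can be modeled in Python (uint256 arithmetic simulated with a bit mask; this is a sketch, not actual EVM code):

```python
# Simulated uint256: the EVM's ADD wraps modulo 2**256 silently; the
# require((balance + value) >= balance) idiom detects the wrap after
# the fact, because a wrapped sum is smaller than either operand.
UINT256_MAX = 2**256 - 1

def evm_add(a, b):
    return (a + b) & UINT256_MAX          # silent wraparound

def checked_add(a, b):
    s = evm_add(a, b)
    if s < a:                             # wrapped: sum became smaller
        raise OverflowError("uint256 addition overflow")
    return s

assert evm_add(UINT256_MAX, 1) == 0       # wraps to zero, no error raised
caught = False
try:
    checked_add(UINT256_MAX, 1)           # the require-style check fires
except OverflowError:
    caught = True
print("overflow caught:", caught)
```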
You can return statically sized arrays from functions, but not variably sized arrays.
Arrays. Array access syntax looks like C or Java, but array declaration syntax is written backwards: `int8[][5]` creates 5 dynamic arrays of bytes. Dynamically sized arrays work, in theory, but you cannot create multi-dimensional dynamic arrays. Because "string" is a byte array, that means "string[]" does not work.
The compiler is riddled with mis-compilation bugs, many of them security critical. The documentation helpfully includes a list of these bugs... in JSON. The actual contents of the JSON are of course just strings meant to be read by humans. Here are some summaries of miscompile bugs:
In some situations, the optimizer replaces certain numbers in the code with routines that compute different numbers
Types shorter than 32 bytes are packed together into the same 32 byte storage slot, but storage writes always write 32 bytes. For some types, the higher order bytes were not cleaned properly, which made it sometimes possible to overwrite a variable in storage when writing to another one.
Dynamic allocation of an empty memory array caused an infinite loop and thus an exception
Access to array elements for arrays of types with less than 32 bytes did not correctly clean the higher order bits, causing corruption in other array elements.
As you can see, the decision to build a virtual machine that is natively 256 bits wide led to a huge number of bugs whereby reads or writes randomly corrupt memory.
Solidity/EVM is by far the worst programming environment I have ever encountered. It would be impossible to write even toy programs correctly in this language, yet it is literally called "Solidity" and used to program a financial system that manages hundreds of millions of dollars.
That's kind of a feature. Sure you can use decimal floating point (but never, NEVER use the common binary float for money), but storing integers of the minimum currency unit (e.g. cents) (typically wrapped in a Money class in OO languages) is also a good option.
Although one fairly well-known package, produced by a place where I once briefly worked, internally used doubles for all money values (at least when I was there), wrapped in a class that re-rounded the results every so often. No, really.
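The two safe options mentioned above (integer minor units and decimal floating point) versus the binary-float trap, sketched in Python:

```python
# Money representations: binary floats accumulate representation error,
# while integer cents and decimal arithmetic stay exact.
from decimal import Decimal

# Binary floats: the classic trap for money
assert 0.1 + 0.2 != 0.3

# Integer minor units (cents): exact
price_cents = 1999
assert price_cents * 3 == 5997

# Decimal floating point: exact, and keeps its scale
assert Decimal("0.10") + Decimal("0.20") == Decimal("0.30")
print("money examples OK")
```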
I pass no judgement on what belongs on Ethereum. I know from their website that they advertise it as a platform for general app programming and even implementing entire autonomous businesses. It clearly cannot support these things.
We solved that for floats like 10 years ago. Not to mention there are better formats, like posits or fixed-point numbers, that also solve this problem very easily.
As for name mangling, read this and see if it seems sane to you.
For bonus points, `this.foo()` and `foo()` mean two wildly different things.
I don't even know what they were thinking.
Do you have any actual basis to back this up? My counterpoint would be Golang, which is designed exactly to be simple, and is usually really easy to read.
As in, I haven't found another language where jumping into a library and reading the internals is easier than in Golang.
I think (hope) that no one is advocating making a language verbose or complex for its own sake.
Frankly if it weren't for the tooling I'd not be very sold on Go. The tooling totally sells it for me.
Or create your own. I am sure the creators of Solidity are aware of its limitations and quirks. But as far as I can tell they felt they had to come up with something fast. And it grew from there.
But as I said: Feel free to create your own language for the EVM if Solidity does not fit your needs or requirements. With a system allowing Turing completeness, it should be possible to create a language that removes Turing completeness (for more security). It would be impossible the other way round.
Decision tables would be better. Not as general, but understandable.
Specifically, in E and Monte, we can write auditors, which are objects that can structurally prove facts about other objects. A common auditor in both languages is `DeepFrozen`; writing `as DeepFrozen` on an E or Monte object causes the `DeepFrozen` auditor to examine the AST of that object and prove facts.
There's a Monte community member working on an auditor for primitive recursive arithmetic, inspired IIUC primarily by the EVM's failings.
The DAO hack happened because of a bug class known as "plan interference" in the object-capability world; these bugs happen because two different "plans", or composite actions of code flow, crossed paths. In particular, a plan was allowed to recursively call into itself without resetting its context first. EVM makes this extremely hard to get right. E and Monte have builtin syntax for separating elements of plans with concurrency; if you write `obj.doStuff()` then it happens now, but `obj<-doStuff()` happens later.
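A hypothetical sketch of plan interference in Python (names invented; this mimics the DAO-style reentrancy shape, not E/Monte syntax): a naive `withdraw` performs the external call before updating state, so a callback that re-enters the same plan drains more than its balance.

```python
# Plan interference sketch: the external callback runs while the
# caller's own plan (debit the balance) is still half-finished.

class Bank:
    def __init__(self):
        self.balances = {"attacker": 10}
        self.vault = 100

    def withdraw(self, who, callback):
        amount = self.balances.get(who, 0)
        if amount > 0:
            callback(amount)            # external call happens FIRST...
            self.balances[who] = 0      # ...state is updated too late
            self.vault -= amount

bank = Bank()
calls = []

def reenter(amount):
    calls.append(amount)
    if len(calls) < 3:                  # re-enter before the balance resets
        bank.withdraw("attacker", reenter)

bank.withdraw("attacker", reenter)
print(bank.vault)  # 70: 30 paid out against a balance of 10
```

The `obj<-doStuff()` (eventual send) style mentioned above prevents this shape by deferring the callback until the current plan has finished, so the balance is already zero when the attacker's code runs.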
So, uh, yeah. Smart contracts aren't a bad idea, but Ethereum's not very good.
Incidentally, we pointed out several lessons from the ocap community in a commissioned report for the Ethereum Foundation back in 2015. Few of those suggestions were adopted at the EVM level or the higher levels, though.
I hadn't heard of it. It sounds neat.
- The bug causing the DAO debacle did not involve loops or jumps or weird machines or other behavior associated with Turing-machine complexity, but instead had to do with confusing behavior of storage and inter-contract communication.
- Ethereum is not really Turing complete, since execution is bounded (by gas). It is procedural though.
- Many expensive errors in the cryptocurrency world (e.g. transactions with too many fees, exchanges sending malleable transactions, transaction malleability) didn't involve the smart contract system at all
- Decision-table based systems can hide bugs too. Is there any evidence that decision tables actually lead to fewer bugs, given the same amount of programmer time and attention?
Actions should have database-like transactional properties - either everything in the action happens, or none of it does. If you do a send and an update in an action, both must happen, or neither does.
The big question is what primitives you're allowed to call from the table. They'll look a lot like the ones for Solidity, but need to support atomic transactions.
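The all-or-nothing property described above can be sketched with Python's stdlib sqlite3 (the table and the injected failure are invented for illustration): a send and an update share one transaction, and a mid-action failure rolls back both.

```python
# Atomic action sketch: `with conn:` commits the transaction on
# success and rolls it back on any exception, so the two updates
# either both persist or both vanish.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE acct (name TEXT PRIMARY KEY, bal INTEGER)")
conn.executemany("INSERT INTO acct VALUES (?, ?)", [("a", 100), ("b", 0)])
conn.commit()

def transfer(amount, fail=False):
    try:
        with conn:
            conn.execute("UPDATE acct SET bal = bal - ? WHERE name = 'a'",
                         (amount,))
            if fail:
                raise RuntimeError("send failed mid-action")
            conn.execute("UPDATE acct SET bal = bal + ? WHERE name = 'b'",
                         (amount,))
    except RuntimeError:
        pass  # both updates were rolled back together

transfer(30, fail=True)  # neither update persists
transfer(30)             # both persist
print(conn.execute("SELECT bal FROM acct ORDER BY name").fetchall())
# [(70,), (30,)]
```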
This is as opposed to the Underhanded C Contest, where it's much harder to monetize an exploit, since getting access to the code and cashing out require some sort of interaction in meatspace (getting hired, then turning the exploit into $$$, respectively).
Please note that it is fairly old at this point (contributors welcome) - so this is just to give you a sense of the syntax and key concepts.