Underhanded Solidity Coding Contest (solidity.cc)
139 points by ingve 7 months ago | 62 comments



It sure makes me wonder if Ethereum would do better with a less forgiving programming language. The fact that the syntax resembles JavaScript is not reassuring, nor is the fact that the very first code snippet in "Solidity by Example" [1] is littered with comments like this:

        // Forward the delegation as long as
        // `to` also delegated.
        // In general, such loops are very dangerous,
        // because if they run too long, they might
        // need more gas than is available in a block.
        // In this case, the delegation will not be executed,
        // but in other situations, such loops might
        // cause a contract to get "stuck" completely.
[1]: https://solidity.readthedocs.io/en/develop/solidity-by-examp...

In general, the easier the code is to read and the harder it is to write, the better. (Force the programmer to think carefully, not the reader!) Anything that gets a comment like that in the Solidity examples should at the very least refuse to compile without the programmer adding some attention-grabbing _UNSAFE annotation. Better, there should be mechanisms to make sure the code is written in a way that everyone understands the consequences of e.g. running out of gas in the middle of a function.
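The gas hazard that comment warns about can be sketched in plain Python. This is a toy gas model with made-up costs, purely for illustration, not real EVM accounting:

```python
# Hypothetical sketch: why an unbounded delegation loop can "stick" a contract.
# Gas numbers are illustrative, not real EVM costs.

class OutOfGas(Exception):
    pass

def follow_delegations(delegations, start, gas_limit):
    """Walk a delegation chain, charging gas per hop."""
    gas = gas_limit
    current = start
    while current in delegations:
        gas -= 100          # illustrative per-hop cost
        if gas < 0:
            raise OutOfGas  # tx reverts; with a long chain, every retry fails too
        current = delegations[current]
    return current

# A chain of 10 delegations resolves fine under a generous limit...
chain = {i: i + 1 for i in range(10)}
assert follow_delegations(chain, 0, gas_limit=2000) == 10

# ...but a longer chain exceeds the limit and the call always reverts:
# the contract is effectively stuck.
long_chain = {i: i + 1 for i in range(50)}
try:
    follow_delegations(long_chain, 0, gas_limit=2000)
except OutOfGas:
    pass
```

The point is that the failure depends on chain length, which is attacker-influenced data, not on anything visible in the code.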


> It sure makes me wonder if Ethereum would do better with a less forgiving programming language.

This will be hard. While Solidity certainly has problems of its own, some of its insecurity comes from the EVM's design, which is almost laughably low-level and thus very hard to reason about. It certainly doesn't seem to be informed by modern VMs like LLVM, the JVM, or BEAM, which know a great deal more about the semantics of the program they're running and have things like dispatching features. My guess is the approach was "Bitcoin with a few more opcodes," making it more like an 80s-era CPU than a "VM".

As a result, the compiler is tasked with running the whole show. Add to this the coupling of RPC to Solidity's mangle-check-and-jump dispatch approach, and you start to see why there's been so little innovation in this area: Solidity has a tight grip on the Ethereum ecosystem. Also, writing a compiler to this substrate is not easy, and you're penalized for code size (there's a limit on how big a contract can be).

I'm opinionated, as an author of a competing smart contract language (http://kadena.io/pact) that runs in an interpreter, is Turing-incomplete, has single-assignment variables, etc., which we think makes a lot more sense for the kind of stylized computing you're doing on a blockchain. We even have the ability to compile our code to SMT-LIB2 for use with the Z3 theorem prover and will be talking more soon about our DSL for writing proofs. Interestingly though, we find that choosing the right domain for your language goes a long way towards safety AND expressiveness, so that you're not constantly cursing your compiler/interpreter while also worrying less about $50M exploits :)


Oh awesome, glad you mentioned your project. I've actually been writing a blog post that parallels a lot of your complaints right here about Ethereum. For safety as well as implementation complexity, Turing completeness is a significant disadvantage for Ethereum, and I'm sorry to see it played up by so many as an advantage. Making a Turing-incomplete DSL need not be limiting and inexpressive, as you clearly know (and have shown).

I was thinking about creating a functional contract DSL that Coq could extract to. Not as familiar with Z3 and SMT solvers, but that's clearly another good approach to safety (I think Ethereum has finally started looking at formal verification after the DAO debacle, not sure what the status is. EDIT: looks like they've made a good bit of progress the past few months).

What about the status of your project? I see you've bootstrapped the contract language and a consensus protocol, do you plan on bootstrapping a P2P network? Or have you done that already and I missed it?


Pact is live and in-use in enterprise settings on our permissioned blockchain platform; you can download the interpreter which when launched with "-serve" presents the RPC REST API, allowing you to write whole applications with just the interpreter; indeed there's a sample "TODO MVC" app showing just how easy this is (see https://github.com/kadena-io/pact/blob/master/README.md).

As for a public platform, stay tuned, we will have an announcement very soon. Suffice it to say for now: any work you do using Pact will have a public chain to run on in the very near future; you can use the O/S releases to develop Pact; and if you have an idea about permissioned (think "B2B only") blockchain applications, get in touch!


Ethereum is a flawed design, no question.

But how does Kadena's Pact compare to Ivy[1] by Chain[2]? The goals seem quite similar.

[1] https://blog.chain.com/announcing-ivy-playground-395364675d0... [2] https://chain.com/


Ivy is different; for one, it intends to be a transpiled language targeting Bitcoin and other substrates. It shares Pact's focus on public-key authorization, due to their shared debt to Bitcoin scripts as design inspiration.

Perhaps the main difference though is Ivy's language focus on financial-type transactions, shared by many other languages in this space. This seems logical for a smart-contract system, but in Pact we identified that many blockchain applications will be totally non-financial (supply chain, healthcare), and that a database metaphor is the most important feature.

Indeed, SQL is Pact's biggest influence. After all, SQL doesn't need Turing-completeness, mutable variables, or unconstrained loops. With Pact, we sought to end the war between SQL and stored procedures (most DBs use a different dialect for SPs than SQL) with "one lang to rule them all".

Database-orientation also powers one of Pact's most important features: the ability to run a blockchain node writing directly to an RDBMS like Postgresql, Oracle, whatever. This way you don't have to write tons of smart-contract code to integrate with legacy systems.


> After all, SQL doesn't need Turing-completeness, mutable variables, unconstrained loops.

Minor nitpick: standards-compliant SQL is actually Turing-complete, thanks to recursive common table expressions. Whether that's good or bad...


What would be the complications in providing a transpiler from Pact into Solidity?


It's possible, but impractical:

- Pact executes in a runtime environment that at a minimum ensures any ED25519 signatures on the transaction are valid, such that code can then test the validated public keys to enforce authorization rules. So this would need to happen before each transaction, which would assuredly be super-expensive gas-wise.

- Pact modules (i.e., contracts in the Solidity sense) export functions on-chain that can then be imported/invoked by other modules by name, thus allowing safe inter-contract communication, on-chain services, and other nice things. This model is very foreign to Solidity, where inter-contract communication is poorly supported; best practices there dictate copy-pasting approved code (cf. ERC-20 tokens) and hosting all functionality in-contract.

- Pact being interpreted is hugely valuable on-chain, as you can directly inspect human-readable code, as opposed to EVM assembly/opcodes. This is more of a philosophical point though.

- Other things, like supporting direct storage of JSON objects in the database, exporting Pact types as JSON (which you get for free in the Pact runtime), key/value db functionality, transaction history at the db level, support for governance in upgrades -- all of these would need to be coded in as Solidity code, with great computational expense.

The biggest issue facing Solidity developers today is the sheer cost of best practices: ensuring you handle overflows right (ie don't use math primitives but call an approved function), planning for upgrades/data export, you name it: you have to use that code and pay that gas. The environment really needs to provide a lot more "free" functionality than it does today to change this reality.


> best practices there dictate copy-pasting approved code

Not entirely true. In Solidity, you don't need the whole code to call other contracts; you just need their interface (function signatures), and then you can call any contract.

You'll see all the best practices use interfaces these days.

Agree with all other points, especially about the math safety - there needs to be more support for financial math too.


Hmm ... would love to see an example of Solidity contracts calling pre-existing Solidity contracts as a best practice, especially given the difficulty of verifying the state of code on the blockchain.

In Pact, when you load a module, all references are aggressively resolved and directly linked to the calling code. In Ethereum, if the contract you're calling doesn't have the interface you thought it did, you won't find out until you actually call the code.

My understanding was you really can only trust your own code in Eth, that you can't rely on a pre-uploaded contract (like a safe math contract) -- and you certainly can't extend one safely.


So I can't send a contract in your language some signatures as byte arrays and have it validate them in its logic? Any program that signs things also needs to be able to produce block chain transactions? Just an initial question, I'll read more on your site.


We haven't seen the use-case yet where the (signature,payload) tuple is not isomorphic to a transaction. Yes, in the case of multiple, distinct payloads, you'd have to break those into separate transactions, but that seems like a very specific use-case that doesn't sound very "transactional".

Pact's philosophy sees a blockchain as a special-purpose database, for distributed transactions, so it's not designed for many "normal" database cases, namely bulk operations, searches, heuristics, etc. The use case of accepting multiple signed payloads sounds suspiciously "batchy" to me. Also, Pact is "success-oriented": we see failures like a bad signature as something that should fail your tx only. This is a way of avoiding dicey control-flow logic.

So, if a single payload is what you need the signatures on, you simply design your contract API/function call to have the specific function to handle that data (store it in the database, whatever), and let the environment check the signature.

EDIT: Pact is actually `([Signature],payload)` -- ie, you can sign the same payload multiple times


Signing the same payload multiple times would work for my use case (channels). I also need to accept transactions signed by at least one of two keys. I suspect this might be possible too. However, I can imagine that anything more complicated would go outside of the system you have designed. I haven't had the chance to learn your language, but I would be wary about it either being too limited for edge cases that most real world stuff is going to have, or turning into a "universal framework" antipattern.


> I also need to accept transactions signed by at least one of two keys.

Keysets are designed for precisely this; what's more this rule can now be persisted.

> anything more complicated would go outside of the system you have designed.

Always a possibility with any PL, especially one running on a blockchain. Pact makes fewer assumptions about use cases than most, however. It's imperative, allows functions and modules, and offers a database metaphor. That handles a fair number of things.


I'm impressed by Pact. Thanks for sharing. Is there a particular reason you made authorisation via keysets a primitive in your language/infrastructure?


Bitcoin was the inspiration, in identifying a fundamental aspect of blockchains: "authorization by verifying signatures on a transaction." Keysets are a primitive to avoid making multisig a "special case": anywhere you have one signature, in Pact you can have multiple. But in truth, keysets are only part of the picture: Pact runs in an environment that is required to previously verify all signatures on the transaction, and then simply provide the corresponding public keys in the environment.

The idea here is "auth is easy": you don't have to worry about what curve the sigs use (Pact supports ED25519 now, but the lang and API support adding whatever you need); you don't have to handle bad signatures (they immediately abort the tx); all you have to do is define a keyset.

Lastly, the reason for the primitive is to have them be inviolable data that can be stored in the database, for later use in voting, row-level auth, whatever you can think of.
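A rough Python model of the idea, as I understand it from this description. The names (`keys_any`, `enforce_keyset`) and semantics here are my own illustration, not Pact's actual API:

```python
# Illustrative model of a Pact-style keyset: a set of authorized public keys
# plus a predicate over the keys that actually signed the transaction.
# The environment has already verified signatures; code only sees valid keys.

def keys_all(authorized, signers):
    return authorized <= signers          # every authorized key signed

def keys_any(authorized, signers):
    return bool(authorized & signers)     # at least one authorized key signed

def enforce_keyset(keyset, signers):
    authorized, predicate = keyset
    if not predicate(authorized, signers):
        raise PermissionError("keyset failure")  # aborts the whole tx

# "At least one of two keys", as asked about upthread:
admin = ({"alice-key", "bob-key"}, keys_any)
enforce_keyset(admin, {"alice-key"})               # passes
enforce_keyset(admin, {"alice-key", "carol-key"})  # passes, extra signers are fine
```

The multisig case then falls out for free: swap the predicate, keep the data.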


Heck, the basic methods of writing provably correct programs have been explained in plain English since at least the 70s:

https://www.amazon.com/Discipline-Programming-Edsger-W-Dijks...

https://www.amazon.com/Science-Programming-Monographs-Comput...

This is not some rocket-science verification with a dependently typed theorem-prover language; it's fairly simple paper-and-pencil logic. It should not be hard to adapt it to Solidity-specific concepts like running out of gas.

The reason these techniques are mostly ignored is that the techniques don't scale at all to large programs calling APIs with imprecise semantics (e.g. filesystem, network), and most people would rather publish imperfect software and iterate rather than spec everything up front. Well, unlike most software, contracts are not large, their semantics are meant to be 100% precise, and most people would rather take the time to make sure a contract does what it claims to do rather than discover a bug afterwards. I would hope.


Calling it "imprecise semantics" is quite the understatement.

The environment software runs in is often scarcely understood at all. Operating systems and web browsers change without notice due to auto-upgrades. Libraries are often used without understanding their implementations, and they're also constantly being upgraded. Users can install plugins that introduce bugs that can't be reproduced in the test environment.

You can't build an accurate mathematical model of an environment you haven't observed. Integration tests (run against many platforms) and production logging help, but there are still plenty of unknowns.


How do you normally handle those unknowns?


Imitating other code that is known to work [1]. Lots of testing. Fixing the bug when someone runs into it and complains (a viable last-resort for almost anything besides a Solidity contract).

[1]: For example, when working with filesystems, people write code that they saw other people using. The code may or may not work as designed depending on the specific filesystem. see e.g. https://danluu.com/file-consistency/


Users submit bugs and the reply is sometimes "cannot reproduce." :-)

But the more serious projects I've worked on use analytics and have semi-automated ways for users to send you stack traces and logs when they notice a bug.

Also, it's helpful to have a continuous integration setup that automatically runs integration tests on many platforms.


Usually, by crashing or exhibiting bugs and misbehavior.


Exactly. This is a language they want to run a new economy on: not just payment processing, but banking, title transfer, and legal resolutions. And the metric it was judged on was "looks like JavaScript" and not, say:

* Compile-time safety (type safety, capabilities, Rust- and/or Haskell-style features).

* Built in formal verification features to ensure a function does what the developer (and reader!) thinks and no cases are missed.

* Explicit language design (so that the compiler isn't even required to do strong safety: every action must be spelled out, even if a compiler could deduce it).

* Paying attention to the fucking history of the field (and not doing things like insane name mangling, case-sensitive semantics (not even just names by convention, the fucking semantics are case-sensitive), and hard-to-reason-about VMs). Like come on people, at least learn from 20+ year old mistakes.


Solidity has far worse problems than not being an advanced research language. Just being a sanely designed normal language would be a big step up. Solidity is so riddled with bizarre design errors it makes PHP 4 look like a work of genius.

A small sampling of the issues:

Everything is 256 bits wide, including the "byte" type. This means that while byte[] is valid syntax, it will take up 32x more space than you expect. Storage space is extremely limited in Solidity programs; you should use "bytes" instead, which is an actual byte array. The native 256-bit-wide primitive type is called "bytes32", but the actual 8-bit-wide byte type is called "int8".

Strings. What can we say about this. There is a string type. It is useless. There is no support for string manipulation at all. String concatenation must be done by hand after casting to a byte array. Basics like indexOf() must also be written by hand or implementations copied into your program. To even learn the length of a string you must cast it to a byte array, but see above. In some versions of the Solidity compiler passing an empty string to a function would cause all arguments after that string to be silently corrupted.

There is no garbage collector. Dead allocations are never reclaimed, despite the scarcity of available memory space. There is also no manual memory management.

Solidity looks superficially like an object oriented language. There is a "this" keyword. However there are actually security-critical differences between "this.setX()" and "setX()" that can cause wrong results: https://github.com/ethereum/solidity/issues/583

Numbers. Despite being intended for financial applications like insurance, floating point is not supported. Integer operations can overflow, despite the underlying operation being interpreted and not implemented in hardware. There is no way to do overflow-checked operations: you need constructs like "require((balanceOf[_to] + _value) >= balanceOf[_to]);"

You can return statically sized arrays from functions, but not variably sized arrays.

For loops are completely broken. Solidity is meant to look like JavaScript, but the literal 0 type-infers to an 8-bit type, not int. Therefore "for (var i = 0; i < a.length; i++) { a[i] = i; }" will enter an infinite loop if a[] is longer than 255 elements, because i will wrap around back to zero. This is despite the underlying VM using 256 bits to store this value. You are just supposed to know this and write "uint" instead of "var".
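Here is the same wraparound reproduced in Python, masking the counter to 8 bits the way the inferred type does. (A step cap stands in for running out of gas.)

```python
# Simulating the inferred 8-bit loop counter: with only 8 bits, i wraps
# from 255 back to 0 and can never reach a length >= 256.

def wrapped_loop(length, bits=8):
    mask = 2**bits - 1
    i, steps = 0, 0
    while i < length:
        steps += 1
        if steps > 1000:       # stand-in for running out of gas
            return "never terminates"
        i = (i + 1) & mask     # 8-bit increment wraps at 256
    return "terminates"

assert wrapped_loop(200) == "terminates"        # fits in 8 bits: fine
assert wrapped_loop(300) == "never terminates"  # i wraps before reaching 300
```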

Arrays. Array access syntax looks like C or Java, but array declaration syntax is written backwards: int8[][5] declares a fixed-size array of 5 dynamic arrays of int8. Dynamically sized arrays work, in theory, but you cannot create multi-dimensional dynamic arrays. Because "string" is a byte array, that means "string[]" does not work.

The compiler is riddled with mis-compilation bugs, many of them security critical. The documentation helpfully includes a list of these bugs .... in JSON. The actual contents of the JSON is of course just strings meant to be read by humans. Here are some summaries of miscompile bugs:

- In some situations, the optimizer replaces certain numbers in the code with routines that compute different numbers.

- Types shorter than 32 bytes are packed together into the same 32-byte storage slot, but storage writes always write 32 bytes. For some types, the higher-order bytes were not cleaned properly, which made it sometimes possible to overwrite a variable in storage when writing to another one.

- Dynamic allocation of an empty memory array caused an infinite loop and thus an exception.

- Access to array elements for arrays of types shorter than 32 bytes did not correctly clean the higher-order bits, causing corruption in other array elements.

As you can see, the decision to build a virtual machine that is natively 256 bits wide led to a huge number of bugs whereby reads or writes randomly corrupt memory.

Solidity/EVM is by far the worst programming environment I have ever encountered. It would be impossible to write even toy programs correctly in this language, yet it is literally called "Solidity" and used to program a financial system that manages hundreds of millions of dollars.


> Despite being intended for financial applications like insurance, floating point is not supported

That's kind of a feature. Sure, you can use decimal floating point (but never, NEVER use common binary floats for money), but storing integers of the minimum currency unit (e.g. cents), typically wrapped in a Money class in OO languages, is also a good option.


Nitpick: most financial packages work in 1/100ths of a cent, not cents. Otherwise yes, everything money-related should use fixed point and be really careful about over/underflow.
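For anyone unfamiliar with the idiom, a minimal Python sketch of the integer-subunit approach (here 1/100 of a cent, so 10,000 sub-units per dollar; the unit choice is illustrative):

```python
# Sketch: represent money as integers of the smallest unit (1/100 of a cent,
# i.e. 1/10000 of a dollar), avoiding binary floating point entirely.

from decimal import Decimal

UNIT = 10_000  # sub-units per dollar

def to_units(dollars_str):
    # Parse via Decimal so "0.1" doesn't pick up binary-float rounding error.
    return int(Decimal(dollars_str) * UNIT)

def to_dollars(units):
    return Decimal(units) / UNIT

# 0.1 + 0.2 is exact in this representation...
assert to_units("0.10") + to_units("0.20") == to_units("0.30")
assert to_dollars(to_units("19.99")) == Decimal("19.99")

# ...while binary doubles famously are not:
assert 0.1 + 0.2 != 0.3
```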

Although one fairly well-known package, produced by a place I once worked briefly, internally used doubles for all money values (at least when I worked there), wrapped in a class that re-rounded the results every so often. No, really.


You don't want to use floating point numbers to represent monetary amounts, however financial applications often work with numbers that are not money. Consider risk modelling.


Do you think this really belongs on a blockchain, as a transactional environment? There's a notion that things like greeks and other non-linear inputs are best fed as inputs/oracles, for a number of reasons: 1) avoiding stochastic stuff on-chain 2) assurance, so you know what your inputs were later 3) impracticality of all that computation on-chain 4) dependence on market data. Of course there are simple things like imputing an option price from the stock with just delta and gamma, but a fixed-point decimal here wouldn't really hurt you; basic calculations like payment schedules would seem to benefit from fixed-point. But mainly, blockchains would seem to represent transactions and workflows primarily; analytics seem ill-suited for the high-assurance, database-write-heavy environment.


My knowledge of Solidity comes from reading the docs. It doesn't seem to support fixed-point arithmetic either. The phrase "fixed point" appears in the ABI spec but nowhere else; shrug. Maybe half-implemented? I guess you can implement it yourself as it does support bit shifts, assuming they aren't buggy too.

I pass no judgement on what belongs on Ethereum. I know from their website that they advertise it as a platform for general app programming and even implementing entire autonomous businesses. It clearly cannot support these things.


The Ethereum VM is not for general app programming. It's really not your typical environment. EVM contracts get executed on every network node, and it must return the same results everywhere.


> it must return the same results everywhere.

We solved that for floats like 10 years ago. Not to mention there are better formats, like posits or fixed-point numbers, that also solve this problem very easily.


I agree with many of your criticisms (and have other ones for Solidity as well), but the lack of floating point is absolutely a feature, not a bug. There's no reason for floating point in a contract language. Floats should never be used for monetary values or counts, and you're not going to be doing numerics on the blockchain.


Technology will evolve.

Good points.


or "hey maybe we shouldn't have memory corruption errors and 'optimizations' that randomly change static values to god knows what" will be met with a bunch of cryptocurrency koolaid-chugging mouthbreathers screeching that these are features and not bugs, and now we have Ethereum "It's In The Contract That I Can Screw You Over, Get Owned Nerd" Classic, Ethereum Floating Point Is A Bug And Not A Feature Edition, and Ethereum I'm Excited To See What They Fucked Up And Will Have To Fork Next


What is the insane name mangling and case sensitivity that you mentioned there?


`Transfer` is an event, `transfer` is a function. This [0] is from TheDAO attack and is one of the (many) bugs making the attack so terrible.

As for name mangling, read this [1] and see if it seems sane to you.

For bonus points [2], `this.foo()` and `foo()` mean two wildly different things.

I don't even know what they were thinking.

[0] http://vessenes.com/deconstructing-thedao-attack-a-brief-cod...

[1] http://solidity.readthedocs.io/en/latest/abi-spec.html#

[2] https://github.com/ethereum/solidity/issues/583


>In general, the easier the code is to read, and the harder it is to write

Do you have any actual basis to back this up? My counterpoint would be Golang, which is designed exactly to be simple, and is usually really easy to read.

As in, I haven't found another language where jumping into a library and reading the internals is easier than in Golang.

EDIT: A counterpoint is JavaScript, a language which I use in my day to day, and similarly has quite simple syntax. But I can have trouble understanding what is going on depending on the tools used in the local environment.


I think you might be misinterpreting what GP is saying by trimming the end of the quote; they're not saying that making something easier to read makes it harder to write, they're saying that making something easy to read but hard to write is a worthy goal.


Also, I think "hard to write" is meant as "require that critical or dangerous details are written explicitly; if a feature adds convenience for writing at the expense of reading, it should be avoided". (Type inference, overloads and reflection come to mind)

I think (hope) that no one is advocating making a language verbose or complex for its own sake.


Like Rust? Rust does most of those things wrt explicit dangerous behaviour.


Exactly.


Thanks, I edited my comment to be more readable.


Golang is actually a good example of "harder to write", imo. It has good tooling that makes life easier, but unused variables, unused imports, and a lot of other things are errors. And if you do somewhat standard linting on top, it gets even more tedious.

Frankly if it weren't for the tooling I'd not be very sold on Go. The tooling totally sells it for me.


Nobody prevents you from writing contracts in a more restrictive language. For example: https://github.com/ethereum/viper

Or create your own. I am sure the creators of Solidity are aware of its limitations and quirks, but as far as I can tell they felt they had to come up with something fast, and it grew from there.

But as I said: feel free to create your own language for the EVM if Solidity does not fit your needs or requirements. With a system allowing Turing completeness, it should be possible to create a language that removes Turing completeness (for more security). It would be impossible the other way around.


There's an underhanded Solidity coding contest running 24/7; it's called "Ethereum".


This is why Turing-complete procedural smart contracts are a bad idea. As the DAO debacle demonstrated in practice.

Decision tables would be better. Not as general, but understandable.


The original smart-contract language, E, is almost nothing like Solidity. To focus on Turing-completeness, it's true that E is Turing-complete and has an `eval()` primitive, which normally would be dangerous. However, E both comes with sufficient tools to prove that any given instance of `eval()` is safe, and also to limit Turing-complete behavior when needed.

Specifically, in E and Monte, we can write auditors, which are objects that can structurally prove facts about other objects. A common auditor in both languages is `DeepFrozen`; writing `as DeepFrozen` on an E or Monte object causes the `DeepFrozen` auditor to examine the AST of that object and prove facts.

There's a Monte community member working on an auditor for primitive recursive arithmetic, inspired IIUC primarily by the EVM's failings.

The DAO hack happened because of a bug class known as "plan interference" in the object-capability world; these bugs happen because two different "plans", or composite actions of code flow, crossed paths. In particular, a plan was allowed to recursively call into itself without resetting its context first. EVM makes this extremely hard to get right. E and Monte have builtin syntax for separating elements of plans with concurrency; if you write `obj.doStuff()` then it happens now, but `obj<-doStuff()` happens later.
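The "plan interference" pattern described above can be reduced to a few lines of Python: a withdraw routine that makes an external call before updating its own state, letting the callee re-enter mid-plan. A toy model, not the actual DAO code:

```python
# Sketch of plan interference: a withdraw that pays out via an external
# callback *before* updating its own state can be re-entered mid-plan.

class Vault:
    def __init__(self, balance):
        self.balances = {"attacker": balance}

    def withdraw(self, who, notify):
        amount = self.balances[who]
        if amount > 0:
            notify(amount)            # external call happens first...
            self.balances[who] = 0    # ...state is reset only afterwards

vault = Vault(100)
stolen = []

def evil_callback(amount):
    stolen.append(amount)
    if len(stolen) < 3:               # re-enter while the balance is still 100
        vault.withdraw("attacker", evil_callback)

vault.withdraw("attacker", evil_callback)
assert sum(stolen) == 300             # drained 3x the actual balance
```

Under E/Monte's `obj<-doStuff()` (eventual send) style, the callback would only run after the current turn completes, i.e. after the balance is already zeroed, and the re-entry would find nothing to take.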

So, uh, yeah. Smart contracts aren't a bad idea, but Ethereum's not very good.


That's a classic form of GUI bug. Some widget calls something which calls something else which eventually calls the original widget, which is not in a stable state. Classic bugs in this area involve using "save file", and then "new folder", and then renaming some folders in a way which invalidates something upstream.


Do you think it would be possible to improve the EVM by adding E's notion of concurrency? One constraint would be the need to have deterministic scheduling, since every execution would need to be run identically by all validating nodes.

[edit] Incidentally, we pointed out several lessons from the ocap community in a commissioned report from Ethereum foundation back in 2015. Few of those suggestions were adopted at the EVM level or the higher levels though. https://github.com/LeastAuthority/ethereum-analyses/blob/mas...


Is this the language E you are referring to : https://en.wikipedia.org/wiki/E_(programming_language) ?

I hadn't heard of it. It sounds neat


Turing-completeness has very little to do with this.

- The bug causing the DAO debacle did not involve loops or jumps or weird machines or other behavior associated with Turing-machine complexity; it had to do with confusing behavior of storage and inter-contract communication.

- Ethereum is not really Turing-complete, since it has bounded execution (gas limits). It is procedural, though.

- Many expensive errors in the cryptocurrency world (e.g. transactions with too many fees, exchanges sending malleable transactions, transaction malleability) didn't involve the smart contract system at all.

- Decision-table-based systems can hide bugs too. Is there any evidence that decision tables actually lead to fewer bugs, given the same amount of programmer time and attention?


Excellent. I've been looking for an elegant solution like this. Thanks.


See Wikipedia.[1] Decision tables allow most of the things you really need in a contract. Termination is guaranteed, because there are no loops. Processing is simple; the evaluator goes down the table rows until one evaluates to true. There's a simple tabular way to look at a decision table, so ordinary humans can read them.

Actions should have database-like transactional properties - either everything in the action happens, or none of it does. If you do a send and an update in an action, both must happen, or neither does.

The big question is what primitives you're allowed to call from the table. They'll look a lot like the ones for Solidity, but need to support atomic transactions.

[1] https://en.wikipedia.org/wiki/Decision_table
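The evaluation model described here (rows checked in order, first true condition fires, actions apply atomically) fits in a few lines of Python. The escrow rules are a made-up example, just to show the shape:

```python
# Minimal sketch of decision-table evaluation as described above:
# rows are checked in order; the first row whose condition holds fires,
# and its actions apply atomically (all or nothing). No loops, so it
# always terminates.

def evaluate(table, state):
    for condition, actions in table:
        if condition(state):
            staged = dict(state)       # stage updates for atomicity
            for action in actions:
                action(staged)         # any exception discards the stage
            state.clear()
            state.update(staged)
            return True
    return False                       # no row matched

# A toy escrow rule set (hypothetical domain, for illustration only):
table = [
    (lambda s: s["paid"] and s["shipped"],
     [lambda s: s.update(balance=s["balance"] + s["price"], done=True)]),
    (lambda s: not s["paid"],
     [lambda s: s.update(done=False)]),
]

state = {"paid": True, "shipped": True, "price": 50, "balance": 0}
assert evaluate(table, state)
assert state["balance"] == 50 and state["done"]
```

Since the actions mutate only a staged copy until all succeed, a failing send or update rolls the whole row back, which is the transactional property asked for above.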


Cheers!


I presume this is the Solidity they're talking about, for anyone baffled by this: https://solidity.readthedocs.io/en/develop/


Serious question: why would any rational actor participate in this? If you actually had an underhanded exploit, you would sneak it into an actual ICO. Sure, it would require a bit more work (thinking up a plausible purpose for the token), but I'm sure you could partner up with someone good for that. It would have a better payoff than 10 tokens or a pass to a convention. All this can be done anonymously and would leave the victims with little recourse, assuming the Ethereum developers don't pull off another DAO "fix".

This is as opposed to the Underhanded C Contest, where it's much harder to monetize an exploit, since getting access to the code and cashing out require some sort of interaction in meatspace (getting hired, then turning the exploit into $$$, respectively).


The same reason people participate in bug bounty programs instead of exploiting the bugs themselves: ethics/morals are important to that person.


Jokes aside, the contest is actually fairly useful because it will shed some light on the state-of-the-art evil practices that may lead to smart contracts outsmarting (no pun intended!) their investors... ;)


For anyone needing context, last year I wrote a Learn X in Y about Solidity:

https://learnxinyminutes.com/docs/solidity/

Please note that it is fairly old at this point (contributors welcome) - so this is just to give you a sense of the syntax and key concepts.


"Second place prize is 10 MLN tokens from Melonport." I love it.


While it is still not clear that Turing-complete scripts in blockchains are even required, this is a clever ploy by Ethereum to ride some hacker to a spot at Devcon by giving it out as a prize. Ethereum really understands marketing in the tech sector.



