Understanding Ethereum Smart Contracts (gjermundbjaanes.com)
476 points by bjaanes 3 months ago | 158 comments

I've been programming my own Ethereum smart contract (virtual currency) for a while now. Here are some gotchas off the top of my head:

- You have about 500 lines of code to work with. This of course varies, but smart contracts have to be really small to fit in the max block gas limit (6.7 million gas).

- You can't pass strings between contracts (coming soon).

- There are no floating point numbers. Since you're probably working with "money", this can make things tricky.

- You can break up your code into multiple contracts, but the tradeoff is an increased attack surface.

- Dumb code is more secure than smart code.

- The tooling is very immature. You'll probably use truffle, which just released version 4. It makes some things easier, some harder. Its version of web3 (1.0) may differ from what you were expecting (0.2).

- The Ethereum testnet (Ropsten) has a different gas limit than the main net (4.7 million vs 6.7 million).

> There are no floating point numbers. Since you're probably working with "money", this can make things tricky.

That is actually a feature when it comes to working with money. You don't ever want to use floating-point arithmetic with monetary values due to its inability to represent all possible decimal fractions of your base unit. This is just as true for over-hyped blockchain stuff as it is for any imaginable application in the "classic" financial sector.

What you need is either an integer data type plus a fixed, implied number of digits that you want to handle (so, for example, 105 represents 1 dollar and 5 cents), or a fixed-point numeric type (like, for example, BigDecimal in Java; there are lots of equivalents in other languages, most of them with something like "decimal" in their names), which essentially just stores the integer value together with the number of digits, and provides mathematical operations on pairs of these values.
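
To make both options concrete, here's a minimal Java sketch (Java only because BigDecimal was already named; the numbers are illustrative):

    import java.math.BigDecimal;
    import java.math.RoundingMode;

    public class MoneyRepresentations {
        public static void main(String[] args) {
            // Option 1: integer with a fixed, implied number of digits (cents).
            long cents = 105; // 1 dollar and 5 cents
            System.out.println(String.format("%d.%02d", cents / 100, cents % 100)); // 1.05

            // Option 2: BigDecimal stores the unscaled integer plus its scale.
            BigDecimal price = new BigDecimal("1.05");
            BigDecimal tax = price.multiply(new BigDecimal("0.07"))
                                  .setScale(2, RoundingMode.HALF_UP); // 0.0735 -> 0.07
            System.out.println(price.add(tax)); // 1.12
        }
    }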

>That is actually a feature when it comes to working with money. You don't ever want to use floating-point arithmetic with monetary values due to its inability to represent all possible decimal fractions of your base unit. This is just as true for over-hyped blockchain stuff as it is for any imaginable application in the "classic" financial sector.

I feel like this is constantly repeated but is simply untrue. Working in the financial sector (high frequency trading), I don't know anyone who actually uses fixed-digit number systems to represent money. Normally what I do see are people outside of finance who need to represent money, who read the common mantra about using a fixed-digit representation, only to eventually encounter all the issues that floating point was invented to solve. When they encounter those issues they end up adapting their code only to basically re-invent a broken, unstable quasi-floating-point system, when they would have been much better off using the IEEE floating point system and actually taking the time to understand how it works.

BigDecimal will solve the issue, but it's absolute overkill in both space and time; we're talking many orders of magnitude of performance degradation, and it's entirely unneeded.

One simple solution that works very well for up to 6 decimal places is to take your idea of having an implied number of digits, but instead of using an integer data type, you use a floating point data type. So 1 dollar and 5 cents becomes (double)(1050000.0). This lets you represent a range from 0.000001 up to 999999999.999999 exactly, without any loss of precision. Values outside of that range can still be represented as well, but you may have some precision issues.
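
A minimal sketch of the scheme, assuming one double unit = 10^-6 dollars (values are illustrative):

    public class ScaledDoubleMoney {
        public static void main(String[] args) {
            // One unit = 10^-6 dollars, so $1.05 is stored as 1050000.0.
            double a = 1_050_000.0; // $1.05
            double b = 250_000.0;   // $0.25
            // Sums of integer-valued doubles below 2^53 are exact.
            System.out.println(a + b); // 1300000.0, i.e. $1.30 exactly

            // The exact-integer range of a double ends at 2^53 (~9.007e15):
            System.out.println((double) (1L << 53) == (double) ((1L << 53) + 1)); // true
        }
    }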

Another solution which is less efficient but more flexible is to use decimal floating point numbers instead of binary floating point numbers. There is an IEEE decimal floating point standard that can represent an enormous range of decimal values exactly. Intel even provides a C/C++ implementation that meets formal accounting standards.

Both of the solutions I list above are orders of magnitude more efficient than using fixed-point numeric types and are by no means difficult to use.

But you are working in HF trading. Speed is paramount there.

The common programmer has no clue about the numerical instabilities that floating point numbers can cause.

Your examples are exactly what I would expect from a capable engineer who is weighing the different pros and cons when you have to optimize your code.

Calling BigDecimal entirely unneeded tells me you are in a luxury position where you can draw from a pool of the best programmers out there.

For lesser programmers, and for areas where speed is not as important, BigDecimal solves the issues completely, without any clever thinking, and most importantly, it is consistently correct without relying on the skills of the programmer.

Not every single aspect of HFT is insanely speed critical, and even in non-critical areas we still don't use fixed decimal point arithmetic on principle. For example we do a lot of profit/loss reporting and analytics which are not speed critical, we write user interfaces and web apps in Javascript that work with money, and so on.

I will agree BigDecimal is fine to use if speed really doesn't matter; it's at least correct, even if it's a nuclear option of sorts. What I really want to discourage is using a fixed integer to represent some smallest unit of money, like an int where 100 = 1 dollar and 1010 = 10 dollars and 10 cents. That is fundamentally problematic and objectively worse than using a floating point value.

But sure, if you genuinely don't want to think at all about the issue, use your language's BigDecimal. If you decide you want the much improved performance, then use a floating point value - either the IEEE decimal floating point standard, or binary floating point representing some reasonably small unit fraction of a dollar, like 1.0 = 10^-6 dollars.

Honest question - why is that something to avoid? What does the floating point representation buy you that, say, an int64 doesn't, with the same choice of units, i.e. 1 = 10^-6 dollars?

What if you need to calculate the percentage difference between two monetary values?

Ints won't cut it there.

Sure they will. For truncated whole percents, (N2-N1)×100÷N1 using integer ops gets you that; for whole percents rounded half up, ((N2-N1)×100+N1÷2)÷N1 does it. For additional decimal places, increase the fixed multiplier by the appropriate power of 10 (the round-half-up addend stays N1÷2).

Even if you are dealing in units of hundreds of billions of dollars tracked to mils, this gets you up to, I think, a hundred-thousandth of a percent with int64.
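
In code, with illustrative int64 amounts in a fixed unit (positive values assumed):

    public class IntegerPercent {
        // Percent change truncated toward zero.
        static long pctTruncated(long n1, long n2) {
            return (n2 - n1) * 100 / n1;
        }

        // Percent change rounded half up: add half the divisor before dividing.
        static long pctRounded(long n1, long n2) {
            return ((n2 - n1) * 100 + n1 / 2) / n1;
        }

        public static void main(String[] args) {
            long n1 = 2_000_000; // e.g. $2.00 at 10^-6 dollars per unit
            long n2 = 2_070_000; // $2.07, a 3.5% increase
            System.out.println(pctTruncated(n1, n2)); // 3
            System.out.println(pctRounded(n1, n2));   // 4
        }
    }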

But with floats you just use a divide operation.

“Won’t cut it” and “will require 1-2 more operations” are very much not the same thing.

Tracking the decimal point is all that comes to my mind.

To clarify for myself and maybe others: are you arguing against using a single integer scaled to the smallest precision someone thought they'd need N year(s) ago?

If so: totally agreed. Inflexible single-scaled-ints have been a plague in nearly every piece of production code I've seen them in. I'll take floats (well, doubles) any day over this.

Flexible scale tho (arbitrary precision / bigdecimal equivalents (I'd prefer bigdecimal over home-grown two-int variants, obv)) usually works out. And tends to avoid semi-frequent-though-usually-unimportant/unnoticed[1] issues with imprecise floats (~16 million for exact int representation in a 32-bit float[2] is awfully small, though doubles are generally Good Enough™).

[1]: usually. adding doubles/fractions often produces garbage-looking output unless you remember to round. but in my lines of work, nobody cares[1.1] if it's a couple cents off in analytics.

[2]: https://stackoverflow.com/questions/3793838/which-is-the-fir...

[1.1]: yet

Yes, we are on the same page. BigDecimal is appropriate for representing money, fixed point arithmetic is not. I simply argue that in situations where you care about performance, you can actually drop BigDecimal and use floating point decimal or even floating point binary and get perfectly good results that have exact precision so long as you model your domain.

Yeah, if you know your domain and know you're creating only an acceptable amount of error, by all means float it up. Floats are easy to use and predictable nearly everywhere, since nearly everyone follows the same spec.

Generally though, I think people don't quite do due diligence here. And/or the domain changes enough that numbers get large enough to invalidate the earlier analysis. There are obviously exceptions to this, though, and for them, they know what they're doing and why they're doing it and it's all good.

Self-quote: > any imaginable application in the "classic" financial sector

Kranar: > working in the financial sector (high frequency trading)

Okay, you got me there - you have pointed me to the one application in the financial sector where it is not totally acceptable to "waste" a few thousand processor cycles to compute some money-related stuff. Granted, in HFT applications, those cycles may allow you to get your trades in front of those of the other HFT companies. You are forced to use CPU-accelerated (or maybe even GPU?) computations in this case, which automatically means "floating point".

But there's another difference that allows you to do this: you mostly don't have to care about exact, accurate-up-to-the-penny results. I assume most of your calculations are done to eventually arrive at a conclusion of whether to buy or sell some stock or not, and at which price. You have to take care of not accumulating too much rounding error in the process, of course, but the threshold for these errors is set by yourself, and you can give yourself a bit of leeway on this, because it's all internal stuff, mostly probabilistic and statistics-based - the only stuff that may be audited by someone else, and thus has to match the real numbers up to the last penny, are the trades you do and the money you move around, and I bet all of this accounting-style stuff is recorded using...decimal numbers :D

I work in the retail industry (think cash registers, retail sale accounting, that kind of stuff) and pretty much any legislation on this planet would obliterate us if we'd tell them that the result of the computations of our systems may be some cents up or down from the real result - the one someone would get who just scribbled the numbers on a sheet of paper and added them up manually. Our customers have to pay taxes based on the sales that they account using our systems, and the tax calculations as well as the entire chain of processes they use to arrive at the final numbers are regularly audited by the respective governments. There are specific rules (of course differing by country) as to how many decimal places have to be used and how stuff has to be rounded in which particular cases in calculations that would require more decimals. We waste an enormous amount of CPU cycles just to strictly adhere to these rules - and that is not only absolutely necessary, but also totally okay; modern CPUs can easily accommodate this in our scenario.

>I work in the retail industry (think cash registers, retail sale accounting, that kind of stuff) and pretty much any legislation on this planet would obliterate us if we'd tell them that the result of the computations of our systems may be some cents up or down from the real result - the one someone would get who just scribbled the numbers on a sheet of paper and added them up manually.

It is especially in cases like this that you absolutely should not use fixed decimal point systems to represent money. It is exactly in these circumstances that your fixed decimal point system will eventually encounter a situation where it fails catastrophically.

Intel provides an IEEE decimal floating point system specifically for the purpose of adhering to legal requirements, they say so right at the top of their website:


If fixed-point math had solved this issue, they'd have provided a solution involving it, but fixed-point math is simply the absolute worst solution to this problem, even worse than naively using a 64-bit binary floating point number.

BigDecimal is also a viable solution, but it's entirely unnecessary and the cost really is enormous especially if you need to handle a large number of transactions. But sure, if you say performance genuinely isn't an issue go with BigDecimal.

> BigDecimal is also a viable solution, but it's entirely unnecessary and the cost really is enormous especially if you need to handle a large number of transactions. But sure, if you say performance genuinely isn't an issue go with BigDecimal.

That is exactly what we currently do, and the speed of computations is definitely not an issue at all. When we have performance-related problems (and we do sometimes), they have never in 10 years originated from numerical computation taking too long, but always from other inefficiencies, quite often of an architectural nature, that uselessly burn a thousand times more cycles than the numeric part of the job.

BigDecimal provides exactly what we need: a no-brainer, easy-to-use, easy-to-understand and always correct vehicle for the calculation of monetary sums. It lets our developers focus on implementing the business logic, and on making fewer of the described high-level failures that get us into actual performance trouble, instead of spending a lot of mental bandwidth just to "get the computation right" in all circumstances.

I think it's generally assumed that the unqualified term "floating point" refers to binary floating point, which is not the same as decimal floating point. You don't want to use binary floating point for representing money. Furthermore, fixed point refers to a value with a fixed number of bits for the integral and fractional parts, which is not the same as the previously suggested system with an implicit decimal point (e.g. multiplying by 10^N for N places of decimal precision). Real fixed-point math can be a better and more accurate system if you're working within a well-defined range. Regardless of the system you're using, there are only 2^N unique values that can be represented in N bits; binary floating point distributes them unevenly across a huge range. If you are representing a smaller range with a fixed-point system, it will necessarily give you better precision.

How on earth is using the wonkiest possible representation a better solution when you need precise results? In this case, even integer overflow would be a better failure mode than being a few cents off - because it would get noticed pretty quickly in comparison.

Hell, floating point isn't even a good way of doing imprecise arithmetic on larger numbers. What you would actually want is log arithmetic, but that's not what the hardware implements (even though it's simpler). It gives you roughly the same overall precision over each range but stops it from suddenly being cut in half as the exponent changes.

> pretty much any legislation on this planet would obliterate us if we'd tell them that the result of the computations of our systems may be some cents up or down from the real result

But you are not some cents up or down: even if you are calculating with billions, you still have 7 significant digits left over - that's a lot of headroom for calculations.

Are there any financially sensible calculations where large numbers (order of magnitude billions) are multiplied with each other?

> [...] only to eventually encounter all the issues that floating point was invented to solve. When they encounter those issues they end up adapting their code only to basically re-invent a broken, unstable quasi-floating-point system, when they would have been much better off using the IEEE floating point system [...]

Would you care to elaborate on some of these issues? I'm genuinely curious.

Sure, I'll embarrass myself here and use myself as an example since this is how I actually came to even learn about this to begin with.

When I started working in finance straight out of school (not in HFT at the time), I naively accepted the dogma that money should never be represented using floating points. I mean everywhere I went I would read in bold letters don't use floating point! Don't use floating point! So I just accepted it to be true and didn't question it. When I wrote my first financial application I used an int64 to represent currencies with a resolution of up to 10^-6 because that was what everyone said to do.

And well... all was good in life. Then one day I extended my system to work with currencies other than U.S. dollars, like Mexican pesos and Japanese yen - currencies X where 1 X is either much less than 1 USD or much greater than 1 USD.

Then things started to fail, rounding errors became noticeable, especially when doing currency conversions, things started to fall apart real bad.

So I took the next logical step and used a BigDecimal. Now my rounding issues were solved, but the performance of my applications suffered immensely across the board. Instead of storing a 64-bit int in a database, I'm now storing a BigDecimal in Postgresql, and that slowed my queries immensely. Instead of just serializing 64 raw bits of data across a network in network byte order, I now have to convert my BigDecimal to a string, send it across the wire, and then parse the string back. Every operation I perform now potentially requires allocating memory on the heap, whereas before everything was minimal and blazingly fast. I feel like there is a general attitude that performance doesn't matter, premature optimization is evil, programmer time is more expensive than hardware, and so on... but honestly nothing feels more demoralizing to me as a programmer than having an application run really, really fast one day, and then the next day it's really, really slow. Performance is one of those things that when you have it and know what it feels like, you don't want to give it up.

So not knowing any better I decided to come up with a scheme to regain the lost performance... I realized that for U.S. dollars, 10^-6 was perfectly fine. For currencies that are small compared to the U.S. dollar, I needed fewer decimal places, so for yen, 1 unit would represent 10^-4 yen. For currencies bigger than the U.S. dollar, 1 unit would represent 10^-8...

So my "genius" younger self decided that my Money class would store both an integer value and a 'scale' factor. When doing operations, if the scale factor was the same, then I could perform operations as is. When the scale factor was different, I would have to rescale the value with lower precision to the value with higher precision and then perform the operation.

This actually worked to a degree, I regained a lot of my speed and didn't have any precision issues, but all I did was reinvent a crappy floating point system in software without knowing it.
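
Reconstructed from memory, the core of that class looked something like this (a Java sketch; names and details are illustrative):

    public class Money {
        final long value; // unscaled amount
        final int scale;  // amount = value * 10^-scale currency units

        Money(long value, int scale) {
            this.value = value;
            this.scale = scale;
        }

        Money add(Money other) {
            if (scale == other.scale) return new Money(value + other.value, scale);
            // Rescale the coarser operand up to the finer precision --
            // which is exactly what floating-point hardware does for you.
            if (scale < other.scale) {
                long rescaled = value * pow10(other.scale - scale); // can overflow!
                return new Money(rescaled + other.value, other.scale);
            }
            return other.add(this);
        }

        static long pow10(int n) {
            long p = 1;
            while (n-- > 0) p *= 10;
            return p;
        }
    }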

Eventually I ended up reading about actual floating points and I could see clearly the relationship between what I was doing and what actual experts had realized was the proper way to handle working with values whose scales could vary wildly.

And once I realized that I could then sympathize with why people were against using binary floating point values for money, but the solution wasn't to abandon them, it was to actually take the time to understand how floating point works and then use floating points properly for my domain.

So my Money class does use a 64-bit floating point value, but instead of (double)(1.0) representing 1 unit, it represents 10^-6 units. And it doesn't matter what currency I need to represent: from ones as small as the Russian ruble to ones as large as Bitcoin, it all just works, and works very fast.

64-bit doubles give me 15 digits of guaranteed precision, so as long as my values are within the range 0.000001 up to 999999999.999999 I am guaranteed to get exact results.

For values outside of that range, I still get my 15 digits of precision but I will have a very small margin of error. But here's the thing... that margin of error would have been unavoidable if I used fixed decimal arithmetic.

Now I say this as a personal anecdote but I know for a fact I'm not the only one who has done this. I just did a Google search that led to this:


The second top answer, with 77 points, yells in bold letters about not using floating point for currency and suggests a scheme almost identical to the one I described, where you store a raw number and a scaling factor - basically implementing a poor man's version of floating point numbers.

Thanks for the detailed explanation of your thought process.

I guess you fall into the few percent of developers who are NOT meant to be addressed by the general rule of "don't use float for money". Like with all "general rules" in software development, it is intended to guide the >90% of devs who don't want and sometimes also aren't capable of fully grasping the domain they're working in and the inner workings of the technology they use. The majority of devs need simple, clear guidelines that prevent them from making expensive mistakes while wielding technology that's made up of layers and layers of abstractions, of which some (or even most) are entirely black boxes to them. "Playing it safe" often comes with other shortcomings like worse performance, and if those are not acceptable in a specific scenario, you need a developer who fully grasps the problem domain and technology stack and who is thus capable of ignoring the "general rules" because he knows exactly why they exist and why he won't get into the troubles they are intended to protect you from.

I have long thought that every developer under the sun should strive to get up to this point, and I still think that it is an admirable goal, but I came to understand that not all developers share this goal, and that even if someone tries to learn as much as possible about every technology that he comes in contact with, he will never be able to reach this state of deep understanding in every technical domain imaginable, as there are just too many of them nowadays. We all, no matter how smart we are, sometimes need to rely on "general rules" in order to not make stupid mistakes.

You know there's a numeric type in Postgres? Not quite arbitrary precision, but large enough for pretty much all practical purposes: "up to 131072 digits before the decimal point; up to 16383 digits after the decimal point"


NUMERIC (without a precision or scale) is a variable-length datatype, and suffers all the performance problems mentioned above.

If you specify a precision and scale, performance of NUMERIC improves quite a bit, but now you can’t store values with vastly different magnitudes (USD and JPY) in the same column without wasting tons of storage on each row. You’re back to square one.

Thanks a lot for this answer -- the real-world scenario including your thought process is very insightful.

I worked on the exchange side (CME Group) and we used BigDecimal (and an in-house version that worked in a very similar way) quite often... It worked just fine for us as far as I know.

That being said, I agree with your sentiment that there are faster/more efficient ways to accomplish this. For something like smart contracts, I'd think they should be able to abstract this line of reasoning into an API that's consumable by those of us who aren't as familiar with it. Or at least it's worth having the option.

Additionally, the most widely used programming language in finance, Microsoft Excel, is based entirely on IEEE binary floating point arithmetic (though with a few hacks on top [1]).

[1]: https://stackoverflow.com/a/43046570/392585

True, but the number of weird mistakes due to floating point inaccuracies that have happened in Excel spreadsheets is probably too large to be accurately representable in a 32-bit float.

What you are saying may be true of finance. But it's not true of accounting, in which exact figures are paramount.

I think a good system to look at copying is the Czech koruna or the Japanese yen - real-world examples of currencies that don't use decimal values. This simplifies the math involved, and by having each unit carry a very small value you can do away with cents, so you don't have money "disappear" due to rounding.

True, but the extra code around those solutions costs gas, and you're looking at around 500 lines' worth.

How long is BigDecimal.class?

Providing a decimal type would of course not be the job of the developer of a smart contract - I would expect that the smart contract platform would anticipate the necessity of convenient decimal number handling and thus provide a suitable abstraction.

In the case of Ethereum, it could either be provided in the language or on the virtual machine level. I'm not that much into the EVM's inner workings to judge whether a machine-level implementation would be a good idea from an architectural viewpoint, but it would definitely deliver the best possible performance and lead to the smallest amount of contract code, thereby saving on gas to deploy the contract. As a second-best solution, the language could provide such an abstraction - it would at least be able to apply optimizations when compiling the code down to EVM opcodes, which an implementation purely on the contract level would not be able to do.

EVM is a 256-bit architecture, so it probably doesn't need a BigDecimal equivalent for financial transactions.

Likely you don't need a generic implementation of Decimal.

Whatever money you want to represent, it has an "atomic value".

For instance, when working with dollars store cents. When working with Eth, store wei. etc.

I can't think of a use case for money that needs decimals, except maybe computing ownership percentages - but those should never be stored, only computed.

Anything that needs to be converted is a "front end" view. All computations should use the atomic store of value, so no conversion is needed in contracts.

You might need to work with fractions of "atomic values" to get the desired level of accuracy. So you have to store not logical cents, but 1/100ths of cents or something like that. It's much easier and more straightforward to just use a decimal-like type.

That's what happens in, e.g., the Maker stablecoin project: decimal fixed point to give sub-wei precision when prorating per-second compounding fees. Besides the normal simple arithmetic operations, we use "exponentiation by squaring" to raise a decimal fixed-point value to an integer power.
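
For the curious, the idea looks roughly like this (a Java sketch, assuming a 10^27 scale factor like Maker's "ray"; BigInteger stands in for the EVM's 256-bit words, and the rounding choice is illustrative):

    import java.math.BigInteger;

    public class RayPow {
        static final BigInteger RAY = BigInteger.TEN.pow(27); // 1.0 in ray scale

        // Fixed-point multiply: rescale the product back down, rounding half up.
        static BigInteger rmul(BigInteger x, BigInteger y) {
            return x.multiply(y).add(RAY.shiftRight(1)).divide(RAY);
        }

        // Exponentiation by squaring: O(log n) fixed-point multiplies.
        static BigInteger rpow(BigInteger base, long n) {
            BigInteger result = RAY;
            while (n > 0) {
                if ((n & 1) == 1) result = rmul(result, base);
                base = rmul(base, base);
                n >>= 1;
            }
            return result;
        }

        public static void main(String[] args) {
            // A per-second rate of (1 + 10^-9), compounded over one day:
            BigInteger perSecond = RAY.add(BigInteger.TEN.pow(18));
            System.out.println(rpow(perSecond, 86_400)); // ~1.0000864 in ray scale
        }
    }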

Yep, when doing financial stuff, those half cents can count.

> Whatever money you want to represent, it has an "atomic value".

This is mostly not true.

> For instance, when working with dollars store cents.

While external transactions often must occur in cents, internal accounts and unit prices often involve smaller amounts. If you don't believe me, visit any US gas station: the prices will be in mils, not cents (and will usually be one mil less than some full-cent value per gallon). Atomic units for financial applications are application-specific, if there is an appropriate value at all; they aren't trivially determinable by looking at the base currency.

In some cases you really want an arbitrary precision decimal (or even rational) representation.

>For instance, when working with dollars store cents.

How does this work if you are, say, coding for a gas station where the price is in 1/10ths of a cent?

Alternatively, how does your banking application handle adding 10% interest to a bank account with 1c in it?

Some very good points. I recall that the mainnet once had a gas per block limit of 4.7 mil before it was increased. The increase is something that is voted on by the miners, and has been fluid over the years. (Btw, it is something very similar to a block size limit in BTC, although it limits the storage + CPU usage, rather than just storage like in BTC)

About the point on using floats: one should never use floats when working with money, because floats are not precise and produce rounding errors (e.g. 0.1 + 0.2 yields 0.30000000000000004; see http://0.30000000000000004.com for details). One of the simplest approaches is to work with the smallest unit, so if working with dollars you can use cents, which means you can use integers and get exact results. This is how Solidity currently deals with it, by working with the smallest unit of Ether, 'wei'. Some of the units are listed here: https://etherconverter.online
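
A two-line demonstration (Java here, but any language with IEEE doubles behaves the same):

    public class SmallestUnit {
        public static void main(String[] args) {
            System.out.println(0.1 + 0.2);  // 0.30000000000000004
            System.out.println(10 + 20);    // 30 -- work in cents and it's exact
        }
    }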

And it's exacerbated by the fact that most tokens use 18 decimal places of precision. So "1" would be 1000000000000000000. As a programmer, I just found it harder to reason about numbers like that.

I ran into problems testing my Solidity contract with JavaScript, because JavaScript isn't great at big numbers. Or division.
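
For reference, a 64-bit integer only barely fits one token at 18 decimals (2^63 - 1 is about 9.2 * 10^18), and JavaScript's exact-integer range ends even earlier at 2^53, which is why arbitrary-precision integers are the usual answer. A Java BigInteger sketch, with illustrative values:

    import java.math.BigInteger;

    public class TokenUnits {
        static final BigInteger ONE_TOKEN = BigInteger.TEN.pow(18);

        public static void main(String[] args) {
            BigInteger balance = ONE_TOKEN.multiply(BigInteger.valueOf(1_500)); // 1500 tokens
            System.out.println(balance);        // 1500000000000000000000
            System.out.println(Long.MAX_VALUE); // 9223372036854775807, ~9.2 tokens' worth
        }
    }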

Dang you got there 3 minutes before me!

Don't worry, I'm sitting in the same boat ;-)

> You can break up your code into multiple contracts, but the tradeoff is an increased attack surface.

This is the way to get programs over 500 lines, and it doesn't increase the attack surface as much as you'd think, since you can essentially hard-code the addresses of outside contracts into your code - it's not a combinatorial explosion.

> and it doesn't increase the attack surface as much as you'd think

You should call Gavin Wood and tell him that :D

That makes sense, thanks. I hadn't thought of making it hard-coded (or just set at construction). The examples I'd seen were "upgradable" tokens.

Yeah, I did a lot of upgradeable stuff early, but have since moved to more hardcoded stuff for that exact reason.

A question about the 500 lines of code and tooling limitations. Is this something that is inherently an issue of the underlying technology or do you predict that this is something that is likely to be improved upon over time? And if you think it'll be improved, are we years away from big improvements? Less?

The 500 lines of code is, to be clear, just a guesstimate. It really is related to how those lines of code are compiled into Ethereum opcodes. Maybe you can get a thousand lines of code. My point is that it's pretty small, compared to what most developers are used to.

The max gas limit is voted on by Ethereum miners. They recently raised it to 6.7 million (I believe it was 4.7 million not so long ago). The gas is what is used to store the contract (thus increasing the blockchain size), or to execute code (thus increasing the workload on the miners), so it's up to them.

More Eth = more $$, but they have to weigh that against the downsides. If the price of Eth keeps going up, they have less incentive to increase the gas limit, because they get more money for doing the same work.

Aside from the gas limit there's a hard 24KB limit on bytecode size, to deal with a denial-of-service vulnerability that cropped up last year. Possibly that will be improved in some future upgrade.

re: tooling

I've been using Truffle too. My biggest issue when getting started was the cognitive overload. If you want to try Truffle, I wrote a couple of simple guides that help you deploy your first Ethereum smart contract to a test net https://blog.abuiles.com/blog/2017/07/09/deploying-truffle-c...

Interesting point on the lack of floating-point support. I've dealt with storing fiat in real world software and using floating point is a big no-no due to the fact it is only an approximation by its very design. We therefore always stored money as integer cents/pennies etc. How is this approached with digital "money" where there is no smallest divisible unit?

Most of the time by using 18 decimals. One "Ether" is 10^18 "wei", the smallest unit of currency. Most cryptotokens built on ethereum also use that many units...but not all.

Bitcoin has, IIRC, 8 decimal places of precision.

Ah that makes sense - if Bitcoin keeps going at its present rate could 8 units of precision simply not be enough? Would a hard-fork be the solution to this?

If one bitcoin were worth $100 million, then one "satoshi" (the smallest unit in bitcoin) would be worth a buck. So, 8 units of precision is probably plenty.

I'm not sure if there are infinitely divisible currencies? The cryptocurrencies I know (Ethereum, Bitcoin) _do_ have a smallest divisible unit (wei and satoshi respectively).

For passing strings between contracts, you can return the string as bytes32 and let the remote contract query a function to access it.

Wait you can’t call another contract’s function (from a contract) if it takes a string as argument?

just curious, what are you doing with ETH contracts? working on a product or just hobby?

Just over a month ago, HN user DennisP answered the question "How did you get into contract ethereum development?".


If you are starting back at the idea of the Bitcoin blockchain, see ahussain's recommendation of 3Blue1Brown a day later:



What I'd greatly appreciate is a walk through of a plausible real world use case. I find it hard to concentrate on the technology itself until I understand the application.

Crowdfunding (ICOs) is a use case. DNS (ENS) is a use case. Organisational Transparency (Aragon) is a use case. Prediction Markets (Augur) are a use case. Electricity Markets (Grid+) are a use case. An international payment card with zero fees (TenX) is a use case.

I can go on for a while longer, there are hundreds of use cases but the question is: what would be a use case for you?

They all sound great. So what I'd love to see is a well written blog post describing how Ethereum would be put to work to achieve one of those goals and to give me an understanding of its superiority within the domain.

I'm not demanding it and I'm not being snarky at all. It's just that's the level I'm at - without that kind of entry point I struggle to assess it.

Some of the use cases can be fairly simple (thus easier to see the value).

1. You and I start a company together. We would like to split things 50/50. We set up an address such that when values are paid to it, half goes to you, half goes to me. Effectively we've created an "LLC with an Operating Agreement", but we aren't relying on a political/legal system or country to enforce it. Enforcement is automatic and done by the network.

2. You want to set up a Trust for your kids. You create a contract that holds 10 Bitcoin / Ethereum / etc. On the date Jan 1, 2035 the value of that address will be forwarded to your child's address. Again, we created a "Non-Revocable Trust" that is network enforced. No need for probate, courts, trustees, etc.

> You and I start a company together. We would like to split things 50/50. We set up an address such that when values are paid to it, half goes to you, half goes to me.

What is the real world use case for that, though? That seems like a contract that makes sense as long as you and I trust each other, which rather negates the purpose of a contract.

Some immediate issues that strike me:

* There's nothing to stop either of us offering the services of the company outside of that contract;

* There's no way to ensure that one of us isn't freeloading;

* If my wallet gets stolen you'll be paying half the earnings of the company to the hacker until the contract is somehow voided;

* Which raises the question of how can this contract get voided? Do either of us get to void it whenever we feel like it? What if I've lost my key? Do we potentially just have to live with the money being paid into hacked / unusable accounts forever?

So all you've really saved is the bother of dividing a number by two and doing bank transfers, in return for which you've gained significant operational burdens.

1 - 50% of what? Revenue into the company, or dividends/salary etc. out of the company? If it's the latter, then how do we make sure that you don't spend $500,000 on trash bins for the office? Do we have to make sure every decision goes through the network? (I.e., who is the authority to decide what is reasonable and what is blatant fraud if things go sour, AND is it even possible to make sure that nothing can possibly go sour? I don't think so. So fraud is possible, and in the case of fraud - say the network retroactively decides that something really constitutes fraud, hence is a crime - who is the enforcer of the punishment, which is most likely to be physical, like a prison sentence?)

2 - How do you make sure that your child doesn't sell it in 2025 and buy a Lambo? (Which is a problem you don't have with regular Trusts, AFAIK.)

The parent gives you an answer that is scoped to "divide incoming payments in half and send the halves to given addresses", and you're criticizing that the answer doesn't cover every scope you can come up with?

Him: "This baker makes bread rolls". you: "What about dark bread? What about non-bread items? What's his broker?"

_Red gave two examples of Ethereum applications.

SeckimJohn quite quickly showed why neither works. There are some niche digital uses, but not the ones people often tout.

Ethereum contracts basically break down at the point they need to interact with the real world.

The only thing seckimjohn showed is that contracts, paper or "smart", are only as good as they're written. Since contracts such as the one _red gave - "we are friends, this is our company, we share the EBT equally" - exist right now, it's a perfectly realistic use case for a smart contract. Hell, I'd even do that for a small project.

Smart contracts being fixed and errors being exploitable is a valid criticism, but it's not a be-all-end-all argument that they have no real-world applications.

No, it's fundamental.

In the trust fund example, you can't stop the recipient trading their key or wallet early, as you have no way to verify the human holding it.

Unless you keep that access with some legal guardian. In which case your security still sits in the legal system, as it would if you just used a trust fund. So what's the point?

Again, in the company example, it only works if you both agree that the coins mean shares in the company. Which you can only really enforce with a legal agreement - but again, why not just have a legal agreement that says you split your share?

They completely break down at the physical barrier. Darknet markets only work because the seller doesn't control the market and has a deposit. But apart from that, they can transact anonymously - and that part has nothing to do with blockchain.

What's to stop someone from trading a normal trust fund away? The recipient can simply take out a loan with the trust as collateral.

How do you prove to the bank the non-existence of a clause that would void the trust if this is done (and noticed)?

The main point I tried to make above was this: in normal life there is the "human factor"; if you look at a situation, you can reasonably comment on whether it's a positive situation from an actor's perspective, whether the actors were engaging in fraudulent activity, etc. In the digital-only world, there is no such thing as "probable cause", "reasonable doubt", etc.

Which makes the application of smart contracts to real life intractable (IMHO borderline impossible for complex situations in the near future [~10 years?]).

""smart", are only as good as they're written"

There are so many things that go into contracts that cannot be articulated in 'code' that this all hardly makes sense.

Employment contracts are long. Comp packages can be complex.

And we all live in countries with employment laws etc. that require these things anyhow.

I don't see any actual real-world cases for Eth contracts just yet.

The earliest smart contracts will likely involve things that are relatively easy to verify, e.g. a bet that the price of ETH will be above X at Y date and time.

Ensuring that accurate, real-world data gets entered into the blockchain to enable broadly useful smart contracts is going to be an interesting area to monitor over the next few years.

Augur attempts to align people's incentives to accurately report on real world events, like political events (aka Oracles). Axa pays out on delayed flights with a new flight insurance product[1], from data gathered from public flight information, which triggers an automated payout.

[1] https://fizzy.axa/

"a bet that the price of ETH will be above X at Y date and time"

Not so fast. Who says what 'the price of ETH' is? There is no such thing as 'the price of ETH' - there are simply buyers and sellers, each willing to sell and buy at different amounts.

You'd have to agree on a mechanism to agree on what that price even is.

And what about odd fluctuations? Market cheaters - i.e. getting hold of a vast quantity of ETH just to jolt markets for a few hundred milliseconds to jigger some contract?

As far as 'paying out on delayed flights' - that's a very cool idea, but I can't imagine why on earth it would be in ETH on a distributed ledger.

> There are so many things that go into contracts that cannot be articulated in 'code' that this all hardly makes sense.

I think this is actually something I've been trying to articulate for a while now anytime smart contracts come up.

Legal contracts only specify the criteria and conditions surrounding a contract. Smart contracts must describe that PLUS all the very specific instructions on how to validate those criteria/conditions which adds a massive amount of complexity into the contract (and more potential for loopholes).

Well, I'm entirely new to this and pretty blockchain-sceptical, but I don't think SeckimJohn showed why neither works, did he?

As I understand it, each concern requires some kind of function/method which addresses it within the smart contract, right? So the real problem is that once you publish the contract, you'd better make sure you haven't forgotten something important...

Basically, both examples would need legal enforcement or guardianship outside of Ethereum in order to work.

Which they both need to work today anyway. So what's the point?

Until ether contracts can hold people to their word, are recognised by the judicial system (which wouldn't be able to effect them without some weird judge API), or are able to enforce some kind of judicial process themselves (SKYNET WARNING), they'll always break down at the physical barrier.

He didn't show that neither works, but the examples he did give were trivially bypassed and didn't seem "real world" at all.

For instance, I can't think of any corporate structure that splits income in half. Usually the money is run through the company and costs etc. are taken out first. So this isn't a real world example?

Right, the examples are just missing the step of "somehow make sure this isn't as bad an idea as it looks".

Regarding number 2, the child would not have a way of selling that Ethereum before 2035.

They sell the proof of identity or authentication information that is used to associate a person with an Ethereum account to someone else, out of band. Ethereum only knows the address to send to, and has no idea who or what is behind it at the time.

If the beneficiary dies before 2035, their heirs will collect, for instance. It is pretty simple to do a TVM (time value of money) calculation to determine the current value of a future benefit, and without all that legal-system stuff that is being bypassed through the use of smart contracts, the intent of the trust creator can be more easily bypassed.

Can't they just sell access to their address?

One way or another I'd rate the odds of their still having access to that address in 2035 as pretty low.

Realistically anything that requires people to have decent data management / opsec is going to fail for >99% of people.

They could but the buyer would not have assurance that they are the only one with the private key and therefore likely wouldn't do it.

" the child would not have a way of selling that Ethereum before 2035"

In 'no legal land' it would be easy enough to sell a bunch of 'locked until 2035' ethereum - wrapped in another ether contract, like a future or something. Much the same way lottery winners who get an annuity instead of a lump sum can then sell that annuity for a lump sum.

Those are both really interesting, thanks.

I assume in case 2 the Solidity language has some built-in mechanism for ensuring that the current datetime is agreed upon across the whole network?

As DennisP said, yes, there are mechanisms to ensure the date is agreed upon.

Your question of "consensus" brings up other interesting points. I think an increasingly valuable service in a new smart-contract world will be to have an "Outcomes As A Service" arbiter. This would be a trusted 3rd party that publishes real-world outcomes in a method that can be accessed reliably via smart contracts.

Who will win the football game between Dallas and New Orleans on Nov 22, 2019? Who is the winner of the 2020 Presidential Election? Etc. Essentially a service where you can ask them to track some real-world event and publish the outcome for use in smart contracts. Obviously this 3rd party would need to be transparent, have an open appeal process, be neutral, earn income not tied to the outcome, etc.

What happens to gaming laws (and cities like Las Vegas), in a world where sports betting can be done with no middle-man needed?

This is the main problem I see with smart contracts (Even with oracles?)...

How do the nodes running the code all get the same response from an external data source?

Even if they use a service such as Oraclize: if the contract calls https://api/query?param1 and it returns, let's say, the weather:

Node1 gets: 15.2c
Node2 gets: 15.3c

We've already failed on consensus, haven't we?

The nodes don't all separately query the source. Instead, someone queries the source and sends a transaction to the network, with the data and the source's signature. The transaction gets incorporated into a block, updates the state, and now everybody has the same data.

Services like Oraclize already allow you to do something similar to this.

Yes, every block includes a timestamp so everyone is forced to agree, and nodes won't forward a block if the time is too inaccurate. Miners have some leeway to manipulate time a bit, so you wouldn't want to use the block's timestamp as a random seed for a lottery, but just using it as a coarse deadline works fine.

Interesting. So in the case of 1, I guess I could have a function in the contract called something like sellUserXStakeTo(userId Y), that would let me transfer my 50% ownership to another user Y. And that method would be secured to only accept being called by me, or by another contract written by me.

Then if a user Y did come along and wanted to buy my stake, we'd agree a price P, then I would write a second contract that said: If user Y pays user X amount P, then call sellUserXStakeTo(Y). Then, once I'd published it, user Y would verify it, then pay the money, secure in the knowledge that I could not alter or back out of the deal.

Out of interest, are there any examples of real "live" structures in place using those mechanisms?

Just a simple example, sort of: https://hodlethereum.com/

It will send back your ETH after a certain date. You can check the smart contract in the bottom left corner.

How about using smart contracts for prop betting?

On ethereum.org there are several tutorials, including making your own token and doing a crowdsale.

The Solidity docs have tutorials for voting and auctions: https://solidity.readthedocs.io/en/develop/solidity-by-examp...

And I've got various simple ideas with code at my blog: http://www.blunderingcode.com/

I have to confess I don't like that article. I admit part of that is merely aesthetic...but principally it doesn't allow me to grapple with examples that are rooted in the now.

To give you an example of what I mean: when Bluetooth was in its early days, everybody kept repeating this stuff about your fridge being able to talk to your toaster. They wrote imaginative articles about toasters that told breadbins they had just toasted the last two slices of bread, or whatever. This didn't help me at all to understand a perfectly good technology. It was, frankly, distracting BS. If someone had said you could have headphones without wires (and perhaps they did) I would have gone: Aha! Now that may show you the hard limits of my technological imagination... but to me that's the kind of thing that sells a new technology.

I think simple examples of things we currently need to do, with practical examples of how we can do them a different way is what helps me to understand things.

The superiority is always going to be the same: you don’t have to trust a third party anymore.

Yes, but any of that stuff can be accomplished with conventional databases and software stacks. What specific problem, with respect to implementing any of the above, is blockchain solving that conventional stacks can't?

You could argue that ICO-style crowdfunding is the only one of these problems that's currently not solvable in any other way, purely because it's impossible to get other people to invest obscene amounts of money into dubious projects if you can't easily give them "tokens" of some kind in return - tokens they can hope will increase in value, regardless of whether they are useful for their advertised purpose or not.

This "problem" (which is kind of an euphemism, you could just call it "scamming") is not solvable without having a blockchain as a neutral entity that secures the tokens in question. Just giving people the same essentially-worthless "fantasy tokens" stored in a conventional database that you promise to maintain will not cut it, as people won't give you hundreds of millions in this case, but will readily do so once your tokens are on a blockchain.

This is what I'm waiting for someone to explain. It's the same with blockchain, though. It's all a very neat concept, but it doesn't solve anything real-world except to power Bitcoin.

Smart contracts just sound like... normal contracts. Like the ones that have been around in some form or another for millennia. Most contracts are just "if X then Y" but using legal lingo instead of code.

> Prediction Markets (Augur) are a use case

The difficult part about something like a prediction market is that it's hard to get an authoritative source for the result. How can you prevent a fake result being sent out, for example?

I completely understand that. I answered more or less the same question on Reddit yesterday and I hope it can bring some value here too:

In the same way that Bitcoin allows people to send money to each other without having to trust the other person, you can do computation without having to trust a centralized party to do that computation. And it doesn't have to be "computing" in the way you might think about it; it can be tracking, storing and securing information (generally a hash of that information) with some logic around it (who can append, payments, vote on things, etc.).

I wrote a blog post about some potential use cases, most of which are realized only with smart contracts. Some are pretty "out there," but in the end it's up to the innovators of the world to decide what we can or cannot do with this.


Having recently gone through the prolonged pain of probating a will, I think smart contracts could be used to enforce wills without the need for probate. But of course, that opens up its own can of worms, like the need for potential heirs to maintain a digital identity for possibly decades. What happens if someone cuts all digital ties and effectively becomes a hermit? At least with ordinary probate, you can simply visit them in person and hand them some papers to read and sign.

So maybe it's not such a great idea after all.

Wills usually change many times in your lifetime, and with a smart contract you'd have to build in safe void clauses that can't be exploited.

It also has the hard limit that the assets being willed must be cryptocurrency. It wouldn't work for a house, car, cash, etc.

Paper wallets, maybe?

My mom works for Mercedes and she's been telling me (she's really tech-y) they really want to use blockchain/smart contracts to help validate someone before they purchase a car. Essentially, before you buy a car, you need to verify you have the credit, cash, and whatnot. She says that Daimler/Mercedes think they could make this a lot smoother with smart contracts.

Would it be used for an internal system, or for contacting credit score companies, banks, etc?

I believe internal for dealerships to use. Again, she's not very tech-y so I'm not entirely sure.

*she's really NOT tech-y, my mistake

I know that one of the major power companies in Germany is working on chargers for electric cars based on Ethereum; send x ether to the smart contract and the charging station will let you charge your car accordingly.

What bit of that needs a smart contract? It seems perfectly doable without one, and you still need the same levels of trust even when there's a smart contract in there.

The smart contract is the tool to make sure* that transferred Ether results in power being transferred. Of course we could just use a blockchain like Bitcoin and define "send x BTC to wallet y and I promise you will get your power" - but in that case, the blockchain can just guarantee the value transfer, not the subsequent actions being undertaken.

* Since this involves interaction with the outside world, I don't know of a way at this point that 100% guarantees that you get your power. Curious how they tackle this.

Ethereum cannot guarantee that power will be transferred. It's just numbers in a blockchain. Users still have to trust that the electricity provider will give them power. The blockchain has just added complexity with no benefits here.

This seems like a perfect use-case for payment channels, e.g. the Lightning Network:

The car continuously delivers micropayments to the charger while the charger continuously delivers power to the car; if one stops, the other also stops. If the power is cut off prematurely, the car only loses maybe a couple cents' worth of power.

(Payment channels are built using smart contracts, but they’re simple enough that Bitcoin’s more limited smart contracts can do it)

That happens today: when I pump gas into my car, the price on the display goes up. When it hits the limit, the gas flow stops. No blockchain is required.

The difference is there is no counterparty risk. Not with the gas station, not with Visa.

Admittedly that's not a big problem with gas pumps (at least where I live, though card skimming is definitely a thing), but you can easily imagine other use-cases. For example, your computer could automatically pay an untrusted WiFi hotspot per MB (and automatically pay the VPN you use to secure said untrusted WiFi hotspot)

Zero counterparty risk digital microtransactions have never been possible until now. I’m pretty sure it will eventually be cryptocurrency’s killer application.

There's still counterparty risk: I could start paying the wifi hotspot but there is no guarantee that it will let me use what I've paid for. At any point, the next microtransaction could fail because the counterparty decides to steal my cash.

My point is if you can make the microtransactions arbitrarily small you can make the counterparty risk negligible.

If I'm sending 1-cent micropayments continuously as long as the services are being delivered (e.g. 1 cent per 1 MB of WiFi, 0.1 kWh of electricity, or 0.5 oz of gasoline), then the most that can be stolen is 1 cent, so why would they bother?

I think the idea of Ethereum is that all the infrastructure is there, so with a very thin layer (a smart contract) you get everything else (distributed computing, payments, crypto, even storage) for free.

Ransomware that is guaranteed to release the encryption key when the ransom is paid.

> Bitcoin transactions are pretty simple in what they do. You can do one single thing. One type of transaction. Skipping some details, it all boils down to TO (who is receiving money), FROM (who is sending money) and AMOUNT (how much money). This let’s bitcoin be a store of value with the capability to transfer the value between participants in the network.

Doesn't Bitcoin script allow you to do far more complex things than just send btc from person A to person B?

Yes it does. A lot of people tend to brush over this aspect of BTC. The whole notion of smart contracts is really just an extension of the scripting capability introduced in BTC which some felt was far too limited.

Not only that, but Bitcoin script was intentionally designed to be Turing incomplete, in order to allow static analysis of time/space complexity.

Ethereum was invented to fill the space of a blockchain with a Turing-complete scripting language, which may or may not prove to be useful in the real world (in my opinion it hasn’t yet).

Yes, technically true. You can do some things there, but it is somewhat limited. I said I was simplifying some details; this is one of them, and it only adds to the confusion. I don't think many people use it, at least not for any really complex use cases (I may be wrong, but that is the extent of my knowledge).

Looking at this I don't see how it can do too much interesting stuff: https://en.bitcoin.it/wiki/Script

Any other "gentle introductions" out there? Something with a complete walk-through, both from the contract creator's perspective and from some random node on the network.

This one failed to explain how contract calls are executed - is it by a single node, by multiple nodes, do nodes compete with each other for the execution, what's their incentive, etc.

I have started a YouTube channel to teach Ethereum and Solidity: https://m.youtube.com/channel/UCZM8XQjNOyG2ElPpEUtNasA

Contract calls can be broken down into two categories: state-changing vs. read-only.

If it's a state-changing contract call, it will be executed by every node on the network. In this case it's considered a transaction; it has to be mined into a block and costs ether. The new state is stored on the blockchain only after the transaction has been successfully mined.

The other kind of method call is read-only. It's free, is executed by only a single node, and does not alter the blockchain state.
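In web3.js 1.0 terms, that split maps onto .send() versus .call(). A rough sketch (the contract, method, and account names are placeholders):

    // Inside an async function, with myContract already instantiated.
    // State-changing: becomes a transaction, costs gas, and is executed
    // by every node once it has been mined into a block.
    await myContract.methods.setValue(42).send({ from: myAccount });

    // Read-only: answered locally by the single node you're connected to,
    // free, and leaves the blockchain state untouched.
    const value = await myContract.methods.getValue().call();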

Yes, I have a free intro course on writing DApps with Ethereum over here: https://www.newline.co/

Contract calls are executed by every node on the network -- if you've heard of "embarrassingly parallel", Ethereum is "embarrassingly serial". That is, every function call, on every contract, is run by every (full) node on the network.

The calls are deterministic which means that the nodes can verify that their peers calculated the outcome correctly and didn't try to cheat.

It sounds expensive (and it is) but that's the (current) cost of decentralized, trustless computation.

I like to think of Ethereum as a giant virtual machine that shares its CPU (contracts are the instructions) and storage (the Ethereum blockchain) across all nodes that participate. Every node re-runs everything locally to validate and move the state of the machine forward. And of course you get paid to do that through mining.

When you view it like that, it's obviously very costly to execute code and store outputs, so alternative solutions are needed for lower-value code. Breakthroughs in these areas could make Ethereum a viable platform for computing, and it honestly feels like we could be entering a new computing paradigm with it.

I don't have a better article, but I can try to answer your question.

Ethereum contracts are executed everywhere that the blockchain is downloaded and verified. They are deterministic, so every execution should return the same result.

Contract calls are executed by all the nodes in the network.

And they will all update the contract state on their own copy of the blockchain, while miners compete via proof of work to publish the block containing the result.

I wonder how they can ever expect a system like this to scale given that every node has to execute every instruction in every called smart contract...

Scaling is definitely a serious concern and is a large focus of Ethereum developers at the moment. I don't understand much about the proposed solutions, but they generally think the Ethereum network can be sharded in a way that allows only a small subset of the network to run any given contract while still guaranteeing reliability similar to what you'd get if the whole network had run it.

You'll need some very hard guarantees that that 'small subset' can never be under the control of a single entity.

It doesn't work like that - a working sharding scheme should not fail even if one shard is taken over. This is the most recent description of Ethereum sharding if you want to learn more: https://youtu.be/9RtSod8EXn4?t=3h12m30s . In particular, look at step 4, where the fork-choice rule is changed so that shards with invalid blocks cannot exist on a valid base chain.

That's a good point. Bear in mind though that each subset can be larger than today's entire Ethereum or Bitcoin network.

There are also various more specialized schemes to run things off-chain and submit to the chain only the results, plus some kind of assurance that you ran them correctly (e.g. Raiden, Plasma, TrueBit).

We had an SF crypto builders meet-up[0] where I gave a talk on Introduction to Solidity programming[1].

It's a pretty good primer to get from zero to smart contract in about 30 minutes. Learn from my suffering.

[0] https://www.meetup.com/sfhackdays/

[1] https://www.facebook.com/hackdays4all/videos/544008492611485...

I am particularly amused that in the first example, a simple counter, no mention is made of the initialization routine being callable at any point... at which time the counter is reset to zero.

The constructor function is only called on deployment, so the counter will increase with each new call.

If I'm not mistaken, in Solidity constructors can only be called once per contract.
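Right - the constructor body runs exactly once, inside the deployment transaction, and afterwards only the regular functions are reachable. A quick web3.js 1.0 sketch, assuming abi and bytecode come from compiling a hypothetical Counter contract:

    // Inside an async function.
    const counter = await new web3.eth.Contract(abi)
      .deploy({ data: bytecode }) // the constructor runs here, once
      .send({ from: account, gas: 200000 });

    // From now on only the regular functions are callable; no transaction
    // can invoke the constructor on this address again.
    await counter.methods.increment().send({ from: account });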

I've also just begun to play with smart contract programming.

One problem I'm having is with event listening.

It seems that MetaMask doesn't yet support subscriptions, nor does my localhost testRPC instance expose them through the web3 object.

Some have suggested I need to run my own node just to listen for contract events. Has anyone figured out an easier solution?

Possibly because the latest version of MetaMask has broken event listening: https://github.com/MetaMask/metamask-extension/issues/2393

Quick question, do you have to download the full ethereum chain before you can start messing with smart contracts? Or is there a "light" client you can use?

You shouldn't use the main Ethereum network when testing smart contracts - develop on a local testnet first and then, if you want, on a public testnet (e.g. Rinkeby or Ropsten).

You can use a test node called testrpc (now renamed to ganache-cli, I believe) and connect to that.

I've had success listening to events using web3, against both testrpc and geth (on private and test networks). What version of web3 & testrpc are you using?

I'm using web3 1.0.0-beta.26, ganache-cli 6.0.3

I instantiate the web3 object as such: web3 = new Web3(new Web3.providers.HttpProvider("http://localhost:8545"));

When I try to set up an event listener, the console logs this:

Error: The current provider doesn't support subscriptions: HttpProvider
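That error is expected: HttpProvider is request/response only, so it can't push subscription data. One workaround is to use WebsocketProvider, which does support subscriptions (a sketch; the ws endpoint and event name are assumptions about your setup - geth serves WebSockets on port 8546 with --ws, and newer ganache-cli versions can serve them alongside HTTP):

    const web3 = new Web3(
      new Web3.providers.WebsocketProvider("ws://localhost:8546")
    );

    // "MyEvent" is a placeholder for an event your contract defines.
    myContract.events.MyEvent({ fromBlock: 0 })
      .on("data", (ev) => console.log(ev.returnValues))
      .on("error", console.error);

If you're stuck on HTTP, polling with myContract.getPastEvents is another option that avoids subscriptions entirely.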

I really recommend reading the whitepaper. Not only does it explain Ethereum very well, it also contains the best explanation of Bitcoin I have ever read.


I published a code walkthrough tutorial yesterday on this: https://hackernoon.com/full-stack-smart-contract-development...

I was looking for something like this

Thanks!

Does this fundamentally break down if the cost of ETH rises to a level where it’s too expensive to run code on the chain?
