- You have about 500 lines of code to work with. This of course varies, but smart contracts have to be really small to fit in the block gas limit (6.7 million gas).
- You can't pass strings between contracts (coming soon).
- There are no floating point numbers. Since you're probably working with "money", this can make things tricky.
- You can break up your code into multiple contracts, but the tradeoff is an increased attack area.
- Dumb code is more secure than smart code.
- The tooling is very immature. You'll probably use truffle, which just released version 4. It makes some things easier, some harder. Its version of web3 (1.0) may differ from what you were expecting (0.2).
- The Ethereum testnet (Ropsten) has a different gas limit than the main net (4.7 million vs 6.7 million).
That is actually a feature when it comes to working with money. You don't ever want to use floating-point arithmetic with monetary values due to its inability to represent all possible decimal fractions of your base unit. This is just as true for over-hyped blockchain stuff as it is for any imaginable application in the "classic" financial sector.
What you need is either an integer data type plus a fixed, implied number of digits that you want to handle (so for example 105 represents 1 dollar and 5 cents), or a fixed-point numeric type (like for example BigDecimal in Java; there's lots of equivalents in other languages, most of them have something like "decimal" in their names), which essentially just stores the integer value together with the number of digits and features mathematical operations of pairs of these with each other.
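Both options can be sketched in a few lines. This is Python for brevity; `decimal.Decimal` plays the role of Java's BigDecimal (an integer value plus a scale), and the 8% tax rate is just an illustrative number:

```python
from decimal import Decimal

# Option 1: fixed, implied number of digits -- store cents in a plain int,
# so 105 represents 1 dollar and 5 cents. All arithmetic stays integer.
price_cents = 105
tax_cents = price_cents * 8 // 100          # 8% tax, truncated to 8 cents

# Option 2: a decimal type (BigDecimal equivalent) that carries the scale
# along with the digits and rounds explicitly where you tell it to.
price = Decimal("1.05")
tax = (price * Decimal("0.08")).quantize(Decimal("0.01"))   # 0.08

assert price_cents + tax_cents == 113       # $1.13 as integer cents
assert price + tax == Decimal("1.13")       # $1.13 as a decimal
```

Note that the rounding behavior (truncation vs. `quantize` with a rounding mode) is a policy decision the decimal type forces you to make explicitly.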
I feel like this is constantly repeated but is simply untrue. Working in the financial sector (high frequency trading), I don't know anyone who actually uses fixed-digit number systems to represent money. Normally what I do see are people outside of finance who need to represent money, who read the common mantra about using a fixed-digit representation, only to eventually encounter all the issues that floating point was invented to solve. When they encounter those issues they end up having to adapt their code, basically re-inventing a broken, unstable quasi-floating-point system, when they would have been much better off using the IEEE floating point system and actually taking the time to understand how it works.
BigDecimal will solve the issue but it's absolute overkill both in terms of space and time, we're talking many many orders of magnitude in terms of performance degradation and it's entirely unneeded.
One simple solution that works very well for up to 6 decimal places is to take your idea of having an implied number of digits, but instead of using an integer data type, you use a floating point data type. So 1 dollar and 5 cents becomes (double)(1050000.0). Since a double represents every integer up to 2^53 exactly, this lets you represent a range from 0.000001 up to about 9,007,199,254.740992 without any loss of precision. Values outside of that range can still be represented as well, but you may have some precision issues.
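A sketch of this scaled-double scheme, in Python (whose floats are the same IEEE 64-bit doubles as in C or Java); the helper name is illustrative:

```python
# Micro-dollar scaled doubles: 1.0 == 10^-6 dollars, so $1.05 is 1050000.0.
SCALE = 1_000_000

def to_micros(dollars: int, cents: int) -> float:
    """Hypothetical helper: build a scaled-double amount from dollars/cents."""
    return float(dollars * SCALE + cents * SCALE // 100)

a = to_micros(1, 5)        # $1.05  -> 1050000.0
b = to_micros(0, 10)       # $0.10  ->  100000.0
total = a + b              # exact: both operands are integers far below 2^53

assert total == to_micros(1, 15)            # exactly $1.15
# Doubles represent every integer up to 2^53 exactly; beyond that, gaps appear:
assert float(2**53 + 1) == float(2**53)
```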
Another solution which is less efficient but more flexible is to use decimal floating point numbers instead of binary floating point numbers. There is an IEEE decimal floating point standard that can represent an enormous range of decimal values exactly. Intel even provides a C/C++ implementation that meets formal accounting standards.
Both of the solutions I list above are orders of magnitude more efficient than using fixed-point numeric types and are by no means difficult to use.
The common programmer has no clue about the numerical instabilities that floating point numbers can cause.
Your examples are exactly what I would expect from a capable engineer that is pondering the different pros and cons when you have to optimize your code.
Calling BigDecimals entirely unneeded tells me you are in a luxury position where you can draw from a pool of the best programmers that are out there.
For lesser programmers and for the areas where speed is not as important, BigDecimal solves the issues completely and without any clever thinking and most importantly, consistently correct without having to rely on the skills of the programmer.
I will agree BigDecimal is fine to use if speed really doesn't matter, it's at least correct even if it's a nuclear option of sorts. What I really want to discourage is using fixed integer to represent some smallest unit of money, like using an int where 100 = 1 dollar, 1010 = 10 dollars and 10 cents. That is fundamentally problematic and objectively worse than using a floating point value.
But sure, if you genuinely don't want to think at all about the issue, use your language's BigDecimal. If you decide you want the much improved performance, then use a floating point value, either the IEEE decimal floating point standard or use binary floating point to represent some reasonably small unit fraction of a dollar, like 1.0 = 10^-6 dollars.
Ints won't cut it there.
Even if you are dealing in units of hundreds of billions of dollars tracked to mils, this gets you up to, I think, a hundred-thousandth of a percent with int64.
If so: totally agreed. Inflexible single-scaled-ints have been a plague in nearly every piece of production code I've seen them in. I'll take floats (well, doubles) any day over this.
Flexible scale tho (arbitrary precision / bigdecimal equivalents (I'd prefer bigdecimal over home-grown two-int variants, obv)) usually works out. And tends to avoid semi-frequent-though-usually-unimportant/unnoticed issues with imprecise floats (the ~16 million limit for exact int representation in 32-bit floats is awfully small, though doubles are generally Good Enough™).
: usually. adding doubles/fractions often produces garbage-looking output unless you remember to round. but in my lines of work, nobody cares[1.1] if it's a couple cents off in analytics.
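The round-before-display point can be shown directly (Python here, but the behavior is the same for doubles in any language):

```python
raw = 0.1 + 0.2                 # binary floating point: 0.30000000000000004
assert raw != 0.3               # the classic garbage-looking result
assert f"{raw:.2f}" == "0.30"   # rounding on output hides the noise
assert round(raw, 2) == 0.3     # or round explicitly before comparing
```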
Generally though, I think people don't quite do due-diligence here. And/or the domain changes enough that numbers get large enough to invalidate the earlier analysis. There are obviously exceptions to this though, and for them, they know what they're doing and why they're doing it and it's all good.
> working in the financial sector (high frequency trading)
Okay, you got me there - you have pointed me to that single application from the financial sector where it is not totally acceptable to "waste" a few thousand processor cycles to compute some money-related stuff. Granted, in HFT applications, these cycles may allow you to get your trades in front of those of the other HFT companies. You are forced to use CPU-accelerated (or maybe even GPU?) computations in this case, which automatically means "floating point".
But there's another difference that allows you to do this: you mostly don't have to care about exact, accurate-up-to-the-penny results. I assume most of your calculations are done to eventually arrive at a conclusion of whether to buy or sell some stock or not, and at which price. You have to take care of not accumulating too much rounding error in the process, of course, but the threshold for these errors is set by yourself, and you can give yourself a bit of leeway on this, because it's all internal stuff, mostly probabilistic and statistics-based - the only stuff that may be audited by someone else and thus has to match the real numbers up to the last penny are the trades you do and the money you move around, and I bet all of this accounting-style stuff is recorded using...decimal numbers :D
I work in the retail industry (think cash registers, retail sale accounting, that kind of stuff) and pretty much any legislation on this planet would obliterate us if we'd tell them that the result of the computations of our systems may be some cents up or down from the real result - the one someone would get who just scribbled the numbers on a sheet of paper and added them up manually. Our customers have to pay taxes based on the sales that they account using our systems, and the tax calculations as well as the entire chain of processes they use to arrive at the final numbers are regularly audited by the respective governments. There are specific rules (of course differing by country) as to how many decimal places have to be used and how stuff has to be rounded in which particular cases in calculations that would require more decimals. We waste an enormous amount of CPU cycles just to strictly adhere to these rules - and that is not only absolutely necessary, but also totally okay; modern CPUs can easily accommodate this in our scenario.
It is especially in cases like this that you absolutely should not use fixed decimal point systems to represent money. It is exactly in these circumstances that your fixed decimal point system will eventually encounter a situation where it fails catastrophically.
Intel provides an IEEE decimal floating point system specifically for the purpose of adhering to legal requirements, they say so right at the top of their website:
If fixed-point math had solved this issue, they'd have provided a solution involving it, but fixed-point math is simply the absolute worst solution to this problem, even worse than naively using a 64-bit binary floating point number.
BigDecimal is also a viable solution, but it's entirely unnecessary and the cost really is enormous especially if you need to handle a large number of transactions. But sure, if you say performance genuinely isn't an issue go with BigDecimal.
That is exactly what we currently do, and speed of computations is definitely not an issue at all. If we have performance related problems (and we do sometimes), they have never in 10 years originated from numerical computation just taking too long, but always from other inefficiencies, quite often of architectural nature that uselessly burn a thousand times more cycles than the numeric part of the job.
BigDecimal provides exactly what we need: a no-brainer, easy-to-use, easy-to-understand and always correct vehicle for the calculation of monetary sums. It lets our developers focus on implementing the business logic, and also on making less of the described high-level failures that get us into actual performance troubles, instead of consuming a lot of mental share just to “get the computation right“ in all circumstances.
Hell, floating point isn't even a good way of doing imprecise arithmetic on larger numbers. What you would actually want is log arithmetic, but that's not what the hardware implements (even though it's simpler). It gives you roughly the same overall precision over each range but stops it from suddenly being cut in half as the exponent changes.
But you are not some cents up or down; even if you are calculating with billions, you still have 7 significant digits, and that's a lot of headroom for calculations.
Are there any financially sensible calculations where large numbers (order of magnitude billions) are multiplied with each other?
Would you care to elaborate on some of these issues? I'm genuinely curious.
When I started working in finance straight out of school (not in HFT at the time), I naively accepted the dogma that money should never be represented using floating points. I mean everywhere I went I would read in bold letters don't use floating point! Don't use floating point! So I just accepted it to be true and didn't question it. When I wrote my first financial application I used an int64 to represent currencies with a resolution of up to 10^-6 because that was what everyone said to do.
And well... all was good in life. Then one day I extended my system to work with currencies other than U.S. dollars, like Mexican pesos, Japanese yen, and currencies X where 1 X is either much less than 1 USD or much greater than 1 USD.
Then things started to fail: rounding errors became noticeable, especially when doing currency conversions, and things started to fall apart real bad.
So I took the next logical step and used a BigDecimal. Now my rounding issues were solved, but the performance of my applications suffered immensely across the board. Instead of storing a 64-bit int in a database, I'm now storing a BigDecimal in Postgresql, and that slowed my queries dramatically. Instead of just serializing raw 64 bits of data across a network in network byte order, I now have to convert my BigDecimal to a string, send it across the wire, and then parse back the string. Every operation I perform now potentially requires allocating memory on the heap, whereas before everything was minimal and blazingly fast. I feel like there is a general attitude that performance doesn't matter, premature optimization is evil, programmer time is more expensive than hardware, so on and so forth... but honestly nothing feels more demoralizing to me as a programmer than having an application run really really fast one day, and then the next day it's really really slow. Performance is one of those things that when you have it and know what it feels like, you don't want to give it up.
So not knowing any better I decided to come up with a scheme to regain the lost performance... I realized that for U.S. dollars, 10^-6 was perfectly fine. For currencies that are small compared to the U.S. dollar, I needed fewer decimal places, so for yen, 1 unit would represent 10^-4 yen. For currencies bigger than the U.S. dollar, 1 unit would represent 10^-8...
So my "genius" younger self decided that my Money class would store both an integer value and a 'scale' factor. When doing operations, if the scale factor was the same, then I could perform operations as is. When the scale factor was different, I would have to rescale the value with lower precision to the value with higher precision and then perform the operation.
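The scheme just described can be sketched as a tiny class (Python here; all names are illustrative, not from the author's actual codebase):

```python
# Integer value plus scale factor: value counts units of 10^-scale.
# Mixed-scale addition rescales the coarser operand up to the finer one,
# which is exactly the normalization step a floating point unit performs.
class Money:
    def __init__(self, value: int, scale: int):
        self.value = value   # integer count of 10^-scale currency units
        self.scale = scale   # number of implied decimal places

    def __add__(self, other: "Money") -> "Money":
        s = max(self.scale, other.scale)
        return Money(
            self.value * 10 ** (s - self.scale)
            + other.value * 10 ** (s - other.scale),
            s,
        )

usd = Money(105, 2)            # $1.05 at 2 implied decimals
fine = Money(1_050_000, 6)     # the same amount at 6 implied decimals
total = usd + fine
assert (total.value, total.scale) == (2_100_000, 6)   # $2.10
```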
This actually worked to a degree, I regained a lot of my speed and didn't have any precision issues, but all I did was reinvent a crappy floating point system in software without knowing it.
Eventually I ended up reading about actual floating points and I could see clearly the relationship between what I was doing and what actual experts had realized was the proper way to handle working with values whose scales could vary wildly.
And once I realized that I could then sympathize with why people were against using binary floating point values for money, but the solution wasn't to abandon them, it was to actually take the time to understand how floating point works and then use floating points properly for my domain.
So my Money class does use a 64-bit floating point value, but instead of (double)(1.0) representing 1 unit, it represents 10^-6 units. And it doesn't matter what currency I need to represent: I can handle currencies as small as the Russian ruble or as large as Bitcoin, and it all just works and works very fast.
64-bit doubles give me 15 digits of guaranteed precision, so as long as my values are within the range 0.000001 up to 999999999.999999 I am guaranteed to get exact results.
For values outside of that range, I still get my 15 digits of precision but I will have a very small margin of error. But here's the thing... that margin of error would have been unavoidable if I used fixed decimal arithmetic.
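The 15-digit claim above is easy to check (Python floats are IEEE 64-bit doubles): at micro-unit scale, the largest in-range amount is still an integer below 2^53, and doubles store every integer up to 2^53 exactly.

```python
# 999,999,999.999999 units, stored as an integer count of 10^-6 units:
largest = 999_999_999_999_999
assert largest < 2**53                       # 2^53 = 9,007,199,254,740,992
assert float(largest) == largest             # stored exactly as a double
assert float(largest) - float(largest - 1) == 1.0   # neighbors stay distinct
```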
Now I say this as a personal anecdote but I know for a fact I'm not the only one who has done this. I just did a Google search that led to this:
The second top answer with 77 points yells in bold letters about not using floating points as a currency and suggests using a scheme almost identical to the one I described, where you store a raw number and a scaling factor, basically implementing a poor-man's version of floating point numbers.
I guess you fall into the few percent of developers who are NOT meant to be addressed by the general rule of "don't use float for money". Like with all "general rules" in software development, it is intended to guide the >90% of devs who don't want and sometimes also aren't capable of fully grasping the domain they're working in and the inner workings of the technology they use. The majority of devs need simple, clear guidelines that prevent them from making expensive mistakes while wielding technology that's made up of layers and layers of abstractions, of which some (or even most) are entirely black boxes to them. "Playing it safe" often comes with other shortcomings like worse performance, and if those are not acceptable in a specific scenario, you need a developer who fully grasps the problem domain and technology stack and who is thus capable of ignoring the "general rules" because he knows exactly why they exist and why he won't get into the troubles they are intended to protect you from.
I have long thought that every developer under the sun should strive to get up to this point, and I still think that it is an admirable goal, but I came to understand that not all developers share this goal, and that even if someone tries to learn as much as possible about every technology that he comes in contact with, he will never be able to reach this state of deep understanding in every technical domain imaginable, as there are just too many of them nowadays. We all, no matter how smart we are, sometimes need to rely on "general rules" in order to not make stupid mistakes.
If you specify a precision and scale, performance of NUMERIC improves quite a bit, but now you can’t store values with vastly different magnitudes (USD and JPY) in the same column without wasting tons of storage on each row. You’re back to square one.
That being said, I agree with your sentiment that there are faster/more efficient ways to accomplish this. For something like smart contracts, I'd think they should be able to abstract this line of reasoning into an API that's consumable for those of us who aren't as familiar, or at least offer it as an option.
How long is BigDecimal.class?
In the case of Ethereum, it could either be provided in the language or on the virtual machine level. I'm not that much into the EVM's inner workings to judge whether a machine-level implementation would be a good idea from an architectural viewpoint, but it would definitely deliver the best possible performance and lead to the smallest amount of contract code, thereby saving on gas to deploy the contract. As a second-best solution, the language could provide such an abstraction - it would at least be able to apply optimizations when compiling the code down to EVM opcodes, which an implementation purely on the contract level would not be able to do.
What money do you want to represent? They all have an "atomic value".
For instance, when working with dollars, store cents. When working with Eth, store wei. Etc.
I can't think of a use case for money that needs decimals, except maybe computing ownership percentages, but that should never be stored; it should be computed.
Anything that needs to be converted is a "front end" view. All computations should use the atomic store of value, so no conversion is needed in contracts.
This is mostly not true.
> For instance, when working with dollars store cents.
While external transactions often must occur in cents, internal accounts and unit prices often have smaller amounts. If you don't believe me, visit any US gas station and the prices will be in mils, not cents (and will usually be one mil less than some full-cent value per gallon.) Atomic units for financial applications are application-specific if there is an appropriate value at all, they aren't trivially determinable by looking at the base currency.
In some cases you really want an arbitrary precision decimal (or even rational) representation.
How does this work if you are, say, coding for a gas station where the price is in 1/10ths of a cent?
Alternatively, how does your banking application handle adding 10% interest to a bank account with 1c in it?
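Both questions above (mil pricing, interest on a tiny balance) hit the same failure mode for whole-cent integers; a quick sketch in Python:

```python
# 10% interest on a 1-cent balance, with whole-cent integers:
balance_cents = 1
interest_cents = balance_cents * 10 // 100    # 0 -- the interest vanishes
assert interest_cents == 0

# The same calculation with a finer atomic unit (10^-6 dollars) keeps it:
balance_micros = 10_000                       # $0.01 in micro-dollars
interest_micros = balance_micros * 10 // 100  # 1,000 micros == $0.001
assert interest_micros == 1_000
```

Whether that $0.001 is then paid, carried forward, or rounded away is a business rule, but at least the representation can express it.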
About the point on using floats: one should never use floats when working with money, because floats are not precise and result in rounding errors (e.g. 0.1 + 0.2 results in 0.30000000000000004; see http://0.30000000000000004.com for details). One of the simplest approaches is to work with the smallest units, so if working with dollars you can use cents, which means you can use integers and get exact results. This is how Solidity currently deals with it, by working with the smallest unit of Ether, which is 'wei'. Some of the units are listed here: https://etherconverter.online
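The smallest-unit approach is easy to demonstrate (Python here for brevity; Solidity's uint256 arithmetic behaves the same way in principle):

```python
# The float pitfall the comment links to:
assert 0.1 + 0.2 != 0.3                   # 0.30000000000000004 in doubles

# Working in wei instead: 1 ether = 10^18 wei, so ether amounts become
# plain integers and addition is exact.
WEI_PER_ETHER = 10**18
a = WEI_PER_ETHER // 10                   # 0.1 ether in wei
b = WEI_PER_ETHER // 5                    # 0.2 ether in wei
assert a + b == 3 * WEI_PER_ETHER // 10   # exactly 0.3 ether
```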
This is the way to get programs over 500 lines, and it doesn't increase the attack area as much as you'd think, since you can essentially hard-code the addresses of outside contracts into your code--it's not a combinatorial explosion.
You should call Gavin Wood and tell him that :D
The max gas limit is voted on by Ethereum miners. They recently raised it to 6.7 million (I believe it was 4.5 million not so long ago). Gas is what is used to store the contract (thus increasing the blockchain size) or to execute code (thus increasing the workload on the miners), so it's up to them.
More Eth = more $$, but they have to weigh that against the downsides. If the price of Eth keeps going up, they have less incentive to increase the gas limit, because they get more money for doing the same work.
I've been using Truffle too. My biggest issue when getting started was the cognitive overload. If you want to try Truffle I wrote a couple of simple guides that helps you deploy your first Ethereum smart contract to a test net https://blog.abuiles.com/blog/2017/07/09/deploying-truffle-c...
Bitcoin has, IIRC, 8 decimal places of precision (1 satoshi = 10^-8 BTC).
If you are starting back at the idea of the Bitcoin blockchain, see ahussain's recommendation of 3Blue1Brown a day later:
I can go on for a while longer, there are hundreds of use cases but the question is: what would be a use case for you?
I'm not demanding it and I'm not being snarky at all. It's just that's the level I'm at - without that kind of entry point I struggle to assess it.
1. Me and You start a company together. We would like to split things 50/50. We setup an address such that when values are paid to it, half goes to you, half goes to me. Effectively we've created an "LLC with an Operating Agreement", but we aren't relying on political legal system / country to enforce. Enforcement is automatic and done by the network.
2. You want to set up a Trust for your kids. You create a contract that holds 10 Bitcoin / Ethereum / etc. On Jan 1, 2035 the value of that address will be forwarded to your child's address. Again, we've created a "Non-Revocable Trust" that is network enforced. No need for probate, courts, trustees, etc.
What is the real world use case for that, though? That seems like a contract that makes sense as long as you and I trust each other, which rather negates the purpose of a contract.
Some immediate issues that strike me:
* There's nothing to stop either of us offering the services of the company outside of that contract;
* There's no way to ensure that one of us isn't freeloading;
* If my wallet gets stolen you'll be paying half the earnings of the company to the hacker until the contract is somehow voided;
* Which raises the question of how can this contract get voided? Do either of us get to void it whenever we feel like it? What if I've lost my key? Do we potentially just have to live with the money being paid into hacked / unusable accounts forever?
So all you've really saved is the bother of dividing a number by two and doing bank transfers, in return for which you've gained significant operational burdens.
2. How do you make sure that your child doesn't sell it in 2025 and buy a Lambo? (which is a problem you don't have with regular Trusts AFAIK)
Him: "This baker makes bread rolls." You: "What about dark bread? What about non-bread items? What's his broker?"
SeckimJohn quite quickly showed why neither works. There are some niche digital uses, but not the ones people often tout.
Ethereum contracts basically break down at the point they need to interact with the real world.
Smart contracts being fixed and errors being exploitable is a valid criticism but it's not the be-all-end-all argument for them to have no real world applications.
In the trust fund example, you can't stop the recipient trading their key or wallet early, as you have no way to verify the human holding it.
Unless you keep that access with some legal guardian. In which case your security still sits in the legal system, as it would if you just used a trust fund. So what's the point?
Again in the company example, it only works if you both agree that the coins mean shares in the company. Which you can only really enforce with a legal agreement, but again, why not just have a legal agreement that says you split your share?
They completely break down at the physical barrier. Darknet markets only work because the seller doesn't control the market and has a deposit. But apart from that they can transact anonymously, and that part has nothing to do with blockchain.
the main point I tried to make above was this: in normal life there is the "human factor"; like, if you look at a situation, you can reasonably comment on whether it's a positive situation from an actor's perspective; whether the actors were engaging in fraudulent activity etc. In digital-only world, there is no such thing as "probable cause", "reasonable doubt" etc.
which makes the application of smart contracts to real life intractable (IMHO borderline impossible for complex situations for the near future [~10 years?]).
There are so many things that go into contracts that cannot be articulated in 'code' that this all hardly makes sense.
Employment contracts are long. Comp packages can be complex.
And we all live in countries with employment laws etc. that require these things anyhow.
I don't see any actual real-world cases for Eth contracts just yet.
Ensuring that accurate, real-world data gets entered into the blockchain to enable broadly useful smart contracts is going to be an interesting area to monitor over the next few years.
Augur attempts to align people's incentives to accurately report on real world events, like political events (aka Oracles). Axa pays out on delayed flights with a new flight insurance product, from data gathered from public flight information, which triggers an automated payout.
Not so fast. Who says what 'the price of ETH' is? There is no such thing as 'the price of ETH'; there are simply buyers and sellers, each willing to sell and buy at different amounts.
You'd have to agree on a mechanism to agree on what that price even is.
And what about odd fluctuations? Market cheaters, i.e. getting hold of a vast quantity of ETH just to jolt markets for a few hundred milliseconds to jigger some contract?
As far as 'paying out on delayed flights' - that's a very cool idea, but I can't imagine why on earth it would be in ETH on a distributed ledger.
I think this is actually something I've been trying to articulate for a while now anytime smart contracts come up.
Legal contracts only specify the criteria and conditions surrounding a contract. Smart contracts must describe that PLUS all the very specific instructions on how to validate those criteria/conditions which adds a massive amount of complexity into the contract (and more potential for loopholes).
As I understand it each concern requires some kind of function/method which addresses it within the smart contract, right? So the real problem is that once you publish the contract you better make sure you haven't forgotten something important...
Which they both need to work today anyway. So what's the point?
Until ether contracts can hold people to their word, are recognised by the judicial system (which wouldn't be able to enforce them without some weird judge API), or are able to enforce some kind of judiciary process themselves (SKYNET WARNING), they'll always break down at the physical barrier.
For instance, I can't think of any corporate structures that splits income in half. Usually the money is run through the company and costs etc. are taken out first. So this isn't a real world example?
If the beneficiary dies before 2035, their heirs will collect, for instance. It is pretty simple to do a TVM calculation to determine the current value of a future benefit, and without all that legal-system stuff that is being bypassed through use of smart contracts, the intent of the trust creator can be more easily bypassed.
Realistically anything that requires people to have decent data management / opsec is going to fail for >99% of people.
In 'no legal land' it would be easy enough to sell a bunch of 'locked until 2035' Ethereum, wrapped in another ether contract like a future or something. Much the same way lottery winners who get an annuity instead of a lump sum can then sell that annuity for a lump sum.
I assume in case 2 Solidity language has some built-in mechanisms for ensuring that the current datetime is agreed across the whole network?
Your question of "consensus" brings up other interesting points. I think an increasingly valuable service in a new smart-contract world will be to have an "Outcomes As A Service" arbiter. This would be a trusted 3rd party that publishes real-world outcomes in a method that can be accessed reliably via smart contracts.
Who will win the football game between Dallas and New Orleans on Nov 22, 2019? Who is winner of 2020 Presidential Election? etc....Essentially a service that you can tell them you want to track some real-world event and publish outcome for use in smart-contracts. Obviously this 3rd party would need to be transparent, have an open appeal process, be neutral and earning income not tied to outcome, etc.
What happens to gaming laws (and cities like Las Vegas), in a world where sports betting can be done with no middle-man needed?
How do the nodes running the code all get the same response from an external data source?
Even if they use a service such as Oraclize, if the contract calls https://api/query?param1 and it returns, let's say, the weather:
Node1 gets: 15.2c
Node2 gets: 15.3c
We've already failed on consensus haven't we?
Then if a user Y did come along and wanted to buy my stake, we'd agree a price P, then I would write a second contract that said: If user Y pays user X amount P, then call sellUserXStakeTo(Y). Then, once I'd published it, user Y would verify it, then pay the money, secure in the knowledge that I could not alter or back out of the deal.
It will send back your ETH after a certain date. You can check the smart contract in the bottom left corner.
The Solidity docs have tutorials for voting and auctions: https://solidity.readthedocs.io/en/develop/solidity-by-examp...
And I've got various simple ideas with code at my blog: http://www.blunderingcode.com/
To give you an example of what I mean: when Bluetooth was in its early days everybody kept repeating this stuff about your Fridge being able to talk to your Toaster. They wrote imaginative articles about Toasters that told Breadbins they had just toasted the last two slices of bread or whatever. This didn't help me at all to understand a perfectly good technology. It was, frankly, distracting BS. If someone had said you could have headphones without wires (and perhaps they did) I would have gone: Ah hah! Now that may show you the hard limits of my technological imagination...but to me that's the kind of the thing that sells a new technology.
I think simple examples of things we currently need to do, with practical examples of how we can do them a different way is what helps me to understand things.
This "problem" (which is kind of a euphemism; you could just call it "scamming") is not solvable without having a blockchain as a neutral entity that secures the tokens in question. Just giving people the same essentially-worthless "fantasy tokens" stored in a conventional database that you promise to maintain will not cut it, as people won't give you hundreds of millions in this case, but will readily do so once your tokens are on a blockchain.
Smart contracts just sound like... normal contracts. Like the ones that have been around in some form or another for millennia. Most contracts are just "if X then Y" but using legal lingo instead of code.
The difficult part about something like a prediction market is that it's hard to get an authoritative source for the result. How can you prevent a fake result being sent out, for example?
In the same way that Bitcoin allows people to send money to each other without having to trust the other person, you can do computation without having to trust a centralized party to do that computation. And it doesn't have to be "computing" in the way you might think about it; it can be tracking, storing and securing information (generally a hash of that information) with some logic around it (who can append, payments, vote on things, etc.).
I wrote a blog post about some potential use cases, most of which can only be realized with smart contracts. Some are pretty "out there," but in the end it's up to the innovators of the world to decide what we can or cannot do with this.
So maybe it's not such a great idea after all.
* since this involves interaction with the outside world, I don't know a way at this point that 100% guarantees that you get your power. Curious how they tackle this.
The car continuously delivers micropayments to the charger while the charger continuously delivers power to the car; if one stops, the other also stops. If the power is cut off prematurely the car only loses maybe a couple cents' worth of power.
(Payment channels are built using smart contracts, but they’re simple enough that Bitcoin’s more limited smart contracts can do it)
Admittedly that’s not a big problem with gas pumps (at least where I live, though card skimming is definitely a thing) but you can easily imagine other use-cases. For example your computer could automatically pay an untrusted WiFi hotspot per MB (and automatically pay the VPN you use to secure said untrusted WiFi hotspot)
Zero counterparty risk digital microtransactions have never been possible until now. I’m pretty sure it will eventually be cryptocurrency’s killer application.
If I’m sending 1 cent micropayments continuously as long as the services are being delivered (e.g. 1 cent per 1 MB WiFi, 0.1 kWh electricity, or 0.5 oz gasoline, etc.) then the most that can be stolen is 1 cent, so why would they bother?
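The pay-per-increment idea above can be sketched in a few lines of plain JavaScript. This is only a simulation of the economics, not a payment channel implementation; the increment size and unit names are made up.

```javascript
// Streaming micropayments: the buyer pays one cent-sized increment
// before each unit of service is delivered, so a cheating seller can
// only ever steal one increment.
const INCREMENT_CENTS = 1; // hypothetical price per unit (e.g. 1 MB of WiFi)

function streamService(buyer, seller, unitsWanted, sellerHonestFor) {
  let unitsDelivered = 0;
  for (let i = 0; i < unitsWanted; i++) {
    if (buyer.balance < INCREMENT_CENTS) break; // buyer stops paying
    buyer.balance -= INCREMENT_CENTS;           // pay first...
    seller.balance += INCREMENT_CENTS;
    if (unitsDelivered >= sellerHonestFor) break; // ...seller absconds here
    unitsDelivered++;                             // ...else receive one unit
  }
  return unitsDelivered;
}

const buyer = { balance: 100 }; // 100 cents
const seller = { balance: 0 };
// Seller cheats after 5 units: the buyer is out only the 6th cent.
const delivered = streamService(buyer, seller, 10, 5);
console.log(delivered, buyer.balance, seller.balance); // 5 94 6
```

The worst-case loss is bounded by one increment no matter how long the stream runs, which is why counterparty risk effectively disappears.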
Doesn't Bitcoin script allow you to do far more complex things than just send btc from person A to person B?
Ethereum was invented to fill the space of a blockchain with a Turing-complete scripting language, which may or may not prove to be useful in the real world (in my opinion it hasn’t yet).
Looking at this I don't see how it can do too much interesting stuff: https://en.bitcoin.it/wiki/Script
This one failed to explain how contract calls are executed - is it by a single node, by multiple nodes, do nodes compete with each other for the execution, what's their incentive, etc.
Contract calls can be broken down into 2 categories:
state-changing vs read-only.
If it's a state-changing contract call, it will be executed by every node on the network. In this case it's considered a transaction, which has to be mined into a block and costs ether. The new state is stored on the blockchain only after the transaction has been successfully mined.
The other kind of method call is read-only. It's free, is executed only by a single node, and does not alter the blockchain state.
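In web3.js 1.0 the two kinds of call map onto `.send()` and `.call()`. A hedged sketch — it assumes a connected `web3` instance and a deployed contract whose `abi`, `contractAddress`, and method names (`set`, `get`) are stand-ins for your own:

```javascript
// Sketch against web3.js 1.0; `web3`, `abi`, `contractAddress`, and
// `myAccount` are assumed to be in scope. Method names are made up.
const contract = new web3.eth.Contract(abi, contractAddress);

// State-changing: broadcast as a transaction, mined into a block,
// replayed by every node, and paid for in gas.
contract.methods.set(42)
  .send({ from: myAccount })
  .then(receipt => console.log("mined in block", receipt.blockNumber));

// Read-only: answered locally by the one node you're connected to,
// free, and leaves the chain untouched.
contract.methods.get()
  .call()
  .then(value => console.log("current value:", value));
```
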
Contract calls are executed by every node on the network -- if you've heard of "embarrassingly parallel", Ethereum is "embarrassingly serial". That is, every function call, on every contract, is run by every (full) node on the network.
The calls are deterministic which means that the nodes can verify that their peers calculated the outcome correctly and didn't try to cheat.
It sounds expensive (and it is) but that's the (current) cost of decentralized, trustless computation.
When you view it like that it's obviously very costly to execute code & store outputs, so alternative solutions are needed for lower value code. Breakthroughs in these areas could make Ethereum a viable platform for computing, and it honestly feels like we could be entering a new computing paradigm with it.
Ethereum contracts are executed everywhere that the blockchain is downloaded and verified. They are deterministic, so every execution should return the same result.
And they will all update the contract state on their own copy of the blockchain, then broadcast the result, with miners competing via proof of work.
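The "embarrassingly serial" property described above can be illustrated with a tiny simulation: because execution is deterministic, every node that replays the same call from the same state must arrive at the same result, so peers can check each other without trust. Everything here is illustrative, not real Ethereum code.

```javascript
// A deterministic state transition: the output depends only on
// (state, call), never on the node running it.
function applyCall(state, call) {
  return { counter: state.counter + call.amount };
}

const call = { amount: 7 };
// Three "full nodes", each with its own copy of the current state.
const nodes = [{ counter: 0 }, { counter: 0 }, { counter: 0 }];
const results = nodes.map(state => applyCall(state, call));

// Every node computed the same new state; a node reporting anything
// else would simply be rejected by its peers.
const allAgree = results.every(r => r.counter === results[0].counter);
console.log(allAgree, results[0].counter); // true 7
```

The redundancy is the cost: the same computation runs once per full node, which is why on-chain execution is priced in gas.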
It's a pretty good primer to get from zero to smart contract in about 30 minutes. Learn from my suffering.
One problem I'm having is with event listening.
It seems that MetaMask doesn't yet support subscriptions, nor does my localhost testRPC instance pass it in the web3 object.
Some have suggested I need to run my own node just to listen for contract events. Has anyone figured out an easier solution?
I instantiate the web3 object as such:
web3 = new Web3(new Web3.providers.HttpProvider("http://localhost:8545"));
When I try to setup an event listener, console logs this:
Error: The current provider doesn't support subscriptions: HttpProvider
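That error is expected: in web3.js 1.0, event subscriptions need a push-capable transport, which plain HTTP isn't. Two workarounds I know of, sketched below — the WebSocket port and the event name `MyEvent` are assumptions, so match them to your own node and contract (geth needs to be started with its WebSocket RPC enabled; testrpc/ganache may not expose one at all, in which case polling is the fallback):

```javascript
// Option 1: connect over WebSocket instead of HTTP so subscriptions work.
// The port (8546) is an assumption -- use whatever your node exposes.
web3 = new Web3(new Web3.providers.WebsocketProvider("ws://localhost:8546"));
contract.events.MyEvent({}, (err, event) => {
  if (!err) console.log(event.returnValues);
});

// Option 2: keep the HttpProvider and poll for past events instead.
// `lastSeenBlock` is your own bookkeeping of where you left off.
setInterval(async () => {
  const events = await contract.getPastEvents("MyEvent", {
    fromBlock: lastSeenBlock,
    toBlock: "latest"
  });
  events.forEach(e => console.log(e.returnValues));
}, 5000);
```

Polling via `getPastEvents` is less elegant but works against any HTTP endpoint, including MetaMask-injected providers that don't support subscriptions.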