The other problem is the obvious one: smart contracts make the implicit assumption that we are able to write bug-free code. Evidently, that is not the case.
There clearly needs to be some sort of mechanism to sort out issues that may arise from this. But in practice Ethereum has replaced an imperfect but robust mechanism in the form of the legal system with one guy, who is accountable to no one, making discretionary decisions with no guidelines to bind him.
> ... written contracts can have "bugs" just as smart contracts can.
The onus on smart contracts is to come up with use cases where they are superior to written contracts, not to find examples where they are equally flawed.
New Thing does W and Z imperfectly.
Therefore, New Thing is doomed.
Old thing does W imperfectly.
New thing does X. Proponents claim that X solves W's imperfections.
Turns out that it doesn't. Opponents claim that X hasn't much value then, so it's doomed.
Clearly the use case is in bypassing third parties and avoiding risk. There is no clear value in grey areas there. In valuing a technology, it is the best use case you can think of that counts, not the worst.
Ethereum is a specific form of smart contracts where the use case compared to other forms is perhaps not as clear cut, but that doesn't stop people from speculating on the possibility.
Technically it's possible and is called "exhaustive testing". You can do it for fairly small input spaces, e.g. you can test trig functions on all 32-bit inputs; that's only ~4 billion values.
Non-trivial programs have way more state than that: consider a vector of f32, where just three elements already give 2^96 possible states, and on a 64-bit platform the vector can hold up to 2^64 items.
Any modern machine: https://randomascii.wordpress.com/2014/01/27/theres-only-fou...
It's the same order of magnitude as your CPU frequency, and trivially partitionable (and thus parallelisable). How fast exactly depends on the complexity of the function being tested, but assuming the entire thing happens in native code it's on the order of 1-10 minutes to go through all values, call both the FUT (function under test) and the oracle, and compare the results.
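As a scaled-down sketch of that workflow (16 bits of input instead of 32, so it finishes instantly; `popcount_fut` is a hypothetical function under test invented for this illustration, not anything from the thread):

```python
# Exhaustive testing in miniature: sweep the full 16-bit input space
# (65,536 values), calling both the FUT and an obviously-correct oracle
# on every single input and comparing the results.

def popcount_fut(x: int) -> int:
    # Bit-twiddling popcount for a 16-bit value (the function under test)
    x = x - ((x >> 1) & 0x5555)
    x = (x & 0x3333) + ((x >> 2) & 0x3333)
    x = (x + (x >> 4)) & 0x0F0F
    return (x + (x >> 8)) & 0x1F

def popcount_oracle(x: int) -> int:
    # Slow but trivially-correct reference implementation
    return bin(x).count("1")

failures = [x for x in range(1 << 16)
            if popcount_fut(x) != popcount_oracle(x)]
print(len(failures))  # → 0: every one of the 65,536 inputs agrees
```

Scaling the same loop from 2^16 to 2^32 inputs is exactly the minutes-long native-code run described above; every bit added past that doubles it again.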
> Once states are added in, it quickly explodes beyond a simple 32-bit input space.
Then you're not testing a problem with a 32-bit input space anymore. My comment specifically points out that it's doable for small input spaces and gives 32 bits as an anchor at the upper end. If the program under test is fast enough and you're fine with hours-long test runs you could get up to the low 40s and still be able to test every single value, but since every bit doubles the number of states (and thus the time spent testing) it grows unreasonable extremely quickly.
> I know in my own work, we don't even bother.
Of course not, the average program is billions of orders of magnitude beyond 32 bits of input state. The current HN homepage is 42.5KB, that's ~350,000 bits of input space. If you take an input file or some such, you have to start stacking exponentials just to write down the size of your input space without it getting unwieldy.
Sure, you can’t in the general abstract case, but my exact point is that you can do this in practice, for programs which we actually write.
Perhaps you could show an example of a useful program we can’t do that for?
In my experience, it’s not due to either of those theoretical considerations that we don’t see it done — we just don’t see it done in practice for cost reasons.
So, yeah, imagine doing that for any program with more than five hundred lines of code.
No, you cannot do that for programs which we actually write. Even a trivial program taking an array of single-precision floats can't be exhaustively tested; there are north of 2^100 input states (assuming a 64-bit platform, so arrays can't have more than 2^64 elements).
If a program on a 32-bit Windows system takes a file as input, that's on the order of 2^10^15 possible input states…
We can test every single case for very simple, trivial programs e.g. we can exhaustively test a function with 32 bits of input (a 32-bit integer or a single-precision float: https://randomascii.wordpress.com/2014/01/27/theres-only-fou...) in a minute or so if the function is not too complex. It might be feasible to add a few more bits, but assuming the same trivial function at 40 bits you've jumped to ~4 hours, at 43 bits you need more than a day.
At 64 bits you need 8000 years.
Alternatively: let's say we want to exhaustively test 32-bit division. That's 2^64 possible inputs (two 32-bit operands) at 26 cycles per division (we're being optimistic) and, say, 4GHz: 2^64 * 2^4.7 / 2^31.9 = 119647558364 seconds, or about 3800 years.
And that's just the division itself, mind, we still need to account for some incrementing, jumping, etc…
And that's just to exhaustively test the division of two single-precision floats.
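That back-of-envelope figure is easy to sanity-check (a quick sketch using the same assumed numbers, 26 cycles per division at a 4 GHz clock):

```python
# Verify the time estimate for exhaustively testing 32-bit division:
# 2^64 input pairs, ~26 cycles each, on one 4 GHz core.
inputs = 2 ** 64
cycles_per_div = 26
clock_hz = 4e9

seconds = inputs * cycles_per_div / clock_hz
years = seconds / (365.25 * 24 * 3600)
print(f"{seconds:.3e} s ≈ {years:.0f} years")  # → 1.199e+11 s ≈ 3800 years
```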
And that's just one constraint that makes the mythical "100% test coverage" practically infeasible. Another is: what tests the tests?
The number of possible inputs and states for even a moderately complex program is also so astronomically high that it quickly far surpasses the number of atoms that earth consists of, so, yeah, "cost reasons" is one way to put it. We also don't transform sheet rock into gold for "energy reasons".
Do there exist inputs -- either strings or repetition counts -- for which the following function does not panic owing to allocation issues (as documented) but either 1. causes a SIGABRT or similar to be raised, or 2. fails to produce a new string that is the correct multiple of the original string slice's size?
It's not possible, I contend, to answer this question with tests. The cardinality of input strings and repetitions is bounded but very, very high. You can for sure find _examples_ where `str::repeat` functions as documented but demonstrating that it always will is a different thing.
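To translate the point into something runnable (a Python analogue of `str::repeat`, namely `str * n`, chosen purely for illustration), sampling can pile up evidence for the documented length property but can never exhaust the space:

```python
# Sampled property testing: check len(s * n) == len(s) * n on many random
# inputs. Every sample can pass, yet the input space (all strings crossed
# with all counts) is so large that this remains evidence, not proof.
import random

random.seed(0)  # deterministic run for reproducibility
for _ in range(10_000):
    s = "".join(random.choice("abcd") for _ in range(random.randrange(20)))
    n = random.randrange(50)
    assert len(s * n) == len(s) * n  # holds on every sampled case...

print("10000 samples passed")  # ...which demonstrates examples, not "always"
```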
You should definitely read up on the halting problem.
Yes, it can, that's what the whole formally verified software branch is about. Of course, even with formally verified software, there can be failures caused by external factors such as faults in hardware etc. But a piece of software itself absolutely can be proved to be correct (i.e. bug-free). It just takes special approaches, such as programming languages amenable to formal proofs (e.g. Idris).
Software is usually written in bug-inducing ways and using languages that hinder formal verification not because provably bug-free software is impossible but because it is typically too costly and difficult to write.
Of course this still leaves space for bugs outside the program itself which still may influence it, such as bugs in the operating system or firmware or the like. Which may be prevented by having a formally verified OS too :)
But even in such a case, there might still be hardware errors (electrical noise, for example). Which is why, for example, spacecraft or critical industrial machinery double or triple their whole architectures. In such a case the formally verified software runs on e.g. three instances simultaneously, and these instances cross-check each other's results all the time. When a hardware fault occurs (a bit randomly flipped in memory, for example), they find out the results don't match and re-calculate.
That's as close as you can get to zero bugs.
Sure you can: you can prove software correct through exhaustive testing. As noted previously it only works for small input spaces and simple programs (functions, really), but it does work: you can prove that a boolean xor is correct by enumerating its 4 inputs and checking that all of them produce the expected output.
This method can be applied to input spaces up to about 40 bits or so: https://randomascii.wordpress.com/2014/01/27/theres-only-fou...
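The xor case is small enough to write out in full; here is a sketch (in Python rather than any particular contract language) of that proof-by-enumeration:

```python
# Exhaustive testing as proof: xor has only 4 possible inputs, so checking
# every one of them against the specification is a complete correctness proof.

def xor(a: bool, b: bool) -> bool:
    return a != b  # implementation under test

# The full specification, written out as a truth table
spec = {(False, False): False, (False, True): True,
        (True, False): True, (True, True): False}

assert all(xor(a, b) == out for (a, b), out in spec.items())
print("xor proven correct on all 4 inputs")
```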
What an unexpected coincidence, so am I!
> It's unlikely that anyone cares if a Hello World program is bug-free.
Did you consider reading the link I provided at any point?
You can't "stick by your assertion" then provide something which has almost no relation to your original assertion, that's called "moving the goalposts".
This moving of goalposts is even more asinine when you're moving them exactly where the comment which you originally decried had put them.
Reminder: here is what you originally felt you needed to note was wrong:
> Software can not be proved bug-free by tests (and even that assertion is not completely true, you can prove software through exhaustive testing if it's very very simple).
No, it can't be done with testing even for practical everyday programs, precisely because you never know how much your program is crossing into the "abstract general case" and which parts of it are more general than you think. Very often the cause of bugs is precisely this - that some part of a program behaves in a more general / less restricted way than the programmer thought.
It can be done with formal verification, but that's a whole other story.
What this means in this context, for practical programs, ones you are "likely to write as an SDE", is that it is in general impossible to know in advance whether your program has bugs or not.
So, the notion that a program can be proven bug free "if you write enough tests" is kinda funny for programs of even moderate complexity, ones that you are likely to write as an SDE.
Aside: This "how does it matter in the real world" attitude reeks of anti-intellectualism and annoys me to no end. It does, you just don't know how.
The mere existence of programs you can’t determine the halting status of doesn’t necessarily imply anything about the much, much smaller subset of programs we’d want to write for practical reasons — eg, managing my bank account.
It’s also very strange to quote things like “if you write enough tests”, when in fact I didn’t say anything like that.
Rather than address how a theoretical result connects to the real world, you assume without any thought that the kind of problematic cases which exist in theory actually relate to what we do in practice.
Show me the actual connection, or admit the halting problem isn’t really a concern in practice: where in the course of my life as an SDE does it actually cause problems, because in my experience, I work in the subset of programs you can reason about.
Many of the posters here make the same fundamental mistake: the existence of programs we can’t reason about doesn’t mean that there are no programs we can reason about — and in practice, we encounter the ones we can’t extremely rarely.
But were it not for the halting problem, we could automatically prove programs to be correct! So it affects you all the time: Because of the pesky halting problem, making sure that your program does what it should is really really hard instead of being plain automatic.
I think the point you don't get is that the halting problem is not just "you cannot prove that a program halts"; it extends to "you cannot prove that a program does what it should do, period". And this is the reason why we need exhaustive tests on one side and elaborate verification methods on the other, both still never giving 100% confidence.
> because in my experience, I work in the subset of programs you can reason about.
You can reason about them, but you cannot guarantee that they are bug-free.
You ask me for a program "where the halting problem matters", and I say "pretty much any program you can find". On the contrary even, I ask you to provide me with a non-trivial program (i.e. one which is not just an academic exercise) that has been proven bug-free, through testing or other methods.
But we actually can do this for a huge subset of programs, as long as we're okay with false negatives, i.e. programs which are correct but that we can't prove using our system.
Precisely what I’m trying to call out is your mistake here: the existence of programs (in theory) which we can’t analyze (eg because of the halting problem) doesn’t imply anything at all about the programs we’re likely to encounter in practice. Those programs live in the subset of programs for which automatic reasoning does work.
The correct interpretation of “it’s not possible to prove all programs” is “there exists some programs we can’t prove correctness of”, not “there are no programs we can prove the correctness of”.
Show me that the programs we find useful overlap with the ones we can’t automatically prove — because in my experience, things like the halting problem are an abstract concern, and the programs we want to verify live solidly in the subset of programs we can automatically reason about.
For any program we’re likely to encounter in practice, we’re in the subset of programs we can automatically prove correct, if we spent the effort to do so. (And I have done so before.)
My contention is that the overlap between useful smart contracts and ones which demonstrate those theoretical problems is basically nil.
Let's start with tests: if you don't write a test for every single possible case, you will not know whether there is an edge case in your program that you did not catch. Unfortunately, the number of cases for any program that isn't entirely trivial (and thus mostly useless) grows so enormously quickly that you simply cannot write an exhaustive set of tests.
So instead, you have to classify your inputs and test "representative" inputs for each of those classes. However, reasoning out a perfect classification, i.e. one where each input you give in your tests behaves the same (for a reasonable definition of "the same") as any other input in its class, requires proving non-trivial properties about the program. And that, the halting problem tells us, is impossible in general as well.
In simpler words: In even relatively simple programs, there are more tests to write than available lifetime, but theory also tells us that we can't know how to reduce the test cases to only the interesting ones.
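The "representative inputs" pitfall can be made concrete with a made-up example (not from the thread): a classification of inputs that looks perfect but hides one input that behaves unlike everything else in its class.

```python
# A 32-bit abs() whose obvious input classification {negative, zero, positive}
# looks exhaustive, yet the class "negative" is not actually uniform: one
# value, -2**31, overflows. Testing representatives never reveals it.

def abs32(x: int) -> int:
    # abs() as it would behave on a wrapping 32-bit machine
    r = -x if x < 0 else x
    return ((r + 2**31) % 2**32) - 2**31  # simulate 32-bit two's-complement wrap

# Representatives drawn from a plausible-looking partition of the inputs:
representatives = {"negative": -7, "zero": 0, "positive": 42}
assert all(abs32(v) == abs(v) for v in representatives.values())  # all pass!

print(abs32(-2**31))  # → -2147483648: still negative, the classification lied
```

Proving that the three classes really do cover all behaviours is exactly the non-trivial property the comment above says we can't establish in general.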
I think this is the wrong approach. Ethereum and most of the altcoins are not tools aimed at solving a problem. They are gambling machines. What matters for them is marketing, a charismatic leader, and a wide audience (developers).
Ethereum has all of these.
Is it? I know that some people intend the code to be law, but I suspect if/when this butts up against the real legal system, we'll see a variety of different interpretations.
Just because someone left their window open doesn't mean you get to take all their stuff.
// User should only receive positive deposit from this code
But someone finds a vulnerability to withdraw instead of deposit. The code allows it, but the comment says that shouldn't be allowed. Did they act in bad faith, with fraud in mind?
And to add on to that, could you consider function names as part of the documentation/proof? E.g. deposit()
And to add on to that, what about the actual code: if a reasonably prudent person were to read the code and assume it was made to deposit only, is that sufficient?
I would assume a judge would say yes.
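As a toy illustration of the scenario being debated (hypothetical code invented here, not any real contract): the comment and the function name promise deposits only, but nothing in the code enforces a positive amount.

```python
# Minimal sketch of a "deposit-only" function whose code permits withdrawal.

class Account:
    def __init__(self) -> None:
        self.balance = 0

    def deposit(self, amount: int) -> None:
        # User should only receive positive deposit from this code
        self.balance += amount  # bug: the sign of amount is never checked

acct = Account()
acct.deposit(100)
acct.deposit(-100)   # a "deposit" that is actually a withdrawal
print(acct.balance)  # → 0: the code allowed what the comment forbade
```

Whether exploiting the missing `amount > 0` check is fraud or fair play under "the code is the contract" is precisely the question the thread is arguing about.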
Also, the Ethereum blockchain has only the compiled version of the smart contracts that use a bytecode for a simple virtual machine. So the source code (and the comments) disappear before entering the blockchain.
[I guess you can add some comments in some dead code that is not optimized away during compilation, making the smart contract burn a small amount of additional gas each time it runs. My guess is that the current compilation just strips the comments.]
For example, there is $250,000,000 worth of Ether in the MakerDAO contract, collateralizing around $70,000,000 in DAI, a decentralized stablecoin that has maintained its peg to the dollar during a 90% price drop of the underlying asset, Ether.
This blog post reads like someone in 1992 using some obscure hypothetical FTP server bug to argue that secure computer networks are by definition impossible.
You could say the same about many hacks that involve money and assets. Judge, the software let me in! That was the original intention of the programmers!
To get away with something like this you're going to have to convince a judge or jury that exploiting a bug in a software program entitles you to a huge payout. I suspect this might fall under unjust enrichment.
The difference being that ethereum smart contracts are supposed to be autonomous, hence "The Code is the Contract".
If you need or rely on outside parties then why not just have a standard contract?
That's probably not the universal take. My take is there is a place for automated smart contracts, and a place for traditional contracts. For instance, you can hardly prove beyond any doubt that a package was delivered successfully so that payment can be disbursed. Too much room for fraud there. But for many other digital services I think it would work out fine. AFAIU it's been working quite well for microgrid projects.
(Specifically smart contracts and ethereum as a whole, not really the DAO. Don't know much about that)
A centralized system works fine for those situations though. Adding Eth into the mix doesn't improve anything in this scenario.
I'm not sold one way or the other yet, but the amount of trust some institutions have taken on and abused has been a bit ridiculous the past few years (and emerged unscathed themselves while their clients were left to deal with it).
Anecdotally, centralized institutions have broken many peoples’ trust in the past. It’s starting to look like a bit of a wash to me—not that I’m racing to close my accounts. Just the same, I’m not ready to write off an early tech that is still being developed and experimented on.
I don't need to convince anyone; it is a self-evident truth that cryptocurrency is wholly impractical for displacing centralized institutions. It's self-evident because it hasn't happened and there isn't even a whiff of a possibility that it could. Cryptocurrencies are a strange and interesting technical novelty, but they are closer to a video game than anything resembling a challenge to currently centralized institutions.
> centralized institutions have broken many peoples’ trust in the past
You keep repeating that but it's irrelevant. Whether or not centralized institutions are breaking people's trust, cryptocurrency is obviously not a solution to the problem.
> not that I’m racing to close my accounts
Of course you're not because cryptocurrency is obviously not an alternative to a bank account in the same way that a drone is not an alternative to a car.
> Just the same, I’m not ready to write off an early tech that is still being developed and experimented on.
I'm not telling you to "write off" anything, what does that even mean? If you want to close all your bank accounts and meet up with people in the streets to trade cryptocurrency tokens in order to manage typical financial obligations then that is your prerogative; it doesn't mean that such a lifestyle has any appreciable impact on the existence of centralized institutions.
It seems to stem from one strain of thought surrounding the tech, implied by your use of "cryptocurrency" as the new descriptor and not focusing on the concept of smart contracts.
For the record, I never suggested trading cryptocurrencies as a medium for barter and exchange was a practical or desirable idea.
One viable use, currently implemented and being tested, is the use of smart contracts as an immutable record for tracking grants and other funding provided to organizations by the National Research Council of Canada:
It's not as self-evident to me as it might be to you—that's not really an explanation. You're quite aggressive in your disdain for this specific technology—I'm kind of baffled.
Pulling this quote directly from the page you linked:
> This technology offers unprecedented levels of transparency and trust allowing public records to be searched, verified and audited at a level the world hasn’t seen before.
This is just false. What was not possible before? The page has no details, just the breathless hype that is typical of cryptocurrency-related projects. Please offer up an explanation of how blockchain enables "public records to be searched, verified and audited at a level the world hasn’t seen before".
> This is just false. What was not possible before? The page has no details, just the breathless hype that is typical of cryptocurrency-related projects. Please offer up an explanation of how blockchain enables "public records to be searched, verified and audited at a level the world hasn’t seen before".
You haven't explained how it's false.
It makes the public records easier to access than they previously were, and immutable. Once published, neither the council nor any new government can wipe the records for any reason without either a concerted effort to attack the public chain and cause a fork that becomes mainstream, or an attempt to eradicate the network entirely. There is much less gatekeeping now than there previously was: one doesn't have to be technical, nor do much searching, to find these records. (They actually came in handy somewhat recently in discussions about TunnelBear and their funding ploys.) And as mentioned, it's an experiment. The experiment is part of the Open Government project aimed at increasing transparency to the public.
You have not demonstrated how this is so. Putting the records up on S3 is sufficient for the purpose of access.
Immutability has no practical benefits that were not already possible using cryptographic hashing and signatures. If you disagree, please explain.
> Once published, neither the council nor any new government can wipe the records for any reason
This is already solved by the inherent decentralization of the internet. This inherent property of the internet is so pervasive that it is actually a serious problem for situations like "right to be forgotten" and revenge porn.
> The experiment is part of the Open Government project aimed at increasing transparency to the public
You don't need smart contracts for this, if a government is willing to be open the problem is already solved, smart contracts don't add anything to the mix.
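For concreteness, the hashing-and-signatures point made above can be sketched in a few lines (the record content here is made up for illustration): publishing a plain SHA-256 digest alongside a record already makes later tampering detectable, no blockchain required.

```python
# Tamper-evidence with a plain cryptographic hash: publish the record and
# its SHA-256 digest; any later mutation fails verification against the
# published digest. (A signature over the digest would add authenticity.)
import hashlib

record = b'{"grantee": "Example Org", "amount": 125000}'  # hypothetical record
digest = hashlib.sha256(record).hexdigest()  # published alongside the record

# Any mutation of the record is detectable against the published digest:
tampered = record.replace(b"125000", b"925000")
print(hashlib.sha256(tampered).hexdigest() == digest)  # → False
```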
That's a matter of choice, isn't it? I just demonstrated how easily accessible it was. Just because there's another method available, doesn't mean this one isn't valid.
> Immutability has no practical benefits that were not already possible using cryptographic hashing and signatures. If you disagree, please explain.
That's exactly what the smart contract and blockchain system used does. It's just another vehicle that functions in a different way than just signing the files and uploading them to a server somewhere.
> This is already solved by the inherent decentralization of the internet. This inherent property of the internet is so pervasive that it is actually a serious problem for situations like "right to be forgotten" and revenge porn.
No. That ultimately relies on the proactive efforts of others making and serving unadulterated copies of the data in question. In the form the council is experimenting with, no additional conscious action is required. The copies are made perpetually as long as the network exists. I consider that a boon for data like this.
> You don't need smart contracts for this, if a government is willing to be open the problem is already solved, smart contracts don't add anything to the mix.
You say that they're not needed. The role of the National Research Council is to research and experiment in all manner of areas, including new technologies. That is what they're doing here. It's not a question of "was it ever needed in any form"; it's a question of: is it an improvement? Do we see benefits or detriments? Are the results net positive or net negative? And what next?
There is one way to empirically answer that question: experiment, gather data, and draw analyses and conclusions.
You continually ask me to explain myself, and I have—you've however yet to explain your assertions.
Traditional software makes no such claim.
but yeah, basically.
Simplicity is a typed, combinator-based, functional language without loops and recursion, designed to be used for crypto-currencies and blockchain applications. It aims to improve upon existing crypto-currency languages, such as Bitcoin Script and Ethereum’s EVM, while avoiding some of the problems they face. Simplicity comes with formal denotational semantics defined in Coq, a popular, general purpose software proof assistant. Simplicity also includes operational semantics that are defined with an abstract machine that we call the Bit Machine. The Bit Machine is used as a tool for measuring the computational space and time resources needed to evaluate Simplicity programs. Owing to its Turing incompleteness, Simplicity is amenable to static analysis that can be used to derive upper bounds on the computational resources needed, prior to execution. While Turing incomplete, Simplicity can express any finitary function, which we believe is enough to build useful “smart contracts” for blockchain applications.
The idea being that we're at least a few years away from the language and infrastructure being mature enough to have value for most users beyond the earliest of adopters. The other languages you've mentioned are much further along at what they do than any "dApp" platform. Decentralized application platforms like Ethereum are a new space with new requirements and, as far as I can tell, necessitate a new language to satisfy those unique requirements. The hot take is that Solidity is probably going to need some big changes to make it much further. I've not been impressed with it.
> we're at least a few years away from the language and infrastructure being mature enough to have value for most users beyond the earliest of adopters
There isn't really a clear explanation of why that would be the case. The bottom line is that smart contracts require a trusted oracle to feed data into the system; at that point, the advantages over a centralized system are essentially zero. There is no reason why anyone except enthusiasts, tinkerers and criminals would get involved.
There's a huge difference between something being necessary and something being useful. I'm in no way an evangelist, but I can at least imagine some interesting applications of blockchain.
You can make transactions that would normally be blocked by your country's legal system or zealous traditional payment processors.
Satoshi left out Turing-completeness from Bitcoin for this exact reason.
Posting stale criticism (3 years old) about a technology that is still evolving at this time strikes me as FUD.
Can you do that with ethereum?
So a TAM of money launderers and people without money. Do you see why these "use cases" are laughable?
You didn't replace the bank. You just created a worse version of it that works for a small segment of the population. A segment which happens to be highly uneconomical to service.
There are still many unbanked people in the world. I don’t think it’s a laughable use case to want to bring those that are too poor to participate in the banking system into the modern era. See Humaniq for an example of this.
I don't either. But something like M-PESA, which works (a) without a bank, (b) on cheap hardware and (c) without requiring great technical knowledge to avoid your coins getting stolen is a better fit for that problem.
The regions with widespread poverty aren't going to be enthusiastic about the fees, and wild instability that comes with crypto database tokens, Ethereum and so on.
The real use case is obviously money laundering, but everyone wants to try to spin the marketing into unrealistic areas.
Coinbase, which recently raised $75 million in seed funding from prominent financial institutions such as the New York Stock Exchange, said one of the four biggest advantages of bitcoin is that it is "immune to country-specific sanctions," citing Russia as an example, in a PowerPoint presentation put together for investors by the company late last year.
You can't pretend cryptocurrency is made to bypass only unjust laws and regulations.
There's a reason ransomware was not feasible with cash at the scale it's happening today.
All of those were done before cryptocurrency even existed. It's not enabling those things if those things are clearly possible without cryptocurrency.
Were there even cases of demanding payment for ransomware before Bitcoin?
Ransomware and international money laundering are easier with Bitcoin, Ethereum, and Monero.
It's arguably their only real-world use case, due to the high overhead cost, risk, and instability of transacting through those systems versus "traditional" fiat-based transfer systems.