
I think you’re moving the goalposts. As I mentioned in my original post, smart contracts allow for the creation of a global computing platform we call a blockchain. Maybe you want formal verification, but not everyone cares about that.



What? You're the one with the "output is guaranteed to be correct" line. A smart contract is just code.


Yes, the output is guaranteed to be correct. Let’s say you give me a program and ask me to run it for you. Can you guarantee that I won’t tamper with the executable? No, you can’t. You can’t trust the output of an arbitrary program you give me to run. The only way you could trust my result would be to run the program yourself and verify I provided you the correct output. If you run the program yourself, then what’s the point of paying me to run your code? With a smart contract, you can trust the output. That’s the difference.
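(A toy sketch in Python, not EVM code, with made-up names: the trust comes from the contract being deterministic and from every validating node re-executing it, so a claimed output that disagrees with re-execution is simply rejected.)

    # Toy sketch, not a real blockchain: a deterministic "contract" plus the
    # re-execution check that every validating node effectively performs.

    def contract(state, tx):
        # deterministic state transition (placeholder logic)
        return state + tx

    def node_accepts(program, state, tx, claimed_output):
        # A node doesn't trust the executor's claim; it re-runs the program
        # and rejects any result that disagrees with its own execution.
        return program(state, tx) == claimed_output

    print(node_accepts(contract, 10, 5, 15))  # True  -> honest result accepted
    print(node_accepts(contract, 10, 5, 99))  # False -> tampered result rejected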

This brings me back to my main point. You clearly have not implemented a smart contract and have misconceptions about what it does. I suggest you implement one first. You can still think the idea is stupid afterwards. That’s fine. But right now, it seems like you are arguing about how you think the blockchain works, which doesn’t necessarily match how it actually works.


If your point is that the EVM byte-code that's deployed to the blockchain is immutable and that transactions which don't agree with the results of executing code will be rejected, yes, that is a true statement.

I think what everyone is quibbling with is the assertion that this is the same as "correctness", which is (by and large) considered a different property that relates user expectations to actual runtime behavior.


Sorry but that’s a total straw man. Yes, if you redefine the words I used then you can make any argument you want.

In CS, correctness means “behaves in a consistent manner with respect to a specification”. The smart contract is the specification, so yes, the output of the smart contract is guaranteed to be correct with respect to the specification. Of course, there can always be errors in the specification. If you have another definition of “correct”, let me know.


I'm not the one redefining words here: you've redefined "code" as "specification".

Canonically, within the industrial context, there are two processes for achieving compliance of implementation with specification ("correctness"). The first is testing, the second is formal verification. In either case, you need an artifact at a higher level of abstraction (the specification) and one at a lower level (the implementation).

In testing, someone who has access to the specification can evaluate the dynamic behavior of the implementation and determine whether it matches. In formal verification, you can do the same by static analysis and proof.
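To make that concrete, here's a throwaway Python illustration (the function names and the property are made up): the specification is stated independently of the code, and testing checks the code's dynamic behavior against it.

    # Hypothetical example: the specification is written down separately
    # from the implementation it judges.

    def withdraw(balance, amount):
        # implementation under test (deliberately missing a check)
        return balance - amount

    def spec_holds(balance, amount):
        # specification: a withdrawal must never drive the balance negative
        return withdraw(balance, amount) >= 0

    print(spec_holds(100, 30))   # True  -> behavior matches the spec
    print(spec_holds(100, 130))  # False -> incorrect with respect to the spec,
                                 # even though the code ran exactly as written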

If you say "the code is the specification" then correctness becomes completely vacuous. The code is correct because it runs. Great. So what? If the implementation results in behavior that is completely unintended by the author (e.g. your smart contract contains a re-entrancy problem: a notorious source of bugs, or isn't robust to EVM stack exhaustion) and you still want to claim it's "correct" then I just don't know what to suggest - we're never going to agree, and I don't think your usage is anything like standard.


Ok fair. But you can formally verify your smart contracts or you can test them, so this notion of "correctness" seems orthogonal to the discussion. Even if you formally verify a normal application, you can't trust your verification once you give it to me to execute.


The name of the thing I think you're looking for is verifiable computing.

https://en.wikipedia.org/wiki/Verifiable_computing


I think there's one other aspect here that needs to be considered, which is what happens when undocumented features appear, as with the Polkadot fiasco, among others, where bugs in the shared wallets used by many, plus some third-party actions, led to hundreds of millions of dollars being completely inaccessible to all (ineptness or malice, it is not clear).

The contracts were verifiable and correct enough, it seems, but some of the infrastructure turned out to be faulty, leading to a platform that was not trustworthy.


Are smart contracts Turing complete?


Yes


Then almost by definition I can't really trust your smart contract, except with airtight formal verification, which nobody really does, and I highly doubt smart contract programming languages are designed as formal verification languages.

So this:

> Yes, the output is guaranteed to be correct.

is provably wrong, and either you know it and are being malicious, or you don't know it and this discussion is useless.


I think we're talking about two different things. Imagine you have a generic program A that you've formally verified. Now you ask me to execute it for you. I say: "ok, the output of the program is X." Can you trust the output? No, you cannot trust the output even though you've formally verified the program. If you formally verify the smart contract, you can trust the output.





