Hacker News

Actually it's a bit more like 42.



How is showing that two representations are fundamentally different because they objectively and radically differ in computational complexity open to interpretation?


It shouldn't be.

I'm skeptical of your parsing argument because the normal way to construct type theory terms is inductively in their "native" tree form, which is cheap. Looking at them as a formal language isn't so convenient.

As a PLer, I hope dependently typed languages see wide use not because they are the one true foundation of everything, but because they are the richest lingua franca for every academic discipline to represent their ideas. I'm personally enamored of such theories as a foundation for math, but I don't know much about the theory of computation, and I'm fine if others just see them as a tool.

Anyway, once that happens and fields have distilled their intentions into precise models in the lingua franca, perhaps the walls between fields can be broken down and we can finally grok each other's concepts.


> their "native" tree form, which is cheap

It is not cheap at all: type checking can be arbitrarily hard, depending on the type system, and can perform arbitrary computation at the "validation" stage. This is real, significant (and useful, depending on circumstance) computational work that no formalism or representation can reduce (only hide, by pushing it over to the collaborator).
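To make that concrete (my own sketch, not from the thread): in a language with type-level computation, merely accepting a term forces the checker to run a program. A minimal Haskell example using type families, where checking `append`'s signature requires the checker to normalize the recursive `Add` family; richer type-level programs make that normalization arbitrarily expensive:

```haskell
{-# LANGUAGE DataKinds, TypeFamilies, GADTs, KindSignatures #-}

-- Peano naturals, promoted to the type level.
data Nat = Z | S Nat

-- Addition as a type family: the checker must *run* this
-- recursion to decide whether a term is well typed.
type family Add (m :: Nat) (n :: Nat) :: Nat where
  Add 'Z     n = n
  Add ('S m) n = 'S (Add m n)

-- Length-indexed vectors.
data Vec (n :: Nat) a where
  Nil  :: Vec 'Z a
  Cons :: a -> Vec n a -> Vec ('S n) a

-- Accepting this signature forces the checker to compute Add m n.
append :: Vec m a -> Vec n a -> Vec (Add m n) a
append Nil         ys = ys
append (Cons x xs) ys = Cons x (append xs ys)

-- Forget the index, for observing results at runtime.
toList :: Vec n a -> [a]
toList Nil         = []
toList (Cons x xs) = x : toList xs
```

The runtime behavior is just list append; the point is the computation the checker performs before the program ever runs.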

> but because they are the richest lingua franca for every academic discipline to represent their ideas

I disagree. I strongly recommend you (and everyone) read this terrific essay by Lamport on a lingua franca for algorithms: http://research.microsoft.com/en-us/um/people/lamport/pubs/s...

Not only is TLA just as powerful as dependent types, it is far simpler[1]. Lamport says that the reason PLers aren't interested in TLA+ is because it's so simple, so there's not much to write about it. You see plenty of papers about embedding all kinds of things (separation logic, cost models, concurrency, etc.) in dependent types, things that are so trivial in TLA+ that they're barely worth a mention. Ideas from Hoare logic, behavioral refinement, and the differences between notions of program equivalence from process calculi (trace equivalence vs. bisimulation) simply collapse into simple, familiar logic in TLA (the only thing missing is probabilistic reasoning).

Lamport wrote about his experience trying to publish a paper showing that specifying real-time systems with TLA is trivial (it's a matter of defining a time variable): http://research.microsoft.com/en-us/um/people/lamport/pubs/p... You can only write a few papers on a simple approach that solves many problems, but lots and lots of papers that show how to use complicated approaches to solve them.
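The time-variable idea can be sketched in TLA-style notation (my paraphrase of the approach, not Lamport's exact formulas): add a variable now whose value only increases, interleaved with the algorithm's steps, and timing bounds become ordinary assertions about now.

```latex
% Sketch: a real-time clock as just another variable.
% Tick steps advance now; they leave the algorithm's
% variables (vars) unchanged.
RTnow \;\triangleq\; (now \in \mathbb{R}) \;\wedge\;
  \Box\bigl[\, now' \in \{\, r \in \mathbb{R} : r > now \,\}
      \;\wedge\; \mathrm{UNCHANGED}\ vars \,\bigr]_{now}
```

A deadline such as "action A fires within \Delta of becoming enabled" is then an ordinary assertion relating now to a timer variable, with no new semantic machinery.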

A lingua franca must be simple. TLA+ uses notation and concepts that are familiar to all mathematicians and all computer scientists, plus a couple of new concepts that can be learned in a few days. Dependent types (and intuitionistic logic) are almost as arcane today as they were decades ago, virtually unknown outside the circles of logic and PLT, each of which is itself a particularly isolated sub-discipline within math and computer science. That so many papers are written about dependent types is strong evidence that they cannot serve as the lingua franca, and pretty conclusive proof that they cannot serve as one just yet.

That PLers, as Lamport writes in his comment on my post, fail to see that Plotkin's SOS is an abstract state machine, and that some fail to see that computing a set-theory function and computing a type-theory function are two different computational problems with radically different complexity, is further evidence that language-based models obscure rather than reveal.

Of course, as Lamport also says, real-world programming is very complicated, and so programming languages are justifiably complex (and whether dependent types can help with that remains to be seen[2]). But that complexity is absolutely not required for a lingua franca with clear and simple semantics (TLA's semantics are far simpler than any programming language's) for specifying and analyzing algorithms.

Disclosure: I've contributed some code to the TLA+ project.

[1]: Not only do properties ("types") and algorithms share the same terms in TLA+, they are the same objects. This confuses some people into believing that TLA+ doesn't support higher-order algorithms, when, in fact, it becomes a non-issue. Instead of a parameter of a certain type, you have a concrete program that is the type (e.g., the program that nondeterministically returns, say, any even integer).
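A sketch of that footnote's example in TLA-style notation (my rendering, with hypothetical names): Even is the maximally nondeterministic program whose output x is always an even integer, and "has type Even" is not a typing judgment but ordinary implication (refinement).

```latex
% "Even" as a program/property: x is even initially and
% after every step.
Even \;\triangleq\; (\exists n \in \mathbb{Z} : x = 2n) \;\wedge\;
  \Box\bigl[\, \exists n \in \mathbb{Z} : x' = 2n \,\bigr]_{x}
\\[6pt]
% "M has type Even" is just implication (refinement):
M \;\Rightarrow\; Even
```

So a "higher-order" algorithm parameterized by an even-integer producer is simply an algorithm composed with any M satisfying M => Even.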

[2]: I'm skeptical because I believe we're close to the "Brooks limit": http://blog.paralleluniverse.co/2016/07/23/correctness-and-c...


A couple of clear, encouraging articles about how to use TLA and the benefits it provides would pay dividends (to both you and your readers) orders of magnitude greater than this obscure, roundabout, oblique approach.

> This confuses some people into believing that TLA+ doesn't support higher-order algorithms, when, in fact, it becomes a non-issue.

An article explaining this would be astonishingly interesting (and probably clarify this whole discussion).


> That PLers, as Lamport writes in his comment on my post, fail to see that Plotkin's SOS is an abstract state machine

Is this even true? I don't think I've ever met a PLer who didn't know that an operational semantics describes an abstract machine.


I am curious what you think of math.andrej.com/2016/08/30/formal-proofs-are-not-just-deduction-steps/


He mostly talks about proof assistants for general mathematical theorems, something I don't use. In TLA+, proofs are of logical formulas, the proof steps are themselves logical formulas (unlike in dependently typed provers), and a computation is just a (temporal) logical formula, so TLA+ already lets you incorporate computations into proofs, though the process isn't automatic. Then again, TLA+ isn't really designed or used as a general theorem prover (though it certainly can prove general mathematical theorems) but as a tool to specify and verify algorithms.



