For ClojureScript there really isn't any performance optimization to be done, as far as I know. asm.js doesn't really help out high-level functional languages like ClojureScript.
> Clojure codebases typically rely on tests to verify the correctness of the code.
OT, but this is a pet peeve of mine. Tests aren't verification, and they cannot demonstrate correctness (without being exhaustive). Clojure codebases typically rely on tests to mitigate the risk of runtime errors.
Agreed, but it's important to note that there's very little that can demonstrate correctness. Most type systems by themselves don't; as Pierce describes it in TAPL, "[a] type system is a syntactic method for automatically checking the absence of certain erroneous behaviors" (emphasis mine). It's possible that, in a dependent type system, those certain behaviours will be sufficiently many that you can prove that a program meets its specification, but even a rich system like Haskell's usually doesn't offer such guarantees (just consider `tail`; or, for an example that's not just a Prelude wart, `take n xs`).
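To make those two examples concrete, here's a minimal Haskell sketch (`safeTail` is an illustrative name, not a Prelude function): both `tail` and `take` typecheck fine while still permitting surprising behaviour at runtime.

```haskell
-- `tail` typechecks but is partial: `tail []` throws at runtime,
-- so the type [a] -> [a] promises more than the function delivers.
-- A total alternative makes the empty case explicit in the type:
safeTail :: [a] -> Maybe [a]
safeTail []       = Nothing
safeTail (_ : xs) = Just xs

main :: IO ()
main = do
  print (safeTail ([] :: [Int]))  -- Nothing, instead of an exception
  -- `take n xs` is total but silently returns fewer than n elements
  -- when the list is short; again, the type says nothing about it.
  print (take 5 [1, 2 :: Int])    -- [1,2]
```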
- first, the ClojureScript REPL whines at smelly coercions out of the box
cljs.user=> (+ 1 "2" nil)
WARNING: cljs.core/+, all arguments must be numbers, got [number string] instead. at line 1 <cljs repl>
WARNING: cljs.core/+, all arguments must be numbers, got [number clj-nil] instead. at line 1 <cljs repl>
- second, you can typecheck a codebase incrementally
Also, I really thought I'd see that quote about 100 functions operating on one data structure being easier to grok than 10 functions operating on 10 different data structures. Of course, it's nicer when that one data structure (or even the 10) has a schema instead of being void *.
In Common Lisp you can define pretty much any type (with deftype); you can define algebraic types and even value-dependent types. Once you add full type declarations to your code, you will get warnings for inconsistent and violated type declarations.
Aside from that, in Haskell the type system isn't usually a restriction. Experts in Haskell tend to use the type system to guide them towards a correct solution.
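As a small illustration of types guiding the implementation (the name `applyAll` is invented for this sketch): a sufficiently polymorphic signature leaves very few ways to write the function at all.

```haskell
-- Given only a function a -> b and a list of a, parametricity means
-- the only way to produce b's is to apply the function to the
-- elements; the type all but dictates the implementation of map.
applyAll :: (a -> b) -> [a] -> [b]
applyAll _ []       = []
applyAll f (x : xs) = f x : applyAll f xs

main :: IO ()
main = print (applyAll (* 2) [1, 2, 3 :: Int])  -- [2,4,6]
```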
I'm not sure it's possible to say that we couldn't have built something quicker in another language or stack. What I will say is that the underlying abstraction we built our system on (streams of events) is very well suited to a Haskell implementation, and I can say with some certainty that building a solution as "correct" as the one we built would have been extremely challenging. The code also held together for longer than has been my experience in some other languages and communities.
Yes, absolutely. It's amazing how overlooked this fact seems to be, by many. Languages aren't that hard to learn, so excluding the really awful ones (e.g. Java for anything other than code that has to be on the JVM and has to be fast) the "familiarity" aspect is minuscule. Programmers learn new languages quickly. What does matter is the community and the quality of programmer you'll attract based on the tools you use and the signals that your tools send about who makes decisions.
Dynamic langs have lots of good qualities to them, but every tool has a drawback. Tests, documentation, good team dynamic, etc. hedge against some of the disadvantages of a dynamic language. They aren't a panacea.
- Catching, at compile time, errors that would otherwise happen at runtime
- Occasionally, it can enable making your code mathematically, provably correct
- Maintainability and flexibility in the sense that the code is easier to read by future hires
- And the most important advantage of all: automatic refactorings. Without them, the code base rots because developers are afraid to refactor; doing it without errors in a dynamically typed language requires a lot of tests, which nobody really has. Even renaming a function cannot be done safely in a dynamically typed language; it requires the oversight of a human
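A hedged sketch of the first point in the list, using invented `Meters`/`Seconds` newtypes: wrapping plain numbers in distinct types turns an argument-order bug into a compile-time error rather than a runtime surprise.

```haskell
newtype Meters  = Meters  Double deriving (Show, Eq)
newtype Seconds = Seconds Double deriving (Show, Eq)

-- Both fields are just Doubles at runtime, but the compiler keeps
-- them apart at every call site.
speed :: Meters -> Seconds -> Double
speed (Meters d) (Seconds t) = d / t

main :: IO ()
main = do
  print (speed (Meters 100) (Seconds 10))  -- 10.0
  -- speed (Seconds 10) (Meters 100)       -- rejected at compile time:
  -- swapped arguments are a type error, not a silently wrong answer.
```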
I can't remember where, but I was just reading something yesterday to the effect that a solution that works most of the time is worse than a 'solution' that never works: at least you'll notice the latter quickly, whereas you might not notice the former until it's buried so deep in your code that you've forgotten the hidden assumptions it involves.
Incidentally, your question about the 10K-year clock (to which I think it is impossible to give an answer today even if one exists today, since I think clockmaking has not yet been practiced for 10K years!) reminds me of anecdote #3 in http://www.netfunny.com/rhf/jokes/91q3/oldanecd.html .
Besides, code in general only works most of the time. Static typing does not protect you from bugs.
But it does. It doesn't protect you from all bugs, but nobody claimed it did.
I think that there is an important distinction between kinds of 'almost-there' solutions.
Suppose that you have a piece of code that is specified to work in a certain way. Because code, and the hardware on which it runs, is made (at least indirectly) by humans, it will fail under some conditions; it only satisfies its specification. I think that we are all comfortable with this kind of "works most of the time".
By contrast, consider 'smart' products—for example, the auto-complete on your phone. This also, after a bit of training, works most of the time, and can make some brilliant inferences. However, I think that most people can agree that the "most of the time" for auto-completion is qualitatively different from the "most of the time" for specified software: it is reasonable to rely on the latter, but not, or at least not nearly as much, on the former (http://www.damnyouautocorrect.com).
> Because code, and the hardware on which it runs, is made (at least indirectly) by humans, it will fail under some conditions; it only satisfies its specification.
This sentence was supposed to end "it only satisfies its specification some of the time." I was not going for Knuth-ian irony about proving vs testing (https://en.wikiquote.org/wiki/Donald_Knuth#Sourced).
There's plenty of software written in both typing disciplines in the wild. Yet nobody has managed to conclusively demonstrate that software written in statically typed languages is produced faster and with fewer overall defects, or that it has lower maintenance costs. The very fact that we're still having these debates speaks volumes, in my opinion.
The majority of arguments regarding the benefits of static typing appear to be rooted squarely in anecdotal evidence.
I personally haven't found a place where this is necessary yet, but who knows what might happen in the future. :)
Of course, people are different. Some people do all of this on paper before writing a single line of code. Others plan by making types and interfaces, then proceed to implement those. I just like making prototypes.
Of these, I sound the most like you - think just a little, then sit down and start coding to get a feel for things.
Yet I find a good type system essential (or at least conspicuously missing when I write Python). There is a notion that, if one has static type checking, one has to get the types right before writing code. There's no reason that has to be the case. Get a sketch, start writing code, refine your sketch. With type inference, the sketch can even be somewhat incomplete and still help me find where my assumptions clash.
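For what it's worth, here's what that sketch-then-refine loop can look like in Haskell (all names invented): leave the signatures off, let inference fill them in, and the compiler cross-checks your assumptions at every use site.

```haskell
-- No signature yet: GHC infers `Read a => String -> a` here...
parseScore line = read line

-- ...and `Fractional a => [a] -> a` here.
average xs = sum xs / fromIntegral (length xs)

-- One annotation on the composition pins both sketches down to
-- Double; if the pieces disagreed, the clash would surface right here.
report :: [String] -> Double
report = average . map parseScore

main :: IO ()
main = print (report ["1.0", "2.0", "3.0"])  -- 2.0
```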
Nice duality you exposed right there!
Static typing requires discipline; it is no magic cure for bugs that will automatically make your programs correct, the same way that forcing everything into a class in Java does not automatically produce readable, modular, and reusable code; it's just restrictive and annoying.
The discipline required for static typing to be effective can also be practiced under dynamic typing; merely having a static type system does not mean the discipline will be developed. For a static type system not to be restrictive (like Java's or C++'s), it has to be flexible, i.e. it has to allow types of varying degrees of vagueness. If I am an inexperienced programmer with no discipline, nothing stops me from picking the vaguest type possible and making a complete mess. Type errors do not prevent all bugs; in fact they prevent only a minimal subset. Most bugs are logic errors involving an ill-thought-out algorithm, or data errors caused by invalid input from the user, which often can only be caught by dynamic checks at runtime.
I'll leave it at responding to the bit I do agree with, which is that many people have had poor experiences with poor checkers for poor type systems.