Type Wars (cleancoder.com)
59 points by ingve on May 1, 2016 | 59 comments



Honestly, I feel like the current is flowing in the opposite direction here. In the mid-to-late 2000s the most ascendant languages seemed to be dynamic ones: Ruby, Python, PHP, JavaScript, etc. Perhaps that's web solipsism, but those are the languages I remember having the highest visibility from that time.

Currently, the tide seems to be turning towards static typing (and stronger typing): Swift, Rust, Go, Scala, etc. Languages that are already statically typed seem to be borrowing ideas from their more strongly typed brethren: C++ gaining concepts (similar to typeclasses from Haskell), Java encouraging the use of Optional (similar to Maybe sum types) for nullable values, etc. Many dynamic languages seem to be providing type checking as an option: Clojure, Python with PEP 484, various JavaScript dialects and type-system add-ons, etc.

Tests are important and should supplement a static type system, since there are many classes of bugs a static type system can't prevent. But the benefits of static typing as opposed to TDD for verifying the same thing are obvious to me:

1. Tests are not proofs. Unless a test is exhaustive over the set of accepted inputs, it cannot prove that a piece of code works correctly. Types can prove that certain inputs can never be received by a piece of code, which is a stronger guarantee (a small sketch follows this list).

2. Tests are more work. Why write tests to verify assertions that a compiler can prove for me? In practice I find this means that unit tests attempting to verify correct behavior for incorrect inputs have gaps or are imperfect because of lack of time or lack of foresight.

3. Tests are still code. Test assertions can have bugs in the same way that regular code can have bugs. True enough, a type system is also code and can also have bugs, but the code in a type system is more visible and more attended to. In practice, I have found many bugs from flawed test assertions and none from a broken type system (as far as I know).
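To make point 1 above concrete, here is a minimal Haskell sketch (the function name firstScore is just an illustration): the argument type rules out the empty-list input entirely, so there is no "what if the list is empty?" test left to write for this function.

    import           Data.List.NonEmpty (NonEmpty (..))
    import qualified Data.List.NonEmpty as NE

    -- NonEmpty makes "empty list" an input this function can never receive,
    -- so the compiler rejects any call site that tries to pass one and no
    -- unit test for that case is needed.
    firstScore :: NonEmpty Int -> Int
    firstScore scores = NE.head scores

    main :: IO ()
    main = print (firstScore (42 :| [7, 13]))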


A countercurrent is the rise of data science, machine learning and statistical programming in industry. Dynamic languages such as R and Python are the tools of that trade, because the exploratory nature of the work puts a premium on speed and flexibility over correctness, and because datasets in the wild are loosely typed.


Good point, though I wouldn't say any of those languages represent new developments (Julia would be a good counterexample, though). My understanding is that many of the new probabilistic programming languages are statically typed (Hakaru, Figaro, BLOG, etc.), though I'm not an expert in this field, so I can't speculate on whether that represents a coming tide change or not.


Julia has the best of both with the Any type: you can write dynamic code and then tighten it up once you've got your system worked out.


>Types can prove that certain inputs can never be received by a piece of code, which is a stronger guarantee.

I'd amend this to say that sometimes the compiler can prove you're calling a function with or setting a variable to an incorrectly-typed input, but at runtime a strong type system by itself doesn't prevent receiving those inputs, only executing them.

It is possible to create a programming language with semantics such that the compiler will always catch incorrectly-typed inputs, but that's a separate but related concept from the type system.


>Tests are not proofs

Neither is static typing. It's a layer of protection, much like tests.

>Tests are more work

I don't think that's necessarily true. You still need to write tests with or without static typing. Those tests you need to write anyway will usually catch type errors in a dynamically typed language that a statically typed language picks up at compile time.

The question is whether you need to write more tests (probably you do...maybe 5% more), and whether the greater number of tests offsets the greater amount of code you need to write to statically type your code.

As far as I'm concerned, that's not an easily answered question.

>Tests are still code. Test assertions can have bugs in the same way that regular code can have bugs

Ditto for types.


>Tests are not proofs

Neither is static typing. It's a layer of protection, much like tests.

Proofs are exactly what the results of strong static type systems give you. They can't prove everything, but they do prove certain things in all possible cases.


If you are willing to adopt a strong enough type system, pretty much any interesting property you want to prove about a program is provable. It's just a matter of ergonomics: more complex type systems provide more power, but require investment in understanding, both conceptually and in modeling your problem.

You can also adopt automated verification techniques that can prove strong properties about a system automatically.


All true, up to a point at least. Strong, static type systems aren't universal wins with no drawbacks.

However, I think it's fair to say that even the "strongish, staticish" type systems in a lot of mainstream languages can still prove very useful properties that are often sources of bugs in the more dynamic languages. A good example would be not accidentally allowing null values to be passed around, as mentioned elsewhere in this discussion. And of course some of the less well known but still not uncommon languages, such as the popular functional programming choices or newer offerings like Rust, can do quite a lot more without their type systems becoming an unreasonable burden.


>They can't prove everything, but they do prove certain things

Much like tests then?


Tests actually prove very little. They probe the code with a few, isolated inputs out of a usually very large domain. Their nature is more stochastic.

Static type guarantees are actual proofs (barring a compiler bug), and they narrow down the domain that your tests have to probe.


> Neither is static typing. It's a layer of protection, much like tests.

Not true. The Curry-Howard correspondence shows the relationship between typed programs and proofs. I recommend you watch a Philip Wadler talk about this: https://www.youtube.com/watch?v=aeRVdYN6fE8

Note: This doesn't mean that type systems can prove _all assertions_ about a program, but type systems do indeed work as provers.

> I don't think that's necessarily true. You still need to write tests with or without static typing. Those tests you need to write anyway will catch type errors in a dynamically typed language that a statically typed language picks up at compile time.

This isn't necessarily true unless you test for all inputs (which tends to be infeasible in most cases). A type system can reject inputs by virtue of their lack of conformance to a type, which eliminates the need to test them.

> The question is whether you need to write more tests and whether the greater number of tests offsets the greater amount of code you need to write to statically type your code.

I don't have any evidence to support this (if someone does or has proof to the contrary, I'd love to see it referenced here), but I believe there is almost no overhead imposed by a static type system with type inference.

> Ditto for types.

I acknowledged this above; you just removed that part from the quote. In practice I've never had a bug with the type system (that I'm aware of), while I've seen hundreds of bugs from incorrect test assertions.


>Note: This doesn't mean that type systems can prove _all assertions_ about a program, but type systems do indeed work as provers.

Type systems can verify certain properties about code much like a test does, but that's far from being a mathematical proof of program correctness (especially since compilers have, you know, bugs).

>This isn't necessarily true unless you test for all inputs (which tends to be infeasible in most cases).

No, it's necessarily true. The vast majority of type errors I experience get picked up during TDD. A small minority reach production.

It's very easy these days to write tests that cover an enormous range of inputs and outputs (e.g. see quickcheck).

I'd estimate that maybe 5% of errors I experience in production are type related (in a dynamically typed language). That's offset against quicker development time (which also means ease of fixing the other 95%).

>I believe there is almost no overhead imposed by a static type system

I think that's wishful thinking.

>In practice I've never had a bug with the type system (that I'm aware of)

Which type system have you never had a bug with? I've dealt with several buggy, crappy type systems.

>In practice I've never had a bug with the type system

I've seen plenty of bugs caused by picking the wrong type.

>I've seen hundreds of bugs from incorrect test assertions.

So have I. Different types of bugs though. The kind which static typing doesn't help eliminate.


> No, it's necessarily true. The vast majority of type errors I experience get picked up during TDD. A small minority reach production.

I'm not trying to argue against your experience (or persuade you against it, for that matter). I'm only making the point that it is possible for you to have type errors that go uncaught by unit tests (unless you run those tests against all values).

> It's very easy these days to write tests that cover an enormous range of inputs and outputs (e.g. see quickcheck).

QuickCheck takes advantage of types to narrow the inputs for generative tests, whereas dynamic languages have to contend with any possible input.
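For what it's worth, here is a minimal QuickCheck sketch in Haskell (the property name is made up): the [Int] in the property's signature is exactly what tells QuickCheck which generator to use, so the test never has to defend itself against ill-typed inputs.

    import Test.QuickCheck

    -- The property's type ([Int] -> Bool) drives the generator: QuickCheck
    -- only ever produces well-typed lists of Ints.
    prop_reverseTwiceIsIdentity :: [Int] -> Bool
    prop_reverseTwiceIsIdentity xs = reverse (reverse xs) == xs

    main :: IO ()
    main = quickCheck prop_reverseTwiceIsIdentity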

>> I believe there is almost no overhead imposed by a static type system

> I think that's wishful thinking.

Here are a couple of ways I think you actually waste more time in dynamic languages:

1. Type checks. It sounds almost tautological, but it's true: any time you see someone using the type() function in Python, you're branching in code on something a compiler could take care of for you.

2. Redundant validation, specifically compared with a purely functional language that is statically typed. Since types encode facts about data and functions encode theorems, a statically typed language only requires one test to verify the assumptions in a theorem. In other words, if there is one function creating values of type Foo with an integer attribute "foo" and it ensures that "foo" is 0, I never need to check whether foo is 0 anywhere in production code; I only need to check that in a single test of the constructor. In practice I find this means I need to write far fewer tests in a statically typed language, and generally far fewer branches in code (see the sketch after this list).

3. Documentation comprehension. I can't count the number of times I've read a JavaScript library's documentation only to be thoroughly confused about what kinds of values can legally be passed to a function. Since the language provides no way to enforce this, it seems to encourage a culture of negligence about documenting which invariants the data is required to satisfy. Furthermore, I rarely get a descriptive exception telling me what the issue is; instead I get a type error about a missing attribute or something.

4. Boilerplate code. Fixed data schemas mean you can generate efficiently executing code to perform tasks like serialization, client libraries, etc. Take a look at servant (https://haskell-servant.github.io/). Because Haskell is statically typed, the type system makes it trivial to generate an HTTP server, a client, and a Swagger API docs page all from a couple of types. This can be done for certain things in dynamic languages, but since it requires introspection it will almost certainly be slower (and in some cases that may make it impractical to use).

5. Refactoring. Dynamic types are notoriously a pain for editors. Values can be changed ad hoc in ways that make it very difficult to change names without some smart regexes. Refactoring in statically typed languages is a breeze with a sufficiently smart editor/ide. Find and replace is generally two clicks away and is guaranteed to find all occurrences and replace them safely.

6. Condition checking. Pattern matching is not exclusive to statically typed languages, but it is much more common in them. Static type systems also allow for exhaustivity checks that a dynamically typed language cannot perform.
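To make point 2 concrete, here is a minimal Haskell sketch (the module and names are made up): the module exports the type but not its constructor, so the invariant is checked in exactly one place and never needs re-validating downstream.

    -- Exporting Discount but not its constructor means every Discount in the
    -- program was built by mkDiscount, so the 0-100 invariant holds everywhere
    -- and only the constructor itself needs a test.
    module Discount (Discount, mkDiscount, percent) where

    newtype Discount = Discount { percent :: Int }

    mkDiscount :: Int -> Maybe Discount
    mkDiscount p
      | p >= 0 && p <= 100 = Just (Discount p)
      | otherwise          = Nothing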

In general I think the things people claim are time consuming about statically typed languages are based on older languages that lack richer type-system features. Things like:

1. Omnipresent type annotations. In Haskell you almost never have to specify one (in fact you can get to nearly zero annotations if you turn off the monomorphism restriction). The most common use of type annotations is for function signatures, which I find are nice to have for documentation purposes anyway.

2. Inability to write generalizable code. A lot of people base this on experience with languages like Java, which is unfortunate because the state of the art is much farther ahead. There are many "dynamic feeling" generalizations you can get out of newer statically typed languages. For instance, you can easily build a "falsy" abstraction similar to Python's or JavaScript's with Haskell typeclasses (sketched below). Typeclasses, functors, etc. are all examples of statically typed features that allow you to write abstract code over disparate sets of types.
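As a rough sketch of that last point (the class and instance names are made up, not from any standard library), a Python/JavaScript-style notion of truthiness can be recovered with a plain typeclass:

    -- Each instance says what "empty or zero" means for its own type, so
    -- `falsy x` works across disparate types while staying fully checked.
    class Falsy a where
      falsy :: a -> Bool

    instance Falsy Int where
      falsy n = n == 0

    instance Falsy [a] where
      falsy = null

    instance Falsy (Maybe a) where
      falsy Nothing  = True
      falsy (Just _) = False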


>The pendulum is quickly swinging towards dynamic typing. Programmers are leaving the statically typed languages like C++, Java, and C# in favor of the dynamically typed languages like Ruby and Python.

Even though this article is recent, it reads like it's a good several years out of date. From where I sit, Ruby and Python are incumbents and the trend is moving toward typed languages of several varieties. From ML variants to Scala to the "streamlined functional-ish" languages like Swift, Kotlin, whatever C# is morphing into, to Typescript, and so on.

He presents TDD as a convenient way to escape the shackles of static-typing. I think many, myself included, would regard compiler-enforced type safety as a way to leverage a machine to ease the significant burden of writing and maintaining a huge test suite. Not to mention the productivity boost provided by the comparatively richer tooling that static typing enables.


> Why am I wasting time satisfying the type constraints of Java when my unit tests are already checking everything?

Couldn't you invert the question for Java and instead ask, why am I wasting my time chasing 100% test coverage when the type constraints of the language guarantee a certain degree of correctness?

I've always found that dynamic language projects require twice as many tests as a Java project to get a similarly reliable test suite.


Also, how does full unit (!) test coverage prevent a situation where unit A passes some type to unit B that unit B doesn't expect but happily runs with, but with different semantics?

Documentation of interfaces in dynamic languages almost always specifies the types of arguments and return values, and the code very often looks like this:

    def foo(arg)
      raise TypeError, "Integer argument expected" unless arg.kind_of?(Integer)
      # ...
    end
Yup. That saved me a lot of typing (no pun intended) compared to

    void foo(int arg) {
       // ...
    }
Reading (and writing) code in dynamic languages (including the tests!) always seems to me to disprove all of their purported benefits and underlines the need for a strong type system.


Yup, the right tool for the job. Type systems are really good at enforcing inter-unit contracts, while tests (especially property tests) are very good at making sure that the implementation is correct for at least a subset of inputs.


This reminds me of a talk from StrangeLoop 2012 by Paul Snively and Amanda Laucher "Types vs Tests". Well worth the watch.

https://www.infoq.com/presentations/Types-Tests


Exactly. Types are a class of tests, checked by the compiler at compile time.


Right. They are one class of tests that happens to be checked at compile time. But what about all the other tests that you still have to do? Which typically also coincidentally check the types (if I check that a value is > 2, I am also checking that its type is a number (or int or whatever your particular numeric tower or non-tower says)).

And what is the value of "compile time" if all this type checking makes the compiler so slow that just compiling is slower than compiling + running unit tests in a simpler language? What if it catches 10% of errors (optimistic estimate) but, due to constraints on reusability, makes my code 20% bigger (conservative estimate)? That likely means around 10% more bugs.

Don't get me wrong: I like static typing (to some degree). I just get antsy when it gets oversold.


Don't underestimate the productivity boost provided by the little squiggly red line. The fact that your tooling will stop you from ever passing T when the expected parameter is Collection<T>, and that this "rescue" involves pretty much zero cognitive overhead, is invaluable. When I work in a statically typed language, I find that I pretty much never make "head-desk" type errors, and despair at their reappearance when writing... javascript.


Hmm...much of the disconnect might be that when people think "dynamically typed language", they think JavaScript and Python and Ruby.

I also don't make these mistakes while writing Smalltalk.


What feature of Smalltalk, but absent in Python/Ruby/JS, prevents these mistakes?


> But what about all the other tests that you still have to do?

You should do those too.

> Which typically also coincidentally check the types (if I check that a value is > 2, I am also checking that its type is number

    ~$ python -c 'print "foo" > 2'
    True
Now, on the bright side, Python 3 fixes that one:

    ~$ python3 -c 'print(2 < "foo")'
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
    TypeError: unorderable types: int() < str()
But most popular dynamic languages seem to have more where that came from. And if you have a sufficiently robust type system not to, then it starts to look a lot like the type system of a static language, minus the compile-time error detection.

> what is the value of "compile time", if all this type checking makes the compiler so slow that just compiling is slower than compiling + running unit tests in a simpler language?

To the best of my knowledge, type checking is hardly the slowest part of most compilers. Many dynamic languages have somewhat faster startup times because they don't have a compiler, or they use a JIT at runtime. In any case, for many non-trivial programs, runtime dominates. A sufficiently thorough unit test suite may take significantly longer than compilation. And even if not, someone has to write the unit tests that handle what static type checking does for free.

I'm not going to claim that static typing is universally better; for instance, it's a lot harder to provide a good REPL in a statically typed language. But personally, I like dealing with as many errors as possible up front, rather than discovering them later on.


> But what about all the other tests that you still have to do? Which typically also coincidentally check the types (if I check that a value is > 2, I am also checking that its type is a number (or int or whatever your particular numeric tower or non-tower says)).

I see this argument from fans of dynamic typing quite a bit. Gary Bernhardt manages to defeat it better in 5 minutes than I could in a lifetime, so here you go:

https://www.destroyallsoftware.com/talks/wat


Really? OK, if you're going to base your arguments about dynamically typed languages on JavaScript, then I'm going to base my arguments about statically typed languages on C.

    'asdas' > 2 <Print It> -> Message not understood "SmallInteger>>isByteString"


So what languages would you like to compare on a like-for-like basis? There are plenty of useful invariants that would be verified by the static type systems of everyday languages, but not picked up implicitly in unit tests.

On further reflection, I'm not even sure how your argument about integers is supposed to work. Implicitly testing that an input is an integer by comparing it to another one in a single unit test case wouldn't guarantee that no other code could pass a non-integer input to the same function from somewhere else in the general case, which is what a static type system would do for you.

The context where your argument does seem to have some merit is in unit testing that the outputs from a function are of the expected type. But even then, there have been plenty of related gotchas in JavaScript, Ruby, Python, PHP, Perl... I'm sure you can pick an example where your particular case (the integer test) gets picked up, and I'm sure some popular dynamic languages are getting better in this area (Python 3 has significant advances over Python 2, for example).

Basically, this feels like you're choosing an invariant and a dynamic language that conveniently support your position, while ignoring numerous other useful invariants and languages that would not. If you get to do that, it seems only fair to judge the potential of strong, static type systems using languages like Haskell or Rust, and I can't think of many unit tests I've ever written in dynamic languages that would implicitly verify that the unit under test didn't have unexpected side effects or did not allow access to resources in ways that might not be thread-safe, even in the specific case being tested rather than the general one.


>if I check that a value is > 2, I am also checking that its type is number

Surely, you're only testing that it can be coerced to a number?


I wholeheartedly agree with this sentiment.


Unfortunately, this post doesn't seem to improve on Martin's usual fundamental misunderstandings. He still doesn't seem to understand that unit tests verify single cases but strong type systems prove general cases, and that in situations that could be handled with either approach, the latter is strictly more powerful. He still seems to believe that dynamic languages are the way things are going, despite almost every successful large and complex software system still being written in the popular static languages he dismisses. He still thinks dynamic languages like Python and Ruby are great for productivity, but he's primarily comparing them to Java as the standard for productivity with statically typed languages.

This is the man who, somewhere around 2011 I think, claimed that we might have explored the whole programming language space, and there might not be any new programming languages left to be invented. He's also the man who, going by the very post we're discussing, seems to think that having typing so restrictive that not everything is nullable by default is some sort of radical new idea. (Compare that with Tony Hoare's conference speech in 2009, in which he called inventing the null reference his billion dollar mistake, and notice that almost every modern static language provides this kind of safeguard.) So I'm not sure we should take Robert Martin's predictions for where the programming industry is going too seriously; indeed, he should perhaps learn a bit more about what is already available today before making big public predictions about tomorrow.


> The language is very opinionated about type safety... For example, the fact that a variable of type X might also be nil means you must declare that variable to hold an "optional" value... The extreme nature of the type system in swift

Oh boy, if he thinks that's extreme...

> Why am I wasting time satisfying the type constraints of Java when my unit tests are already checking everything?

Oh really? Are the unit tests testing all 2^64 possible values of an Int? Are they testing for the existence of null everywhere you have a pointer? Strong types aren't just a little bit more info about something; they're a precise refinement of the nature of all values in a program.

While very few type systems can test everything you can test with unit tests, a decent type system and good semantics can cover all causes of crash failures and many causes of logic errors. Seriously, folks; writing programs that crash is more or less optional these days. Writing correct programs is still hard, but you can make it a lot easier on yourself.
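A small, hedged illustration of the "crashes are optional" point in Haskell (the lookup table is made up): a lookup that can fail returns a Maybe, and the caller cannot touch the value without first saying what happens when it's absent.

    -- lookup returns a Maybe, not a value that might secretly be null, so the
    -- missing-key case has to be handled before the result can be used.
    describePort :: String -> String
    describePort service =
      case lookup service ports of
        Nothing -> service ++ ": unknown service"
        Just p  -> service ++ " runs on port " ++ show (p :: Int)
      where
        ports = [("http", 80), ("ssh", 22)]

    main :: IO ()
    main = mapM_ (putStrLn . describePort) ["http", "gopher"]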


> Are the unit tests testing all 2^64 possible values of an Int?

Hmm...your usual type checker is checking 0 possible values of an int. It is checking just the type. And of course if you are checking a value in a test, you are coincidentally also checking its type.

That's not to say that types can't be useful (for example they have been shown to be quite useful as machine-checked documentation for getting around unfamiliar codebases), but please let's not oversell...


In fact, it checks every predicate codified by the type system. The question is how powerful your type system is, because this determines what predicates you can (conveniently) test.

Well-designed strongly typed languages naturally check some useful predicates (existence, the exact structure of the value, etc.). These alone are sufficient to eliminate null pointers, non-total functions, and most other failure modes that plague languages like C++ or Python.

Stepping it up a notch, you can use things like GADTs and kind promotion to check much more advanced predicates (like that a vector must be non-empty or of even length).

With refinement types a la Liquid Haskell, I can statically guarantee that e.g. my function only returns lists of even length or only returns even numbers, without even having to put this information in the type of the returned value.

With full-blown dependent types a la Coq/Idris/Agda, you can encode pretty much whatever property you want into types. For example, Idris has `VerifiedMonoid`, which statically guarantees that an operation is associative and has an identity. See https://github.com/idris-lang/Idris-dev/blob/master/libs/con...
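For a concrete taste of the GADT point, here is a minimal Haskell sketch (the Nat, Vec, and vhead names are made up, not from a library): the vector's length lives in its type, so a head function that only accepts non-empty vectors is enforced at compile time.

    {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

    -- Length-indexed vectors: the type records how many elements there are.
    data Nat = Z | S Nat

    data Vec (n :: Nat) a where
      VNil  :: Vec 'Z a
      VCons :: a -> Vec n a -> Vec ('S n) a

    -- vhead only accepts vectors whose length is a successor, i.e. non-empty
    -- ones; `vhead VNil` is rejected by the type checker, not at runtime.
    vhead :: Vec ('S n) a -> a
    vhead (VCons x _) = x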


The compiler can use the known bounds of a type to perform correctness checks or optimizations. It might not be testing every possible value of an int explicitly, but the compiler certainly uses the range of an int to make lots of decisions and produce warnings/errors.


"On the eve of publication, Bertrand Russel wrote to Frege and pointed out that Frege's logical system allowed statements that were ambiguous -- neither false nor true."

This makes it sound as though Russell showed the system to be incomplete, and the reference to Gödel's theorem in the next paragraph reinforces the confusion. Russell instead showed the system to be inconsistent, which is much worse.


It's disappointing to see this post makes no mention of any ML style functional language or even Rust. It's hard to take a post with this kind of title (and predictions) seriously when it doesn't even mention the big (relatively) new players in the "Type Wars".


I don't know what the author's goals are, but I am in favor of offloading as much work to the computer as possible.

To err is to be human, but compilers never forget.


Good point; the author clearly misses the point that tests themselves are code and thus can have errors in them as well. Code coverage is just a vanity metric if your tests don't actually test the correct thing.


Types are also code and can have errors in them...


Types aren't code.


>I don't know about what the author's goals are, but I am in favor of offloading as much work to computer as possible.

The problem is that both writing more tests and statically typing your code involve extra code you need to write. Neither one comes for free.


Without those pesky types, how does one do automated refactoring in a large codebase? By hand?

Types are one of the hallmarks of large scale engineering. Any codebase of appreciable size without types is difficult to work in, at least in my own experience.

It takes a little bit more effort writing the types, but doing so saves so much time in the long run that it's absolutely worth it.


> Without those pesky types, how does one do automated refactoring in a large codebase? By hand?

Dunno. One could use the RefactoringBrowser. The first automated refactoring tool. Ever. In Smalltalk. A dynamically typed programming language.

The authors even talked about the tradeoffs of using a dynamic language. They thought the problems would be bigger than they turned out to be.

http://dl.acm.org/citation.cfm?id=280610

http://www.refactory.com/tools/refactoring-browser

http://c2.com/cgi/wiki?RefactoringBrowser


> My own prediction is that TDD is the deciding factor.

If this turns out to be the case, then people would reach for statically typed languages to augment and cut down on tests.

Like everyone here is saying, the pendulum is moving toward static typing. The next generation will probably be about higher-kinded types as we continue to attempt to write more generic code that cuts down on the number of lines but still preserves type guarantees.

Finally, I don't think it's appropriate to regard `Option[T]` as some mystical type. It's just a regular old generic class that wraps something. Scala programmers (and I'm sure others) have been doing this with `Either[A,B]` and `Option[A]` for a while, but Swift was really clever in building it into the language with the `?` and `!` operators.
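To underline how un-mystical it is, the whole idea fits in a few lines of Haskell (Maybe in the standard library is essentially this under different names; the sketch below just re-derives it):

    -- An "optional value" is an ordinary parameterized data type: either
    -- there is no value, or there is exactly one value of type a.
    data Option a = None | Some a
      deriving Show

    -- Callers must pattern match, so "forgot to check for null" cannot compile.
    withDefault :: a -> Option a -> a
    withDefault def None     = def
    withDefault _   (Some x) = x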


"Clever" or "overly specialized", depending on your viewpoint.


I think they made a much better decision than everyone else, even if it's verbose.

Nearly every language has continued to repeat Tony Hoare's "billion dollar" mistake of allowing null to inhabit any type. I think approaches like Rust's and Swift's are steps toward stamping out something that never should have existed in the first place.


> Nearly every language has continued to repeat Tony Hoare's "billion dollar" mistake of allowing null to inhabit any type.

Perhaps I was unclear. I could not agree more with this assessment (see link to my blog post in sibling).

I was merely commenting on Swift's choice to do it in a (to quote the OP) "clever" way, rather than a general one like Rust or Scala.


Agreed, but I will say it's been a boon to my iOS development.



Modern type systems have made type inference and gradual typing possible, and those are making a huge difference nowadays.

With type inference you can write programs almost as easily and productively as in dynamic languages. With gradual typing you can basically write the same code as in a dynamic language and still get type safety (e.g. TypeScript).

Swift, Go, Rust, Scala, C#, and even C++ have type inference today. Maybe dynamic languages will get type annotations and gradual typing in the future, and then we can basically have the best of both worlds.
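A small Haskell sketch of how far inference goes (the function name is made up): no annotations are written, yet everything is fully statically checked; the comments show the types GHC infers.

    -- GHC infers: pairWithLength :: Foldable t => [t a] -> [(t a, Int)]
    pairWithLength xs = map (\x -> (x, length x)) xs

    -- GHC infers: main :: IO ()
    main = print (pairWithLength ["static", "typing", "with", "inference"])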


There are many concepts which approach equivalence, e.g. the IO monad and the C preprocessor.

http://conal.net/blog/posts/the-c-language-is-purely-functio...

Likewise, tests and type systems approach equivalence (especially with dependent types).

Thus the distinctions are practical rather than purely theoretical.

(1) In practice, I find the static typing approach to result in more complete "tests".

(2) In practice, I don't see people writing reusable tests when they share code with others. However, static types are shared with the user's code, thus reducing the tests they need to write themselves.


>The pendulum is quickly swinging towards dynamic typing. Programmers are leaving the statically typed languages like C++, Java, and C# in favor of the dynamically typed languages like Ruby and Python. And yet, the new languages that are appearing, languages like go and swift appear to be reasserting static typing?

The author makes the claim that the pendulum is swinging back towards dynamic typing but then follows up immediately with the contradictory evidence that the new hip languages showing up are strongly typed.

I'm curious what evidence, if any, can be presented that this pendulum is swinging in any direction, as opposed to just being pulled at opposite ends as it has been for quite some time.


While it might seem like things are shifting back towards static typing, I think that is primarily happening in the development of new technology, not yet in its use. I think these predictions will probably appear to be true for some period of time before these newer typed languages begin to gain larger amounts of usage. (However, I expect the testing will probably stick around. I'm highly skeptical of the idea that typing and testing are somehow interchangeable.)

Also, I think it'll be interesting to see what role gradual typing systems like Flow, Hack, and TypeScript might have on getting static typing into existing dynamically typed codebases.


Ruby salaries tend to be higher than JVM salaries? What?



I was surprised by that statement too, I don't think it is true in London.


Really? My experience is that while there are a number of high-end Java positions, and a number of Ruby positions in well-funded startups and such, there are VAST numbers of Java positions for mediocre engineers and mediocre companies – no fault of the language, more of the niche it has found itself in. I'm not surprised at all to find that this drags the average salary down.


> "Why am I wasting time satisfying the type constraints of Java when my unit tests are already checking everything"

Not only is this a straw man, it's also a really weak argument.

Types prove the absence of (type) bugs. Tests can merely prove the presence of such bugs.



