
> However, given that I’m working on a database engine now, not on business software, I can see a whole different world of constraints.

This might be my own confirmation bias, but my takeaway is this: your point of view, and the constraints you see or believe are there, are the main determinants of your choices, not whether some particular pattern or feature of a language is intrinsically good.

The older I get and the more code I write, the more I find this to be true. I change my own mind about things I used to argue fiercely over and things I thought were tautologically true.

I think (hope) this is making me more open-minded as I go, more willing to listen to opposing points of view, and more able to ask questions about what someone else's constraints are rather than debating the results. But, who knows, I might be wrong.




I think it is hard to separate this from simply developing more maturity in the field. For example, people learning OO seem almost to have to go through a phase of over-reliance on inheritance.

Pointing back at yourself over 10 years is pointing to a different place, sure, but it is also a different person.


I have many years of programming in Java under my belt. Until I started using dynamic languages I thought static typing was really important. It's not.


> I have many years of programming in Java under my belt. Until I started using dynamic languages I thought static typing was really important. It's not.

Static typing is useful, especially in large projects, if it provides the right guarantees.

OTOH, whether it does that depends on the type system. Go, Java, Pony, Rust, and Haskell are all static, but their type systems offer very different capabilities. If you have a type system that has a lot of ceremony and fails to provide the guarantees that are actually needed in your project, it's a pure burden. If it's low-ceremony and provides guarantees important to your project, it's a clear, undiluted benefit. Reality often falls somewhere in between.


It rules out certain categories of bugs, makes it hard to assign a string to an int, etc...
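For example (a rough sketch, using Python type hints to stand in for any statically checked language, and assuming a checker such as mypy runs before the code ships):

    def total_cents(price_cents: int, quantity: int) -> int:
        return price_cents * quantity

    # A static checker rejects this call before it ever runs, roughly:
    #   error: Argument 1 has incompatible type "str"; expected "int"
    total_cents("499", 3)
    # Without the check, it runs "fine" and silently returns "499499499".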

If you are writing a small one-time script to accomplish a task, clearly that kind of protection is of low value.

If you are trying to write or maintain a system intended to last 20 years and keep bugs out of 100 million lines of code, every kind of check that can be automated has extremely high value.

Most projects fall somewhere between these two extremes. What we should be debating is where the cutoff point lies at which strong static typing starts to help, not its inherent value, as Dahart suggested.


Simple designations like "static typing" and "dynamic typing", even when you bring in the concept of strong vs. weak (Java allows concatenating an int to a string, Python throws an error), aren't very helpful when languages like Common Lisp exist. (Edit: nor are "compiled" vs. "interpreted", for the same reason, and especially in current_year, when just about everything compiles to some form of bytecode; whether that is then run by assembly or by microcode running assembly is a small distinction.) Specific languages matter more, and specific workflows within languages matter more still. And, as you say, what you're trying to build also matters, but not all that much.
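(To make the strong-vs-weak contrast above concrete, here's a rough Python sketch, with the Java behavior noted in a comment:)

    >>> "items: " + 3
    TypeError: can only concatenate str (not "int") to str
    >>> "items: " + str(3)    # Python requires an explicit conversion
    'items: 3'
    # Java, by contrast, coerces silently: "items: " + 3 gives "items: 3"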


You are right that the type system waters are muddied by a variety of technologies and perhaps that isn't the best line to draw. I think your focus on the semantics of static vs dynamic dodges much of my point.

The crux of my argument was that the larger and more complex the work, the more important it is to find errors early. It seems obvious to me that languages like Java, C++ and Rust do much more to catch errors early than languages like Ruby, Python and JavaScript, which are easier to get started with and to build a minimum viable product in. Put those two things together and it seems like a strong heuristic to use when starting a project.


This is why I think workflows matter too, at least as much as the language itself. If you write Python like you write Java, of course you're not going to catch some errors before you ship that Java would have caught, and you're probably going to be frustrated when you're at a company where everyone writes Python like Java. But if you write Python like Python (you can't write Java like Python), you'll find many of your errors almost immediately after you write the code, because you're trying it out in the REPL right away and writing it in chunks that make that easy to do.
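Concretely, that workflow looks something like this (an invented example with hypothetical names, not anything from the thread):

    >>> def parse_duration(s):
    ...     mins, secs = s.split(":")
    ...     return int(mins) * 60 + int(secs)
    ...
    >>> parse_duration("3:25")
    205
    >>> parse_duration("1:03:25")    # hours aren't handled; caught right away
    ValueError: too many values to unpack (expected 2)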

Maybe a few type errors will still slip by, but you'll have found and fixed so many other kinds of errors much earlier, the kinds of errors that benefit from being caught immediately instead of festering because they passed a type checker. (I've never really found a type error to be a catastrophic, oh-I-wish-we'd-found-this-sooner kind of bug. You fix it and move on. It's not dissimilar to fixing the various null pointer exceptions that plague lots of corporate Java code.)

To me your obvious claim is not obvious at all, because the tradeoff space is so much richer than type systems alone. We're not even touching on what you can do with specs and other processes that happen before you write code in any language, nor on other language tradeoffs like immutability, everything-is-a-value, various language features (recall that Java only recently got streams and lambdas), or expressiveness (when your code is basically pseudocode without much ceremony, or better yet when you can make a DSL, there are a lot fewer places for bugs to hide)... Typing just doesn't tell that much of the story.


The type system is your friend, not your enemy. You are comparing type errors to null pointer exceptions, aka the billion-dollar mistake. You can have an extremely powerful type system, with very little ceremony, that continuously checks that you are not shooting yourself in the foot, AND still have a REPL. For example, F#. Your code will be extremely expressive and creating a DSL can be a breeze, with the huge benefit that even your DSL will be type checked at compile time.


That's my point: all of that is in favor of F#, not static typing in general. I'm not opposed to type systems -- Lisp's is particularly nice, I like Nim's -- but having static types or not isn't enough of a clue as to whether a language really is suitable for large systems or can catch/prevent the worse errors sooner.



