The comeback of static typing (kishorelive.com)
73 points by karterk | 111 comments



The lack of popularity for static typing can be laid mostly at the feet of Java and C++. A language with a truly good type system like Haskell, OCaml or Scala gives you more advantages without many of the disadvantages of a Java-style system.

Too bad so many people were too poisoned by Java et al. to consider switching to one of the good statically typed languages; happily, the recent popularity of those languages (witness this blog post :)) has gotten people to consider them.


> The lack of popularity for static typing can be laid mostly at the feet of Java and C++.

If anything, I'd argue that Java is the main reason for the rebirth of the popularity of static typing.

Java is a C++ without the compiler getting in the way, so it exposed the power of what you can do with a type system (a pretty weak one, but a type system nonetheless) when the syntax is not tripping you up at every step.

By making static typing popular, Java kickstarted a whole new generation of improved languages such as C#, Scala and Kotlin which leveraged all the good concepts introduced by Java (such as no more separation between headers and implementation) while adding important missing features, such as type inference and traits.


Agreed, it's easy to look back now and consider Java negatively, but 15+ years ago (which is a long time in our field) it caught on for a reason.

It's not like Sun took a gun to everyone's head and said "use Java, damn it". I mean, Microsoft might try that :-), but Java succeeded almost despite Sun (e.g. on its own merits as a language), instead of because of them.

And really it was Sun's bungling of the language's evolution that led to Java's lackluster reputation today. In contrast, Microsoft did a very admirable job of evolving C# to keep up with more modern syntactic/semantic expectations.


> Agreed, it's easy to look back now and consider Java negatively, but 15+ years ago (which is a long time in our field) it caught on for a reason.

I suggest that Java mostly caught on for two reasons (from a purely technical perspective):

1. It had automatic garbage collection.

2. It had a batteries-included approach to its standard library.

In practice, the lack of both of these had caused much pain for C and C++ programmers for a long time, and a platform where things Just Worked was very appealing to a lot of those programmers. That in turn led to non-technical advantages that drove the widespread adoption of Java for server-side/enterprise programming.

I don’t see that either of those technical advantages has much to do with the type system, though. In fact, of all the programming languages and type systems I’ve used, Java’s is probably still the worst by almost any interesting measure.

I don’t really accept the original premise that static typing is making a comeback; it never went anywhere. However, if it’s gaining renewed interest in certain parts of the industry, I suspect that has more to do with increasing experience of where dynamic typing can have disadvantages as more projects using it grow larger and are maintained for longer, and perhaps to some extent with aspirational statically typed languages like Haskell showing that some of the perceived disadvantages of static type systems are not inherent problems.


I think even today, but especially many years ago, my experience using Java code, both in the form of web applets and in the form of standalone applications, has been far from "just works".

It is better these days, but even today Java programs tend to have more distribution issues than other languages.

In the olden days, besides the awful Swing GUI that made Java immediately recognizable, gray squares and/or stack dumps when trying to run Java applications or applets were common.


One of my major ongoing web projects still uses a Java applet, so I’ll be the first to agree with you that it doesn’t always just work. Having said that, if you look at the kind of tools we have for packaging and distribution in say Python or JavaScript, it’s clear that even popular languages today can still have a very long way to go in this respect.

I would love to have a mainstream development stack with the kind of programming tools and expressive power that plenty of modern languages and libraries now offer, but which ultimately produced compiled and statically linked executables the old-fashioned way at least by default, without necessarily relying on package managers or heavyweight virtual machines or run-time linking or writing code just to describe how to install my code. Unfortunately, I doubt that such a language/tool chain exists today, though I’m optimistic that sooner or later someone’s promising niche language will break through.


Don't get me started on the pains of #ifdef everywhere to manage to write portable code...


15 years ago, Java was to C++ as Ruby is to Java: much slower, used by startups, touted for claims of programmer efficiency (garbage collection!)


Except Ruby is, to a certain extent, Smalltalk under another name, without the nice developer environment.


You can't really say that Java and C++ are not popular languages. C++ is only really popular, IMHO, because there hasn't ever been a good competitor to it in the compile to native programming domain except for C. However, Go and Rust, IMHO, are interesting approaches and I wouldn't be surprised if Go or Rust replaced C++ over the next 10 years. I imagine C will still maintain its popularity in very low-level systems programming.

The thing about Java is that it is really hard to write clever, hard-to-maintain code in it. Sure, that makes it painful to develop with, but because it leads to maintainable code, the code never gets thrown out and rewritten, and almost by accident very large multi-million-line code bases grow in it over time. These huge code bases lead to a lot of jobs for people doing maintenance on them, not because developers like the language, but because the code still works and never becomes impenetrable, as most large code bases often do.


I like Go, but it will never replace C++. Nothing that uses a garbage collector ever will.

The Go authors are saying that Go is for "systems programming". What they apparently mean by that is server side infrastructure stuff, the kind that is now frequently written in Java. Things like Hadoop, Tomcat or even database systems.

What they don't mean (I believe) is operating systems, device drivers, embedded systems, real time stuff including many trading and telecoms systems, graphics pipelines, signal processing, libraries that are too complex to rewrite in every language (like browser engines), etc. There's a very long tail of these things and it's growing as we get more intelligent connected devices.

Much of that is now written in C++. The question is, will C come back to replace C++? I don't think that's going to happen. I love C. It was my first language and I admire its simplicity. But there is one thing that C++ has over C: RAII. RAII is an ingenious universal resource management tool that gives us 90% of the productivity gains we get from garbage collection without any of the unpredictability, and it helps with resources other than memory as well.

Forget templates and all that fancy stuff. C++'s raison d'être is RAII. So I think Rust may have a chance to replace C++ (purely from a technological perspective), but Go unfortunately does not.
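
To make that concrete, here's a minimal C++ sketch of RAII (a hypothetical LogFile wrapper, not something from the post): the constructor acquires the resource, and the destructor releases it on every exit path, so cleanup is deterministic and exception-safe with no collector involved.

    #include <cstdio>
    #include <stdexcept>

    class LogFile {
        std::FILE* f_;
    public:
        explicit LogFile(const char* path) : f_(std::fopen(path, "a")) {
            if (!f_) throw std::runtime_error("cannot open log");
        }
        ~LogFile() { std::fclose(f_); }    // runs on return or exception
        LogFile(const LogFile&) = delete;  // prevent accidental double-close
        LogFile& operator=(const LogFile&) = delete;
        void write(const char* msg) { std::fputs(msg, f_); }
    };

    void work() {
        LogFile log("app.log");            // acquire
        log.write("started\n");
        // even if code here throws, ~LogFile() still closes the file
    }

    int main() { work(); }

The same pattern covers locks, sockets and transactions, which is the "resources other than memory" point above.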


> I like Go, but it will never replace C++. Nothing that uses a garbage collector ever will.

Only the future will tell.

In the past I have used desktop operating systems (Native Oberon and AOS) without a single line of C++ code, written in GC-aware systems programming languages (Native Oberon and Active Oberon).

I do like C++ a lot, but I am quite confident that manual memory management will eventually be a sweet memory of old coders.


Update to my own comment.

I don't mean that Go will be that language per se, but that whatever we might use for doing systems programming in future OSs will be GC enabled, even if it allows for some way to do unsafe/system local memory management.


There are certain ways to write Java that are both more expressive and clever (and therefore more fun), as well as more dangerous in terms of future maintenance. Various dependency injection frameworks - e.g. Guice - are known for introducing some "magic" into Java code that sometimes may exceed the tolerance threshold of less experienced programmers.


>Various dependency injection frameworks - e.g. Guice - are known for introducing some "magic" into Java code that sometimes may exceed the tolerance threshold of less experienced programmers.

A lot of "dependency injection" BS can also exceed the tolerance threshold of more experienced programmers.


It depends :)


As no-one else has mentioned it yet, here’s a link to a copy of Chris Smith’s excellent “What to know before debating type systems” article:

http://blogs.perl.org/users/ovid/2010/08/what-to-know-before...

If you’re at all interested in the real strengths and weaknesses of different kinds of type system but don’t yet have much experience in the field, this is a solid and impressively neutral starting point.


I think language evolution will always be cyclical like this--dynamic languages sprint ahead, and the static languages eventually catch up.

The reason is that dynamic languages are so easy to build--they usually start out as hobbyist projects. Not to overly trivialize it, but these days almost anyone can sit down with a compiler book and have an interpreted language up and running quickly.

So dynamic languages evolve quickly--it's easy to try new things, see if they work, see if they don't.

Static languages, on the other hand, are hard to build--you don't often see hobbyist language developers coming up with type systems in their spare time. Type systems are hard.

And the tooling for static languages is hard too--dynamic language users live in emacs/vim/Sublime, but static language users are going to want an Eclipse IDE, refactoring, etc., etc.

So, static languages evolve slower.

But once someone has put the time and effort into a static language, and has the tool chain polished, I assert that most programmers (of course not all) that sprinted ahead to the dynamic languages will find themselves tempted to wander back.

If the syntax is the same (once the static languages catch up), why wouldn't you want static typing, given that, even in dynamic language programs, 95%+ of the calls could be statically verified anyway?

(Admittedly, how you deal with the remaining 5% (reflection, macros, XML, whatever) is left an exercise to the reader.)

Great case in point: Groovy. The original creator, James Strachan, was a hobbyist, and I believe (admittedly putting words in his mouth), if not consciously, built Groovy as a dynamic language because it was easy. But now he prefers Scala, which is basically the same syntax, but with all of the extra infrastructure around typing/tooling/etc.


I prefer static languages for enterprise projects and dynamic ones for small-scale ones, especially because of the differences in skills among team members.

But the funny thing about IDEs is that the best ones were originally made for dynamic languages (Smalltalk and Lisp Machines), and most IDEs are still lacking feature-wise when compared to what those environments allowed.

Meanwhile, many hackers prefer to use '70s-style vim/emacs editors.


> But now [James Strachan] prefers Scala, which is basically the same syntax, but with all of the extra infrastructure around typing/tooling/etc.

James has actually moved on from Scala to helping build statically-compiled language Kotlin!


Ha! How interesting. I didn't know that; thanks for the update.


I disagree that static language users will want an IDE. I'm a static-typing advocate, but I much prefer vim over IDEs.

Then again, I design static-type systems in my spare time, so maybe I'm not the target of your comment :)


"Over time, not only have I learned to appreciate languages with good type systems, I actually now prefer them for solving certain classes of problems."

I think that dynamic typing seems easier at first, but after a while you see how static typing would catch mistakes you make. While one programmer gains the experience necessary to appreciate static typing, a new programmer starts programming and doesn't have that experience yet, thus the static vs dynamic debate is always with us.

I described it here: http://www.databasesandlife.com/the-cycle-of-programming-lan...


That's very true; it will always be with us. I started in C, spent a lot of time there, and felt crazily liberated in Perl, PHP and JavaScript. But I still love and use C (and recently Objective-C).

This all boils down to skill with tools: cordless drills are amazing, and so are floor drill presses.


I think static typing (10+ years ago) got a bad rap not because of static typing as such, but everything that came with it (and didn't necessarily need to). I'm talking about the languages of the time (C, C++, Java primarily).

C is a bit of the odd one out here in that its static typing is a really thin veneer, since you can cast things to whatever you want them to be.

Python and Ruby (in particular these two) more than anything else allowed you to write really terse, expressive code. The canonical Python example of this is defining a tree data structure:

    from collections import defaultdict
    def tree(): return defaultdict(tree)
or solving the N-queens problem in ~6 lines of code [1].

Java and C++ OTOH were (and are) much more verbose but only some of this has anything to do with static typing.

All of this is one reason I'm incredibly bullish on the future of Go for such reasons as:

- implicit interfaces: if a type implements the methods expected by a consumer then it compiles, otherwise it doesn't (see the sketch after this list);

- incredibly simple OO system;

- no exceptions as such (particularly checked exceptions in the case of Java, which IMHO are just horrible). Some consider this a minus. I tend to think that exceptions tend to get abused for flow control and aren't the panacea they're made out to be;

- simple semantics for function returns and the ability to return multiple items;

- opinionated formatting (I'm not one of those people who has almost religious zeal for the placement of curly braces and indenting so I'd rather just have a standard and stick to it, whatever it is);

- compiles to a binary rather than running on a VM;

- simple grammar with fast compilation. This last point is contrasted particularly well with C++;

- coroutines (or rather goroutines) as a first-class language citizen for solving concurrency problems;

- the "defer" mechanism for resource cleanup. I find this much easier to read than tortured try-finally semantics and nesting.

Another way to put it is this: a lot of vim/emacs users judge IDEs as horrible based on having (often briefly) used Eclipse, which is a bit like judging the car by having once driven a 1975 Trabant.

Don't judge static typing by Java or even C++. Judge it by Go.

[1]: http://rosettacode.org/wiki/N-queens_problem#Python



> The canonical Python example of this is defining a tree data structure:

  def tree(): return defaultdict(tree)
But you're not defining a data structure -- you're just reusing the builtin dictionary type.

Try defining, say, an RB tree without using builtin container types, and you quickly see how poor these languages' support for new data structures is.


I think you're willfully missing the point - you don't have to implement an RB tree without the built-in containers because the language comes with built-in containers. Python doesn't need to be optimized for doing such things from scratch because hardly anybody ever needs to do them from scratch.


The language comes with a small number of simple but widely applicable containers.

That doesn’t negate dons’ point that if you need something other than those containers there is relatively poor support for defining your own.


I don't get your point.

In C++ "return std::set<int>" - there's your tree.


"Python and Ruby (in particular these two) more than anything else allowed you to write really terse, expressive code."

If you like being able to write terse, expressive code, you'll love APL,[1] where Conway's Game of Life[2] can be written in one line as:

  life←{↑1 ⍵∨.∧3 4=+/,¯1 0 1∘.⊖¯1 0 1∘.⌽⊂⍵}
[1] - https://en.wikipedia.org/wiki/APL_(programming_language)

[2] - https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life


We also want our code to be readable.


I can't read APL, but perhaps that is readable. I'm not sure that other common dynamic languages like Ruby or Python are more readable purely because they use a more verbose syntax (http://en.wikipedia.org/wiki/COBOL#Verbose_syntax).


I think a really simple rule-of-thumb for code is: "can you dictate it through the phone?"


Code has nested structures which are commonly shown with indentation, which it's difficult to communicate vocally with say, intonation. E.g. think about how differently you'd dictate this variation on your reply...

I think a really simple rule of thumb-for-code is "can you dictate it?" through the phone.


shudder

My first introduction to functional programming was with J (APL but using ASCII) during my sophomore year of college and I definitely didn't have the chops to understand it at that point. I was scared of functional languages for another year until a different professor introduced me to Haskell.


APL is far more readable than J in my opinion, perhaps because APL keeps the brackets and curlies balanced, while J doesn't. All operators in APL are one token long, while J uses multi-token operators. The extended character set of APL has many benefits.


This is terse but not expressive.


It is highly expressive.


For me, static typing was actually learned by using ML, Modula-2, Pascal and Ada, with C showing a weaker kind of typing.

Java and C# also have native code compilers available to them.


Why don't MLs get enough credit?


“Being right too soon is socially unacceptable.”

ML was an improvement on most of its successors, and if you trace the development of mainstream languages, you see that they are more and more becoming ML.


ML is actually the poster child of static typing with its Hindley-Milner type inference.


Yes, exactly! I always find it really impressive that ML and its type inference were developed in the 70s, just a few years after C had been defined.
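
For instance, in OCaml (a modern ML), Hindley-Milner infers a fully polymorphic type with no annotations at all - a toplevel sketch:

    # let compose f g x = f (g x);;
    val compose : ('a -> 'b) -> ('c -> 'a) -> 'c -> 'b = <fun>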


C only got ubiquitous due to being UNIX's systems programming language.

Who knows how it would have turned out if UNIX had not become as successful as it did.


> - compiles to a binary rather than running on a VM;

To me, this is one of the many reasons why Go will fail.


VM is just a conceptual model.

Most languages that usually have a VM also tend to have native code compilers available; it is just that most HN folks tend to be unaware of them.


Could you elaborate on this?


These days, there are more advantages than drawbacks to running in a VM. The performance advantage of native code over bytecode has shrunk considerably, to the point of being relevant only for a tiny portion of applications (e.g. games). On the other hand, there are considerable advantages to VMs like the JVM, as Java and the generation of languages it kickstarted have shown.


Having a VM also adds a whole lot of security concerns and, in the case of the JVM, a dependency on Sun/Oracle. VMs are not the only way to kickstart a generation of languages; there are other options, such as LLVM (yeah, that name is confusing in this context).


> ... in the case of the JVM, a dependency on Sun/Oracle ...

Why the dependency on Oracle?

There are many other JVMs available, even non-certified ones, and you can also use compilers to native code if you wish to do so.


Wouldn't a VM help with security?


I'm not the parent, but one reason could be that the feature is more about collaboration of different languages for different purposes on a shared ecosystem (JVM, CLR, JS as bytecode, etc).

Which Go doesn't play well with.


> A few years back, one big lure of Python or Ruby used to be the functional constructs that they offered like higher order functions.

This statement surprises me.


Why's that, dons? They are playing a bit fast and loose with what 'higher order functions' means, but, for example, one of Ruby's best features is blocks. The details of higher order stuff in Ruby is certainly... messy, but blocks are a great introduction to the concept for people who haven't used them before. I've taught the concept to high school kids on their first day of programming, (mostly through a physical metaphor involving a list of chores on a piece of paper) and they picked it up easily enough.


I can't speak for Ruby (or for dons :), but I'm not sure Python's use of HOFs is a good example to learn from, when its lambda scoping rules produce such "interesting" results.

  >>> monomials = [lambda x: x**i for i in [1, 2, 3]]

OK, what will these functions evaluate to at the point 2.0?

  >>> monomials[0](2.0), monomials[1](2.0), monomials[2](2.0)
  (8.0, 8.0, 8.0)

The variable i was captured by reference, shared by all the lambdas, so each of them sees its final value. For an integer. Ouch.

You can do "lambda x, i=i: x**i" and capture the value properly, but to me this is a landmine waiting to be stumbled upon every time I write a lambda. I for one will often forget to do this, at least so long as I also keep using other languages which can capture integers as values without hijinks.


Yeah. Makes sense. Ruby also has a few of these kinds of weirdness, especially around the multiple kinds of HOF (lambdas, Procs, and blocks) and their limited utility compared to a truly functional language.


Every language with mutable variables and closures that I've used works the same way (other than ones like C++ where you have to explicitly capture by either reference or value). The behavior can be a bit confusing, but it makes perfect sense once you realize that closures are capturing variables, not values.


I wonder if anyone knows why this subtlety exists in Python. Is it because reference semantics and HOFs are fundamentally incompatible, or could it have been avoided somehow while still retaining reference semantics? The most common complaint about lambda in Python is that it is restricted to expressions, which is a conscious design decision that could have gone the opposite direction.


The subtlety sixbrx mentions used to be present in C#, but it has been mostly fixed by moving the declaration of the foreach loop variable inside of the loop [1]. Like Python, C# has reference semantics. I'm surprised that Guido van Rossum didn't make a similar change when he broke backwards compatibility in Python 3.

[1] http://blogs.msdn.com/b/ericlippert/archive/2009/11/12/closi...


Lexical scopes capturing variables is more powerful than capturing values.

If you want to capture values, that's easy too, with:

  lambda x, i=i: ...
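
With that default-argument trick, the earlier monomials example behaves as expected (a quick sketch):

  >>> monomials = [lambda x, i=i: x**i for i in [1, 2, 3]]
  >>> monomials[0](2.0), monomials[1](2.0), monomials[2](2.0)
  (2.0, 4.0, 8.0)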


Fair enough. I'd say they provide more of a flavor of "higher orderness" rather than a true implementation, as the semantics are so limited and ad hoc in Ruby.


Seems fair. I highly respect your opinion on the topic, and that makes a lot of sense. There's a reason that I moonlight Haskell and Rust. :)


I agree with his view, but I could never find a statically typed language that was useful in the long run.

Haskell was amazing as long as you don't need to do much IO. When I started spending more time fighting the constraints than being productive, I stopped.

OCaml was also not bad. But its strict restrictions on which operators can be used with which types are annoying when you do lots of calculations. Having to explicitly cast numbers all the time was not inviting either.

Rust sounds great, but after a day of playing with it I saw it's really not ready yet. A couple of places needed wrapping in an "unsafe" block to start working as expected. Some keywords and ideas were also going to change soon. I couldn't get used to the restrictions coming from not having exceptions or early function returns. It really adds some maintenance code.

Scala was probably the best so far, but the heavy VM is annoying.

So... still waiting for that perfect language.


>I couldn't get used to the restrictions coming from not having exceptions or early function returns

I'm probably getting confused about what you mean here, but I thought you can return from anywhere in a function in Rust using 'return'. I think Option types are one way to propagate errors back up the stack (with the compiler forcing you to handle them), but I don't know how practical that is in reality.

I agree that Rust isn't ready for production use though, even in the few weeks I've been trying it they have decided to change parts of the language. Also the standard library is lacking in documentation and is pretty inconsistent.


What I mean is that if you don't have exceptions and `return` returns only from a given block, it takes some serious exercise to break out of a "fn ... { ... each { ... match ... } }".

Return will only return from one of them, so to signal an error from the match block, you need to:

- save the result in a mutable variable in a function

- do something that results in "false" returned from each block

- return the result from the function (probably casting mut->immut on the way)

What I'd really like is either a stack unwinding exception, or a `return` keyword that's different from the "last expression in a block" syntax. Last expression could give a result for a closure, match, if, etc. while `return` could always break out of a function. For example (pseudo-code, proper syntax doesn't matter here):

    fn parse_into_list(s: &str) -> Option<Stuff> {
        let res = do s.map |c| {
            match c {
                bad_character => return None
                ...
            }
        }
        Some(Stuff(res))
    }
Instead of:

    fn parse_into_list(s: &str) -> Option<Stuff> {
        let mut result: ...;
        let mut correct = true;
        do s.each |c| {
            match c {
                bad_character => {correct = false; false}
                good_character => { do_stuff_with_result; true}
            }
        }
        if correct {
            Some(Stuff(result))
        } else {
            None
        }
    }


Then you should probably invest in Scala. The (J)VM is actually a blessing; Scala is standing on the shoulders of giants. I'm not talking about Oracle, but about other languages that depend on the JVM, such as Clojure, that will (hopefully) push the VM forward.


I don't like Scala being based on JVM/.NET for the following reasons (I know many people don't agree, this is all subjective):

- inherited null (I love the Option<> and Either<> patterns and exceptions; nullable stuff is a problem)

- relying on foreign ecosystem (you end up with half functional, half "we have inherited all this mutable stuff, so let's use it" ugly mixture)

- startup time (if you write things that are script-ish in nature, this is really annoying)


The first two are required for Java compatibility; the last can probably be mitigated by compiling to native code using LLVM (http://greedy.github.com/scala-llvm/) (I haven't used it).


> - startup time (if you write things that are script-ish in nature, this is really annoying)

Just compile your code to native using a native code compiler for JVM bytecode. There are quite a few to choose from.


So scalac and then another compiler after each modification? That's speeding up startup, but slows down development even more.


Of course you should only compile to native code when making the package to distribute the scripts.


Try Go. No VM. Compiles super fast (slow compilation being one of the most annoying parts of static typing when used for big programs). Obvious and easy-to-use closures. Implicitly fulfilled interfaces for painless compile-time duck typing, and type inference for less typing.


Of course it has a "VM" (aka runtime system), providing services like thread scheduling, garbage collection, IO management and so forth. It might just not be an easily accessible standalone VM like the JVM.

> Compiles super fast

Because it doesn't do anything. Seriously, it compiles fast because the type system is so trivial. No type inference (except the really restricted := thing). And I can't even have polymorphic functions -- which we've known how to type-check for around 40 years now.

This is a serious drawback if you are interested in type system support for code reuse.


> Of course it has a "VM" (aka runtime system)

VM != runtime system. VM == virtual machine, bytecode.


Virtual machines most certainly do not require bytecode.

GHC for example targets the "Spineless, tagless G machine" by compiling to native code that calls runtime services defined by the (abstract) STG machine ISA (the "primops").

Similarly, Go targets the Go runtime, by compiling to native code that calls "machine" services like collecting memory or scheduling threads. It might be a thin wrapper over the underlying processor architecture, but the compiler is clearly targeting the Go runtime as its target "machine".


The Go runtime does none of the things a machine does. Compiled Go code consists of pure, native machine code instructions, no special opcodes. It's just that some additional functionality is statically linked in.


You should check out Julia[0] and Dylan[1]. Both are dynamic with support for gradual typing and parametric polymorphism (multiple dispatch and generic functions). This, in my opinion, is the best of both worlds.

[0] http://julialang.org/ [1] http://opendylan.org/


Julia looks awesome from the description. I'll need some hands-on time with it :) I love the Kinds approach.


>Rust sounds great, but after a day of playing with it I saw it's really not ready yet.

Well, they never said it was ready. Actually they say the exact opposite: that it's still a moving target. It's 0.5 for a reason. Hopefully around this time next year there will be a 1.0.


I didn't want to imply otherwise, but languages have different paths to "ready". Some are stuck in 0.999 for ages, some are actually usable after 2.4 (python), some are quite stable without any specific number (ooc), etc. It's worth checking out your options early.


> So... still waiting for that perfect language.

Have you looked at the new, post-Scala, generation of languages, such as Kotlin? http://blog.jetbrains.com/kotlin/


No, but it's on the ToCheck list, right after Go.


Haskell is great at doing much IO. How long did you try to use it?


Not long, but enough to rage-quit. This is very subjective, but if after 1-2 months I'm still finding myself in a situation of "oh god, this actually needs to do something useful; now I have to pull IO through half of this module", I'm starting to blame the language, not myself. I'd love to use haskell sans the academically pure approach to "the world". I'm not doing value computations - my applications spend most of the time pushing/pulling bytes outside of the system, so that part should be trivial. Maybe it's just not the right language for the job, maybe it's my approach. Either way, all the other great things about the language couldn't divert my attention from fighting to get stuff done.


Haskell can definitely take more than 1-2 months to pass the learning curve, especially without a good mentor.

You generally shouldn't need to "pull IO through half this module", since you can just lift the large pure part with "fmap" to operate on the inner IO. But using these combinators effectively does take some learning.

The approach to IO that Haskell uses is not "academic", it is extremely practical: By typing effects you get immense practical, not academic, benefits.

Pulling and pushing bytes outside the system is trivial.
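
A minimal Haskell sketch of what lifting with fmap means here (hypothetical file name and function, not from the parent): the pure logic stays pure, and fmap applies it inside the IO action, so IO never spreads through the module.

    import Data.Char (toUpper)

    -- The pure part knows nothing about IO...
    shout :: String -> String
    shout = map toUpper

    -- ...and fmap lifts it over the IO action.
    main :: IO ()
    main = fmap shout (readFile "input.txt") >>= putStr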


> Pulling and pushing bytes outside the system is trivial.

Though doing so with any performance might not be.


Why? Haskell has excellent performance for IO.


Depends on what level of abstraction you want to be at. If you use C-style IO, that's fine, but low-level. If you use lazy IO, which unfortunately made it into the Prelude, forget your performance. The Right Way is to use enumerator/left-fold-based IO, or the pipes or conduit packages. But that's not trivial, yet. (Though not insanely hard, either.)


It's not any worse than IO in other languages, which is the original topic.


Lazy IO, which is what you get by default, if you just learn the language and use the Prelude, is worse than what you get in C.


So Haskell makes IO harder by making you learn which modules use lazy I/O and avoid them? I can agree with that. But I don't think that's what people generally try to say when they claim IO in Haskell is hard or bad. People think the IO/effect segregation in Haskell makes IO bad or hard, when it's in fact just the opposite.


Ever tried D? (don't believe the hype)


Briefly. It was hard to find much information, and from what little I got to experience, I'd rather take ooc or Cython in cases where I'd otherwise want to try D. But that was before D2 - maybe it's worth giving it another go now...


The title should have been "My personal comeback to static typing", because static typing was never gone.


Well, if you read HN religiously you might have thought otherwise.


I have distilled my thoughts on this down to: if you think dynamic typing is the only effective way to do things, you probably have never done any significant debugging or maintenance. Dynamic typing is a solution, but like all solutions it has limits. Here is my longer article on the topic, before I distilled it down to the above: http://www.win-vector.com/blog/2012/02/why-i-dont-like-dynam...


Disagree that static typing is important - it's all about static analysis. Static analysis has come a long way for dynamic languages; see TypeScript or the Closure JS compiler for example.


Hm, this is an interesting point. One could say that once one has such general things as structural types, static typing essentially is just mandated static analysis.


Static analysis is a great tool, but the guarantees of a typing system can be much stronger (and as a result, can inform the static analyzer much better).


A typing system != static typing!

Static typing means that the types cannot change. Dynamic languages can still declare types. It's just that they are allowed to morph. A good analyzer can track those changes and know when types are equivalent (TypeScript does this).

Furthermore, typing is one mechanism to express contracts within your code- such as 'this method takes argument X which is of the form Y and will return something which looks like Z'.

More complex analysis requires more and more annotations to specify the contracts of the methods. An example is a C function which takes a char* and an int. You want to annotate that the int indicates the length of the char* buffer, and have a static analyzer enforce that.

I am all about code contracts and enforcement of those contracts. But this is not necessarily tied to a static language.


There's just a different set of terms used here.

In a static typing context, "types" are a lexical property of subexpressions. In this context, dynamically typed languages are "untyped" or "unityped" (one type for all expressions).

In a dynamic typing context, "types" are the runtime tags of data, with rules about dispatch and about which tags are allowed with which operations.

I think for the benefit of communication, we should refer to the static, compile-time annotations on subexpressions as "types", and to the entities dynamic languages call "types" as "tags".

Your example:

> More complex analysis requires more and more annotations to specify the contracts of the methods. An example is a C function which takes a char* and an int. You want to annotate that the int indicates the length of the char* buffer, and have a static analyzer enforce that

This is of course certainly compatible with types. Something like:

    func : (n : Int) -> Ptr (Array n Char) -> ...
This is the type of the contract you specified.

Static types and static analysis are basically variants of the same idea. However, static types are part of the ordinary workflow, whereas static analysis is often treated as an after-the-fact feature. That will not work as well, since much of the power of types comes from techniques for structuring our programs so that types cover as much as possible.


One of the points the author makes is that static typing improves code maintainability and refactoring. I've used both static and dynamic languages. When I was younger I preferred dynamically typed languages because they were easier to start with and code in, but now that I've grown more experienced and dealt with highly complex projects, I prefer statically typed ones. Being able to debug, maintain, and refactor code is much easier and less risky in statically typed languages.


I think dynamically typed languages have become so popular because in simple examples they look great to novices. You don't need to know much syntax about defining variable types, you don't have to type as much, you don't have to figure out why adding an integer to the end of your string won't work, etc. Later, when they try to build something large enough to be useful they realize just how much they traded for that little bit of convenience. For example in PHP:

  $id = 1;

  $dbh = new PDO($connectionString, $user, $password);

  $sql  = ' SELECT column';
  $sql .= ' FROM table';
  $sql .= ' WHERE id = :id';

  $stmt = $dbh->prepare($sql);
  $stmt->bindParam(':id', $id);

  if($stmt->execute()) {
    // Do stuff
  }

  // This echoes ID: 1
  echo "ID: $id";

  if($id === 1) {
    echo 'This should be true, we set $id = 1 at the top.';
  } else {
    echo 'Instead we get here because PDO casually changed our variable into a string and "1" !== 1.';
    die('in a fire dynamic typing.');
  }


> you don't have to figure out why adding an integer to the end of your string won't work

You're confusing dynamic typing with weak typing. A language can be dynamically typed (types are checked at runtime) without doing things like automatically converting integers to strings, integers and strings to booleans, etc. etc.

Conversely, a language can be statically typed and still do some of those conversions.
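
For example, a quick Python sketch of that distinction (Python is dynamically but strongly typed, so nothing is silently coerced):

  >>> "id: " + 1
  Traceback (most recent call last):
    ...
  TypeError: cannot concatenate 'str' and 'int' objects

PHP would happily coerce here; that is weak typing, not dynamic typing, and it's what the parent's example actually runs into.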


I very much like Go's approach to static typing. Go allowed me to retain the flexibility and ease of development that Ruby and friends provide, even though the output is fully statically typed. It's great!


HM type systems have been providing this benefit (to a greater effect than Go) for ~40 years...


That is Go's PR at work, presenting Go features as novelties.

My personal favorite is "how fast it compiles", given that I was doing the same thing with Modula-2 and Turbo Pascal in the mid-'80s.


I wouldn't call something "fully statically typed" when most abstractions can only be implemented by casting from and to the top type.

There is a reason why no one wants to use Java 1.4 anymore.

Go's "static typing" is next to useless.


For the dev building the library, static typing is a very convenient thing to have. All the ins and outs have very clear rules, enforced at compile time.

However, for the user of the library, it simply gets in the way. This may be avoided though if the library has been designed well enough that strange internal types are not exposed to the user unless absolutely necessary.


Actually, I find the opposite to be the case--static typing helps me use unknown libraries. The types immediately let me know exactly what a function expects and what I should expect from a function. They also prevent me from using things incorrectly most of the time.

Essentially, the types are compiler-enforced documentation. They also make discovering functions easier--a type is a succinct summary of what a function does, so I can just browse by types to find what I want quickly. Augment this with great tooling like Hoogle (a type search engine) and you've got a truly awesome development system.
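
For instance, with Hoogle you can search by type alone, and the signature is usually documentation enough (a small sketch):

    -- Searching for the type (a -> b) -> [a] -> [b] turns up map;
    -- the signature alone tells you what the function does.
    map :: (a -> b) -> [a] -> [b]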


I am sure you don't care but Scala runs on the Java virtual machine.




