
I agree with his view, but I could never find a statically typed language that was useful in the long run.

Haskell was amazing as long as you don't need to do much IO. When I started spending more time fighting the constraints than being productive, I stopped.

OCaml was also not bad. But its strict restrictions on which operators can be used with which types are annoying when you do lots of calculations. Having to explicitly cast numbers all the time was not inviting either.

Rust sounds great, but after a day of playing with it I saw it's really not ready yet. A couple of places needed wrapping in an "unsafe" block to start working as expected. Some keywords and ideas were also going to change soon. I couldn't get used to the restrictions coming from not having exceptions or early function returns. It really adds some maintenance code.

Scala was probably the best so far, but the heavy vm is annoying.

So... still waiting for that perfect language.

>I couldn't get used to the restrictions coming from not having exceptions or early function returns

I'm probably getting confused about what you mean here, but I thought you could return from anywhere in a function in Rust using `return`. I think Option types are one way to propagate errors back up the stack (with the compiler forcing you to handle them), but I don't know how practical that is in reality.

I agree that Rust isn't ready for production use though; even in the few weeks I've been trying it they have decided to change parts of the language. Also, the standard library is lacking in documentation and is pretty inconsistent.
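To illustrate both points (a sketch in post-1.0 Rust syntax, not the 0.5-era syntax under discussion; `first_digit` is a made-up example): `return` does exit the whole enclosing function, and an `Option` return type makes the caller handle the missing case.

```rust
// `return` here exits first_digit entirely, not just the loop body,
// and the Option<u32> return type forces callers to handle None.
fn first_digit(s: &str) -> Option<u32> {
    for c in s.chars() {
        if let Some(d) = c.to_digit(10) {
            return Some(d); // early return from the function
        }
    }
    None
}

fn main() {
    assert_eq!(first_digit("ab3c"), Some(3));
    assert_eq!(first_digit("abc"), None);
    println!("ok");
}
```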

What I mean is that if you don't have exceptions and `return` returns only from a given block, it takes some serious exercise to break out of a "fn ... { ... each { ... match { ... } } }".

Return will only return from one of them, so to signal an error from the match block, you need to:

- save the result in a mutable variable in a function

- do something that results in "false" returned from each block

- return the result from the function (probably casting mut->immut on the way)

What I'd really like is either a stack unwinding exception, or a `return` keyword that's different from the "last expression in a block" syntax. Last expression could give a result for a closure, match, if, etc. while `return` could always break out of a function. For example (pseudo-code, proper syntax doesn't matter here):

    fn parse_into_list(s: &str) -> Option<Stuff> {
        let res = do s.map |c| {
            match c {
                bad_character => return None,
                good_character => do_stuff_with(c)
            }
        };
        Some(res)
    }

Instead of:

    fn parse_into_list(s: &str) -> Option<Stuff> {
        let mut result: ...;
        let mut correct = true;
        do s.each |c| {
            match c {
                bad_character => { correct = false; false }
                good_character => { do_stuff_with_result; true }
            }
        };
        if correct {
            Some(result)
        } else {
            None
        }
    }

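For comparison, a sketch of how this early-abort shape came out in later (post-1.0) Rust: collecting an iterator of `Option`s into an `Option` aborts on the first `None`, with no mutable flag (function and input names here are illustrative, not the actual code under discussion).

```rust
// Parsing digits: to_digit yields None for a "bad character", and
// collecting Option items into Option<Vec<_>> short-circuits on the
// first None, so no mutable `correct` flag is needed.
fn parse_into_list(s: &str) -> Option<Vec<u32>> {
    s.chars()
        .map(|c| c.to_digit(10))
        .collect() // first None aborts and yields None overall
}

fn main() {
    assert_eq!(parse_into_list("123"), Some(vec![1, 2, 3]));
    assert_eq!(parse_into_list("1x3"), None);
    println!("ok");
}
```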
Then you should probably invest in Scala. The (J)VM is actually a blessing; Scala is standing on the shoulders of giants. I'm not talking about Oracle, but other languages that depend on the JVM, such as Clojure, etc., that will (hopefully) push the VM forward.

I don't like Scala being based on JVM/.NET for the following reasons (I know many people don't agree, this is all subjective):

- inherited null (I love the Option<> and Either<> patterns and exceptions; nullable stuff is a problem)

- relying on foreign ecosystem (you end up with half functional, half "we have inherited all this mutable stuff, so let's use it" ugly mixture)

- startup time (if you write things that are script-ish in nature, this is really annoying)

The first two are required for Java compatibility; the last can probably be mitigated by compiling to native using LLVM (http://greedy.github.com/scala-llvm/) (I haven't used it).

> - startup time (if you write things that are script-ish in nature, this is really annoying)

Just compile your code to native using a native code compiler for JVM bytecodes. There are quite a few to choose from.

So scalac and then another compiler after each modification? That speeds up startup, but slows down development even more.

Of course you should only compile to native code when making the package to distribute the scripts.

Try Go. No vm. Compiles super fast (one of the most annoying parts of static typing when used for big programs). Obvious and easy to use closures. Implicitly fulfilled interfaces for painless compile time duck typing, and type inference for less typing.

Of course it has a "VM" (aka runtime system), providing services like thread scheduling, garbage collection, IO management and so forth. It might just not be an easily accessible standalone VM like the JVM.

> Compiles super fast

Because it doesn't do anything. Seriously, it compiles fast because the type system is so trivial. No type inference (except the really restricted `:=` thing). And I can't even have polymorphic functions -- which we've known how to type check for around 40 years now.

This is a serious drawback if you are interested in type system support for code reuse.
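To make the complaint concrete, this is the kind of polymorphic (generic) function being referred to, sketched here in Rust for illustration, since Go at the time had no equivalent (Go later gained generics in 1.18):

```rust
// One generic function, type-checked once, reusable at many types --
// the parametric polymorphism the comment says Go lacks.
fn first_or<T: Copy>(xs: &[T], fallback: T) -> T {
    if xs.is_empty() { fallback } else { xs[0] }
}

fn main() {
    assert_eq!(first_or(&[1, 2, 3], 0), 1);     // works for i32
    assert_eq!(first_or::<i32>(&[], 9), 9);     // fallback on empty slice
    assert_eq!(first_or(&["a", "b"], "z"), "a"); // and for &str
    println!("ok");
}
```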

> Of course it has a "VM" (aka runtime system)

VM != runtime system. VM == virtual machine, bytecode.

Virtual machines most certainly do not require bytecode.

GHC for example targets the "Spineless, tagless G machine" by compiling to native code that calls runtime services defined by the (abstract) STG machine ISA (the "primops").

Similarly, Go targets the Go runtime, by compiling to native code that calls "machine" services like collecting memory or scheduling threads. It might be a thin wrapper over the underlying processor architecture, but the compiler is clearly targeting the Go runtime as its target "machine".

The Go runtime does nothing of what a machine does. Compiled Go code consists of pure, native machine code instructions, with no special opcodes. It's just that some additional functionality is statically linked in.

You should check out Julia[0] and Dylan[1]. Both are dynamic with support for gradual typing and parametric polymorphism (multiple dispatch and generic functions). This, in my opinion, is the best of both worlds.

[0] http://julialang.org/ [1] http://opendylan.org/

Julia looks awesome from the description. I'll need some hands-on time with it :) I love the Kinds approach.

>Rust sounds great, but after a day of playing with it I saw it's really not ready yet.

Well, they never said it was ready. Actually they say the exact opposite: that it's still a moving target. It's 0.5 for a reason. Hopefully around this time next year there will be a 1.0.

I didn't want to imply otherwise, but languages have different paths to "ready". Some are stuck at 0.999 for ages, some are actually usable after 2.4 (Python), some are quite stable without any specific version number (ooc), etc. It's worth checking out your options early.

> So... still waiting for that perfect language.

Have you looked at the new, post-Scala, generation of languages, such as Kotlin? http://blog.jetbrains.com/kotlin/

No, but it's on the ToCheck list, right after Go.

Haskell is great at doing much IO. How long did you try to use it?

Not long, but enough to rage-quit. This is very subjective, but if after 1-2 months I'm still finding myself in a situation of "oh god, this actually needs to do something useful; now I have to pull IO through half of this module", I'm starting to blame the language, not myself. I'd love to use Haskell sans the academically pure approach to "the world". I'm not doing value computations - my applications spend most of the time pushing/pulling bytes outside of the system, so that part should be trivial. Maybe it's just not the right language for the job, maybe it's my approach. Either way, all the other great things about the language couldn't divert my attention from fighting to get stuff done.

Haskell can definitely take more than 1-2 months to pass the learning curve, especially without a good mentor.

You generally shouldn't need to "pull IO through half this module", since you can just lift the large pure part with "fmap" to operate on the inner IO. But using these combinators effectively does take some learning.

The approach to IO that Haskell uses is not "academic", it is extremely practical: By typing effects you get immense practical, not academic, benefits.

Pulling and pushing bytes outside the system is trivial.

> Pulling and pushing bytes outside the system is trivial.

Though doing so with any performance might not be.

Why? Haskell has excellent performance for IO.

Depends on what level of abstraction you want to be at. If you use C-style IO, that's fine, but low level. If you use lazy IO, which unfortunately made it into the Prelude, forget your performance. The Right Way is to use enumerators / left-fold-based IO, or the pipes or conduit packages. But that's not trivial, yet. (Though not insanely hard, either.)

It's not any worse than IO in other languages, which is the original topic.

Lazy IO, which is what you get by default, if you just learn the language and use the Prelude, is worse than what you get in C.

So Haskell makes IO harder by making you learn which modules use lazy I/O and avoid them? I can agree with that. But I don't think that's what people generally try to say when they claim IO in Haskell is hard or bad. People think the IO/effect segregation in Haskell makes IO bad or hard, when it's in fact just the opposite.

Ever tried D? (don't believe the hype)

Briefly. It was hard to find much information, and from what little I got to experience, I'd rather take ooc or Cython in cases where I'd otherwise want to try D. But that was before D2 - maybe it's worth giving it another go now...
