The Haskell / Snap ecosystem is as productive (or more) than Ruby/Rails. (dbpatterson.com)
142 points by T_S_ 1550 days ago | 98 comments



Having a compiler that will tell you all the places that you need to change things is an amazing productivity booster.

Agree with this 100%. I think dynamically typed languages are a transitional technology we'll mostly leave behind as the kinks get worked out of modern type systems.


It would have been interesting (and perhaps a little more convincing) to see some examples of the Haskell type system boosting productivity in such a dramatic way.

I can count on one hand the number of type issues that have caused trouble for me in Ruby over more than five years of building complex systems with it. I haven't used Haskell, though, so I'd be interested to see some examples of this. If his audience is Ruby users, it isn't very informative to just say Haskell is better in a general way. The main point of comparison in the article's first point seems to be with unit tests, but no concrete example is given of a failing test from a real program where types would have helped. A strong type system adds friction and verbosity, so it would need to come with a lot of advantages for me to prefer it to Ruby, which won't allow something like 2 + "2" but does try to get out of your way as much as possible.

It's quite hard to compare type systems as they vary so much, but at least here he is comparing just two, Ruby and Haskell, so the comparison could be made with straightforward side-by-side examples. I'd be interested to see an example of something like a telephone number (from untrusted user input) where the Haskell type system is superior to having some checks on formatting, and an explanation, for those less familiar with Haskell, of what advantages the type system gives.


The thing is, it's not that straightforward. It's not about avoiding type errors that would have cropped up in Ruby, but about getting the type system to encode as much of your program's semantics as possible. For example, in Ruby, you use strings and symbols for a lot of disparate things. In Haskell, you'd introduce a type for each purpose to encode your intent in a way the compiler understands. In Haskell, you're actually going out of your way to create more potential type errors, because that's more stuff the compiler can check for you.

Concrete example off the top of my head: in Ruby, we do `foo.instance_variable_get(:@bar)`. If we accidentally write `foo.instance_variable_get(:bar)`, that's a hard runtime error (a NameError), but it isn't a type error. Haskellers would generally express a constraint like that with the type system, so the compiler would let them know when they made such a mistake.
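A runnable sketch of that failure, assuming a hypothetical Foo class with a @bar instance variable:

```ruby
class Foo
  def initialize
    @bar = 42
  end
end

foo = Foo.new
foo.instance_variable_get(:@bar)  # => 42

begin
  foo.instance_variable_get(:bar)  # typo: missing the @
rescue NameError
  puts "NameError, raised only when this line actually ran"
end
```

Nothing flags the typo until the line executes; a compiler with types for variable names would reject it before the program ever ran.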

(Also, don't forget that every unintended nil is a type error! If you've been doing heavy Ruby work for years and gotten fewer than six NoMethodErrors, I will hang my head in shame.)
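For concreteness, a minimal sketch of an unintended nil (the hash lookup is invented for illustration):

```ruby
users = { 1 => "alice" }

name = users[2]    # key missing, so name is silently nil
begin
  name.upcase      # blows up only here, at runtime
rescue NoMethodError => e
  puts e.class     # NoMethodError
end
```

The nil is created in one place and detonates in another, which is exactly what a Maybe-style type is designed to prevent.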


For example, in Ruby, you use strings and symbols for a lot of disparate things. In Haskell, you'd introduce a type for each purpose to encode your intent in a way the compiler understands

But then in Ruby, if you want to, you can encapsulate behaviour and intent in objects instead of types. As concepts become more complex, you may introduce an object which encapsulates the data and provides checked interfaces for it. Taking the example of a telephone number, you might define a PhoneNumber class in Ruby which encodes your intent and enforces an interface in much the same way as a type in Haskell might (?), but if your use of telephone numbers is simply as an unformatted string, you don't have to introduce that complication initially. A static type system is not the only way to encode that sort of information, is it?
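A sketch of what that might look like (the class name and the deliberately naive format check are mine, not from any library):

```ruby
# Encapsulates a phone number and validates it on construction.
# The "type check" happens at runtime, when the object is built,
# rather than at compile time as a Haskell type would.
class PhoneNumber
  attr_reader :raw

  def initialize(raw)
    unless raw =~ /\A\+?\d{6,15}\z/
      raise ArgumentError, "not a phone number: #{raw.inspect}"
    end
    @raw = raw
  end

  def to_s
    raw
  end
end

PhoneNumber.new("+0123456789")  # fine
PhoneNumber.new("hello") rescue puts "rejected, but only at runtime"
```

This encodes the same intent as a Haskell type, but the check only fires on the code paths your tests (or users) actually exercise.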

I do find the example above somewhat puzzling, as idiomatic Ruby would be more like foo.bar if you want the value of bar (which should have an accessor defined if you are allowed to read it). What I was hoping for was an example of the two languages side by side, demonstrating some small mistake which leads to errors or unintended consequences because of the lack of static typing in Ruby.

If you're counting NoMethodErrors as type errors then I must hang my head in shame :) Usually those are caught before going into production, though, by either unit tests or normal testing. You catch typos at compile time in a compiled language (with or without strong typing), but you have to catch them with testing at runtime in an interpreted one. But is that really related to static typing, or to compiled versus interpreted?

I'm intrigued by the enforced types of Haskell though, so this article is an interesting starting point in the comparison, and thanks for trying to explain it.


But then in Ruby if you want to you can encapsulate behaviour and your intent in objects instead of types - as concepts become more complex, you may introduce an object which encapsulates the data and provides checked interfaces for it.

I think the idea is that a strong static type system will allow you to describe the constraints idiomatically and concisely, whereas describing them using the usual OOP tools gets overkill very quickly.


You can start with simple types in Haskell too, FWIW. To take your phone number example, let's say I'm making a very simple program that dials a number. In Ruby, I'm talking about something like:

    aNumber = "+0123456789"

    def dial(number)
      # does the dialling
      return someConnection
    end

    dial(aNumber)
(It's been a long time since I've done any Ruby, so forgive any glaring syntax faults.) That method expects a string, of which aNumber is an example, but there's no annotation to explain that to the runtime (Ruby doesn't have a compiler in the sense we mean here).

In Haskell, something similar might look like this (I'm going to be verbose and specify types here, but the compiler can actually infer a few of them):

    aNumber :: String
    aNumber = "+0123456789"

    dial :: String -> PhoneConnection
    dial number = -- a function which does some dialing

    -- somewhere else, the above is called as
    someConnection = dial aNumber
The "::" lines are type signatures. The first says that the variable aNumber is a String. The second says that the function "dial" takes a string argument, and returns an instance of the PhoneConnection type (in real Haskell, there'd be some IO monad stuff wrapping it, but we can happily ignore that for now).

Type signatures aren't generally used when defining variables, and Haskell can generally infer them for simple functions - but they're very useful when writing code: besides informing the compiler of our intent, they tell other developers what the function does. Anyone coming along in future can easily see that they need to pass the "dial" function a String, for instance, and will get back a PhoneConnection. In Ruby, other developers either have to hope you've documented it, or inspect your code to figure out what it expects.

FWIW, in Java, the function definition/type signature might look like:

    public PhoneConnection dial(String number) {}
Now, let's say I want to make things a little more obvious to people reading my code. I can make the following nips/tucks:

    type PhoneNumber = String

    aNumber :: PhoneNumber
    aNumber = "+0123456789"

    dial :: PhoneNumber -> PhoneConnection
    dial number = -- a function which does some dialing
Here, we've used Haskell's type aliasing. All this does is say that the type PhoneNumber is the same as String. It's not hugely useful at this point, but it makes it a little clearer what the dial function requires. Developers inspecting it will see that PhoneNumber is really a String, and they can also see what that String is supposed to store (the compiler, of course, treats the alias as fully interchangeable with String, so it still can't enforce the intent).

There's no real analogy for Java here - it'd be like saying something like:

    public class PhoneNumber extends String {}
(Note: the Java code doesn't define any new functions for PhoneNumber, the Haskell code isn't really object extension, and the Java String class is final so you can't do this anyway. It's not really like doing that at all, but hopefully it illustrates the point.)

Now, let's say I want to change this String into an object, to make it more robust.

In Ruby:

    class PhoneNumber
      def initialize(countryCode, areaCode, number)
        @countryCode = countryCode
        @areaCode = areaCode
        @number = number
      end
    end
The code inside the dialling function will also change, but the actual function definition _doesn't_ - and any code calling that "dial" function won't be aware that it needs to change. It's still:

    def dial(number)
      # ...
    end

    dial(aNumber) # which might still be our string, so at runtime, we're going to crash!
In Haskell, you might do something like:

    data PhoneNumber = PhoneNumber { countryCode :: Int, areaCode :: Int, number :: Int }

    aNumber = PhoneNumber 01 23 456789

    dial :: PhoneNumber -> PhoneConnection
    dial number = -- do stuff
Now, because you have that type definition, anything that still uses String numbers will cause compilation to fail. Your code will not produce an executable that you can run. Which is a Good Thing(tm)! Because it means that you've prevented a whole class of runtime error/crash.

So, yeah. Static typing is cool because (with a compiler) it helps you catch and prevent runtime errors. Haskell's is particularly cool because it's very terse, and very flexible (I haven't really got into it here, fwiw - this was a very basic example :)) - which prevents some of the extreme annoyances you face dealing with Java's verbose (and, thanks to generic type erasure, slightly broken) type system.

PS, as a little extra: Nil/NullPointerException-type stuff is difficult to cause in Haskell. When you say a function returns a type - PhoneConnection - it must return that type. If there's a chance of it failing, you return a "Maybe" type instead, whose Nothing case plays the role of nil/null. That means, of course, updating your type signature:

    dial :: PhoneNumber -> Maybe PhoneConnection
Which, in turn, means that before you can extract any data out of the PhoneConnection, you must explicitly check that there's actually something there to work with - which is very, very cool :).


This is IMHO a far more useful and interesting comparison than the original article; thanks for taking the time to write it.

I think the concrete examples really help show up the differences between the languages here. I might quibble with your initialisation in Ruby (I'd expect it to be initialised with a string, hiding the internal representation), but it is clear why the Haskell type system might help you avoid issues with changing an interface and forgetting to change the things which call it. Though I can't say I've run into that a lot in Ruby, I can see where it might be useful, particularly with large groups collaborating or a large program.

I would argue that the compilation step is what is responsible for finding errors at compile time rather than runtime, but then the type system is perhaps required to enforce that.


I would argue that the compilation step is what is responsible for finding errors at compile time rather than runtime, but then the type system is perhaps required to enforce that.

It is. A Ruby (or Python, or JavaScript, ...) source code inspector cannot, in general, infer concrete types for variables. You're allowed to say (JS example)

    var x = "fred";
    x = 3;
and now, if you tried to write a JavaScript inspector that infers the types so it could check that you used x correctly, it would not be able to.

Of course, there are cases where you can infer useful things about your code, but for most real-world JS code, it would produce a lot of false positives or false negatives, and wouldn't be very useful. (Think about monkeypatching...)


While this is a perfectly valid example, I think that it doesn't really show the crux of the issue, since the types involved (int or string) are simple and the temporal aspect of the type changing really doesn't come very often.

The bit where dynamic languages get really complicated is that they let you use types that hadn't been taken into consideration when the code was written. For example, in Python you can write code using duck typing and it will Just Work(TM), but to do something equivalent in Haskell you would need to tell GHC to use one of those weird language extensions just to have the program typecheck. The monkeypatching you mention goes more along this line, I think.


Sorry if my Ruby chops are weak - I've not used it for probably 5 or so years now :) (I do a lot of Python work though, so I'm familiar with dynamic type systems).

I kind of agree with your second sentiment - but I did want to point out that it's entirely possible to write a dynamically typed compiled language too.

It boils down to type systems: weak/strong, and dynamic/static. C, for instance, is weak/static: you define types, but you can basically pass around whatever you want to:

    #include <stdio.h>

    int add(int a, char b[]) {
        /* pointer arithmetic on b, then the pointer squeezed into an int */
        return a + b;
    }

    int main(int argc, char* argv[]) {
        int result = add(1, 2);   /* passing an int where a char[] is expected */
        printf("%d", result);
        return 0;
    }
The above will compile - and, at least with an older or permissive compiler, run (modern compilers have tightened up and will warn about, or even reject, the bogus conversions by default). The function header for "add" says it takes an integer and a character array - essentially a string, in C (for people who actually use C: I've avoided explicit pointers because they're confusing, and my C is even rustier than my Ruby).

Inside that method though, I do something entirely braindead: I add the integer and character array as if that were an operation that makes sense. Then I call "add" with two integers anyway, and let it do what it wants. It actually does return 3: rather than throwing a runtime error to say "hey, this is stupid, I should have a String here and you can't add a String to an integer!", it just trundles merrily along. A strong type system wouldn't let you compile or run this: it'd tell you off for trying to call a function with invalid arguments, then (hopefully) tell you off for using "+" in a nonsensical way.

FWIW, the following also works:

    #include <stdio.h>

    int add(int a, char b[]) {
        return a + b;
    }

    int main(int argc, char* argv[]) {
        int result;
        char aString[] = "hello";
        result = add(1, aString);
        printf("%d", result);
        return 0;
    }
The number returned and printed here is derived from the location in memory of the characters you've written. I think. Either way, it makes no sense (for most people, anyway - C hackers may have some use for such an operation).

At the other end of the spectrum, Python and Ruby are strong/dynamic: you can pass whatever types around you want, but the interpreter will raise an error if you try to do something the type system doesn't allow. You can't do 1 + "test": it makes no sense. Rather than returning something nonsensical, the interpreter raises a TypeError - though only when execution actually reaches that expression.
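In Ruby terms, a tiny sketch of that strong/dynamic behaviour: the program starts fine and only fails when it reaches the nonsensical expression:

```ruby
puts "this line runs happily"

begin
  1 + "test"                 # strong typing: refused outright...
rescue TypeError
  puts "caught a TypeError"  # ...but dynamic: only discovered at runtime
end
```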

All that said: I don't know if there are any strong/static languages without a compiler. I can't seem to think of an application of such a language: the best way to exploit a strong/static type system is to have it tell you, up-front, that you've made a mistake (and, of course, have your compiler optimize the pants off your code for runtime).

I also don't know of many languages which are compiled but fully dynamically typed (compiled Lisps probably come closest; Scala came to mind, but it's really statically typed). Considering the benefits you can gain from type analysis during compilation, again, the concept seems a bad fit. Although (same as in the previous example), you could happily write a language that is!

So, yes, compiling the code is what enables you to perform those checks - but a strong type system is a part of compilation in its own right :).


Great explanation. I don't know enough Haskell to form my own opinion of some of your points, but I have a feeling that will soon change. Thanks for taking the time to write this.


I'm sure there are some flaws present in what I've explained - I've simplified out a lot of pain points for learners (like Haskell's IO system for side-effecting functions), and I'm still only a novice myself, so can't comment on what's considered idiomatic. I think the core idea behind it should be correct though :)!


Just a reminder of duck typing:

Haskell style => function applied to parameter, e.g. f(x)

Ruby duck-type style => parameter applies function to itself, e.g. x.f()

An example:

  module Dialable
    def dial
      # do stuff with self
      puts "Dialling..."
    end
  end

  # I wouldn't do this, since it would make *every* string diallable:
  #
  #   class String
  #     include Dialable
  #   end
  #
  # I'd rather do this:
  class PhoneNumber < String
    include Dialable
  end

  my_phone_number = PhoneNumber.new "01 23 45678"
  my_phone_number.dial
  # Dialling...

  # now you get an error on strings that aren't phone numbers
  "just a random string".dial
  # NoMethodError: undefined method `dial' for "just a random string":String

  my_phone_number == "01 23 45678"
  # => true

  my_phone_number == "just a random string"
  # => false

You get all the regex and string functions for free too.


Ah, I see. I don't really know what's idiomatic in Ruby anymore; I haven't used it in anger for nigh on 5 years now :).


I agree very much.

As far as I'm concerned Java's biggest failure, orders of magnitude worse than all others, is to make java.lang.String a final class.


I don't know:

1. having nullable references by default strikes me as a bigger issue

2. I don't see any reason I'd want to subclass String, actually (though I could see wanting to create an alternate implementation e.g. ropes-based). So String being a final class makes perfect sense as far as I'm concerned (unless it were an interface or some sort of "proxy" class as is often done in Cocoa). On the other hand, I'd give a phalange to easily create an unrelated type (typesystem-wise) with the same implementation. What `newtype` provides in Haskell.


I must agree with the nullable one. Let's say the biggest failure that I haven't heard anyone else talk about.

As per your #2, that's what I was hinting at except that the implementation effort of final vs. non-final is negligible, thus making it harder to excuse as far as I'm concerned.


How would a nominally different string type be better than wrapping the string?

I mean, I understand why having a Name, ZipCode or UUID stringish class helps ensure program correctness; I just don't understand (out of ignorance) how it would improve your code vs a wrapper.


> how would a nominally different string type be better than wrapping the string?

* It is significantly less verbose, therefore simpler and more likely to be used at all. And less error-prone

* It can provide string APIs working on itself (either by default or through the aliasing declaration) precluding the need to manually re-implement things like comparisons or printing


It's amusing that in a thread about the advantages of Haskell, immutability is being identified as the largest failure a language has made.

I don't think immutable Strings are a bad idea but I do think it's unfortunate that Java:

- Made Strings immutable, used them everywhere, and only later worked out that perhaps CharSequence would have been better in a lot of places.

- Didn't provide a sensible way to handle Object extensions


A class being final means that you can't derive from it in Java, not that instances of the class are immutable.


To ensure immutability of a type in Java, you typically need to prevent it from being extended.


Biggest failure? Try `Integer a = 100, b = 100; a == b` (true, thanks to the boxed small-integer cache) and then the same with 200 (false).


Why a failure?

String is final in most OO languages.


I would consider having any final classes in a standard library to be a failure. One of the pillars of OOP is extensibility.


Then you should learn about the fragile base class problem,

http://www.cas.mcmaster.ca/~emil/Publications_files/Mikhajlo...

API design is a very complex issue. Any change in a base class can have unintended consequences.

Especially in components sold as libraries to development companies, where you as a customer don't have access to the source code.

You're right, one of the OOP pillars is extensibility, but inheritance is just one way of doing it.


Replied to the wrong comment?


I worked almost exclusively in Ruby and Python over the last ten years and coming back to static languages after that long was a real eye-opener. A good type system doesn't just help you catch a certain class of bugs. It completely changes the way you write code. Functions become self-documenting, aggressive refactoring becomes routine, the underlying architecture of your code emerges much more clearly. And you usually get much faster code for free. It's like having a genius savant pair partner.

In my experience a good type system doesn't add much friction and the extra documentation you get from specifying types more than pays for itself in the long run.


Pretty much the same experience here. I'm currently working on a pretty large and fairly old Python code base. And while I'd still consider Python my favorite language, I no longer really feel comfortable recommending it (or any other equally dynamic language) for large, long-lived projects worked on by several developers.

Sure, all problems can be solved in theory with things like meticulous documentation, rigorous coding standards that everybody follows, and lots of detailed tests. But in practice that sort of informal constraint will break down over the years as developers of varying skill levels come and go. Better to formally enforce as much of that as possible with a decent type system.


I used to like dynamic languages.

Nowadays I will take a static language with automatic type inference over a dynamic language anytime, preferably one with direct support for FP.

The IDE support, code navigation and refactoring, and runtime performance just beat dynamic languages all the time.

For the few use cases where dynamic types really make a difference, that can be supported by some kind of variant type or reflection.


I'm at this point right now - what are the best static languages with type inference for the dynamic language crowd? Haskell? Scala? Go? Rust? Something I haven't heard of? All of the above?


Haskell isn't quite there in terms of IDE support (there is "Leksah") but I find myself writing most Haskelly code in emacs/vim with GHCI open in a separate terminal.


I just make use of EclipseFP, it is a very comfortable IDE for Haskell.

http://eclipsefp.github.com/


I can't count on one hand the number of commits with >500 changes that didn't introduce a single bug or regression, thanks to the compiler catching a huge number of little typos and mistakes that would have slid by in a dynamic language.

It's like how most vehicle accidents involving impact speeds of over 300 MPH involve planes, not cars or boats. It's not that cars and boats are safer...


That is the biggest problem with dynamic languages: they don't scale.

In the enterprise projects where I work, usually with 200+ developers scattered around the globe, it is unthinkable to use dynamic languages.

We already have issues with static languages in the CI system. I can't imagine what doing such a project in a dynamic language would look like.


Fair enough - I accept dynamic languages are not always suited to larger projects, particularly with a large number of developers involved all working on the same code. I suppose it is possible to impose structure in a dynamic language by having strict rules about interfaces and documentation, but at some point (which you have obviously experienced) it becomes much more attractive to have the compiler do it for you.

For smaller projects however, with smaller teams, dynamic languages do have less overhead, which is perhaps why they are so popular for web apps.


As other commenters have said, in statically typed languages you try to design your program so that more errors are caught at compile time. As an example, watch Yaron Minsky's "Effective ML" talk. He shows how he takes a data structure and modifies it to make illegal states (such as having a disconnection time while being connected) type errors. With a language like Ruby, you couldn't do something like that, so you'd classify the error of having a disconnection time while being connected as a semantic error or a human error. With the proper tools, you can make it a type error.
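In Ruby, by contrast, the best you can usually do is check the invariant at runtime. A sketch, with an invented Connection class along the lines of Minsky's example:

```ruby
# A connection must not carry a disconnect time while still connected.
# Ruby can only enforce that with a runtime check; in ML/Haskell you'd
# restructure the data so the illegal combination can't be expressed.
class Connection
  attr_reader :state, :disconnected_at

  def initialize(state, disconnected_at: nil)
    if state == :connected && disconnected_at
      raise ArgumentError, "a live connection can't have a disconnect time"
    end
    @state = state
    @disconnected_at = disconnected_at
  end
end

Connection.new(:connected)                                # fine
Connection.new(:disconnected, disconnected_at: Time.now)  # fine
```

The check only fires on code paths that actually construct a bad Connection, whereas a type-level encoding would refuse every such program up front.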


> Agree with this 100%. I think dynamically typed languages are a transitional technology

54-year-old transitional technologies?

Look, the split between dynamic typing and static typing is as old as computing itself (older, actually; see untyped vs. typed lambda calculi). It looks to me like you're asserting properties your prover can't cash.


Dynamic languages are easy to implement. Type systems that are rigorous enough to be useful but flexible enough to be expressive are much, much harder to design. I think we're still not quite there yet but we've made a lot of progress in the last decade. Even after ten years of Ruby experience I find myself writing code in Scala that has fewer bugs and is vastly easier to understand and maintain.

Maybe my background in chemistry colors my view but I don't see 54 years as really such a long time for any intellectual discipline. We're still in the early days of software engineering, IMO.


>Agree with this 100%. I think dynamically typed languages are a transitional technology we'll mostly leave behind as the kinks get worked out of modern type systems.

While my favorite language right now is Haskell, due to its sophisticated type system and its ambitious design goal of enabling engineers to corral and isolate error-causing code into monads, I have to disagree.

Dynamic languages can be looked at as the next step in the evolution that took us from non-memory-managed to memory-managed: the next transition is from statically/strongly typed to dynamically typed. In both cases you go from manual management of some feature of the language to having the VM or compiler do it for you, and that increasing abstraction and automation is the story of language development (and of technology in general).

Non-memory-managed languages like C, C++, ObjC are still around and are used by highly skilled developers to squeeze the utmost performance out of the system, Chromium/Chrome/V8 being a good everyday example.

But where that kind of optimization is not strictly needed or where the automatic memory management of, say the Hotspot JVM, is sufficient, memory-managed languages will flourish. Further, memory-managed languages benefit from the continuous improvement in vm technology, and over time continue to approach non-managed in many situations.

I expect something similar to happen with type systems. Where a particular project does not need the strict control provided by a type system, dynamically typed languages will flourish (as they are in the startup web space).

Perhaps one day, VMs and dynamic type inference will be so good that neither manually memory-managed languages nor strongly typed ones will be necessary, but given Haskell's design goals and the type of difficult problems it's trying to solve (software assurance, security, etc.) I don't expect that will be anytime soon. They'll all continue to coexist for the foreseeable future.

Having said that, Haskell's type system is a pleasure to use, and imho actually increases productivity by both requiring and helping you think more clearly about the data flowing through your code.


Wow, interesting perspective. Let me attempt to summarize what you said:

When compilers/runtimes become smart enough, you won't need a static type system, because the runtime one will catch all your errors.

Here's why I think it's wrong:

Catching errors is not something that I want done at runtime.


Not all your errors; dynamically typed languages still require testing.

And I agree: I too prefer type systems like Haskell's that offer better compile-time assurance than testing does. But my point is that there's a large contingent of programmers who don't, and who drive the uptake of increasingly automated and abstracted dynamically typed languages (this isn't a prediction, but an observation of how things already are).


You still have to catch all the type errors "manually". Dynamic typing doesn't free you from managing types; it just means you get no compile-time feedback about the errors you forgot to catch.


see "Horrors of Static Typing"[1]

the thesis is that there are places where the type system just gets in the way, and there are places where it is invaluable.

[1] http://phillyemergingtech.com/2012/system/presentations/Horr...


I think dynamic and static type systems will eventually meet somewhere in the middle. Haskell has a “Dynamic” type for those times when you really do want to include dynamic types in your program. And some researchers have experimented with “soft typing” systems for dynamic languages like Scheme and Python, where a type analyzer tries to infer the types of expressions from how they are used.


I've been learning Haskell on the side for a while and was hoping for something more after reading the title, but the post pretty much just points out a few preferences that OP has.

I spend very little time in Rails hung up on any of the issues expressed.

With some experience with frameworks like Express/Noir/Sinatra, I find that Rails is productive for me because of (A) convention and (B) getting common things done with terse code like `belongs_to :forum, counter_cache: true, touch: true`, and (C) not having to write glue code for basic things.

This quote...

> In addition, there is also very little “convention” with Snap. It enforces nothing, which has the consequence (in addition to allowing you to make a mess!) of having the whole application conforming to exactly how you think it should be organized.

...is precisely the deal breaker for me. If OP thinks it's hard to add/edit Rails code because it's spread out, then a lack of convention is hardly a compelling case for productivity. At least with conventions you know where to look.

I've been on a Node.js kick recently, consuming a lot of Express-stack repos on GitHub from smart people, and the experience is baffling. Deciding where to put code in my application just isn't a problem I want to solve when I'm trying to write a non-trivial, non-single-page application.


I'm slightly confused by this. Is it not the case with rails that many of the .rb files are generated via rails?

I think you're overthinking/pessimizing what happens with Haskell code.

1) For any self-contained project (library or executable) in Haskell, the directory structure determines the module names. If you're wondering how the functions imported from the Foo.Bar module are implemented, you simply go to the subdir #root/Foo/Bar.hs.

2) Types. When you're writing use-case-specific code, you are going to define use-case-specific types, and they will be declared. Moreover, it is good practice to give explicit type ascriptions to Haskell code to make sure it does in fact have the type you expect. This means that you can jump into a module and, by recursively looking at its imports (and you know where those are in the filesystem), figure out exactly what's going on.

3) cabal-install (cabal) and your project-specific foo.cabal file make life great. Why? The foo.cabal file will tell you which module has the Main function if you're building an executable, what other packages (and their versions) your code imports, which language extensions are enabled, etc.

Point being, there is no need for "framework specific conventions" of the sort you're concerned about, because those problems are solved by Haskell-specific conventions :)
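As a sketch of points 1 and 2 (the module path and function name here are invented for illustration): a file's location fixes its module name, and an explicit type ascription lets GHC check the implementation against your stated intent.

```haskell
-- Suppose this file lives at <root>/Foo/Bar.hs, making it the module Foo.Bar;
-- anyone who sees "import Foo.Bar" knows exactly where to look in the tree.
-- (Module and function names are hypothetical.)

import Data.Char (isAlphaNum, toLower)

-- The explicit signature states the contract; GHC rejects any implementation
-- that doesn't actually have this type.
slugify :: String -> String
slugify = map toLower . filter isAlphaNum
```

Reading a module top-down, these signatures act as checked documentation: you rarely need to read a function's body to know what it consumes and produces.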


I mean higher-level design conventions, as in the structuring decisions you'd have to make in a non-trivial app.

For instance, my experience with Express. The popular Peepcode screencast makes a Django'esque `apps` directory and modularizes the granular components of the entire app. Other people make an `app` directory and model the familiar Rails MVC. Some people contain controller logic in the router. Some people export routes from smaller files into app.js under a `//routes` comment. Sometimes the connection to the database is bootstrapped when the server starts in app.js. Sometimes the database is accessed from each model. Sometimes the database is accessed from each route. And you're guaranteed to have to dig into every required file to see how they exposed its API. Did they module.exports the entire object? Or did they individually export each public method?

See, I'm not criticizing unopinionated frameworks. They're for people that are opinionated and want to write the code that glues their opinionated structure together. Or for apps small enough to get by without deliberated structure.

But in a discussion about productivity, perhaps there's something to be said when people with experience making non-trivial apps have corroborated conventions and practices for a community to share.


Firstly: no, it is not the case that many of the .rb files are generated via Rails. Or at least, it shouldn't be. Generators are a crutch.

Secondly: the sort of conventions you're talking about in Haskell modules also exist in Ruby. They just aren't enforced by the compiler, and because Rails is an application framework rather than a library, it has its own set of completely different conventions which make sense for application code. This is a double-edged sword. On the one hand you get a convention which makes more sense for the specific type of application Rails assumes you're building; on the other you are encouraged to write code which may end up difficult to extract out of the application you're writing, and may be less well-designed because of it.


I think you've stated the main Snap vs. Yesod distinction in Haskell-land: "here are some tools" versus "do this." There's advantages to each approach, and luckily there are solid Haskell frameworks for both.


Maybe Sorta. Most of Yesod is also reusable libraries, for example I've used Hamlet to generate HTML without using any of the rest of Yesod.


Sure, but if you want a Preferred Way, like the OC seems to, Yesod's your huckleberry. Right? Or is that not so?


That depends on one's definition of Preferred. And there is most definitely not a single one, even within the Haskell community.


This article was interesting, having only used frameworks that either do everything or just the minimum (think Rails vs Sinatra, or Django vs Tornado). I was beginning to wonder if there was a middle ground in web frameworks. The idea of a web library, not a framework, really appeals to me. Is there anything like Snap for Python or Ruby?


I've noted that split as well - and wondered indeed if there is no middle ground. Recently it occurred to me that perhaps there is no real need for that middle ground.

The fact that Django comes with even more batteries included than Python itself does not mean you have to use each and every battery. And ignoring stuff you don't need is often easier than extending an even moderately complex 'middle ground framework' in a meaningful way without making a mess of it.

If I compare that to my own experience, that is exactly what I've started to do over time.

A thing I've become accustomed to is splitting my applications cleanly into a 'front-end' (focused on interaction with humans) and a 'back-end' (focused on business logic, (persistent) state and interaction with other applications/machines).

For front-end applications I've no real need for a number of things Django offers (ORM, authentication/authorization mostly). For building back-end applications Django has even more bits I do not really need.

This led to a situation where I asked myself: 'What does Django still offer me? Is it not easier to use a more limited framework for either use case, or cobble together my own framework from various bits and pieces?'

For front-end applications I could have (and have tried to) build a framework-to-fit from various bits and pieces such as CherryPy, Jinja for templating, Babel for i18n, Routes for routing, etc. However, creating a solid working framework this way, with all the bits working together nicely, and keeping up with the development of its various pieces, is a lot of work. Just using Django and ignoring the bits I don't need turned out to be far easier, leaving more time to spend on my actual problem.

And for building my back-end applications I don't need all the stuff listed above; I simply need a robust and reasonably fast networked server which can host my business logic. The 'minimalist' frameworks then offer just what I need without getting in the way too much.

So in conclusion: if you want to build something that is mostly logic with a machine-machine interface, the minimalist frameworks offer a solid foundation and you have no real need for all the extras a 'mega framework' or even a 'middle ground framework' offers you. If you want to build something that has to interact with humans, you actually do need most of what the prevailing 'mega frameworks' offer you and leaving out the bits you don't want is easy. Hence there is no real need for 'middle ground' frameworks.


Probably the closest match to Snap (based on the OP's description) in Python is Pyramid:

http://docs.pylonsproject.org/en/latest/docs/pyramid.html

The routing system in particular looks quite similar.


Not sure about Python and Ruby but much of the Clojure web ecosystem takes the 'library, not framework' approach.


IMO Flask [1] is all about being a library, not a framework. By default it only gives you routing (based on Werkzeug), templating (Jinja2) and simple sessions. Everything else is available as a plugin, e.g. ORM [2], forms [3], etc. [4]. The documentation is awesome too.

[1] http://flask.pocoo.org/

[2] http://packages.python.org/Flask-SQLAlchemy/

[3] http://packages.python.org/Flask-WTF/

[4] http://pypi.python.org/pypi?%3Aaction=search&term=flask&...


Werkzeug is a nice web library, and Flask merely collects several existing libraries (Werkzeug, Jinja2, SQLAlchemy, WTForms).


In the Ruby world Ramaze is right there between Rails and Sinatra.


Learning Haskell will improve your overall programming skill; it will change your mind. The best tutorial for learning Haskell: http://learnyouahaskell.com/. It is the tutorial I have enjoyed the most (including tutorials for Ruby, PHP, Backbone, jQuery, etc.).


Some source code is worth a (three) thousand (fourteen) words ... =P


And a couple of finished products, well, web-apps or web-sites in this case, even more.


Not a single line of code to illustrate the point? Not even one?


I was coming in to write this. Rule #1 for these kinds of blog posts: include code samples! Sometimes I want to read long, detailed articles. Other times I just want to look at the "pretty pictures", so to speak.


When discussing structural issues, like routing, code samples will be either deceiving or extremely verbose, because ideas like that involve the interaction of multiple components, which simply doesn't happen in "hello world" examples.


Yep, the same is true of the benefit static typing gives you for refactoring. Real world examples where the benefit is really significant are too big to put in a blog post. It should not require too much imagination to make the point clear to an experienced developer.
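Even a toy sketch hints at the refactoring benefit (the record and names here are invented for illustration): change a field, and every use site becomes a compile error instead of a runtime surprise.

```haskell
-- A hypothetical record used across a code base.
data User = User { name :: String, email :: String }

greet :: User -> String
greet u = "Hello, " ++ name u

-- If `name` were later renamed to `fullName`, this module would stop
-- compiling at the exact line using it; the compiler enumerates every
-- place that needs updating, which is the productivity boost in question.
```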


OT: The title of this article without the parenthesized part would be "...as productive than Ruby/Rails." Want a more link-baity title? Try "...more productive than Ruby/Rails?", making it a question. Not so baity, but with Moar English? "...at least as productive as..."

This goes back to my little rant from last week: please proofread your blog posts. In this case, reread the title with and without the parenthesized modifications and see if the English still holds together.


The single sentence that surprised me the most:

"For performance, there is no question that Haskell will win hands down on any performance comparison".


Why is that surprising?

Haskell is a compiled language with a very clever optimizing compiler, so it tends to be pretty fast by any standard. On top of this, it has recently had some improvements to its IO manager which is particularly important for web servers.

Ruby, on the other hand, is notoriously slow.


Performance isn’t measured just by the language tools, it depends on the whole stack. It’s quite probable that the Ruby/Rails stack is better suited to high-performance sites simply because it has more installations and the kinks have been worked out.


I agree that the priors would suggest that you are correct, however, let me give some evidence to update you:

The Warp web server for Haskell is the fastest Web application server that exists. http://steve.vinoski.net/pdf/IC-Warp_a_Haskell_Web_Server.pd... [pdf, but easy to read] -- in the comparison on page 2, php handled 3400 requests per second, and Warp 81000.


Uh. This doesn't make any sense whatsoever.

As to why, here is a hint: The three most common webservers in the world are Apache, nginx and IIS. Yet are any of those even mentioned in that paper? NO.


Hm, you may be right. The methodology to arrive at these results in the paper is not given at all (as far as I can tell).

However I suspect that the three you listed would not be benchmarked on their own, as they are not application servers, just frontends. It would be more reasonable to benchmark Apache+mod_wsgi or whatever, and it would be nice to see that on the graph.


Ruby is as slow as PHP (see: Debian Language Shootout), and Rails is even worse because you're operating at a ridiculous stack depth. MRI and YARV are horrible at doing GC on deep stacks.


title: "The Haskell / Snap ecosystem is as productive (or more) than Ruby/Rails." bio: "...computer programmer and web designer, who is interested in the intersection of mathematics and computer science..."

Old stereotypes die hard.


I really hope Haskell/Snap starts getting picked up by everyday web developers so someone can start exposing ridiculous things like:

  > 11111111111111111111111111111 - (length [])
  => 1729917383


Can someone explain what is happening in this example? I'm learning Haskell right now and this makes no sense.


    Prelude> :t 11111111111111111111111
    11111111111111111111111 :: Num a => a
    Prelude> :t (length [])
    (length []) :: Int
    Prelude> 11111111111111111111111111111 - (length [])
    1729917383
    Prelude> 1111111111111111111111111111 - 0
    1111111111111111111111111111
    Prelude> 1111111111111111111111111111 - fromIntegral (length [])
    1111111111111111111111111111
    Prelude> 11111111111111111111111111111 :: Int
    1729917383
"Int" is the machine-length integer type. "Integer" is the unbounded, but slower integer type ("bigint" in other languages). In Prelude, "length" always returns an Int, assuming that the length of a list will never be bigger than 232.

Haskell gives numeric literals the polymorphic type (Num a => a), meaning a value of any type in the Num typeclass can be constructed from such a literal. Int is one such type, and wraparound occurs when a literal too large for Int is constructed at that type.
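A minimal sketch of the Int/Integer distinction (the wrapped value is platform-dependent; 1729917383 is what a 32-bit Int produces, so it isn't shown here):

```haskell
big :: Integer   -- Integer is the unbounded "bigint" type: keeps all 29 digits
big = 11111111111111111111111111111

wrapped :: Int   -- Int is the machine-length type: the same literal wraps
wrapped = 11111111111111111111111111111
```

GHC will even warn about the overflowed literal at compile time, which is more than a dynamically typed runtime offers.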


assuming that the length of a list will never be bigger than 232

Should be 2^32, of course. Though I think the real upper bound is (2^31)-1: there's a bit used for negative numbers, and there's zero.


It's essentially the same thing as doing:

    > 11111111111111111111111111111 :: Int
Numeric literals are automatically treated as Num instances, so when you subtract (length []), which is an Int, the literal is treated as an Int. In this case it triggers an overflow condition, something that programmers understand well and have been dealing with since the earliest days.
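For completeness, the standard way around the trap is Data.List.genericLength, which keeps the result polymorphic so the whole subtraction defaults to the unbounded Integer type:

```haskell
import Data.List (genericLength)

-- length :: [a] -> Int pins the big literal to Int, which wraps.
-- genericLength :: Num i => [a] -> i leaves the type open, so here the
-- whole expression defaults to Integer and keeps full precision.
safeDiff :: Integer
safeDiff = 11111111111111111111111111111 - genericLength []
```

(`fromIntegral (length [])`, as in the GHCi session above, works too.)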


My language.

It's better than your language.

Ergo, you should use my language.

Rinse and repeat with any choice of languages.


Where are the benchmarks?


Here http://shootout.alioth.debian.org/u32/which-programming-lang...


This one is probably more relevant: http://shootout.alioth.debian.org/u64q/which-programming-lan...


I think he wants the server benchmarks in particular rather than general speed.

I found some on the Snap framework web site[1], but they may be a little out of date.

[1]: http://snapframework.com/blog/2010/11/17/snap-0.3-benchmarks


Let's be clear: the article is a comparison between frameworks. The benchmarks should therefore reflect the comparative speed of the same web application written using both Rails and Snap. I'm not aware of any attempt to do this, so I attempted to approximate it using the speed of the languages in general, and you can see from the results I posted that the slowest Haskell (GHC) result is faster than the 25th percentile of MRI Ruby 1.9. This is a useful observation; a simple pong benchmark has absolutely no bearing whatsoever.


A slightly more recent benchmark:

http://www.yesodweb.com/blog/2011/03/preliminary-warp-cross-...

Also includes Yesod and Warp.


Let's not downplay the few years you'll need to spend to get productive with Haskell first.


Until it has produced as many sites, I consider any such statements anecdotal.

That is, I'd rather measure a system's productivity with actual production in the wild than with any of the systems "inherent" capabilities.

It might be productive for the author, but I don't see the general web programming public finding it more productive. People have learned Ruby to use Rails, but not many have ventured to learn Haskell to use its frameworks.


It's not as popular therefore it's not as good. Your logic is definitely valid!


>It's not as popular therefore it's not as good. Your logic is definitely valid!

And your logic is definitely faulty. I never used the word "good".

What I said is that "more productive" (what the author claims) can only be measured in actual PRODUCTION.

That is, the important thing is not:

(a) "If I were to use X framework/language, how productive would I be over Y framework/language?",

but:

(b) "In an actual empirical observation, what framework/language is actually responsible for the largest volume of production?"

The author talks about (a), and advocates Haskell. But that is not an empirical, scientific, measurable observation, it's just his personal opinions, feelings and anecdotes. Only (b) gives an actual overall metric of the productivity of two frameworks/languages combos.

Even having the same person do the exact same project with both the X and Y framework/languages and comparing the speed with which each was done would tell us very little. Maybe he was more comfortable with one or the other, maybe that particular project especially suited X over Y, maybe it didn't need to communicate with legacy stuff that neither X nor Y does well, etc.

The only way to tell what has generally worked for production for the majority of people is to, DUH, see what the majority of people have used in production.


So basically vanilla PHP has won in your opinion, with Ruby not even being a small glimpse on the map?


If we are empiricists, yes. And I say that as a Django man.

If we are to hold an idealist view, no. But that would just be ideology making up for the lack of an equal volume of production being done with Ruby compared to PHP.

Remember the "worse is better" motto? Worse could also be more productive.

Now, I don't care why PHP is more productive in actual volume of production --instead of more productive as in "it makes you more efficient". It could be because of "stupid" programmers that cannot adapt to Ruby, because of inertia, because it is fast to start with, because it has a more vibrant ecosystem than Ruby/RoR, because of large amounts of code already built with it used to bootstrap newer projects, because of lack of RoR publicity, because of just being there first, etc etc. Thing is: by usage and number-of-sites metrics, it is.


[deleted]


Dude, Clojure isn't even purely functional, nor is it meant to be. I respect Rich Hickey a lot too, but that doesn't mean he's some kind of god who blesses functional things. In reality, Haskell is much more at the forefront of functional programming than Clojure is — it actually embraces FP down to its core, purity and curried functions and all, while Clojure takes the more pragmatic stance of maintaining easy Java interop. Haskell's whole purpose is to push the boundaries of functional programming, while Clojure's purpose is to be a very useful modern Lisp that separates value and identity. Clojure is a really nice language and IMO more practical than Haskell in general, but it is not the last word on FP.


Your argument is nonsensical. Larry Wall uses Perl, therefore Perl is better than Ruby?

Not that I'm saying Clojure is better than Haskell, or vice versa.


What issue are you referring to here? Could you link to any relevant discussions?


[deleted]


The most common and impactful sources of problems or bugs are ones of misconception, not TypeErrors; therefore anything that lets you iterate faster or get something out the door quicker is desirable. (I heard him say something to that effect in a talk.)

He must have a typechecker in his head ;). My experience with learning Clojure (knowing Haskell) is that I constantly bumped into bugs where I thought "Haskell's type checker would have caught this".

Usually, when I write new functions in Haskell, I write down the type signatures first. This helps catch many errors. For instance, if your type signature is:

  f :: Integral n => [a] -> n
You cannot accidentally apply a function (in f) to f's argument that assumes a list of lists, or return something that is not an integral number.

This cuts both ways: strongly encoding semantics using the type system catches many potential bugs in functions, but it also provides guarantees to the caller of a function. For instance, I know for sure that f does not return a 0/nil/None pointer, I know that the number is exact (since it is integral), etc.
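A sketch of how such a signature constrains implementations (f is the hypothetical function from the comment above; genericLength is one obvious inhabitant):

```haskell
import Data.List (genericLength)

-- The signature alone rules out whole classes of bugs:
f :: Integral n => [a] -> n
f = genericLength  -- fine: any element type in, any Integral count out

-- These would be rejected by GHC:
-- f xs = sum xs               -- needs the elements to be numbers
-- f xs = genericLength xs / 2 -- (/) needs Fractional, but n is Integral
```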



