Maybe the article could talk more about their specific needs, but this looks like a CRUD app made very complicated by the choice of unusual software for the task. Maybe it did something awesome, but this doesn't tell us what that was. (They had to write their own ZeroMQ broker, after all. That was certainly costly.)
I don't understand the claim in this article that concurrency in Python is hard. There are many reasonable ways to do it for the web, from multiprocessing with something like uWSGI to the excellent gevent. Certain things are hard, but for common patterns like web services there are many awesome solutions to choose from.
And I don't understand why memory footprint is seriously a factor here. Server runtimes may use all the available memory in order to go fast. As long as it fits, footprint seems a lot less important than other factors. The cost of buying an extra stick of RAM is minuscule compared to the cost of having to implement libraries to support a language choice.
The choice of a pure functional language like Haskell to do lots of IO seems like a strange choice given that Haskell makes side effects like IO more difficult than other languages. I'm curious to know how that affected the implementation. I'd like to use more functional languages, but since my job is primarily IO of some sort, watching people struggle with writing to sockets leaves me more than a little hesitant.
Basically, this article is missing a lot of the details needed to support its argument.
"The satisfaction we feel after a good day of Haskell is unparalleled, and at the end of the day, that's what it's really all about isn't it."
Actually, since they appear to be working on a startup, I would think that a functioning business and time to market would be more important.
So where do you get the idea that it was hard work in Haskell?
"It was a breeze to rapidly prototype and test individual components before composing them into a whole."
"Very surprisingly we got done really quickly."
> The choice of a pure functional language like Haskell to do lots of IO seems like a strange choice given that Haskell makes side effects like IO more difficult than other languages.
Haskell separates IO, it doesn't make it harder.
Can you tell me what a program which does no I/O actually does?
I can tell you what it does in Haskell: nothing. Without IO, there would be no reason for a computation to ever run in Haskell (or in any other language, but Haskell will actually NOT run it). Every Haskell program does I/O.
I know that when I first heard about functional programming, and Haskell in particular, ideas like purity, no side effects, and immutability seemed absurd. "But, but - the programs I write always do I/O and manage state!?" So do Haskell programs; they simply do it differently. If a program had no side effects, it would have no purpose. Please do not assume that the tool used by an entire community is not well suited to building programs that have a purpose.
Pure Haskell programs have no side effects, where "side effect" is defined as an effect that occurs implicitly, outside of the function's signature. They can definitely have effects; they just have to be explicit about them (as with IO). For some reason, "side effect" has come to mean effects in general...
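A minimal sketch of that distinction (the function names are made up for illustration): the type signature alone tells you whether a function can have effects.

```haskell
module Main where

-- Pure: no effects are possible; the same input always gives the same output.
double :: Int -> Int
double x = x * 2

-- Effectful, and explicit about it: the IO in the type announces the effect.
greet :: String -> IO ()
greet name = putStrLn ("Hello, " ++ name)

main :: IO ()
main = do
  print (double 21)   -- prints 42
  greet "world"       -- prints "Hello, world"
```

So nothing is hidden: a caller of `double` knows no effect can happen, and a caller of `greet` knows one can.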
Parent's observation probably relates to the fact that doing IO in Haskell is "harder" than, say, in Python. There is some truth to that.
To expound: pure Haskell functions do not run effects. They produce descriptions of effects to be run at some future point. The infamous IO monad is essentially a wrapper around such a description, carrying the information the runtime needs to actually execute it. Execution always goes through IO.
So, as the parent said, a Haskell program without IO is literally nothing.
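To illustrate the "descriptions, not executions" point with a small sketch (names here are my own): an IO action is just a value, and building one runs nothing. Only actions reachable from main ever execute.

```haskell
module Main where

-- An IO action is a first-class value describing an effect.
hello :: IO ()
hello = putStrLn "hello"          -- a description; nothing printed yet

ignored :: IO ()
ignored = putStrLn "never runs"   -- defined, but never reached from main

main :: IO ()
main = do
  let action = hello  -- still nothing printed; 'action' is only a value
  action              -- now the runtime executes it: prints "hello"
```

`ignored` is a perfectly well-formed IO value, but since main never sequences it, its effect never happens; a program with no path to IO from main does literally nothing observable.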
The argument was that doing IO in Haskell is difficult. So if you take a program as pure code intertwined with IO, then, if the premise that "doing IO is more difficult in Haskell" were true, the benefits of using Haskell would go down as the fraction of IO increased.
So parent said something true, but I'm not sure how it relates to grandparent's comment.
Haskell has excellent I/O support. How do you mean that Haskell makes I/O difficult?
Composability is different, since effects must be explicit, which is both a strength and a weakness of Haskell. Say you want to add an effectful computation P to some function G: then G and all of its callers must have their signatures modified accordingly. That could be annoying, depending on what you are doing.
PL design is like a box of tradeoffs.
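A small sketch of that tradeoff, with hypothetical functions standing in for P and G: once `g` gains an effect (here, logging), its type must change from `Int -> Int` to `Int -> IO Int`, and the change ripples into every caller.

```haskell
module Main where

-- Before the change, g was pure:
--   g :: Int -> Int
--   g x = x + 1

-- The new effectful step, playing the role of P:
p :: Int -> IO ()
p x = putStrLn ("g called with " ++ show x)

-- g now returns IO Int instead of Int:
g :: Int -> IO Int
g x = do
  p x
  pure (x + 1)

-- ...and this caller, previously Int -> Int, is forced to change too:
caller :: Int -> IO Int
caller x = g (x * 2)

main :: IO ()
main = caller 5 >>= print   -- logs "g called with 10", then prints 11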
log "Var x =" x
Having taken the rite of passage that is writing my own monad tutorial, I would agree: the concept is both easy and intuitive. Getting it into your head, however, takes considerably more work than in any other language I've used.
Ask yourself: why does everyone write a Monad tutorial after they understand Monads? It's because they seem obvious in _retrospect_. The only tutorial I've seen that comes anywhere close to being a "works for everyone" tutorial is "You could have invented Monads".
As to your specific explanation, I think it falls down with IO ().
Yes, the mathematical background for the concept is very abstract, like everything in category theory. But in a way, talking about that is like going into "Principia Mathematica" to explain how sets work.
You can also just say that "monad" is a name for the general kind of thing that you can use with the do syntax. That is, types that support binding variables and returning values.
That the abstract mathematical theory of binding variables and returning values is a bit abstruse is not such a big deal. People can use IO without really grokking the theory of monads.
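That "binding variables and returning values" reading can be sketched in a few lines (the names are made up; `askName` stands in for something like getLine so the example is self-contained):

```haskell
module Main where

-- A stand-in for a real input action such as getLine.
askName :: IO String
askName = pure "world"

main :: IO ()
main = do
  name <- askName                    -- bind the action's result to a variable
  let greeting = "Hello, " ++ name   -- ordinary pure computation in between
  putStrLn greeting                  -- prints "Hello, world"
```

You can write and use this without ever thinking about category theory; the do block reads much like imperative code.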
Yeah, to people used to imperative languages, it's surprising that you can't just call an IO thing in the middle of your function.
But that doesn't mean Haskell makes IO difficult. And it doesn't mean Haskell is a bad choice for IO-heavy applications.
I worked at a web startup that used Haskell for its backend. It worked extremely well, was quite short and clear, and rarely had problems.
I was trying to avoid replies like this with my disclaimer at the beginning of my comment. I've deleted it.
I was trying to avoid the low-effort dismissal you put forth by saying it was a very silly explanation.
What do you think you've accomplished?
> concurrency in Python is hard. There are many reasonable ways to do it for the web, from multiprocessing using something like uwsgi or the excellent gevent
Multicore support? http://stackoverflow.com/questions/15617553/gevent-multicore...
> And I don't understand why memory footprint is seriously a factor here. Server runtimes may use all the memory available to go fast. As long as it fits, footprint seems a lot less important than other factors.
Using a server-side language runtime that can use multiple cores and has a low memory footprint, so that it can handle greater scale at lower hosting costs, seems like a sensible move. Of course, as you said, only if it is easy enough to do so. Which I think was the author's point.
Disclaimer: I haven't read this yet, but your comment above caught my eye and what I'm about to comment applies regardless.
Maybe it makes IO slightly more difficult to begin with, but after that you can be sure that all your cases are handled. Plus, once you know about fmap and >>=, you can apply pure functions to monadic values (values of type IO a are monadic because IO implements the Monad typeclass; think of it as an interface for now).
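A small sketch of both tools in action (the `readNumber` name is made up; it stands in for something like reading from a socket, so the example runs on its own):

```haskell
module Main where

-- Stand-in for a real input action, e.g. reading a number from a socket.
readNumber :: IO Int
readNumber = pure 20

main :: IO ()
main = do
  -- fmap lifts the pure function (* 2) over the IO value: IO Int -> IO Int.
  doubled <- fmap (* 2) readNumber
  print doubled                         -- prints 40
  -- (>>=) feeds an action's result to a function that returns another action.
  readNumber >>= \n -> print (n + 1)    -- prints 21
```

Neither fmap nor >>= is IO-specific: they work for any Functor or Monad, which is exactly why pure code composes with effectful code so uniformly.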