Maybe the article could talk more about their specific needs, but this looks like a CRUD app made very complicated by the choice of unusual software for the task. Maybe it did something awesome, but the article doesn't tell us what that was. (They had to write their own ZeroMQ broker, after all. That was certainly costly.)
I don't understand this article's claim that concurrency in Python is hard. There are many reasonable ways to do it for the web, from multiprocessing with something like uWSGI to the excellent gevent. There are certain things that are hard, but for common patterns like web services, there are many awesome solutions to choose from.
And I don't understand why memory footprint is seriously a factor here. Server runtimes may use all the memory available in order to go fast. As long as it fits, footprint seems a lot less important than other factors. The cost of buying an extra stick of RAM is minuscule compared to the cost of having to implement libraries to support a language choice.
The choice of a pure functional language like Haskell to do lots of IO seems like a strange choice given that Haskell makes side effects like IO more difficult than other languages. I'm curious to know how that affected the implementation. I'd like to use more functional languages, but since my job is primarily IO of some sort, watching people struggle with writing to sockets leaves me more than a little hesitant.
Basically, this article is missing a lot of details to support the argument they have made.
"The satisfaction we feel after a good day of Haskell is unparalleled, and at the end of the day, that's what it's really all about isn't it."
Actually, since they appear to be working on a startup, I would think that a functioning business and time to market would be more important.
So where do you get the idea that it was hard work in Haskell?
"It was a breeze to rapidly prototype and test individual components before composing them into a whole."
"Very surprisingly we got done really quickly."
> The choice of a pure functional language like Haskell to do lots of IO seems like a strange choice given that Haskell makes side effects like IO more difficult than other languages.
Haskell separates IO; it doesn't make it harder.
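A minimal sketch of what that separation looks like in practice (the names here are made up for illustration, not from the article):

```haskell
import Data.Char (toUpper)

-- Pure: the type 'String -> String' promises no effects can occur here,
-- so this logic can be tested in isolation.
shout :: String -> String
shout s = map toUpper s ++ "!"

-- Effectful: the IO in the type marks exactly where the world is touched.
main :: IO ()
main = putStrLn (shout "hello")   -- prints "HELLO!"
```

The point is that IO isn't harder to write, it's just visible in the type: you can't accidentally smuggle an effect into `shout`.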
Can you tell me what a program which does no I/O actually does?
I can tell you what it does in Haskell: nothing. Without IO, there would be no reason for a computation to ever run in Haskell (or in any other language, but Haskell will actually NOT run it). Every Haskell program does I/O.
I know when I first heard about functional programming and Haskell in particular, that ideas like purity, no side-effects and immutability seemed absurd. "But, but - the programs I write always do I/O and manage state!?" So do Haskell programs, they simply do it differently. If a program had no side effects it would have no purpose. Please do not assume that the tool used by an entire community is not well suited to building programs that have a purpose.
Pure Haskell programs have no side effects, where "side effect" is defined as an effect that occurs implicitly, outside of the function's signature. They can definitely have effects; they just have to be explicit about them (the same with IO). For some reason, "side effect" has come to mean effects in general...
Parent's observation probably relates to the fact that doing IO in Haskell is "harder" than say in Python. There is some truth to that.
To expound, Haskell pure functions do not run code. They produce code to be run at a future date. The infamous IO monad is essentially a wrapper around this code which contains information on when the runtime is supposed to actually execute the code. The execution is always dependent on IO.
So, as the parent said, a Haskell program without IO is literally nothing.
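To make that concrete, here is a tiny illustration (my own toy example, not from the article) of IO actions being inert values until the runtime reaches them from main:

```haskell
-- Defining an IO action runs nothing; it's just a value.
hello :: IO ()
hello = putStrLn "hello"

-- We can put actions in a list, and still nothing has executed.
actions :: [IO ()]
actions = [hello, hello, hello]

-- Only when main reaches them does the runtime perform the effects.
main :: IO ()
main = sequence_ actions   -- prints "hello" three times
```

Delete `main` and the program does, literally, nothing observable.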
The argument was that doing IO in Haskell was difficult. So if you take programming as pure code intertwined with IO, the benefits of using Haskell would go down as the fraction of IO increased, if the premise that "doing IO is more difficult in Haskell" were true.
So parent said something true, but I'm not sure how it relates to grandparent's comment.
Haskell has excellent I/O support. How do you mean that Haskell makes I/O difficult?
Composability is different since effects must be explicit, which is both a strength and weakness of Haskell. So say you want to add an effectful computation P in some function G, G and all its callers must have modified signatures accordingly. That could be annoying depending on what you are doing.
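A small made-up example of the signature churn being described (the names g/caller are hypothetical, standing in for G and its callers):

```haskell
-- Before: g is pure, and so is its caller.
g :: Int -> Int
g x = x * 2

caller :: Int -> Int
caller x = g x + 1

-- After adding an effect inside g (say, logging, the "P" above),
-- its type must change, and every caller's type changes with it:
g' :: Int -> IO Int
g' x = do
  putStrLn ("g called with " ++ show x)  -- the newly added effect
  return (x * 2)

caller' :: Int -> IO Int
caller' x = do
  r <- g' x
  return (r + 1)

main :: IO ()
main = caller' 5 >>= print   -- logs, then prints 11
```

Whether that propagation is a safety net or a chore is exactly the tradeoff being argued about.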
PL design is like a box of tradeoffs.
log "Var x =" x
Having taken the rite of passage that is writing my own monad tutorial, I would agree: the concept is both easy and intuitive. Getting it into your head, however, takes considerably more work than in any other language I've used.
Ask yourself: why does everyone write a monad tutorial after they understand monads? It's because they seem obvious in _retrospect_. The only tutorial I've seen that comes anything close to a "works for everyone" tutorial is "You Could Have Invented Monads".
As to your specific explanation, I think it falls down with IO ().
Yes, the mathematical background for the concept is very abstract, like everything in category theory. But in a way, talking about that is like going into "Principia Mathematica" to explain how sets work.
You can also just say that "monad" is a name for the general kind of thing that you can use with the do syntax. That is, types that support binding variables and returning values.
That the abstract mathematical theory of binding variables and returning values is a bit abstruse is not such a big deal. People can use IO without really grokking the theory of monads.
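For what it's worth, here's a quick sketch of "the kind of thing you can use with do syntax", with Maybe and IO side by side (a toy example of my own, not from the article):

```haskell
-- With Maybe, binding short-circuits on Nothing:
safeDivide :: Int -> Int -> Maybe Int
safeDivide _ 0 = Nothing
safeDivide x y = Just (x `div` y)

calc :: Maybe Int
calc = do
  a <- safeDivide 10 2   -- a = 5
  b <- safeDivide a 0    -- Nothing: the rest is skipped
  return (a + b)

-- With IO, the exact same syntax sequences effects instead:
main :: IO ()
main = print calc        -- prints Nothing
```

Same do notation, two very different "meanings of bind" - and you can use both productively long before touching category theory.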
Yeah, to people used to imperative languages, it's surprising that you can't just call an IO thing in the middle of your function.
But that doesn't mean Haskell makes IO difficult. And it doesn't mean Haskell is a bad choice for IO-heavy applications.
I worked at a web startup that used Haskell for its backend. It worked extremely well, was quite short and clear, and rarely had problems.
I was trying to avoid replies like this with my disclaimer at the beginning of my comment. I've deleted it.
I was trying to avoid the low-effort dismissal you put forth by saying it was a very silly explanation.
What do you think you've accomplished?
> concurrency in Python is hard. There are many reasonable ways to do it for the web, from multiprocessing using something like uwsgi or the excellent gevent
Multicore support? http://stackoverflow.com/questions/15617553/gevent-multicore...
> And I don't understand why memory footprint is seriously a factor here. Server runtimes may use all the memory available to go fast. As long as it fits, footprint seems a lot less important than other factors.
Using a server-side language runtime that can use multiple cores and has a low memory footprint, so that it handles greater scale at lower hosting costs, seems like a sensible move. Of course, as you said, only if it is easy enough to do so, which I think was the author's point.
Disclaimer: I haven't read this yet, but your comment above caught my eye and what I'm about to comment applies regardless.
Maybe it makes IO slightly more difficult to begin with, but after that you can be sure that all your cases are handled. Plus, once you know about fmap and >>=, you can apply pure functions to monadic values (values of type IO a are monadic because IO implements the Monad typeclass; think of it as an interface for now).
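A rough illustration of that (here `return "hi"` is just a stand-in for a real input action like getLine, so the example stays self-contained):

```haskell
import Data.Char (toUpper)

-- A pure function...
upcase :: String -> String
upcase = map toUpper

-- ...applied inside IO with fmap, with no manual unwrapping:
readShout :: IO String
readShout = fmap upcase (return "hi")   -- stand-in for fmap upcase getLine

-- ...and >>= feeds the result on to the next action:
main :: IO ()
main = readShout >>= putStrLn           -- prints "HI"
```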
Although, I suppose one's choice of Haskell is a function of one's own risk/reward profile. Haskell and its failure modes are hard to understand. That induces extra risk that some people (myself included) might be uncomfortable with. That said, I am now enthusiastically following you guys and hope to see the proverbial averages get sorely beaten.
Amazing language and ecosystem.
My company is in the very early investigative stages on the Haskell front.
You can reach me at my name (including middle initial) which is this HN name, at gmail dot com.
The world seems to be heading towards functional programming, with Swift and the increase in functional constructs in C#. Much easier when the compiler does more work for you.
Is that a positive quality of Haskell?
I'd guess that he's talking about something like the Lens library, which is incredibly easy to use, and comes from a place of great aesthetic sense, but whose type signatures take some time to really get. I had to work some things out on paper to see how the general Lens type signature works:
forall f. Functor f => (a -> f b) -> s -> f t
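For anyone curious, here is roughly how that signature specializes (my own worked sketch using the standard Const/Identity trick, not code from the lens library itself):

```haskell
{-# LANGUAGE RankNTypes #-}
import Data.Functor.Const (Const(..))
import Data.Functor.Identity (Identity(..))

type Lens s t a b = forall f. Functor f => (a -> f b) -> s -> f t

-- A lens onto the first element of a pair:
_1 :: Lens (a, c) (b, c) a b
_1 k (a, c) = fmap (\b -> (b, c)) (k a)

-- Choosing f = Const a turns the lens into a getter...
view :: Lens s t a b -> s -> a
view l s = getConst (l Const s)

-- ...and f = Identity turns it into a setter/modifier.
over :: Lens s t a b -> (a -> b) -> s -> t
over l f s = runIdentity (l (Identity . f) s)

main :: IO ()
main = do
  print (view _1 (1, "x"))        -- 1
  print (over _1 (+1) (1, "x"))   -- (2,"x")
```

The single polymorphic signature doing both jobs is exactly the part that takes paper and pencil the first time.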
That said, I still don't understand how certain languages (that shall remain nameless, but I'm not talking about Haskell here) implemented a type-safe printf that was still easy to use. In statically typed languages like Haskell, you at least get some immediate insight into what you don't understand. In a dynamic one, you can be led to feel like you understand more than you actually do.
I'm the biggest Haskell fanboy you'll come across, but this is only true if an order is not much bigger than 1!
These are clearly ones that I would be testing as part of the test suite...
This may be a generalization, but from my experiences so far, it seems there are two major schools of programmers in today's world: those who come from Java / C, and those who come from Python / Ruby.
The Java/C school likely did a lot of low-level stuff in college: hardware, OS, compilers, and the like. If required, they can probably crack open the GNU debugger and crank through assembly. They concern themselves primarily with systems that the computer can efficiently execute. In their work, the reader will find lots of loops, indexing temp variables, and comments documenting what does what. Today, they tolerate working with higher-level tools like Go, vanilla JS, Rust, TypeScript, etc.
The Python/Ruby school likely did a lot of math and scientific computation in college. On their hard drives, you can probably find Matlab, R, or (more recently) Julia files containing everything from implementations of Newton's method to routines for calculating Navier-Stokes. They concern themselves primarily with theory and models, and prefer elegant and beautiful models/code to optimized performance. In their work, the reader will find tons of map-filter-reduce chains, "arrows", and few comments (they argue their code is clear). Today, their higher-level toolchain includes things like Haskell, CoffeeScript, HAML, etc.
This is a caricature at best. Lots of great, fast, commonly-used libraries for scientific computation are actually in C++. See e.g. Eigen, Blaze, Armadillo, MLPack, CGAL, Caffe, libsvm/liblinear, and the GNU Scientific Library.
The ecosystem for such libraries in Haskell is currently (unfortunately) sparse. And in Python they are mostly wrappers around C and C++ code (apparently, Python programmers also like to dive into C or Cython when things need to be fast).
Funnily enough, it's the few years of Perl+Moose I've done since that have made switching back to Python just over a year ago a bit hard to stomach. Now that I've tasted something resembling an expressive type "system" (however narrow, incomplete, and warty it is), which encourages immutable objects, and does so in a very painless and idiomatic way - while still bringing many of the benefits of declarative/"up front" programming for free - I really, really miss it.
So of all the things that could have made me realize I've had enough of dynamic languages, it certainly feels odd that a Perl OO bolt-on would show me a glimpse of what I've been missing out on. And even weirder that the closed-mindedness of the average pythonista made up my mind to move on.
These are many of my reasons at least.