Haskell in Production (34cross.in)
81 points by arkhamist 738 days ago | 47 comments



As far as I can tell from this, the authors worked really hard to implement in Haskell something that is trivial and well-understood in Python and other languages.

Maybe the article could talk more about their specific needs, but this looks like a crud app made very complicated by the choice of unusual software for the task. Maybe it did something awesome, but this doesn't tell us what that was. (They had to write their own ZeroMQ broker, after all. That was certainly costly.)

I don't understand the claim in this article that concurrency in Python is hard. There are many reasonable ways to do it for the web, from multiprocessing with something like uWSGI to the excellent gevent. Certain things are genuinely hard, but for common patterns like web services there are plenty of solid solutions to choose from.

And I don't understand why memory footprint is seriously a factor here. Server runtimes may use all the memory available to go fast. As long as it fits, footprint seems a lot less important than other factors. The cost of buying an extra stick of RAM is minuscule compared to the cost of having to implement libraries to support a language choice.

Choosing a pure functional language like Haskell to do lots of IO seems strange, given that Haskell makes side effects like IO more difficult than other languages do. I'm curious to know how that affected the implementation. I'd like to use more functional languages, but since my job is primarily IO of some sort, watching people struggle with writing to sockets leaves me more than a little hesitant.

Basically, this article is missing a lot of details to support the argument they have made.

"The satisfaction we feel after a good day of Haskell is unparalleled, and at the end of the day, that's what it's really all about isn't it."

Actually, since they appear to be working on a startup, I would think that a functioning business and time to market would be more important.


> the authors worked really hard to implement in Haskell something that is trivial and well-understood in Python and other languages.

So where do you get the idea that it was hard work in Haskell?

"It was a breeze to rapidly prototype and test individual components before composing them into a whole."

"Very surprisingly we got done really quickly."

> The choice of a pure functional language like Haskell to do lots of IO seems like a strange choice given that Haskell makes side effects like IO more difficult than other languages.

Haskell separates IO, it doesn't make it harder.


> The choice of a pure functional language like Haskell to do lots of IO seems like a strange choice

Can you tell me what a program which does no I/O actually does?

I can tell you what it does in Haskell: nothing. Without IO, there would be no reason for a computation to ever run in Haskell (or in any other language, but Haskell will actually NOT run it). Every Haskell program does I/O.

I know when I first heard about functional programming and Haskell in particular, that ideas like purity, no side-effects and immutability seemed absurd. "But, but - the programs I write always do I/O and manage state!?" So do Haskell programs, they simply do it differently. If a program had no side effects it would have no purpose. Please do not assume that the tool used by an entire community is not well suited to building programs that have a purpose.


I remember a quote by SPJ but am unable to find a reference to it right now. It goes something like: "a program without I/O is merely heating space."

Pure Haskell programs have no side effects, where "side effect" is defined as an effect that occurs implicitly, outside of the function's signature. They can definitely have effects that aren't side effects; they just have to be explicit about them (the same with IO). For some reason, "side effect" has come to mean effects in general...

Parent's observation probably relates to the fact that doing IO in Haskell is "harder" than say in Python. There is some truth to that.


I think you are missing the parent's point. Since everything is lazy in Haskell, a program without IO will literally do nothing. It will not evaluate a single line of code.

To expound, Haskell pure functions do not run code. They produce code to be run at a future date. The infamous IO monad is essentially a wrapper around this code which contains information on when the runtime is supposed to actually execute the code. The execution is always dependent on IO.

So, as the parent said, a Haskell program without IO is literally nothing.
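
A tiny sketch of that point (standard GHC, nothing beyond base): a pure binding is only a description until something on the IO path demands it. `expensive` is an illustrative name.

```haskell
import Debug.Trace (trace)

-- A pure binding is just a description of a computation; nothing
-- happens until something reachable from main demands its value.
expensive :: Int
expensive = trace "evaluating expensive" (sum [1 .. 1000000])

main :: IO ()
main = do
  putStrLn "before"   -- runs first; 'expensive' is still an unevaluated thunk
  print expensive     -- forcing the value here is what triggers evaluation
```

The "evaluating expensive" message (printed to stderr by `trace`) only appears when `print` forces the thunk; a program whose `main` never touched `expensive` would never evaluate it at all.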


Yes, but grandparent wasn't complaining about the inability of doing IO in Haskell:

> The choice of a pure functional language like Haskell to do lots of IO seems like a strange choice given that Haskell makes side effects like IO more difficult than other languages.

The argument was that doing IO in Haskell was difficult. So if you take programming as pure code intertwined with IO, the benefits of using Haskell would go down as the fraction of IO increased if the premise that "doing IO was more difficult in Haskell" was true.

So parent said something true, but I'm not sure how it relates to grandparent's comment.



"The choice of a pure functional language like Haskell to do lots of IO seems like a strange choice given that Haskell makes side effects like IO more difficult than other languages. I'm curious to know how that affected the implementation. I'd like to use more functional languages, but since my job is primarily IO of some sort, watching people struggle with writing to sockets leaves me more than a little hesitant."

Haskell has excellent I/O support. How do you mean that Haskell makes I/O difficult?


> Haskell has excellent I/O support. How do you mean that Haskell makes I/O difficult?

Composability is different since effects must be explicit, which is both a strength and weakness of Haskell. So say you want to add an effectful computation P in some function G, G and all its callers must have modified signatures accordingly. That could be annoying depending on what you are doing.
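
A minimal sketch of that ripple (the names `g` and `caller` are illustrative): adding an effect to `g` changes its signature, and every caller has to move into IO with it.

```haskell
-- Before: g is pure, and so are its callers.
g :: Int -> Int
g x = x * 2

caller :: Int -> Int
caller n = g n + 1

-- After adding an effectful step (here, logging) inside g, its
-- signature changes, and every caller must be rewritten to run
-- in IO as well.
g' :: Int -> IO Int
g' x = do
  putStrLn ("g called with " ++ show x)
  return (x * 2)

caller' :: Int -> IO Int
caller' n = do
  r <- g' n
  return (r + 1)

main :: IO ()
main = caller' 20 >>= print   -- prints 41
```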


I use Haskell a lot. I can't recall ever having to have added IO to a deeply nested expression like this.


Why would it be done if it was hard? The path of least resistance in Haskell dictates not doing this, and carefully figuring out where you want IO up front. In contrast, in a language with side effects you just do it whenever you want and everything compiles as before, for better or worse.

PL design is like a box of tradeoffs.


Debugging...

log "Var x =" x


Nope, just use trace.
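
For reference, `Debug.Trace` is in base and lets you emit debug output from pure code without touching any type signatures (output goes to stderr; it's intended for debugging only):

```haskell
import Debug.Trace (trace)

-- trace :: String -> a -> a
-- Prints its message when the wrapped value is demanded, without
-- changing the function's type.
square :: Int -> Int
square x = trace ("Var x = " ++ show x) (x * x)

main :: IO ()
main = print (square 7)   -- stderr: Var x = 7; stdout: 49
```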


He means you have to climb the IO Monad learning curve.


In truth, you don't actually have to climb the IO monad learning curve. IO in Haskell has nothing to do with monads.[1] It's simply a type which encodes input and output effects. Simple and elegant. The fact that people so thoroughly confuse the issue by mixing the disparate topics of IO and monads is a shame.

[1] http://blog.jle.im/entry/io-monad-considered-harmful


Learning is fun! And Haskell's IO functionality is pretty easy. They teach it in the very first programming course at Chalmers in Gothenburg. It's covered in any introductory Haskell book.


Yes, it's covered in any Haskell book; what use would a language book be without covering IO? How far back in the book it comes, relative to other languages, is the telling piece. LYAH has it as Chapter 9, after algebraic data types, higher-order functions, functors, and recursion... I note that in the one-week introduction I did for my MSc at the University of Oxford, they didn't cover it, considering it too much to fit into the week.

Having taken the rite of passage that is writing my own monad tutorial, I would agree that the concept is both easy and intuitive. Getting it into your head, however, takes considerably more work than in any other language I've used.


[deleted]


I know there's some debate over it, but I think it's important to teach the general concept of a Monad first, before attempting to teach IO.

Ask yourself: why does everyone write a Monad tutorial after they understand Monads? It's because they seem obvious in _retrospect_. The only tutorial I've seen that's anything like approaching a "works for everyone" tutorial is "You could have invented Monads".

As to your specific explanation, I think it falls down with IO ().


"Monad" is a cool word, and they have a reputation of being extremely profound and obscure. Lots of people in the Haskell world, as you probably know, find this a bit overwrought.

Yes, the mathematical background for the concept is very abstract, like everything in category theory. But in a way, talking about that is like going into "Principia Mathematica" to explain how sets work.

You can also just say that monad is a name for the general kind of thing that you can use with the do syntax. That is, types that support binding variables and returning values.

That the abstract mathematical theory of binding variables and returning values is a bit abstruse is not such a big deal. People can use IO without really grokking the theory of monads.
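
A sketch of that framing: `do` notation is just sugar for binding and returning, and it reads the same whether the monad is IO or Maybe. `safeDiv` and `calc` are illustrative names.

```haskell
-- The same do syntax, two different monads.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

calc :: Maybe Int
calc = do
  a <- safeDiv 10 2   -- binds 5
  b <- safeDiv a 0    -- Nothing: the remaining steps are skipped
  return (a + b)

greet :: IO ()
greet = do
  name <- getLine     -- binds a line of input
  putStrLn ("hello " ++ name)

main :: IO ()
main = print calc     -- prints Nothing
```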

Yeah, to people used to imperative languages, it's surprising that you can't just call an IO thing in the middle of your function.

But that doesn't mean Haskell makes IO difficult. And it doesn't mean Haskell is a bad choice for IO-heavy applications.

I worked at a web startup that used Haskell for its backend. It worked extremely well, was quite short and clear, and rarely had problems.


>As to your specific explanation, I think it falls down with IO ().

I was trying to avoid replies like this with my disclaimer at the beginning of my comment. I've deleted it.

https://news.ycombinator.com/item?id=5072224


Don't ask for feedback if you're only looking for encouragement?


I knew the explanation didn't cover `IO ()` and some other corner cases, I hinted at the fuller understanding in the second half of my comment.

I was trying to avoid the low-effort dismissal you put forth by saying it was a very silly explanation.

What do you think you've accomplished?


Waiting until chapter 9 isn't so bad when you realize that many professional programmers including myself have been doing this for 20+ years and are still learning the best way to structure effects within a program. What's the rush anyways?


That's only a problem when looking for the cheapest source of http://c2.com/cgi/wiki?PlugCompatibleInterchangeableEngineer...


> implement in Haskell something that is trivial and well-understood in Python and other languages.

Converting a custom AST (represented as a JSON query) into another AST that can be used to generate SQL is probably not simple in Python, simply because there's no type safety while moving from one syntax tree to the other. In Haskell, defining the two syntax grammars as types itself solves part of the problem, and then ensures that the programmer is writing functions that take the right input and generate the right output. Much more 'correct' code seems likely.
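
A toy sketch of that kind of typed AST-to-AST translation (these grammars are invented for illustration; the real ones would be far richer):

```haskell
-- Source grammar: a tiny JSON-ish query AST.
data QueryExpr
  = QField String
  | QEq QueryExpr QueryExpr
  | QAnd QueryExpr QueryExpr
  deriving Show

-- Target grammar: a tiny SQL-predicate AST.
data SqlExpr
  = SCol String
  | SBinOp String SqlExpr SqlExpr
  deriving Show

-- The compiler checks that every constructor is handled and that
-- the output really is a SqlExpr; a missed case is flagged at
-- compile time, not discovered at runtime.
toSql :: QueryExpr -> SqlExpr
toSql (QField f) = SCol f
toSql (QEq a b)  = SBinOp "=" (toSql a) (toSql b)
toSql (QAnd a b) = SBinOp "AND" (toSql a) (toSql b)

main :: IO ()
main = print (toSql (QAnd (QEq (QField "age") (QField "min_age")) (QField "active")))
```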

> concurrency in Python is hard. There are many reasonable ways to do it for the web, from multiprocessing using something like uwsgi or the excellent gevent

Multicore support? http://stackoverflow.com/questions/15617553/gevent-multicore...

> And I don't understand why memory footprint is seriously a factor here. Server runtimes may use all the memory available to go fast. As long as it fits, footprint seems a lot less important than other factors.

Using a server-side language runtime that can use multiple cores and has a low memory footprint, so that it handles greater scale at lower hosting costs, seems like a sensible move. Of course, as you said, only if it is easy enough to do so. Which I think was the author's point.


> The choice of a pure functional language like Haskell to do lots of IO seems like a strange choice given that Haskell makes side effects like IO more difficult than other languages. I'm curious to know how that affected the implementation. I'd like to use more functional languages, but since my job is primarily IO of some sort, watching people struggle with writing to sockets leaves me more than a little hesitant.

Disclaimer: I haven't read this yet, but your comment above caught my eye and what I'm about to comment applies regardless.

Maybe it makes IO slightly more difficult to begin with, but after that you can be sure all your cases are handled. Plus, once you know about fmap and >>= you can apply pure functions to monadic values (values of type IO are monadic because IO implements the Monad typeclass; think of it as an interface for now).
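
For instance, a minimal sketch: `fmap` applies a pure function inside IO, and `>>=` feeds one action's result into the next.

```haskell
import Data.Char (toUpper)

-- fmap lifts the pure function (map toUpper) over the result of
-- the IO action getLine; >>= pipes that result into putStrLn.
shout :: IO String
shout = fmap (map toUpper) getLine

main :: IO ()
main = shout >>= putStrLn
```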


I must respectfully disagree that Haskell's memory footprint is simply 'low'. This is because the memory footprint of a given Haskell program is not at all transparent, and Haskell is notorious for leaking memory in a maddeningly opaque fashion [1, 2, 3, 4]. Space leaks might be relatively straightforward to diagnose and fix for a true domain expert, but I would not want to have to rely on someone having such abstruse knowledge in a production application. It goes without saying that a space leak in a production app is a really, really bad thing.

Although, I suppose one's choice of Haskell is a function of one's own risk/reward profile. Haskell and its failure modes are hard to understand. That induces extra risk that some people (myself included) might be uncomfortable with. That said, I am now enthusiastically following you guys and hope to see the proverbial averages get sorely beaten.

[1] http://neilmitchell.blogspot.com/2013/02/chasing-space-leak-...

[2] http://blog.ezyang.com/2011/05/calling-all-space-leaks/

[3] http://blog.ezyang.com/2011/05/space-leak-zoo/

[4] http://blog.ezyang.com/2011/05/anatomy-of-a-thunk-leak/
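
A classic instance of the kind of leak the linked posts dissect: lazy accumulation in a left fold. (With optimization, GHC's strictness analysis can sometimes rescue the lazy version, which is part of why these leaks are so opaque in practice.)

```haskell
import Data.List (foldl')

-- Leaky (at least without optimization): foldl builds a
-- million-deep chain of unevaluated (+) thunks before anything
-- is forced, which can exhaust heap or stack.
leakySum :: Integer
leakySum = foldl (+) 0 [1 .. 1000000]

-- Fixed: foldl' forces the accumulator at each step, so the fold
-- runs in constant space.
strictSum :: Integer
strictSum = foldl' (+) 0 [1 .. 1000000]

main :: IO ()
main = print strictSum   -- prints 500000500000
```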


We use Haskell extensively for all of our cloud services software at Plumlife and it has been one of the best engineering decisions of my career.

Amazing language and ecosystem.


I'd love to hear more (and talk offline).

My company is in the very early investigative stages on the Haskell front.

You can reach me at my name (including middle initial) which is this HN name, at gmail dot com.


I've been learning Haskell off and on for the last five years, and despite the steep learning curve for an imperative programmer it's well worth it. Once you grasp the idea of what not how, and embrace type inference it just gets so much easier.

The world seems to be heading towards functional programming, with Swift and the increase in functional constructs in c#. Much easier when the compiler does more work for you.


"There is a joy in programming with Haskell. ... There's so much working Haskell code out there that you can just stare at for days and not really understand (but always use!)."

Is that a positive quality of Haskell?


I think that, while there was a lot of good in the OP, this part was poorly communicated.

I'd guess that he's talking about something like the Lens library, which is incredibly easy to use, and comes from a place of great aesthetic sense, but whose type signatures take some time to really get. I had to work some things out on paper to see how the general Lens type signature:

    forall f. Functor f => (a -> f b) -> s -> f t
applied to all the "magic" that can be done with lenses. And then you also have Traversals and Prisms. There's lots of stuff that just works, but requires some depth of knowledge to understand how the magic is built. I imagine that Frames, as it matures, will be in the same category (you need a lot of type-level programming to get type-safe data-frames).
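
To make that signature concrete, here is a from-scratch sketch (no lens package; `view` and `over` here are simplified stand-ins for the library's versions) of how the one polymorphic type yields both a getter and a setter by choosing different functors:

```haskell
{-# LANGUAGE RankNTypes #-}
import Data.Functor.Const (Const (..))
import Data.Functor.Identity (Identity (..))

type Lens s t a b = forall f. Functor f => (a -> f b) -> s -> f t

-- A lens onto the first element of a pair.
_1 :: Lens (a, c) (b, c) a b
_1 f (a, c) = fmap (\b -> (b, c)) (f a)

-- Instantiate f = Const a to read...
view :: Lens s t a b -> s -> a
view l s = getConst (l Const s)

-- ...and f = Identity to write.
over :: Lens s t a b -> (a -> b) -> s -> t
over l g s = runIdentity (l (Identity . g) s)

main :: IO ()
main = do
  print (view _1 ("hello", 42))        -- "hello"
  print (over _1 length ("hello", 42)) -- (5,42)
```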

That said, I still don't understand how certain languages (which shall remain nameless, and I'm not talking about Haskell here) implemented a type-safe printf that was nevertheless easy to use. In a statically typed language like Haskell, you at least get immediate insight into what you don't understand. In a dynamic one, you can be led to feel like you understand more than you actually do.


If you mean OCaml, the typesafe printf uses compiler magic.


Yeah, I'm never sure about language extensions. I've been trying really hard to avoid just adding them willy-nilly to solve various problems, but I've found there are at least 4ish that I include by default on every new Haskell project I write (and I'm up to 9 on my current project!). Sometimes it feels like their existence is a hindrance to fixing some flaws in the base language (Haskell without OverloadedStrings makes me sad).
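
For context, `OverloadedStrings` is the pragma that lets a plain string literal stand for any `IsString` type (most commonly `Text`). A base-only sketch of the mechanism, with an illustrative `Sql` newtype:

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Data.String (IsString (..))

newtype Sql = Sql String deriving Show

instance IsString Sql where
  fromString = Sql

-- With the pragma, the literal is desugared to fromString and can
-- be any IsString type; without it, this would be a type error.
query :: Sql
query = "SELECT * FROM users"

main :: IO ()
main = print query   -- prints Sql "SELECT * FROM users"
```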


> increases productivity by a few orders of magnitude

I'm the biggest Haskell fanboy you'll come across, but this is only true if an order is not much bigger than 1!


> Manually testing: IO related failures

These are clearly ones that I would be testing as part of the test suite...


Why was Golang not a consideration?


My guess is probably the same reason why the author writes "NONE" in the "Ease of / Desire to programming" section for Java... because brackets, pointers, for loops, using : to declare types, and other such C-derived language constructs / tokens.

This may be a generalization, but from my experiences so far, it seems there are two major schools of programmers in today's world: those who come from Java / C, and those who come from Python / Ruby.

The Java/C school likely did a lot of low-level stuff (hardware, OS, compilers, and the like) in college. If required, they can probably crack open the GNU debugger and crank through assembly. They concern themselves primarily with what the computer can efficiently execute. In their work, the reader will find lots of loops, indexed temp variables, and comments documenting what does what. Today, they tolerate working with higher-level tools like Go, vanilla JS, Rust, TypeScript, etc.

The Python/Ruby school likely did a lot of math and scientific computation in college. In their hard drives, you can probably find Matlab, R, or (more recently) Julia files containing everything from implementations of Newton's method to routines for solving Navier-Stokes. They concern themselves primarily with theory and models, and prefer elegant, beautiful models/code to optimized performance. In their work, the reader will find tons of map-filter-reduce chains, "arrows", and few comments (they argue their code is clear). Today, their higher-level toolchain includes things like Haskell, CoffeeScript, HAML, etc.


The guys who made Go (Robert Griesemer, Rob Pike, and Ken Thompson) actually found that it was Python and Ruby devs that were adopting Go, rather than C++ devs like they expected [1].

[1]: http://commandcenter.blogspot.com/2012/06/less-is-exponentia...


> In their hard drives, you can probably find Matlab, R, or (more recently) Julia files containing everything from implementations of Newton's method

This is a caricature at best. Lots of great, fast, commonly-used libraries for scientific computation are actually in C++. See e.g. Eigen, Blaze, Armadillo, MLPack, CGAL, Caffe, libsvm/liblinear, and the GNU Scientific Library.

The ecosystem for such libraries in Haskell is currently (unfortunately) sparse. And in Python they are mostly wrappers around C and C++ code (apparently, Python programmers also like to dive into C or Cython when it needs to be fast).


Hrm, I'm pretty sure I'm not alone in the Java/C cohort around ~2004-2006 who discovered Ruby/Python and friends and felt.. liberated. Java 1.3/1.4 was so thoroughly completely awful and unproductive, I'm quite sure the uptick in dynamic language popularity was a reaction to that.

Funnily enough, it's the few years of Perl+Moose I've done since which has made a switch back to python just over a year ago a bit hard to stomach. Now that I've tasted something resembling an expressive type "system" (however narrow, incomplete and warty it is), which encourages immutable objects, and does so in a very painless and idiosyncratic way - while still bringing many of the benefits of declarative/"up front" programming for free - I really, really miss it.

So of all the things that could have made me realize I've had enough of dynamic languages, it certainly feels odd that a Perl OO bolt-on would show me a glimpse of what I've been missing out on. And even weirder that the closed-mindedness of the average pythonista made up my mind to move on.


Rust has all the hipster functional programming aspects that this high-level language hipster wants...


Except for HKT.. the true litmus test of the highest of high lvl lang. We're waiting Rust.. we are waiting.


Rust seems to take far more from the ML school than it does from Miranda/Haskell, however.


Haha this reply cracked me up! So stereotypical and somehow true.


If you're going to satisfy a type-checker, you might as well negotiate with a reasonable one that understands what you mean and doesn't confuse the obvious :)


Type-driven development, purity, equational reasoning, exhaustiveness checking, pattern matching, STM, Maybe/option types, generics, and alternative concurrency methods?

These are many of my reasons at least.
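
As a small illustration of two items on that list, Maybe types plus exhaustiveness checking (`describe` is an illustrative name):

```haskell
-- lookup returns Maybe, so the absent case can't be ignored:
-- pattern matching must cover both constructors, and GHC's
-- -Wincomplete-patterns warning flags any missed case.
describe :: Maybe Int -> String
describe Nothing  = "no port configured"
describe (Just p) = "port " ++ show p

main :: IO ()
main = putStrLn (describe (lookup "port" [("host", 80), ("port", 8080)]))
-- prints: port 8080
```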



