A founder's perspective on 4 years with Haskell (baatz.io)
347 points by mightybyte 499 days ago | 195 comments



Maybe someone can chime in here - I would love to be working full-time in Haskell, but I'm having trouble figuring out just how much knowledge I need upfront. I know I would be reasonably good at it after 2-3 months of working full-time with some Haskell expert to keep bugging. I try to learn as much Haskell as possible after my normal work but of course my rate of learning is much slower than if it was my full-time job. Looks to me like a chicken-and-egg problem. Anybody have any tips/knowledge of getting a Haskell job?


I've written working servers and deployed them using Haskell but am having trouble getting to the phone screen stage. I decided to offer a really low rate for a remote contract and at least got someone to agree to a phone call. It's definitely an employer's market, but just apply, and network; I believe my connections in the Haskell community will eventually lead to a job. I've gone to LambdaConf and will be at MonadLibre and StrangeLoop. So far I've met Chris Allen and will meet Ed Kmett among others. They are very approachable! Haskell isn't the end goal for me, but if I'm going to be working on someone else's vision, I really want it to be in Haskell. The amount and relative ease of getting Scala jobs is depressing, but keep fighting the good fight.


In my experience, it is tricky for people to learn Haskell meaningfully without a team of people around to explain the concepts. I'd say write a few real programs in Haskell. Beyond that, if you find the right company, they will hire based on general engineering skills rather than Haskell-specific skills. When it comes down to it, Haskell is just another language with a particular set of capabilities.

Also there are occasionally posts to the /r/haskell subreddit with job listings.

Good luck!


The new Haskell Design Patterns book, I think, fills a lot of the existing gap.



Just apply. Chances are that a company is willing to talk with you solely on the fact that you're interested and self-taught. There are so few jobs in Haskell compared to the rest of the market, and so few candidates with experience, that you probably already have a leg up. My team works primarily in Erlang and we're normally willing to interview people with no Erlang experience but a desire to learn. I can't imagine a Haskell shop being any different.


Depends on how prominent they are, and thus how many applications they already get.

E.g., getting into Jane Street requires a firm grasp of functional programming, not just a desire to learn.

In general, decent advice. If you don't get in now, try again in a year. There's almost no downside.


Honestly, given the current hype it's probably an employer's market right now (as another poster mentioned), so your best chance is probably getting into a position where your managers actually trust you to make sound technical decisions and changing from $LANGUAGE to Haskell (or starting a new project in Haskell). Obviously, if you're changing an existing project you'll want to do this gradually, so any microservice-based architecture you can get in place probably helps the process along.

Of course, the problem with Haskell being what it is, you'll probably end up with very few bugs to squash, so you may end up programming yourself out of a job :). Hopefully the area you work in isn't quite that static, and requirements will change from time to time...


I've just been through a round of interviews looking for Haskell / OCaml jobs. If anything, it was an employee's market. But my experience might be biased, since I already have a few years of professional experience with that kind of FP.


I am interested in the uses.

I remember reading that functional programming (FP) leads to more readable code and fewer lines, and that financial companies use it for some reason. I assume it must be because FP programs offer stronger guarantees about their results, but I am not sure. Are there side effects in FP languages?


ML languages (SML, OCaml) support side effects through mutable references, plus sequencing when you need it. Haskell aims for purity, so you encode side effects as state threaded through functions (the State and IO monads); other kinds of side effects are possible with this pattern too (continuations, ...).
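To make the "threaded state" idea concrete, here is a minimal hand-rolled sketch of the State pattern (my own example; in practice you would use Control.Monad.State from the mtl package rather than defining this yourself):

```haskell
-- A stateful computation is just a function from an initial state
-- to a (result, new state) pair. "Mutation" is threading the state
-- through a chain of such functions.
newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f (State g) = State $ \s -> let (a, s') = g s in (f a, s')

instance Applicative (State s) where
  pure a = State $ \s -> (a, s)
  State f <*> State g = State $ \s ->
    let (h, s')  = f s
        (a, s'') = g s'
    in (h a, s'')

instance Monad (State s) where
  State g >>= f = State $ \s ->
    let (a, s') = g s in runState (f a) s'

get :: State s s
get = State $ \s -> (s, s)

put :: s -> State s ()
put s = State $ \_ -> ((), s)

-- A counter "mutated" purely by threading state through the binds.
tick :: State Int Int
tick = do
  n <- get
  put (n + 1)
  return n

-- runState (tick >> tick >> tick) 0 == (2, 3)
```

The IO monad works on the same sequencing principle, except the "state" is the outside world and the runtime performs the effects.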


Pointers to two projects (among several) using Haskell, by Joey Hess, as examples of self-propelled learning of Haskell while solving a problem:

[1] Propeller [2] git-annex

[1] twenty years of free software -- part 12 propellor

http://joeyh.name/blog/entry/twenty_years_of_free_software_-...

[2] twenty years of free software -- part 7 git-annex

http://joeyh.name/blog/entry/twenty_years_of_free_software_-...


Just keep learning next to your job. Or find some ways to fit a bit of Haskell into your day job at the edges, perhaps there's some small tool you can write in Haskell?

I've just interviewed for a bunch of FP jobs. The going is getting better and better. (I went with an OCaml job this time and declined two Haskell offers, but that's because of other factors, not the languages themselves.)


I am not sure where you live but meetups would probably be an invaluable resource. Check Meetup.com, I've found that most major cities have somewhat regular Haskell user group meetups.


You're already doing the most important thing, which is working on learning Haskell on your own time. But I think there are some things you can do to significantly increase your rate of learning. Far and away the most important idea here is to interact with other Haskell developers so you can learn from them. You're right that doing this at a job would be the best, but there are other things you can do.

1. IRC. Join the #haskell channel on Freenode. Lurk for awhile and follow some of the conversations. Try to participate in discussions when topics come up that interest you. Don't be afraid to ask what might seem to be stupid questions. In my experience the people in #haskell are massively patient and willing to help anyone who is genuinely trying to learn.

2. Local meetups. Check meetup.com to see if there is a Haskell meetup in a city near you. I had trouble finding a local meetup when I was first learning Haskell, but there are a lot more of them now. Don't just go to listen to the talks. Talk to people, make friends. See if there's any way you can collaborate with some of the people there.

3. Larger non-local Haskell events. Find larger weekend gatherings of Haskell developers and go to them. Here are a few upcoming events that I know of off the top of my head:

Hac Boston - http://www.meetup.com/Boston-Haskell/events/231606922/

Budapest Hackathon - https://wiki.haskell.org/Budapest_Hackathon_2016

Compose Melbourne - http://www.composeconference.org/2016-melbourne/unconference...

MuniHac - http://munihac.de/

Hac Phi - https://wiki.haskell.org/Hac_%CF%86 (2016 is coming but hasn't been announced yet)

The first event like this that I went to was Hac Phi a few years back. Going there majorly upped my game because I got to be around brilliant people, pair program with some of them, and ultimately ended up starting the Snap Web Framework with someone I met there. You might not have a local meetup that you can go to, but you can definitely travel to go to one of these bigger weekend events. I lived a few hours away from Hac Phi, but I know a number of people who travel further to come. If you're really interested in improving your Haskell, it is well worth the time and money. I cannot emphasize this enough.

4. Start contributing to an open source Haskell project. Find a project that interests you and dive in. Don't ask permission, just decide that you're going to learn enough to contribute to this thing no matter what. Join their project-specific IRC channel if they have one and ask questions. Find out how you can contribute. Submit pull requests. This is by far the best way to get feedback on the code that you're writing. I have actually seen multiple people (including some who didn't strike me as unusually talented at first) start Haskell and work their way up to a full-time Haskell job this way. It takes time and dedication, but it works.

5. Try to get a non-Haskell job at a place where lots of Haskell people are known to work. Standard Chartered uses Haskell but is big enough to have non-Haskell jobs that you might be able to fit into. S&P Capital IQ doesn't use Haskell but has a significant number of Haskell people who are coding in Scala.


Apply to StanChart to work on the Cortex team. They are always hiring. At the very least you will come away knowing what you need to know.


Interesting, but so many caveats (e.g. writing rough libraries for things like sending emails). I find that Scala offers a better balance by enabling functional programming while still taking advantage of a huge, mature ecosystem including great libraries and tools.


Working in Scala and enjoying it a lot! You have a library for everything, and the syntax expresses functional code elegantly. The mix of object-oriented and functional programming can make it all seem a little confusing, though. A problem Haskell doesn't have.


There are libraries for sending email. He was talking about a specific cloud API for sending email.


Last time I checked, and I did check recently, I couldn't find a Haskell library that offered an easy way to send email using a Gmail account. Resulting in this homegrown example, which doesn't work: https://github.com/runeksvendsen/restful-payment-channel-ser...


Scala loses a lot of the guarantees that Haskell has and I find that Scala programmers often use it in a way that feels more like nicer Java as opposed to functional style.

The tooling is amazing though.


No argument there. My point is "balance", though. You lose some guarantees, but gain a lot, and the quality of a codebase will always be up to its programmers.


    > You lose some guarantees, but gain a lot, and the quality 
    > of a codebase will always be up to its programmers.
I started using Haskell because I thought it was cool how it could spread out work over multiple cores. I continue using it because of its guarantees, enabling me to do large refactors and be reasonably confident that the compiler has caught all errors (if I construct my types properly).

I would argue that the only true value a compiler has to offer are guarantees. A compiler that is able to do something "most of the time" isn't worth much. So, losing some guarantees may be both crucial and irrelevant, depending on which guarantees we're talking about.


Speaking of the ecosystem, does anybody here have experience with Frege (Haskell for the JVM)? Does it feel natural to use existing Java/JVM libraries inside a Haskell dialect?


I dabbled with it about six months ago. I write most of my software in Scala, but I continue to hit the ceiling and run into places where type inference fails, compiler bugs add friction, or other oddities (22-tuple limit) get in the way. One of my experiments was to see if I could write some portions of my software in Frege, and some in Scala to get the best of both worlds.

I think the answer to that question appears to be yes [1], but I stopped experimenting. I am considering resuming my experiment now that Frege appears to be crystallizing a bit more.

The Java interop is nice, although it imposes discipline (as it should). Since Java doesn't care about side effects, most of the values returned from a Java function call will be in a Mutable monad. Here [2] is a good walkthrough of how it is done, in case you want to try it out.

[1] https://groups.google.com/d/topic/frege-programming-language...

[2] http://mmhelloworld.github.io/blog/2013/07/10/frege-hello-ja...


Do you have some examples for "type inference fails"?

Do you have some examples for "compiler bug"?


Here is a quick shortlist of some type inference issues (some bugs, some enhancements, and some fundamental limitations of the language design) that I run into pretty much daily:

1. (Search for "asInstanceOf"): http://typelevel.org/cats/tut/freemonad.html

2. (Finally fixed and will hopefully be released soon - yay!): https://issues.scala-lang.org/browse/SI-2712

3. https://twitter.com/extempore2/status/430456002392391681

4. http://pchiusano.blogspot.com/2011/05/making-most-of-scalas-...

5. (Anecdotal): Various pattern matching peculiarities.

Compiler bugs: As long as I write code on the happy path and steer clear of scalaz/shapeless/cats, the compiler is pretty well-behaved and I can go months without getting a compiler crash. However, when I add those in and start writing pure functional code with complicated types that hits the intersections of various features (e.g., GADTs, package objects, implicits), I get a few crashes per week related to known bugs.

It works pretty well, and I am not trying to complain too much about all the mileage I get out of Scala and the excellent tooling! Just trying to get the most out of it.


> I get a few crashes per week related to known bugs

Interesting! Can you link to the issue?


Sure, you bet - I will start collecting them as they occur. However, I no longer find new bugs, it seems. I'll break something, google for the error message, and either find something on issues.scala-lang.org, IRC, or one of the Google Groups.

My current code base is not open-source just yet, so reporting self-contained reports may not be possible. (Currently I'm writing an interpreter for a fairly big declarative language using FreeApplicative from cats as the substrate, and I think the sheer size of it along with shapeless and scalaz usage is causing some undesirable feature interactions.)

Where would be the best venue for reporting (known) bugs? Is there any value if I can't provide the actual code? Or is just reporting a "I experienced it too" message valuable?


I think this would help a lot. It would also help to check whether the code mentioned in an issue that triggers a crash is equivalent to the code you have.

Even if you can't publish your code, it might help to say "I'm seeing the same error, but I'm using X unlike the example code that uses Y".

This will make sure that people who fix a bug will cover both causes with a test.


I just bumped into SI-1503 [1] and SI-7714 [2]. Fortunately I was able to work around those and keep plugging away.

[1] https://issues.scala-lang.org/browse/SI-1503

[2] https://issues.scala-lang.org/browse/SI-7714


Thanks!

SI-1503 should warn in 2.12, additional progress might happen in https://github.com/scala/scala/pull/5310.


Here is a new one that I reported to scala-internals today: https://groups.google.com/forum/#!topic/scala-internals/w4HG...


Great, thanks!


I hit two bugs today. Both were produced during major global refactoring. I did not have time to catalogue the reproduction steps or anything, so I have no right to complain!

1. https://issues.scala-lang.org/browse/SI-9281

2. https://issues.scala-lang.org/browse/SI-8322



Thanks! What the hell are you doing that you experience issues this often? :-)


I'm working on extending this ([1]) to contravariant functors while making the whole thing polytypic and typesafe (so I can abstract over type products and coproducts, among other things). My stuff will be open-sourced next year sometime at the latest. I think the combination of feature interactions and frequent refactoring during development is the source of a lot of code that is not nice to the compiler.

edit: The extension to [1] is actually pretty benign and the compiler does well with it. The part that puts a big load on the compiler is that it is used as an interpreter for a popular, yet complicated, visualization DSL. The compiler, especially the typechecker, has to do a huge amount of work.

[1] https://github.com/jdegoes/scalaworld-2015/tree/master/src/m...


We use Scala, and the advantages you mentioned are definitely nice, but one problem is that different Scala libraries aren't as likely to have consistent APIs as with Haskell, since pure FP is only optional in Scala but mandatory in Haskell.


This is especially frustrating when importing Java libraries. Even more so when your project uses Scala's `Future`, but the Java library uses something like RxJava and suddenly you're writing a huge wrapper library and realize you might be better off writing a Scala lib from scratch.


It very much depends on how far you want to take your functional programming. If you want to make heavy use of purity, monads, type classes, tail recursion etc, you will be much better off in Haskell.


I understand. I don't want to do that, though.


For e-mails, if you're sending out any large volume, I always use a cloud service that has already solved the problems of sending e-mail without getting blacklisted; in that case it's simply a matter of using something like the fantastic Servant library to make the HTTP requests.


Thanks for posting that! We've also been running Haskell at scale in production for a few years at Front Row; great to see others pulling off the same successfully. It's a small community, so I'm happy to pool our resources to keep making this a better ecosystem to build businesses on.



<3


I was really disappointed by Haskell when I wrote the simple dynamic programming solution to the knapsack problem in it. To get good performance out of that took a lot of time and help from people at #haskell to deal with space leaks.

Ultimately, the functional solution was verbose, harder to understand and still slower than the more imperative solutions in Clojure (also very hard to get good performance in BTW, but at least you can easily implement performance critical stuff in Java) and Ruby.

That experience really turned me off Haskell.


Why didn't you just write the imperative solution in Haskell?


Dynamic programming can be very declarative in Haskell: http://lpaste.net/172965
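For readers who can't reach the paste, here is a sketch of what that declarative style can look like (my own example, not necessarily the code from the link): a 0/1 knapsack where a lazily built array serves as the memo table, so the recurrence reads almost like the textbook definition.

```haskell
import Data.Array

-- 0/1 knapsack: maximize total value within a weight capacity.
-- Laziness lets us define the whole table at once; each cell is
-- computed on demand from earlier cells.
knapsack :: Int -> [(Int, Int)] -> Int   -- capacity, [(weight, value)]
knapsack cap items = table ! (n, cap)
  where
    n  = length items
    ws = listArray (1, n) (map fst items)
    vs = listArray (1, n) (map snd items)
    table = array ((0, 0), (n, cap))
      [ ((i, w), cell i w) | i <- [0 .. n], w <- [0 .. cap] ]
    cell 0 _ = 0                              -- no items, no value
    cell i w
      | ws ! i > w = table ! (i - 1, w)       -- item i doesn't fit
      | otherwise  = max (table ! (i - 1, w)) -- skip item i
                         (vs ! i + table ! (i - 1, w - ws ! i))
```

Whether this avoids the space leaks the grandparent hit depends on strictness details; boxed arrays like this hold thunks, and switching to an unboxed array is the usual next step when that bites.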


> The Better platform gets around half a million learning actions every week and it has been running for well over a year with no downtime or maintenance. We still don’t know of any bugs.

I'm guessing people use your learning software during weekdays and working hours (5 days and 8 hours/day). That's 3.4QPS. How many machines are you running on?

Also, I'm really shocked to hear you've not encountered any bugs. How many humans are using your systems? A scale is good enough (100s, 1000s, 10000s?).


When we ran it, the workload was mostly during weekdays in different timezones (so distributed over more than 8 hours). This was on a single c4.2xlarge AWS server, which was bigger than we needed. To be clear, I wasn't trying to make a claim about high load with that statement, just that the system has been running smoothly with steady activity for a long time.

Each week the number of individual users was in the thousands; that was usually a rolling window since people tend to not do more than a few e-learning courses per year. I'm certainly not claiming that there are no bugs -- in all likelihood there are -- only that no bugs have yet crashed the system or were obvious and serious enough for users to tell us about.


>That's 3.4QPS.

Who said a "learning action" is equivalent to a query?

>Also, I'm really shocked to hear you've not encountered any bugs.

What's so shocking about it?


What bugs are we talking about? You've never encountered a functional bug or a security bug? That's definitely a good track record. Bug studies on languages point to only a slightly lower rate of bugs for Haskell compared to other languages, so to have none is quite an outlier.


I only know of two somewhat comprehensive studies on this. One says Haskell had far fewer bugs and the other has a conclusion like yours.

You make it sound like there are numerous studies supporting your conclusion, can you link a few of them?


I've written Haskell myself and it is indeed generally free of bugs. However, there are many things the type system alone cannot fully address, for example boundary conditions (Haskell functions can be non-total; and yes, I know there are type-level integers and the like, but they are often a pain to use).


> What's so shocking about it?

It's a different style of development.

A lot of people have never worked on projects without lots of bugs.

My current F# project for example has almost 100 outstanding issues.

They would be really surprised to know that lots of people have worked on large C projects that are virtually bug free.


To be fair, not encountering bugs often comes down to one of two things: a) aggressively defining user issues as out of scope, or b) not having a varied user pool.

Examples of (a) are not always malicious, mind. For a small shop, if your program messes up because the machine it was running on got unplugged mid-process, it is easy to see why that can be called not a bug in the system. When your shop gets large enough, though, scenarios like that become far too common.


This is a great article - functional languages like Haskell don't get enough credit in a world where JavaScript's shenanigans are the accepted norm.

Also check out F#, especially if you already have a code base involving .NET in any way. The CLR makes it super easy to write some parts in a functional language and other parts in more traditional OO - after all, the right approach often varies even within projects.

As a personal anecdote, taking the time to learn Haskell or any other functional language makes you a better programmer. The concepts often apply to less 'pure' languages and certainly stretch you to think in new ways.


Shenanigans is a good word for that. So many people spend so much time learning the ins and outs of JS, which often find use only in JS, yet balk at learning the simplest things about Haskell. It does not help that 'theoretical' and 'academic' are so often used as slurs.


>>So many people spend so much time learning the ins and outs of JS, which often find use only in JS, yet balk at learning the simplest things about Haskell.

If the lenses for records thing in Haskell is amongst the simplest things for you, I beg to differ. I spent considerable time trying to get this working but could not succeed beyond toyish examples. Combine lenses/records with exceptions, applicatives, monads and monad-transformers as they become necessary once you try to expand your toy-example code and the Haskell thing becomes anything but simplest.

Yet, other languages support records flawlessly and almost out of the box.

The heavy and unnecessary usage of symbols, like, >>, <<, >>=, >=>, <=<, and so on just adds cognitive overload and gives no inherent benefit but most of the Haskell library code is littered with it and due to almost arbitrary overloading of these symbols to mean different things in different contexts (libraries) just adds more cognitive overhead without any benefit. The almost cult-like insistence on the excessive usage of meaningless-to-humans symbols reminds me of the great practical programming language known as brainfuck [1]. Sorry but no sarcasm intended. Some Haskellers harp on succinctness, when talked about this religious fetish for symbolism in Haskell. So I ask, such succinctness at what cost?

The excessive and obsessive usage of symbols may give mental kicks to the hardcore Haskellers out there, but sorry, I don't see any incentive to waste my time trying to wade through the gobble-de-gook of brainfuck type code.

So sadly, Haskell and me are not a match. Maybe some most common things in other languages when tried in Haskell seem to be (are) rocket-science (e.g. lenses for records) and/or maybe I am a daft blockhead, but that's so.

[1] https://en.wikipedia.org/wiki/Brainfuck


> The heavy and unnecessary usage of symbols, like, >>, <<, >>=, >=>, <=<, and so on just adds cognitive overload and gives no inherent benefit but most of the Haskell library code is littered with it

This is an oft-repeated criticism of Haskell which I frankly find has no basis. Firstly, symbols are preferred for these kinds of functions precisely because of their ubiquity. Once you're familiar with them (which you almost certainly will quickly become, for the common ones like `>>=` and `<*>`) they make the code easier to read, not harder, in general. Few would complain about having to write "<" to compare two numbers in most languages, rather than, say, "lessThan", because it's all over the place, and having a symbol for it makes it easier for a developer to read. The same is true for many common combinators in Haskell. There is no "cult-like insistence" on them; it's simply the case that many Haskell developers find them more pleasant to write than named functions. Honestly, the comparison to brainfuck is completely unfounded, and the references to cults and religious fetishes are IMO unnecessarily derisive.

Furthermore, I'd add that what makes the symbols seem hard is not the fact that they're symbols, but that the ideas that they're representing are challenging when coming from the world of imperative programming. Using a named function like "bind" rather than ">>=" or "apply" rather than "<*>" is not going to help the fact that these functions are complicated and unintuitive to newcomers. I think a lot of the complaints about Haskell symbols, much like complaints about Haskell syntax, are more driven by how different Haskell semantics are to mainstream languages. Once a developer becomes comfortable with the "zen" of Haskell, these criticisms tend to melt away. That isn't to say that everything becomes easy: there are many concepts that one is only likely to encounter in Haskell, and they can often be challenging, or make someone else's code hard to grok. But the difficulty does not stem from the fact that they're written as symbols. IMO.
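To illustrate that the glyph and a name denote the same function, here is a small sketch (the `bindM` name is my own, purely for demonstration):

```haskell
-- A partial computation: halving fails on odd numbers.
halve :: Int -> Maybe Int
halve n = if even n then Just (n `div` 2) else Nothing

-- Symbolic form: the standard monadic bind.
symbolic :: Maybe Int
symbolic = Just 20 >>= halve >>= halve   -- Just 5

-- Named form: exactly the same function under a word.
-- The concept (sequencing computations that may fail) is the
-- hard part; the spelling changes nothing.
named :: Maybe Int
named = (Just 20 `bindM` halve) `bindM` halve
  where bindM = (>>=)
```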

As for your comments on lenses and records, I've been writing Haskell for a good number of years and still haven't really spent the time to grok lenses. I think they're probably great, but certainly mastery of them is not required to be productive with the language.


> This is an oft-repeated criticism of Haskell which I frankly find has no basis. Firstly, symbols are preferred for these kinds of functions precisely because of their ubiquity. Once you're familiar with them (which you almost certainly will quickly become, for the common ones like `>>=` and `<*>`) they make the code easier to read, not harder, in general. Few would complain about having to write "<" to compare two numbers in most languages, rather than, say, "lessThan", because it's all over the place, and having a symbol for it makes it easier for a developer to read. The same is true for many common combinators in Haskell. There is no "cult-like insistence" on them; it's simply the case that many Haskell developers find them more pleasant to write than named functions. Honestly, the comparison to brainfuck is completely unfounded, and the references to cults and religious fetishes are IMO unnecessarily derisive.

My issue, back when I was exploring Haskell, was not so much with the symbols in the standard library (eg, for applicative functors), but with the amount of library authors who thought they needed their own little shitty ASCII DSL.


>>As for your comments on lenses and records, I've been writing Haskell for a good number of years and still haven't really spent the time to grok lenses. I think they're probably great, but certainly mastery of them is not required to be productive with the language.

Give me any non-trivial industry scale web/database application which doesn't rely on the notion of records. The support for records in Haskell is if not pathetic is very poor.[1] (This is a rather older reddit post but I haven't kept track of it.)

I can be productive with Haskell if I restrict myself to math-type pure computations (e.g. DSL compiler) but if you want anything to do with web/database type non-trivial IO using Haskell, good luck.

So, I restrict myself to use Haskell only for doing pure computations that require extremely simple and trivial IO (mostly just readFile/writeFile).

>>This is an oft-repeated criticism of Haskell which I frankly find has no basis. Firstly, symbols are preferred for these kinds of functions precisely because of their ubiquity.

How about Haskell's notion of fixity of the symbols/operators/names?

It seems this almost arbitrary fixity [2] notion just adds cognitive overhead with little to zero semantic benefits. Correct me if I am wrong and please enlighten me on exactly what semantically significant benefits are offered by the notion of fixity and at what cost.

Most other mainstream languages don't offer such a useless flexibility like fixity which causes so much cognitive distraction when used by different people with different semantics, and that too just for kicks.

[1] https://www.reddit.com/r/haskell/comments/k4lc4/yesod_the_li...

[2] https://www.haskell.org/tutorial/functions.html


> The support for records in Haskell is if not pathetic is very poor.

Be specific. Support for record field access is fine. Support for record creation is fine. Support for pattern matching on records is fine. There are two "problems with records" in Haskell.

First, unlike many languages, records don't create implicit namespaces, so if you define Foo { x :: a } and Bar { x :: b } in the same module you have ambiguity. This is addressed, in part, by several recent GHC extensions. It could always be handled by putting Foo and Bar in different modules (with some added complexity if Foo and Bar are mutually recursive), but the community has settled on prefixing (Foo { fooX :: a }, Bar { barX :: b }), which is a little ugly but perfectly workable.

Second, while support for member update is okay when shallow, it compounds in a particularly nasty way when it gets deep. So what in Java would be "x.y.z.a = 7", in Haskell is something like "x { y = (y x) { z = (z (y x)) { a = 7 } } }" - horrific, I agree. But note that "x.y.z.a = 7" isn't great practice in Java, either! Why do I know about the fields of a field of a field of my object? Far better if that's packaged up in a setter. And if you define setters, the Haskell gets cleaner too. Lens is a principled way of defining getters and setters (and more) that can be composed and manipulated consistently - but it does have a huge vocabulary which obviously creates a barrier until you've learned it. But you don't need lens to write setters (and for that matter, you don't need to know all of lens to use some pieces of it).
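A sketch of the contrast described above (type and field names are mine): the plain nested update versus hand-rolled composable setters, no lens library required.

```haskell
-- Matching the Java chain x.y.z.a = 7 from the comment above.
data Inner = Inner { a :: Int }   deriving Show
data Mid   = Mid   { z :: Inner } deriving Show
data Outer = Outer { y :: Mid }   deriving Show

-- Plain record-update syntax: every intermediate level repeated.
setPlain :: Outer -> Outer
setPlain x = x { y = (y x) { z = (z (y x)) { a = 7 } } }

-- Hand-rolled "modify" helpers compose and hide the nesting;
-- lens generalizes exactly this pattern.
overY :: (Mid -> Mid) -> Outer -> Outer
overY f x = x { y = f (y x) }

overZ :: (Inner -> Inner) -> Mid -> Mid
overZ f m = m { z = f (z m) }

setA :: Int -> Inner -> Inner
setA v i = i { a = v }

setNice :: Outer -> Outer
setNice = overY (overZ (setA 7))
```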

I've also found that RecordWildCards makes records genuinely nice to work with in many cases - unpack the record into the local scope, at which point the prefixed names don't seem verbose, transform things as I want, and then re-collect the fields I can while explicitly setting those I need to.
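A small example of the RecordWildCards style (type and field names are mine):

```haskell
{-# LANGUAGE RecordWildCards #-}

-- The conventional prefixed field names...
data User = User { userName :: String, userAge :: Int }

-- ...stop feeling verbose once User{..} unpacks every field
-- into local scope.
greet :: User -> String
greet User{..} = userName ++ " (" ++ show userAge ++ ")"
```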


A great, even-handed, summary of the situation with records in Haskell!


thanks! :)


First of all, I never claimed that libraries didn't use records; certainly nearly all do. I was talking about lenses when I said you don't need to master them to write good code. I'm not sure what it means for a record system to be "pathetic", but although there are issues with records in Haskell, it (a) is improving with new features forthcoming in GHC and libraries, and (b) has not stopped developers from writing millions of lines of robust, practical code. I think the "record problem" in Haskell is a legitimate one but not one which will ever prevent you from writing good code. At worst it's an annoyance.

As to the questions about fixity, this is once again in practice not a big problem. First of all, the notions of operator fixity and association are familiar to all of us since grade school. We all know "please excuse my dear aunt sally" or some similar mnemonic for the precedence rules of the operators of arithmetic. It's this rule that lets us write "a^2 + 2 * b^2 * c^(2 / 3)" instead of "(a^2) + ((2 * (b ^ 2)) * (c ^ (2 / 3)))". In other words it's a way to make an expression easier to read and parse by removing superfluous parentheses. It's not arbitrary at all, and in fact operator precedence rules are a feature that every other mainstream language has.

As to what semantic benefits there are to fixity, fixity is a purely syntactic notion. It has nothing to do with semantics. Giving your operator an associativity and precedence only affects how a particular expression is parsed, not how it's interpreted or evaluated.

I'd like to mention that your comments so far seem unnecessarily combative and hyperbolic. You use phrases and words like "pathetic", "good luck", "useless flexibility", "zero benefits", "please enlighten me". This injects contention into a discussion where there needn't be any. Furthermore, your statement that you only use Haskell for pure computations with minimal IO makes me question the legitimacy of your criticisms. Certainly, there are countless others who have written all manner of heavily interactive programs, web servers, database interaction, or IO-intensive programs in Haskell. If Haskell isn't your cup of tea, that's fine, but do you really think you're in a position to be leveling such criticisms when your own usage is so restricted?


> How about Haskell's notion of fixity of the symbols/operators/names?

Every language has implicit fixity and precedence for infix operators. What's different about Haskell is that you can actually define infix operators yourself. And if you're going to define your own you need to be able to also define the precedence of your operators. Fixity has always been there. You just didn't realize it.
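
For example, here is what declaring your own operators with explicit fixity looks like (a toy vector type, names invented for illustration):

```haskell
-- A toy 2-D vector with user-defined infix operators.
data V2 = V2 Double Double deriving (Eq, Show)

(.+.) :: V2 -> V2 -> V2
V2 a b .+. V2 c d = V2 (a + c) (b + d)

(.*) :: Double -> V2 -> V2
k .* V2 a b = V2 (k * a) (k * b)

-- Fixity declarations mirror ordinary arithmetic: scaling (level 7)
-- binds tighter than addition (level 6), so
--   2 .* v .+. w   parses as   (2 .* v) .+. w
infixl 6 .+.
infixl 7 .*
```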


>>Every language has implicit fixity and precedence for infix operators. What's different about Haskell is that you can actually define infix operators yourself.

It seems I didn't make my point clear enough. I am asking: what is the semantic benefit to the programmer of explicit fixity and the ability to change fixity at will?

The whole notion of explicit fixity in Haskell exists because of Haskell's fetish (pardon me using this term again) for symbols instead of function names with fixed, implicit fixity. They needed to support explicit fixity only because they wanted programmers to be able to define arbitrary operator symbols with arbitrary fixities.

So, now when you wish to read someone's code, then other than dealing with the inherent complexity present in their code, you also have to deal with the intentionally inflicted accidental complexity (to paraphrase Fred Brooks) over there because of their choice of using symbols with arbitrary fixities. [1]

IMHO, Haskell adds to accidental complexity in this way, instead of curtailing it, just to satisfy the urge to use arbitrary symbols.

Let me explain it a bit further: to read someone's code, first I have to see what symbols they have defined and what fixity they have ascribed to each. Then I have to carry all that additional, unnecessary cognitive load just to get the hang of their code, with their fixity rules attached to their symbols.

[1] https://en.wikipedia.org/wiki/No_Silver_Bullet

edit: s/symbols over functions/symbols instead of functions/


Why does any programming language have infix operators? Because they make some things easier to read. Fixity and precedence are not accidental complexity. They are the essential complexity of having infix operators. In order to evaluate "3 * 4 ^ 3 ^ 2 / 4 / 2 + 5 * 8" you must define the fixity and precedence of the operators involved. You don't usually have to think about defining these things in most mainstream languages because the language defines the fixity and precedence of those operators for you to give the behavior you expect. You still have to learn it though. Do you know how Python evaluates the above expression? (Change the carets to double stars...HN didn't render double star properly.) In my first programming class long before Haskell was created, the textbook had a C operator precedence chart. The concept is not new to Haskell. Haskell has just given you control over something that has been there the whole time.

Haskell lets you make your own infix operators so that you can get the same benefits in other domains than the ones that the language designers decided to provide for you. Therefore it follows that Haskell also needs to allow you to specify the fixity and precedence so that your operators will fit together in the way that is appropriate for your domain.

> Let me explain it a bit further: to read someone's code, first I have to see what symbols they have defined and what fixity they have ascribed to each of those. Now I have to keep all that additional unnecessary cognitive load in my brain to just be able to get a hang of their code with their fixity rules associated with their symbols.

The same is true of C, Java, Python, etc. The only difference is that those languages have a closed set of operators while Haskell's is open. There is a small amount of cognitive overhead, but I think you're greatly exaggerating it. In six and a half years of professional Haskell development I can count on one hand the number of times this has been an issue. And it can be trivially resolved by adding parentheses or by decomposing the expression into separate expressions.


> Therefore it follows that Haskell also needs to allow you to specify the fixity and precedence so that your operators will fit together in the way that is appropriate for your domain.

This is the key point. Operators drawn from my target domain probably already have well established fixities. Forcing my expressions to look radically different than what's in the textbooks and papers discussing the domain in which I'm working is itself introducing incidental complexity.


Haskell has its own share of shenanigans. Pragmas are the most blatant one, but there's also package management, the cumbersomeness of some libraries, seq, and the number of operators (specifically the functorial ones).


Can you expand on all of these complaints? I've been using Haskell for about 9 years, and none of these are things I've ever had a complaint about, except package management, for which I find stack's solution works incredibly well in practice.


Here are some that I have encountered:

Which string type to use? String, ByteString, Text, or something else? How to convert between the zillions of string types, which are, it seems, completely incompatible even though they provide similar functionality? So much for the support of abstraction.

Which library is good for doing X? It seems that, as with strings, Haskell provides too many experimental and/or incompatible and/or immature libraries for many trivial things.

Then there are issues of laziness combined with IO and you get a `seq` shenanigan.

Then there are issues of laziness combined with the inability to profile/debug, and you get an `analysis` shenanigan.

Then there are issues of laziness combined with the inability to handle exceptions without the great grand-daddy-catch-all-bad-things IO monad, and you get an `IO monad exceptions` shenanigan.

So, there are a lot of Haskell shenanigans, too. Take your pick.


I've never personally understood the exaggeration people use when talking about string types in Haskell. You say "zillions", but there's only really five. Five is the most disappointing "zillions" I've ever encountered.

There's strict and lazy ByteString, which you should use whenever you're working with buffers of bytes that don't have any associated encoding and aren't text.

There's strict and lazy Text, which you use for human-readable text that's decoded from some specific encoding, like ASCII or Unicode.

There's String, which you use when interacting with an API that requires use of String, like anything in the language standard.

It's certainly some amount of complexity, but it's not anywhere near as complex as I keep seeing claimed, unless I'm missing something. It's certainly ugly that we still have to deal with String, and that's a notable wart on the language, but I rarely even think about it.
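
For what it's worth, converting among the three is mechanical once you pick an encoding; text and bytestring ship with GHC:

```haskell
import qualified Data.Text as T
import qualified Data.Text.Encoding as TE
import Data.ByteString (ByteString)

-- String -> Text -> ByteString (UTF-8) and back again.
roundTrip :: String -> String
roundTrip s =
  let txt   = T.pack s                             -- String     -> Text
      bytes = TE.encodeUtf8 txt :: ByteString      -- Text       -> ByteString
      txt'  = TE.decodeUtf8 bytes                  -- ByteString -> Text
  in T.unpack txt'                                 -- Text       -> String
```

The explicit encode/decode step is the only place the programmer has to state an encoding, which is arguably a feature rather than a wart.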

One guess I've had about part of the confusion is that ByteString has the word "String" in it, which might make people assume that it's for text. A better name would be "Buffer" or "Bytes". I've also speculated that maybe part of the complexity is having to explicitly encode and decode to get Text, instead of the implicit coercion you get in other languages that freely mix bytes and text.

If you want a greatly simplified way to unify string conversion, you can use string-conv. https://hackage.haskell.org/package/string-conv-0.1/docs/Dat...

Edited to add: it's been pointed out to me that "zillions" might not have actually been a claim that there's enough string types actually keep track of them, but is probably just hyperbole to express some frustration. If that's the case, my apologies for my failure at reading comprehension.


The optimal number of core string types for a language to have is one.

Python got to two, and the community decided that was enough of a PITA to merit a breaking change in the language in order to get back down to one. C++ also has two, but we put up with it because it's C++ so really we're just thankful it's not three or four.

Five is so far beyond the pale that it is absolutely reasonable to hyperbolize it as "zillions".


ByteString isn't really a "string" type; it's a byte buffer. It's not about text. I think that it's entirely reasonable to have a dedicated "Text" type that works with decoded human text, separate from buffers of bytes. I also think that it's entirely reasonable to have lazy and strict variants of core data types; they have very different time/space tradeoffs. It's definitely frustrating that the core language definition has a data type "String" that's a pretty bad choice for almost any application (it's a linked list of characters), and that's a legitimate problem, but I think an accurate characterization of the real problem with string data types in Haskell is much more like "You should be able to use Text everywhere to deal with decoded unicode text, but for unfortunate legacy reasons you have to use String to interact with large parts of the standard library".

Treating all bytes as if they happen to accidentally represent utf-8 encoded unicode text would be a big mistake; there's significant advantages to representing text and byte buffers separately. They are very different things.

Choosing to support only strict or only lazy handling of byte buffers or text would be quite unfortunate; there are significant advantages to both for different algorithms and use cases, and encoding the difference in the type system seems entirely reasonable to me.

I'm curious which of these you disagree with. Would you prefer that Text and ByteString be merged into one data type, so that the compiler doesn't consider it a mistake to treat arbitrary bytes as if it were text without specifying any encoding? Would you prefer that Text and/or ByteString discard support for lazy representation, or for strict representation?


Haskell has 1 unnecessary string type: String.

All the other types provide specific functionality that you won't get otherwise. (String is there only because it is easy to use.) If you really don't care about that extra functionality, just stick with strict Text and be done with it.

Now, if the perfect number of string types is 1, why can't you name a language with that number? By the way, Java also has two string types, as does .NET. Python still has 2 types; they just don't coerce into each other anymore. And I've probably seen a dozen C++ string types already.


I think the ancestor post was referring to python 2's separate str and unicode types, not bytes vs str.


If he does not think Python's bytes vs. str is confusing, he has no basis to claim Haskell's Text vs. ByteString is confusing either.


"ByteString has String in the name" is a basis, however narrow...


Rust has six, plus the Path types:

http://www.suspectsemantics.com/blog/2016/03/27/string-types...

I would say it's actually justified in that.


Those aren't string types, they're buffers for platform-dependent system interoperation. When used, the goal is to convert them to String as soon as possible.

If one considers those to be string types, then Vec<u8>, Vec<u16>, and Vec<u32> would be considered string types as well (in addition to both fixed-size arrays and unsafe pointers to the same).


Doesn't Python 3 have both bytes and str, the equivalent of ByteString and Text?


I have found the same to be true whenever I've tried to do anything in Haskell, especially with regards to error handling. I would try to decide on a specific style of handling errors, but the libraries I depended on maybe did it differently. So there ended up being a lot of boilerplate code to wrap other libraries to my own particular style.

A lot of the problems I encountered probably boiled down to inexperience on my part, and perhaps not understanding idiomatic ways of doing things in the language. But another part of it is that I think the community has not settled on a set of common idioms.

There are things about Haskell that I absolutely love. The expressiveness is amazing. Chaining a monadic sequence can feel magical at times. Treating errors as just values is nice. But what do you do when the libraries that you depend on throw exceptions instead of using Either?

And those awful compiler errors can be so, so frustrating. But I will return to you again one day, dear Haskell.


> But what do you do when the libraries that you depend on throw exceptions instead of using Either?

tome's response is amusing, but assuming you need to keep getting work done and aren't interested in forking the library...

You wrap the library in a shim that forces the return values, catches anything that's thrown, and returns Either. If it's a library that you only use in one or two places, that shim might be inlined.
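
A minimal sketch of such a shim; `riskyLookup` here is a stand-in for the hypothetical throwing library call:

```haskell
import Control.Exception (SomeException, evaluate, try)

-- Stand-in for a third-party function that throws instead of
-- returning Either.
riskyLookup :: String -> IO Int
riskyLookup "ok" = pure 42
riskyLookup k    = error ("no such key: " ++ k)

-- The shim: force the result and catch anything thrown.
safeLookup :: String -> IO (Either SomeException Int)
safeLookup k = try (riskyLookup k >>= evaluate)
```

Note that `evaluate` only forces to weak head normal form; for lazily-structured results you'd want a deeper force (e.g. via deepseq) inside the `try`.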


I should note that, where you prefer to be dealing with exceptions, it's trivial to go the other way with a shim as well.


> Chaining a monadic sequence can feel magical at times.

The issue is when this monad is actually a monad transformer tower, in which case there is actually a bunch of "magic" happening at each line.
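
A tiny example of such a tower (using mtl, which ships with GHC): every bind in the do block silently threads both the Int state and the possibility of failure.

```haskell
import Control.Monad (when)
import Control.Monad.Except (ExceptT, runExceptT, throwError)
import Control.Monad.State (State, runState, get, put)

-- Each line below works in two effects at once: mutable Int state
-- and short-circuiting failure.
step :: ExceptT String (State Int) ()
step = do
  n <- get
  when (n > 3) (throwError "too big")
  put (n + 1)
```

Convenient once you know the machinery, but each `get`/`throwError` dispatches through typeclass instances that are invisible at the call site, which is the "magic" being complained about.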


> what do you do when the libraries that you depend on throw exceptions instead of using Either?

File a bug on them


And wait forever instead of getting any actual work done.


That's not to say Haskell is bad, though. I'd be a fool to say so. The above comment is mainly to point out that Haskell has its shortcomings, that there is a lot of room for improvement, and that, most of the time, the proponents of Haskell tend to downplay and avoid talking about these shortcomings.

In fact, I have learned a lot of good programming practices/principles by spending time learning Haskell. I apply some of those in my projects in other languages.


> Which library is good for doing X?

https://haskelliseasy.readthedocs.io/en/latest/


JS runs natively on the web, it embraces multiple paradigms, it has a huge community, and you can get a job in almost any city on earth if you know how to use it. Haskell may be "better" as a language, (just as Esperanto may be superior to English) but if you want the widest range of opportunities in your life and not just in your language, Javascript/English beats Haskell/Esperanto.

Why not help bring some of Haskell's concepts to a more relevant language instead of bemoaning the entirely rational decisions made by millions of junior and senior developers?

Edit: the rabid insularity of the Haskell community definitely doesn't help. The Python community was dissatisfied with JS and created Coffeescript, and as of ES6 now everyone can enjoy the fat arrow syntax. Less pleasantly, the Java community got class syntax added to ES6, but at least they're trying to contribute. I'm sure Haskell has plenty of really cool things to bring to the table, but I've never heard of any of them, because all the Haskell community wants do is talk about how JavaScript sucks instead of embracing the functional side of it.


Firstly, that's a poor analogy: English is a much superior language to Esperanto in terms of information density per syllable, in addition to being more expressive (it's easier to talk about a wide variety of topics and abstractions). Possibly more importantly, it takes a lot of people to make a spoken language useful for doing business, but far fewer for a computer language.

> instead of bemoaning the entirely rational decisions made by millions of junior and senior developers

I think people are bemoaning the stupid shit and not the entirely rational decisions. JS makes the former a bit easy.

> Python community was dissatisfied with JS and created Coffeescript

Nitpicking, but the first CoffeeScript compiler was written in Ruby and the language was largely inspired by Ruby.

> the Java community got class syntax added to ES6

Huh that's a new one

Pretty sure that's not just the Java community, given that JS developers have been independently inventing class libraries (starting with Prototype, if not earlier) for ages.

> I'm sure Haskell has plenty of really cool things to bring to the table, but I've never heard of any of them

Underscore, lazy.js, other popular JavaScript libraries have some Haskellisms; LiveScript is a fork of CoffeeScript that used to be moderately popular and very Haskelly; React takes a lot of ideas from Haskell; immutable.js is quite Haskellian….

I think you're just not looking.

> all the Haskell community wants do is talk about how JavaScript sucks

You can drive a go cart on the highway, and you can keep modding your go cart, but at some point you might want to not be driving a fucking go cart on the highway.


Ramda is arguably the most Haskelly JS library.


I am loving Ramda. I started integrating it for my React project, and I love how my code started collapsing.


[flagged]


> No examples, no evidence

I described a very specific example of something Haskell can do that JS cannot in this reddit thread:

https://www.reddit.com/r/haskell/comments/4ujg5i/what_are_yo...

It is almost impossible for the Haskell metadata widget I described to have bugs. But I've seen production bugs in the corresponding JS code twice in the last few months. You simply cannot get the kind of correctness guarantees in JS that you have in Haskell.


> Pedantic, arrogant, and self-contradictory combined with poor reading comprehension and even worse social skills is the stereotype of the Haskell fanatic, and you're living up to it perfectly.

I'm not sure (s)he is the one.


"Personal attacks are not allowed on HN, even when someone is wrong."

https://news.ycombinator.com/item?id=9835422


> the rabid insularity of the Haskell community definitely doesn't help. The Python community was dissatisfied with JS and created Coffeescript, and as of ES6 now everyone can enjoy the fat arrow syntax. Less pleasantly, the Java community got class syntax added to ES6, but at least they're trying to contribute. I'm sure Haskell has plenty of really cool things to bring to the table, but I've never heard of any of them, because all the Haskell community wants do is talk about how JavaScript sucks instead of embracing the functional side of it.

Since you mentioned CoffeeScript, Elm seems like a particularly relevant example of something that came out of the Haskell community and which compiles to JavaScript.

I think you have a negative impression of both Haskell and its community which isn't necessarily justified. It's a little insular, but not to the extent that you're suggesting here. PureScript is another language that Haskell has almost directly spawned. It's a lot more complex than Elm, like Haskell itself, but has quite a few brilliant insights of its own.


> Since you mentioned CoffeeScript, Elm seems like a particularly relevant example of something that came out of the Haskell community and which compiles to JavaScript.

Let's not forget Purescript[0], which is looking to be the spiritual successor to Haskell, and runs on node.

[0] http://www.purescript.org/


While I can't say that I am a big fan of the Haskell community, I fully understand why they think extending Javascript is never going to work out: you can't just bolt Hindley-Milner onto a language with prototypal inheritance, so the functional part of Javascript doesn't help at all. This is why, for the web, they instead have Purescript, which compiles down to Javascript but doesn't look like Javascript.

Not quite coming from the Haskell community, but heavily inspired by parts of Haskell is Elm, which you might have heard about. It has the best compiler errors ever, it takes immutability seriously, and is far nicer IMO than either Purescript or Javascript for web development.

It's really an issue of types. Languages like Java, Python and Javascript don't have quite the same approach to types, but their worldviews there are not that different. Anything coming from the ML family just isn't going to translate, and that is going to happen regardless of how insular the communities might be. Existing Javascript features actively make most of the things a Haskell programmer would want just not work at all. This is why you hear them talk about how Javascript sucks: Everything they'd want involves taking things out first, and that is never going to happen.

So don't blame it on the community here, bad as some elements might be: the differences just cannot be negotiated away. You might as well ask people to open their minds and breathe carbon dioxide and sulfur so they can visit you on Venus: it's a barrier too hard to be worth crossing in either direction. Trying to add Javascript or Python features to Haskell would get you in a similar boat.


> Not quite coming from the Haskell community, but heavily inspired by parts of Haskell is Elm

It's written in Haskell. How much more "coming from the Haskell community" could it be?


I don't know where they came from, but these projects and libraries seem to be influenced by Haskell (maybe?):

https://github.com/folktale

https://github.com/sanctuary-js/sanctuary


Not that I know much about it, but http://www.purescript.org certainly looks like a kind of "Coffeescript for Haskell"? Or is that language too different from JS for that purpose? Though that might just be because Haskell is quite different from JS.


There is also PureScript, which is basically a "Haskell community's CoffeeScript"


Hey how's that new patio coming along?

The carpenter blew us away. They don't allow publication of a preprint, but it was just accepted into the Journal of Patio Support Design!! Which, I'm told, is the third most-prestigious journal in the business...

No I meant like can we have a barbecue?


This is a great article - functional languages like Haskell don't get enough credit in a world where JavaScript's shenanigans are the accepted norm.

What do you mean? As far as I can tell, functional languages get an amount of attention disproportionate to their usage, and even this article is an example - using JS wouldn't get to the top of HN, but Haskell would.

And it's not that I think this is necessarily a bad thing - functional is the "next big thing" for taming the insanity of programming, and even sixty years in, we still need new models because programming is still a mess.

But let's frame things the way they are: "here's a concrete argument that the hype might be justified" or something like that.


> functional languages get an amount of attention disproportionate to their usage and even this article is example - using js wouldn't get to the top of hn

JavaScript topics do make it to the front of HN. For example, https://github.com/getify/You-Dont-Know-JS was the top article on June 24.

If you mean that simply writing about using JavaScript without any particular angle probably wouldn't make it to the front page, you're right. But that's actually proportionate to the two languages' usage. It's similar to how "I drank some water" is less interesting than "I drank a rare Japanese whiskey" — absent of context, they're equivalent, but because everyone has done the former, the fact that you did it too is boring on its own.


He said using JS won't get you to the top of HN.


I would expect you're right that functional languages get more coverage than they would warrant based on usage, at least in industry and the startup scene.

I was referring more to people dismissing the benefits of functional programming as they dismiss the benefits of strictly typed and compiled languages. Many of the benefits of functional are the result of restrictions far more 'onerous' than knowing the type of an object at compile time. Those 'restrictions' make you think about your code more carefully and catch huge classes of bugs.

Using strings to access object properties in JS (to me) feels like shenanigans.


> functional languages like Haskell don't get enough credit in a world where JavaScript's shenanigans are the accepted norm

The issue is that many times the programming languages don't matter; what matters are the libs written for that language. For example, there are cryptocurrency libs for Ethereum and Bitcoin that are only available for JavaScript/Node.

Like it or not the better decision will be to use JavaScript only for this reason.


I would argue that it's less about the libraries, and more about platforms and branding. Judging languages on their popular libraries can be a bit misleading, since not all languages need libraries for some of the stuff other languages need them for, they're often already baked-in to the language.

F#'s Type Providers basically make ORMs, query builders, scrapers, web request libs, parsers, etc, largely unnecessary, because they enable the language to have built-in support for fully typed foreign data sources.

Dart meanwhile, makes a large chunk of typical boilerplate js libs like jquery, underscore, promises, requirejs, etc, all obsolete while still being relatively similar to js.

However, js is still the language of the largest platform out there (the browser), and the F# community still doesn't push its branding and marketing towards trendy startup engineers very much, compared to a community like Ruby's.


I don't think those kind of libraries are what the parent is talking about though. They're talking about when you need to interact with some interface, or implement some specific non-trivial algorithm, how likely is it that there's already a library for that.


I hear this argument a lot too, but when you think about it, what would pull the original authors of those rarer libs to use the languages they chose in the first place? R is clearly geared and marketed towards statisticians, which have in turn given it a lot of such algorithms. Python has SciPy, which was also marketed similarly and has practically become its own platform. And Python itself was marketed towards academic types as 'executable pseudo code' that's easy to understand, which then culminated in NumPy and such.

Also, I would argue that what op was talking about did fall more in line with platforms than algorithms, despite what the libs actually did, because Ethereum and Bitcoin are themselves platforms that spark those kinds of libs. Not to mention that you're always liable to run into situations where some obscure algorithm isn't available in your language of choice, regardless of how popular it is (unless you're using c++, which marketed/encouraged the kitchen-sink approach to development, and has had a lot of time to build quite a few kitchens).

As for interfaces, I'm not sure exactly what you mean that hasn't been covered already, like in the case of F# being able to smoothly interface with external data (which also includes the ability to interface with libraries from entirely different languages like R and Python). And interfacing with hardware is also a lot like a platform issue similar to the browser situation, except with C instead of js. And then there's stuff like Arduino, which is also very clearly a platform.

All in all, branding and marketing shouldn't be underestimated as the driving forces behind a lot of tech decisions, because software development does have a very strong social component that shouldn't be ignored. I mean, just look at the type of site we're on...


    > The issue is that many times the programming languages don't matter, 
    > what matters are the libs written for that language.
I'm writing a cryptocurrency app in Haskell [1], because even though all the libraries exist in JavaScript, I cannot read the code and feel like I know whether it works or not. There are too many ways for JavaScript to fail for me to be able to reason about whether the code is correct. And a financial library (one that can cause people to lose money if it doesn't work) with bugs in it is worse than no library at all, at least for my application.

[1] https://github.com/runeksvendsen/restful-payment-channel-ser...


> The concepts often apply to less 'pure' languages and certainly stretch you to think in new ways.

Purity is great help, but it isn't essential. The essential part is having a type system that actually captures what's going on in your program, without burdening you with manual annotations, and ML-family languages in general do very well in this regard.


I've gotten much more value from purity and immutability than I ever have from advanced types although I'm speaking from a F#/Clojure background rather than a Haskell one.


I'm not talking about fancy types either. Standard ML's type system is dead simple (no higher-kinded types, no GADTs, no type families, not even type classes), and I got a lot more value from it than from Haskell's type system, mostly thanks to:

(0) Abstract types, generated by means of opaque signature ascription, which Haskell doesn't have. This ensures that different modules can't “accidentally” break each other's internal invariants.

(1) The fact that variables stand for values (due to ML being strict), rather than potentially diverging computations (due to Haskell being lazy), which makes useful algebraic laws sound even in the presence of nontermination or other effects.


> a world where JavaScript's shenanigans are the accepted norm.

Hey, we have Typescript nowadays, which does help a lot.


Isn't JS functional?


The term 'functional language' is not terribly useful these days. You can program in a functional style in many languages, just as you can program in an imperative style in Haskell or Scheme.

Last year, I wrote an article [1] talking a little bit about this, arguing that the notion of 'language paradigms' is mostly a crutch to avoid thinking critically about what a language is.

[1] https://chadaustin.me/2015/09/haskell-is-not-a-purely-functi...


JavaScript has functional features (first class functions for one are done well) but I think that's about it. Side effects, mutability, etc all are allowed. Compare with Haskell, Elm, or Clojure where those things are either not allowed at all or discouraged.

The recent trend of "functional JavaScript" mostly means using third-party tools that emulate features from other languages (like Immutable.js or Ramda), but IMO the core language cannot be called a proper functional language.


It is a proper functional language, just not a "pure" functional language.


Define proper functional language, as opposed to an improper one


To be fair, everyone has their own definition of FP. For me, it just feels weird to call JS functional when it only has one "feature" of most functional languages. And at least in my college studies, the discouragement of side effects was stressed as the main feature of them.


To me, as long as functions are first class members, and higher-order functions are possible, I consider a language to be functional.

Everything else is a convenience and not a requirement for the functional paradigm.

Mind you, I do much prefer a Lisp or Scala to Javascript, but I face no trouble writing functional code in Javascript.


Ok, just be aware that by that definition, just about every language that wasn't developed in the stone ages would qualify.


Yup. The same is roughly true for being object-oriented.

The big paradigm divisions were largely a thing of the 1990s; since then languages have become increasingly multiparadigm, so that it's almost hard to find a language that isn't nowadays. I think Haskell and Golang might be just about the last remaining standard-bearers of the league of one-trick ponies.


I did not know COBOL was functional too :)

Out of curiosity, what other criteria would you apply?


Not the one you replied but I was taught through university and books that functional programming's most important feature is the discouragement of side effects. Even on Wikipedia it's said to be a style of programming that "avoids changing-state and mutable data". I'd put discouragement of side effects and immutable data up there with higher-order functions (HOFs).

You're more than welcome to have your own definition, but in my opinion it's too broad, since HOFs exist in some form in languages like Java and Ruby and I wouldn't call those functional. Though, as another commenter said, languages are mixing paradigms nowadays (with the exception of Haskell!).


It is. Sadly the standard library isn't, but there are great libs. Programming Javascript with lodash/fp or Ramda is great fun, and in some places I prefer it over Clojure.


I think immutability is more important to a "functional" language than "functional style" or "functions as first-class citizens".


So, would you call e.g. XQuery 1.0 a functional language? It didn't have first-class functions at all, but it was completely immutable.

Or, say, SQL sans stored procedures?

I think the more appropriate term in this case is "declarative". "Functional" really implies something having to do with functions.


A functional language is a procedural language that meets the following three additional conditions:

(0) It encourages using values over objects with a notion of physical identity.

(1) It encourages using procedures that compute functions: mappings from values to values.

(2) It discourages distinguishing between procedures that compute the same function.

Let's see how well JavaScript fares:

(0) Are `[1,2,3]` and `[1,2,3]` equal or different?

(1) Given `function f(x) { return { first: x, second: x }; }`, are `f("hello")` and `f("hello")` equal or different?

(2) Are `function(x,y) { return x + y; }` and `function(x,y) { return x + y; }` equal or different?

This should tell you whether JavaScript is a functional language.
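For the record, JavaScript itself answers all three questions with "different" (here the anonymous function from (1) is given the name `f`, which the question already assumes):

```javascript
// (0) Arrays are objects with identity: structurally equal, yet distinct.
console.log([1, 2, 3] === [1, 2, 3]); // false

// (1) Each call builds a fresh object, so "equal" results are distinguishable.
const f = (x) => ({ first: x, second: x });
console.log(f("hello") === f("hello")); // false

// (2) Two extensionally equal procedures compare as different values.
const add1 = (x, y) => x + y;
const add2 = (x, y) => x + y;
console.log(add1 === add2); // false
```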


[citation needed], because this sounds wrong:

> A functional language is a procedural language

this doesn’t sound authoritative:

> that meets the following three additional conditions

this doesn’t sound like the right distinction:

> It encourages using values over objects

and this doesn’t sound meaningful:

> It discourages distinguishing between procedures that compute the same function.


> this doesn’t sound authoritative

A functional language still uses procedures, even Haskell! To see what I mean, consider the possibility of divergence (an infinite loop, `undefined`, `error "foo"`, you name it). It doesn't make sense for a mathematical function to diverge. The difference between, on the one hand, Haskell, ML and Racket, and, on the other hand, JavaScript, Python and Ruby, is that the former group encourages you to program with procedures that compute mathematical functions, whereas the latter group does not.

> this doesn’t sound like the right distinction

It doesn't suffice, but it's a precondition. For good or for bad, mathematical functions are mappings from values to values, so you can't talk about computing functions in a language without a rock-solid notion of (possibly compound) value.

> and this doesn’t sound like the right example for (2) [deleted from your original post, but I'm not deleting it from here]

Yes, it's the right example. Haskell and ML prevent you from accidentally distinguishing two extensionally equal functions by not letting you compare procedures in the first place.

> and this doesn’t sound meaningful

These functions ought to be extensionally equal in any mathematically civilized language:

    function foo(x,y) { return x + y; }
    function bar(x,y) { return x + y; }
But JavaScript lets me distinguish them.


The nice thing about infinite loops, `undefined` and `error "foo"` is that nothing can use their result (i.e. nothing happens "after", although that's a tricky concept with lazy evaluation!)

In other words, Haskell doesn't have try/catch for things like `undefined`, so we do have to worry about them occurring, but we don't have to worry about them causing the code to misbehave; it'll either behave or it will stop progressing (either halting or looping infinitely).

On the other hand, the "IO" DSL built in to Haskell can do try/catch; but once you're in IO, all bets are off.


Oh, I'm not saying Haskell doesn't let you express functions. What I'm saying is that functions are expressed as procedures that compute them.

The nice thing about Haskell is that it distinguishes between procedures that may only compute functions (or diverge), and procedures with arbitrary effects.


Wikipedia is far from the be-all and end-all source, but at least it is one, so here’s a quote from https://en.wikipedia.org/wiki/Functional_programming: “It is a declarative programming paradigm, which means programming is done with expressions[1] or declarations[2] instead of statements.” I would agree with it that the usual meanings of “procedural language” and “functional language” are not compatible.


Any procedural language, not only functional ones, can be expression-oriented, rather than statement-oriented, e.g., Ruby.

And Prolog programs are literally lists of declarations (of inference rules), yet Prolog isn't a functional language.


The quote says functional programming is a type of declarative programming, not the other way around. Prolog is irrelevant.

Backing up to equality as an aside, is there any language that will let you determine whether two functions are equal? If the functions that can be compared aren’t restricted to some silly level and the language is Turing-complete, I can’t imagine any better behaviour than Haskell’s that you already mentioned, which you can bring to JavaScript pretty easily.


> The quote says functional programming is a type of declarative programming, not the other way around.

Well, your quote doesn't even provide a full definition, sooo...

> Backing up to equality as an aside, is there any language that will let you determine whether two functions are equal?

No, that's obviously undecidable. (Rice's theorem.) But read what I said more carefully. Did I ever say “A functional language lets you determine whether two functions are equal?” I didn't! What I said is “A functional language doesn't let you distinguish equal functions.” Not letting you test functions for equality is a perfectly valid (and, in fact, the only valid) approach in a higher-order language.

> which you can bring to JavaScript pretty easily.

Not really. I can't undefine JavaScript's existing equality operator.


> a perfectly valid (and, in fact, the only valid) approach

I thought as much and it would really help if you would just say these sorts of things outright so the comments are actually useful instead of deeply nested wording quibbles.

With that out of the way: you can define a subset of JavaScript that satisfies your third condition by saying “don’t compare functions”. That’s why I didn’t consider it a meaningful condition.

Functional programming works best as a paradigm and then we can just call languages where it’s standard “functional languages”. Much simpler.


> I thought as much and it would really help if you would just say these sorts of things outright so the comments are actually useful instead of deeply nested wording quibbles.

I find it absolutely essential to make perfectly clear what exactly I'm refuting.

> With that out of the way: you can define a subset of JavaScript that satisfies your third

Second.

> condition by saying “don’t compare functions”.

That's a different language. Do you know of any implementation of it?

> Functional programming works best as a paradigm and then we can just call languages where it’s standard “functional languages”. Much simpler.

Nope. A functional language is one that makes it easy to write functional programs, by satisfying the conditions I gave above. A programming language is first and foremost its semantics, whatever the culture of its users.


> Nope. A functional language is one that makes it easy to write functional programs, by satisfying the conditions I gave above.

Except you’ve still given no source for the conditions, condition -1 (“a functional language is a procedural language”) is wrong, and a language satisfying the conditions in no way makes it easier to write a functional program just by virtue of satisfying the conditions (because they can be satisfied entirely by removing features you don’t have to use). Maybe you meant “harder to write non-functional programs”.

> Second.

The last item in a list of three items is not the second item.


> condition -1 (“a functional language is a procedural language”) is wrong

It's correct. In a procedural language, computations are structured as procedures that may call each other. In a functional language, that condition (plus three additional ones) hold.

> and a language satisfying the conditions in no way makes it easier to write a functional program just by virtue of satisfying the conditions

Removing pointer arithmetic makes it easier to write programs that don't contain memory errors. Removing spurious distinctions between memory blobs that hold the same value makes it easier to program with mathematical functions.

> (because they can be satisfied entirely by removing features you don’t have to use)

Removing features creates a different language. Often a nicer one!

> Maybe you meant “harder to write non-functional programs”.

Nope. That would exclude any general-purpose programming language. Lest you think otherwise: it's very easy to write imperative programs in Haskell and ML.

> The last item in a list of three items is not the second item.

Zeroth, first, second.


> For good or for bad, mathematical functions are mappings from values to values, so you can't talk about computing functions in a language without a rock-solid notion of (possibly compound) value.

Erm, yes you can. You don't need "a rock-solid notion" of anything in order to talk about anything. It's certainly nice to have such well-defined concepts/terms, but requiring them is preposterous.


JavaScript doesn't have compound values. This is a fact. When you evaluate the JavaScript expression `{ first: "hello", second: "hello" }`, what you get is a pointer (or reference if you will) to an object with two fields. The object is compound, but the pointer is a primitive, indivisible value.


> The object is compound, but the pointer is a primitive, indivisible value.

So? Is the problem here that you can’t compare objects equivalent according to this with `==`? You can make a comparison to do this properly. (This is why functional programming as a paradigm works better and functional languages are just those where it’s the obvious standard.)


> You can make a comparison to do this properly.

That's not the point! Even if I made a custom equality comparison that says two different objects are “equal”, it would be unsound to treat them as equal, because there exist other constructs in JavaScript that will let me distinguish them. For example, if I mutate one of the objects, the other will remain in its original state. This is exactly why a usable functional language must provide a good notion of compound value!
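A concrete sketch of that unsoundness: even with a (naive, hypothetical) structural `deepEqual` that reports two objects "equal", mutation can still tell them apart afterwards.

```javascript
// Two structurally equal objects with distinct identities.
const a = { n: 1 };
const b = { n: 1 };

// A naive custom structural comparison.
const deepEqual = (x, y) => JSON.stringify(x) === JSON.stringify(y);

console.log(deepEqual(a, b)); // true — our comparison calls them "equal"
a.n = 2;                      // ...but mutating one leaves the other untouched,
console.log(deepEqual(a, b)); // false — so treating them as equal was unsound
```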


> Even if I made a custom equality comparison that says two different objects are “equal”, it would be unsound to treat them as equal

I didn't see anyone making claims about soundness. From a Curry-Howard perspective, all JS code is sound: it's unityped, so all programs are proofs of the trivial theorem "true".

From the perspective of equality functions/operators, you're right that it won't be sound; but so what? It's pretty much given that "Javascript foo" is a poor model of "foo", for all values of "foo" (function, integer, boolean, etc.); why should equality make any difference?

Just add "==" and "===" to your list of things to avoid, alongside mutation, random numbers, user input and other things which aren't referentially transparent. To be honest, equality isn't particularly bad if your code has some notion of types/contracts (whether enforced or not); e.g. it's perfectly safe to use "==" when you know that both sides will be strings, for example.

> For example, if I mutate one of the objects, the other will remain in its original state.

Mutation doesn't seem like a great example when discussing (pure) functional programming patterns.

> This is exactly why a usable functional language must provide a good notion of compound value!

Your introduction of the term "usable" seems like a moving goalpost/no true scotsman.


> From a Curry-Howard perspective, all JS code is sound: it's unityped, so all programs are proofs of the trivial theorem "true".

Haskell and ML aren't unityped, but, in both, every type is inhabited by expressions. (Caveat: In ML, not all types are inhabited by values, but that doesn't matter, because programs correspond to expressions, not values.) So this doesn't distinguish them from JavaScript.

> From the perspective of equality functions/operators, you're right that it won't be sound; but so what? It's pretty much given that "Javascript foo" is a poor model of "foo", for all values of "foo" (function, integer, boolean, etc.); why should equality make any difference?

Equality matters, because the very first condition to determine whether a procedure computes a function is to see whether it maps equal inputs to equal outputs!

> Just add "==" and "===" to your list of things to avoid, alongside mutation, random numbers, user input and other things which aren't referentially transparent.

That's a different language. Do you know of any good implementations of it? And, FWIW, procedures that compute random numbers are totally fine. You can't rule out procedures that don't compute mathematical functions. You can only keep their use to the minimum necessary.

> To be honest, equality isn't particularly bad if your code has some notion of types/contracts (whether enforced or not); e.g. it's perfectly safe to use "==" when you know that both sides will be strings, for example.

In my day-to-day programming I manipulate more complex data structures than strings.

> Mutation doesn't seem like a great example when discussing (pure) functional programming patterns.

I'm not talking about pure anything. I'm saying that mathematical functions are value mappings, and if the language doesn't let you define compound values (tuples, lists, trees, whatever), then it can only conveniently implement very trivial functions - not enough for practical programming.

> You introduction of the term "usable" seems like a moving goalpost/no true scotsman.

I'm not moving anything. A functional language is one that makes it easy to write functional programs. The lack of compound values means that I'm given two unpleasant choices:

(0) Use Gödel numbers to encode compound values. Would you?

(1) Do imperative programming.


Yes, it makes sense for a mathematical function to diverge. You just need to change the domain and range to their "lifted" versions. See [https://en.wikibooks.org/wiki/Haskell/Denotational_semantics]


That gives you the <sarcasm>very pleasant</sarcasm> choice of working in a category without sums or tensor products distributing over sums! No thanks.

In practice, Haskellers don't think this way about their programs. Haskell's library ecosystem attests to the fact they think of Hask as some moral analogue of the category of sets, which has both sums and tensor products. And the worst sin a semantics can commit is to not reflect how programmers actually think about their code.


It would be reasonable to add _referential transparency_ to this definition and consider the implications. Say we have a function:

    function time() { return new Date() }
Referential transparency implies that we can everywhere replace an expression with any reduction of it, but because the return value of this function can change even when its arguments don't, this is not true in JavaScript.
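The same failure can be shown without a clock, using any impure function — here a hypothetical counter — where substituting an expression's earlier value would change the program:

```javascript
// An impure function: same (zero) arguments, different result each call.
let calls = 0;
function next() { calls += 1; return calls; }

const a = next(); // 1
const b = next(); // 2 — the "same" expression, a different value
console.log(a === b); // false: next() cannot be replaced by its first result
```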


> Referential transparency implies that we can everywhere replace an expression with any reduction of it

I'm aware, but that's too strong a requirement. I wanted my definition to capture the bare minimum necessary to make it pleasant to write functional programs. The basic idea is “functional programming is designing your programs in terms of mathematical functions whenever possible (not always!)”, not “functional programming is requiring an embedded IO DSL to do anything effectful”. (But don't take this as a dismissal of Haskell. While I do have complaints against it, effect segregation isn't one of them.)

> but because the return value of this function can change even when its arguments don't, this is not true in Javascript.

That's okay. A procedure that returns the system date will never be referentially transparent, yet such a procedure is a necessity in any programming language.

My beef is actually with the fact JavaScript doesn't have compound values: https://news.ycombinator.com/item?id=12191119 . Since compound entities (whether values or mutable objects) are a necessity in practical programming, the harsh reality is that JavaScript programmers have to use mutable objects whether they want it or not.


> That's okay. A procedure that returns the system date will never be referentially transparent, yet such a procedure is a necessity in any programming language.

As long as it accepts a Universe or State parameter, a system time function is referentially transparent. This is rather like how every query in a database can be seen as implicitly depending on a transaction ID parameter.

One intuition that falls out of the Church-Turing thesis is that there is no routine in C, Pascal, &c that has no referentially transparent equivalent.
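One way to sketch this idea in JavaScript (with a hypothetical `world` record standing in for the machine state): the time function returns both a reading and a successor world, and calling it twice with the same world always yields the same answer, so it is referentially transparent.

```javascript
// now(world) -> [time, nextWorld]: the clock reading is a pure function
// of the explicit world value, which is threaded through the program.
function now(world) {
  return [world.clock, { clock: world.clock + 1 }];
}

const w0 = { clock: 100 };
const [t1, w1] = now(w0);
const [t2, w2] = now(w1);
console.log(t1, t2);            // 100 101 — different worlds, different times
console.log(now(w0)[0] === t1); // true — same world, same time, every call
```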


> As long as it accepts a Universe or State parameter, a system time function is referentially transparent.

Which begs the question - where do you get that Universe parameter from? Note that getting a single Universe parameter at the beginning of the program is of no use, because then you couldn't query the system date right now. You could only query the system date at the moment the program started.

You can't completely get rid of non-referentially transparent procedures in a general-purpose programming language. What you can do, at most, is banish them to a special sublanguage, like Haskell does.

> the Church-Turing thesis

The Church-Turing thesis is a statement about computable functions, but a procedure that returns the system date clearly doesn't compute a mathematical function.


> ...getting a single Universe parameter at the beginning of the program is of no use, because then you couldn't query the system date right now.

This is like saying, "...getting a single input at the beginning of the program is of no use..." but see below.

> The Church-Turing thesis is a statement about computable functions...

It is a statement about what is "computable", which is the alpha and the omega of what computers can do. If there is a Turing model for a particular "computation" there is a Lambda Calculus model for it.

A program that returns the system date operates on the machine state. You can treat this as "read from the tape" in the Turing sense or as input in the Lambda Calculus sense. The "output" can be seen as one of:

* A tape where the system state has been updated (since we took some steps, we updated the CPU counter, handled some interrupts, &c) and the system time has been written to a special place on the tape so the next computation can find it.

* A tuple of a new system state and the time value (picked out, again, for our convenience).

If we imagine the world "as a computer" then it is natural to compute a later state of the world as a function of the previous state. This approach actually works well in practice and it is not coincidental that this approach works with computers, since it is deeply tied into how they work.

Per Heraclitus, we never set foot in the same river twice. Treating the State or Universe as non-comparable captures this idea.


> It is a statement about what is "computable", which is the alpha and the omega of what computers can do.

The Church-Turing thesis is about what functions are computable.

> A program that returns the system date operates on the machine state. You can treat this as "read from the tape" in the Turing sense or as input in the Lambda Calculus sense.

Are you saying that the input tape may contain the later time at which a program will query the system's date? That seems like a physical impossibility to me.

> If we imagine the world "as a computer" then it is natural to compute a later state of the world as a function of the previous state.

The fundamental deficiency of this model is that it assumes you can put the entire world inside your program, and then get an entire new world as result. It utterly fails to take concurrency into consideration.


> The Church-Turing thesis is about what functions are computable.

Are you saying that procedural programming can do something besides compute computable functions?

> Are you saying that the input tape may contain the later time at which a program will query the system's date? That seems like a physical impossibility to me.

When the computer retrieves the time from the clock, it does exactly that -- reads it from a location in memory.

> > If we imagine the world "as a computer" then it is natural to compute a later state of the world as a function of the previous state.

> The fundamental deficiency of this model is that it assumes you can put the entire world inside your program, and then get an entire new world as result. It utterly fails to take concurrency into consideration.

How does it fail to take concurrency into account? We can treat the world as being made up of many objects and evolve their state simultaneously if we like.


Is there something like a "standard" HTTP client library and also a SQL abstraction library in Haskell nowadays, that everyone is using? Similar to what 'requests' and 'SQLAlchemy' are in Python?

I feel those would be the two libraries that I would probably miss the most when switching to Haskell for a project.


Groundhog (http://hackage.haskell.org/package/groundhog) is a nice "ORM" for Haskell. If you need something more flexible you can drop down and write raw SQL with postgresql-simple (http://hackage.haskell.org/package/postgresql-simple). There's also a corresponding mysql-simple package.

There are several HTTP clients:

http-streams (http://hackage.haskell.org/package/http-streams)

http-conduit (http://hackage.haskell.org/package/http-conduit)

wreq (http://hackage.haskell.org/package/wreq)


I'll second the groundhog suggestion.

It is much more flexible than Persistent. If you are not doing a Yesod project (for which Persistent is the standard), you should think about using it.


Have a look at Haskell Toolbox: http://haskell-toolbox.com/


Woops, I lost steam a little on that project. How did you find it?


Thanks to positive experiences with The Ruby Toolbox and The Clojure Toolbox, I usually search for "X toolbox" whenever I start experimenting with a new language.


I like the idea behind it, but it does indeed seem a little unfinished. Especially the metric for top packages (most dependencies from other packages in this category) seems flawed, since some other apparently popular libraries mentioned in this thread don't appear there at all (e.g. wreq, persistent, or esqueleto).

I think "#stars on Github" could be an interesting metric to add. Thanks for your effort in creating this site.


wreq and persistent are probably the closest libs to what you've listed for python


Persistent is an ORM that doesn't support all of SQL. So it's a replacement for the SQLAlchemy ORM, but not for Core.

If you want to be at the same level of abstraction as SQLAlchemy Core, you'll want to try esqueleto (a SQL DSL).


Thanks for pointing this out. I would indeed be looking for a replacement for SQLAlchemy Core, since I usually avoid ORMs.

Esqueleto looks like it would fit the bill, since it does offer composability of queries, automated migrations, and protection against SQL-injections.


Ah thanks, I wasn't familiar with that part of SQLAlchemy


wai and warp are the Haskell packages that pretty much every Haskell web server framework is built on. They are very battle-tested and mature.


I also want to say wreq for HTTP, and persistent/esqueleto as the SQLAlchemy equivalent.


I've heard good things about Servant as an HTTP client library.


Servant is built on wai and warp and is more of a DSL for writing APIs.

It's a fantastic framework, but it's not an 'http client library' as such.


servant-client is nice but quite different from 'requests'. And I'd also argue it supports a more limited set of use cases, due to having to specify the endpoints as a type-level REST API.


Servant can do HTTP but it's more of a library for REST APIs.


> You might later find that something that took you 5 lines is a standard abstraction ready to be re-used.

Such a great point and perhaps one of the biggest challenges as languages allow increasingly reusable and powerful abstractions. I would love to have GHC tell me something like "this piece of code here has a signature that is familiar, you could probably make this fit into {list of abstractions}".


If you're interested in a tool like that, check out Hoogle: https://www.haskell.org/hoogle/. It allows you to search by signature, including using generic types (e.g. their front page example is (a -> b) -> [a] -> [b], which will return map as a result, among others).


That doesn't seem like something a compiler should care about, I think it would be better to add it to something like hlint.


Djinn: https://hackage.haskell.org/package/djinn

Does more or less what you described.


Such a tool exists. I've had hlint tell me many times how to do something in a better way. At some point I had it hooked into my IDE, so it would mark improvable code in yellow.

Especially in the first year I learned a lot of new Haskell syntax and functions just from its suggestions.


If you don't mind me asking, which IDE do you use?


Oh, yeah, I should have mentioned that: IntelliJ IDEA with the Haskforce plugin. The plugin isn't maintained any longer, though, so it's not the best solution, but it was perfect when both ghc-mod and hlint worked. ghc-mod still works, and I can't go back to coding without it; having instant compiler feedback speeds everything up so much.

For example, there is this fatal, unresolved bug that no one knows how to fix: https://github.com/carymrobbins/intellij-haskforce/issues/28...

EDIT: It's working again (for projects that do not exhibit above bug): http://i.imgur.com/jPo4Tas.png


Intero is amazing for this.


How I wish for a lisp like Clojure with a type system like Haskell...

Hope core.typed will be that!


Have you looked at Typed Racket? They are doing some amazing things with rich type systems and gradual typing. You can get the best of both dynamic languages and statically typed ones by introducing types gradually as you develop. I quite like their model for types.

I like Haskell also, but I feel Haskell is less readable than a lisp. But I will concede, this might be because I have been writing lisps longer than I have worked with Haskell.


Sadly haven't, because I have not had a reason to need it. Working with Clojure mostly. I like how Typed Racket "carries" the types with it when code from it is imported into other languages (like untyped Racket) via contracts.

Lisps are definitely the most readable languages, and with something like Smartparens or Paredit, combined with Rainbow-delimiters in Emacs, it's by far the best programming experience I know.

What is your experience of Typed Racket?


But will it have type inference? Haskell would be unbearably wordy without it.


Take a look at Shen.


I am curious why the founder chose Zurich if they didn't know anybody there, which makes me think they are from somewhere else. Isn't Zurich particularly expensive? Wouldn't that be a detriment to a startup?

I realize the article is about Haskell, but it's also about a startup and a founder, so I thought I would ask.


Carl and Jan are from Sweden and the Czech Republic, respectively, and our staff when I joined were a Dane, a Dutchman, a Hindu, and an American (me).

There was no particular effort to integrate -- of the team, I was the only one who regularly took lessons in Swiss German -- but it turns out that there are several practical reasons to run a European startup in Switzerland:

* Taxation in Switzerland is fairly light, lighter even than in the United States.

* Business regulation is comparatively streamlined, and labor flexibility is relatively great.

* The tradition of efficient public services is rather relevant to any one trying to manage a small tribe of people, anywhere.

* Lots of European people would like to work in Switzerland and, unlike SF, it is possible to hire from your European network without visas.

Relative to San Francisco, Zurich is not particularly expensive.


I am a bit confused. Let's suppose that TLS is broken in the Haskell libraries, or that there is some kind of thing Haskell has no libraries for and you don't want to lose time implementing it; what stops you from making a service for that task in another language? I fail to picture a big system that is not distributed. Maybe the problem of this startup required a monolithic system, or is it that a monolith is in some ways easier to manage?



