Shenanigans is a good word for that. So many people spend so much time learning the ins and outs of JS, which often find use only in JS, yet balk at learning the simplest things about Haskell. It does not help that theoretical and academic are so often used as slurs.
>>So many people spend so much time learning the ins and outs of JS, which often find use only in JS, yet balk at learning the simplest things about Haskell.
If the lenses-for-records thing in Haskell is amongst the simplest things for you, I beg to differ. I spent considerable time trying to get this working but could not succeed beyond toy examples. Combine lenses/records with exceptions, applicatives, monads and monad transformers, as they become necessary once you try to expand your toy-example code, and the Haskell thing becomes anything but simple.
Yet, other languages support records flawlessly and almost out of the box.
The heavy and unnecessary usage of symbols like >>, =<<, >>=, >=>, <=< and so on just adds cognitive overload and gives no inherent benefit, yet most Haskell library code is littered with them. And the almost arbitrary overloading of these symbols to mean different things in different contexts (libraries) adds still more cognitive overhead without any benefit. The almost cult-like insistence on the excessive usage of meaningless-to-humans symbols reminds me of that great practical programming language known as brainfuck [1]. Sorry, but no sarcasm intended. Some Haskellers harp on succinctness when asked about this religious fetish for symbolism in Haskell. So I ask: such succinctness at what cost?
The excessive and obsessive usage of symbols may give mental kicks to the hardcore Haskellers out there, but sorry, I don't see any incentive to waste my time trying to wade through the gobbledygook of brainfuck-type code.
So sadly, Haskell and I are not a match. Maybe the most common things in other languages seem to be (or are) rocket science when tried in Haskell (e.g. lenses for records), and/or maybe I am a daft blockhead, but that's how it is.
> The heavy and unnecessary usage of symbols, like, >>, <<, >>=, >=>, <=<, and so on just adds cognitive overload and gives no inherent benefit but most of the Haskell library code is littered with it
This is an oft-repeated criticism of Haskell which I frankly find has no basis. Firstly, symbols are preferred for these kinds of functions precisely because of their ubiquity. Once you're familiar with them (which you almost certainly will quickly become, for the common ones like `>>=` and `<*>`) they make the code easier to read, not harder, in general. Few would complain about having to write "<" to compare two numbers in most languages, rather than, say, "lessThan", because it's all over the place, and having a symbol for it makes it easier for a developer to read. The same is true for many common combinators in Haskell. There is no "cult-like insistence" on them; it's simply the case that many Haskell developers find them more pleasant to write than named functions. Honestly, the comparison to brainfuck is completely unfounded, and the references to cults and religious fetishes are IMO unnecessarily derisive.
Furthermore, I'd add that what makes the symbols seem hard is not the fact that they're symbols, but that the ideas that they're representing are challenging when coming from the world of imperative programming. Using a named function like "bind" rather than ">>=" or "apply" rather than "<*>" is not going to help the fact that these functions are complicated and unintuitive to newcomers. I think a lot of the complaints about Haskell symbols, much like complaints about Haskell syntax, are more driven by how different Haskell semantics are to mainstream languages. Once a developer becomes comfortable with the "zen" of Haskell, these criticisms tend to melt away. That isn't to say that everything becomes easy: there are many concepts that one is only likely to encounter in Haskell, and they can often be challenging, or make someone else's code hard to grok. But the difficulty does not stem from the fact that they're written as symbols. IMO.
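For what it's worth, the symbolic operators really are just ordinary functions with ordinary definitions. A minimal sketch, where the name `andThen` is invented purely for illustration, showing that `>>=` and a named alias describe exactly the same pipeline:

```haskell
import Control.Monad ((>=>))

-- A named alias for (>>=); "andThen" is a hypothetical name.
andThen :: Monad m => m a -> (a -> m b) -> m b
andThen = (>>=)

-- A small partial step: halving only succeeds on even numbers.
half :: Int -> Maybe Int
half n = if even n then Just (n `div` 2) else Nothing

main :: IO ()
main = do
  print (Just 8 >>= half >>= half)                 -- Just 2
  print ((Just 8 `andThen` half) `andThen` half)   -- Just 2
  -- (>=>) is Kleisli composition: glue the steps first, then apply.
  print ((half >=> half) 8)                        -- Just 2
```

Whether `>>=` or `andThen` is easier on the eyes is a matter of familiarity; the semantics are identical either way.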
As for your comments on lenses and records, I've been writing Haskell for a good number of years and still haven't really spent the time to grok lenses. I think they're probably great, but certainly mastery of them is not required to be productive with the language.
> This is an oft-repeated criticism of Haskell which I frankly find has no basis. Firstly, symbols are preferred for these kinds of functions precisely because of their ubiquity. Once you're familiar with them (which you almost certainly will quickly become, for the common ones like `>>=` and `<*>`) they make the code easier to read, not harder, in general. Few would complain about having to write "<" to compare two numbers in most languages, rather than, say, "lessThan", because it's all over the place, and having a symbol for it makes it easier for a developer to read. The same is true for many common combinators in Haskell. There is no "cult-like insistence" on them; it's simply the case that many Haskell developers find them more pleasant to write than named functions. Honestly, the comparison to brainfuck is completely unfounded, and the references to cults and religious fetishes are IMO unnecessarily derisive.
My issue, back when I was exploring Haskell, was not so much with the symbols in the standard library (e.g., for applicative functors), but with the number of library authors who thought they needed their own little shitty ASCII DSL.
>>As for your comments on lenses and records, I've been writing Haskell for a good number of years and still haven't really spent the time to grok lenses. I think they're probably great, but certainly mastery of them is not required to be productive with the language.
Give me any non-trivial industry-scale web/database application which doesn't rely on the notion of records. The support for records in Haskell is, if not pathetic, very poor. [1] (This is a rather old reddit post, but I haven't kept track of the issue since.)
I can be productive with Haskell if I restrict myself to math-type pure computations (e.g. DSL compiler) but if you want anything to do with web/database type non-trivial IO using Haskell, good luck.
So, I restrict myself to use Haskell only for doing pure computations that require extremely simple and trivial IO (mostly just readFile/writeFile).
>>This is an oft-repeated criticism of Haskell which I frankly find has no basis. Firstly, symbols are preferred for these kinds of functions precisely because of their ubiquity.
How about Haskell's notion of fixity of the symbols/operators/names?
It seems this almost arbitrary fixity [2] notion just adds cognitive overhead with little to zero semantic benefit. Correct me if I am wrong, and please enlighten me on exactly what semantically significant benefits the notion of fixity offers, and at what cost.
Most other mainstream languages don't offer such useless flexibility as fixity, which causes so much cognitive distraction when used by different people with different semantics, and that too just for kicks.
> The support for records in Haskell is if not pathetic is very poor.
Be specific. Support for record field access is fine. Support for record creation is fine. Support for pattern matching on records is fine. There are two "problems with records" in Haskell.
First, unlike many languages, records don't create implicit namespaces, so if you define Foo { x :: a } and Bar { x :: b } in the same module you have ambiguity. This is addressed, in part, by several recent GHC extensions. It could always be handled by putting Foo and Bar in different modules (with some added complexity if Foo and Bar are mutually recursive), but the community has settled on prefixing (Foo { fooX :: a }, Bar { barX :: b }), which is a little ugly but perfectly workable.
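A minimal sketch of both workarounds, with all type and field names invented for illustration; `DuplicateRecordFields` is the relevant GHC extension:

```haskell
{-# LANGUAGE DuplicateRecordFields #-}

-- Without the extension, prefixing avoids the field-name clash:
data Foo = Foo { fooX :: Int }
data Bar = Bar { barX :: String }

-- With DuplicateRecordFields (GHC >= 8.0), the same field names
-- may appear in several records within one module:
data Point2 = Point2 { x :: Int, y :: Int }
data Point3 = Point3 { x :: Int, y :: Int, z :: Int }

main :: IO ()
main = do
  print (fooX (Foo 1))
  -- Using the duplicated selector bare can be ambiguous, but
  -- pattern matching always disambiguates via the constructor:
  let Point3 { x = px } = Point3 1 2 3
  print px
```

The prefixed style needs no extension at all, which is part of why it became the community default.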
Second, while support for member update is okay when shallow, it compounds in a particularly nasty way when it gets deep. So what in Java would be "x.y.z.a = 7", in Haskell is something like "x { y = (y x) { z = (z (y x)) { a = 7 } } }" - horrific, I agree. But note that "x.y.z.a = 7" isn't great practice in Java, either! Why do I know about the fields of a field of a field of my object? Far better if that's packaged up in a setter. And if you define setters, the Haskell gets cleaner too. Lens is a principled way of defining getters and setters (and more) that can be composed and manipulated consistently - but it does have a huge vocabulary which obviously creates a barrier until you've learned it. But you don't need lens to write setters (and for that matter, you don't need to know all of lens to use some pieces of it).
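A sketch of the hand-written-setter route, using made-up record names that mirror the `x.y.z.a` example; with the lens library the same update collapses to roughly `x0 & y . z . a .~ 7`:

```haskell
-- Hypothetical three-level record, mirroring x.y.z.a:
data Z = Z { a :: Int } deriving Show
data Y = Y { z :: Z }   deriving Show
data X = X { y :: Y }   deriving Show

-- A "setter" here is just a function that modifies one field;
-- setters compose with plain (.) because they are functions.
setA :: (Int -> Int) -> Z -> Z
setA f r = r { a = f (a r) }

setZ :: (Z -> Z) -> Y -> Y
setZ f r = r { z = f (z r) }

setY :: (Y -> Y) -> X -> X
setY f r = r { y = f (y r) }

main :: IO ()
main = do
  let x0 = X (Y (Z 0))
  -- The deep update from the comment, via composed setters:
  print (setY (setZ (setA (const 7))) x0)
```

Lens generalizes exactly this pattern (getter and setter bundled, composable with `.`), which is why it works so well once learned.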
I've also found that RecordWildCards makes records genuinely nice to work with in many cases - unpack the record into the local scope, at which point the prefixed names don't seem verbose, transform things as I want, and then re-collect the fields I can while explicitly setting those I need to.
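A small example of that workflow; the record and field names here are invented:

```haskell
{-# LANGUAGE RecordWildCards #-}

data User = User { userName :: String, userAge :: Int }

-- RecordWildCards unpacks every field into local scope...
greet :: User -> String
greet User{..} = userName ++ " is " ++ show userAge

-- ...and on construction, ".." re-collects fields from scope,
-- so only the changed field needs to be written explicitly:
birthday :: User -> User
birthday User{..} = User { userAge = userAge + 1, .. }

main :: IO ()
main = do
  let u = User "Ada" 36
  putStrLn (greet u)
  putStrLn (greet (birthday u))
```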
First of all, I never claimed that libraries didn't use records; certainly nearly all do. I was talking about lenses when I said you don't need to master them to write good code. I'm not sure what it means for a record system to be "pathetic", but although there are issues with records in Haskell, it (a) is improving with new features forthcoming in GHC and libraries, and (b) has not stopped developers from writing millions of lines of robust, practical code. I think the "record problem" in Haskell is a legitimate one but not one which will ever prevent you from writing good code. At worst it's an annoyance.
As to the questions about fixity, this is once again in practice not a big problem. First of all, the notions of operator fixity and association are familiar to all of us since grade school. We all know "please excuse my dear aunt sally" or some similar mnemonic for the precedence rules of the operators of arithmetic. It's this rule that lets us write "a^2 + 2 * b^2 * c^(2 / 3)" instead of "(a^2) + ((2 * (b ^ 2)) * (c ^ (2 / 3)))". In other words, it's a way to make an expression easier to read and parse by removing superfluous parentheses. It's not arbitrary at all, and in fact operator precedence rules are a feature that every other mainstream language has.
As to what semantic benefits there are to fixity, fixity is a purely syntactic notion. It has nothing to do with semantics. Giving your operator an associativity and precedence only affects how a particular expression is parsed, not how it's interpreted or evaluated.
I'd like to mention that your comments so far seem unnecessarily combative and hyperbolic. You use phrases and words like "pathetic", "good luck", "useless flexibility", "zero benefits", "please enlighten me". This injects contention into a discussion where there needn't be any. Furthermore, your statement that you only use Haskell for pure computations with minimal IO makes me question the legitimacy of your criticisms. Certainly, there are countless others who have written all manner of heavily interactive programs, web servers, database interaction, or IO-intensive programs in Haskell. If Haskell isn't your cup of tea, that's fine, but do you really think you're in a position to be leveling such criticisms when your own usage is so restricted?
> How about Haskell's notion of fixity of the symbols/operators/names?
Every language has implicit fixity and precedence for infix operators. What's different about Haskell is that you can actually define infix operators yourself. And if you're going to define your own you need to be able to also define the precedence of your operators. Fixity has always been there. You just didn't realize it.
>>Every language has implicit fixity and precedence for infix operators. What's different about Haskell is that you can actually define infix operators yourself.
It seems I didn't make my point clear enough. I am asking: what is the semantic benefit, for the programmer, of explicit fixity and the ability to change fixity at will?
The whole notion of explicit fixity in Haskell exists because of Haskell's fetish (pardon me for using this term again) for symbols instead of function names with fixed, implicit fixity. They needed to support explicit fixity only because they wanted programmers to be able to define operator symbols arbitrarily, with arbitrary fixities.
So, now when you wish to read someone's code, then other than dealing with the inherent complexity present in their code, you also have to deal with the intentionally inflicted accidental complexity (to paraphrase Fred Brooks) arising from their choice of symbols with arbitrary fixities. [1]
IMHO, Haskell adds to accidental complexity in this way, instead of curtailing it, just to satisfy the urge to use arbitrary symbols.
Let me explain it a bit further: to read someone's code, first I have to see what symbols they have defined and what fixity they have ascribed to each of them. Now I have to carry all that additional, unnecessary cognitive load in my brain just to be able to get the hang of their code, with their fixity rules associated with their symbols.
Why does any programming language have infix operators? Because they make some things easier to read. Fixity and precedence are not accidental complexity. They are the essential complexity of having infix operators. In order to evaluate "3 * 4 ^ 3 ^ 2 / 4 / 2 + 5 * 8" you must define the fixity and precedence of the operators involved. You don't usually have to think about defining these things in most mainstream languages because the language defines the fixity and precedence of those operators for you, to give the behavior you expect. You still have to learn it, though. Do you know how Python evaluates `3 * 4 ** 3 ** 2 / 4 / 2 + 5 * 8`? In my first programming class, long before Haskell was created, the textbook had a C operator precedence chart. The concept is not new to Haskell. Haskell has just given you control over something that has been there the whole time.
Haskell lets you make your own infix operators so that you can get the same benefits in other domains than the ones that the language designers decided to provide for you. Therefore it follows that Haskell also needs to allow you to specify the fixity and precedence so that your operators will fit together in the way that is appropriate for your domain.
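A sketch of what that looks like in practice; the `<+>` operator here is invented purely for illustration:

```haskell
-- A hypothetical operator: append two strings with a space between.
(<+>) :: String -> String -> String
a <+> b = a ++ " " ++ b

-- Declaring it infixr 5 makes chains group to the right,
-- matching the fixity of (++) itself:
infixr 5 <+>

main :: IO ()
main = do
  putStrLn ("a" <+> "b" <+> "c")   -- parsed as "a" <+> ("b" <+> "c")
  -- The built-in arithmetic operators work the same way:
  -- (^) is infixr 8, (*) and (/) are infixl 7, (+) is infixl 6, so
  print (3 * 4 ^ 3 ^ 2 / 4 / 2 + 5 * 8 :: Double)
  -- ...is parsed as ((3 * (4 ^ (3 ^ 2))) / 4 / 2) + (5 * 8)
```

Without the `infixr 5` declaration the operator would default to `infixl 9`, which parses chains differently; that is the whole point of letting library authors state fixity explicitly.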
> Let me explain it a bit further: to read someone's code, first I have to see what symbols they have defined and what fixity they have ascribed to each of those. Now I have to keep all that additional unnecessary cognitive load in my brain to just be able to get a hang of their code with their fixity rules associated with their symbols.
The same is true of C, Java, Python, etc. The only difference is that those languages have a closed set of operators while Haskell's is open. There is a small amount of cognitive overhead, but I think you're greatly exaggerating it. In six and a half years of professional Haskell development I can count on one hand the number of times this has been an issue. And it can be trivially resolved by adding parentheses or by decomposing the expression into separate expressions.
> Therefore it follows that Haskell also needs to allow you to specify the fixity and precedence so that your operators will fit together in the way that is appropriate for your domain.
This is the key point. Operators drawn from my target domain probably already have well established fixities. Forcing my expressions to look radically different than what's in the textbooks and papers discussing the domain in which I'm working is itself introducing incidental complexity.
Haskell has its own share of shenanigans. Pragmas are the most blatant one, but there's also package management, the cumbersomeness of some libraries, seq, and the sheer number of operators (specifically the functorial ones).
Can you expand on all of these complaints? I've been using Haskell for about 9 years and none of these are things I've ever had a complaint about, except package management, for which I find stack's solution works incredibly well in practice.
Which string to use?
String, ByteString or Text or something else?
How do you convert between the zillions of String types, which are, it seems, completely incompatible even though they provide similar functionality? So much for the support of abstraction.
Which library is good for doing X?
It seems that, as with String, Haskell provides too many experimental and/or incompatible and/or immature libraries for doing many trivial things.
Then there are issues of laziness combined with IO, and you get a `seq` shenanigan.
Then there are issues of laziness combined with the inability to profile/debug, and you get an `analysis` shenanigan.
Then there are issues of laziness combined with the inability to handle exceptions without the great grand-daddy-catch-all-bad-things IO monad, and you get an `IO monad exceptions` shenanigan.
So, there are a lot of Haskell shenanigans, too. Take your pick.
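For what it's worth, the `seq` issue usually amounts to laziness accumulating unevaluated thunks. A minimal sketch, where `foldl'` is just `foldl` with `seq` applied at each step:

```haskell
import Data.List (foldl')

sumLazy, sumStrict :: [Int] -> Int
sumLazy   = foldl  (+) 0   -- builds a chain of thunks; can blow the stack
sumStrict = foldl' (+) 0   -- forces each intermediate sum as it goes

-- foldl' is roughly:
--   foldl' f z (v:vs) = let z' = f z v in z' `seq` foldl' f z' vs

main :: IO ()
main = print (sumStrict [1 .. 1000000])
```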
I've never personally understood the exaggeration people use when talking about string types in Haskell. You say "zillions", but there's only really five. Five is the most disappointing "zillions" I've ever encountered.
There's strict and lazy ByteString, which you should use whenever you're working with buffers of bytes that don't have any associated encoding and aren't text.
There's strict and lazy Text, which you use for human-readable text that's been decoded from some specific encoding, like ASCII or UTF-8.
There's String, which you use when interacting with an API that requires use of String, like anything in the language standard.
It's certainly some amount of complexity, but it's not anywhere near as complex as I keep seeing claimed, unless I'm missing something. It's certainly ugly that we still have to deal with String, and that's a notable wart on the language, but I rarely even think about it.
One guess I've had about part of the confusion is that ByteString has the word "String" in it, which might make people assume that it's for text. A better name would be "Buffer" or "Bytes". I've also speculated that maybe part of the complexity is having to explicitly encode and decode to get Text, instead of the implicit coercion you get in other languages that freely mix bytes and text.
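For the record, the conversions themselves are short, if a little verbose. A minimal sketch using the standard `text` and `bytestring` packages that ship with GHC:

```haskell
import qualified Data.Text as T
import qualified Data.Text.Encoding as TE
import qualified Data.ByteString as BS

main :: IO ()
main = do
  let s  = "hello" :: String
      t  = T.pack s            -- String -> Text
      bs = TE.encodeUtf8 t     -- Text -> ByteString (you must pick an encoding)
  print (BS.length bs)
  putStrLn (T.unpack (TE.decodeUtf8 bs))  -- ...and back again
```

The explicit `encodeUtf8`/`decodeUtf8` step is the "complexity" people hit: Haskell refuses to pretend that bytes and text are interchangeable.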
Edited to add: it's been pointed out to me that "zillions" might not have actually been a claim that there's enough string types actually keep track of them, but is probably just hyperbole to express some frustration. If that's the case, my apologies for my failure at reading comprehension.
The optimal number of core string types for a language to have is one.
Python got to two, and the community decided that was enough of a PITA to merit a breaking change in the language in order to get back down to one. C++ also has two, but we put up with it because it's C++ so really we're just thankful it's not three or four.
Five is so far beyond the pale that it is absolutely reasonable to hyperbolize it as "zillions".
ByteString isn't really a "string" type; it's a byte buffer. It's not about text. I think that it's entirely reasonable to have a dedicated "Text" type that works with decoded human text, separate from buffers of bytes. I also think that it's entirely reasonable to have lazy and strict variants of core data types; they have very different time/space tradeoffs. It's definitely frustrating that the core language definition has a data type "String" that's a pretty bad choice for almost any application (it's a linked list of characters), and that's a legitimate problem, but I think an accurate characterization of the real problem with string data types in Haskell is much more like "You should be able to use Text everywhere to deal with decoded unicode text, but for unfortunate legacy reasons you have to use String to interact with large parts of the standard library".
Treating all bytes as if they happen to accidentally represent utf-8 encoded unicode text would be a big mistake; there's significant advantages to representing text and byte buffers separately. They are very different things.
Choosing to support only strict or only lazy handling of byte buffers or text would be quite unfortunate; there are significant advantages to both for different algorithms and use cases, and encoding the difference in the type system seems entirely reasonable to me.
I'm curious which of these you disagree with. Would you prefer that Text and ByteString be merged into one data type, so that the compiler doesn't consider it a mistake to treat arbitrary bytes as if it were text without specifying any encoding? Would you prefer that Text and/or ByteString discard support for lazy representation, or for strict representation?
All the other types provide specific functionality that you won't get otherwise. (String is there only because it is easy to use.) If you really don't care about that extra functionality, just stick with strict Text and be done with it.
Now, if the perfect number of string types is one, can you name any language with that number? By the way, Java also has two string types, as does .NET. Python still has two types; they just don't coerce into each other anymore. And I've probably seen a dozen C++ string types already.
Those aren't string types, they're buffers for platform-dependent system interoperation. When used, the goal is to convert them to String as soon as possible.
If one considers those to be string types, then Vec<u8>, Vec<u16>, and Vec<u32> would be considered string types as well (in addition to both fixed-size arrays and unsafe pointers to the same).
I have found the same to be true whenever I've tried to do anything in Haskell, especially with regards to error handling. I would try to decide on a specific style of handling errors, but the libraries I depended on maybe did it differently. So there ended up being a lot of boilerplate code to wrap other libraries to my own particular style.
A lot of the problems I encountered probably boiled down to inexperience on my part, and perhaps not understanding idiomatic ways of doing things in the language. But another part of it is that I think the community has not settled on a set of common idioms.
There are things about Haskell that I absolutely love. The expressiveness is amazing. Chaining a monadic sequence can feel magical at times. Treating errors as just values is nice. But what do you do when the libraries that you depend on throw exceptions instead of using Either?
And those awful compiler errors can be so, so frustrating. But I will return to you again one day, dear Haskell.
> But what do you do when the libraries that you depend on throw exceptions instead of using Either?
tome's response is amusing, but assuming you need to keep getting work done and aren't interested in forking the library...
You wrap the library in a shim that forces the return values, catches anything that's thrown, and returns Either. If it's a library that you only use in one or two places, that shim might be inlined.
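A minimal sketch of such a shim; `parsePort` is a made-up stand-in for a throwing library call, and `evaluate` is what forces the return value so that lazy exceptions are caught inside the shim rather than escaping to the caller:

```haskell
import Control.Exception (SomeException, try, evaluate)

-- Wrap a throwing IO action so it returns Either instead.
safely :: IO a -> IO (Either SomeException a)
safely action = try (action >>= evaluate)

-- Hypothetical library call that throws on bad input
-- ('read' throws a lazy exception when the parse fails):
parsePort :: String -> IO Int
parsePort s = return (read s)

main :: IO ()
main = do
  ok  <- safely (parsePort "8080")
  bad <- safely (parsePort "oops")
  print ok                           -- Right 8080
  case bad of
    Left _  -> putStrLn "caught the lazy exception"
    Right n -> print n
```

Without the `evaluate`, `try` would succeed and hand back an unevaluated thunk that explodes later, in the caller's code, which is the usual gotcha with this pattern.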
It's not to say Haskell is bad, though. I'd be a fool to say so. The above comment is mainly to point out that Haskell has its shortcomings and a lot to improve, and that most of the time the proponents of Haskell I have seen tend to downplay these shortcomings and avoid talking about them.
In fact, I have learned a lot of good programming practices/principles by spending time learning Haskell. I do apply some of those in my projects in other languages.
JS runs natively on the web, it embraces multiple paradigms, it has a huge community, and you can get a job in almost any city on earth if you know how to use it. Haskell may be "better" as a language, (just as Esperanto may be superior to English) but if you want the widest range of opportunities in your life and not just in your language, Javascript/English beats Haskell/Esperanto.
Why not help bring some of Haskell's concepts to a more relevant language instead of bemoaning the entirely rational decisions made by millions of junior and senior developers?
Edit: the rabid insularity of the Haskell community definitely doesn't help. The Python community was dissatisfied with JS and created Coffeescript, and as of ES6 now everyone can enjoy the fat arrow syntax. Less pleasantly, the Java community got class syntax added to ES6, but at least they're trying to contribute. I'm sure Haskell has plenty of really cool things to bring to the table, but I've never heard of any of them, because all the Haskell community wants to do is talk about how JavaScript sucks instead of embracing the functional side of it.
Firstly, that's a poor analogy—English is a much superior language to Esperanto in terms of information density per second of speech, in addition to being more expressive (it's easier to talk about a wide variety of topics and abstractions). Possibly more importantly, it takes a lot of people to make a spoken language useful for doing business, but far, far fewer for a computer language.
> instead of bemoaning the entirely rational decisions made by millions of junior and senior developers
I think people are bemoaning the stupid shit and not the entirely rational decisions. JS makes the former a bit easy.
> Python community was dissatisfied with JS and created Coffeescript
Nitpicking, but the first CoffeeScript compiler was written in Ruby and the language was largely inspired by Ruby.
> the Java community got class syntax added to ES6
Huh that's a new one
Pretty sure that's not just the Java community, given that JS developers have been independently inventing class libraries (starting with Prototype, if not earlier) for ages.
> I'm sure Haskell has plenty of really cool things to bring to the table, but I've never heard of any of them
Underscore, lazy.js, and other popular JavaScript libraries have some Haskellisms; LiveScript is a fork of CoffeeScript that used to be moderately popular and very Haskelly; React takes a lot of ideas from Haskell; immutable.js is quite Haskellian…
I think you're just not looking.
> all the Haskell community wants do is talk about how JavaScript sucks
You can drive a go-kart on the highway, and you can keep modding your go-kart, but at some point you might want to not be driving a fucking go-kart on the highway.
It is almost impossible for the Haskell metadata widget I described to have bugs. But I've seen production bugs in the corresponding JS code twice in the last few months. You simply cannot get the kind of correctness guarantees in JS that you have in Haskell.
> Pedantic, arrogant, and self-contradictory combined with poor reading comprehension and even worse social skills is the stereotype of the Haskell fanatic, and you're living up to it perfectly.
> the rabid insularity of the Haskell community definitely doesn't help. The Python community was dissatisfied with JS and created Coffeescript, and as of ES6 now everyone can enjoy the fat arrow syntax. Less pleasantly, the Java community got class syntax added to ES6, but at least they're trying to contribute. I'm sure Haskell has plenty of really cool things to bring to the table, but I've never heard of any of them, because all the Haskell community wants to do is talk about how JavaScript sucks instead of embracing the functional side of it.
Since you mentioned CoffeeScript, Elm seems like a particularly relevant example of something that came out of the Haskell community and which compiles to JavaScript.
I think you have a negative impression of both Haskell and its community which isn't necessarily justified. It's a little insular, but not to the extent that you're suggesting here. PureScript is another language that Haskell has almost directly spawned. It's a lot more complex than Elm, like Haskell itself, but has quite a few brilliant insights of its own.
> Since you mentioned CoffeeScript, Elm seems like a particularly relevant example of something that came out of the Haskell community and which compiles to JavaScript.
Let's not forget Purescript[0], which is looking to be the spiritual successor to Haskell, and runs on node.
While I can't say that I am a big fan of the Haskell community, I fully understand why they see that extending Javascript is never going to work out: you can't just bolt Hindley-Milner onto a language with prototypal inheritance, so the functional part of Javascript just doesn't help at all. This is why instead, for the web, they have Purescript, which compiles down to Javascript but doesn't look like Javascript.
Not quite coming from the Haskell community, but heavily inspired by parts of Haskell is Elm, which you might have heard about. It has the best compiler errors ever, it takes immutability seriously, and is far nicer IMO than either Purescript or Javascript for web development.
It's really an issue of types. Languages like Java, Python and Javascript don't have quite the same approach to types, but their worldviews there are not that different. Anything coming from the ML family just isn't going to translate, and that is going to happen regardless of how insular the communities might be. Existing Javascript features actively make most of the things a Haskell programmer would want just not work at all. This is why you hear them talk about how Javascript sucks: Everything they'd want involves taking things out first, and that is never going to happen.
So don't blame it on the community here, bad as some elements might be: the differences just cannot be negotiated away. You might as well ask people to open their minds and breathe carbon dioxide and sulfur so they can go visit you on Venus: it's a barrier that is too hard to be worth crossing in either direction. Trying to add Javascript or Python features to Haskell would get you in a similar boat.
Not that I know much about it, but http://www.purescript.org certainly looks like a kind of "Coffeescript for Haskell"? Or is that language too different from JS for that purpose? Though that might just be because Haskell is quite different from JS.