I only remember one situation over the past 5 years that we had a performance issue with Haskell, that was solved by using the profiling capabilities of GHC.
I disagree that performance is hard to figure out. It could be better, yes - but it's not that different than what you'd get with other programming languages.
Until you have to solve a (maddeningly common) space leak issue. That's a problem unique to lazily evaluated languages (basically Haskell) and is godawful to debug.
It reminds me of solving git problems... you suddenly find yourself having to tear back the abstraction layer that is the language and compiler and start thinking about thunks and so forth.
It's jarring and incredibly frustrating.
I am not experienced enough with Haskell to know whether peeking under its hood involves a more complicated model or not. It might be more frustrating. But it's certainly not a unique experience - its costs are just less distributed over other runtimes.
Maybe you meant C++? You see no complaints because no one uses it anymore. Anything that can be done with an easier language is done with an easier language. The hardcore C++ performance-critical code is left to a few veterans, who don't complain.
That's a funny way to put it. It's more like the difference between getting results or abandoning the thing altogether due to exploding cost of required effort.
C's model requires you to understand the machine model. Haskell presumably requires you to understand the machine model (though less thoroughly) but also the compiler's model. That's a little more, but comparable. So complaining only about the Haskell runtime just seems ironic to me.
You... can't be serious. That's common to you?
"Because malloc and its relatives can have a strong impact on the performance of a program, it is not uncommon to override the functions for a specific application by custom implementations that are optimized for application's allocation patterns."
I expect high performance games are the most common exception, but they represent a fraction of the C/C++ in the wild.
Space leaks in Haskell, on the other hand, are unfortunately relatively easy to introduce.
Hm, I've been making useful things with Haskell for a couple years including quite a few freelance projects and haven't encountered many space leaks.
Definitely not enough to say they are maddeningly common, or even enough to say they are common.
My experience is that actually fixing them can be incredibly difficult, hence my comment about needing to understand the gory details about how the runtime evaluates lazy expressions.
Heck, that post even uses the phrase "Attempt to fix the space leak" as it's often not an obvious slam dunk. Sometimes it even devolves to peppering !'s around until the problem goes away.
My experience differs, FWIW. If you know where you're creating too many thunks, and you force them as you create them, they don't accumulate.
Making sure you actually are forcing them, and not simply suspending a "force this", is probably the trickiest bit until you're used to the evaluation model.
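That "suspending a force" pitfall can be sketched in a few lines. The `sumLeaky`/`sumStrict` helpers below are illustrative, not from any particular codebase, and GHC's optimizer may rescue the leaky version at -O:

```haskell
-- Wrong: the `seq` is buried inside the argument thunk, so it only runs
-- when that thunk itself is forced much later; the chain still builds up.
sumLeaky :: Int -> [Int] -> Int
sumLeaky acc []     = acc
sumLeaky acc (x:xs) = sumLeaky (acc `seq` acc + x) xs

-- Right: force the accumulator *before* recursing, so at most one
-- unevaluated addition exists at any time (this is what foldl' does).
sumStrict :: Int -> [Int] -> Int
sumStrict acc []     = acc
sumStrict acc (x:xs) = acc `seq` sumStrict (acc + x) xs

main :: IO ()
main = print (sumStrict 0 [1 .. 1000000])
```

Both compute the same result; the difference only shows up in the heap profile.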
Translation: if you've internalized the way Haskell code is compiled and executes, so that you can easily reason about how lazy evaluation is actually implemented, you can solve these problems.
If not, it devolves to throwing !'s in and praying.
Which is basically my point.
If I don't have a hope of solving common space leaks without deeply understanding how Haskell is evaluated, that's a real challenge for anyone trying to learn the language.
The only time I can think of where that's not the case is when microoptimizing for performance, where the exact instructions being produced and their impact on pipelining and cache behaviour matter. But that's in general far more rare than one encounters space leaks in idiomatic Haskell.
Heck, one just needs to read about two of the most common Haskell functions to hit concerns about space leaks: foldl and foldr. It's just part of the way of life for a Haskeller.
There's simply no analog that I can think of in the world of eagerly evaluated languages that a) requires as much in-depth knowledge of the language implementation, and b) is so commonly encountered.
The closest thing I can come up with in a common, eager language might be space leaks in GC'd languages, but they're pretty rare unless you're doing something odd (e.g. keeping references to objects from a static variable).
We typically don't evaluate the efficacy of a language based on someone who is missing fundamental, well-known concepts about the language.
Also, I don't think folks "ignore the evaluation strategy for eagerly evaluated languages". They simply learn it early on in their programming experience.
Oh, I'm not "taking issue". This isn't personal. It's just my observations.
And yes, that one needs to explain the consequences of lazy evaluation and the potential for space leaks to a complete neophyte to justify foldr/foldl is literally exactly what I'm talking about! :)
Space leaks are complicated. And they're nearly unavoidable. I doubt even the best Haskeller has avoided introducing space leaks in their code.
That's a problem.
Are you saying it's not? Because that would honestly surprise me.
Furthermore, are you saying eager languages have analogous challenges? If so, I'm curious what you think those are! It's possible I'm missing them because I take them for granted, but nothing honestly springs to mind.
- A reference is kept alive so that a computation can't be streamed. Easy to find with profiling, but fixing it might make the code more complex. Also, if you have a giant static local value, GHC might decide to share it between calls, so it won't be garbage collected when you'd expect.
- Program lacks strictness so you build a giant thunk on the heap. This is probably what you think of when talking about dropping !'s everywhere. I don't find it that difficult to fix once the location is known but figuring that much out can be seriously annoying.
- Lazy pattern matching means the whole data has to be kept in memory even if you only use parts of it. I don't think I have ever really run into this but it is worth keeping in mind.
- I have seen code like `loop = doStuff >> loop >> return ()` several times from people learning Haskell, including me. Super easy to track down but still worth noting, I guess.
Building giant thunks is the only one where you really need some understanding of the execution model to fix it. 99% of the time it is enough to switch to a strict library function like foldl', though.
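The foldl/foldl' case looks like this (a standard sketch; with optimizations on, GHC's strictness analysis may save the lazy version anyway):

```haskell
import Data.List (foldl')

-- foldl builds the chain (((0+1)+2)+3)+... as one giant thunk and only
-- evaluates it at the very end; on a big enough list that chain can
-- exhaust the heap or blow the stack when it is finally forced.
leakySum :: Int
leakySum = foldl (+) 0 [1 .. 1000000]

-- foldl' forces the accumulator at every step, so it runs in constant
-- space and produces the same answer.
strictSum :: Int
strictSum = foldl' (+) 0 [1 .. 1000000]

main :: IO ()
main = print strictSum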
To be clear, I agree with this. Easy to fix once you know where it is, if you're competent in the language. Occasionally very hard to know that.
More precisely, unique to significant use of laziness, which is going to (obviously) be more common in lazily evaluated languages but laziness is supported elsewhere.
I do say it's not that big of a deal: it almost always works out OK, and at the end you optimize the inner loops by looking at profiles, like in any other project.
But when using GHC, I have indeed sometimes run into situations where I expect something to be fast when it is not (e.g., `ByteString.map (+ value)` is incredibly slow compared to a pseudo-C loop).
I also did find a bona fide performance bug in GHC https://ghc.haskell.org/trac/ghc/ticket/11783
We had an issue with the Data.Text package and OverloadedStrings in 7.10.2 which caused extremely slow compilation times and filed a bug report for that, which was solved for 7.10.3.
That situation sounds like you needed a ByteString Builder to get comparable performance.
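A rough sketch of that approach using `Data.ByteString.Builder` from the bytestring package (the `offsetBytes` helper is hypothetical, and whether it actually beats `BS.map` depends on the workload and GHC version):

```haskell
import qualified Data.ByteString as BS
import qualified Data.ByteString.Lazy as BL
import Data.ByteString.Builder (toLazyByteString, word8)
import Data.Word (Word8)

-- Offset every byte via a Builder: each byte is emitted into a chunked
-- output buffer instead of allocating intermediate strict ByteStrings.
offsetBytes :: Word8 -> BS.ByteString -> BS.ByteString
offsetBytes n =
  BL.toStrict . toLazyByteString . BS.foldr (\w b -> word8 (w + n) <> b) mempty

main :: IO ()
main = print (offsetBytes 1 (BS.pack [1, 2, 3]))
```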
I even have some "real C" in my Haskell code to handle some inner loop stuff.
SQream DB is a GPU SQL database for analytics. Everything was written in-house - from the SQL parser all the way down to the storage layer.
We're designed to deal with sizes from a few terabytes to hundreds of terabytes, and we use the GPU to do most of the work.
It is the compiler (written in Haskell) however, that decides about specific optimizations and deals with generating the query plan.
Our SQL parser and typechecker is actually BSD3 licensed, if it's interesting: https://github.com/jakewheat/hssqlppp
I don't like having to be that guy, but your landing page hijacks scrolling (which always causes problems; I noticed first because it broke ctrl-scroll zooming), downloads a 30MB video in a loop, forever, takes a significant amount of CPU even when the video isn't in view and the tab is not active, and despite having almost no content manages to take far more memory than any other non-email tab I have open.
I've passed on your comments in any case, but know that we're in the process of rebuilding the website from scratch.
We too are currently working on some large projects for enterprises...
We will be releasing a community version on AWS and Azure in the near future which should be cheap or free, other than the instance cost on the respective cloud provider.
FWIW, HsSqlPpp also supports various dialects.
In SQream DB we use a custom dialect, while HsSqlPpp is mostly Postgres.
It was relatively simplistic - along the lines of "things tested for equality must share a type". It was also focused on domain-relevant types, not SQL type (/representation).
> FWIW, HsSqlPpp also supports various dialects. In SQream DB we use a custom dialect, while HsSqlPpp is mostly Postgres.
Nice. It'll be interesting to see where our implementation choices differed, and what we can learn from each other :)
Does your list of other programming languages include C/C++/asm?
I find Elm exciting, and with the prevalence of React/Redux these days, a lot of front end developers would be wise to familiarize themselves with the origin of many of the concepts borrowed from Elm.
It drives me crazy that Microsoft doesn't push it harder from an investment perspective. They don't even write books about it :)
I've been meaning to try Elm since I've been playing around with Haskell more and more. Purescript is relatively similar but allows you to build more than frontends. It seems Elm has grown quite a lot in popularity though.
The only problem, from my experience trying to use it in a project, is that since it's alpha software, a) it's changing so fast that APIs/libraries are very unstable - to the point of being almost unusable if you try to follow newish releases (this was 6-8 months ago; I hope/expect stability has improved) - and b) you have to be intimately familiar with Haskell.
As with what the OP's article describes for Haskell, the documentation situation with PureScript is even worse: the documentation is 90% "hard" and 10% "soft". Which is fine if you know Haskell, or even a bit of Elm, but it's very tough coming in as a newbie to both Haskell and PS.
Monads then seemed much easier to understand after Elm and reading this: http://adit.io/posts/2013-04-17-functors,_applicatives,_and_...
I always had a hard time wrapping my head around functional programming, but elm was easy to understand and shows off the strengths of a functional approach.
Here's an exciting example: https://github.com/dailydrip/firestorm
For other haskellers, Purescript seems to solve this problem.
Is Haskell more academic in nature or used heavily in Production environments?
Is there an application sweet spot/domain (say Artificial Intelligence/Machine Learning, etc) where it shines over using other languages (I am not talking about software/language architectural issues like type systems or such)?
I have no experience with Haskell but do use functional languages such as Erlang/Elixir on and off.
State of the Haskell ecosystem: https://github.com/Gabriel439/post-rfc/blob/master/sotu.md
The topics are roughly sorted from greatest strengths to greatest weaknesses.
Each programming area will also be summarized by a single rating of either:
Best in class: the best experience in any language
Mature: suitable for most programmers
Immature: only acceptable for early-adopters
Bad: pretty unusable
Take out the programs that are for writing code - GHC, ShellCheck, darcs, etc. - and what are you actually left with? git-annex, pandoc, xmonad. Is hledger worth knowing about if you don't care what language it was written in? (Maybe it's amazing; it's the first I've heard of it.)
Whenever I bring this up, people find it upsetting. The Haskell community might need to get past both of those things. 1.0 was 27 years ago. So many fabulous programmers have gone down that road for entertainment or enlightenment. So if Haskell is a language that doesn't suck for writing apps, where are your damn apps? No really, where are they?
For example, I think many would struggle to name many open source user applications written in Clojure as well.
However, the work also includes a runtime platform that is more low-level, including building our own container system, talking to the Linux kernel, using cgroups/resource management, and distributed message passing - areas where languages such as Go have found a niche, and may be classified as Real World.
However for us, Haskell's type-safety, runtime performance, and extensive ecosystem have been a boon even in this domain. We effectively use it as an (incredibly powerful) general-purpose language here, and it's worked more than fine.
We're currently at around 15,000 lines of code with a team of 5 Haskellers, and it hasn't really been a problem regarding performance, understanding the codebase, or with newcomers to the team.
(plug - we're at https://www.github.com/nstack/nstack and are hiring more Haskellers)
For the massively parallel workloads you find in data science, it seems like you'd benefit a lot from the wealth of container orchestration tools around Docker (swarm, Rancher/Cattle, Kubernetes) in order to easily scale out your functions. Especially when many companies already have these set up for their more vanilla applications.
This is an example I've seen that can leverage a docker swarm for invoking functions, loosely modeled after AWS Lambda: https://github.com/alexellis/faas
We're really lucky also to have one of the main `rkt` developers joining us soon to work on the container side.
Also, COGENT for lowest-level stuff being wrapped for use in Haskell somehow might be interesting. Used on ext2 filesystem already.
I haven't seen COGENT before - will take a look over the weekend - thanks!
Weirdly, what Haskell is good at is generality. Which is much the same as for LISP.
I'd say the Haskell community remains academic/hobbyist, but that's not to say there aren't plenty of people doing Real Work in it.
It is a very good language once the type system makes sense and you can reliably reason about the lazy nature of it.
For instance, if you were to write a CAD program, you probably want to be spending your time thinking about algebra and geometry, not about little details like how an array is laid out in memory or when memory needs to be freed. The latter might be important for squeezing the last bit of performance out of a tight loop, but a lot of problems are hard enough that just coming up with any solution that works is enough trouble on its own.
It's also good for problems where you want pretty good performance but don't have to be as fast as C, or where you want to take advantage of multi-core parallelism but don't want to deal with the usual complexities of threaded code.
Persistent data structures are really nice for certain kinds of applications, such as web services, where you have some complex internal state that needs to be updated periodically. Error handling is really easy when you can just bail out when something fails, since none of the state was ever modified-in-place and your old pointer to the old state is still valid.
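A small sketch of that property using `Data.Map` from the standard containers package (the "count" key is just an illustration):

```haskell
import qualified Data.Map.Strict as M

main :: IO ()
main = do
  let state0 = M.fromList [("count", 1 :: Int)]
      -- "Updating" produces a new version; state0 is untouched, so a
      -- handler that fails midway can simply keep using state0.
      state1 = M.insert "count" 2 state0
  print (M.lookup "count" state0)  -- the old snapshot still says Just 1
  print (M.lookup "count" state1)  -- the new version says Just 2
```

Nothing was modified in place, so error handling reduces to discarding the new version and keeping the old pointer.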
The powerful static type checker makes Haskell a good fit for applications where it's really important that you calculate the right answer, or where you don't want to spend much time tracking down weird bugs.
Haskell isn't so good at applications where you need hard real-time guarantees or you want to have very fine control over the memory layout of your data structures or you want to be able to easily predict CPU or memory usage.
I'd say that people mostly do not know what is running in industry. It could be any share of anything. We get a hint looking at job offers; Haskell is small there, but not unheard of.
On the sweet spot, Haskell is great for modeling complex domains; it's great for long-lived mostly maintenance stuff, since refactoring is so easy; it tends to generate very reliable and reasonably fast (faster than Java/.Net) executables, but hinders complete correctness proofs and has some grave speed issues that can ruin your day if you hit them.
I would like to know what it's like to manage a team of Haskell developers. I'd expect it to be better than Java/.Net at forcing safe interfaces between developers (thus making it reasonably easy to survive a bad developer on a team), but I never saw it in practice.
Well, if anyone does know what's out there, then the HN crowd would be a pretty good suspect in that regard.
I personally have done my fair share of consulting and prof services engagements - dozens of them across all kinds of industries - I've never seen a Haskell shop.
Java, Python, JS, Golang, Scala, Clojure, .Net, C, PHP - definitely out there in the field. Haskell - not so much.
Compare with how many Java consultancies you think you can find with a quick search.
I don't think the Haskell development labor market is nearly liquid enough to rely on consultants. For full time jobs the picture is very different.
Then he'll have to look up some that he can't name off the top of his head? I can't name any Java consultancies, but I'm nonetheless confident that there are plenty...
With regard to Haskell, there are a lot of people interested in Haskell work compared to the number of positions presently available.
HN is a comically bad barometer for anything other than what's trendy in norcal.
I'd argue Python is much more prevalent with researchers than Haskell.
As for particular places where it shines, I don't know of any in particular. I remember when taking my Declarative Programming course in university my lecturer loved to talk about an experiment the military did where they challenged a few teams to implement some kind of radar system in their chosen language and evaluated them based on time taken, correctness and code complexity (LOC) at the end. The Haskell team performed very very well. I can't find any details on this experiment though...
Probably this one: http://www.cs.yale.edu/publications/techreports/tr1049.pdf
But take it with some sacks of salt -- it's a single research paper. It should be replicated, peer reviewed, etc. to have any merit. Its methodology could be totally crap, for example...
I love it for general back end web development. I can actually solve problems so they stay solved.
How did you convince the client, and why did you choose Haskell for this over, say, .NET, which throws a web service up quicker than it took me to type this? What sort of productivity are you achieving?
At SQream (www.sqream.com), Haskell is used for CLI, SQL parser, SQL language compiler, SQL optimizations and a variety of tools.
We also use Haskell to generate Python code for testing (think: describing a testing scenario and sets of queries, translating into a runnable, predictable Python script that's easy to debug)
Haskell's strength at expressing and transforming AST's, in pure functional, understandable, testable code seems to be the key here.
We also employ QuickCheck rather extensively, which makes testing millions of scenarios really straightforward.
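For readers unfamiliar with QuickCheck, the idea is to state a property and let the library generate thousands of random test cases. The property below is a toy illustration in that spirit, not anything from SQream's actual suite:

```haskell
import Test.QuickCheck

-- A toy property: reversing a list twice should give back the original.
-- QuickCheck generates random lists and checks the claim on each one.
prop_reverseRoundTrip :: [Int] -> Bool
prop_reverseRoundTrip xs = reverse (reverse xs) == xs

main :: IO ()
main = quickCheck prop_reverseRoundTrip
```

For an SQL engine, the analogous properties would be things like "pretty-printing then re-parsing a query yields the same AST".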
That's what drove me to languages like OCaml and Rust, which try to solve this tradeoff in a different way: the same story about strictness and correctness (despite slight differences in the type system), but lazy evaluation is only provided when explicitly asked for.
The cool thing about OCaml is that you can actually reason about performance, as long as you ignore memory issues (memory usage, garbage collector runs, etc.). Rust is heavily inspired by OCaml and allows you to also reason about correctness and performance of memory usage.
If you ignore mem issues you can also reason about the performance of Haskell :)
If we want to reason about perf, I think Rust is currently leading in terms of a modern language that allows perf reasoning.
Really? My guess (based on close to complete ignorance) would be that laziness would make reasoning about performance hard. Can you ELI5 why my guess is wrong?
But all of that is just because data stays in memory longer than expected.
If you ignore that, laziness has a small performance impact, but it's less than the performance impact of interpreting code, which is something many people consider completely acceptable.
That may not be "a performance issue caused by laziness", in your terms, because the laziness isn't causing performance issues. But it's laziness making a performance issue hard to find.
Does my scenario happen? Is it common? Is it easy to analyze/find/fix when it does happen?
Most generally, does laziness make it harder to reason about performance?
Profiling is a bit of a black art in any case. I've seen horrible performance problems in python when we added enough code to cause cache thrashing to a loop. No new code was slow, but the size of it crossed a threshold and things got suddenly bad.
Some performance problems in Haskell are hard to track down, but most are pretty easy to find and fix. It's basically the same as every other language in that way.
The 'memory stays around indefinitely' problem is caused by not cleaning up references to generators. You can make your own xrange() clone in Python easily, and start consuming values, but if you stop before consuming all of them (assuming it's not an infinite generator), that xrange() generator object still lives and has to keep around whatever references it needs to generate the next value if you ask it for one. When your whole language is based around laziness, you might have a lot of these and it may not be clear what references are still around that keep the GC from cleaning them up.
I wouldn't say reasoning about performance is necessarily more difficult, and profiling will light up a slow part of a lazy pipeline just as well as an eager one. My own struggles with laziness in Clojure were all around my own confusions about "I thought this sequence was going to be evaluated, but it wasn't!" which may have led to unnecessary calls to (doall), which forcibly walks a sequence and puts the whole thing in memory.
Because of it potentially hogging memory! Now we were told to ignore mem issues by the commenter. I just point out the contradiction.
Good type safety seems to be an acquired taste.
Somehow reminds me of Structs/Records in Erlang/Elixir and using pattern matching, conditions and guard clauses to enforce validity or logic.
I think it's just a pretty good general purpose language, for the most part? I know that's not very exciting of an answer, but I do consulting for people and I've seen Haskell for everything our clients do, from web development (a lot of web development), compiler development, distributed OLAP style analytic processing on large data volumes, custom languages/DSLs, machine learning, hardware/FPGA design work, desktop software (flight tracking/schedule software, used by on-ground airport personnel, with some incredibly beautiful code), cryptocurrency and cryptography etc etc. There are the occasional research-y things, and we've also just done things like that for random institutions or companies (e.g. prototyping RINA-style non-TCP network designs.)
I wouldn't say any of these were a particular "sweet spot". Though some things like compiler development or web development are pretty pleasurable. The language and libraries make those pretty good. Web development is one of the more popular things now especially. Everyone has to have their thing in a browser now, and there's a fair amount of library code for this stuff now. Definitely one of the stronger things.
You do get the occasional problem that just has this perfectly elegant solution at the core and that's always an awesome bonus. One of my favorites was the aforementioned flight software. It had this very strict set of bureaucratic guidelines but all the core logic ended up having a very nice, pure formulation that ended up being about 8 functions used in a single function-composition pipeline. (We use this sometimes as an example in our training of real solutions for real problems.) A lot of software is much more boring and traditional.
Mostly, it works well and stays out of my way, and lets me solve problems for good. And I really like the language in general compared to most other languages; it hits a very unique sweet spot that most others don't try to touch.
I do admit the sort of high level, abstract shenanigans are fun and interesting too, but I think of it more as a fun side thing. Just this week I literally wrote the documentation for a data type, describing it as an "Arbitrary-rank Hypercuboid, parameterized over its dimension." Luckily I do not think you will ever have to use that code in reality, so you can breathe easy. :)
EDIT: and to be fair, there are definitely some problems. Vis-a-vis my joke, we could all collectively use better "soft" documentation. I think some complaints, like "operator overload extravaganza", are overblown due to unfamiliarity - most real codebases aren't operator soup - but bigger problems, like knowing when to use the right abstraction, are real issues we have a hard time explaining.
I don't think performance is much harder than most other languages, but we definitely don't have tools that are as nice in some ways, nor tools that are as diverse. The compiler is also slow compared to e.g. OCaml. Some parts of the ecosystem are a lot worse than others; e.g. easy-to-use native GUI bindings are still a bit open. Windows is a bit less nice to use than Unixen.
There's a big gap between training in-person vs online/self taught content too, IME.
Time and experience can cover up anything. So this does not say much about Haskell other than it is all negative without time and experience.
> 2. Haskell has some very nice libraries
So does NodeJS and (on an abstract level) Microsoft Word. Libraries are infrastructures and investments that (like time and experience) can cover up any shortcomings.
> 3. Haskell libraries are sometimes hard to figure out
That is simply negative, right?
> 4. Haskell sometimes feels like C++
That is also negative, right?
> 5. Performance is hard to figure out
> 6. The easy is hard, the hard is easy
That is a general description of specialty -- unless he means all hard are easy.
> 7. Stack changed the game
Another infrastructure investment.
> 8. Summary: Haskell is a great programming language.
... I am a bit lost ... But if I read it as an attitude, it explains a lot about the existence of infrastructure and investment. Will overcomes anything.
That is the meat of why Haskell is so great. I've never refactored code as recklessly as I do in Haskell; I just wait for the compiler to tell me what I missed and go back and fix it up. I'd never do that in C, C++, Java, etc.; it'd be suicide.
And while that's still not a great fleshed out explanation, it's a great oversimplification of the symptoms of a programming language with a great type system. And that type system is more or less the only reason to use Haskell.
As other people in this thread have commented, Haskell has fantastic abstract libraries, which let you do abstract things, the most useful of which that I'm aware of/understand is parsing. Parser combinators are a very natural fit for Haskell, and making DSLs with them becomes practically trivial to do.
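To make that concrete, here is a minimal parser combinator core rolled by hand; the `Parser`/`runParser` names are ours, and real projects would reach for parsec or megaparsec instead:

```haskell
import Data.Char (isDigit)

-- A parser consumes a prefix of the input and returns a value plus the rest.
newtype Parser a = Parser { runParser :: String -> Maybe (a, String) }

instance Functor Parser where
  fmap f (Parser p) = Parser $ \s -> fmap (\(a, r) -> (f a, r)) (p s)

instance Applicative Parser where
  pure a = Parser $ \s -> Just (a, s)
  Parser pf <*> Parser pa = Parser $ \s -> do
    (f, r1) <- pf s
    (a, r2) <- pa r1
    Just (f a, r2)

satisfy :: (Char -> Bool) -> Parser Char
satisfy ok = Parser $ \s -> case s of
  (c:r) | ok c -> Just (c, r)
  _            -> Nothing

char :: Char -> Parser Char
char c = satisfy (== c)

many0 :: Parser a -> Parser [a]
many0 p = Parser $ \s -> case runParser p s of
  Nothing     -> Just ([], s)
  Just (a, r) -> case runParser (many0 p) r of
    Just (as, r') -> Just (a : as, r')
    Nothing       -> Just ([a], r)

many1 :: Parser a -> Parser [a]
many1 p = (:) <$> p <*> many0 p

number :: Parser Int
number = read <$> many1 (satisfy isDigit)

-- Combinators compose like ordinary functions: "12+34" parses to 46.
addExpr :: Parser Int
addExpr = (+) <$> number <* char '+' <*> number

main :: IO ()
main = print (runParser addExpr "12+34")
```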
Edit: I think the author takes for granted the general praise that Haskell gets, and was attempting to temper it with his practical experiences using the language.
That's precisely what I was doing at work today in a C# project. I was going particularly crazy today, doing some refactoring with project wide regex replaces rather than leaning on VS/Resharper for everything.
I think it depends a lot on how your project is structured. If you're passing around object everywhere and casting... you're gonna have a bad time, sure. But at that point you're practically using Python or something. If you're using generics and so on properly, then you can lean on the compiler and type system quite a lot.
I work on Java projects and do big re-factorings using IntelliJ, relying on the compiler. So this is a bit project-dependent.
Java chooses two wrong defaults - nullability by default, and mutability by default. If your project consciously chooses to not opt into these defaults, from my experience, you can use javac to guide you in fearlessly making large refactorings.
Which of course, are two defaults that Haskell chooses correctly (which is maybe what you were implying).
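For instance (a minimal sketch; the config map and "port" key are invented), absence in Haskell must be spelled out in the type, and the compiler rejects any code path that forgets the `Nothing` case:

```haskell
import qualified Data.Map.Strict as M

-- There is no null: lookup returns Maybe, so the missing case must be
-- handled explicitly before the Int can be used.
describe :: M.Map String Int -> String
describe env = case M.lookup "port" env of
  Nothing -> "port not configured"
  Just p  -> "port = " ++ show p

main :: IO ()
main = putStrLn (describe (M.fromList [("port", 8080)]))
```

Bindings are also immutable by default, so the "mutability by default" pitfall simply doesn't arise.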
The point about a learning curve is that Haskell is different from most mainstream programming languages.
> > 2. Haskell has some very nice libraries
> So does NodeJS and (on an abstract level) Microsoft Word.
> Libraries are infrastructures and investments that (like
> time and experience) can cover up any shortcomings.
> > 4. Haskell sometimes feels like C++
> That is also negative, right?
I actually like C++ a lot.
It's also one of the most successful programming languages in the history of programming languages, so it has something going for it.
> > 5. Performance is hard to figure out
> That is also negative, right?
Yes, it's a pitfall.
> > 7. Stack changed the game
> Another infrastructure investment.
Yes, tooling matters.
> I actually like C++ a lot.
Haskell is an example of an academic, carefully crafted language, while C++ is more of a practically driven, continuous patchwork (and they are at opposite extremes). I simply find your comparison odd. Given their deliberately different designs, if they end up feeling alike, how can that be positive?
Haskell is also a continuous patchwork, just look at the number of deprecated GHC extensions.
>> Haskell is also a continuous patchwork
> which is the opposite of its pure vision, which is why I can't help think it is a negative
I'm very confused. One second, you're literally saying Haskell was "carefully crafted", and the next, you're saying it's "the opposite of its pure vision"
It's almost as if you're just trying as hard as you can to argue, not caring what stance you're arguing, just so long as you can act like the author is wrong
> It's almost as if you're just trying as hard as you can to argue, not caring what stance you're arguing, just so long as you can act like the author is wrong
Let me write the complete sentence:
([luispedrocoelho commented that] Haskell is also a continuous patchwork ) (which I simply accepted and followed -- even though I am doubtful) which is the opposite of its pure vision, which is why I can't help think it is a negative (regarding OP's point).
I originally commented that the OP's C++ comparison is also negative, right? So this is consistent.
When I say something is carefully crafted, I am referring to its intention. When I accept that someone finds it like C++, or that it is also a patchwork, I accept that as a reality. When the reality runs against its design goal/vision/purpose, that is a negative, IMHO. -- Hope this clarifies.
EDIT: to make it even clearer, I didn't comment that whether a pure design goal or a continuous patchwork reality on its own is positive or negative. That is subjective. However, having a design goal and reality conflict, that can't be positive.
The way that functions can compose in the hands of a gifted developer is truly elegant. That said, I'm not sure it's a skill that translates well to the general development community (and maybe that's fine?).
I highly recommend the lecture notes online from the "Upenn Spring 2013 Haskell" class. Search that phrase and it should be the first hit (am on mobile).
Go through the lecture notes, and do all the exercises in the homework. (Link in purple at the top). It's a night and day difference from LYAH.
And you get to build some nifty real world style projects along the way.
This is simply not true. Time and experience can never make up for a lack of expressiveness, and expressiveness is one of the biggest selling points of Haskell.
In fact, experience can only compensate for complexity and for a lack of discoverability or intuitiveness. Haskell has all three of those flaws, so you see, experience really is needed.
> (#2) So does NodeJS and (on an abstract level) Microsoft Word.
> (#3) That is simply negative, right?
That's the inevitable downside of #2.
> (#6) That is a general description of specialty -- unless he means all hard are easy.
Well, not all obviously. You still can't declare "f = solution to SAT in polynomial time" and be done with that. But the sheer amount of hard stuff that becomes easy is unsettling.
About #8, I'll leave it open. Experience and investment certainly cannot overcome everything, as you claim, but I'm still not decided whether to classify Haskell as "the best available for nearly anything (maybe except when you have a killer library in another language)" or "hell, why can't somebody just come and rewrite it as something _simple_", or both.
Expressiveness is subjective. Experience can alter one's perception.
In fact, time and experience alter the basis of comparisons, from objective comparisons to subjective comparisons.
You mean you can express the idea of taking a browser screenshot (for example), or of producing a publisher-acceptable document, with the same kind of ease (expressiveness, in my dictionary) in Haskell?
Again, without specifics, the comparison does not mean much -- which was all I was commenting on.
> But the sheer amount of hard stuff that becomes easy is unsettling.
Another meaningless subjective word (sheer). I wasn't debasing Haskell. I was commenting on the meaninglessness of the original post.
> Another meaningless subjective word (sheer).
That "The easy is hard, the hard is easy" assessment is inherently subjective, but "subjective" is not the same as "meaningless", even less so when nearly everybody who has experienced it shares the same assessment. (Are you also going to complain about my "overwhelming" above?)
1 - I'd give you a point on subjectivity if you were talking about the difference between a library and an interpreter. But those languages (that is, JS and Basic) are just not expressive enough for this to become a problem.
Subjective on its own doesn't have to be meaningless; it can be subjectively meaningful. However, using a subjective statement to pass as objective support -- that is meaningless. So if OP and you are merely commenting on the state of his mind and yours, that is fine -- and I do learn something in that regard. OP, and several other commenters, did not seem to realize they were substituting their (personal) subjective opinions for objective reasoning; that was what I pointed out, in case it becomes useful (to them).
One of the primary benefits of laziness is to make function composition possible in a wider variety of situations (which also increases expressiveness, at the cost of occasional, though not difficult to avoid, space leaks).
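As a tiny sketch of that point (the function name here is mine, not from the thread): laziness lets composed functions run interleaved over an infinite input, so nothing is computed until it is demanded.

```haskell
-- Composition over an infinite list: `map`, `filter`, and `take`
-- run interleaved, so only as many squares are computed as needed.
firstEvenSquares :: Int -> [Int]
firstEvenSquares n = take n (filter even (map (^ 2) [1 ..]))
```

For example, `firstEvenSquares 3` yields `[4,16,36]`; in a strict language this pipeline would loop forever on the infinite list.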
One joke I've heard is that the ideal Haskell program is 20 lines of imports and one line of Perl.
It happens that I measure expressiveness by the amount of time the author takes to express an idea and/or the amount of time the reviewer takes to comprehend it. That, unfortunately, is very subjective.
APL programmers write very short programs, but they express them at a pace of one character a minute (or less).
What we need to realize is that not all ideas are precise. In fact, most of our ideas are vague to a certain extent. They are still OK as long as the vagueness does not matter to the problem of interest, or is already constrained or implied by the context. So for expressing an idea efficiently, both insufficiency and over-specificity are negatives for expressiveness. To be clear, I don't claim any language is the best in that regard. I believe the language should be suited to the problem (as well as to the experience of the team).
Since you particularly mentioned types, I would suggest that a particular type is not always important in an idea. Take sorting, for example: the types of the items are not intrinsic to the idea. Having to specify the type contributes negatively to expressiveness. However, when performance is a concern for a specific sorting problem, types (as narrowly specified as we can make them) are important. But we should recognize that this is a different idea from the original idea of sorting. So even though the program eventually expresses both the sorting algorithm and the types, being forced to mix the two ideas is a negative for expressiveness.
Haskell's types and purity, for example, are not always essential to a programming idea. Having to take care of these language requirements when they are non-essential makes it less expressive (in those problem domains).
For web dev, the library situation seems very mature to me.
How do you know you're not just a Blub programmer?
The second piece of evidence is indirect. If average APL programmers could write and read programs at the same speed (characters per minute) as average programmers in other languages, Java for instance, then APL programmers would possess a significant advantage in finishing similar programming tasks, given that the typical code size is often a few orders of magnitude smaller. Why is such an advantage not embraced, with APL programmers everywhere? My hypothesis is that average APL programmers program at a much slower speed.
As with any science, I cannot say my hypothesis is conclusive. I am open (eager) to hear and examine any other evidence (and to change my hypothesis if necessary).
Google "cognitive dissonance" and the "sunk cost fallacy."
That is dangerously close to the infamous "if it compiles, then it works" boast, which Haskellers make all the time (while denying that they make it), demonstrating in the process that they don't write real software, where the defects one encounters are very often of a nature such that the program behaves exactly as intended by its authors, but the intended behavior is itself wrong. How does the type system help here?
My comment wasn't glib. I actually think those phenomena quite well explain much Haskell advocacy. Considering how massive an undertaking it is to learn the language and its Byzantinely-complex, PhD-theses-in-disguise libraries (each of which sports a zoo of custom operators) and how small the payoff is, it's unsurprising that those who take the plunge begin zealously encouraging others to do the same, lest their own investment have been for nothing. In a way it's like a conspiracy.
The "if it compiles, it works" thing describes people's experience. It isn't always true and it isn't a valid excuse not to do proper testing, but in Haskell a non-working solution is usually at least broken in a way that makes sense in the context of the problem you're trying to solve. If you program an incorrect solution to a problem, you'll probably get a wrong answer. But a wrong answer is different than nonsense, which is what you get if, say, you write past the end of an array in C.
The Haskell type system is a very good nonsense filter, and probably a majority of programming errors are from telling the program to do something nonsensical. If you filter those out, sometimes what's left is a working program.
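A minimal sketch of that "nonsense filter" idea; the newtypes and function name here are invented for the example:

```haskell
-- Distinct newtypes make unit mix-ups a compile-time error
-- rather than a silently wrong runtime answer.
newtype Meters  = Meters  Double deriving (Show, Eq)
newtype Seconds = Seconds Double deriving (Show, Eq)

speedMps :: Meters -> Seconds -> Double
speedMps (Meters d) (Seconds t) = d / t

-- speedMps (Meters 100) (Seconds 10) type-checks;
-- speedMps (Seconds 10) (Meters 100) is rejected by the compiler.
```

The runtime representation is still just a `Double` in both cases; only the nonsensical call is filtered out, at zero cost.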
A Haskell programmer is unlikely to write a several thousand line program and have it work correctly the first time, but even large programs are written in small pieces. When those small pieces work correctly the first time, it's gratifying.
I think it's especially telling that its community skews so heavily towards this blogger/monad-tutorial-writer dilettante demographic rather than the D. Richard Hipp/Walter Bright 'actually gets real work done' demographic. I know which of the two I'd rather be in. Haskellers are even worse than Lispers in this regard. For the amount of noise about Haskell, you'd expect to see high-quality operating system kernels, IDEs, or RDBMSs written in it by now. Instead its killer apps are a tiling window manager, a document converter, and a DVCS so slow and corruption-prone even they avoid it in favor of Git.
What struck me about your post was how much it sounded like something he would write, in particular the interpretation of facts (a strictness pragma was recently introduced to Haskell) in the most extremely negative way possible (said pragma is admission that Haskell's default evaluation strategy is simply "wrong").
If you are not he, you should look him up, as I expect you two (?) would get along fantastically.
The Haskell community is smaller than others, but there are numerous high-quality projects, including kernel/OS-level projects that you don't mention:
HaLVM, Haskell Unikernels - https://github.com/GaloisInc/HaLVM
The specification model of the sel4 microkernel - https://github.com/seL4/l4v/tree/master/spec/haskell
There are many smaller examples of OS projects in Haskell, but for better examples of production systems you only have to look to areas requiring high assurance such as finance, where a number of large companies use Haskell and other functional languages. (BoA, Barclays, Credit Suisse, Standard Chartered, etc.)
It's true that there are a lot of Haskell blogs focused on research-level category theory, but the concepts there don't need to be used or understood for high-quality, production-ready code. I'm actually not a Haskell guy, and very much of the mindset that you should use the right tool for the job, but being dismissive of languages with unique features, such as being lazy by default, is shortsighted.
Haskell's alien, counter-intuitive evaluation strategy was actually marketed (circa 2010) as a powerful performance enhancer that facilitated optimizations that were difficult if not impossible to do in a strict language like C. Instead, it ironically tends to cause more performance problems than it solves, with hard-to-debug space leaks capable of taking down production systems.
The entire Miranda branch of the PL tree is likely going to prove to be an evolutionary dead end, and Haskell will eventually lose out to Idris or some other strict language. And I don't think it's a coincidence that the strictness pragma (which does not even give you a strict program) was added soon after Idris and the rest started getting traction.
You mean like this? (I linked it the other day.) https://github.com/rigetticomputing/cmu-infix But one might argue that the prefix notation + macros being able to support such a library is just further testament to their use as default...
It's such a good practice that there's -XStrictData (different than the pragma mentioned by GP)
Why must you publicly post something so wrong?
> mfix :: (a -> m a) -> m a
> The fixed point of a monadic computation. mfix f executes the action f only once, with the eventual output fed back as the input. Hence f should not be strict, for then mfix f would diverge.
But why tho?
class Foldable t where
  foldl :: (b -> a -> b) -> b -> t a -> b
With more descriptive type-variable names, the same signature reads:
class Foldable myFoldable where
  foldl :: (summary -> element -> summary)
        -> summary
        -> myFoldable element
        -> summary
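To make the generic signature concrete, here is the same `foldl` instantiated at ordinary types (a sketch, not from the thread):

```haskell
-- Instantiating foldl at t = [], a = Int, b = Int:
-- foldl :: (Int -> Int -> Int) -> Int -> [Int] -> Int
total :: [Int] -> Int
total = foldl (+) 0
```

So `total [1,2,3]` is `6`: the "summary" is the running sum, and each "element" is folded into it from the left.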
Apparently-idiomatic Haskell reads (or, rather, doesn't) about like line-noise/code-golf style Perl to me.
I think this is kind of the problem when you are really high up in abstraction-land. The functions that you're using are so generic that it's hard to say that they operate on anything in particular, except that the arguments fulfill certain properties or laws.
This is a documentation issue, not an implementation issue.
mfix $ \threadId -> forkIO $ do
  -- computation in forked thread, which can refer to its own
  -- ThreadId here, e.g. to kill itself via killThread threadId
mfix, by nature of its laziness, captures the return value of the function passed to it, and passes it to said function.
But the general pattern applies to many things. One other example that immediately springs to mind is e.g. registering a callback while needing the ability to unregister the callback from within itself, using some sort of id returned from the registering function.
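That callback pattern can be sketched roughly like this; `Registry`, `register`, and `unregister` are hypothetical names invented for the example, not a real library API:

```haskell
import Control.Monad.Fix (mfix)
import Data.IORef
import qualified Data.Map.Strict as Map

type CallbackId = Int
type Registry   = IORef (Map.Map CallbackId (IO ()))

-- Hypothetical registration API: store an action under a fresh id.
register :: Registry -> IO () -> IO CallbackId
register reg action = atomicModifyIORef' reg $ \m ->
  let cid = Map.size m in (Map.insert cid action m, cid)

unregister :: Registry -> CallbackId -> IO ()
unregister reg cid = modifyIORef' reg (Map.delete cid)

-- mfix ties the knot: the callback closes over the very id that
-- register is about to return, so it can remove itself when it fires.
registerOnce :: Registry -> IO () -> IO CallbackId
registerOnce reg action = mfix $ \cid ->
  register reg (action >> unregister reg cid)
```

This works because the stored closure only *mentions* `cid`; it is not forced until the callback actually runs, by which time `register` has produced the id. If the closure were strict in `cid`, `mfix` would diverge, exactly as the documentation warns.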
fix f --> f (fix f)
That is, to compute the fixed point of some function f, we give f access to the fixed point (i.e. fix f) in order to compute the fixed point.
Suppose f is strict, and that we try to run this:
fix f --> f (fix f)
--> f (f (fix f))
--> f (f (f (...)))
On the other hand, if f is non-strict, we can evaluate its body without evaluating its argument first.
mfix generalizes fix to monadic functions. When we have monadic functions, we're usually dealing with side-effectful computation, so we want to make sure that we only force the monadic action f once, so that the side effects only run once.
Another question to ask is "what is fix even useful for"? fix is how we introduce general recursion into a language. For example, the lambda calculus has fix, except it's called the Y combinator. Compare:
Y f --> f (Y f)
fix f --> f (fix f)
fix is often used in smaller toy languages to also make them Turing complete, because this form of recursion is straightforward.
While Haskell has recursive function definitions, fix (and mfix more generally) are sometimes useful in their own right.
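For a concrete feel of fix, here is a small sketch (factorial via fix is a standard illustration, not the only use):

```haskell
import Data.Function (fix)

-- General recursion without naming yourself: the function receives
-- "itself" (the fixed point) as its first argument.
factorial :: Integer -> Integer
factorial = fix (\rec n -> if n <= 1 then 1 else n * rec (n - 1))

-- The same knot-tying works for lazy data:
ones :: [Int]
ones = fix (1 :)   -- the infinite list [1,1,1,...]
```

Here `factorial 5` is `120`, and `take 3 ones` is `[1,1,1]`; both rely on fix handing the definition back to itself lazily.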
Basically it's the primitive that allows monadic computations to be written in the same lazy cyclic style as regular values in Haskell. (e.g. `ones = 1:ones` to create an infinite lazy list of 1.)
There isn't a single answer to "why?" any more than there is for monads in general, but as an example, I've been looking into using this abstraction to model circuit graphs.
-- create an input text widget that clears itself on return, and return
-- an event firing on return, containing the text from before the clear
inputW :: MonadWidget t m => m (Event t T.Text)
inputW = mdo  -- mdo (RecursiveDo) lets `input` be used before it is bound
  -- fire the send event when pressing enter
  let send = textInputGetEnter input
  -- textInput whose content is reset on send
  input <- textInput $ def & setValue .~ ("" <$ send)
  -- tag the send event with the inputText value BEFORE the reset
  return $ tag (current $ input ^. textInput_value) send
However, we are creating a text field here, so if we made the value recursive on itself we would be spawning text fields all over the place. MonadFix is a variant of this that splices the effect out, so it is only run once, and then allows the value to be recursive.
I find it useful because it lets me take a recursive structure and a multi-pass algorithm on that structure, which involves side effects, and express it in code as a single pass, without mutation.
This conversion of multi-pass algorithms to a single pass while retaining time/space complexity guarantees is one of the key benefits of laziness in general.
: e.g., an AST
: e.g., type inference, where pass 1 is generating type constraints, and pass 2 is solving the constraints and annotating the tree with the final inferred types
: e.g., generating fresh type variables
The key line is:
(term', t, tenvFinal) <- inferType dictionary tenvFinal' tenv term
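The classic textbook instance of this trick is "repmin": replace every leaf of a tree with the tree's global minimum in a single pass, by lazily feeding the final answer back into the traversal (a sketch with my own type names, not code from the thread):

```haskell
data Tree = Leaf Int | Node Tree Tree
  deriving (Show, Eq)

-- One traversal both computes the minimum and rebuilds the tree
-- with that (not-yet-known) minimum at every leaf.
repmin :: Tree -> Tree
repmin t = t'
  where
    (t', m) = go t                -- m is the minimum of the whole tree
    go (Leaf n)   = (Leaf m, n)   -- refers to the final m lazily
    go (Node l r) =
      let (l', ml) = go l
          (r', mr) = go r
      in (Node l' r', min ml mr)
```

A strict language would need two passes (find the minimum, then rewrite); laziness lets the single pass consume its own final result.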
sieve :: Integral a => a -> [a]
sieve l = sieve' [2..l] []
  where
    sieve' (p:ns) ps = sieve' (filter (\x -> rem x p /= 0) ns) (p : ps)
    sieve' []     ps = reverse ps
Haskell has trade offs.
See e.g. https://wiki.haskell.org/Tying_the_Knot
EDIT: To be specific, I can think of the example of machine efficiency, which can simply be specified literally in C but is uncertain and opaque in Haskell.
Ultimately, if you want imperative style, you can just write imperative code that's well typed in Haskell.
By implying certain logic dependence? It would be rather un-natural, which I assume is the reason for the invention of monad.
> Ultimately, if you want imperative style, you can just write imperative code that's well typed in Haskell.
It is not just a style. There are logical implications. In imperative programming, state is always assumed. That is, the operations are assumed to have different effects when their order is switched. Note this is a much stricter assumption than the alternative, that some or all operations are stateless. The benefit of such stricter confinement is a certain convenience. For example, because the order of instructions is meaningful and can convey ideas, context can take hold, and local ideas can be focused on (rather than carrying the whole state around). In the real world (compared to the mathematical world), most actions affect vast amounts of state that are impossible to quantify, and we (as an adaptive species) are well adapted to stateful imperative thinking. How do you perform rigorous logical thinking when the inputs (states) are never complete?
In computer programming, the state can always be fully described. However, if you always want to fully describe your state, you will either restrict your functions and programs to limited state (input/output), or you will be constantly writing cumbersome functions/programs with long descriptions of all their state. The solution is to share and imply some of that state in the types -- monads, for example. So yes, the monad is not strictly a necessity but a convenience. Since Haskell never makes assumptions about hidden state, merely imitating an imperative style is still not imperative programming, because the state is still not assumed, but rigorously specified and carried. So even with a seemingly similar code style, the cognitive process in Haskell and in a conventional imperative language is very different.
Which is better? It depends. If you hold the view that everything a computer solves is a mathematical problem, or when you are solving mathematical problems, certainly you will think only Haskell's approach makes sense. On the other hand, if you understand that computers are merely a tool for solving real-world problems, then you are often not looking for mathematically correct answers but for practically good-enough solutions. Practically good-enough solutions allow room for shortcuts or undefined behavior. Embracing these uncertainties (a shock to mathematicians) allows for efficient means. In the latter context, a pure mathematical restriction can be a burden rather than a help. The latter approach, in fact, carries much more complexity. Take calculating pi to the millionth digit. In the functional approach, it is solving a mathematical problem, and a correct answer is what you are looking for. In the imperative approach, memory cost, speed, and various shortcuts are all part of the consideration and part of the solution. It is not always just the answer that matters; how you get the answer also matters. But do they really matter? It depends.
Real programming is often a mixture of both -- in C or in Haskell. C defaults to the imperative style, with explicit statelessness when needed. Haskell defaults to stateless, with explicit statefulness when necessary. What is trivial in one is difficult in the other.
Why is it that people talk about this almost as if it's a virtue of the language? As if the fact that it's so inscrutable proves that it's valuable, different, and on a higher plane of computing.
And it really confirms my biases against the language.
Probably the best single line description of Haskell.
The learning curve belongs to the former (easy) part; the latter (hard) part contains some really brilliant ideas, where you start to wonder why so many people still use other languages for everything... and then you remember how hard the easy stuff can be.
> Haskell definitely does not have the most advanced type system (not even close if you count research languages) but out of all languages that are actually used in production Haskell is probably at the top.
What are these other research languages, that have such incredible type systems? Do they usually have implemented compilers, or would they only be described in an abstract form? Can I explore them for fun and curiosity?
Some reasons I remember for the various failures, in no particular order:
- steep learning curve = programmers experienced in other languages having a tough time, being unproductive for weeks/months in the new language, with no clear payoff for the kind of projects they're working on
- sometimes/often side effects/state/global vars/hackiness are what I want, because I'm experimenting with something in the code; and if I'm not sure this code will be around in 3 months, I want to leave the mess in and not refactor it
- in general, I think all-the-way-pure, no side effects is too much; I think J. Carmack said something along the lines of: Haskell has good ideas which should be imported into more generally useful languages like C++ etc., e.g. the game state in an FPS game should be write-once, as it makes the engine/architecture easier to understand (but in general the language should support side effects)
- I found the type system to be cumbersome: I kept not being able to model things the way I wanted to and running into annoyances; I find classes/objects/templates etc from the C++/Java/Python/whatever world to be more useful for modeling applications
- when the spec of the system keeps changing (= the norm in lean/continuous-delivery environments), it's cumbersome/not practical to keep updating the types and dealing with the cascading effects
- weird "bugs" due to how the runtime evaluates the program (usually around laziness/lists), leading to memory leaks; when I was chasing these issues I always felt like I was wasting my time trying to convince the GHC runtime to do X, which would be trivial in an imperative language where I just write the program to do X and I'm done
- cryptic ghc compile errors regarding types (granted, this is similar in C++ with templates and STL..)
- "if it compiles it's good" => a fallacy we kept running into
- the type system seemed not a good fit for certain common use cases, like parsing dynamic/messy things such as JSON
Working at Facebook for the last year and seeing the PHP/Hack codebase which powers this incredibly successful product/company has further eroded my interest in Haskell: Facebook's slow transition from PHP to Hack (= win) shows that some level of strictness/typing/etc. is important, but it's pointless to overdo the purity. Just pick something which is good enough, make sure it has outstanding all-around tooling, have good code review, and then focus on the product you're building, not the language.
I'm not saying Haskell is shit, I just don't care about it anymore. I'm happy if people get good use out of it; clearly there are problem spaces that are compact and well-defined, and where correctness is super-important (like parsing).
Interesting. I find that one of the best parts of Haskell. If I expose the assumptions I've relied on in the types, then when those assumptions break I have tremendous help knowing what code must change to deal with it.
1000x changes in performance is not a problem if:
1. Performance of one module is not overly dependent on the code that uses it.
2. Performance never degrades by an order of magnitude with new compilers.
A scientific mindset as well as liberalism are also ideas where new proponents often want to draw a line in the sand to stratify people into superior and inferior. The original proponents were chasing a higher level of quality for all, but the need for social stratification weaponizes and gates ideas.
The author of the article is sharing his experience with Haskell, explaining ups and downs. He concludes by
> Haskell is a great programming language. It requires some effort at the beginning, but you get to learn a very different way of thinking about your problems.
Are you interpreting the word "different" as "better"?
Thank you, that is all I need to know about Haskell. I won't be learning Haskell then, in the same way that I won't have anything to do with C++. I don't have enough time to use these fashion-dominated and fad-obsessed programming languages.
To be honest, having been a Haskeller since ~2010 (and a C++ guy since long before that) I’m not even really sure what specifically the author is referring to there.
Should one dive in given those quirks? The decision is up to the reader.
Curious, what did you expect/wish for?
EDITED for better phrasing.
Clickbait titles are so common we think they're normal titles... Here's how he did it: you create a craving for an answer, then you offer a solution for that craving. "here’s how it was" ==> that's the trick.
Also, "here’s how" should never be used in a title; we all know that the title's subject IS what you're going to talk about.
A pre-clickbait-era title would have sounded like: "Learnings after using Haskell for 5 years"
This title is on point: The author used Haskell for several years, and found there was a learning curve, some helpful libraries, some poorly documented libraries, lots of historical baggage, unclear performance consequences, and uncommon correlations (good and bad) between task and difficulty.
Granted, once you click you find out that's not why it's titled that, but you still feel baited into clicking to find out. A better title would be something like "I tried Haskell for 5 years and here's the good and the bad", or something like that.
> It's clickbaity because you have to click to find out if their opinion is highly positive or negative.
I think you just stated the purpose of a title. Wiki says clickbait is:
> "... relying on sensationalist headlines or eye-catching thumbnail pictures "
There's nothing sensationalist about "I tried Haskell for 5 years". It's rather banal.
The other day, I saw some Medium post titled "Functional programming will make you happier". I was furious and about to blast some diatribe on the increasing infantile-inanity-inflation in headlines, but looking at the post, the author seemed genuine in taking the trouble to work out their thoughts for the reader, so I refrained. Tilting at windmills: increasingly, people simply grow up with these sorts of titles and develop a primal instinct for what gets attention and clicks --- aka, reads.