Major adoption of non-hybrid languages doesn't seem like something that will happen any time soon, though, and it sure seems like the FP community wants it that way. We just sent one of our people to Lambdajam, and in his five-minute summary of what he learned, his impression was that plenty of Haskell people there seemed to barely tolerate any talk about Scala at the conference. Badmouthing hybrid languages is the order of the day for extremely prominent members of the FP community, even if they end up using said languages in their presentations.
So I suspect that the future of FP is to be influential, for most of the best bits to be copied by language makers that build communities that care about marketing, and Haskell and the like will remain used only by people that just think there's no way to write any useful program without first learning all about category theory.
If that were Haskell's problem, its effects would be unique to Haskell. Rather, Haskell, like Erlang, Lisp, Eiffel, Smalltalk, and, in its early days -- and it had to fight to overcome this -- Java: these all have or had the same problem, which is that they don't integrate well with the "outside world". As a result, a great deal of effort is expended in the developer communities surrounding these languages to reinvent the wheel, axle, spoke, tire, hubcap, and spinner, which prevents said developers from making things that are truly interesting.
Languages which work with the "outside world" can be used to develop great software even if they are not popular, like Lua and CoffeeScript. Go, Swift, Scala and Rust learned this. The most obvious point of contrast, though, is how quickly Clojure became more popular than Scheme/Racket, despite being basically the same thing.
After using Clojure for a while, all this cons, car, cdr stuff of Scheme feels like being stuck in implementation detail.
With the Scheme community insisting on (1) recursion as the primary means of looping and (2) no general polymorphism in the language standard, I see no future in Scheme.
(1) - I have seen the light with Clojure's `for` macro. While it is not much more than a `map` (or a Python/Haskell list comprehension), it's exactly the kind of sugar that is really readable, and a good stepping stone for beginners on the way up to map, fold, and other higher-order functions.
(2) - Lists have `car` and `cdr`; lazy lists in Scheme have `stream-car` and `stream-cdr`. That means I cannot reuse an algorithm that works on lists with lazy lists. Even some lightweight dispatch here would make Scheme so much more powerful and useful.
So: while Clojure is not much more than Scheme, it demonstrates how a little more can make a huge difference in usability and adoption.
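Clojure gets this from its single seq abstraction: `first`/`rest` work on vectors, lists, and lazy sequences alike. As a rough analogy in Python (all names here are mine, purely for illustration), one iteration protocol lets the same function consume eager and lazy sequences:

```python
def take(n, xs):
    """Return up to the first n items of any iterable, eager or lazy."""
    out = []
    it = iter(xs)          # one protocol covers lists, generators, etc.
    for _ in range(n):
        try:
            out.append(next(it))
        except StopIteration:
            break
    return out

def naturals():
    """An infinite lazy sequence, like a Scheme stream."""
    i = 0
    while True:
        yield i
        i += 1

# The same function works on an eager list and a lazy stream:
print(take(3, [10, 20, 30, 40]))  # [10, 20, 30]
print(take(3, naturals()))        # [0, 1, 2]
```

No `stream-take` needed: the dispatch lives in the iteration protocol, not in the function name.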
PS: I really like Scheme as a small, simple language that gives you a good idea how an interpreter might work under the hood.
Also, in what way does Swift work with the outside world? It seems to be locked in pretty tightly in its own little world?
Lua's C API allows Lua programs to bind to any C library. It's probably better at this than anything else out there, and there have been some attempts at really ambitious things like automatically generated Qt bindings: https://github.com/mkottman/lqt
>in what way does Swift work with the outside world?
Swift uses the Objective-C runtime. It interoperates with Objective-C very well and is designed to do so; by definition this means it also interoperates with C quite well. See:
Oh please, you just devalued your entire comment with that untrue and derisive remark. Haskell is used by people who highly value composability, purity, and correctness, and who believe mutable state should have a big red flag on it and be limited as much as possible.
And it's still not related to the GP's points about badmouthing other languages or being unmarketable.
Nothing I said was about "purity, composability, correctness, and avoidance of mutability" or why that's a bad thing; it was all about badmouthing other languages and being unmarketable.
I don't know about you, but I'd rather do real math (or at least take a stab at it) in Scala than write more routing web frameworks in Haskell. Ideally, I would be writing it in Haskell (or Idris ;)) but the lack of willingness to cross over to the other side and interact with "normal" programmers really hurts us, and I'm honestly a little fed up with the fundamentalism. There are really smart folks out there with a lot of domain experience that we alienate when we blow off stepping stone languages like Scala.
If they didn't, you'd probably not call them "the languages crowd" and we'd never get any new inspirational theories and principles out of them.
- Scala's type system feels at the same time more complex and less powerful. It has worse type inference, for example.
- Being a hybrid language, Scala offers fewer guarantees than a pure FP language. For example, its type system isn't adequate to express and constrain side effects.
- Scala seems more verbose and less elegant than Haskell to me.
- Compatibility with Java means Scala has some warts like allowing nulls. Yes, you have the Option type. But something you expect to be of type Option can still be null due to carelessness!
- Some libraries used by the Scala community, such as Akka's Actors, seem to eschew the static type system that's the raison d'être for using something like Scala (or Haskell); they feel a lot like dynamic typing, except without the simplicity of dynamic typing.
- Lastly, the highly opinionated "Mostly functional programming doesn't work" 
For instance, Scala's OO/module system is one of the best ones out there, but is readily dismissed by Haskell people, despite modularity being a complete train-wreck in Haskell.
I think there's a good argument, albeit one I'm not qualified to make, about whether ML modules or Scala objects are better.
It’s a conference about functional programming. If they believe that functional languages are better for software development than imperative languages, then they may see hybrid languages as making unnecessary and detrimental concessions.
> Haskell and the like will remain used only by people that just think there's no way to write any useful program without first learning all about category theory.
You’re repeating the false meme that I/O in Haskell is difficult or theoretical, and furthermore implying that learning is a bad thing.
If that kind of extremism was applied in discussion of other paradigms, we'd be excluding mention of "hybrid languages" like C++ from discussions of Object Oriented Programming.
If I had mentioned exclusion, yes. C++ is mentioned at OOP conferences like ECOOP and OOPSLA as often as Scala is mentioned at FP conferences like ICFP, CUFP, and Lambdahack. But I have little doubt that a strawman proponent of a pure-OOP language such as Smalltalk—or Self or Ruby or Java or whichever language embodies the definition of OOP you prefer—would be as miffed about the use of C++ in lieu of their preferred language as hibikir’s friend’s strawman Haskeller is about Scala.
And anecdotally, the top C++ developers I’ve worked with didn’t have anything good to say about object-oriented programming anyway.
Which makes sense. C++'s support for OOP is pretty painful. It would be like hating functional programming because of being exposed to it in APL.
You provided an explanation for why people "barely tolerated" (i.e., preferred to exclude) mention of hybrid languages at FP conferences.
It is hard to get a good grasp of OO in C++ if you don't experience pure OO languages, at least for some time.
Multi-paradigm languages like C++ are great, but that power can work against learning a specific paradigm properly.
Good program design, regardless of paradigm, requires experience and the ability to use tools that expose the paradigm's best practices.
Those hybrid languages (e.g. Python, F#, and Scala) are great for new development but awful when you have to maintain low-quality legacy code. That said... I think that observation applies far beyond functional programming. (It's rare, but certainly possible, for bad Haskell code to exist.) A common impetus behind language churn is the programmer's desire to justify green-field development (which is fun, even in C++) instead of career-damning legacy maintenance (which is unpleasant, even in good languages).
Haskell and the like will remain used only by people that just think there's no way to write any useful program without first learning all about category theory.
This is an exaggeration of that prejudice. I do think that the Haskell community has failed (but is it a failure or a deliberate choice not-to?) to package a not-that-hard language as something average programmers can understand. Functor, monoid, and monad sound a lot scarier than they actually are.
Not to say that Python is a bad language, but having continuously tried to program in a functional style I've found that more iterative-seeming code ends up being more idiomatic and maintainable.
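A small illustration of what I mean, sketched in Python (the example and names are mine):

```python
words = ["foo", "bar", "baz", "qux"]

# Functional style: map/filter with a lambda
caps_fn = list(map(str.upper, filter(lambda w: w.startswith("b"), words)))

# Idiomatic Python: the comprehension reads more directly
caps_py = [w.upper() for w in words if w.startswith("b")]

assert caps_fn == caps_py == ["BAR", "BAZ"]
```

Both are "functional" in spirit, but the second is what the community actually considers maintainable Python.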
> I do think that the Haskell community has failed (but is it a failure or a deliberate choice not-to?) to package a not-that-hard language as something average programmers can understand.
That is extremely true. Even in widely-used packages, you'll rarely find good official documentation (quickstarts or the like).
Sure, for a lot of packages, types are good enough (if you have a package to read a CSV file, all you need to do is find the CSVConf -> Handle -> [[String]] function), but with the bigger, more framework-y packages, there are so many types internal to the system that you _have_ to read through everything to do even simple things.
Too much time is spent trying to explain how the language (which ends up being simple) works and not enough time on practical guides on how to actually do things.
This has become a bit of a rant, but in Math (the kind most people go through in middle school), you always learn specific cases before going into general theory. Unfortunately, a lot of Haskellers end up being mathematicians of the PhD style, where they're fine with just going into general theory. Examples are useful!
When is it not awful to maintain low-quality legacy code?
Well, the unofficial motto of the Haskell community is "avoid success at all costs" after all :) 
Of course if you know your language very well, it is much easier for YOU to be working with it. Just saying it has this or that feature is great, but does not really tell us why it would be costly to do similar things in other languages.
ML and Haskell: ubiquitous use of higher-order functions throughout the standard library; concise syntax for lambdas; bi-directional type inference with principal typing; named algebraic data types including sums, products, exponentials, recursion, and universal and existential quantification; ubiquitous purity; generalization at higher-kinded types; ubiquitous immutability; effect typing.
ML alone: true modules; module functors; structural equality in typing (polymorphic variants); great metaprogramming (camlp4)
Haskell alone: bounded polymorphism (typeclasses), completely ubiquitous purity/completely ubiquitous effect typing; completely ubiquitous immutability; lazy evaluation; a standard library including many high-level higher-kinded type generalizations (functor, monad, applicative, traversable, foldable, category, arrow, anything else you can think up); ok metaprogramming (template haskell); programmer controlled rewrite rules, results in list fusion/vector fusion
I list more under Haskell partly because I'm just more familiar with it, but also because it takes more things further and is thus more differentiated.
I've been impressed by the Mirage blog posts. If I were going to pick a single example with the biggest size/complexity win over an imperative language, the ASN.1 Combinators post would be the example.
The general problem with the good examples is that they're deeply tied into a different school of thought. If you don't have a general idea of what's going on, the example is completely incomprehensible.
The only functional language advocacy site I know of that's really attempting to bridge this gap is "F# for fun and profit".
It shows how F# is a much more suitable language than C# for Domain-Driven Design (a popular design approach for boring database software which _originated_ in the C# world). It makes a very compelling case that you're just making life hard for yourself if you do domain modelling in C# instead of a functional language.
The fact Java has these features does not mean they are not FP :)
The above is how Haskell was introduced when I took it in CS at Syracuse University.
The practicalities of Haskell as a useful language are realized through monads. Using monads to deal with side effects and the impure world automatically makes Haskell "non-pure".
With the existence of monads, the purity of the language is a matter of opinion or degree, since it is largely based on "idiomatic" usage.
To say Haskell is an impure language because of monads would be akin to saying C# is an unsafe language because it allows pointer management through the "unsafe" keyword.
I guess I would have grasped your point earlier if Haskell had a keyword "IMPURE", just like C# has the keyword "unsafe".
Come to think of it, doesn't "pure" in the context of PLs mean much the same as "safe"?
Haskell requires this because, let's be frank, it's really hard to understand. I can hear the howls of anger, but if you ask the "man in the street", will he do a better job explaining the above Haskell sans comments, or a similar pretty printer written in VB? I have a CS background and studied FP and type theory, but I am more like the man in the street.
What I want from a programming language is an executable description of what I want to achieve. I want that description to have all the good qualities sought in FP, but more importantly I want it to flow into and out of my brain (and my colleagues' brains) like butter, not like a sudoku puzzle trapped in a type theory proof and then Huffman encoded (because descriptive names are the preserve of all those "terrible Java programmers").
In the real world, I need to write code that makes fast incremental changes to large interconnected data-structures e.g. graphs and trees with indexes and all sorts. I cannot begin to imagine how to solve half my problems with immutable data-structures. With what I know at the moment, it would be a massive obstacle to the real problems I need to solve.
I think you make a valid point. Many comparisons of programming language productivity only try to measure how fast you can code a new application, not how fast you can modify an existing application to do something else - ASSUMING you are not the author of the original version.
It's weird, the first time I read this I was excited to read the source of this quote. When I finished reading I scrolled up to the top and was disappointed to see a citation missing. Does anyone know what experiment he's referring to?
Feels bad, man.
I read a quote once about The Velvet Underground that stuck with me. It said that, while not many people bought their albums, everyone who did started a band.
Sometimes it seems like Haskell is The Velvet Underground of programming languages. While not many people are using it, everyone who is has started their own language or functional library. Influencing other languages is, in a way, people using FP languages, just indirectly.
I've not written a single serious Haskell program -- I don't think I would even know how to, tbh -- but I sure feel like just playing with it and learning about its theoretical motivations has made me a much better programmer in other languages.
That is certainly true, but I think most people will not switch to functional languages; they will stick with their favorite imperative language and program in a functional style there, as more and more imperative languages add the features and facilities that make it possible.
Case in point: I have a Qt/C++ library that evaluates lambdas in a lazy fashion.
You can have, for example, a function that returns what is called "a future", i.e. something that can be evaluated at a later time to produce a value "held" in the future, or the future can be cancelled if the caller decides they no longer need the result.
The introduction of lambdas in C++11 allows this kind of programming to happen in C++, and I suspect more and more C++ programmers will start coding in this style as more APIs are released that take lambdas as arguments or return them as results.
It is an interesting new way of doing C++, and these ideas are mostly borrowed from the functional programming world.
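To sketch the idea (this is not the actual Qt/C++ API, just a minimal analogy in Python with names I made up): a future wraps a lambda, evaluates it only on demand, and can be cancelled before evaluation:

```python
class Future:
    """A lazily evaluated, cancellable computation (illustrative sketch)."""
    def __init__(self, thunk):
        self._thunk = thunk       # nothing is computed at construction time
        self._cancelled = False
        self._done = False
        self._value = None

    def cancel(self):
        self._cancelled = True

    def get(self):
        if self._cancelled:
            raise RuntimeError("future was cancelled")
        if not self._done:
            self._value = self._thunk()  # evaluated only on first demand
            self._done = True
        return self._value

f = Future(lambda: 2 + 2)  # the lambda has not run yet
print(f.get())             # 4
```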
And possibly a country.
I would and I believe it started to happen when people started talking about Scala.
I also think something similar happened with OOP in (maybe?) the early 1990s - i.e. we'd had Simula, Smalltalk and so on for years but it became mainstream. I'm thinking of things like Visual Basic 1.0 (1991), C++ v2.0 (1989)
All major languages are going hybrid, mixing imperative (mostly OOP) with functional programming concepts.
Until we get a complete new computer architecture, like quantum computers, the mainstream will be hybrid OOP + FP.
Nowadays people see the value of recursive thinking, and hence the value of functional programming. The convergence of many technology trends seems to suggest that the idea of "The program can be the data, and the data can be the program" is no longer so far fetched.
Took me about a week to be productive in Scala in a functional style. Haskell is a little more complex, but I found that throwing myself in the deep end, while frustrating, was not overly difficult.
The cost of switching to FP only seems high because people don't want to even bother trying. Once you've gotten over the initial bump, FP is (in my opinion) substantially easier to write code in.
Number of characters in your post: 139
Number of characters on Dvorak home row: 83
Number of characters on QWERTY home row: 41
A solid third of those are 'a', the home-row letter they have in common. Also, my original motivation in learning Dvorak was to stop myself from touch-typing (I repeatedly tried and failed on QWERTY). It was quite a success in that regard.
Incidentally, I do agree with the earlier post saying that it wasn't worth it. But I also strongly believe that it was an improvement (there's a reason why I don't switch back), just not an improvement large enough to justify the effort (which was much greater than expected).
"The carefully controlled study failed to show any benefit to the Dvorak keyboard layout in typing or training speed"
Note that this is for English, which means Dvorak fails to deliver even for what it was designed for.
As I mentioned above, Dvorak's performance and strain on your hand is even worse when you are doing something else than typing English (programming, typing another language, writing spreadsheets, etc...).
I don't take issue with the tests showing that Dvorak doesn't have speed benefits. My anecdata confirms this claim at the skill level which is relevant to me: my Dvorak typing speed, which I have made no effort to improve, is roughly the same as my QWERTY typing speed, which I also made no effort to improve.
However, Dvorak dramatically (factor of 2-3) decreases the fingertip slew distance which is monotonic in the distance your tendons will have to travel in your carpal tunnels for any given piece of typing. The difference isn't remotely subtle because it's easy to feel the tendons moving in your hand if you pay attention. In practice, it's the difference between tingles and numbness after 5 pages vs 15 pages of typing. So I do take issue with your claim that Dvorak is bad for carpal tunnel.
Your original post mentioned carpal tunnel and did not mention typing speed so I think it's more than fair to ask you to elaborate specifically on your claim regarding carpal tunnel.
I didn't learn Dvorak to increase my typing speed - I learned because I was developing RSI in my right hand and put it down to QWERTY (and the mouse). The benefits of Dvorak are pretty clear when typing up large texts, although pain is still present after a few hours of typing. I found ergonomic keyboards to be a better solution than changing keyboard layout for dealing with RSI, though, and I typically use QWERTY these days because configuring applications' keybindings for Dvorak is too awkward, and the gains are too small.
I've been considering learning Workman or QGMLWY, as their supposed benefits are even greater than Dvorak, and they have CUA-shortcut friendly layouts.
There's hardly any information about Programmer's Dvorak, by the way, not even a Wikipedia page.
"This car sucks! It can't even reach freeway speeds!"
"If you would shift out of first gear, you could reach freeway speeds."
"We weren't talking about higher gears!"
> There's hardly any information about Programmer's Dvorak
It's a more recent layout, yes, but it has achieved decent enough penetration that I would expect anyone seriously contemplating the switch after 2010 or so to be aware of it. Anyone who looked at Stack Overflow opinions on Dvorak, selected a Dvorak layout on Linux, or googled Dvorak and programming in conjunction would have run across it.
You still haven't substantiated your claim regarding carpal tunnel.
(On Linux, it's a simple xmodmap configuration in /etc/X11. I even made a Windows equivalent: an executable that can be installed/removed like any other program.)
With so many keyboard customization utilities available, I'm surprised more folks don't optimize the keyboard for their own use. Kinda like building one's own lightsaber...
I live mostly in emacs, so it tends not to be an issue, since I reconfigure nearly every shortcut anyway (default emacs shortcuts are, IMO, terrible on modern keyboards). There are extensions to configure this for Firefox too (because it's a pain to do manually via about:config).
The lack of ability to configure key-bindings explicitly is perhaps one of my biggest gripes with "modern" software design - which has the "do it one way only" philosophy, and forces the user to adapt to the software, rather than adapt it to their needs. Firefox for example, gets worse with every iteration - and it's not like I can revert back to using Opera, since they gimped that too by turning it into another Chrome clone.
I don't want to go into too much detail, but a .NET financial management client and web application. Nothing fancy by any means. Let's also just say I don't have very many options where I live, and I'm geographically restricted because I don't believe in sacrificing watching my daughter grow up. I'm interested in remote work, but that hasn't been very easy to look into. I'm in that mystical land between mid-level and senior, and most remote jobs seem to want senior in both skills and time. I'd even consider getting away from .NET entirely and professionally pursuing a new stack, but those opportunities haven't been available at all when looking at remote positions (and I completely understand why).
I program in Python, and programmed in Perl for years before that. I regularly use the functional features of these languages, but I would have no idea how to architect a reasonably large application using only FP, certainly not to the extent I could using OOP / procedural style.
"Immutability is stupid, because if your program is immutable it can't do anything."
"You just end up writing everything in an IO Monad anyway."
"Functions are good for math, but not for coding."
FP isn't the only thing in this category - like many people on HN I learned OOP a long time ago, but I remember it being similarly earth shattering, and I went through a period of over-enthusiasm and evangelism for Java and GoF design patterns just like the one I went through when I first learned Haskell. The code-is-data realization that you have when you learn a Lisp is similarly powerful.
I hope I experience more revelations like these! These moments where you learn a whole new way of thinking are the purest joy programming has given me.
FP means additional levels of indirection, which makes it less efficient than imperative approaches. If FP really were more productive than imperative languages, it would have been adopted by the industry long ago.
Functional programming has a couple of problems, when it comes to uptake. The first is that you only get 40% of the benefit if you go 90% of the way. (Static typing is the same way, which is why I prefer Clojure's typelessness over the mediocre static typing of Java or C++ or Go.) In that case, the "impure" 10% still causes, if not realized complexity, the potential for complexity that infuriates maintainers.
If you go 90% of the way on FP and get 40% of the benefits, that still means you're writing code faster and generally producing less of it, and those are good things. However, there's a nasty duality in programming which is that saving time on the fun stuff means more time is spent on the un-fun stuff. (If you make writing of code 4 times faster and debugging of nasty interface issues 2 times faster, you spend a larger proportion of your time on the latter.) Unless you can eliminate whole classes of un-fun-ness (e.g. categories of bugs) it's often not worth it. Eliminating 90% of the un-fun-ness just means you're spending more time on that remaining 10%.
For example, Python can be used as a mostly functional language and thereby go 90% of the way (conceptually) to FP, but it isn't. The community has judged it to be not worth it. Scala should be used for FP, but there are plenty of Java++ programmers who still use null instead of Option[T]. Sadly, it doesn't take much of this dysfunctional programming to emasculate FP and leave an unbiased person asking, "Why bother?" It might not even be the right way to go, for a maintenance project. Mixing two styles is going to make it more illegible than using the existing (if suboptimal) style for new changes and work.
The career issues noted are also huge. The average Clojure or Haskell developer makes slightly more than the average Java developer, but if you control for skill and compare at-level, the Java or C++ developer wins. A 1.8 Java developer (scale here: http://michaelochurch.wordpress.com/2013/04/22/gervais-macle...) makes $175,000 and turns down invitations into VP-level positions at major corporations on a regular basis, because 1.6+ Java developers are just so rare. A 1.8 C++ developer can make $500,000 per year at a hedge fund. The 1.8 Haskell or Clojure developer makes about $125,000 on average, which is not bad but hardly stratospheric. This tends toward a self-perpetuating exaggeration of historical discrepancies in language popularity. Even if the average Haskell job is better, Haskell jobs (at all) are much harder to get, and Haskell jobs that pay as well as even an upper-middle Java job are extremely rare. The market is just stronger for Java people, so that keeps people wanting to use it, which keeps the majority of companies preferring it over better languages because they fear maintenance risk more than low productivity.
I'm less charitable than Wadler on the "they don't get it" angle. Certainly there are smart people who "get" functional programming and have good reasons not to use it. Still, I do think that a contributing factor to FP's lack of uptake is the anti-intellectualism of the programming world.
It's not that anti-intellectuals have some natural inclination toward Java or C++. The anti-intellectuals have the general attitude that language doesn't matter. And then they pick Java for some risk-averse, enterprisey shitfuck reason like maintenance risk or lack of an available developer pool, both of which are just business stinginess ("we don't want to train people"). What makes them vile is not their tastes in languages (there is a lack of taste in the Java community but, even still, there are cases where Java is the absolute right language to use) but the fact that they don't take programmers' concerns (such as tooling choices) seriously at all, and prioritize manager-type concerns over the needs of the people actually doing the fucking work.
There are people in finance who work 13-hour days, and there are people who go home at 5:00. There are definitely more of the former kind in finance than elsewhere, but it doesn't seem to help your career much to put in the ridiculous long hours. Sure, you won't advance if you always leave at 5:00. You have to put in the hours when it counts, but it's maybe one or two weeks per year.
Quants average 9 to 7, but a third of that time is the research and exploratory work they'd do anyway. It's stuff that tech people do at home, off the clock. The job involves a lot of research-- reading papers, attending conferences, keeping up with tech-- and the savvy ones do it on the clock. Programmers work fewer hours but have a harder time getting to learn on the job.
There are some nightmare hedge funds with long hours and bad cultures, but the good quants stay the fuck away from those.
This attitude should not be discounted. You'll find few stronger proponents of strong type systems than me.
I recently started working with a client who wanted to build a Haskell system in order to do some complex financial modelling. I told them to forget Haskell and build a Python/Django app that will handle 90% of their customers. The only significant bug was comparing a backward discounted quantity to a forward discounted quantity. Haskell wouldn't solve that.
The development effort saved due to not rewriting built-in django apps vastly outweighed any productivity benefit we'd have gotten from using Haskell.
Anyway, a couple of years later he got a grant to continue with the project. It paid junior-level wages, and he couldn't get a Python programmer to work for that (he asked me, amongst others). PHP would probably have opened up more options in that respect, even though it is considered a poorer choice of language.
It could, depending on how you code. For that matter, so could C.
That's not anti-intellectualism. It's practicality. The "right tool for the job" isn't always the same language. Often it's Python, because the libraries are mature. Machine learning is one area where this is often true. Python itself doesn't seem like it should be a leading ML language, but it's far ahead of the competition in terms of library support (e.g. Numpy, Scipy, Pandas, et al).
Plus ça change, plus c'est la même chose:
At least in the Fortune 500 consulting world, where the customer dictates the tooling, the best we can hope for is C# + LINQ, Java 8, and C++11.
Scala, F#, Clojure and all the other ones, only at home projects.
At least on our client portfolio.
- Immutability. You have to minimize state in whatever program you write, be it language-enforced or not.
- Purity: same story.
- Traits. I can use compile-time duck-typed interfaces in C++ or D that achieve the same thing as, e.g., Caml modules.
- Monads: if I understand correctly, these could be done with compile-time interfaces in many languages; that doesn't mean we want to.
- Getting access to the GPU power by using some magic library. I'll be honest: I don't believe it.
- Laziness: never needed it, I think SPJ said the next Haskell would be strict.
- Better type inference. This was already nice in OCaml. It doesn't bring anything for maintenance work - types are _nice to read_ - and inference inside functions is done in "lesser" languages.
- exhaustive match: Very nice. Other languages have equivalents.
- deconstructed match: I think it can lead to abuse. I never need to match the first three elements of a list.
- Closures: every language has them now. C++ even allows manipulating the capture as a first-class value. And guess what? In practice closures are less readable than their simpler sibling, the Object.
- Tuple syntax: I think it's considered cool, but it's not readable. Aggregates have names; tuples have "first element", "2nd element" - no semantic meaning.
- named tuple syntax: available with other language aggregate literals most of the time.
- custom operators: considered a liability; I don't want to learn to read your super-duper (+=+) operator.
- true parametric polymorphism: implies a uniform representation, which implies reduced performance, which implies not being a general-purpose language. If you want to be used for any program, _be fast_ or go home.
- forced to write good code. No no no I want to be _able_ to crank out stinking code when I want, so that I can automate something in the first place given a time budget.
AFAIK Haskell has no story for custom memory allocation or explicit SIMD intrinsics, and it uses two different string types, one of which is a list of Char. A list. Why a list? This is never what you need in high-performance programs.
There is also a perception that "the compiler will make my program fast because $compiler_guy is super smart" which is terribly junior, and this hipster community tells me I'm dumb for not using Haskell.
Also wtf with the preprocessor? Really?
Am I not "getting FP"?
I think that's the case. Haskell never claimed it would save your soul.
Quickly and in particular:
Immutability/purity: of course, but being more explicit is beneficial and challenging, so the compiler helps
Traits: if you mean typeclasses then they are much more than merely traits
Monads: I don't think you understand correctly, sorry
GPU: You don't have to believe it. It exists. Embedded DSLs and cross-compilation are very common in Haskell.
Laziness: it's not a needed thing at all; it increases composability. SPJ has asked that question, but the jury is still very far out.
Closures: Let's not argue readability, it just makes everyone look bad. Instead, I agree that Objects and anonymous functions are similar. The point is not to have them, it's to have a culture which really does use them everywhere that's appropriate.
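The closure/object similarity being argued here can be sketched in a few lines (a minimal Python illustration; the counter example and names are mine, not from the thread):

```python
# A closure and an object carrying the same mutable state:
# the two forms are largely interchangeable.

def make_counter_closure(start=0):
    count = start
    def increment():
        nonlocal count
        count += 1
        return count
    return increment

class CounterObject:
    def __init__(self, start=0):
        self.count = start
    def increment(self):
        self.count += 1
        return self.count

tick = make_counter_closure()
counter = CounterObject()
print(tick(), tick())                            # 1 2
print(counter.increment(), counter.increment())  # 1 2
```

Which form reads better is a matter of taste and culture, which is precisely the disagreement above.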
Syntax: not worth arguing about. If you dislike it, you dislike it.
True parametric polymorphism: I have no idea what you're talking about here. Parametricity leads to much more program correctness and specialization recovers speed. Further, parametricity opens up compiler-enforced behavior leading to fusion which dramatically improves speed.
Forced to write good code: you really aren't, you're just forced to mark it as bad code (i.e. live in IO)
GHC has custom memory allocation, will probably have SIMD by the next version, and has three different string types. Each has its own particular use, and forcing them to be the same would lead to inefficiencies or incorrectness all around.
Using CPP is annoying, no doubt. It'd be great if that were different.
Of course I believe it exists; I just don't believe it can achieve top performance (which I happen to need in $dayjob).
> Let's not argue readability, it just makes everyone look bad.
??? It's still the most important thing for a programming language. Most of our problems are because programs are hard to understand and hard to change.
> Forced to write good code: you really aren't, you're just forced to mark it as bad code (i.e. live in IO)
If I quickly need an ugly script for splitting a big binary file into parts for people who went to a trade show (yes, a true example), I don't have time to type "IO" or even think about it. The language won't help me achieve anything in that case.
> True parametric polymorphism: I have no idea what you're talking about here. Parametricity leads to much more program correctness and specialization recovers speed. Further, parametricity opens up compiler-enforced behavior leading to fusion which dramatically improves speed.
Unboxing/specialization needs access to the whole source code, like any whole-program analysis. So what I get is that, if you are lucky, the compiler will recover the speed that was lost in the first place. But you can't be sure.
> Each has their own particular use and forcing them to be the same would lead to inefficiencies or incorrectness all around.
It's still an impediment to have three of them. Many languages have one, period.
Your problem with primitive types is valid, but I've always seen this easily handled by optimizing inner loops. It's ugly, but so is all low-level optimized code.
And many languages trade off between uniform access, byte-string representation, and Unicode. Haskell just splits each use case into its own type.
OK. I guess if there were a way for the optimizing backend to report which unboxings failed, that would satisfy me. For us native programmers, having tagged pointers that are often optimized away, without really knowing when, is a concern.
It might well be irrational terror, like the GC fear.
So FP language proponents need to explain more about why and how features like GC/uniform representation/whatever are not a speed impediment (imho).
> And many languages trade off between uniform access, byte-string representation, and Unicode. Haskell just splits each use case into its own type.
True and this can indeed be contentious where this isn't done.
> "Laziness has lots of advantages, including modularity. These days, the strict/lazy decision isn’t a straight either/or choice. For example, a lazy language has ways of stating ‘use call by value here’, and even if you were to say ‘oh, the language should be call by value strict’ – which is the opposite of lazy - you’d want ways to achieve laziness anyway.
> Any successor language to Haskell will have support for both strict and lazy functions. So the question then is: what’s the default, and how easy is it to get to these things?
> How do you mix them together? But on balance yes, I’m definitely very happy with using the lazy approach, as that’s what made Haskell what it is and kept it pure."
I'd say he is arguing that keeping laziness as the default was the right choice...
> - Purity: same story.
I agree you can write decent code in any language, but the point is which guarantees your language gives you about the code other people wrote. Which is most of the code, usually. Your average imperative language gives you far fewer guarantees than Haskell. Even with your own code, you can achieve purity and immutability in Java or C++ if you want to, but you are mostly fighting against the language.
> - Laziness: never needed it, I think SPJ said the next Haskell would be strict.
On the other hand, Hughes himself in "Why Functional Programming Matters" argues that laziness is essential to achieve modularity and composability.
I think purity works very nicely in D, using D's definition of purity.
I just read the part about it, I agree that lazy computation is a powerful tool for modularity and composability.
But it's nothing specific to FP languages.
Python and Nimrod do that through generators (yield keyword), D through "ranges" (no language support).
Also, not all algorithms you would want to write only once are easily expressible in a lazy way (e.g. push parsers).
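The generator version of this argument is easy to sketch (a minimal Python example; the pipeline and names are mine, not from the thread): each stage is written independently of how much output is consumed, which is the modularity claim for laziness.

```python
# A lazy, composable pipeline built from generators: the producer
# is conceptually infinite, and the consumer decides how much work
# actually happens.
import itertools

def naturals():
    n = 0
    while True:   # never terminates on its own; evaluation is demand-driven
        yield n
        n += 1

squares = (n * n for n in naturals())
first_five = list(itertools.islice(squares, 5))
print(first_five)  # [0, 1, 4, 9, 16]
```

Note that this is exactly the pull model mentioned above; a push parser inverts the control flow, which is why it doesn't fall out of this style for free.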
Things that Haskell used to be weak on but now has good support for:
FFI - yes
Libraries - Yes
Portability - Yes
Ease of installation - Yes
Packagability - Yes
Tools - yes
Training - available
Performance - yes
(non)Reasons why it doesn't get adopted. Still apply.
Popularity - No still not popular
They don’t get it - Yes they still don't get it
Killer App - none that I can see (darcs?)
Once I had that working came the Cabal issues. "Cabal hell" has been ameliorated by sandboxes, but to use these you need to update the version of cabal-install that comes with Haskell Platform (I think they're close to updating Haskell Platform though). And when you update cabal-install, the newly built binary isn't on the PATH for many users.
Once you get sandboxes working, you're in a good position to install packages. But because you're starting fresh, installing a larger package like Yesod takes around 2 hours (IIRC).
Even when you have all that working, adding new packages can be hairy because of the frequent conflicts between packages.
I've distilled all this into a few sentences, but installation issues on Haskell have caused many hours of frustration for me (far more so than, say, Ruby).
I think many of these problems are fixed by using http://ghcformacosx.github.io/, but that project is 2 months old so wasn't an option until recently. Recent advancements in Cabal like sandboxes and version freezing (1.20) are dramatically improving this situation so I'm optimistic for future users of Haskell.
Cabal does OK for the most part nowadays, but it's definitely still a challenge with the monster projects like Yesod. Supposedly that's a work in progress for now.
Really? I literally just did
brew install haskell-platform
cabal install cabal-install
Additionally, you're working with the knowledge that once you install Haskell Platform, you need to update cabal-install (This is not something a newcomer would expect given that they just downloaded cabal via Haskell Platform).
Just getting the Platform installed is misleading as well; there's still the issue of dealing with package conflicts which happens frequently in Haskell and is non-trivial to deal with for a newcomer.
Finally, using Haskell Platform at all might cause troubles; it's explicitly recommended against by the popular Getting Started guide by bitemyapp https://github.com/bitemyapp/learnhaskell because it uses the global package database.
Well, you don't really need to. It will work fine even if you don't (and I think it notifies you to upgrade if an upgrade is available). You don't even need to use cabal to get started.
> there's still the issue of dealing with package conflicts which happens frequently in Haskell and is non-trivial to deal with for a newcomer.
How strange. I've done quite a few Haskell projects and never run into this. I didn't start using Cabal sandboxes until recently.