Why No One Uses Functional Languages (1998) [pdf] (caltech.edu)
85 points by tel 1156 days ago | 112 comments



With most of those problems pretty much gone, functional programming is doing quite a bit better than it did 15 years ago. Hybrid languages are gaining popularity: you can find Scala in quite a few large companies out there.

Major adoption of non-hybrid languages doesn't seem like something that will happen any time soon, though, and it sure seems like the FP community actually wants it that way. We just sent one of our people to Lambdajam, and in his five-minute summary of what he learned, his impression was that plenty of Haskell people there seemed to barely tolerate any talk about Scala at the conference. Badmouthing of hybrid languages is the order of the day for extremely prominent members of the FP community, even if they end up using said languages in their presentations.

So I suspect that the future of FP is to be influential, for most of the best bits to be copied by language makers that build communities that care about marketing, and Haskell and the like will remain used only by people that just think there's no way to write any useful program without first learning all about category theory.


>Haskell and the like will remain used only by people that just think there's no way to write any useful program without first learning all about category theory.

If that was Haskell's problem, its effects would be unique to Haskell. Rather, Haskell, like Erlang, Lisp, Eiffel, Smalltalk, and, in its early days -- and it had to fight to overcome this -- Java: these all have or had the same problem, which is that they don't integrate well with the "outside world". As a result, a great deal of effort is expended in the developer communities surrounding these languages to reinvent the wheel, axle, spoke, tire, hubcap, and spinner, which prevents said developers from making things that are truly interesting.

Languages which work with the "outside world" can be used to develop great software even if they are not popular, like Lua and CoffeeScript. Go, Swift, Scala and Rust learned this. The most obvious point of contrast, though, is how quickly Clojure became more popular than Scheme/Racket, despite being basically the same thing.


I see Clojure as definitely Scheme-inspired, but it is a very different language. Scheme has this approach where you can build almost anything out of cons, car, cdr.

After using Clojure for a while, all this cons, car, cdr stuff of Scheme feels like being stuck in implementation details.

With the Scheme community insisting on (1) recursion as the primary means of looping and (2) no general polymorphism in the language standard, I see no future in Scheme.

(1) - I have seen the light with Clojure's `for` macro. While it is not much more than a `map` (or a Python/Haskell list comprehension), it's that kind of sugar that is really readable and is a good platform for beginners to reach up to map, fold, and other higher-order functions.
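For comparison, a Python list comprehension (mentioned above) is exactly this kind of sugar over the higher-order functions — a small sketch:

```python
# A list comprehension is readable sugar over map/filter.
squares_of_evens = [n * n for n in range(10) if n % 2 == 0]

# The same thing spelled with the underlying higher-order functions:
squares_of_evens_hof = list(map(lambda n: n * n,
                                filter(lambda n: n % 2 == 0, range(10))))

assert squares_of_evens == squares_of_evens_hof == [0, 4, 16, 36, 64]
```

The comprehension reads naturally for beginners, while being a stepping stone to `map` and `filter` proper.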

(2) - Lists have car and cdr; lazy lists in Scheme have stream-car and stream-cdr. I.e., I cannot use an algorithm that works on lists on lazy lists, etc. Even some lightweight dispatch here would make Scheme so much more powerful and useful.
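A sketch of the kind of dispatch being asked for, using Python's iterator protocol (one example of such lightweight dispatch): the same algorithm works unchanged on a strict list and a lazy generator.

```python
# One generic protocol means one algorithm serves both strict and
# lazy sequences -- no my_sum vs. stream-my_sum split.
def my_sum(xs):
    total = 0
    for x in xs:          # dispatches through the iterator protocol
        total += x
    return total

strict = [1, 2, 3, 4]
lazy = (n for n in range(1, 5))   # elements produced on demand

assert my_sum(strict) == my_sum(lazy) == 10
```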

So: while Clojure is not much more than Scheme, it demonstrates how a little more can actually make a huge difference in usability and adoption.

PS: I really like Scheme as a small, simple language that gives you a good idea how an interpreter might work under the hood.


In what way does Lua interact with the outside world? Are you referring to it being embeddable, or the FFI in luajit2?

Also, in what way does Swift work with the outside world? It seems to be locked in pretty tightly in its own little world?


>In what way does Lua interact with the outside world?

Lua's C API allows Lua programs to bind to any C library. It's probably better at this than anything else out there, and there have been some attempts at really ambitious things like automatically generated Qt bindings: https://github.com/mkottman/lqt

>in what way does Swift work with the outside world?

Swift uses the Objective-C runtime. It interoperates with Objective-C very well and is designed to do so; by definition this means it also interoperates with C quite well. See:

http://en.wikipedia.org/wiki/Swift_(programming_language)#Li...


> Haskell and the like will remain used only by people that just think there's no way to write any useful program without first learning all about category theory.

Oh please, you just devalued your entire comment with that untrue and derisive comment. Haskell is used by people who highly value composability, purity, correctness, and who believe mutable state should have a big red flag around it and be limited as much as possible.


I think you have just proven his comment about the attitude of Haskell users.


Purity, composability, correctness, and avoidance of mutability are hardly category theoretic or exclusively praised by Haskellers. You could claim that Haskellers are particularly enthusiastic in their desire to optimize for those things, but now you have to describe why that's a bad thing.

And it's still not related to the gp's points of badmouthing other languages or being unmarketable.


Try reading again what I wrote.

Nothing I said is about "Purity, composability, correctness, and avoidance of mutability" or "why that's a bad thing"; it is all about "badmouthing other languages or being unmarketable".


I didn't badmouth any other language in that comment.


I've had a lot of trouble here too. Too often, the languages crowd will care more about the language itself rather than the problems you are solving with it. Most recently, I designed an algebraic query language, and some Haskellers were like... and it's written in Scala?

I don't know about you, but I'd rather do real math (or at least take a stab at it) in Scala than write more routing web frameworks in Haskell. Ideally, I would be writing it in Haskell (or Idris ;)) but the lack of willingness to cross over to the other side and interact with "normal" programmers really hurts us, and I'm honestly a little fed up with the fundamentalism. There are really smart folks out there with a lot of domain experience that we alienate when we blow off stepping stone languages like Scala.


>Too often, the languages crowd will care more about the language itself rather than the problems you are solving with it.

If they didn't, you'd probably not call them "the languages crowd" and we'd never get any new inspirational theories and principles out of them.


I'm not familiar with the community - why do Haskell folks look down on Scala?


The common opinion is that Scala is a poor man's Haskell, with few of the strengths and more weaknesses. It's seen as, at best, a stepping stone from imperative languages to "real" functional languages.


I see. What are the perceived weaknesses and lack of strengths of Scala as compared to Haskell?


I cannot speak for either the Haskell or Scala communities, being a newbie in both, but here is my opinion of why Scala is "the poor man's Haskell":

- Scala's type system feels at the same time more complex and less powerful. It has worse type inference, for example.

- Being a hybrid language, Scala offers fewer guarantees than a pure FP language. For example, its type system isn't adequate to express and constrain side effects.

- Scala seems more verbose and less elegant than Haskell to me.

- Compatibility with Java means Scala has some warts like allowing nulls. Yes, you have the Option type. But something you expect to be of type Option can still be null due to carelessness!

- Some libraries used by the Scala community, such as Actors in Akka, seem to eschew the static type system that's the raison d'être for using something like Scala (or Haskell); they seem a lot like dynamic typing, except without the simplicity of dynamic typing.

- Lastly, the highly opinionated "Mostly functional programming doesn't work" [1][2]

[1] http://queue.acm.org/detail.cfm?id=2611829

[2] https://news.ycombinator.com/item?id=8084302


I think the issue is that Haskell people basically assume that if they don't have feature X themselves, it has to suck and can't count as a strength of another language.

For instance, Scala's OO/module system is one of the best ones out there, but is readily dismissed by Haskell people, despite modularity being a complete train-wreck in Haskell.


Personally I feel most Haskellers wish that Haskell had ML-style modules—there just needs to be a way to make them play with typeclasses properly. It's being actively worked on at this moment, actually!

I think there's a good argument, albeit one I'm not qualified to make, about whether ML modules or Scala objects are better.


> …his impression is that there were plenty of Haskell people there that seemed to barely tolerate having any talk about Scala in the conference.

It’s a conference about functional programming. If they believe that functional languages are better for software development than imperative languages, then they may see hybrid languages as making unnecessary and detrimental concessions.

> Haskell and the like will remain used only by people that just think there's no way to write any useful program without first learning all about category theory.

You’re repeating the false meme that I/O in Haskell is difficult or theoretical, and furthermore implying that learning is a bad thing.


> It’s a conference about functional programming. If they believe that functional languages are better for software development than imperative languages, then they may see hybrid languages as making unnecessary and detrimental concessions.

If that kind of extremism was applied in discussion of other paradigms, we'd be excluding mention of "hybrid languages" like C++ from discussions of Object Oriented Programming.


> If that kind of extremism was applied in discussion of other paradigms, we'd be excluding mention of "hybrid languages" like C++ from discussions of Object Oriented Programming.

If I had mentioned exclusion, yes. C++ is mentioned at OOP conferences like ECOOP and OOPSLA as often as Scala is mentioned at FP conferences like ICFP, CUFP, and Lambdahack. But I have little doubt that a strawman proponent of a pure-OOP language such as Smalltalk—or Self or Ruby or Java or whichever language embodies the definition of OOP you prefer—would be as miffed about the use of C++ in lieu of their preferred language as hibikir’s friend’s strawman Haskeller is about Scala.

And anecdotally, the top C++ developers I’ve worked with didn’t have anything good to say about object-oriented programming anyway.


> the top C++ developers I’ve worked with didn’t have anything good to say about object-oriented programming anyway.

Which makes sense. C++'s support for OOP is pretty painful. It would be like hating functional programming because of being exposed to it in APL.


> If I had mentioned exclusion, yes.

You provided an explanation for why people "barely tolerated" (i.e., preferred to exclude) mention of hybrid languages at FP conferences.


> And anecdotally, the top C++ developers I’ve worked with didn’t have anything good to say about object-oriented programming anyway.

It is hard to get a good grasp of OO in C++ if you don't experience pure OO languages, at least for some time.

Multi-paradigm languages like C++ are great, but that power can work against learning a specific paradigm properly.

Good program design, regardless of the paradigm, requires experience and the ability to use the tools that expose a paradigm's best practices.


> Badmouthing of hybrid languages is the order of the day by extremely prominent members of the FP community, even if they end up using said languages in their presentations.

Those hybrid languages (e.g. Python, F#, and Scala) are great for new development but awful when you have to maintain low-quality legacy code. That said... I think that observation applies far beyond functional programming. (It's rare, but certainly possible, for bad Haskell code to exist.) A common impetus behind language churn is the programmer's desire to justify green-field development (which is fun, even in C++) instead of career-damning legacy maintenance (which is unpleasant, even in good languages).

> Haskell and the like will remain used only by people that just think there's no way to write any useful program without first learning all about category theory.

This is an exaggeration of that prejudice. I do think that the Haskell community has failed (but is it a failure or a deliberate choice not-to?) to package a not-that-hard language as something average programmers can understand. Functor, monoid, and monad sound a lot scarier than they actually are.


Pedantic, but I really wouldn't call Python a hybrid language on the same level as F# and Scala. Python's functional programming capabilities are pretty poor (not to mention that things like TCO are missing).

Not to say that Python is a bad language, but having repeatedly tried to program in a functional style, I've found that more iterative-seeming code ends up being more idiomatic and maintainable.
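To illustrate the point, here is the same small task in a map/filter/reduce pipeline and in the comprehension style most Pythonistas would consider idiomatic:

```python
from functools import reduce

words = ["foo", "bar", "baz", "qux"]

# Functional style: a map/filter/reduce pipeline.
total_functional = reduce(
    lambda acc, n: acc + n,
    map(len, filter(lambda w: w.startswith("b"), words)),
    0)

# Idiomatic Python: a generator expression does the same job.
total_idiomatic = sum(len(w) for w in words if w.startswith("b"))

assert total_functional == total_idiomatic == 6
```

Both compute the total length of the words starting with "b"; the second reads more naturally in Python, which is roughly the complaint above.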

> I do think that the Haskell community has failed (but is it a failure or a deliberate choice not-to?) to package a not-that-hard language as something average programmers can understand.

That is extremely true. Even in widely-used packages, you'll rarely find good official documentation (quickstarts or the like).

Sure, for a lot of packages, types are good enough (if you have a package to read a CSV file, all you need to do is find the CSVConf -> Handle -> [[String]] function), but with the bigger, more framework-y packages, there are so many types internal to the system that you _have_ to read through everything to do even simple things.

Too much time is spent trying to explain how the language (which ends up being simple) works and not enough time on practical guides on how to actually do things.

This has become a bit of a rant, but in math (the kind most people go through in middle school), you always learn specific cases before going into general theory. Unfortunately, a lot of Haskellers end up being mathematicians of the PhD style, who are fine with going straight into general theory. Examples are useful!


> awful when you have to maintain low-quality legacy code

When is it not awful to maintain low-quality legacy code?


> I do think that the Haskell community has failed (but is it a failure or a deliberate choice not-to?) [...]

Well, the unofficial motto of the Haskell community is "avoid success at all costs" after all :) [1]

[1] https://www.simple-talk.com/opinion/geek-of-the-week/simon-p...


I think the Haskell, OCaml, and F# people are promoting their language of choice the wrong way. The way to do it would be to show concrete examples of problems which are difficult to solve in, say, Java, and then show how each is a non-problem in Haskell etc.

Of course if you know your language very well, it is much easier for YOU to work with it. Just saying it has this or that feature is great, but does not really tell us why it would be costly to do similar things in other languages.

What does FP really mean? The term is used frequently as a synonym for Haskell. Does it mean "type-inference"? But is that really a 'functional' feature? Does it mean 'immutability'? In Java you can declare your data to be immutable. Does it mean 'closures'? Or being able to pass a function as an argument or result? You can do those in JavaScript.


"Functional" is really poorly defined. Instead, here are some reasons to feel ML/Haskell are valuable:

ML and Haskell: ubiquitous use of higher order functions through the standard library; concise syntax for lambdas; bi-directional type inference with principal typing; named algebraic data types including sums, products, exponentials, recursion, universal, and existential quantification; ubiquitous purity; generalization at higher-kinded types; ubiquitous immutability; effect typing.

ML alone: true modules; module functors; structural equality in typing (polymorphic variants); great metaprogramming (camlp4)

Haskell alone: bounded polymorphism (typeclasses), completely ubiquitous purity/completely ubiquitous effect typing; completely ubiquitous immutability; lazy evaluation; a standard library including many high-level higher-kinded type generalizations (functor, monad, applicative, traversable, foldable, category, arrow, anything else you can think up); ok metaprogramming (template haskell); programmer controlled rewrite rules, results in list fusion/vector fusion

I list more under Haskell partly because I'm just more familiar with it, but also because it takes more things further and thus differentiates itself more.


You're preaching to the choir. Your typical Java programmer doesn't even know what these phrases mean.


Sure, and it's a little sad that there isn't any good resource to explain them... But I also am not going to try to do it in the space of a comment.


> The way to do it would be to show concrete examples of problems which are difficult to solve in say Java, and then show how that is a non-problem in Haskell etc.

I've been impressed by the Mirage blog posts [1]. If I were going to pick a single example with the biggest size/complexity win over an imperative language, the ASN.1 Combinators [2] post would be the example.

[1] http://openmirage.org/blog/announcing-mirage-20-release

[2] http://openmirage.org/blog/introducing-asn1

The general problem with the good examples is that they're deeply tied into a different school of thought. If you don't have a general idea of what's going on, the example is completely incomprehensible.

The only functional language advocacy site I know of that's really attempting to bridge this gap is "F# for fun and profit" [3].

[3] http://fsharpforfunandprofit.com/why-use-fsharp/


This is actually a very nice one:

http://fr.slideshare.net/ScottWlaschin/ddd-with-fsharptypesy...

It shows how F# is a much more suitable language than C# for Domain-Driven Design (a popular design approach for boring database software which _originated_ in C#-o-world). It makes a very compelling case that you're just making life hard for yourself if you do domain modelling in C# instead of a functional language.


You can write functional programs in Java, because it supports immutability, pure functions, etc.

The fact Java has these features does not mean they are not FP :)


For me, purity, functional composition, and avoiding mutable state.


Makes sense: a function must always return the same value for the same argument, so if there were mutable state you might have things that look like functions but really are not. But what is "purity"? Does it mean that you CANNOT have mutable state in any program written in a "purely functional" language? I still don't quite get how a language could be "purely" functional if it has to do IO. Reading the user's input will return a different value most of the time. And what do you do in Haskell if you need to know the current TIME for any reason? It can't be returned by a "function" because its result will always be different. So there must be something ELSE besides functions in the language. But doesn't that then mean that it is no longer a "purely functional" language?
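The distinction being circled here can be sketched in Python for familiarity:

```python
import time

def add(x, y):
    # Pure: the result depends only on the arguments.
    return x + y

def now():
    # Impure: the result depends on hidden state (the system clock),
    # so two calls with the same (zero) arguments can differ.
    return time.time()

assert add(2, 3) == add(2, 3)   # referentially transparent
# now() == now() is NOT guaranteed.
```

Haskell's answer is to give clock reads a type that records the impurity (`getCurrentTime :: IO UTCTime`): the "something ELSE besides functions" is an IO action, a value describing an effect, which only the runtime executes.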


A "pure" functional programming language has the problem you have identified, and that is why Haskell was for a very long time considered an academic language: it couldn't do anything useful, as anything useful requires side effects like IO.

The above is how Haskell was introduced when I took it in CS at Syracuse University.

The practicalities of Haskell as a useful and practical language are realized through monads [1]. The use of monads to deal with side effects and the impure world automatically makes Haskell "non-pure".

With the existence of monads, the purity of the language is a matter of opinion or degree, since it is largely based on "idiomatic" usage.

To say Haskell is an impure language because of monads would be akin to saying C# is an unsafe language because it allows pointer management through the use of the "unsafe" keyword.

[1] http://en.wikipedia.org/wiki/Monad_%28functional_programming...


Good explanation, thanks. I would take from it that "purity" is relative, just as "safeness" is. There are no safe languages, nor pure ones, just some that are more pure than others.

I guess I would have grasped your point earlier if Haskell had a keyword "IMPURE", just like C# has the keyword "unsafe".

Come to think of it, doesn't "pure" in the context of PLs mean much the same as "safe"?


Here is my take as someone who could, but chooses not to, use FP. When I look at “good Haskell” I see it dominated by comments written in procedural English describing what the code does, e.g.:

http://hackage.haskell.org/package/pretty-1.1.1.1/docs/src/T...

Haskell requires this because, let's be frank, it's really hard to understand. I can hear the howls of anger, but if you ask the “man in the street”, will he do a better job explaining the above Haskell sans comments, or a similar pretty printer written in VB? I have a CS background and studied FP and type theory, but I am more like the man in the street.

What I want from a programming language is an executable description of what I want to achieve. I want that description to have all the good qualities sought in FP, but more importantly I want it to flow into and out of my brain (and my colleagues' brains) like butter, not like a sudoku puzzle trapped in a type theory proof and then Huffman encoded (because descriptive names are the preserve of all those "terrible Java programmers").

In the real world, I need to write code that makes fast incremental changes to large interconnected data-structures e.g. graphs and trees with indexes and all sorts. I cannot begin to imagine how to solve half my problems with immutable data-structures. With what I know at the moment, it would be a massive obstacle to the real problems I need to solve.


Yes, readability. Being more productive when writing programs is only half the story; readability is probably more important because software needs maintenance. And readability is not only about how fast YOU can read and understand Haskell; it's more about how fast other programmers can. Imagine if all scientific papers were written in Latin.

I think you make a valid point. Many comparisons of programming language productivity only try to measure how fast you can code a new application, not how fast you can modify an existing application to do something else - ASSUMING you are not the author of the original version.


> Advocates of functional languages claim they produce an order of magnitude improvement in productivity. Experiments don't always verify that figure -- sometimes they show an improvement of only a factor of four.

It's weird, the first time I read this I was excited to read the source of this quote. When I finished reading I scrolled up to the top and was disappointed to see a citation missing. Does anyone know what experiment he's referring to?



It's interesting to see how 15+ years later, general attitudes among the masses haven't changed much in this regard (according to the most popularly used languages [0][1]).

Feels bad, man.

[0] http://beust.com/weblog/2014/05/03/language-popularity-on-gi...

[1] https://www.google.com/search?q=language+popularity


Would you say that the functional paradigm has started to gain more traction lately though? -- I don't know because I haven't been programming long enough, so maybe it just seems that way to me simply because now that I'm aware of its existence I'm seeing it everywhere.

I read a quote once about The Velvet Underground that stuck with me. It said that, while not many people bought their albums, everyone who did started a band.

Sometimes it seems like Haskell is The Velvet Underground of programming languages. While not many people are using it, everyone who is has started their own language or functional library -- influencing other languages is, in a way, people using FP languages, just indirectly.

I've not written a single serious Haskell program -- I don't think I would even know how to, tbh --, but I sure feel like just playing with it and learning about its theoretical motivations has made me a much better programmer in other languages.


>Would you say that the functional paradigm has started to gain more traction lately though?

That is certainly true, but I think most people will not switch to functional languages to program in a functional style; they will stick with their favorite imperative language and do functional programming there, as more and more imperative languages add the features and facilities that make that style possible.

Case in point: I have this Qt/C++ library [1] that evaluates lambdas in a lazy fashion.

You can have, for example, a function that returns what is called "a future", i.e. something that can be evaluated at a later time to produce the value "held" in the future, or the future can be cancelled if the function caller decides they no longer need the result.

The introduction of lambdas in C++11 allows for this kind of programming to happen in C++, and I suspect more and more C++ programmers will start coding in this style as more and more APIs get released that take lambdas as arguments or return them as results.

It is an interesting new way of doing C++, and these ideas are mostly borrowed from the functional programming world.

[1] https://github.com/mhogomchungu/tasks
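A minimal sketch of the idea (not the linked library's API): a lazily evaluated, cancellable "future" built from a lambda.

```python
class Future:
    """Holds a thunk; nothing is computed until get() is called."""

    def __init__(self, thunk):
        self._thunk = thunk
        self._cancelled = False

    def cancel(self):
        # The caller no longer needs the result.
        self._cancelled = True

    def get(self):
        if self._cancelled:
            raise RuntimeError("future was cancelled")
        return self._thunk()     # evaluated only on demand

f = Future(lambda: 21 * 2)       # nothing computed yet
assert f.get() == 42

g = Future(lambda: 1 / 0)        # the error is never triggered...
g.cancel()                       # ...because the caller cancels first
```

The point is the same as in the C++ version: once lambdas are first-class, deferring and discarding work becomes a small library rather than a language feature.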


"It said that, while not many people bought their albums, everyone who did started a band."

And possibly a country.

http://en.wikipedia.org/wiki/V%C3%A1clav_Havel


> Would you say that the functional paradigm has started to gain more traction lately though?

I would and I believe it started to happen when people started talking about Scala.

I also think something similar happened with OOP in (maybe?) the early 1990s - i.e. we'd had Simula, Smalltalk and so on for years but it became mainstream. I'm thinking of things like Visual Basic 1.0 (1991), C++ v2.0 (1989)


> Would you say that the functional paradigm has started to gain more traction lately though?

Surely.

All major languages are going hybrid, mixing imperative (mostly OOP) with functional programming concepts.

Until we get a complete new computer architecture, like quantum computers, the mainstream will be hybrid OOP + FP.


My impression is that it is making a difference amongst top programmers. When I learned Scheme way back when in college, there was no pretense of it being practical. My AI professor said, "AI is reduced to LISP hacking" and that was to denigrate AI, not raise LISP.

Nowadays people see the value of recursive thinking, and hence the value of functional programming. The convergence of many technology trends seems to suggest that the idea of "The program can be the data, and the data can be the program" is no longer so far fetched.


But many concepts from functional languages have become commonplace in popular multi-paradigm languages. That's probably the best we can realistically hope for in that time frame.


Switching to FP is a bit like switching to Dvorak: the costs of switching are still too high to justify the effort.


I went into FP from the perspective of a Python/JavaScript/Groovy zealot who considered FP to be a silly academic thing with no real-world applications.

Took me about a week to be productive in Scala in a functional style. Haskell is a little more complex, but I found that throwing myself in the deep end, while frustrating, was not overly difficult.

The cost of switching to FP only seems high because people don't want to even bother trying. Once you've gotten over the initial bump, FP is (in my opinion) substantially easier to write code in.


The answer is obviously to teach Dvorak and FP from the beginning so you don't have to switch.


Except that Dvorak has been repeatedly shown to be an inferior solution to QWERTY, especially for programmers (and it's also pretty bad for carpal tunnel syndrome).


[citation needed]. I have a hard time believing that a layout which cuts the travel distance of your fingertips in half is worse for carpal tunnel.

Number of characters in your post: 139

Number of characters on Dvorak home row: 83

Number of characters on QWERTY home row: 41

A solid third of those are 'a', the home-row letter they have in common. Also, my original motivation in learning Dvorak was to stop myself from touch-typing (I repeatedly tried and failed on QWERTY). It was quite a success in that regard.
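The methodology behind counts like the ones above can be reproduced with a short sketch: count how many letters of a text sit on each layout's home row (sample text is mine, not the original post).

```python
# Letter keys on each layout's home row.
DVORAK_HOME = set("aoeuidhtns")
QWERTY_HOME = set("asdfghjkl")

def home_row_hits(text, home_row):
    # Count the letters of `text` typed without leaving the home row.
    return sum(1 for ch in text.lower() if ch in home_row)

sample = "the quick brown fox jumps over the lazy dog"
dvorak_hits = home_row_hits(sample, DVORAK_HOME)
qwerty_hits = home_row_hits(sample, QWERTY_HOME)

# Dvorak puts the common English letters (vowels + dhtns) on the home row.
assert dvorak_hits > qwerty_hits
```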

Incidentally, I do agree with the earlier post saying that it wasn't worth it. But I also strongly believe that it was an improvement (there's a reason why I don't switch back), just not an improvement large enough to justify the effort (which was much greater than expected).


Dvorak wasn't designed for programming, which includes a lot of non-alphabetical symbols.

Wikipedia says:

"The carefully controlled study failed to show any benefit to the Dvorak keyboard layout in typing or training speed"

http://en.wikipedia.org/wiki/Dvorak_Simplified_Keyboard

Note that this is for English, which means Dvorak fails to deliver even for what it was designed for.

As I mentioned above, Dvorak's performance and the strain on your hands are even worse when you are doing something other than typing English (programming, typing another language, writing spreadsheets, etc.).


It's ridiculous to claim that Simplified Dvorak is bad for programming while completely ignoring Programmer's Dvorak. Of course Simplified is worse at typing symbols, which is why anyone who cares about rapidly typing symbols will use Programmer's. But who are these programmers limited by typing speed? I've certainly never met one.

I don't take issue with the tests showing that Dvorak doesn't have speed benefits. My anecdata confirms this claim at the skill level which is relevant to me: my Dvorak typing speed, which I have made no effort to improve, is roughly the same as my QWERTY typing speed, which I also made no effort to improve.

However, Dvorak dramatically (factor of 2-3) decreases the fingertip slew distance which is monotonic in the distance your tendons will have to travel in your carpal tunnels for any given piece of typing. The difference isn't remotely subtle because it's easy to feel the tendons moving in your hand if you pay attention. In practice, it's the difference between tingles and numbness after 5 pages vs 15 pages of typing. So I do take issue with your claim that Dvorak is bad for carpal tunnel.

Your original post mentioned carpal tunnel and did not mention typing speed so I think it's more than fair to ask you to elaborate specifically on your claim regarding carpal tunnel.


My typing speed with Dvorak is around the same as with QWERTY too, perhaps because I've not been using it for as many years. It took me 3 months to get up to speed (contrary to Wikipedia's claim of a year), and it's pretty easy to make the context switch between QWERTY and Dvorak - where I only make mistakes in the first few minutes of switching - and not six to eight weeks as Wikipedia suggests.

I didn't learn Dvorak to increase my typing speed - I learned it because I was developing RSI in my right hand and put it down to QWERTY (and the mouse). The benefits of Dvorak are pretty clear when typing up large texts, although pain is still present after a few hours of typing. I found ergonomic keyboards to be a better solution than changing keyboard layout for dealing with RSI, though, and I typically use QWERTY these days because configuring applications' keybindings for Dvorak is too awkward and the gains are too little.

I've been considering learning Workman or QGMLWY, as their supposed benefits are even greater than Dvorak, and they have CUA-shortcut friendly layouts.


Programmer's Dvorak? This is the first time it's come up in the discussion, we were talking about the regular Dvorak keyboard.

There's hardly any information about Programmer's Dvorak, by the way, not even a Wikipedia page.


> we were talking about the regular Dvorak keyboard

"This car sucks! It can't even reach freeway speeds!"

"If you would shift out of first gear, you could reach freeway speeds."

"We weren't talking about higher gears!"

> There's hardly any information about Programmer's Dvorak

It's a more recent layout, yes, but it has achieved decent enough penetration that I would expect anyone seriously contemplating the switch after 2010 or so to be aware of it. Anyone who looked at stack overflow opinions on Dvorak, selected a Dvorak layout on linux, or googled Dvorak and programming in conjunction would have run across it.

You still haven't substantiated your claim regarding carpal tunnel.


My own "programmer's Dvorak" is just regular Dvorak with a few keys remapped for my preferred language(s). Since the syntax is so minimal, it works very well - for me.

(On Linux, it's a simple xmodmap configuration in /etc/X11. I even made a Windows equivalent: an executable that can be installed/removed like any other program.)

With so many keyboard customization utilities available, I'm surprised more folks don't optimize the keyboard for their own use. Kinda like building one's own lightsaber...


Obviously the plain Dvorak keyboard is "no true Dvorak".


Have you ever tried to use the most common shortcuts on Dvorak (Ctrl-C and Ctrl-V)? You will have to use 2 hands, or stretch your (right sided) mouse hand. This killed it for me.


Yeah, it's really awkward to configure every application's keybindings to be dvorak-friendly (if possible). Every application inventing its own keybinding solution is clearly a mis-design of our operating systems, which should have some common daemon/configuration for them.

I live mostly in emacs, so it tends not to be an issue, since I reconfigure nearly every shortcut anyway (default emacs shortcuts are, IMO, terrible on modern keyboards). There are extensions to configure this for firefox too (because it's a pain to do manually via about:config).

The lack of ability to configure key-bindings explicitly is perhaps one of my biggest gripes with "modern" software design - which has the "do it one way only" philosophy, and forces the user to adapt to the software, rather than adapt it to their needs. Firefox for example, gets worse with every iteration - and it's not like I can revert back to using Opera, since they gimped that too by turning it into another Chrome clone.


Really? Do you know of any research that shows this?


The effort isn't very high at all? What are you talking about?


The effort required is dependent on a bunch of things - your existing familiarity with FP concepts from using 'multi-paradigm' languages, the language you're considering switching to (clojure is probably easier to learn than haskell), your familiarity with tools commonly used in that language's ecosystem (emacs, repls), etc.


It's not just popularity. I wouldn't trust anyone on my team (of 4) to 'get' a Functional Language. Maybe one out of four (Not counting myself in this scenario), and almost no-one that they would hire later on. That leads to a pretty low chance of it ever being adopted where I work.


What kind of work are you doing, and what kind of people are you hiring? I'd be very nervous of working with someone that wasn't actually capable of "getting" FP.


I don't hire anyone, just to get that out of the way. :)

I don't want to go into too much detail, but it's a .net financial management client and web application. Nothing fancy by any means. Let's also just say I don't have very many options where I live, and am geographically restricted because I don't believe in sacrificing watching my daughter grow up. I'm interested in remote work but that hasn't been very easy to look into. I'm in that mystical land between mid-level and senior and most remote jobs seem to want senior in both skills and time. I'd even consider getting away from .Net entirely and professionally pursuing a new stack but those opportunities haven't been available at all when looking at remote positions (And I completely understand why).


I suppose that depends on your definition of FP. If you're just talking about HOF or 'programming with functions' fair enough, but some individuals (erroneously imo) only consider languages with higher kinded types 'functional'. The term is really overloaded.


Which puts all Lisp inspired languages out of FP.


It depends what you mean by "getting" FP.

I program in Python, and Perl for years before that. I regularly use the functional features of the languages, but I would have no idea how to architect a reasonably large application using only FP, certainly not to the extent I could using OOP / procedural.


I think the "they don't get it" thing is actually quite a valid reason. I've heard some really ignorant things said in opposition to functional programming. I don't think it applies to the majority of potential users, but certainly a non-negligible percentage.

Examples: "Immutability is stupid, because if your program is immutable it can't do anything."

"You just end up writing everything in an IO Monad anyway."

"Functions are good for math, but not for coding."
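
For what it's worth, the second claim is easy to counter with code: idiomatic Haskell keeps the logic pure and confines IO to a thin shell. A minimal sketch (the function names here are made up for illustration):

```haskell
-- Pure core: no IO anywhere in these types, so they're trivially testable.
summarize :: String -> String
summarize = unlines . map (take 10) . lines

wordTotal :: String -> Int
wordTotal = length . words

-- Thin IO shell: the only place effects happen.
main :: IO ()
main = do
  input <- getContents
  putStrLn (summarize input)
  print (wordTotal input)
```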


I agree that this is the case for a lot of people. It's not a judgement of their abilities or anything like that; it's just that, as a practical matter, to really understand some things you have to commit and actually solve some real problems with them. The payoff isn't very apparent when you're starting out, but for me at least the insights I've gained from functional programming have been huge.

FP isn't the only thing in this category - like many people on HN I learned OOP a long time ago, but I remember it being similarly earth shattering, and I went through a period of over-enthusiasm and evangelism for Java and GoF design patterns just like the one I went through when I first learned Haskell. The code-is-data realization that you have when you learn a Lisp is similarly powerful.

I hope I experience more revelations like these! These moments where you learn a whole new way of thinking are the purest joy programming has given me.


> For a given task, the imperative solution may leap immediately to mind or be found in a handy textbook, while a comparable functional solution may require considerable effort to find

FP means additional levels of indirection which makes it less efficient than imperative approaches. If FP really were more productive than imperative languages it would have been adopted by the industry long ago.


Plus le change, plus le meme chose.

Functional programming has a couple of problems, when it comes to uptake. The first is that you only get 40% of the benefit if you go 90% of the way. (Static typing is the same way, which is why I prefer Clojure's typelessness over the mediocre static typing of Java or C++ or Go.) In that case, the "impure" 10% still causes, if not realized complexity, the potential for complexity that infuriates maintainers.

If you go 90% of the way on FP and get 40% of the benefits, that still means you're writing code faster and generally producing less of it, and those are good things. However, there's a nasty duality in programming which is that saving time on the fun stuff means more time is spent on the un-fun stuff. (If you make writing of code 4 times faster and debugging of nasty interface issues 2 times faster, you spend a larger proportion of your time on the latter.) Unless you can eliminate whole classes of un-fun-ness (e.g. categories of bugs) it's often not worth it. Eliminating 90% of the un-fun-ness just means you're spending more time on that remaining 10%.

For example, Python can be used as a mostly functional language and thereby go 90% of the way (conceptually) to FP, but it isn't. The community has judged it to be not worth it. Scala should be used for FP, but there are plenty of Java++ programmers who still use null instead of Option[T]. Sadly, it doesn't take much of this dysfunctional programming to emasculate FP and leave an unbiased person asking, "Why bother?" It might not even be the right way to go, for a maintenance project. Mixing two styles is going to make it more illegible than using the existing (if suboptimal) style for new changes and work.

The career issues noted are also huge. The average Clojure or Haskell developer makes slightly more than the average Java developer, but if you control for skill and compare at-level, the Java or C++ developer wins. A 1.8 Java developer (scale here: http://michaelochurch.wordpress.com/2013/04/22/gervais-macle...) makes $175,000 and turns down invitations into VP-level positions at major corporations on a regular basis, because 1.6+ Java developers are just so rare. A 1.8 C++ developer can make $500,000 per year at a hedge fund. The 1.8 Haskell or Clojure developer makes about $125,000 on average, which is not bad but hardly stratospheric. This tends toward a self-perpetuating exaggeration of historical discrepancies in language popularity. Even if the average Haskell job is better, Haskell jobs (at all) are much harder to get, and Haskell jobs that pay as well as even an upper-middle Java job are extremely rare. The market is just stronger for Java people, so that keeps people wanting to use it, which keeps the majority of companies preferring it over better languages because they fear maintenance risk more than low productivity.

I'm less charitable than Wadler on the "they don't get it" angle. Certainly there are smart people who "get" functional programming and have good reasons not to use it. Still, I do think that a contributing factor to FP's lack of uptake is the anti-intellectualism of the programming world.

It's not that anti-intellectuals have some natural inclination toward Java or C++. The anti-intellectuals have the general attitude that language doesn't matter. And then they pick Java for some risk-averse, enterprisey shitfuck reason like maintenance risk or lack of an available developer pool, both of which are just business stinginess ("we don't want to train people"). What makes them vile is not their tastes in languages (there is a lack of taste in the Java community but, even still, there are cases where Java is the absolute right language to use) but the fact that they don't take programmers' concerns (such as tooling choices) seriously at all, and prioritize manager-type concerns over the needs of the people actually doing the fucking work.


The 500k number is disingenuous, I feel. There has to be something to control for effort/sacrifice. Everyone I know who works in HFT sleeps under their desk and doesn't even hit that rate.


Then they're terrible at negotiating for themselves and projecting status.

There are people in finance who work 13-hour days, and there are people who go home at 5:00. There are definitely more of the former kind in finance than elsewhere, but it doesn't seem to help your career much to put in the ridiculous long hours. Sure, you won't advance if you always leave at 5:00. You have to put in the hours when it counts, but it's maybe one or two weeks per year.

Quants average 9 to 7, but a third of that time is the research and exploratory work they'd do anyway. It's stuff that tech people do at home, off the clock. The job involves a lot of research-- reading papers, attending conferences, keeping up with tech-- and the savvy ones do it on the clock. Programmers work fewer hours but have a harder time getting to learn on the job.

There are some nightmare hedge funds with long hours and bad cultures, but the good quants stay the fuck away from those.


The anti-intellectuals have the general attitude that language doesn't matter.

This attitude should not be discounted. You'll find few stronger proponents of strong type systems than me.

I recently started working with a client who wanted to build a Haskell system in order to do some complex financial modelling. I told them to forget Haskell and build a Python/Django app that will handle 90% of their customers. The only significant bug was comparing a backward discounted quantity to a forward discounted quantity. Haskell wouldn't solve that.

The development effort saved due to not rewriting built-in django apps vastly outweighed any productivity benefit we'd have gotten from using Haskell.


A friend started a company a couple of years back. He was not a programmer, but worked in a lab, so knew the bioinformaticians favored Python. He insisted that the app was built in Python. I was thinking that the language didn't really matter.

Anyway, a couple of years later he got a grant to continue with the project. It paid junior level wages. And he couldn't get a Python programmer to work for that (he asked me, amongst others). PHP would have probably opened up more options in that respect, even though it is considered a poorer choice of language.


"The only significant bug was comparing a backward discounted quantity to a forward discounted quantity. Haskell wouldn't solve that."

It could, depending on how you code. For that matter, so could C.


I told them to forget Haskell and build a Python/Django app that will handle 90% of their customers.

That's not anti-intellectualism. It's practicality. The "right tool for the job" isn't always the same language. Often it's Python, because the libraries are mature. Machine learning is one area where this is often true. Python itself doesn't seem like it should be a leading ML language, but it's far ahead of the competition in terms of library support (e.g. Numpy, Scipy, Pandas, et al).


I have never seen a Java developer position for $175,000, and especially haven't seen anything about a C++ developer making $500,000. I'm not saying I don't believe you, more that I'm in disbelief; do you have any sources?


Good HFT (High Frequency Trading) programmers make $500K or more, see: http://beta.slashdot.org/comments.pl?sid=2357190&cid=3693676...


> Plus le change, plus le meme chose.

Plus ça change, plus c'est la même chose:

http://french.about.com/od/vocabulary/a/pluscachange.htm


Thanks. I'd change that if I could.


I feel the pain.

At least on the Fortune 500 consulting world, where the customer dictates the tooling, the best we can hope for is C#+LINQ, Java 8 and C++11.

Scala, F#, Clojure and all the other ones, only at home projects.

At least on our client portfolio.


For me it's much simpler. I don't feel like "FP" would bring me anything at all. Let's enumerate the Haskell features that will supposedly save my soul.

- Immutability. You have to minimize state whatever the program you write, be it language-enforced or not.

- Purity: same story.

- Traits. I can use compile-time duck-typed interfaces in C++ or D that achieve the same thing as e.g. Caml modules.

- Monads: if I understand correctly, they could be done with compile-time interfaces in many languages; doesn't mean we want to.

- Getting access to the GPU power by using some magic library. I'll be honest: I don't believe it.

- Laziness: never needed it, I think SPJ said the next Haskell would be strict.

- Better type inference. This was already nice in Ocaml. It doesn't bring anything for maintenance work, types are _nice to read_, and inference inside functions is done in "lesser" languages.

- exhaustive match: Very nice. Other languages have equivalents.

- deconstructed match: I think it can lead to abuse. I never need to match the first three elements of a list.

- closures: every language has them now. C++ even allows manipulating the capture as a first-class value. And guess what? In practice closures are less readable than their retarded brother, the Object.

- tuple syntax: I think it's considered cool while it's not readable. Aggregates have names, tuples have "first element", "2nd element", no semantic meaning.

- named tuple syntax: available with other language aggregate literals most of the time.

- custom operators: considered a liability, I don't want to learn to read your super duper (+=+) operator.

- true parametric polymorphism: implies uniform representation, implies lessened performance, implies not being a general purpose language. You want to be used for any program, _be fast_ or go home.

- forced to write good code. No no no I want to be _able_ to crank out stinking code when I want, so that I can automate something in the first place given a time budget.
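
For reference on two of those bullets, here's what exhaustive and deconstructed matching look like in Haskell (a small sketch; the Shape type is invented for illustration):

```haskell
-- Invented example type.
data Shape = Circle Double | Rect Double Double

-- Exhaustive match: with -Wincomplete-patterns, GHC warns if a
-- constructor is left unhandled.
area :: Shape -> Double
area (Circle r) = pi * r * r
area (Rect w h) = w * h

-- Deconstructed match: grab the first three elements, if present.
firstThree :: [a] -> Maybe (a, a, a)
firstThree (a:b:c:_) = Just (a, b, c)
firstThree _         = Nothing
```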

AFAIK Haskell has no story for custom memory allocations or explicit SIMD intrinsics, and uses two different string types, one of which is a list of Char. A list. Why a list? This is never needed in high performance programs.

There is also a perception that "the compiler will make my program fast because $compiler_guy is super smart" which is terribly junior, and this hipster community tells me I'm dumb for not using Haskell.

Also wtf with the preprocessor? Really? Am I not "getting FP"?


> Am I not "getting FP"?

I think that's the case. Haskell never claimed it would save your soul.

Quickly and in particular:

Immutability/purity: of course, but being more explicit is beneficial and challenging, so the compiler helps

Traits: if you mean typeclasses then they are much more than merely traits

Monads: I don't think you understand correctly, sorry

GPU: You don't have to believe it. It exists. Embedded DSLs and cross-compilation are very common in Haskell.

Laziness: it's not a needed thing at all, it increases composability. SPJ has asked that question, but the jury is still very far out.

Closures: Let's not argue readability, it just makes everyone look bad. Instead, I agree that Objects and anonymous functions are similar. The point is not to have them, it's to have a culture which really does use them everywhere that's appropriate.

Syntax: not worth arguing about. If you dislike it, you dislike it.

True parametric polymorphism: I have no idea what you're talking about here. Parametricity leads to much more program correctness and specialization recovers speed. Further, parametricity opens up compiler-enforced behavior leading to fusion which dramatically improves speed.

Forced to write good code: you really aren't, you're just forced to mark it as bad code (i.e. live in IO)

GHC has custom memory allocations, SIMD probably by the next version, and three different string types. Each has its own particular use and forcing them to be the same would lead to inefficiencies or incorrectness all around.

Using CPP is annoying, no doubt. It'd be great if that were different.


> GPU: You don't have to believe it. It exists. Embedded DSLs and cross-compilation are very common in Haskell.

Of course I believe it exists, I just don't believe it can achieve top performance (which I happen to need in $dayjob).

> Let's not argue readability, it just makes everyone look bad.

??? It's still the most important thing for a programming language. Most of our problems are because programs are hard to understand and hard to change.

> Forced to write good code: you really aren't, you're just forced to mark it as bad code (i.e. live in IO)

If I need an ugly script quickly for splitting a big binary file in parts to people that went to a tradeshow (yes true example), I don't have time to type "IO" or even think about it. The language won't help me achieve anything in that case.

> True parametric polymorphism: I have no idea what you're talking about here. Parametricity leads to much more program correctness and specialization recovers speed. Further, parametricity opens up compiler-enforced behavior leading to fusion which dramatically improves speed.

http://www.haskell.org/ghc/docs/6.12.1/html/users_guide/prim... https://ghc.haskell.org/trac/ghc/wiki/Commentary/Rts/Haskell...

Unboxing/specialization needs access to the whole source code like any whole program analysis. So what I get is that if you are lucky, the compiler will recover the speed that was lost in the first place. But you can't be sure.

> Each has their own particular use and forcing them to be the same would lead to inefficiencies or incorrectness all around.

It's still an impediment to have three of them. Many languages have one, period.


I don't think anyone claims you get top performance, merely great performance while retaining high level coding. I don't think syntax is that important; it's about a 3/10 while semantics are a 15/10. For your tradeshow example I can personally attest to the ability to hack together a script in IO on the fly under pressure. It's not a huge problem, and if you're worried about typing "IO" then just use inference.
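
To make the "script in IO" point concrete, a quick-and-dirty sketch (the file names are placeholders; only the pure helper carries a signature):

```haskell
-- Pure helper: keep the first n lines of a blob of text.
firstLines :: Int -> String -> String
firstLines n = unlines . take n . lines

-- Throwaway glue: just live in IO and don't think about it.
splitFile :: FilePath -> FilePath -> Int -> IO ()
splitFile src dst n = do
  s <- readFile src              -- e.g. "big-input.txt" (placeholder)
  writeFile dst (firstLines n s)
```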

Your problem with primitive types is valid, but I've always seen this easily handled by optimizing inner loops. It's ugly, but so is all low-level optimized code.

And many languages trade off between uniform access, byte-string representation, and Unicode. Haskell just splits each use case into its own type.


> Your problem with primitive types is valid, but I've always seen this easily handled by optimizing inner loops. It's ugly, but so is all low-level optimized code.

OK. I guess if there were a way for the optimizing backend to report which unboxings failed, it would satisfy me. For us native programmers, having tagged pointers that are often optimized away - but we don't really know when - is a concern. Might be irrational terror for sure, like the GC fear. So FP language proponents need to explain more why and how features like GC/uniform representation/whatever are not a speed impediment (imho).

> And many languages trade off between uniform access, byte-string representation, and Unicode. Haskell just splits each use case into its own type.

True and this can indeed be contentious where this isn't done.


Also, re: laziness, it's more nuanced than that. See this quote by SPJ, from a 2009 interview ( https://www.simple-talk.com/opinion/geek-of-the-week/simon-p... ):

> "Laziness has lots of advantages, including modularity. These days, the strict/lazy decision isn’t a straight either/or choice. For example, a lazy language has ways of stating ‘use call by value here’, and even if you were to say ‘oh, the language should be call by value strict’ – which is the opposite of lazy - you’d want ways to achieve laziness anyway.

> Any successor language to Haskell will have support for both strict and lazy functions. So the question then is: what’s the default, and how easy is it to get to these things?

> How do you mix them together? But on balance yes, I’m definitely very happy with using the lazy approach, as that’s what made Haskell what it is and kept it pure."

I'd say he is arguing that keeping laziness as the default was the right choice...


> - Immutability. You have to minimize state whatever the program you write, be it language-enforced or not.

> - Purity: same story.

I agree you can write decent code in any language, but the point is which guarantees your language gives you about the code other people wrote. Which is most of the code, usually. Your average imperative language gives you far fewer guarantees than Haskell. Even with your own code, you can achieve purity and immutability in Java or C++ if you want to, but you are mostly fighting against the language.

> - Laziness: never needed it, I think SPJ said the next Haskell would be strict.

On the other hand, Wadler himself in "Why Functional Programming Matters" argues that laziness is essential to achieve modularity and composability.


> Even with your own code, you can achieve purity and immutability in Java or C++ if you want to, but you are mostly fighting against the language.

I think purity works very nicely in D, using D's definition of purity.


> On the other hand, Wadler himself in "Why Functional Programming Matters" argues that laziness is essential to achieve modularity and composability.

I just read the part about it, I agree that lazy computation is a powerful tool for modularity and composability.

But it's nothing specific to FP languages. Python and Nimrod do that through generators (yield keyword), D through "ranges" (no language support).

Also not all algorithms you would want to write only once are easily expressible in a lazy way (eg. push parsers).
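
For comparison, the lazy-list version of the generator pattern in Haskell; the consumer alone decides how much of the producer actually runs (a tiny sketch):

```haskell
-- Producer: a conceptually infinite list; nothing is computed yet.
naturals :: [Integer]
naturals = [0 ..]

-- Transformations compose without materializing intermediate lists.
evenSquares :: [Integer]
evenSquares = filter even (map (^ 2) naturals)

-- Consumer: demand drives evaluation, like a generator's next().
firstFew :: [Integer]
firstFew = take 4 evenSquares   -- [0,4,16,36]
```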


Yes, fully agreed, it's not specific to FP languages and it's not always easy to use.


I'm pretty convinced that the holy grail is one where either laziness or strictness is a tracked effect and both evaluation schemes are used as optimal.


That's really old so I actually looked at it (again). Here are the criteria that Wadler mentions that held back adoption.

Things that Haskell used to be weak on but now has good support for:

FFI - yes
Libraries - yes
Portability - yes
Ease of installation - yes
Packagability - yes
Tools - yes
Training - available
Performance - yes

(non)Reasons why it doesn't get adopted. Still apply.

Popularity - no, still not popular
They don’t get it - yes, they still don't get it
Killer App - none that I can see (darcs?)


Haskell does still have major "ease of installation" problems in my experience. On OS X I had to install GCC and change a config file to get packages compiling correctly. [1]

Once I had that working came Cabal issues. "Cabal hell" has been ameliorated by sandboxes, but to use these you need to update the version of cabal-install that comes with Haskell Platform (I think they're close to updating Haskell Platform though). But when you update cabal-install the built binary isn't in your PATH for many users [2].

Once you get sandboxes working, you're in a good position to install packages. But because you're starting fresh, installing a larger package like Yesod takes me around ~2 hours (iirc).

Even when you have all that working, adding new packages can be hairy because of the frequent conflicts between packages.

I've distilled all this into a few sentences, but installation issues on Haskell have caused many hours of frustration for me (far more so than, say, Ruby).

I think many of these problems are fixed by using http://ghcformacosx.github.io/, but that project is 2 months old so wasn't an option until recently. Recent advancements in Cabal like sandboxes and version freezing (1.20) are dramatically improving this situation so I'm optimistic for future users of Haskell.

[1] http://stackoverflow.com/a/21285413/1176156 [2] http://stackoverflow.com/questions/14918251/have-i-upgraded-...


Setting up Haskell is actually pretty well understood nowadays thanks to tons of people going through the flow and posting the latest "how to" here: https://github.com/bitemyapp/learnhaskell . For Ubuntu for example, it's not much harder than installing a few packages from a PPA and adding a line to your $PATH

Cabal does OK for the most part nowadays, but it's definitely still a challenge with the monster projects like Yesod. Supposedly that's a work in progress for now.


>Haskell does still have major "ease of installation" problems in my experience.

Really? I literally just did

   brew install haskell-platform
   cabal update
   cabal install cabal-install
and I'm running the second-to-latest major release of GHC and the latest release of cabal.


Yes, really. Reference the SO post linked to; there are numerous people who had issues updating cabal-install. I've seen them referenced in /r/haskell and in Haskell documentation. The homebrew installation of haskell-platform may resolve some of these issues; I'm not familiar with it.

Additionally, you're working with the knowledge that once you install Haskell Platform, you need to update cabal-install (This is not something a newcomer would expect given that they just downloaded cabal via Haskell Platform).

Just getting the Platform installed is misleading as well; there's still the issue of dealing with package conflicts which happens frequently in Haskell and is non-trivial to deal with for a newcomer.

Finally, using Haskell Platform at all might cause troubles; it's explicitly recommended against by the popular Getting Started guide by bitemyapp https://github.com/bitemyapp/learnhaskell because it uses the global package database.


>Additionally, you're working with the knowledge that once you install Haskell Platform, you need to update cabal-install

Well, you don't really need to. It will work fine even if you don't (and I think it notifies you to upgrade if an upgrade is available). You don't even need to use cabal to get started.

> there's still the issue of dealing with package conflicts which happens frequently in Haskell and is non-trivial to deal with for a newcomer.

How strange. I've done quite a few Haskell projects and never run into this. I didn't start using Cabal sandboxes until recently.


"Killer App - none that I can see (darcs?)"

Maybe pandoc.


Historically I think Haskell will be very important as the substrate on which a lot of the next generation of dependently-typed languages (Idris, Agda) are bootstrapped. It's a niche use admittedly, but it's hard to overstate how great Haskell is for writing modern functional compilers.


Certainly possible. There's some great stuff going on there.


I actually think that clckwrks[0] cms could become it. Having a cms that is less buggy than those commonly used these days that can guarantee plugins being correct would be pretty huge.

0: http://www.clckwrks.com


Yeah, but both of those are command line tools. You don't have to know Haskell in order to use them. You don't even need to know that they are written in Haskell if your distro provides binary packages.


True, though pandoc exists as a library as well. Xmonad? Any of the web frameworks? That new rest framework? Shelly?



