Meh, this is unfortunate advice because most people already err in this direction too much. Almost everybody favors stable technologies; the exceptions are people with a genuine interest in the novel. They are disproportionately vocal, so judging from the internet there are a lot of them, but in reality they're a minority.
This makes sense psychologically. The cost of spending an extra day setting up a build system? It's obvious, hard to miss and annoying. A 10% benefit to productivity thanks to a superior programming model? Sure, that'll add up to a day in just two weeks of use, but most people wouldn't notice it at all! This is doubly true if the benefit is delayed--for example, if you win out on less time spent on maintenance and debugging.
I've seen far more people ignore great technologies for fairly limited and superficial reasons than I've seen use superior technology in the face of true practical concerns. Too often it's something like "well, yes, it's much better, but it doesn't have a JSON parser built in". Of course, writing a JSON parser should take less than a day, so the cost is essentially negligible.
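For a sense of scale, here is a minimal recursive-descent JSON parser sketch (in Python, purely as an illustration). It is deliberately incomplete: a robust parser would also need full escape and Unicode handling, a strict number grammar, and real error reporting.

```python
# Minimal recursive-descent JSON parser -- an illustrative sketch only.
# Escape handling and error reporting are deliberately minimal.

def parse_json(text):
    value, i = _parse_value(text, _skip_ws(text, 0))
    i = _skip_ws(text, i)
    if i != len(text):
        raise ValueError("trailing garbage at %d" % i)
    return value

def _skip_ws(s, i):
    while i < len(s) and s[i] in " \t\r\n":
        i += 1
    return i

def _parse_value(s, i):
    c = s[i]
    if c == '{':
        return _parse_object(s, i)
    if c == '[':
        return _parse_array(s, i)
    if c == '"':
        return _parse_string(s, i)
    if s.startswith("true", i):
        return True, i + 4
    if s.startswith("false", i):
        return False, i + 5
    if s.startswith("null", i):
        return None, i + 4
    return _parse_number(s, i)

def _parse_object(s, i):
    obj, i = {}, _skip_ws(s, i + 1)   # skip '{'
    if s[i] == '}':
        return obj, i + 1
    while True:
        key, i = _parse_string(s, i)
        i = _skip_ws(s, i)
        assert s[i] == ':', "expected ':'"
        val, i = _parse_value(s, _skip_ws(s, i + 1))
        obj[key] = val
        i = _skip_ws(s, i)
        if s[i] == '}':
            return obj, i + 1
        assert s[i] == ',', "expected ','"
        i = _skip_ws(s, i + 1)

def _parse_array(s, i):
    arr, i = [], _skip_ws(s, i + 1)   # skip '['
    if s[i] == ']':
        return arr, i + 1
    while True:
        val, i = _parse_value(s, i)
        arr.append(val)
        i = _skip_ws(s, i)
        if s[i] == ']':
            return arr, i + 1
        assert s[i] == ',', "expected ','"
        i = _skip_ws(s, i + 1)

def _parse_string(s, i):
    i += 1                            # skip opening quote
    out = []
    while s[i] != '"':
        if s[i] == '\\':              # only the common escapes, for the sketch
            out.append({'n': '\n', 't': '\t', '"': '"',
                        '\\': '\\', '/': '/'}[s[i + 1]])
            i += 2
        else:
            out.append(s[i])
            i += 1
    return ''.join(out), i + 1

def _parse_number(s, i):
    j = i
    while j < len(s) and s[j] in "-+.eE0123456789":
        j += 1
    text = s[i:j]
    return (float(text) if any(c in text for c in ".eE") else int(text)), j
```

Even this toy version makes the "less than a day" estimate plausible; the extra time goes into the robustness the sketch omits.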
That example, coincidentally, is taken from a talk by Bryan O'Sullivan about running a startup with Haskell. (I don't have a link to it handy, I'm afraid.) Essentially, he found that the productivity benefits outweighed having to write some basic libraries (including a JSON one) himself. And he shared those libraries with the community, so everyone is now better off.
So my advice is completely the opposite: don't be distracted by superior polish.
Counter-example: A few years back, I was looking for a way to do user authentication for HTTP web services. I decided that HTTP Basic/Digest weren't cryptographically strong enough, and OAuth was the new hotness, so we were going to use that. There was a nice Apache-licensed library, but it wasn't laid out right for our purposes, so I spent two weeks rewriting it [maintenance burden #1]. Then I had to integrate it into our (terribly ill-documented open-source) web services framework [maintenance burden #2], and then we had to make it work with the other OAuth libraries [maintenance burden #3]. And of course it was all new, so there was no documentation anywhere on how to make it work. If I'd been working with better-established tech, I wouldn't have wasted those weeks of development work, and that project wouldn't still be using OAuth 1.1. There were no productivity benefits to the new tech (quite the opposite, in fact), and the technical benefits were likely unnecessary.
> Of course, writing a JSON parser should take less than a day, so the cost is essentially negligible.
Writing a JSON parser that parses my examples correctly is a couple hours at most, yes. Writing a JSON parser that I am confident is correct and robust and efficient takes somewhat longer.
Haskell, and BoS, I think, are both special cases. First, Haskell is actually pretty old. There have been changes, of course, but the core language is Haskell 98. IIRC, Bryan O'Sullivan wrote a high-performance parsing library, attoparsec.
In general, I think I agree with the article. In this specific case, it's a fantastic example of leveraging your strengths to do something amazing.
It's really more of a case of porting a JSON parser because many working examples of JSON parsers already exist, as do complete sets of test data that your parser must be able to handle. There's really not a lot of original thought that has to go into recreating something that's already well-established elsewhere.
Fair point. There can still be a lot of work to ensure robustness and efficiency in the new setting (especially so if the execution model is particularly different).
No Meh: you and the blogger are substantially in accord on your optimal tech stack:

> someone would inevitably point out a language that gracefully handled that particular issue in a much nicer way than Haskell, ML, or Erlang. Except that the suggested language was an in-progress research project.
Pretty bad advice imo. History is littered with examples of tech companies that used the wrong programming technology to build their app, then had to rewrite or otherwise go through significant pain when the failings became apparent at scale.
Haskell is a bad choice for business simply because you limit yourself to a very small talent pool. On the other hand, the beauty of a small startup is that you can do whatever the fuck you want. If O'Sullivan knew a few really good Haskell programmers, it may even have been the correct business decision in his case.
But if I were making a web back-end meant to scale, and I knew very good Java programmers and very good Haskell programmers, it would make more sense to go with the tried and true (even though it's not as fun).
I would also take most "language X is better than language Y" claims with huge grains of salt. It's much like the "my phone is better than your phone" debate: the people making the noise have spent thousands of hours deeply learning language X and tend to be heavily biased.
Interesting, but how many are good enough that they make up for the handicaps of poor library support and whatever other issues people have when they use Haskell for 'real' work?
And how many are willing to stay at your company when it becomes bigger and more boring?
Many Haskell programmers come from academia; how many of them are willing to do boring business logic?
Not saying it's always the wrong choice, but I think there's pretty sound reasoning for avoiding esoteric languages.
Actually, I've found that Haskell libraries are absolutely fantastic. For me, its ecosystem is superior to those of other languages I've used, and I find its libraries tend to be better designed and easier to pick up and use.
> And how many are willing to stay at your company when it becomes bigger and more boring?
I feel like this is an argument that could apply to any set of programmers. I'm not sure I see how it's relevant to Haskellers in particular.
> Many Haskell programmers come from academia, how many are willing to do boring business logic?
I don't know, but it's almost certainly non-zero. I know of a few Haskellers with PhDs working in startups.
Maybe the Haskell libraries that exist are fantastic, but a non-esoteric language like Java will also have really solid libraries and toolchains (likely more solid), plus lots more libraries available, so you spend less time reinventing the wheel.
There are very few Haskell programmers interested in doing boring work. At some point, though, boring work is necessary for a business, and at that point you want to be able to hire someone to do it, which will be very difficult if the codebase is in Haskell.
I've found this to be totally the opposite: my time in Java is much less productive. There are many libraries in Haskell which aren't even possible to write in Java (automatic differentiation, parser combinators, lenses, etc.)
> very few haskell programmers that are interested in doing boring work
Nobody is interested in doing boring work, or it would be interesting work. Nothing to do with Haskell programmers.
Hiring (good) Haskellers is easy because there's a demand for jobs. It's as simple as that.
Turing-completeness is a red herring. All it talks about is what a machine can compute, not how. This is an important distinction because libraries can be used for things like meta-programming.
It's completely possible to have a Turing-complete method of computation that does not allow the equivalent of self-modification, for example. And then you won't be able to write a library for self-modifying code!
You can't retrofit Java with a macro system without writing a preprocessor, for example. So it is impossible to have a whole bunch of useful constructs in a library.
Essentially, as soon as you consider "self-reference"--that is, programs that depend on details of the machine itself--Turing-completeness stops mattering. And this is still important; programmers care about more than just what the program does, after all!
It's a red herring because the original claim was not "Haskell can compute functions Java cannot" but rather "there are many libraries in Haskell which aren't even possible to write in Java".
The point is that libraries do more than just compute stuff. If all we cared about was what you could compute, there are plenty of interesting libraries that wouldn't count for anything! Any library exporting new types of control flow, for example.
The argument is that there are libraries you can write in Haskell that you can't write in Java. And since meta-programming is something a library can do, and it's possible in Haskell but not in Java, this is accurate. Similarly, since enabling self-modification is something a library can do, possible in one model but not another, it isn't a red herring.
Hmm, how can I make my point clearer? Let me give you a very specific example then: a library with new syntax sugar would be impossible to write in Java. Sure, this does not allow you to compute anything different, but it's still very important and has a practical effect. Let's take a very trivial library of this sort: a library only providing an "unless" statement like Perl's. Possible in some languages, not possible in others.
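To illustrate the gap (in Python, purely as an illustration, with a made-up `unless` helper): in a strict language without macros or lazy arguments, the closest a library can get to a new `unless` statement is a plain function that takes the body as an explicit thunk. In Haskell the same thing works directly, because arguments are evaluated lazily.

```python
# Sketch: an "unless" control structure as a plain library function.
# In a strict language the caller must wrap the body in a lambda (a thunk),
# so this is an approximation of new syntax, not the real thing.

def unless(condition, body):
    """Run body() only when the condition is false."""
    if not condition:
        return body()
    return None

log = []
unless(1 > 2, lambda: log.append("runs: condition was false"))
unless(2 > 1, lambda: log.append("skipped: condition was true"))
```

The lambda wrapper is exactly the ceremony that laziness or macros remove, which is why some languages can ship control structures as ordinary library functions and others can't.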
Some of the examples in the original post also fall into this category. The lenses library, for example, just allows you to write composable accessors and mutators. It does not compute anything new. But it's still very useful because it makes your code simpler and gives you new ways to express yourself. You could not write anything like it in Java.
> It's a red herring because the original claim was not "Haskell can compute function Java cannot" but rather "there are many libraries in Haskell which aren't even possible to write in Java".
That's false. It's as false as the other claims.
> The argument is that there are libraries you can write in Haskell that you can't write in Java.
Still false.
> Similarly, since self-modification is something a library can do, possible on one model but not another, it isn't a red-herring.
It's a red herring because libraries do not ever self-modify. Not. Ever.
False, is it? I'm quoting the opening comment directly: "There are many libraries in Haskell which aren't even possible to write in Java". So that is the initial argument. It says nothing about computable functions and everything about possible libraries.
Or do you mean the argument itself is false? Well, a library providing new control structures certainly fits that! But yes, it's as false as my other claims--not false at all.
There are Turing-complete languages where you cannot add new control structures. (Or even ones where the concept of "control structure" does not apply.) So a library providing control structures would be impossible. And Haskell has a slew of libraries for control flow: everything in the Control.* namespace.
So both languages are Turing-complete and equally powerful, and yet certain libraries can only be written in one. This only makes sense for self-referential things, where the library somehow affects the language.
As far as self-modifying code goes--it's not that the library modifies itself; rather, the library gives you tools for writing self-modifying code. If your language does not have this capability, such a library could not exist. But the language would still be Turing-complete!
The post seems to have cleared up nothing. Ah well, c'est la vie.
Nope! The Turing-completeness of Haskell and Java means they can both compute the same functions. However, there are certain things which are just not possible to express directly in some languages. Possible to compute, but not express directly.
Instead of Java, consider Brainfuck. It's a Turing-complete language and certainly has no notion of what a function is. There's simply no way in Brainfuck to express that idea directly.
Another example, this time with Java and Haskell. In Haskell, I can write a function to add any two numbers (regardless of their type) like this:
add' :: Num a => a -> a -> a
add' x y = x + y
As far as I'm aware in Java, there's no way to write a function with a signature that says "Take two numeric arguments of any type". Certainly pre-generics, this is impossible.
> This isn't really relevant to the thread of discussion.
It was, before I got carried away: I was going to say that Turing-completeness does not translate to practice. And whoever chooses a language according to its Turing-completeness is missing the big picture.
Java can produce any result that Haskell can produce. End of story, full stop. All non-broken computer languages are Turing-complete.
This is not to suggest that someone would want to use Java for anything sufficiently sophisticated, unless one is a masochist, but then, that wasn't your claim.
> As far as I'm aware in Java, there's no way to write a function with a signature that says "Take two numeric arguments of any type". Certainly pre-generics, this is impossible.
A separate claim with no meaningful relation to the original one. This claim is that it would be inconvenient, not impossible.
> Java can produce any result that Haskell can produce. End of story, full stop.
Yes, I'm not disputing this. I'm fully in agreement that all Turing-complete languages can compute all Turing-computable computations. I've never been trying to argue against that. I'm sorry if I haven't explained myself clearly enough; let me give it another shot.
In Haskell, I can write a function operating on numbers like this:
f x = x**2 + x + 1
The AD library means I can do this:
g = diff f
and g will behave as if I'd declared
g x = 2 * x + 1
As far as I'm aware, it's not possible to write this code in Java: you can't get the differentiated version of a Java function this way.
You can compute the same results, of course. You just can't write a library that will accept a function and give back its differentiation.
You can write a function that takes some representation of a mathematical function, for example as a string, and builds a function from that to compute the same results, but you can't pass in an actual Java function, so you can't write the library. You can write a library which differentiates string representations of functions, but you can't write one that differentiates first-class Java functions (well, objects with methods on them, but you get the idea, I hope).
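For anyone curious how such a library can work at all: forward-mode automatic differentiation with dual numbers can be sketched in any language whose arithmetic operators can be overloaded. Here is an illustrative Python approximation (not the Haskell `ad` library itself) of a `diff` that operates on a first-class function; operator overloading on a generic function body is exactly the trick that Java's numeric types block.

```python
# Sketch of forward-mode automatic differentiation with dual numbers.
# Any function built from the overloaded +, *, ** below can be
# differentiated without touching its source code.

class Dual:
    def __init__(self, value, deriv):
        self.value, self.deriv = value, deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other, 0)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other, 0)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

    __rmul__ = __mul__

    def __pow__(self, n):  # non-negative integer powers only, for the sketch
        result = Dual(1, 0)
        for _ in range(n):
            result = result * self
        return result

def diff(f):
    """Return a function computing f'(x), for f built from +, *, **."""
    return lambda x: f(Dual(x, 1)).deriv

f = lambda x: x ** 2 + x + 1   # mirrors the thread's f x = x**2 + x + 1
g = diff(f)                    # g behaves like x -> 2*x + 1
```

The key point for the thread: `diff` receives `f` itself, a first-class function, not a string representation of it.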
> As far as I'm aware, it's not possible to write this code in Java- you can't get the differentiated version of a Java function this way.
No. The algorithms for symbolic differentiation are well-known, they aren't unique to Haskell. They happen not to be present in any Java library AFAIK, but that's because no one is sufficiently masochistic to code them there.
> You can write a library which differentiates string-representations-of-functions, but you can't write one that differentiates first-class-Java-functions ...
I think we're getting into a moot area now. If the end result is symbolic differentiation, the specifics become irrelevant. And, since Java bytecode can be converted into a textual representation, it can also be symbolically differentiated by conventional means.
Also, we've left the original topic, which was that Haskell has properties not present in other languages and/or outcomes not possible in other languages.
In a larger sense, wouldn't it be more praiseworthy to say that Haskell does exactly what other languages do, but much better? That time spent programming in Haskell is more productive when measured by bug-free lines of code and concise expressions? Or that, when parallel processing finally arrives in full force, functional programming will no longer be optional?
No; again, I've been saying the same thing all along. In fact I'm only using Haskell as an example. There are things which cannot be expressed in Haskell, too; for example, dependently typed programs as you might find in Agda or Idris.
> In a larger sense, wouldn't it be more praiseworthy to say that Haskell does exactly what other languages do, but much better?
This is a separate conversation entirely. My aim is not to "praise Haskell". I have used Haskell and Java only as examples.
There are completely reasonable languages (distinctly non-broken) that are not Turing complete like Coq and Agda. You can still use them to write interesting, non-trivial programs! In fact, their limitations aren't immediately obvious--the only interesting program it's obvious they can't write is an interpreter for themselves or another programming language at least as powerful.
> Java can produce any result that Haskell can produce. End of story, full stop.
But can average Java programmers produce any result that average Haskell programmers can produce? I'd bet they can't. Haskell programmers are self-selecting: if you can grasp Haskell and be productive in it then you are an elite programmer. Not so if you can grasp Java.
> if you can grasp Haskell and be productive in it then you are an elite programmer.
Somehow I knew this was where it was going.
> Not so if you can grasp Java.
Compared to a cabbage or a turnip? I've been told you can't teach calculus to a horse. Elitism is like an onion ... and feel free to imagine that tedious conversation.
Rewrites happen even with PHP, once it becomes clear that the product is creaky, slow, and no one good is willing to maintain it. (To be clear, I'm sure PHP is appropriate for many situations, but I have seen people trying to dig themselves out of a slow PHP framework, and a better PHP framework isn't evaluated to be as worthwhile as moving to a different platform.) As much as some managerial types try to deny it, tools end up having an effect on the product. The skilled people they hire are often right when they tell them that the tools aren't appropriate for the job--at least if they're experienced and provide a reasonable, fair analysis.
Tools present tradeoffs. Advantages and disadvantages. Haskell can be very problematic for a business, unless you know how to deal with its disadvantages, or aren't affected by them.
Jeff Atwood just blows me away with the way he combines information, entertainment, and insight in his articles. What I like most about this one is that he brings in Steve Martin's take on good vs. great, thus showing us that the whole issue is by no means limited to the world of software development. (Just like the poster of the parent of this reply said, this is to reinforce and complement the original submission.)
I'm not a programmer, but I have witnessed, and mediated, a lot of these arguments. Some are very esoteric while others are pretty practical. But some programmers forget that when you're trying to implement a "large" project (relatively large user base and relatively long expected life), the programming language is just a tool. You can tinker forever with a technologically superior language, and enjoy the intellectual challenge, and never implement your project. Or you can settle for a less sophisticated language and implement your project in a reasonable amount of time, though not as elegantly as you might like.
"Or you can settle for a less sophisticated language and implement your project in a reasonable amount of time, though not as elegantly as you might like."
And be stuck maintaining horribly inelegant 10-year-old systems written in a "stable" language that was obsolete 5 years ago. A system where pushing a change in one module breaks 3 others.
Of course, this is the preferred option. But when the arguments ensue, it's usually because the developers are saying, "I can't give you the project in the superior language in a reasonable amount of time." So if a choice must be made between the superior language and a reasonable amount of time, project managers usually choose the reasonable amount of time.
> So if a choice must be made between superior language and reasonable amount of time, project managers usually choose reasonable amount of time.
Sure. Of course project managers usually choose deadlines over people, and then, when talented people burn out and leave, they wonder why talent is so difficult to keep. We know what the safe choice is: hire average programmers who see programming languages as interchangeable, and are themselves interchangeable. No worries about competitors, for they are doing the same. Nothing wrong with that, but please let's not pretend things are any different.
If a superior language cannot deliver the project as quickly, and to the same level of quality, as an inferior language, then you need to examine how the words "inferior" and "superior" describe the relationship between the languages.
Talented programmers prefer to work with tools they feel productive with. While it is true that inexperienced programmers lack pragmatism when it comes to choosing their tools, it is likewise true that non-programmers who think about themselves as pragmatic are often just clueless. Hopefully you don't manage people, because you don't understand them.
Times change.
Interestingly enough, "fringe" languages were given new life by the introduction of LLVM, and Haskell is no longer a research project, so there are industry job offers there too.
And we know what happened to the "winners" of the dynamic-language wave: JavaScript, Python and Ruby developers are quite often paid higher wages than old-style C programmers.
>Haskell is no longer research project, so there are industry job offers there too.
Well, there are far fewer Haskell job offers than Haskell programmers.
>And we know what happened with "winners" within a wave of dynamic languages: JavaScript, Python and Ruby developers are quite often paid higher wages, than old-style C programmers.
Are they? I'm not so sure a competent C programmer earns less than a JavaScript/Python/Ruby guy.
As much as I like LLVM (I sometimes work on it for a living) I don't think you can ascribe the rise in popularity of "fringe" languages to LLVM. Are there even any languages whose rise to prominence occurred with an LLVM implementation?
Depends upon what you mean by "rise to prominence." It is not widely used (yet), but [Rust](http://rust-lang.org) is a former research project that is now being developed and used by a major tech company (Mozilla) to implement a next-generation web browser. Its main (only) implementation is built on LLVM.
This is sort of like pointing out the obvious, but that's just a change in the market, and has comparatively little to do with the languages themselves.
There's no reason C programmers can't easily transition into web development. And assuming that the market for C development will remain static is just illogical (likewise for the currently in-demand languages).
Developers are developers IMHO. Languages are tools. What tool a developer uses is incidental and inevitably subject to change.
> JavaScript, Python and Ruby developers are quite often paid higher wages
Based on what I've seen, good C++ programmers working in areas like finance and aerospace still command higher wages than most JavaScript or Ruby devs.
The point that the OP is making is not to avoid new technologies, but to avoid technology that isn't ready for prime time (and higher-ed efforts are a great example of this). That is "ok" advice in many cases, but don't avoid things just because they are new. Few get great jobs and salaries for failing to take risks.
I believe the pace of change has actually increased recently. As Hague himself has noted, we have a lot of computing power to throw around now. 30 years ago, serious usage of academic languages was still restricted to "big iron" environments; today you can expect any language to do some useful work even on a smartphone. As a result we can make "softerware" that benchmarks less well, but is massively cheaper to make and maintain.
As well, there's been a cultural shift related to the hardware changes. In webdev it's become routine in some circles to make polyglot systems, or to cross-compile between different languages or runtimes. DSLs are increasingly slipping into "everyday" environments. There's a mindshare war going on there, but architecting against one language, one environment, and one toolchain is increasingly seen as the "old way". Similarly, developers who don't know how to architect towards performance increasingly get away with it - it goes hand in hand with the idea of software becoming softer, since that process moves the optimization burden towards the people making runtimes or libraries, rather than the app dev.
All of this favors a faster transition from academia.
> There's a mindshare war going on there, but architecting against one language, one environment, and one toolchain is increasingly seen as the "old way".
That old way is there for a reason: maintenance. If you write throw-away software (and most websites, especially the front-end stuff, are throw-away) then this is not a problem. But if you're supposed to support your creation for the next 15 years, then having a stable toolset really pays off.
The smaller the project and the shorter lived it is the more corners you can cut.
No, that's still making the assumption that all worthwhile solutions are balls of mud, leading to "freeze it in time" as the only sane option for preservation. That's cultural, not inherent to the technology.
If every part of the stack is tiny and connected via common, documented protocols you have ample room for maintenance. That's easily validated by the Internet as a whole (and not the Web, which was burdened by its early design). A present-day "throw-away" that can do more than in the past is, more likely than not, leveraging a bigger and more decomposed software ecosystem, where bits of infrastructure can get remixed more readily. Pooh-poohing the results by saying "they solved a trivial problem" dismisses this underlying trend - tools and services that are currently "weekend hack" or "prototype only" have a habit of turning into tomorrow's "production-grade" if they gain substantial adoption.
There's no particular reason we can't achieve Internet-scale maintainability for all problems, other than it taking time to build protocols of the right size/complexity for every problem domain.
Ah, Modula-2. I used it in one of my undergrad CS classes, ca. 1986, and never touched it anywhere else since. Back then every semester was a different language: we used at least C, PL/1, Pascal, Modula-2, Scheme, and assembler. The language itself was not covered in the lectures; you were supposed to figure that out on your own time.
A good reminder of the engineering virtue of using tech which is commonly used and thus well-supported (by the creator, by third parties, by Stack Overflow, etc.)
It was one of the first languages (although not the first) that had a proper module system with separate compilation, implementation hiding, module nesting, and qualified exports / imports. It stops short of adding features like parameterized modules that confer significant problems for both theory and implementation.
In typical Wirth fashion it was also quite minimal; you could write a complete Modula-2 compiler as a series of undergrad assignments.
It's kind of sad that proper module systems are not a part of more programming languages, but as the adage in the field goes, the languages most used to write modular software are the ones least suited to doing so.
Modules can be implemented in any programming language, if the programmer is disciplined enough[1]. However, Modula-2 originated modules as language constructs, not only for grouping & namespacing, but also as compilation units.
Very important to repeat that namespacing bit; Modula-2 modules allowed for a way to group & scope names. Other block structured languages allowed for name shadowing within a block, but with modules, one has access to shadowed names in enclosing blocks.
At the implementation level, you can think of pre-module block-structured languages as having a stack of hash tables for the environment/symbol table. As each name is declared, a fresh entry is created in the hash table, and the initialization value for the variable is stored as the value for the key. When processing enters a new scope, say a BLOCK, BEGIN, LET, new function declaration or similar name-hiding construct, a fresh hash table is created and pushed. When the block is exited, the stack is popped and we return to the previous definitions.
Except the environment stack is actually implemented as a list, to allow non-shadowed names to be available without popping. The compiler can walk up and down the stack/list to look up identifiers, often assigning a stack-depth number to each nested environment. Say, a top-level global variable might actually be internally represented as {env: 0, name: x, val: 3.0, type: float}.
Programmers don't have access to that numeric environment ID. Once a variable is shadowed, we lose all access to it unless we keep a copy, and even a copy is useless in the presence of side effects.
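That stack-of-hash-tables model can be sketched in a few lines (Python here, purely as an illustration; names and values are made up):

```python
# Sketch: lexical scoping as a stack (list) of hash tables.
# Inner scopes shadow outer ones; once a name is shadowed, the outer
# binding is unreachable by name until the inner scope is popped.

class Env:
    def __init__(self):
        self.scopes = [{}]          # index 0 is the global scope

    def push(self):                 # entering a BLOCK / BEGIN / LET
        self.scopes.append({})

    def pop(self):                  # leaving the block
        self.scopes.pop()

    def define(self, name, value):
        self.scopes[-1][name] = value

    def lookup(self, name):
        # walk from innermost to outermost scope
        for scope in reversed(self.scopes):
            if name in scope:
                return scope[name]
        raise NameError(name)

env = Env()
env.define("x", 3.0)      # global x
env.push()
env.define("x", 99.0)     # shadows the global x
inner = env.lookup("x")   # 99.0: the shadowed global is inaccessible
env.pop()
outer = env.lookup("x")   # 3.0 again after the block exits
```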
Modules change this in one important way. The hash-tables have names! They are not just anonymous values to be pushed around (eh? ;-) but named entities that we can look up. Why settle for environment IDs when you have glorious, descriptive, human readable names?
If you substitute a graph for the environment stack, you get yourself a more interesting structure. One that allows for module composition and structure sharing, so that two or more environments can have their common bits factored out.
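To make the "named hash tables" idea concrete, here is a small sketch (Python, with made-up module names and values) of how a qualified name can still reach a binding that plain lookup would consider shadowed:

```python
# Sketch: modules as *named* environments. Unlike anonymous block scopes,
# a named table can still be reached by a qualified name even when a
# later binding shadows the plain name.

modules = {
    "Math":  {"pi": 3.14159, "x": 1.0},
    "Local": {"x": 42.0},            # hypothetical module shadowing "x"
}

def lookup(name, open_order):
    """Resolve a plain name against modules opened in order (last wins)."""
    for module in reversed(open_order):
        if name in modules[module]:
            return modules[module][name]
    raise NameError(name)

def qualified(module, name):
    """Qualified lookup sidesteps shadowing entirely."""
    return modules[module][name]

plain = lookup("x", ["Math", "Local"])   # 42.0: Local shadows Math
hidden = qualified("Math", "x")          # 1.0: still reachable by name
```

This is the payoff of naming the tables: the "glorious, descriptive, human readable names" give back access that anonymous shadowing takes away.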
To allow for separate compilation of modules, we need to know in advance what services they offer (i.e., what keys are in their environment table). If we can find that out quickly, without processing the module itself, we can move along faster. This is very important. A common pattern in language compilation is name resolution. Some languages force programmers to declare all names before use. Others are more forgiving, and try to resolve the names themselves, often by processing the input code in multiple passes, and only then signaling errors for still-unbound names. For modules, we can help the compiler discover names by abstracting out the keys ahead of time: break the module definition into a signature declaration and the actual module body, known as the structure. The signature is a compact, high-level view of the map that tells code processors and other modules what names they can expect from the module. (A primitive form of signature/structure separation is C and C++'s header files, but those have nothing to offer us, intellectually.)
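The signature/structure split can be sketched very simply (Python, with hypothetical names and type strings): a signature is just the promised set of keys, checkable without ever processing the module body.

```python
# Sketch: a "signature" as the set of names a module promises, which a
# compiler can check a module body (the "structure") against without
# processing any other module. Names and type strings are made up.

signature = {"parse": "str -> dict", "dump": "dict -> str"}

def missing_names(structure, signature):
    """Names the signature promises but the structure fails to bind."""
    return [name for name in signature if name not in structure]

structure = {"parse": lambda s: {}, "dump": lambda d: "{}"}
ok = missing_names(structure, signature)             # []: structure matches
broken = missing_names({"parse": None}, signature)   # "dump" is unbound
```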
Like any form of cooperation, contractual agreements between modules will have to be in place. We need a certificate of authenticity of sorts. The addition of types to module definition is a pillar of modern software engineering.
Modules make software serious.
For the full story on modules, see Standard ML.
--
[1] David L. Parnas (1972). "On the Criteria to Be Used in Decomposing Systems into Modules."
Can you give an example (or a link to one) of real code or pseudo-code where there is a reference in a sub-block (sub-module) to a shadowed variable in that sub-block and also a reference to the same variable in the parent scope by way of named hash tables? I'm just curious to see how this is done in practice, and to see some real examples of where a module system that permits access to a shadowed variable proves more useful, flexible and generally more robust.
Also, how does Modula-2 (or Standard ML's) module system enforce contracts beyond what a C/C++ header file offers us? Why does C/C++'s primitive system offer us nothing intellectually?
Lastly, if you were to try adding a more robust module system to a language like JavaScript, how would you go about doing it? (assuming such a thing is possible).