Advanced languages like Haskell will never reach Prime Time, in the usual sense of the term, because they're too challenging. It's similar to the "Python Paradox," except that Python is easy to learn. There's a strange appeal (one we can all still relate to) that has led people to prefer simple junk like ColdFusion over better-designed, more useful languages over the years.
Object orientation eventually "won" in the market. Unfortunately, it was adopted by the general programming public in the form of Java, not Smalltalk.
It seems like functional programming is similarly "winning" as an idea. Anyone who thinks about programming much agrees that a language where you can't pass a function as a value, or construct functions at runtime, is pretty broken.
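To make that concrete, here's a tiny Haskell sketch (names mine) of passing a function as a value and constructing a new one at runtime:

    -- Functions are ordinary values: pass them around, build new ones at runtime.
    applyTwice :: (a -> a) -> a -> a
    applyTwice f = f . f

    addN :: Int -> (Int -> Int)   -- returns a freshly constructed function
    addN n = \x -> x + n

    main :: IO ()
    main = print (applyTwice (addN 3) 10)   -- 16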
The question then, is whether Haskell is the new Smalltalk of functional programming, and what language will play the role of the functional programming Java.
I think part of Lisp's success was that academics liked it, so when they did clever stuff, it was related to Lisp. Today, it seems that academics like Haskell (is it true?), and so when they think of clever stuff, it will be related to Haskell.
The "Java of FP" could be C# or F# (although Microsoft isn't willing to go multi-platform, so that might stop it becoming "successful", by many definitions).
But honestly, I don't think there is a powerful enough vector available to carry FP. C had Unix. Java had the net. A novel way to think about the next "Java of FP" is to think of what vector could matter, and then what FP language is well-placed to be carried by it.
Mobile phones are all I can think of at the moment, but no FP has come of it; the netbooks (which are becoming phones) are plain ol' PCs; and the iPhone runs Objective-C (doesn't it?). What's the next IT revolution?
Actually, time_management is right: the next IT revolution is clearly multi-core.
There's a counter-argument to multi-cores being the next IT revolution: although they seem to be the way to faster performance, faster performance isn't needed.
Data point one: the rise of the netbook. They have slower, single-core processors, and are fast enough for most of the mainstream stuff people want to do: web, email, word-processing. Netbooks are getting faster (and this is needed for some webapps e.g. that musical score webapp is a little sluggish on my eee PC). But what is more important in a netbook is weight and battery life - that is what is needed. Performance isn't needed.
Data point two: even in video games, the Wii console is the most successful of the present batch. It is unusually low-powered even for a single-core machine (and, as it happens, one of its competitors, the PS3, is multi-core). Performance isn't needed.
Data point three (an exception): some applications do need performance (weather simulation; Google server farms), but these are already multi-core, and have been for a long time (the cores used to be in separate computers, but the app was effectively multi-core).
I'm arguing that most of the mainstream market can't absorb faster performance (and the high-performance market has already solved the problem).
Ripe for Disruption?: It's common for whole industries to overshoot what is needed, in terms of the particular metric they have worked at improving for years. Their corporate organization, product development and marketing are based on improving that aspect, and selling that aspect. When the basis of competition shifts, you get a new set of leaders, because the old ones can't adapt.
I think Haskell is intellectually challenging in a way that isn't needed to get the job done. The one exception is for proving things (which so far is not practical - but it's cool, and might become practically useful, one day).
Don't get me wrong: I like the intellectual challenge, and I especially like the Haskell community - they are nice people, excited about what they're doing, very helpful, and just not interested in putting down other languages - and I think there is real beauty in Haskell. It's like a branch of mathematics.
Disclosure: I'm terrible at Haskell's intellectual challenge. It makes me feel stupid. I hate that.
Indeed; not all functional languages are alike. Haskell is not just functional, it's also lazy and has "weird" syntax. If functional programming is going to take off, I would expect to see it in the form of an eagerly-evaluated language with C-like syntax.
Maybe I'm getting Haskell-blind, but what's "weird" about the syntax? If I think about "weird" syntax, I think about Erlang or Objective-C, or Befunge for that matter. However, it seems that once you've spent a day or so with a language, the syntax weirdness tends to go away. (Still need to test that hypothesis with Erlang, though.)
Unlike some other features*, pattern matching isn't all that scary, though. "Oh, it's like a switch statement... but it can break things up and check inside... that's pretty cool." You can understand it in terms of other fairly commonplace constructs, and it's relatively clear how useful it is.
It could be presented a bit differently, but the underlying idea seems approachable.
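Something like this, say (a toy sketch of mine, but it shows the "switch that can look inside" flavour):

    -- Pattern matching as a switch that can also take the data apart:
    describe :: Maybe (Int, String) -> String
    describe Nothing          = "nothing at all"
    describe (Just (0, name)) = "zero, labelled " ++ name
    describe (Just (n, _))
      | n < 0                 = "a negative number"
      | otherwise             = "the positive number " ++ show n

    main :: IO ()
    main = mapM_ (putStrLn . describe)
      [Nothing, Just (0, "origin"), Just (-3, "x"), Just (7, "y")]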
* I'm thinking of monads, in particular; whether they actually are or whether it's just due to the way they've been presented is another question.
I started to understand monads when I compared the Maybe monad to the short-circuiting behavior of the Unix pipe. That doesn't mean my insight will work for anybody else, though.
Also, having masses of Haskell blog posts saying, "No, monads are actually not that hard, here's (an explanation that probably won't actually make all that much sense)." probably doesn't help to dispel the myth that Haskell is hard / impractical / whatever.
In understanding monads, I think Maybe and List are the best places to start, because they're familiar data structures. IO is conceptually familiar, but IO as an immutable value representing a series of actions is not.
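A small, hand-rolled illustration of that short-circuiting with Maybe (every name here is mine, not from the thread):

    import Text.Read (readMaybe)

    -- Each step may fail; a Nothing anywhere short-circuits the rest,
    -- much like a failing stage breaking a Unix pipeline.
    halveEven :: Int -> Maybe Int
    halveEven n
      | even n    = Just (n `div` 2)
      | otherwise = Nothing

    parseAndHalve :: String -> Maybe Int
    parseAndHalve s = do
      n <- readMaybe s   -- Nothing if the string isn't a number
      halveEven n        -- Nothing if the number is odd

    main :: IO ()
    main = print (map parseAndHalve ["10", "7", "oops"])
      -- [Just 5,Nothing,Nothing]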
Well, just as "monads are like burritos", IO is like a delicious but sketchy fish taco. Eating is input. We shan't discuss the output process; we all know how it works and what it looks like. Food poisoning or poor health can lead to usage of standard error. In practice, stdout and stderr are usually redirected to the same buffer, which is flushed shortly after a write.
This Haskell example is exposing you to two cool things:
- Algebraic types. Haskell has type inference, so it can do the right thing while you only describe the function as operating on lists. Now look at Java: strong typing but no intelligent inference, which is why you are forced to write redundant Java like HashSet<Integer> s = new HashSet<Integer>().
- Pattern matching. You can describe functionality based on the pattern of the data.
This isn't just syntax. These are really powerful, fundamental features of the language.
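I don't have the article's snippet handy, but the kind of thing meant looks roughly like this (toy example of mine), an algebraic type, pattern matching, and the function's type inferred without any annotation:

    -- A small algebraic data type plus pattern matching; the compiler
    -- infers area :: Shape -> Double with no annotation at all.
    data Shape = Circle Double | Rect Double Double

    area (Circle r) = pi * r * r
    area (Rect w h) = w * h

    main :: IO ()
    main = print (map area [Circle 1, Rect 2 3])   -- [3.14159..., 6.0]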
Yes, I totally agree. But the question I was responding to was just, "What parts of Haskell have weird syntax?" All of this cool stuff would also be possible with c-like syntax.
And that man would absolutely love it if Java had a type system closer to Hindley-Milner with type inference. But, given the reality of Java, generics become undesirably verbose.
A typical well-written Haskell program has more type annotations than the same program written in Java. Not only that, but because of type classes those annotations are even more verbose than an equivalent declaration in Java.
It's true that you can omit most of the type signatures and let the compiler figure it out but nobody really writes Haskell code that way if they want it to be comprehensible by human readers.
I've wondered about this. It seems common for OCaml code to only have the type annotations where necessary, but it's usually included in Haskell by convention.
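For example (names mine), the signature below could be left out and inferred, but Haskell convention is to write it at the top level anyway:

    -- The compiler would happily infer this type; the signature is
    -- written out by convention, as documentation.
    pairFirsts :: [(a, b)] -> [a]
    pairFirsts = map fst

    main :: IO ()
    main = print (pairFirsts [(1, "one"), (2, "two")])   -- [1,2]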
It recently dawned on me that this is what makes it hard for me to understand snippets of Haskell. Specifically, trying to parse the precedence relations in a sequence of tokens uninterrupted by any kind of punctuation.
Say what you will about Lisp, it is always very clear which arguments go to which functions, what the scoping boundaries are, etc.
Multi-core concurrency is going to kill OO. FP will resurge in the 2010s among good programmers, spreading out to the "vital 25%" (who are, right now, mostly upper-tier Java developers) rather than being restricted to an elite 3-5%.
I know this is the conventional wisdom, but can you provide support for your claim that multi-core will help FP?
The usual claim is that the immutability of FP will avoid the problems of different processes writing to shared memory, because they only read it.
But Erlang is famous for its concurrency, and this is not due to its being functional; it is due to its shared-nothing, pure message-passing model (which comes from Smalltalk). It's another solution.
If you have multi-cores, the problem is not what you do within one core (we can already manage that); it's that you have many cores. If [1] they have their own local memory, then pure-message passing is the inevitable solution.
Anyway, that's my reasoning - I'm interested to hear your reasoning.
[1] At the moment, the on-chip memory is (very small) cache, and so shared memory is an option... but I think it's also inevitable that we'll get sufficient local memory for each core to execute code independently.
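(For what it's worth, Haskell already has message-passing primitives; here is a minimal sketch with Control.Concurrent.Chan, structure and names mine:)

    import Control.Concurrent (forkIO)
    import Control.Concurrent.Chan (newChan, readChan, writeChan)
    import Control.Monad (forM_, replicateM)

    -- A worker thread receives numbers on one channel and sends their
    -- squares back on another: shared-nothing, communication by messages only.
    main :: IO ()
    main = do
      inbox  <- newChan
      outbox <- newChan
      _ <- forkIO $ forM_ [1 .. 5 :: Int] $ \_ -> do
             n <- readChan inbox
             writeChan outbox (n * n)
      forM_ [1 .. 5 :: Int] (writeChan inbox)
      results <- replicateM 5 (readChan outbox)
      print results   -- [1,4,9,16,25]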
Erlang is really good at running distributed programs. That's where the share-nothing actor model is necessary and works well. It's actually a kludge to write a simple concurrent application in Erlang, because the simplest read requires sending two messages. For writing concurrent applications (single process, multiple threads) I like the Clojure approach much better. I can't explain it at length here, but check out http://clojure.org
I think Clojure is likely to shoot ahead of Erlang as the leading concurrency language. After that will be Concur, an ML/Haskell-inspired statically-typed concurrency language my friend Brian Hurt is designing.
One thing I've picked up, looking at Clojure, is that the lack of mutable state allows certain cool optimizations. For example, let's say you have a map A with a million key-value pairs. You want to do something with B = A + (k', v'). In a mutable-state language, you'd physically add (k', v') to the map, which is usually implemented as a hash table. Clojure creates a new map (32-ary tree) for B that shares most of its pointers and structure with A, which allows you to do FP without FP's greatest drawback, which is the copying of large data structures. Since the nodes will never change, this sharing is entirely safe. A remains entirely unchanged, so a thread working with A still has A.
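The same idea exists in Haskell's Data.Map; a small sketch showing that the original map is untouched by an insert:

    import qualified Data.Map.Strict as Map

    -- Inserting returns a new map; the original is unchanged, and the
    -- two share most of their internal tree structure.
    main :: IO ()
    main = do
      let a = Map.fromList [(k, k * 2) | k <- [1 .. 1000 :: Int]]
          b = Map.insert 1001 2002 a   -- "B = A + (k', v')"; A is untouched
      print (Map.size a, Map.size b)   -- (1000,1001)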
Functional programming doesn't actually eliminate side effects and mutability. Haskell has the IO and Array monads. Clojure has refs and agents. FP simply segregates them from the rest of the program so that they exist only when desirable, and can be more easily managed.
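A minimal sketch of that segregation, using the ST monad (rather than the IO/Array examples mentioned): the mutation is fenced off inside runST, and the caller just sees a pure value.

    import Control.Monad.ST (runST)
    import Data.STRef (newSTRef, readSTRef, modifySTRef')

    -- Local mutable state, invisible from the outside: sumST is a pure function.
    sumST :: [Int] -> Int
    sumST xs = runST $ do
      ref <- newSTRef 0
      mapM_ (\x -> modifySTRef' ref (+ x)) xs
      readSTRef ref

    main :: IO ()
    main = print (sumST [1 .. 100])   -- 5050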
I don't know the details about cache and local memory. This is a weak area for me.
> FP's greatest drawback, which is the copying of large data structures
I guess you made that one up on the spot. Go read about "Purely Functional Data Structures" http://www.cs.cmu.edu/~rwh/theses/okasaki.pdf or have a look at Haskell's Data.Map implementation for another example.
I meant that it's the greatest (potential) drawback of the functional style, especially when that style is employed in a non-purely functional language. You're correct that you don't often end up doing full-object copies in the leading implementations of certain purely functional languages, such as Haskell or Clojure.
Q. If you append to a linked list, you can't avoid creating an entirely new copy of the list, can you? (prepending is OK, because you reuse the old list).
Or can Haskell's laziness help with this? Can it somehow avoid actually creating the new copy, but instead just return the correct value when you reach the end of the list? So that the new copy is never actually made, but it just behaves as if it was?
There's a trick for efficient appending that's somewhat analogous to laziness. It's quite clever. Sadly, I cannot claim to have come up with it, and I don't know who did.
Instead of working with [a] lists, work with [a] -> [a] prepender functions. So the analogue of [1, 2, 3] is f = \x -> [1, 2, 3] ++ x. Then the analogue of appending 4 to f is f . (\x -> [4] ++ x), where the . represents composition. Appending using these analogues, I believe, is O(1) instead of O(n).
To "evaluate" an element of this "closure space", just apply it to the empty list.
You can also use some form of balanced binary tree, where all the interesting operations are doable in O(log n) and adding an element requires about log(n) (mostly internal) nodes to be rebuilt. That is worse than the O(1) you can reach for prepending to a linked list --- but it's easy to come up with and often good enough.
Well, I'll be glad as hell if Haskell is as popular 4.5 years from now as Python is currently (pg's article about the Python Paradox was written in August 2004). Hmm, I think I'll even be glad if Haskell is ever as popular as Python was 4.5 years ago :)
I rather like OCaml, but it seems like the community is too quiet / too small, and it's unlikely that it will suddenly explode. It's probably too late. :(
Rather than writing blog articles touting how great your favorite language is for general-purpose programming, you should write some code.
When you have more non-trivial applications written in Haskell than a Haskell compiler and a DVCS it will be a lot more credible to talk about it being ready for 'Prime Time'.
You can even have very broad definitions of "non-trivial": it used to be received gospel on one programming board I frequent that Java was a terrible choice for a downloadable application. If you tried it your users would abandon you in the middle of their 120 terabyte download, your application would win awards for poor non-native design, and it would take approximately seven years to boot the JVM.
Then I made a few tens of thousands of dollars with Bingo Card Creator. (Which is written in Java. Further frustrating my efforts to win geek cred, the UI is in Swing.)
We don't hear much about the impossibility of making money with downloadable Java applications these days.
Lines of code fails as a measure of code quantity for languages like Lisp and Haskell. 500 lines of Haskell code can accomplish what will take 1500-5000 lines of Java.
When I'm trying to get stuff done I would rather spend 3x as much time typing in code than waste all week deciphering 'core' and reading heap profiling output to track down a space leak that could never happen in any other language.
I've heard that Haskell's space performance isn't great, but I don't know enough about the language to confirm or deny this.
I would recommend, if you're concerned with performance, using Ocaml. You can compile it down to native code, and its performance characteristics are comparable to C.
Yes, I have had to solve this problem a lot. It's probably my own fault because I am an inexperienced Haskell programmer, and maybe it's just because I'm incompetent, but I find space performance very difficult to reason about.
It seems like anybody can make these mistakes, and they are mostly invisible until one day your problem size is a little bit larger than normal and your program explodes and crashes.
In a strict language, you write an expression and it's perfectly clear when it will be evaluated.
With lazy evaluation, to know if or when an expression will be evaluated, you have to follow it through possibly many layers of function calls (which may have been written by somebody else) looking for a place where it needs to be evaluated. Along the way it may become embedded in other expressions, which complicates the problem.
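The classic example of this (not mine, it's folklore): a lazy left fold quietly builds a huge chain of thunks, and nothing in the source text warns you.

    import Data.List (foldl')

    -- foldl builds an unevaluated chain ((0 + 1) + 2) + ... in memory,
    -- which can blow up on large inputs:
    lazySum :: Integer
    lazySum = foldl (+) 0 [1 .. 1000000]

    -- foldl' forces the accumulator at each step and runs in constant space:
    strictSum :: Integer
    strictSum = foldl' (+) 0 [1 .. 1000000]

    main :: IO ()
    main = print (lazySum, strictSum)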
How do you know when an expression needs to be evaluated? I don't know if I'm smart enough to understand the answer to that question:
Hasn't changed since I last looked at it. Still a big clusterfuck of esoteric math libraries, wrappers for useful things written in other (practical) languages, and a handful of half finished clones of libraries and applications first written 10 or 20 years ago in C.
This is a bit unfair. Every code repository has this sort of thing. The quality of a repository shouldn't be the ratio of [no-external-dependency + extremely mature] projects, but rather should be based on whether or not people can find what they need there. I'd argue that Hackage has that.
As for being full of bindings to libraries written in other languages, why is that a bad thing? In general, these libraries are common, cross-platform, and thoroughly tested. Isn't code reuse worth something?
This isn't a "spending credibility" situation. Credibility is something you need when you're asking someone to believe something based on your authority. In this case, I wrote an article filled with links with the hope that the reader could evaluate the claim based on that material.
Besides, I don't think I should need to spend all of my waking time coding; I already spend several hours a day doing that. Why can't I take a break to argue that Haskell's tools make the language more production focused than most people realize?
How about an operating system (House), a window manager (xmonad), and a fully featured DVCS (Darcs), or another (Camp)?
is there as much haskell out there as c, perl, python? no. but if you want to be a pioneer you have to drive on a little dirt. but don't worry, we'll keep plugging away and in no time at all the haskell highway will be lined with HoJos and rest stops and miles of flat-top...we'll send you a postcard.
Ok, you're not addressing the subtext here. Nobody is going to get fired for choosing Python. You are actually likely to get fired for choosing Haskell, in Q1'2009.
Not every program is written in the context of "I need to pick a language to use at my desk job." In fact, a great deal of software is very obviously not written in this context.
Darcs, at least, broke new ground. It was a very innovative VCS.
I'm not sure about which features originated in dwm vs. xmonad (there seems to be cross-pollination of ideas between the two, so that's a good thing either way), and I haven't used house or camp.
> Darcs, at least, broke new ground. It was a very innovative VCS.
Innovative does not mean valuable, or, more important, popular.
I proposed the theory that languages become popular because they make creating new things possible and/or easier.
If Darcs qualifies as such a new thing, the fact that it didn't help popularize haskell argues against my theory.
Maybe my theory is wrong. (I'd certainly patch it in certain ways.) Maybe Darcs doesn't qualify. Maybe Darcs didn't do the trick because no one knows about it.
So, let's flip the question. What has the Haskell community done that has been important in making other languages popular in the past? If they're not doing any of those things, why should Haskell nevertheless become popular? (Note that lots of "superior" languages never became popular, so if you're going that route, you get to explain why Haskell will be different.)
I think that Darcs did help draw attention to Haskell, just that it's still not especially popular. The fact that we're talking about Haskell and not e.g. Joy says something about it at least crossing a low threshold of popularity, though.
(XMonad is a much less significant example for Haskell's popularity, IMHO.)
It seems highly likely to me that Haskell's type system made inventing Darcs's theory of patches significantly easier. While a Darcs-like system could be written in C, designing the system as a whole was almost certainly aided greatly by Haskell's type checking and lightening of certain conceptual burdens through laziness.
The author is asserting that Haskell, the language and the tools, is ready for `Prime Time'. There is also a sustainable community of smart and enthusiastic Haskell hackers.
The developer mainstream may not be quite ready for Haskell, but Haskell is ready for them.
EDIT: Meant to be in reply to thomasmallen: "Advanced language like Haskell..."
I think it is or will be soon enough. The internet made communication between geographically disparate developers so much easier. It's become mature enough that there are sites now where curious developers can congregate and exchange ideas.
When I started in the field, the internet wasn't on the radar of businesses. Email accounts were uncommon, never mind blog sites. Most book stores didn't carry books on academic topics. Besides, the wave of OO was just starting to gather, so 9 out of 10 books were on OO or C++ or both.
Developers are now exposed to more and can share code and techniques. More complex problems are shared now causing curious developers to stretch their limits.
The largest problem I see now with adoption of Haskell is the us/them mentality of field trained developers vs. academically trained ones. I'm in the former group. While not all academically trained developers code/design well, the median one has been exposed to a wider range of coding problems and tools. Our ability to share ideas and problems on the internet levels that playing field if we let it.
In short, yes, mainstream developers are ready for Haskell/Lisp/OCaml/etc. We need to get over the "that's just for academics" mentality. They are tools, thought out by people with lots of brains and time on their hands, to solve problems. The only thing holding us back is our mentality.
I remember, about a week into my exposure to Ocaml, asking what a functor was. "It's a function from a module to a module."
My response was along the lines of, What? It turns out, though, that Ocaml functors are really cool and can provide some of the functionality of Haskell's type classes.
Haskell has algebraic data types, which are much more expressive than conventional types. You can see that from his example. Of course, the types don't define the functionality, but they narrow it down much more quickly than the ints of C and Java (similar to google ranking of search results).
Python tends to de-emphasize types due to its reliance on duck typing and dynamic typing. If you look in the documentation, they don't really care about the input types either. Would it be nicer to have more typing? IMHO, it really depends on the situation. It would be nice to have multimethods, but type declarations like in C would be horrible.
I thought that was the reason - but if I give the wrong types, it won't work. Often, I can infer the types myself, but why not just tell me? I think there are conventions, like "it's a string or a number unless stated otherwise", and these defaults work great. You just need a little faith in the library writer.
For example, xmlrpclib.ServerProxy takes a URI - is that a string or a special URI object? I guess it's most convenient if it's just a string, but if you had just constructed the URI by concatenating the separate parts, it's inefficient to immediately parse them apart again - and so Java has URI objects. Maybe it's just that Python admits it's a scripting language, and uses machine cycles to serve you - instead of the other way round. That's a nice idea.
Indeed. Type signatures mean that the code is naturally self-documenting.
My stance on the static/dynamic controversy is that static typing becomes a win as the amount of time spent reading others' code increases. A single programmer is probably going to be more productive in Lisp than in ML. (I don't know enough about Haskell to speak for it.) On the other hand, if I were choosing a base language for a team of 10 people who would all be editing the same code, I'd most likely choose one that's statically typed. Static typing enforces interfaces and allows the compiler to detect a decent share of bugs as well as most code breaks.
(Note: I have only read a bunch about Haskell. I am by no means an expert)
In my hobby work with Python, I absolutely have no choice but to read the documentation for a function to know what it does. The dynamic typing necessitates that the function itself tells me nothing. Consider:
def get_console_type(console):
Does this function take a console object? The string name of the console? The IPAddress of it?
In my professional work with C# (well, in this case C), the type signature tells me quite a bit more information.
ConsoleType GetConsoleType(IXboxConsole* console);
One step further: does GetConsoleType have any side effects? It shouldn't by naming convention. But it turns out it did. It updates a cache, which can have a major effect on some totally unrelated thing.
Haskell, being a pure language, has to declare the presence of side effects. I'd have known this right away, without having to look at any documentation (there isn't any!). Furthermore, what if the function wasn't actually called GetConsoleType? What if it was called GetXboxType?
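To make that concrete, here's what those signatures might look like on the Haskell side (all of these names and types are hypothetical, just mirroring the C example above):

    import Data.IORef (IORef, newIORef, writeIORef)

    -- Hypothetical types mirroring the C example.
    data ConsoleType = DevKit | TestKit | RetailKit deriving Show
    newtype XboxConsole = XboxConsole { consoleKind :: ConsoleType }

    -- Pure version: the type promises no side effects at all.
    getConsoleType :: XboxConsole -> ConsoleType
    getConsoleType = consoleKind

    -- Version that updates a cache: the "hidden" side effect shows up
    -- right in the signature as IO.
    getConsoleTypeCached :: IORef (Maybe ConsoleType) -> XboxConsole -> IO ConsoleType
    getConsoleTypeCached cache console = do
      let t = consoleKind console
      writeIORef cache (Just t)
      pure t

    main :: IO ()
    main = do
      cache <- newIORef Nothing
      print =<< getConsoleTypeCached cache (XboxConsole DevKit)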
I really wished I could have searched for "All functions that return ConsoleType"...
Not all ML/Haskell functions are semantically transparent based on the type signature, but type signatures are really useful.
You might wonder what function you use to find the length of a list in ML. The function obviously has to exist, but there are a number of things it could be called (length? count? size?). At the toploop, you type "module L = List;;" and you get the module's type signature. You see that a function called length exists with type signature 'a list -> int. You know that that's the function you're looking for.
Other functions' type signatures give a great indication of what the functions do. For example,
List.map : ('a -> 'b) -> 'a list -> 'b list
List.filter : ('a -> bool) -> 'a list -> 'a list
List.filter_map : ('a -> 'b option) -> 'a list -> 'b list
All of these do the most intuitive thing that a function with that type signature should do. Obviously, not all functions can indicate their semantics through their type signature. For example, you might have one called partition with the following signature:
partition : ('a -> bool) -> 'a list -> ('a list) * ('a list).
You don't know whether the "true" list appears first or second in the returned tuple, but you can easily check this at the toploop.
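GHCi gives you roughly the same workflow for Haskell, for what it's worth:

    -- The same kind of signature-spelunking in a GHCi session:
    --   ghci> import Data.List
    --   ghci> :browse Data.List     -- lists every export with its type
    --   ghci> :t partition
    --   partition :: (a -> Bool) -> [a] -> ([a], [a])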
When you're reading other peoples' code, static typing can be a huge win. If the writer of the code used the type system properly, this cuts your read-work by 80%, even without any other documentation.