F# is a mixture of C#, OCaml and Haskell. F# will be included in the next version of Visual Studio, so it's no longer just a research language. Performance is similar to C#, but coding in it is faster in my experience and, above all, a lot nicer if you like functional programming. Because it runs on the .NET VM or Mono VM it has good multicore support, unlike OCaml. Haskell is more powerful as a language, but it also has its weak aspects. The main thing F# is missing, IMO, in comparison to Haskell is type classes. And of course the Haskell community.
I'm learning F# now because it will be another weapon in my arsenal that I would be allowed to use here. My impression of it, so far, is: "Heh, kinda cool". It is basically the total-compromise functional language, but in a good way. Immutability is optional, you can write imperative code whenever it makes sense, and there is a reasonable syntax for defining .NET object-oriented constructs. It is very much a functional language that peacefully coexists with the rest of the .NET ecosystem, and that is a strength if you need it. Sure, all of that interop comes at a power and complexity cost, but it is surprisingly small.
If Microsoft can work out a sane licensing arrangement for compute farms/clusters/clouds (whatever we're calling them this week) of Win 2008 Server, then F# is a game-changer of a similar scale to Google's MapReduce. That's a big if tho'.
Having said that, Windows is cheaper than RHEL already...
I think I can see how MapReduce could be argued to be a "game-changer" (although I'm not sure it's true), but why F#? If I understand correctly, it's basically OCaml on the CLR. So you get a Python-like brevity of code (and ease of programming? maybe it has better error messages than OCaml?) with C# performance and access to the CLR libraries. That sure sounds useful, but why is it a "game-changer"?
CUDA yes. Cg yes. HLSL yes. Verilog yes. VHDL yes. C++ or Java with MapReduce yes. PHP and MySQL with memcached yes. Erlang yes — and it really is functional, inside each process, anyway — that is, the level where you aren't getting any parallelism. Octave or R, potentially, but not today, as far as I know. Mathematica yes, and it, too, is mostly functional.
In theory, side effects are what make parallelism hard, and so languages whose semantics are side-effect-free (unlike F# or Mathematica or Erlang) should make it easy. Or so we all thought in 1980. We then spent 20 years or so trying to make that happen, and it basically didn't work.
There are basically four kinds of parallelism within easy reach today. There's SIMD, like MMX, SSE, 3DNow, AltiVec, and the like; you'd think that data-parallel languages and libraries like Numpy and Octave would be all over this, but except for Mathematica, that doesn't seem to be happening. There's running imperative code on a bunch of tiny independent processors that share no data; AFAIK that's what the shader languages are doing. There's instruction-level parallelism on a superscalar processor, which largely benefits from things like contiguous arrays in memory, or maybe what Sun is doing with Niagara, where the processor pretends to be a bunch of tiny independent slow processors. And then there's splitting up your data across a shared-nothing cluster, which is how every high-traffic web site works, and that's what MapReduce makes simpler.
Uh, and then there's designing your own hardware or programming FPGAs, which is what Verilog and VHDL are for.
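To make the shared-nothing/MapReduce kind concrete, here's a rough single-machine sketch in Python (the word-count task, the chunking, and all the names are made up for illustration; real MapReduce adds distribution, shuffling, and fault tolerance):

    # Sketch of "split the data across shared-nothing workers, then merge".
    from collections import Counter
    from multiprocessing import Pool

    def count_words(chunk):
        # "map" phase: each worker only ever sees its own chunk of lines
        c = Counter()
        for line in chunk:
            c.update(line.split())
        return c

    def word_count(lines, workers=4):
        # split the input into roughly equal chunks, one per worker
        chunks = [lines[i::workers] for i in range(workers)]
        with Pool(workers) as pool:
            partials = pool.map(count_words, chunks)
        # "reduce" phase: merge the per-worker partial counts
        total = Counter()
        for p in partials:
            total.update(p)
        return total

    if __name__ == "__main__":
        lines = ["the quick brown fox", "the lazy dog jumps"] * 1000
        print(word_count(lines).most_common(3))

The only point is that each worker touches its own slice of the data and the results get merged at the end; nothing about the mapping function has to be functional-language-flavoured.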
Languages like OCaml (I don't know anything about F# except that it's like OCaml, but for the CLR) have no special advantage for any of these scenarios. They don't even have the theoretical advantage that they have no side effects and therefore you can speculatively multithread them without breaking the language semantics. They do have the massive practical disadvantage, in most of the scenarios I described, of needing unpredictable amounts of memory, having massive libraries, and using pointers all over the place. Using pointers all over the place kills your locality of reference and your ILP. Having massive libraries and using unpredictable amounts of memory makes it impossible to run them inside your GPU and means they can't run on an FPGA (except by using external memory, like the awesome Reduceron). And nothing about the language semantics helps with SIMD either.
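The locality point is easy to feel even from Python: summing one contiguous block of memory versus chasing a pointer per element. This is a hedged illustration only; the actual ratio depends on the machine and the numpy build, and it says nothing about OCaml or F# in particular.

    # Contiguous memory vs. pointer-chasing: same sum, very different memory behaviour.
    import time
    import numpy as np

    n = 10_000_000
    boxed = list(range(n))                  # n separate heap objects, a pointer per element
    packed = np.arange(n, dtype=np.int64)   # one contiguous block of memory

    t0 = time.perf_counter()
    slow = sum(boxed)                       # walks a pointer for every element
    t1 = time.perf_counter()
    fast = int(packed.sum())                # tight loop over adjacent words
    t2 = time.perf_counter()

    assert slow == fast
    print(f"boxed list: {t1 - t0:.3f}s   contiguous array: {t2 - t1:.3f}s")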
So, sheesh, go read Alan Bawden's dissertation or whatever, but don't go around claiming that ML (or even Haskell) is going to magically make your algorithms parallel. We tried that. It didn't work. We're trying something else now.
Just for my own edification, since you seem quite familiar with the subject: Is Obsidian* a big fat waste of time that'll never be as good as just compiling Haskell for a CPU like a normal person? 'cause I was considering investing some time in learning it, when I wouldn't have the free brain cycles for CUDA.
I didn't know about it! From a cursory look (all I can find is presentation slides?), it doesn't sound like you'll be able to take any off-the-shelf Haskell subroutines and run them in parallel on the GPU; rather, it's an embedded DSL for constructing shader programs. So I imagine you'd have to learn both CUDA and Obsidian to use it.
But it would be awesome if someone came along and proved me wrong.
Thanks for the read--now, how 'bout another? http://www.cse.unsw.edu.au/~chak/papers/gpugen.pdf, another Haskell-embedded GPU DSL, claims to work at a higher abstraction level than Obsidian, but still provide high performance general-purpose GPU programming capabilities.
It's not that it magically makes things parallel. It's that it puts the tools to do it in the hands of Joe Developer, who has hankered to do functional programming in the "real world" and now can say to his manager: look, it's part of Visual Studio now, it's official, there's no reason I shouldn't do this. I am confused as to why you say OCaml/F# don't even have theoretical applications here - is it because the .NET libraries aren't side-effect free? It's about practicality, not about technology. The former is what was missing; the latter's been around for 20 years or more.
And please, PHP and MySQL? For computation? Are you serious?
The main thing Joe Developer lacks to do parallel programming is massively parallel hardware, and if he's happy with just getting a 100× speedup with GPGPU hacking, maybe he doesn't even lack that. High-performance parallel programming has very little to do with functional programming languages, today and in the foreseeable future.
And please, PHP and MySQL? For computation? Are you serious?
What do you think people use PHP and MySQL for, fertilizing flowerpots? Computation is all they do! Obviously the PHP5 interpreter isn't the most computationally efficient medium in the world, but it's where it's at when it comes to horizontal web site scaling, i.e. parallelism.
I keep saying this, there is more to computing than websites. I completely concede your point that F#/OCaml/FP in general has little to offer when it comes to serving up web pages more quickly. Formatting records from a database for display on a client device hasn't changed all that much since the 80s anyway. Fortunately, I don't care about doing that. I care about, for example, people not trying to do Monte Carlo work in Java because that's the organization's "standard" language. A language like F# that is acceptable to use in a large organization with arbitrary standards made by non-technical people is a huge deal.
I keep saying this, there is more to computing than websites.
I'm aware of that; I wasn't suggesting that people should be programming their websites in Verilog, after all. But running websites is part of computing.
I completely concede your point that F#/OCaml/FP in general has little to offer when it comes to serving up web pages more quickly.
That wasn't my point; I think functional programming might have a lot to offer when it comes to serving up web pages more quickly, and especially programming web-server software more easily. My point was that functional programming doesn't have a lot to offer when it comes to making your code more parallel.
Formatting records from a database for display on a client device hasn't changed all that much since the 80s anyway.
I cannot imagine in what sense this sentence could be true. The database architectures, the languages used, the required level of efficiency, the CPU architectures, the scale of operations, the kinds of failure we expect from components, the structure of the machines (then SMP mainframes, now shared-nothing clusters of thousands of computers), the type of people doing it, the client devices, the networks, the formatting, and the nature of the data have all changed dramatically since the 80s.
I care about, for example, people not trying to do Monte Carlo work in Java because that's the organization's "standard" language. A language like F# that is acceptable to use in a large organization with arbitrary standards made by non-technical people is a huge deal.
Do you think doing Monte Carlo work in Java is bad? Because of performance? Last I heard, the optimizations in the CLR's JIT were pretty minimal, while the Java JITs had pretty much reached parity with GCC and were breathing down icc's neck. (What Fortran compilers do people use these days?) Maybe you should get excited about people doing Monte Carlo work in Scala instead?
Anyway, whether a language is pleasant to program in or has a good compiler has very little to do with whether it helps you take advantage of available hardware parallelism — unless the way in which the compiler is good is that it auto-vectorizes your loops or supports HPF directives or something. As far as I know, F# and Java are equally abysmal at that.
Sounds interesting. How much is the community tied to Microsoft? "Runs on Mono" is ok, but if no one actually does, that's perhaps problematic for those who don't do MS.
Good cross-platform support is indeed very important for a programming language. The F# community is still small so there probably aren't many users on Linux and OS X.
Don Syme (the creator of F#) has said that Microsoft will release the F# compiler under an open source license. I think that the future of the language will depend on it. If the F# compiler cannot be included in Linux distributions then the uptake from Linux and OS X users will be low. That would seriously hinder the (open source) community aspect of the language. I don't think that a young advanced language can survive without it.
I find Self worth checking out as well (http://research.sun.com/self/), if for no other reason than to see how fast dynamic languages can be. The research that went into it--techniques for blazingly fast implementations of highly dynamic, object-oriented languages--has become especially relevant and provides the basis for many of the numerous new JavaScript VMs (especially V8).
With regard to Smalltalk and its incredible reflective abilities, this video from OOPSLA'08 ("Smalltalk Superpowers") is particularly amusing: http://www.veoh.com/videos/v163138695pJEMGmk. Ungar even does a pretty neat demo of Self.
Strongtalk is definitely worth a look. The ability to have static typing when you want it is awesome!
I could imagine a deployment process where code can't be deployed unless it's all statically typed. This lets you do very fluid rapid prototyping, but then firm up parts of the system once they go into production.
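For a rough analogy from another gradually-typed setup (this is Python, not Strongtalk, and the function names are invented): leave the prototype unannotated, then make a strict checker pass, e.g. mypy --strict, the gate in the deploy step.

    # Sketch of "prototype loosely, firm up before deploy" with optional typing.
    # Hypothetical example; the deploy gate would be a CI step running a strict checker.

    def guess_discount(order):                 # prototype: no annotations yet,
        return order["total"] * 0.1            # so a strict checker would reject it

    def final_discount(total: float, rate: float = 0.1) -> float:
        """Production version: fully annotated, so the strict check passes."""
        return total * rate

    print(final_discount(120.0))               # 12.0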
Actually, checking out the cvs repository on sourceforge (http://self.sourceforge.net/), there seem to have been some recent commits; though, in general, I think you're right that there isn't much active development.
I'm curious if anyone has any experience with Io (http://iolanguage.com/). The description sounds like an appealing combination:
Its unusual, minimalist, and yet elegant and powerful syntax is reminiscent of Smalltalk, but the language goes far beyond that. Io is an object-oriented, prototype-based, message-based and fully reflective programming language. This means that you use messages as in Smalltalk, you create objects as in JavaScript, and every bit of your code can be inspected and passed around as you see fit.
Io is still under development so you might run into the occasional bug. Bugs are encountered more often in the addons than in the core interpreter (which is stable).
Ah, I see you're the creator of Io. Do you know of any Io tools for Vim? Syntax and indent would be great if nothing else...Google didn't turn up anything.
Oh, I saw the link and didn't read your username clearly. I have the TextMate bundle here...I suppose some interested person will have to get to writing a few io.vims :^)
I played with Io a long time (like 4 or 5 years) ago, at which time it didn't seem ready for anything serious. I would expect it's gotten much better since then. It would be interesting to know what sort of apps are easy to build in it.
This author doesn't seem to know his subject very well, though. To say that an object-oriented, message-based, fully-reflective programming language goes "far beyond" Smalltalk suggests ignorance of how far Smalltalk goes.
As I hinted in the article, you're right: I didn't really experiment much with Smalltalk. Does Smalltalk allow the same level of reflection? Does it offer the same "freedom" as Io? I'm asking because I _presume_ it does not, but I honestly don't know.
Smalltalk blocks are objects that respond to messages about themselves, so yes, Smalltalk offers the same kind of reflection, in fact it invented it. (It's possible that Io takes it further in some respect, but I'd need to see details.)
So which (if any) of all these languages will you use to do a real project?
The very reason I wrote this article in the first place was that I couldn't make up my mind. I am, as a matter of fact, interested in all of them: I was hoping you people could help me decide.
As a matter of fact, I work as a technical writer, so I'm not going to use any of these for any big project. However, I do code in my spare time and prepare small programs to automate tasks at work, when my boss lets me.
At the moment (literally) I'd like to try learning Haskell again, but it feels very difficult.
You're likely to just get people recommending their favorite.
If I were you I'd apply some combination of two criteria: which is the most fun, and what do I want to try to build?
It's great that you've done some broad experimentation (much more than I have). I suggest that you'll get a different and rewarding view if you try to use one of these languages to do something real that you care about.
There is also an issue with the implementation: the official implementation from Walter is awkwardly packaged and lacks source (you can't build it yourself). This pretty much turns it into a toy for Linux/OS X programmers: you can't realistically distribute your D code, because nobody will figure out how to build it.
And the GCC-based implementation is lagging behind in features, and also scores consistently 10-30% slower than C/C++ in benchmarks, which kills D's appeal as a performant replacement for C.
What Walter needs to do, IMO, is abandon his own implementation completely and work closely with the GCC/Linux community to get a high-quality D compiler into the standard GCC package. Then we may start seeing decent software written in it.
Well, I played around with it for some time. It is pretty good: e.g. I really like the new (and much improved!) template mechanism, especially the metaprogramming abilities, variadic template arguments, contracts for enforcing class/function invariants, function literals, dynamic closures, etc.
The issue that I have (IMHO of course) is that of two incompatible standard libraries, Phobos and Tango, which makes it quite painful to work on anything significant/non-trivial. IIRC, there was an effort to have an STL kind of thing for D; not sure how far along (or in what state) it is.
Also, it seems to me (I can be wrong about this) that D is being marketed towards corpo-drones doing Win32 (C++) stuff, rather than to Linux C devs. Most likely, they (the corpo-drones) are moving towards C#/.NET, and I doubt that they are interested...
"Unlike other Lisps (and Schemes) you may have encountered before, Clojure comes with some interesting additions: [...]
Many pre-built data structures, like Vectors, Maps, Sets, Collections, …"
1. It has the simplicity of Scheme.
2. It has literals for data structures.
Hashmap: {"BANK" 122323 "TRANSFER" 212001} . That's it, I created one. Oh, here's a vector for you: [1 2 3 4 5].
Very convenient, not revolutionary by any measure, just very convenient. And it doesn't sacrifice any power for that practicality.
3. It is more functional than Common Lisp.
4. Access to all the Java libraries. Whatever you want to do, there is a library. And yes, the excessive amount of documentation you have to read just to do something simple might frustrate you, but it's a lot more effective than re-solving a problem that has been solved a hundred times before.
I think these things together make it an excellent language.
Why? It's a nice addition to the language and makes using it a much better experience.
My first big project in LISP was making a Sudoku solver for a school project. I could only use basic instructions and lambdas. It was not a pleasant trip :P
But Common Lisp does come with vectors (make-array), hash tables (make-hash-table), and set operators (intersection, union, pushnew, etc.) that work on lists.
Also, Common Lisp (CLOS, in particular) supports multimethods (defmethod), contrary to what the author claims. And all of the special forms I saw in clojure have a counterpart in common lisp.
In fact, a lot of Common Lisp implementations (I like SBCL) have support for threading/asynchronous action built in. The one problem is that none of that is part of the ANSI spec, so most multithreaded code won't be portable between implementations.
It isn't the language's fault that your teacher wouldn't let you use all these features.
What's distinctive about Clojure is not that it has them, but rather that it makes them first-class citizens in the way that lists are - i.e. provides a universal set of operators for manipulating them (or so I hear anyway). This is certainly a weakness of CL. I think this is the original point underlying the author's garbled statement, and it's an improvement specifically to CL that has nothing to do with the presence of arrays and hashtables in the language.
Edit: I realize this is probably obvious, but let's not confuse what this guy says about Clojure with what Hickey has to say. He's well aware of CL and doesn't make silly claims about it.
That's very true. I personally like Clojure - and it does provide a lot of potential advantages over Common Lisp. I just don't think the author of this article understands them - he's just spewing BS.
Hickey does make a lot of valid points about the advantages/differences of Clojure (http://clojure.org/lisps), though.
Have you tried programming in Prolog? The automatic backtracking search is cool when it is exactly what you want, but most of the time it isn't. So you end up working to keep it on a leash lest it give you exponential run times when you were not expecting it.
I ended up conceiving my algorithms without thinking about Prolog's backtracking, and then coding: bending the Prolog interpreter to my will by getting the backtracking to implement the control flow called for by my algorithm. It seemed a bit indirect and a bit clumsy.
My attempts at writing a parser in Prolog were absurdly slow. Later I read that there are ways of writing parsers in Prolog that work well, and writing parsers was one of the killer apps for Prolog, so I guess I never really understood programming in Prolog properly. On the other hand it is troubling that I failed at one of the things the language is supposed to make easy. I'm in no hurry to give it a second look.
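The exponential-blowup worry is easy to demonstrate with a toy generate-and-test search (this is plain Python, not Prolog, and nothing like a real parser): with N variables over a domain of size D, unpruned search visits D**N candidates.

    # Unconstrained search is just enumeration of the cartesian product:
    # 3 values per variable, 12 variables -> 3**12 = 531441 candidates,
    # and every extra variable multiplies the work by 3.
    from itertools import product

    def naive_search(n_vars, domain, ok):
        for assignment in product(domain, repeat=n_vars):
            if ok(assignment):
                return assignment
        return None

    print(naive_search(12, range(3), lambda a: sum(a) == 24))  # (2, 2, ..., 2)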
Erlang gets a lot of its syntax and implementation technology from Prolog, but it's true that the distinctive things about Erlang (massive shared-nothing concurrency, supervision trees, tuples, pattern-matching on binaries) have nothing in common with the distinctive things about Prolog (backtracking). They both have pattern-matching, but so do lots of other languages (even Python to a small extent).
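For reference, the "small extent" of pattern matching in Python mentioned above is roughly this kind of structural unpacking (nothing like matching on binaries):

    # The modest pattern matching Python has: destructuring assignment and unpacking.
    point = (3, 4)
    x, y = point                    # match a 2-tuple into two names
    (a, b), c = (1, 2), 3           # nested structure destructures too
    for key, value in [("a", 1), ("b", 2)]:
        print(key, value)           # each pair is unpacked as it is iterated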
I'm pretty sure Prolog is still being used, but it doesn't have the buzz of, say, Fortran or COBOL or Pascal.
In the quick looks I've taken at each language, I found Prolog much easier to understand than Erlang. Prolog's pure logic syntax is simple if you already know first-order expressions. Erlang borrows some syntax from Prolog but doesn't have Prolog's simplicity.
Erlang has its roots in Prolog (it was originally built on top of Prolog, actually...). Software Engineering Radio's episode 89 is a great listen for anyone interested in Erlang.
It's one of my favorite languages, and is rapidly crowding out Python. It's a tremendously clean and powerful language. Since Lua was designed for embedding, its design favored adding a few meta-features that could be used to build task-appropriate features into the language, rather than adding to the language core. It's also a relatively quick language to learn, and is extraordinarily portable - the whole language is written in pure ANSI C.
It also has tail-call optimization, closures, coroutines, and other aspects that are interesting from a pure CS standpoint, such as the way the table datatype is implemented and its register-based VM.
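Two of those, closures and coroutine-style control flow, are easy to sketch with rough Python analogues (Lua's coroutines are first-class and its VM is register-based; none of that shows up here, and the names are made up):

    # Closures and a generator-based stand-in for coroutines.
    def make_counter():
        n = 0
        def bump():                 # closure: captures and updates n
            nonlocal n
            n += 1
            return n
        return bump

    def producer():
        for item in ("a", "b", "c"):
            yield item              # suspend here, resume on demand

    c = make_counter()
    print(c(), c())                 # 1 2
    print(list(producer()))         # ['a', 'b', 'c']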
It's great as an embedded language. The implementation is so tiny you can effortlessly change the language to change whatever it is you don't like about it.
The C++ bindings aren't so great, last time I checked. The documentation is not of a consistently high quality, but it doesn't matter much because, once again, the language isn't very complex.
I prefer lua to any other embeddable language, including Python.
Playing with Lua is a blast. I don't have much experience in using it for anything but a third-order language (e.g. writing programs that write programs), but it's very lightweight, fast, and easy to pick up. I view it as a language that I use when I need the power of a programming language (as opposed to a one-off program), but not enough to justify using Perl/Python/Ruby.
There are some parts that I did find annoying, however, like the fact that in the standard libraries array indexing starts at 1 instead of 0 and that it gets a bit kludgy when you try and add lots of things that are not part of standard language features, but nothing you can't overcome.
Nice to have all the up-and-coming languages lined up in an internal comparison. I like Lua, Haskell and Io. I think that future languages will have to have a cleaner rather than an uglier look and syntax. For that reason, I have a visceral reaction to Scala and Factor. Even the one-line examples I've seen of these languages look like gibberish. I'm sure they are great in many ways, but I don't want them to succeed since they will make my brain hurt more. Also, I think that they won't succeed, for similar reasons.
Don't forget Eiffel and its progeny. Design By Contract and heavy use of assertions feels to me like it's closely related to Test First development. You also get fast runtime code out of it (as in C++ fast) in a very Pure-OO environment. (By many accounts, even more so than Java and C++.)
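For anyone who hasn't seen the style, here's a hedged sketch of what contracts-as-assertions look like (plain Python asserts standing in for Eiffel's built-in pre/postconditions; the withdraw example is invented):

    # Design-by-contract flavour with ordinary assertions.
    def withdraw(balance: float, amount: float) -> float:
        # preconditions
        assert amount > 0, "amount must be positive"
        assert amount <= balance, "insufficient funds"
        new_balance = balance - amount
        # postconditions
        assert new_balance >= 0
        assert abs((balance - new_balance) - amount) < 1e-9
        return new_balance

    print(withdraw(100.0, 30.0))    # 70.0; a violated contract raises AssertionError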
I played with Eiffel for a time, but found that Design By Contract was a bit overly restrictive for amateur programming. Now that I'm doing more professional stuff, I think it's worth looking at again. There's also a lot of Eiffel influence in Ruby, so moving over from Ruby is not so hard.
What I've found is that you actually want restrictive when you're doing maintenance. You want to know as much as possible about the code you're changing. You want to know exactly what's in that instance variable.
When you're doing prototyping, you want to delay that decision as long as you can, so you are making the most informed decision possible. And nothing informs your decisions like the act of developing and getting feedback from users.
Optional static typing gives you the best of both worlds. But I don't know how you'd get the equivalent with Design By Contract/Unit Testing. Both of those feel a little unnatural to me. I'd rather just tinker, and not have to set up extra stuff before or after I code.