

Programming languages worth checking out  - reazalun
http://www.h3rald.com/articles/10-programming-languages

======
felixmar
11. F#

F# is a mixture of C#, OCaml and Haskell. F# will be included in the next
version of Visual Studio so it's no longer just a research language.
Performance is similar to C# but coding in it is faster in my experience and
above all a lot nicer if you like functional programming. Because it runs on
the .NET VM or Mono VM it has good multicore support unlike OCaml. Haskell is
more powerful as a language, but also has its weak aspects. The main thing F#
is missing imo in comparison to Haskell is type classes. And of course the
Haskell community.

~~~
gaius
If Microsoft can work out a sane licensing arrangement for compute
farms/clusters/clouds (whatever we're calling them this week) of Win 2008
Server, then F# is a game-changer of a similar scale to Google's MapReduce.
That's a big _if_ tho'.

Having said that, Windows is cheaper than RHEL already...

~~~
kragen
I think I can see how MapReduce could be argued to be a "game-changer"
(although I'm not sure it's true), but why F#? If I understand correctly, it's
basically OCaml on the CLR. So you get a Python-like brevity of code (and ease
of programming? maybe it has better error messages than OCaml?) with C#
performance and access to the CLR libraries. That sure sounds useful, but why
is it a "game-changer"?

~~~
gaius
Because functional languages are a good choice for extracting as much
parallelism as you can get from your algorithms.

~~~
kragen
No.

CUDA yes. Cg yes. HLSL yes. Verilog yes. VHDL yes. C++ or Java with MapReduce
yes. PHP and MySQL with memcached yes. Erlang yes — and it really _is_
functional, inside each process, anyway — that is, the level where you _aren't
getting any parallelism_. Octave or R, potentially, but not today, as far as I
know. Mathematica yes, and it, too, is mostly functional.

In theory, side effects are what make parallelism hard, and so languages whose
semantics are side-effect-free (unlike F# or Mathematica or Erlang) should
make it easy. So we all thought in 1980. Since then we spent 20 years or so
trying to make that happen, and it basically didn't work.

There are basically four kinds of parallelism within easy reach today. There's
SIMD, like MMX, SSE, 3DNow, AltiVec, and the like; you'd think that data-
parallel languages and libraries like Numpy and Octave would be all over this,
but except for Mathematica, that doesn't seem to be happening. There's running
imperative code on a bunch of tiny independent processors that share no data;
AFAIK that's what the shader languages are doing. There's instruction-level
parallelism on a superscalar processor, which largely benefits from things
like contiguous arrays in memory, or maybe what Sun is doing with Niagara,
where the processor pretends to be a bunch of tiny independent slow
processors. And then there's splitting up your data across a shared-nothing
cluster, which is how every high-traffic web site works, and that's what
MapReduce makes simpler.
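A toy word count makes that last kind concrete: the data is partitioned into shared-nothing shards, each shard is mapped independently (run serially in this sketch; on a real cluster each shard would live on its own node), and the partial results are merged in a reduce step.

```python
from collections import Counter

def map_shard(lines):
    # Each shard is processed with no shared state -- this is the
    # embarrassingly parallel step a cluster can spread across nodes.
    counts = Counter()
    for line in lines:
        counts.update(line.split())
    return counts

def reduce_counts(partials):
    # Merge the per-shard partial results into one final count.
    total = Counter()
    for p in partials:
        total.update(p)
    return total

shards = [["a b a"], ["b c"], ["a"]]
partials = [map_shard(s) for s in shards]
result = reduce_counts(partials)
```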

Uh, and then there's designing your own hardware or programming FPGAs, which
is what Verilog and VHDL are for.

Languages like OCaml (I don't know anything about F# except that it's like
OCaml, but for the CLR) have no special advantage for any of these scenarios.
They don't even have the _theoretical_ advantage that they have no side
effects and therefore you can speculatively multithread them without breaking
the language semantics. They do have the massive practical disadvantage, in
most of the scenarios I described, of needing unpredictable amounts of memory,
having massive libraries, and using pointers all over the place. Using
pointers all over the place kills your locality of reference and your ILP.
Having massive libraries and using unpredictable amounts of memory makes it
impossible to run them inside your GPU and means they can't run on an FPGA
(except by using external memory, like the awesome Reduceron). And nothing
about the language semantics helps with SIMD either.

So, sheesh, go read Alan Bawden's dissertation or whatever, but don't go
around claiming that ML (or even Haskell) is going to magically make your
algorithms parallel. We tried that. It didn't work. We're trying something
else now.

~~~
khafra
Just for my own edification, since you seem quite familiar with the subject:
Is Obsidian* a big fat waste of time that'll never be as good as just
compiling Haskell for a CPU like a normal person? 'cause I was considering
investing some time in learning it, when I wouldn't have the free brain cycles
for CUDA.

* <http://www.google.com/search?q=obsidian+haskell>

~~~
kragen
I didn't know about it! From a cursory look (all I can find is presentation
slides?), it doesn't sound like you'll be able to take any off-the-shelf
Haskell subroutines and run them in parallel on the GPU; rather, it's an
embedded DSL for constructing shader programs. So I imagine you'd have to
learn both CUDA and Obsidian to use it.

But it would be awesome if someone came along and proved me wrong.

~~~
khafra
Thanks for the read--now, how 'bout another?
<http://www.cse.unsw.edu.au/~chak/papers/gpugen.pdf>, another Haskell-embedded
GPU DSL, claims to work at a higher abstraction level than Obsidian, but still
provide high performance general-purpose GPU programming capabilities.

------
agentcoops
I find Self to be worth checking out, as well
(<http://research.sun.com/self/>) if for no other reason than seeing how fast
dynamic languages can be. The research that went into it--techniques for
blazing fast implementations of highly dynamic, object-oriented languages--has
become especially relevant and provides the basis for many of the new
JavaScript VMs (especially V8).

With regards to Smalltalk and its incredible reflective abilities, this video
from OOPSLA'08 ("Smalltalk Superpowers") is particularly amusing:
<http://www.veoh.com/videos/v163138695pJEMGmk>. Ungar even does a pretty neat
demo of Self.

~~~
gaius
Self seems to be a dead project?

~~~
agentcoops
Actually, checking out the cvs repository on sourceforge
(<http://self.sourceforge.net/>), there seem to have been some recent commits;
though, in general, I think you're right that there isn't much active
development.

There's also work on a linux port at
<http://www.gliebe.de/self/download.html>, but I think it might be relatively
defunct as well.

------
Jasber
I'm curious if anyone has any experience with Io (<http://iolanguage.com/>).
The description sounds like an appealing combination:

  _Its unusual, minimalist and yet elegant and powerful syntax reminds one of
Smalltalk, but the language goes far beyond that. Io is an object-oriented,
prototype-based, message-based and fully-reflective programming language. This
means that you use messages like in Smalltalk, you create objects like in
Javascript and every bit of your code can be inspected and passed around as
you see fit._
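A rough sketch of the prototype idea in Python (slots plus delegation to a parent object -- none of this is Io's actual API, just the shape of the model):

```python
class Proto:
    """An object with no class: a dict of slots and a link to a
    prototype that unknown lookups are delegated to."""
    def __init__(self, proto=None, **slots):
        self.proto = proto
        self.slots = dict(slots)

    def get(self, name):
        if name in self.slots:
            return self.slots[name]
        if self.proto is not None:
            return self.proto.get(name)   # walk the prototype chain
        raise AttributeError(name)

animal = Proto(sound="...")
dog = Proto(proto=animal, sound="woof")   # overrides a slot
puppy = Proto(proto=dog)                  # inherits by delegation, not copying
```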

~~~
richcollins
I co-wrote (with Steve) SoundConverter for Windows:

<http://dekorte.com/projects/shareware/SoundConverter/>

The interface talks to an Io "server" which handles business logic.

We're currently working on a shopping aggregation startup. The crawler is
written in Io.

Pixar's RenderMan image tool uses Io as its scripting language:

<https://renderman.pixar.com/products/tools/it.html>

Io is still under development so you might run into the occasional bug. Bugs
are encountered more often in the addons than in the core interpreter (which
is stable).

~~~
thomasmallen
Ah, I see you're the creator of Io. Do you know of any Io tools for Vim?
Syntax and indent would be great if nothing else...Google didn't turn up
anything.

~~~
richcollins
No, Steve Dekorte is the creator of Io. I work with him on some projects
(including Io).

I'm not sure about vim. I use TextMate (there is an Io bundle).

~~~
thomasmallen
Oh, I saw the link and didn't read your username clearly. I have the TextMate
bundle here...I suppose _some_ interested person will have to get to writing a
few io.vims :^)

------
russell
Is anyone using D? Is it a lot better than C++ or just a little?

~~~
old-gregg
There is also an issue with the implementation: the official implementation
from Walter is awkwardly packaged and lacks the source (you can't build it
yourself). This pretty much turns it into a toy for Linux/OSX programmers: you
can't distribute your D code, since nobody will figure out how to build it.

And the GCC-based implementation is lagging behind in features, and also scores
consistently 10-30% slower than C/C++ in benchmarks, which kills D's appeal as
a performant replacement for C.

What Walter needs to do, IMO, is to abandon his own implementation completely
and closely work with GCC/Linux community to include high-quality D into
standard GCC package. _Then_ we may start seeing decent software written in
it.

------
mattmaroon
Noticeably missing from this is Maroon, the Ultra High Level Programming
Language

<http://mattmaroon.com/?p=337>

I guess I should have paid that guy on elance his $1,000 to code it up in C
for me.

------
tptacek
_"Unlike other Lisps (and Schemes) you may have encountered before, Clojure
comes with some interesting additions: [...] Many pre-built data structures,
like Vectors, Maps, Sets, Collections, …"_

Here I stopped reading.

~~~
marcocampos
Why? It's a nice addition to the language and makes using it a much better
experience.

My first big project in LISP was a Sudoku solver for a school assignment. We
could only use basic instructions and lambdas. It was not a pleasant trip :P

~~~
smanek
But common lisp does come with vectors (make-array), hashtables (make-hash-
table), and set-operators (intersection, union, pushnew, etc) that work on
lists.

Also, Common Lisp (CLOS, in particular) supports multimethods (defmethod),
contrary to what the author claims. And all of the special forms I saw in
clojure have a counterpart in common lisp.
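A toy sketch in Python of what defmethod's multiple dispatch buys you -- the implementation is picked by the runtime types of _all_ the arguments, not just the receiver (the registry here is invented for illustration, not any real library):

```python
_registry = {}

def defmethod(name, *types):
    """Register a function as the implementation of `name` for
    this exact tuple of argument types."""
    def register(fn):
        _registry[(name, types)] = fn
        return fn
    return register

def call(name, *args):
    # Dispatch on the runtime types of every argument.
    fn = _registry[(name, tuple(type(a) for a in args))]
    return fn(*args)

@defmethod("combine", int, int)
def _(a, b):
    return a + b

@defmethod("combine", str, str)
def _(a, b):
    return a + " " + b
```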

In fact, a lot of common lisp implementations (I like SBCL) have support for
threading/asynchronous action built in. The one problem is that none of that
is part of the ansi spec, so most multithreaded code won't be portable between
implementations.

It isn't the language's fault that your teacher wouldn't let you use all these
features.

~~~
gruseom
What's distinctive about Clojure is not that it has them, but rather that it
makes them first-class citizens in the way that lists are - i.e. provides a
universal set of operators for manipulating them (or so I hear anyway). This
is certainly a weakness of CL. I think this is the original point underlying
the author's garbled statement, and it's an improvement specifically to CL
that has nothing to do with the presence of arrays and hashtables in the
language.

Edit: I realize this is probably obvious, but let's not confuse what this guy
says about Clojure with what Hickey has to say. He's well aware of CL and
doesn't make silly claims about it.

~~~
smanek
That's very true. I personally like clojure - and it does provide a lot of
potential advantages over Common Lisp. I just don't think the author of this
article understands them - he's just spewing bs.

Hickey does make a lot of valid points about the advantages/differences of
clojure (<http://clojure.org/lisps>) though.

------
caustic
What about Prolog? Why isn't it in the list?

~~~
Tichy
Is it still being used at all? I would be interested in it, too (and I don't
think Erlang is very similar to it at all???).

~~~
kragen
Erlang gets a lot of its syntax and implementation technology from Prolog, but
it's true that the distinctive things about Erlang (massive shared-nothing
concurrency, supervision trees, tuples, pattern-matching on binaries) have
nothing in common with the distinctive things about Prolog (backtracking).
They both have pattern-matching, but so do lots of other languages (even
Python to a small extent).
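Python's "small extent" is essentially destructuring assignment over sequences -- a pale version of the clause-head matching in Erlang or Prolog:

```python
# The shape on the left must match the value on the right,
# binding the pieces by position.
status, (code, reason) = "error", (404, "not found")

# Head/tail split, roughly like [H|T] in Erlang or Prolog.
head, *tail = [1, 2, 3, 4]
```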

I'm pretty sure Prolog is still being used, but it doesn't have the buzz of,
say, Fortran or COBOL or Pascal.

~~~
tlrobinson
Wasn't Erlang originally implemented in Prolog?

~~~
kragen
Yes.

------
trevelyan
Experimenting with Lua this evening. I'd be curious to hear if others have had
good/bad experiences with it.

~~~
silentbicycle
It's one of my favorite languages, and is rapidly crowding out Python. It's a
_tremendously_ clean and powerful language. Since Lua was designed for
embedding, its design favored adding a few meta-features that could be used to
build task-appropriate features into the language, rather than adding to the
language core. It's also a relatively quick language to learn, and is
extraordinarily portable - the whole language is written in pure ANSI C.

It also has tail-call optimization, closures, coroutines, and other aspects
that are interesting from a pure CS standpoint, such as the way the table
datatype is implemented and its register-based VM.
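For a feel of the coroutine part, here is a rough analogue using Python generators (Lua's coroutines are more general, but the yield/resume shape is the same):

```python
def producer():
    # Each yield hands control back to the caller; local state
    # is preserved across resumptions -- like coroutine.yield in Lua.
    for item in ["a", "b", "c"]:
        yield item

co = producer()      # roughly coroutine.create
first = next(co)     # roughly coroutine.resume
second = next(co)    # resumes where the last yield left off
```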

For a good taste of Lua, skim "The Implementation of Lua 5.0"
(<http://www.tecgraf.puc-rio.br/~lhf/ftp/doc/jucs05.pdf>), the quick intro to
Lua in the LPEG paper (<http://www.inf.puc-rio.br/~roberto/lpeg/lpeg.html> ,
around page 6 of the PDF), or _Programming in Lua_ (<http://www.inf.puc-
rio.br/~roberto/pil2/>). The first edition of the latter is online, though the
second edition has a bunch of added material, and IMHO is well-worth buying if
you get into Lua. Also, check out LuaJIT (<http://luajit.org/>), if you're
using i386 hardware -- it's a _FAST_ JIT.

------
brent
Try minikanren for all your logic programming needs:
<http://kanren.sourceforge.net/> . :)

------
joe_the_user
Hmm,

Nice to have all the up-and-coming languages lined-up in an internal
comparison. I like Lua, Haskell and Io. I think that future languages will
have to have a cleaner rather than an uglier look and syntax. For that reason,
I have a visceral reaction to scala and factor. Even the one-line examples
I've seen of these languages look like gibberish. I'm sure they are great in
many ways, but I don't want them to succeed since they will make my brain hurt
more. Also, I think that they won't succeed for similar reasons.

~~~
stcredzero
Don't forget Eiffel and its progeny. Design By Contract and heavy use of
assertions feels to me like it's closely related to Test First development.
You also get fast runtime code out of it (as in C++ fast) in a very Pure-OO
environment. (By many accounts, even more so than Java and C++.)
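The flavor of Design By Contract can be sketched in Python with a decorator that checks a precondition and a postcondition around each call (in Eiffel, require/ensure clauses are part of the language itself and are inherited by subclasses):

```python
def contract(pre, post):
    """Wrap a function so its precondition is checked on the
    arguments and its postcondition on the result."""
    def wrap(fn):
        def checked(*args):
            assert pre(*args), "precondition violated"
            result = fn(*args)
            assert post(result), "postcondition violated"
            return result
        return checked
    return wrap

@contract(pre=lambda x: x >= 0, post=lambda r: r >= 0)
def isqrt(x):
    return int(x ** 0.5)
```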

~~~
jballanc
I played with Eiffel for a time, but found that Design By Contract was a bit
overly restrictive for amateur programming. Now that I'm doing more
professional stuff, I think it's worth looking at again. There's also a lot of
Eiffel influence in Ruby, so moving over from Ruby is not so hard.

~~~
stcredzero
What I've found is that you actually want restrictive when you're doing
maintenance. You want to know as much as possible about the code you're
changing. You want to know exactly what's in that instance variable.

When you're doing prototyping, you want to delay that decision as long as you
can, so you are making the most informed decision possible. And nothing
informs your decisions like the act of developing and getting feedback from
users.

Optional static typing gives you the best of both worlds. But I don't know how
you'd get the equivalent with Design By Contract/Unit Testing. Both of those
feel a little unnatural to me. I'd rather just tinker, and not have to set up
extra stuff before or after I code.

