Sisal: a functional parallel programming language (uea.ac.uk)
110 points by panic on June 27, 2016 | 36 comments



Sisal is one of the true pioneers of high-level parallel programming. It was roughly contemporary with (slightly preceded) NESL[0], another data-parallel language. Sisal was probably simpler and more pragmatic. I'm not sure why none of these languages ever left the confines of academia. I have tried NESL, and it was probably the most natural parallel programming experience I've seen. Perhaps performance was sub-par in practice?

A modern "spiritual successor" (if you squint a bit) of Sisal is Single Assignment C (SAC), which despite its name, is a pure functional language. SAC is also undeservedly obscure, although that is possibly a result of the compiler being closed source.

[0]: https://www.cs.cmu.edu/~scandal/nesl.html
[1]: http://www.sac-home.org/


Back in the '90s, when I was at university, I got hold of a book describing a Lisp-based language for parallel computation.

It was a very interesting book about a computer architecture that, thanks to its FP abstractions, would in theory let users get automatic scaling for free.

Something like Clojure's parallel abstractions, but in a Lisp Machine-like environment, not a plain language runtime.

However, I cannot recall anything else about the book, other than that the cover might have been green.


It might have been *Lisp (StarLisp) for the Connection Machine. Incidentally, the Connection Machine also seems to have been one of the main platforms for NESL (which might explain why NESL went nowhere...).



There are a few videos on YouTube; I enjoyed this one: https://www.youtube.com/watch?v=IjmostrFetg


My memory is a bit fuzzy in that regard, but it looks like that one.

Thanks for pointing it out.


That sounds to me like Gary Sabot's ‘Paralation Model’.

https://www.amazon.com/Paralation-Model-Architecture-Indepen...


Perhaps people are turned off by the language being way more verbose than is necessary. Why would anyone want to type out that it is a function and the name of the function again when closing the definition of said function?


> the name of the function again when closing the definition

I think that's just a comment. It follows '%' as in TeX.

But yes, nowadays programmers tend to love concise syntax (myself included), not to mention elegant syntax, which is even harder to define.

For example, we make a big deal about being able to type 'x => x * 2' instead of 'function (x) { return x * 2; }' in JavaScript, or '[]' instead of 'array()' in PHP. One of the draws of ML-derived functional languages (such as Haskell), and maybe of APL-derived ones, is that they are very much 'shorthand' programming languages.

At the same time, one should not stop at the syntax, but try to understand the ideas behind a programming language. Syntax can always be improved with a bit of aesthetic sense, but good ideas are hard to come by.


It's hard to beat Haskell operator sections where you can just write

  (* 2)
and get an anonymous function that does

  \x -> x * 2


In K, you could just say

  2*


In Scala and in Factor too! ;-)


It would be nice if Haskell could figure out (-) sections; as it is, it always considers things like (- 2) to be a number, not a function (even when only a function would typecheck).


ML might have had the right idea, using op~ for unary negation and op- for binary subtraction. I definitely would not want parsing to depend on type declarations; that's a slippery slope towards Perl...


I suspect that you could do this already using typeclasses. We want (-2) to be a function of the type `Num a => a -> a`, so we could, in theory, make functions of this type an instance of Num.

I do not immediately see this causing problems for correct programs, but it would probably lead to horribly confusing problems in incorrect code, including converting some type errors into runtime errors.

A simpler approach is to say that you cannot have a space after the unary "-", so (-2) parses to a numeric literal, and (- 2) parses to a function.
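A small sketch of the section behavior under discussion, using only standard Prelude Haskell (the names are made up for illustration):

```haskell
-- A right section of (*): equivalent to \x -> x * 2
double :: Int -> Int
double = (* 2)

-- (- 2) parses as the negative literal -2, so the missing left section
-- of subtraction is spelled with the Prelude's 'subtract' instead
minusTwo :: Int -> Int
minusTwo = subtract 2

main :: IO ()
main = print (double 4, minusTwo 4)  -- prints (8,2)
```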


If APL were released today, it would probably be a huge success.


Not at all. Syntax is only the first step. Then comes imperativeness. If you can't write what's essentially Java code in its syntax, it's not going to be a thing.


Agreed. You have your bits of logic that are great for a functional sort of style...map and fold and transform this data...then sometimes you have a script-like area that really benefits from an imperative style...open this, save this off, close this, if error do this, etc.


A closed source compiler is a non-starter for anyone looking to use a language for any purpose beyond tinkering / play.

I wouldn't call it undeservedly obscure.


CMU uses SPARC for their Parallel Algorithm Design course: http://www.parallel-algorithms-book.com/ I wonder how that compares.


SAC looks like a dead project; the last release was posted in Feb 2014.


It's not dead, it's just... deliberate. Development is supposedly occurring inside a (non-public) source repository, according to a developer I spoke to recently.


You mean they just want people to think their project is dead, while they secretly keep working on it? I don't get it...


The Sisal source is available here: https://sourceforge.net/projects/sisal/


Is it still actively maintained? The CVS repository looks fairly dead.


As a reply to my own comment: the source distribution is fairly modern, with a decent `configure` script and all. Compiles out-of-the-box on my Debian machine, so it's not just a code dump from the early 90s. This might be fun to play around with.


Indeed. The last time I tried (about 5~6 years ago, I think) some patches were needed, but nothing that required a huge investment of time. Unfortunately it's not Sisal90, which had other goodies (starting with true multi-dimensional arrays, not just arrays of arrays).

If you are digging into this, take a look at the parts that call the backends. Many optimization passes are hard-coded to be disabled by default; at least that's how it was when I picked it up. There was not much interest in merging my changes, and I thought a fork would be rather rude and presumptuous (I am hardly the compiler-writer type).



Yeah. I wrote my master's thesis on the topic and worked for Dr. Dan Cooke, who created the language. SequenceL falls under the data- and control-parallel language category. I was also playing with parallel Haskell around that time (2003), when you had to explicitly give it hints for parallelization. With SequenceL, nothing like that is required; the problem was too much parallelism. We could find all the parallelism, no problem; distributed computing, load balancing, etc. were some of the issues the group was looking at. My own work was to develop an interpreter in Python, executed on tuple spaces, a concept popularized by David Gelernter.


Makes me think of http://futhark-lang.org/ in terms of semantics.


Probably not coincidental. There's only so many ways to construct a data-parallel language.


Rather humble of you ;)


Sisal was used to program the Manchester Prototype Dataflow Computer (https://courses.cs.washington.edu/courses/csep548/05sp/gurd-...).


A SourceForge repository: https://sourceforge.net/projects/sisal/


How is this much different from Erlang?


The linked TOC is long and I've only skimmed it, but I wouldn't call Erlang a data parallel language per se. When I hear "data parallel" I'm thinking of a language with abstract data types (ADTs) for lists, and maybe sets, bags, and dictionaries too. There are also language intrinsics like join, map, etc. that can and will automatically be optimized and will "lift" computations into a parallel or distributed context for you. There are various minimal sets of core datatypes and intrinsics that a language can provide to achieve this.

The closest thing that most programmers are familiar with is a relational database. Imagine a language that treats certain data structures like an RDBMS treats a table and does the sort of optimizations that a query optimizer does when you run an intrinsic (either statically at compile time or dynamically at runtime). That's data parallelism.
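As a rough illustration of the distinction in Python (a hypothetical stand-in: the executor here makes the parallelism explicit, whereas a data-parallel language would lift the map for you):

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

data = list(range(8))

# In a data-parallel language, 'map' over a collection is an intrinsic the
# compiler/runtime is free to parallelize; here we have to opt in explicitly.
with ThreadPoolExecutor(max_workers=4) as pool:
    result = list(pool.map(square, data))

print(result)  # [0, 1, 4, 9, 16, 25, 36, 49]
```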

Erlang has facilities to make this sort of thing possible, but last I looked it doesn't do it for you.

If anyone who's actually an expert wants to step in and correct my assumptions please do so!




