IIRC, the Haskell solutions were disqualified, and actually for good reasons. The competition was about architecture of the system, with the geoserver an example component. Notice that for example Rapide is an Architecture Description Language (ADL).
As far as I can tell, the Haskell team(s) only concerned themselves with the (purposely) simplistic/simplified functionality of the geoserver component, and not the architectural concerns that were the actual point of the competition.
Which, quite frankly, is something I see frequently from the FP community: declare that <complicated actual problem> is really <much simpler problem easily solved by FP>, solve the simpler problem that ignores almost all the real requirements, declare victory and go home.
Why do you feel they avoid real problems? When did you see that?
Lesson: the "real requirements" typically include that the solution doesn't require learning anything new or weird.
Stripping out the specifics, the problem was: "write some code that blocks until one of a set of memory locations changes its value". The Haskell/STM API provides exactly that interface (where the memory locations you block on are those that you looked at in the transaction). Rephrasing then, the question was "how would you implement a simplified version of the Haskell STM blocking mechanism?", and your solution was "I would use the Haskell STM blocking mechanism", answering the question as though it were one of API usage rather than of API implementation.
That's probably fine as a question of workaday engineering (and maybe an argument for Haskell over C++ on the basis that it has more extensive libraries built-in), but somebody's gotta write that STM implementation, and that's what the challenge was trying to get at.
(In fact, the GHC STM implementation does exactly what the solution I outlined does: with each TVar, it keeps a pointer to threads parked waiting for the TVar's value to change, and unparks all those threads when a transaction modifying the TVar commits).
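For readers who haven't used it, the API in question looks roughly like this (a minimal sketch; `waitForChange` and `demo` are illustrative names, not part of the stm library):

```haskell
import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.STM

-- Block until the watched TVar holds something other than `old`.
-- `check` calls `retry` when its argument is False; the runtime
-- then parks the thread on the TVar's watch list and wakes it
-- only after a transaction that writes to the TVar commits.
waitForChange :: Eq a => TVar a -> a -> IO a
waitForChange tv old = atomically $ do
  v <- readTVar tv
  check (v /= old)
  return v

-- Tiny demo: a writer thread updates the TVar after a delay,
-- and the calling thread blocks until the change lands.
demo :: IO Int
demo = do
  tv <- newTVarIO (0 :: Int)
  _  <- forkIO $ threadDelay 10000 >> atomically (writeTVar tv 42)
  waitForChange tv 0
```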
However, "real requirements" often do include relevant things that aren't stated. I don't know if this was the case here, but "concurrency" is often something you need because of "performance", and so a Haskell STM solution may not be what your C++ hacker is looking for.
The prime example of this is the Haskell "Quicksort" implementation, which is, of course, not actually quick and only vaguely related to the idea of Quicksort.
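For reference, the demo in question is usually presented as something like this (reconstructed from memory; variable names are mine):

```haskell
-- The oft-quoted "Haskell quicksort": elegant, but it allocates
-- fresh lists at every level of recursion instead of partitioning
-- in place, so it has none of the in-place, cache-friendly
-- behavior that makes real quicksort quick.
qsort :: Ord a => [a] -> [a]
qsort []     = []
qsort (p:xs) = qsort [x | x <- xs, x < p] ++ [p] ++ qsort [x | x <- xs, x >= p]
```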
Another example was SPJ's talk about parallel programming in Haskell, where the result was that a 6 core Haskell version was equivalent to a single core C solution. In the same talk he also boasted that "this is only possible with Haskell", to which an audience member replied that the HPC community had been doing this for well over a decade with FORTRAN.
The problem with the FP community is that there are so many examples of the latter going on that the true cases of the former tend to be less believable.
Reminds me of a Rich Hickey talk where he proudly declared that a Clojure solution based on its persistent vector type running on four cores was as fast as the Java/mutable vector solution running on one!
The point there was supposed to be that Clojure's inefficiency is more than counterbalanced by the fact that it makes writing parallel code easier.
I was not convinced, though.
> In the same talk he also boasted that "this is only possible with Haskell", to which an audience member replied that the HPC community had been doing this for well over a decade with FORTRAN.
The FP advocates always seem to use examples which show non-interactive processing of "embarrassingly parallel" vector data. And yes, that is trivial in any language.
Even in C land all you need to do is to add one keyword / annotation to your for-loop and you are good.
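To be fair, the Haskell side of this class of toy demo doesn't need much ceremony either. Here's a dependency-free fork/join sketch using only base (the talks in question typically use `par`/`parMap` from the parallel package; `parMapIO` is a made-up name):

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

-- A minimal fork/join parallel map: spawn one lightweight thread
-- per element, force the result ($!) inside the worker, then
-- collect the answers in order. Illustrative only; real code
-- would chunk the work or use `parMap` from the parallel package.
parMapIO :: (a -> b) -> [a] -> IO [b]
parMapIO f xs = do
  vars <- mapM spawn xs
  mapM takeMVar vars
  where
    spawn x = do
      m <- newEmptyMVar
      _ <- forkIO (putMVar m $! f x)
      return m
```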
Yeah but at least it's lazy! ;) If you only call (sort) because you're going to (take), say, the 4 smallest/latest/whatever-est from the result, it'll only do just enough sorting to hand you those 4; the unneeded values get discarded (usually) still partially unsorted --- no matter the total size.
(Still, there may no doubt well be better quicksorts for Haskell out there..)
Of course you can write this sort of specific short-cutting in any language manually, but with lazy-evaluation it's just the standard behaviour for any naive `(take 4 (sortBy mysorter mydata))`-type expression.
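Concretely, with the `mysorter`/`mydata` placeholders swapped for stand-ins (`smallest4` is an illustrative name):

```haskell
import Data.List (sort)

-- Thanks to lazy evaluation, `take 4` forces only as much of the
-- sorted list as is needed to produce the first four elements;
-- how much sorting work that actually saves depends on the
-- algorithm (Data.List.sort is a lazy merge sort).
smallest4 :: Ord a => [a] -> [a]
smallest4 = take 4 . sort
```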
(Takeaway regarding "quick" above: performance and efficiency in lazily-evaluated pure functional languages depend to a large degree on the current live data; predictability consequently gets harder.)
;) Yeah, except that the overhead of lazy is often (usually?) more than the "overhead" of doing the whole computation eagerly.
> just sort-enough to give you these 4 sorted
And how is that going to work in a sort? The 4 first elements can be located anywhere and have to be brought into position.
There are "first k of n" algorithms, but these are (last I checked) quite different from generic sorting algorithms, and I doubt that a lazy sort in Haskell actually magically turns into one of those algorithms.
Happy to be proved wrong!
But, as you mentioned, the constant factors involved are ridiculously high, and this is not going to give you a practical solution.
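For what it's worth, a quickselect-style "smallest k" is short to sketch in Haskell as well (illustrative code and names, not a claim about what a lazy sort turns into; the result is not guaranteed to come back sorted):

```haskell
-- Quickselect-style selection: partition around a pivot and
-- recurse only into the side that can still contain members of
-- the answer -- average O(n), unlike a full O(n log n) sort.
smallestK :: Ord a => Int -> [a] -> [a]
smallestK k _ | k <= 0 = []
smallestK _ []         = []
smallestK k (p:xs)
  | k <= nlo  = smallestK k lo
  | otherwise = lo ++ p : smallestK (k - nlo - 1) hi
  where
    lo  = [x | x <- xs, x <  p]
    hi  = [x | x <- xs, x >= p]
    nlo = length lo
```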
Yes, it is vague. This was 20 years ago.
> Do you have any more details?
If you read the text carefully, and strip it of the "we were so awesome and those idiots didn't understand just how awesome we were" you can actually see quite a bit of what I was saying in the text itself.
For example: "... neither [of the authors] attended the kickoff meeting at NSWC". So it's not entirely surprising if (as I suggest), they weren't fully aware of the actual purpose [especially if, as the other paper suggests, there were relevant discussions that preceded the kickoff meeting].
Or the following: 'One observer described the solution as “cute but not extensible” (paraphrasing); this comment slipped its way into an initial draft of the final report, which described the Haskell prototype as being “too cute for its own good”'
Again, the authors frame this as the judges not understanding how wonderful their solution was, i.e. "everybody else is an idiot". Well, maybe everyone else isn't an idiot, and maybe the evaluation had something to do with not really contributing to the actual software-architectural purpose, which in turn had something to do with the authors not being there for the kickoff meeting?
Of course, I wasn't there either, but this sure is what it looks like to me.
Here is another paper, which talks more about the original problem and purpose of the competition, which was to be a sample problem (with sample solutions) for software architecture:
I remember seeing a more comprehensive report, which also had larger code samples and talked about the disqualifications, but I can't seem to find it right now.
> It happened that some modifications and extensions to Reference 1 were agreed to at the experiment kickoff meeting at NSWC. Dr. Jones and Dr. Hudak were not informed of those extensions until after the completion and review of the initial draft of Prototype 1. [...]
> Prototype 2: We believe that our Prototype 2 will be most comparable to the prototypes prepared by the other teams. This prototype satisfies all mandatory requirements of the problem as defined in References 1 and 2, and also incorporates generalizations to support anticipated future requirements. To accomplish this task, the original author of Prototype 1 made a number of modifications and enhancements requiring about five hours [...]
P1 and P3 solutions are attached in appendices, along with some others.
I skimmed through the paper and I don't see where they state the P1 prototype (or any prototype, for that matter) was "disqualified", as user mpweiher claimed. The authors go on to state that they found Haskell and FP to be great for rapid prototyping... and in my opinion, anyone who's ever written a prototype will agree that making changes at short notice is expected :) And Haskell turned out to be very suitable for this. I think they were right to call it a success.
More importantly, that they were able to successfully (and with relatively little effort) generalize and extend the prototype shows that it was indeed extensible. That they were able to provide three prototypes is also telling, because even if some were delivered after the experiment, they were still made in a very short time. Imagine managing this feat in C++ or Ada!
It'd be interesting to see how many of the other teams which completed the assignment also had to implement some changes after the review.
The author singles out the use of higher-order functions as an elegant technique of which the other participants weren't aware, even though some of their languages actually supported it. This technique is precisely what another participant considered "too cute for its own good" -- even though today higher-order functions are hugely useful and relatively mainstream, enough that few devs would find them surprising. Another objection was that the reviewers couldn't believe the Haskell prototype was a finished executable program rather than a top-level design document mixed with some pseudocode... which speaks a lot in Haskell's favor!
To me, Paul Hudak actually sounds humble in his paper. He acknowledges the limitations of the experiment, he is careful not to say this "proves" anything, but is enthusiastic about what it suggests, i.e. that Haskell can be used for rapid prototyping with great success! I'd say this is an uncontroversial conclusion.
It's unfortunate that you cannot back the claim that the Haskell solutions were disqualified. It certainly contradicts the paper, which shows the final scorecard by the review panel...
Well that's precisely the vague stuff you were talking about in your original post, so that's very unfortunate.
Not only that, but it excluded some of the more prominent languages for prototyping of the time: Smalltalk, SML, Eiffel, and Objective-C.
And if you were to do it today, you'd invariably have to include languages that have had 20 years to catch up. Would the same result hold in a world with Scala, Ocaml, F#, Idris, Agda, Swift, and Rust?
Let's face it, this study was a victory for higher order functions and ADTs with pattern matching. Let's not pretend it was anything more than that.
In what sense was this not a victory for Haskell at the time the paper was written? You can always decompose any programming language L into a collection of X, Y, Z features and claim it's really about X, Y, Z instead of L, but is that a meaningful distinction?
I love this paper because it explores the idea that functional languages with static typing can actually be very good at rapid prototyping. This goes against some oft-repeated "common sense" notion that "static typing is good for large systems, but for prototyping it's best to use dynamic typing". It's very illuminating that the two Haskell teams were the only ones that delivered working, feature-complete code, and at the same time they had the fewest lines of code.
...which would have mattered more if the examiners had actually looked at working code, the results hadn't been self-reported by each team, and the requirements themselves hadn't been determined more or less by each team. Seriously, if there is ONE complaint to make, it's that the study wasn't rigorous enough. I'd love for this experiment to be conducted again with modern languages and a more rigorous methodology.
Where are you seeing that the Haskell code was the only "working, feature-complete" code?
The authors say that the Rapide submission "did not address the essential functionality of the geo-server"; I guess it is reasonable to assume that if any of the other submissions had been incomplete, they would have mentioned that too.
It was the exclusion of a handful of popular languages that either strongly intersected with the Haskell of the era (SML) or had other measurable contributions to make to prototyping capability (Smalltalk, Erlang, Prolog). Including SML, at the very least, would have let us know whether Haskell's laziness was a help or a hindrance, and including Smalltalk would have helped us know whether C++'s failings had to do with its reliance on OOP or something else (even in '94, C++ was one of the largest and most complex languages that had ever existed). But they weren't even on the table... a baffling decision given some of the languages that did make it in. It looks like a deliberate bias in language selection, and it makes it look like the conclusion was drawn before the study even began.
From the paper, page 9:
> The Haskell prototype was written by the second author of this paper, with minor help from the first
Seems like they definitely had a horse in the race!
Regardless, the scorecard itself -- showing Haskell to be one of the winners -- was written by the review panel and not by the author of this paper. Those are the cold facts!
Two more interesting facts:
1- The experiment itself was not designed by the FP community, but was an actual (if simplified) system designed by the Navy. No constraints or requirements were tailored to make things easier for FP; in fact most participants were unfamiliar with FP.
2- A second Haskell prototype was written by an independent programmer, unconnected to the paper's author, who was unfamiliar with Haskell and had to learn it before completing the assignment. I find this very interesting.
- C++ needs no explanation.
- Ada was (is?) commonly used in defense systems. It was referred to as the "control group" by the paper's author.
- Some are related to Ada.
- Others seem experimental languages, part of a "ProtoTech" program for the design of prototyping languages. It seems natural to include experimental prototyping languages in an experiment about rapid prototyping!
I expect SML would have fared similarly to Haskell (or Miranda); at this level I would consider both to be more or less the same language. And it would definitely be interesting to see a Smalltalk and more modern languages!
> [the choice of languages] looks like a deliberate bias in language selection, and it makes it look like the conclusion was drawn before the study even began.
This is explicitly shown not to be the case. The experiment wasn't designed by the FP community but by ARPA and the Office of Naval Research; the review panel was unfamiliar with FP; none of the requirements was devised with FP or Haskell in mind. The paper's author is on Haskell/FP's side, but the review panel was not. Of all possible objections, we can safely disregard this one.
They probably wanted to avoid redundancy. It seems like they picked fairly dissimilar languages all around.
So, the bread and butter of Haskell?
There are few languages that do these two things nearly as well as Haskell. All of them are ML derivatives. Haskell, OCaml, and F# come to mind.
Rust has almost-as-good ADTs, but no higher order functions. A number of popular languages support both of these concepts, but not nearly as natively or conveniently.
let a = (0..10).map(|x| x*x).collect::<Vec<_>>();
In terms of real-world uses, I have seen a few hashmaps that store functions as values.
You mentioned storing functions (closures here) in a vector; this isn't too much of a stretch, I don't think:
let x = |c: i32| c * 2;
let y = |v: i32| v + 2;
let b: Vec<&dyn Fn(i32) -> i32> = vec![&x, &y];
it seems to me Rust has HOF; do you mean they are not as nice as Haskell's? Why?
And with C++17 variant, you also almost get pattern matching.
(also, as per "powerful type system", you can do dependent types in C++)
Huh? I cannot relate to this statement at all. C++'s memory model is not friendly for immutable structures.
The semantics of C++ are also substantially more complicated than those of Haskell. Although Haskell has some fairly powerful type-level machinery (if you choose to use it, which you needn't), the value-level semantics are extremely simple across the board. Essentially everything in the entire language is just straightforward syntactic sugar over lambda abstraction/application and constructor application/matching. It gets more complicated (like C++) for high-performance library authors who need to peek under the hood at low-level memory operations, but 99% of users are insulated from that.
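To make the "syntactic sugar" point concrete (`Shape` and `area` are made-up examples):

```haskell
-- A small ADT with pattern matching...
data Shape = Circle Double | Rect Double Double

area :: Shape -> Double
area (Circle r) = pi * r * r
area (Rect w h) = w * h

-- ...which is just sugar for a lambda plus a case expression
-- over the constructors:
area' :: Shape -> Double
area' = \s -> case s of
  Circle r -> pi * r * r
  Rect w h -> w * h
```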
I happen to love AWK, but this is absolutely the wrong usecase for it: AWK is a highly domain-specific language, and using it here just makes scripting languages look worse than they are (and in the early 90s, scripting languages weren't that great).
We could've gone PS++, but now we've got SGML++--++^&^!!1!!
My original comment was going to be "Who thought that was a good idea? Maybe there are reasons, but that sounds like a terrible idea at face value".
Either way, it sounds like a pretty bad idea to add this as a feature to Adobe Reader or somehow to the PDF spec. Especially since PDF remote code execution exploits happen relatively often.
Probably not relevant anymore.