Haskell vs. Ada vs. C++ vs. Awk vs (1994) [pdf] (yale.edu)
92 points by tosh on Dec 29, 2016 | 68 comments



This was a long time ago, but I remember coming across this competition when studying software architecture for my Diplomarbeit (diploma thesis).

IIRC, the Haskell solutions were disqualified, and actually for good reasons. The competition was about the architecture of the system, with the geo-server as an example component. Note that, for example, Rapide is an Architecture Description Language (ADL).

As far as I can tell, the Haskell team(s) only concerned themselves with the (purposely) simplistic/simplified functionality of the geoserver component, and not the architectural concerns that were the actual point of the competition.

Which, quite frankly, is something I see frequently from the FP community: declare that <complicated actual problem> is really <much simpler problem easily solved by FP>, solve the simpler problem that ignores almost all the real requirements, declare victory and go home.


I find that statement hard to believe when so much complicated software in the industry is built using Scala/Haskell/OCaml, not to mention so much research. (STM is an example of how some complicated stuff is solved easily with FP.)

Why do you feel they avoid real problems? When did you see that?


One time someone posted a "concurrency challenge", and I spent a couple minutes typing out the STM calls that solved it. They balked, for some combination of: that doesn't work in C++, you're not really following the spirit of the challenge, etc. Later they posted a 100+ line solution using condition variables.

Lesson: the "real requirements" typically include that the solution doesn't require learning anything new or weird.


Hah, I think I'm that person you're referring to. I would characterize my response differently.

Stripping out the specifics, the problem was: "write some code that blocks until one of a set of memory locations changes its value". The Haskell/STM API provides exactly that interface (where the memory locations you block on are those that you looked at in the transaction). Rephrasing then, the question was "how would you implement a simplified version of the Haskell STM blocking mechanism?", and your solution was "I would use the Haskell STM blocking mechanism", answering the question as though it were one of API usage rather than of API implementation.

That's probably fine as a question of workaday engineering (and maybe an argument for Haskell over C++ on the basis that it has more extensive libraries built-in), but somebody's gotta write that STM implementation, and that's what the challenge was trying to get at.

(In fact, the GHC STM implementation does exactly what the solution I outlined does: with each TVar, it keeps a pointer to threads parked waiting for the TVar's value to change, and unparks all those threads when a transaction modifying the TVar commits).


Fair. Sorry for misrepresenting you.


Yes, that is true, and I sympathize, I am also often in the position of having a solution that is simpler than seems possible to the "less enlightened".

However, "real requirements" often do include relevant things that aren't stated. I don't know if this was the case here, but "concurrency" is often something you need because of "performance", and so a Haskell STM solution may not be what your C++ hacker is looking for.

The prime example of this is the Haskell "Quicksort" implementation, which is, of course, not actually quick and only vaguely related to the idea of Quicksort.

Another example was SPJ's talk about parallel programming in Haskell, where the result was that a 6 core Haskell version was equivalent to a single core C solution. In the same talk he also boasted that "this is only possible with Haskell", to which an audience member replied that the HPC community had been doing this for well over a decade with FORTRAN.

The problem with the FP community is that there is so much of the latter examples going on that the true cases of the former tend to be less believable.


> Another example was SPJ's talk about parallel programming in Haskell, where the result was that a 6 core Haskell version was equivalent to a single core C solution.

Reminds me of a Rich Hickey talk where he proudly declared that a Clojure solution based on its persistent vector type running on four cores was as fast as the Java/mutable vector solution running on one!

The point there was supposed to be that Clojure's inefficiency is more than counterbalanced by the fact that it makes writing parallel code easier.

I was not convinced, though.

> In the same talk he also boasted that "this is only possible with Haskell", to which an audience member replied that the HPC community had been doing this for well over a decade with FORTRAN.

The FP advocates always seem to use examples which show non-interactive processing of "embarrassingly parallel" vector data. And yes, that is trivial in any language. Even in C land, all you need to do is add one keyword/annotation (e.g. an OpenMP pragma) to your for-loop and you are good.



> The prime example of this is the Haskell "Quicksort" implementation, which is, of course, not actually quick and only vaguely related to the idea of Quicksort.

Yeah, but at least it's lazy! ;) If you only call (sort) because you're going to (take), say, the 4 smallest/latest/whatever-est elements of the result, it will do only just enough sorting to give you those 4 in order; the unneeded values are discarded (usually) partially unsorted, no matter the size of the total.

(Still, there may well be better quicksorts for Haskell out there.)

Of course you can write this sort of specific short-cutting in any language manually, but with lazy-evaluation it's just the standard behaviour for any naive `(take 4 (sortBy mysorter mydata))`-type expression.

(Takeaway regarding "quick" above: everything regarding performance and efficiency in lazily-evaluated pure functional languages depends to a large degree on the live data at hand; predictability consequently gets harder.)
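The expression shape being described is just this (a sketch; `smallestFour` is my name for it):

```haskell
import Data.List (sort)

-- Thanks to lazy evaluation, 'take 4 . sort' need not complete the
-- full sort: it only forces as much of the sorted result as is
-- demanded, i.e. the first four elements.
smallestFour :: Ord a => [a] -> [a]
smallestFour = take 4 . sort
```

The same holds for `take 4 (sortBy mysorter mydata)`; no manual short-cutting is written anywhere, the laziness is the default behaviour.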


> Yeah but at least it's lazy! ;)

;) Yeah, except that the overhead of lazy is often (usually?) more than the "overhead" of doing the whole computation eagerly.

> just sort-enough to give you these 4 sorted

And how is that going to work in a sort? The 4 first elements can be located anywhere and have to be brought into position.

There are "first k of n" algorithms, but these are (last I checked) quite different from generic sorting algorithms, and I doubt that a lazy sort in Haskell actually magically turns into one of those algorithms.

Happy to be proved wrong!


Well, you do get an (expected time) asymptotically efficient algorithm for sorting the first k elements of an array by running quicksort without recursing on the last n-k elements. This is what you get in Haskell with "take k (sort xs)".

But, as you mentioned, the constant factors involved are ridiculously high, and this is not going to give you a practical solution.
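For reference, the folklore definition in question (the "not actually quick" quicksort mentioned upthread). Under lazy evaluation, `take k` of its result demands the right-hand recursive calls only as far as the first k elements require, which is where the "don't recurse on the last n-k elements" behaviour comes from:

```haskell
-- The folklore Haskell "quicksort": concise, but not in-place and
-- with heavy constant factors from building intermediate lists.
qsort :: Ord a => [a] -> [a]
qsort []       = []
qsort (p : xs) =
  qsort [x | x <- xs, x < p]   -- forced as needed, element by element
    ++ [p]
    ++ qsort [x | x <- xs, x >= p]  -- left unevaluated until demanded
```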


I was under the impression that STM, being an optimistic (success-biased) concurrency primitive, was faster than explicit locking. Isn't it more performant? I remember a talk where, in one benchmark, STM was as fast as no locking at all!


Scala/Haskell/OCaml equally? My distant impression is that OCaml has about 95% of the difficult commercial domains.


I have no idea, so I couldn't say. My point was not to compare them but to say that FP is big in writing complicated software, whichever FP language it is.


What you are saying sounds interesting but very vague. Do you have any more details? Nothing you are saying corresponds to the contents of the PDF, which seems like a fairly complete summary of the competition.


> What you are saying sounds interesting but very vague.

Yes, it is vague. This was 20 years ago.

> Do you have any more details?

If you read the text carefully, and strip it of the "we were so awesome and those idiots didn't understand just how awesome we were" you can actually see quite a bit of what I was saying in the text itself.

For example: "... neither [of the authors] attended the kickoff meeting at NSWC". So it's not entirely surprising if (as I suggest), they weren't fully aware of the actual purpose [especially if, as the other paper suggests, there were relevant discussions that preceded the kickoff meeting].

Or the following: 'One observer described the solution as “cute but not extensible” (paraphrasing); this comment slipped its way into an initial draft of the final report, which described the Haskell prototype as being “too cute for its own good”'

Again, the authors frame this as the judges not understanding how wonderful their solution was, i.e. "everybody else is an idiot". Well, maybe everyone else isn't an idiot, and maybe the evaluation had something to do with not really contributing to the actual software-architectural purpose, which in turn had something to do with the authors not being there for the kickoff meeting?

Of course, I wasn't there either, but this sure is what it looks like to me.

Here is another paper, which talks more about the original problem and purpose of the competition, which was to be a sample problem (with sample solutions) for software architecture:

http://www.cs.cmu.edu/afs/cs/project/able/ftp/aegis-iwssd8/a...

I remember seeing a more comprehensive report, which also had larger code samples and talked about the disqualifications, but I can't seem to find it right now.


I think this may be the report [1]? It seems that their P1 did not meet the standards due to miscommunication -- not language incapability -- and they were able to correct it in P2:

> It happened that some modifications and extensions to Reference 1 were agreed to at the experiment kickoff meeting at NSWC. Dr. Jones and Dr. Hudak were not informed of those extensions until after the completion and review of the initial draft of Prototype 1. [...]

> Prototype 2: We believe that our Prototype 2 will be most comparable to the prototypes prepared by the other teams. This prototype satisfies all mandatory requirements of the problem as defined in References 1 and 2, and also incorporates generalizations to support anticipated future requirements. To accomplish this task, the original author of Prototype 1 made a number of modifications and enhancements requiring about five hours [...]

P1 and P3 solutions are attached in appendices, along with some others.

[1] http://www.cs.yale.edu/publications/techreports/tr1031.pdf


Awesome! I've been looking for a followup to the previous paper, with no success. Thank you for posting it!

I skimmed through the paper and I don't see where they state the P1 prototype (or any prototype, for that matter) was "disqualified", as user mpweiher claimed. The authors continue to state they found Haskell and FP to be great for rapid prototyping... and in my opinion, anyone who's ever written a prototype will agree that making changes at short notice is expected :) And Haskell turned out to be very suitable for this. I think they were right to call it a success.

More importantly, that they were able to successfully (and with relatively little effort) generalize and extend the prototype shows that it was indeed extensible. That they were able to provide three prototypes is also telling, because even if some were delivered after the experiment, they were still made in a very short time. Imagine managing this feat in C++ or Ada!

It'd be interesting to see how many of the other teams which completed the assignment also had to implement some changes after the review.


I don't see this arrogance you claim. My reading of the paper is completely different. Nowhere do I see this attitude of "everyone else is an idiot"; the author is just stating the fact that the other participants weren't familiar with FP techniques. This is not insulting, it's a statement of fact. It was also not unexpected in 1994.

The author singles out the use of higher-order functions as an elegant technique of which the other participants weren't aware, even though some of their languages actually supported it! This technique is precisely what another participant considered "too cute for its own good" -- even though today we know higher-order functions are hugely useful and relatively mainstream, enough that few devs would find them surprising. Another objection was that they couldn't believe the Haskell prototype was a finished executable program and not a top-level design document mixed with some pseudocode... which speaks a lot in Haskell's favor!
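For flavor, a sketch in the spirit of that higher-order style (names and details are illustrative, not quoted from the paper): regions become predicates on points, and combinators build composite regions out of simple ones.

```haskell
-- A region is simply a function from points to Bool; higher-order
-- combinators then compose regions without any extra machinery.
type Point  = (Double, Double)
type Region = Point -> Bool

-- A disc of the given radius, centered at the origin.
circle :: Double -> Region
circle r (x, y) = x * x + y * y <= r * r

-- The ring between an inner and an outer radius, built by combining
-- two circle regions.
annulus :: Double -> Double -> Region
annulus inner outer p = circle outer p && not (circle inner p)
```

It is this kind of definition-by-composition that reads like a design document while still being executable code.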

To me, Paul Hudak actually sounds humble in his paper. He acknowledges the limitations of the experiment, he is careful not to say this "proves" anything, but is enthusiastic about what it suggests, i.e. that Haskell can be used for rapid prototyping with great success! I'd say this is an uncontroversial conclusion.

It's unfortunate that you cannot back the claim that the Haskell solutions were disqualified. It certainly contradicts the paper, which shows the final scorecard by the review panel...


> I remember seeing a more comprehensive report, which also had larger code samples and talked about the disqualifications, but I can't seem to find it right now.

Well that's precisely the vague stuff you were talking about in your original post, so that's very unfortunate.


It's old. It's so old, it predates Haskell's monads as an idea, let alone as a solution to the IO problem. It was before higher-kinded types and GADTs. Linked lists were still the primary data structure for everything. Haskell was more like Miranda than like today's Haskell.

Not only that, but it excluded some of the more prominent languages for prototyping of the time: Smalltalk, SML, Eiffel, and Objective C.

And if you were to do it today, you'd invariably have to include languages that have had 20 years to catch up. Would the same result hold in a world with Scala, OCaml, F#, Idris, Agda, Swift, and Rust?

Let's face it, this study was a victory for higher order functions and ADTs with pattern matching. Let's not pretend it was anything more than that.


> Let's face it, this study was a victory for higher order functions and ADTs with pattern matching. Let's not pretend it was anything more than that.

In what sense was this not a victory for Haskell at the time the paper was written? You can always decompose any programming language L into a collection of X, Y, Z features and claim it's really about X, Y, Z instead of L, but is that a meaningful distinction?

I love this paper because it explores the idea that functional languages with static typing can actually be very good at rapid prototyping. This goes against some oft-repeated "common sense" notion that "static typing is good for large systems, but for prototyping it's best to use dynamic typing". It's very illuminating that the two Haskell teams were the only ones that delivered [1] working, feature-complete code, and at the same time they had the fewest lines of code.

[1] ...which would have mattered more if the examiners had actually looked at working code, the results hadn't been self-reported by each team, and the requirements themselves hadn't been determined more or less by each team. Seriously, if there is ONE complaint to make, it is that the study wasn't rigorous enough. I'd love for this experiment to be conducted again with modern languages and a more rigorous methodology.


> "It's very illuminating that the two Haskell teams were the only ones that delivered [1] working, feature-complete code"

Where are you seeing that the Haskell code was the only "working, feature-complete" code?


The paper does not support this statement - section 5 says that there were several submissions that could not be executed at the time because no implementation was available (Ada9X, Rapide, Griffin).

The authors say that the Rapide submission "did not address the essential functionality of the geo-server"; I guess it is reasonable to assume that if any of the other submissions had been incomplete, they would have mentioned that too.


Yes, you are right. I misread, that claim is not supported by the paper. Thanks for the correction!


> In what sense was this not a victory for Haskell at the time the paper was written? You can always decompose any programming language L into a collection of X, Y, Z features and claim it's really about X, Y, Z instead of L, but is that a meaningful distinction?

It was the exclusion of a handful of popular languages that either strongly intersected with the Haskell of the era (SML), or had other measurable contributions to make to prototyping capability (Smalltalk, Erlang, Prolog). Including SML, at the very least, would have let us know whether Haskell's laziness was a help or a hindrance, and including Smalltalk would have helped us know whether C++'s failings had to do with its reliance on OOP or something else (even in '94, C++ was one of the largest and most complex languages that had ever existed). But they weren't even on the table... a baffling decision given some of the languages that did make it in. It looks like a deliberate bias in language selection, and it makes it look like the conclusion was drawn before the study even began.


> it makes it look like the conclusion was drawn before the study even began.

From the paper, page 9:

> The Haskell prototype was written by the second author of this paper, with minor help from the first

Seems like they definitely had a horse in the race!


> (10) Intermetrics, independently and without the knowledge of NSWC or Yale University, conducted an experiment of its own: the Haskell Report was given to a newly hired college graduate, who was then given 8 days to learn Haskell. This new hire received no formal training, but was allowed to ask an experienced Haskell programmer questions as issues came up during the self study. After this training period, the new hire was handed the geo-server specification and asked to write a prototype in Haskell.


This paper never claimed to be an independent report on the study. For the actual report, you have to look elsewhere (not sure if it's freely available). This paper is explicitly a report by one of the parties involved, and a piece of advocacy for FP. Paul Hudak (RIP) was well-known in the Haskell community. See: https://en.wikipedia.org/wiki/Paul_Hudak

Regardless, the scorecard itself -- showing Haskell to be one of the winners -- was written by the review panel and not by the author of this paper. Those are the cold facts!

Two more interesting facts:

1- The experiment itself was not designed by the FP community, but was an actual (if simplified) system designed by the Navy. No constraints or requirements were tailored to make things easier for FP; in fact most participants were unfamiliar with FP.

2- A second Haskell prototype was written by an independent programmer, unconnected to the paper's author, who was unfamiliar with Haskell and had to learn it before completing the assignment. I find this very interesting.


Indeed, we don't know how the languages were selected -- certainly not by the paper's author, so you cannot fault him for this -- though I can guess:

- C++ needs no explanation.

- Ada was (is?) commonly used in defense systems. It was referred to as the "control group" by the paper's author.

- Some are related to Ada.

- Others seem experimental languages, part of a "ProtoTech" program for the design of prototyping languages. It seems natural to include experimental prototyping languages in an experiment about rapid prototyping!

I expect SML would have fared similarly to Haskell (or Miranda); at this level I would consider both to be more or less the same language. And it would definitely be interesting to see a Smalltalk and more modern languages!

> [the choice of languages] looks like a deliberate bias in language selection, and it makes it look like the conclusion was drawn before the study even began.

This is explicitly shown not to be the case. The experiment wasn't designed by the FP community but by ARPA and the Office of Naval Research; the review panel was unfamiliar with FP; none of the requirements was devised with FP or Haskell in mind. The paper's author is on Haskell/FP's side, but the review panel was not. Of all possible objections, we can safely disregard this one.


> It was the exclusion of a handful of popular languages that either strongly intersected with Haskell of the era (SML)

They probably wanted to avoid redundancy. It seems like they picked fairly dissimilar languages all around.


> this study was a victory for higher order functions and ADTs with pattern matching.

So, the bread and butter of Haskell?

There are few languages that do these two things nearly as well as Haskell. All of them are ML derivatives. Haskell, OCaml, and F# come to mind.

Rust has almost-as-good ADTs, but no higher order functions. A number of popular languages support both of these concepts, but not nearly as natively or conveniently.


Rust has first class functions and thus has higher order functions. Iterators have a bunch of higher order functions, map for example:

    let a = (0..10).map(|x| x*x).collect::<Vec<_>>();


I've already responded a couple of times with the same content, but the short of it is that Rust HOFs are very limited due to its memory model. True functional-style HOFs require arbitrarily sized closures on the heap, and the only way to get that in Rust is by boxing the closures, which ends up being relatively inconvenient. There's a reason no one uses HOFs in Rust outside of trivially inlinable examples like the one you give. There are no composition or application operators in the standard library because it's just too syntactically inconvenient to work with HOFs in the same way you would in a functional language.


It's true that it's not nearly as ergonomic as Haskell to pass functions around. You don't necessarily need to box the closure up on the heap in order to store a bunch of functions in, say, a vector, though; it just needs to sit behind some kind of &, &mut, Arc, Rc, Box, etc.

I have seen a few hashmaps that store functions as values, in terms of real world uses.

You mentioned storing functions (closures here) in a vector, this isn't too much of a stretch I don't think:

    let x = |c: i32| c * 2;
    let y = |v: i32| v + 2;

    let b: Vec<&dyn Fn(i32) -> i32> = vec![&x, &y];

Definitely not "no higher order functions" as you claimed.


By the same token, C "supports" higher order functions; you can pass around a function pointer and a pointer to a closure. It's sort of an idiomatic question; HOFs are inconvenient enough in Rust that no one really uses them outside of a few specific cases (like local maps and folds).


There's a wide berth between "inconvenient", which is subjective, and "not supported". HOFs exist in Rust; whether the ergonomics could be better is up for debate.


Rust definitely has higher order functions, but it doesn't have higher kinded types required for certain things (like monads). You may have got these confused.


Rust has very limited higher-order functions. True functional-style HOFs require passing around arbitrarily-sized heap closures, which doesn't work with Rust's memory model. You cannot, for example, have a vector of functions in Rust. You can sort of get it with boxed closures, but it's very inconvenient compared to any of the languages I mentioned. That's why I didn't include it.


Boxing things is the way to get arbitrarily-sized things in Rust. I've worked with vectors of closures - it works quite nicely.


It may work quite nicely by whatever standard you're used to, but it's extremely inconvenient compared to languages that actually make a habit of using HOFs. There's a reason people don't usually use HOFs in Rust outside of a few easily inlinable use cases. The standard library doesn't even have any sort of composition or application operators, because it would be too unwieldy to use.


No, the standard library doesn't contain many of the things that Haskell's does because it's not Haskell, and people haven't really been asking for them the way they have for a variety of other things. And the reason people don't use them widely is that there are often better designs than passing around closures: lifetimes make useful mutation of closures' environments difficult, and Rust largely revolves around mutation, so it's more common to invert the control flow (have functions return a command and interpret it, possibly in a loop, rather than have them take a closure), except in certain self-contained situations. That doesn't change the existence of HOFs.


Sounds to me like you're saying the exact same thing I am, which is that HOFs are too inconvenient to be practical in Rust.


> Rust has almost-as-good ADTs, but no higher order functions.

It seems to me Rust has HOFs; do you mean they are not as nice as Haskell's? Why?


Correct. True functional-style HOFs require passing around arbitrarily-sized heap closures, which doesn't work with Rust's memory model. Rust lets you Box closures, but it's very inconvenient compared to treating them as first-class values, so I didn't count it.


Speaking of Haskell, Yale, and the 1990s: I ported the 1993 release of Yale Haskell to run on top of GNU CLISP. It only required a couple of small changes. The code is here: http://git.elephly.net/software/yale-haskell.git


After learning Haskell, I found its approach to solving problems much simpler. I will admit that it was very unusual at first. Lenses, monads, Template Haskell, and a very powerful type system are the key features that make it stand out.


IMO, if you just use C++ without subtyping (no inheritance) and only immutable structures, you get most of the "simplicity" of Haskell.

And with C++17 variant, you also almost get pattern matching.

(also, as per "powerful type system", you can do dependent types in C++)


> if you just use C++ ... and only immutable structures, you get most of the "simplicity" of Haskell.

Huh? I cannot relate to this statement at all. C++'s memory model is not friendly for immutable structures.

The semantics of C++ are also substantially more complicated than those of Haskell. Although Haskell has some fairly powerful type-level machinery (if you choose to use it, which you needn't), the value-level semantics are extremely simple across the board. Essentially everything in the entire language is just straightforward syntactic sugar over lambda abstraction/application and constructor application/matching. It gets more complicated (like C++) for high-performance library authors who need to peek under the hood at low-level memory operations, but 99% of users are insulated from that.


Simplicity can never happen without enforcement. If there is no way to automatically enforce the use of this simple subset of C++ then it's impossible to use it on a real project where you have more than 1 developer. A lot of people seem to think it's a trivial, handwavable exercise to come up with this subset - but no-one has ever managed to actually do it. http://robert.ocallahan.org/2016/06/safe-c-subset-is-vapourw...


Is that a plus or a minus? If you're not a purity zealot, then you can step out of those rules where it makes sense, and regard the ability to do so as a plus. If you are a purity zealot, you're appalled at the thought, and want to force people not to do that.


I'm not a purity zealot, I'm a correctness zealot. Working on real projects means working with people who are less competent than they think they are; I want to force those people to not introduce bugs, and I'm broadly indifferent to the mechanism for doing that.


Either that statement is incorrect or I am misunderstanding one of C++ or Haskell very wildly.


I really want to see the AWK geo-server implementation. I also want to know who the expert AWK programmer was. AWK is cool, don't get me wrong, but I don't typically think of it as a general purpose programming language.


It seemed like one of those "I did it because I could" kind of projects. I've used awk to set up some complicated reformatting of text files that I needed to run from cron, but I've never even considered using it as a prototyping language. I wonder what kind of ADTs you can build with it and how viable it would be as the backend of a simpler scripting language.


This was 1994. Why did they pick AWK as an example language? Perl was already fairly mature at that point.

I happen to love AWK, but this is absolutely the wrong use case for it: AWK is a highly domain-specific language, and using it here just makes scripting languages look worse than they are (and in the early 90s, scripting languages weren't that great).


Previous submission on HN: https://news.ycombinator.com/item?id=13251855 a few days ago.


Disappointed that PDF wasn't one of the programming languages.


It would be great if that format were somehow Turing complete.


That would be PostScript, of which PDF is pretty much the "compiled bytecode" version. I consider its abandonment a big loss for the industry.

We could've gone PS++, but now we've got SGML++--++^&^!!1!!


PDF viewers have lots of remote code execution vulnerabilities, so it is de facto Turing complete.


Well you can run JavaScript inside a PDF, so technically it is Turing complete.


From ~30s of searching, it looks like that's an Adobe Reader feature: https://stackoverflow.com/questions/9219807/using-javascript...

My original comment was going to be "Who thought that was a good idea? Maybe there are reasons, but that sounds like a terrible idea at face value".


Chrome's builtin PDF reader supports it. You can even find a PDF that uses JS to play a game of pong in Chrome.


Well, out of all the places that makes sense, it's Chrome after all.

Either way, it sounds like a pretty bad idea to add this as a feature to Adobe Reader or somehow to the PDF spec. Especially since PDF remote code execution exploits happen relatively often.


I've seen some forms made with PDFs with js in them. A painful experience for sure.


> 1994

Probably not relevant anymore.


"Those who forget the lessons of the past..."



