
Why calculating is better than scheming (1987) [pdf] - tosh
https://www.cs.kent.ac.uk/people/staff/dat/miranda/wadler87.pdf
======
geocar
I've skimmed this article a few times now, and something I've always had
trouble with was this:

 _“…problems like this are damaging if one is trying to present programming as
a discipline subject to mathematical analysis.”_

because _in general_ programmers steadfastly refuse to subject their
discipline to mathematical analysis, and continue to look for some aesthetic
balance between "enough abstraction" and "enough whitespace".

Here is a program to sum a list of numbers:

    +/

This definition is even less cumbersome than the Miranda or the Lisp, but it's
unclear what the author expects us to conclude from this. _And yet_ I think
it's perfectly reasonable to posit that shorter programs are better. Shorter
programs have fewer bugs[1] and run faster[2], and yet for some reason there
are a lot of programmers who believe their job is to produce _more code_. Fuck
those guys. Seriously.

[1]: See Code Complete and Software Estimation: Demystifying the Black Art, by
Steve McConnell.

[2]: [https://stacresearch.com/m3](https://stacresearch.com/m3)
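
(For reference: Wadler's Miranda definition looks roughly like the following
in Haskell, whose syntax is very close to Miranda's. `mySum` is my name, not
the paper's; this is a sketch from memory, not a quote:

    -- explicit recursion, in the style of the paper's Miranda example
    mySum :: Num a => [a] -> a
    mySum []     = 0
    mySum (x:xs) = x + mySum xs

    -- or point-free, about as close as Haskell gets to APL's +/
    mySum' :: Num a => [a] -> a
    mySum' = foldr (+) 0

Either way it is a signature and a definition, against two characters of
APL.)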

Then we have the pursuit of "traditional mathematical notation": I don't think
it produces better programs or better programmers. Computers do not execute
mathematical notation, and mathematicians can't agree on their notation with
enough specificity to permit much useful computation[3]. It is clear that
notation is important[4], but there is frighteningly little real research in
this space. Zero-cost abstractions still take up source code, and yet many
programmers take it as a given that this is a good thing, despite Dijkstra’s
wonderfully succinct observation:

 _“…the intellectual effort needed to conceive or to understand a program need
not grow more than proportional to program length.”_

[3]: See Whitehead and Russell, Principia Mathematica

[4]: [http://www.jsoftware.com/papers/tot.htm](http://www.jsoftware.com/papers/tot.htm)

Lazy is also bad. Computers aren't lazy. Starting off by thinking about them
as lazy puts programmers on the path to total abstraction. So much energy
has been put into trying to hide concurrency-problems[5] that it's no wonder
most big-data programs generate so much heat. The real value in Scheme Streams
is that they're explicitly concurrent, and other languages that have optimised
for those constructs[6] have discovered a great deal of utility.

[5]: [http://hadoop.apache.org/](http://hadoop.apache.org/)

[6]: S=self(), O=spawn(fun()->S!{f,self(),F()} end), receive {f,O,X}->X end.
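
(Footnote [6] is Erlang's spawn/receive idiom. A minimal sketch of the same
shape in Haskell, using the standard Control.Concurrent API; `work` is a
stand-in for whatever computation you actually want to run:

    import Control.Concurrent (forkIO)
    import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

    -- fork a worker, then block until it hands back its result
    spawnAndWait :: IO a -> IO a
    spawnAndWait work = do
      box <- newEmptyMVar
      _   <- forkIO (work >>= putMVar box)
      takeMVar box

    main :: IO ()
    main = spawnAndWait (return (product [1 .. 20 :: Integer])) >>= print

The fork and the join are both visible in the source, which is the point.)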

It's almost like we've plucked out some ideas, declared them important, and
then used them as a lens to study our languages, without ever doing _the first
thing_: showing why they are important to the point of excluding everything
else.

About the only thing I really like about Miranda (and Haskell, etc) is the
pattern matching, which is dope. Seriously good stuff. I don't know how to
square that with array languages, but I think it's worth thinking about since
they (array languages) seem to have otherwise sorted out program length and
notation.
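
A small Haskell sketch of the kind of thing I mean (my own example, not from
the paper; Miranda's pattern matching is essentially the same):

    data Tree a = Leaf a | Node (Tree a) (Tree a)

    -- the shape of the data dictates the shape of the code
    depth :: Tree a -> Int
    depth (Leaf _)   = 1
    depth (Node l r) = 1 + max (depth l) (depth r)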

~~~
shakna
> Shorter programs have less bugs[1], and run faster[2]

I'd heartily disagree with that statement.

Short programs can run faster, but that entirely depends on the implementation
of the language/architecture that they're running upon.

Consider the following:

    struct a {
      char msg[400];
      double d;
      int i;
    };

    struct b {
      double d;
      int i;
      char msg[400];
    };

The code is nearly identical, only the order is different, but the second uses
far far less memory than the first.

Secondly, shorter programs can be bug free, but that depends on the level of
abstraction exposed, and how that translates into language that can be
understood by those using it.

There's no shortage of regex bugs in the wild, and it's a fairly concise
language.

Perl has often been criticised for having too concise a language, leaving it
prone to code rot.

However, all that being said, I think there needs to be a balance between
abstraction and mathematical analysis. I would also say there hasn't yet been
enough actual research into the effect of syntax; rather, syntax has been a
by-product of the abstraction it's presenting, or of the underlying
implementation the language relies upon.

We do need abstraction: parsing mathematical notation or esoteric languages
like Malbolge is difficult for the human brain, even with experience.

I mean, Brainfuck isn't that far from what a processor actually runs, being
essentially memory cells and jump statements, but here is "Hello World!":

    ++++++++++[>+++++++>++++++++++>+++>+<<<<-]>++.>+.+++++++..+++.>++.<<+++++++++++++++.>.+++.------.--------.>+.>.

We don't need a Turing Tarpit.

But neither do we need the insane amounts of boilerplate that Java forces upon
you to do even the simplest things that Java is meant to be good at.

I think that in recent years, mathematics has given us the tools to build
better syntaxes:

* Better type inference

* Better proofs of type safety

Though I think Scala's syntax is still overly verbose, the DOT calculus is a
great step in the right direction: a foundation for languages with zero-cost
abstractions, type safety, and a more concise syntax.
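
As a tiny illustration of the first point, Hindley-Milner-style inference
recovers the most general type with no annotations at all; this is plain
Haskell, nothing exotic:

    -- no signature written; GHC infers
    -- compose :: (b -> c) -> (a -> b) -> a -> c
    compose f g x = f (g x)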

~~~
eutectic
With GCC, both those structs seem to have a size of 416 bytes. Can you
elaborate on what you meant?

~~~
shakna
32- or 64-bit GCC?

I was referring to Struct Padding[0].

Assuming a 32-bit GCC, the first would be padded with 24 bytes, the second
with 16 bytes.

This padding doesn't show up under size, but it does affect speed in small
ways, and available memory (yay for dealing with memory-constrained systems!).

[0] [http://www.catb.org/esr/structure-packing/](http://www.catb.org/esr/structure-packing/)

~~~
userbinator
You seem to be misunderstanding how padding is applied. As your link says
regarding structure alignment:

 _In general, a struct instance will have the alignment of its widest scalar
member._

...which is d if we assume sizeof(double) == 8, so the total size will be
rounded up to a multiple of 8. Both your structure examples will be exactly
the same size:

      0 msg
    400 d
    408 i
    412 [+4 padding]
    416 TOTAL

      0 d
      8 i
     12 msg
    412 [+4 padding]
    416 TOTAL
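
For anyone who wants to check the arithmetic, here is a toy layout calculator
following the rules in the structure-packing document linked above, assuming
sizeof(char) == 1, sizeof(int) == 4, sizeof(double) == 8 (a sketch of the
rules, not of what any particular compiler does):

    import Data.List (foldl')

    -- round an offset up to the next multiple of an alignment
    align :: Int -> Int -> Int
    align off a = ((off + a - 1) `div` a) * a

    -- fields as (size, alignment) pairs; the total is padded out to the
    -- widest member's alignment
    structSize :: [(Int, Int)] -> Int
    structSize fields = align end (maximum (map snd fields))
      where end = foldl' (\off (sz, al) -> align off al + sz) 0 fields

    main :: IO ()
    main = do
      print (structSize [(400, 1), (8, 8), (4, 4)])  -- msg, d, i => 416
      print (structSize [(8, 8), (4, 4), (400, 1)])  -- d, i, msg => 416

Both orderings come out to 416 bytes, as in the layouts above.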

~~~
shakna
> On some processors, the addressing of a member is more efficient if its
> distance from the beginning of the structure is less than 128 bytes.

> In the first example, to address the d and i fields using a pointer to the
> beginning of the structure, an offset of at least 400 bytes is required.

> Instead, in the second example, containing the same fields in a different
> order, the offsets to address d and i are of few bytes, and this allows to
> use more compact instructions.

------
Chinjut
In case anyone is unfamiliar with the author Philip Wadler, he went on to be a
key figure in the design of Haskell.

~~~
danharaj
In particular the language Miranda he discusses here was a direct predecessor
to Haskell.

------
olewhalehunter
This critique seems to be of the kind of Lisp you find in introductory
textbooks on Scheme.

Most Lispers like to play around with syntax; doing that in Haskell usually
results in write-only code.

~~~
willtim
Yes, the critique is from an educational perspective. Many, especially in
academia, view syntactic abstraction as a method of last resort, so it doesn't
feature in the courses. So one might ask: why then suffer the syntax of Scheme
if macros are never used?

------
junke
[https://www.reddit.com/r/lisp/comments/d177c/a_critique_of_a...](https://www.reddit.com/r/lisp/comments/d177c/a_critique_of_abelson_and_sussman_or_why/)

[https://www.reddit.com/r/programming/comments/ujj3/a_critiqu...](https://www.reddit.com/r/programming/comments/ujj3/a_critique_of_abelson_and_sussman_or_why/)

[https://www.reddit.com/r/programming/comments/13gfqt/before_...](https://www.reddit.com/r/programming/comments/13gfqt/before_we_all_get_too_excited_with_lisp_i_suggest/)

------
willtim
All ancient history now; I heard that MIT now teaches Python.

~~~
throwaway7645
MIT uses Python now yes, but Scheme's offspring (Racket) is popular at a few
universities. I've found Python easier to use though.

------
ted_dunning
That critique had a lot of observations of the form "lisp is less good because
I am not used to it". From the point of view of someone with more Lisp
experience, quite a number of these observations would go the other way.

------
white-flame
This article needs a (1986) in the title, at least according to the
handwriting on the first page.

~~~
dang
It says "received". That often happens before a paper is published, but we
want the publication year. That was 1987 according to Wadler's CV:
[http://homepages.inf.ed.ac.uk/wadler/vita.pdf](http://homepages.inf.ed.ac.uk/wadler/vita.pdf).

~~~
JadeNB
> … we want the publication year. That was 1987 according to Wadler's CV ….

Or just to the ACM:
[http://dl.acm.org/citation.cfm?id=24706](http://dl.acm.org/citation.cfm?id=24706).

