
Designing and evaluating programming languages: Dagstuhl trip report - mnmlsm
https://medium.com/bits-and-behavior/designing-learnable-teachable-and-productive-programming-languages-dagstuhl-trip-report-81e41bde84bd
======
augustk
Niklaus Wirth's "Good Ideas, Through the Looking Glass" is an interesting read
on language design (I started reading from section 4):

https://pdfs.semanticscholar.org/10bd/dc49b85196aaa6715dd46843d9dcffa38358.pdf

"Actually, a language is not so much characterized by what it allows to
program, but more so by what it prevents from being expressed. Quoting the
late E. W. Dijkstra, the programmer’s most difficult, daily task is to not
mess things up. It seems to me that the first and noble duty of a language is
to help in this eternal struggle."

~~~
DonaldFisk
Thanks. A quote often attributed to Dijkstra is "Object-oriented programming
is an exceptionally bad idea which could only have originated in California,"
but the article you refer to makes it quite clear that Wirth was aware of its
Norwegian origins, and that he considers it a small and oversold addition to
imperative programming rather than a bad idea.

I note he wasn't very keen on functional programming (6.1) or logic
programming (6.2) (and by implication non-imperative languages in general)
because they don't fit well onto the hardware currently available. But that
limits you to imperative languages, whose code is much more difficult to test
for correctness. I accept that preventing programming errors is important, but
also think that languages should be designed for the problems they're intended
to solve, whether or not that's a good fit for current hardware. If it isn't a
good fit, I'd argue that the problem is with the hardware.

------
pmontra
> Language designers often make vague arguments about the “simplicity” or
> “intuitiveness” of language features, without ever saying what they mean by
> these terms, or testing whether these claims are true.

Anecdotally, the designer of Ruby, when in doubt, asked his young daughter
what looked simpler / easier to understand.

The post skimmed over the issue of polyglot programmers. I used to work with
C, Java and Perl; now with Ruby, Python, Elixir and JavaScript. There is also
PHP, which I made some money with but never really cared much about.

What happens when working with so many languages, in maybe as many different
projects, is that the number of context switches increases. It's like going
back to a project after six months, except it happens the next week. This
makes it easier to spot both the weaknesses in my code (why did I name that
variable that way?) and the weaknesses in the programming languages (why did
they do that when everybody else is doing that other thing, which is so much
better?)

I could give examples and make everybody angry because every language has its
share of problems :-)

A side effect is that I tend to write simpler and dumber code, because I have
to understand it when I come back. The clever features of languages are
tempting but I use them only for those special cases when everything else
wouldn't do. There are (un)surprisingly very few of them.

~~~
zzzcpan
> because I have to understand it when I come back

The hard part is understanding why the code does what it does, which is why
documenting the reasoning behind the choices is much more valuable than the
code itself: why did you choose that particular algorithm, why do you use
this particular naming convention, etc. Documenting conventions is also
important: they are not obvious, and they let you keep the code consistent
the next time you touch it, so you don't introduce time-wasting edge cases
that force you to learn and deal with useless things.

~~~
pmontra
I document those reasonings in wikis (for example, on GitHub) or in Markdown
files checked in to the repository.

For normal stuff, choosing the right names is enough. Example:
number_of_customers leaves very little doubt, whereas customers could be many
different things. Autocompletion in editors helps because the longer name
doesn't increase the number of typed characters.

------
tincholio
A bit unrelated, but to all CS people who may get a chance to go to Dagstuhl,
do yourselves a favor and go! It's, by far, the best environment I've ever
encountered to discuss science with top-level (and also younger) researchers.
The place itself is amazing, and the format of the seminars too. Also, you
never know who you might meet there (I got lucky the last time I was there,
and ended up having breakfast with Tony Hoare).

------
jtolmar
> Consider indefinite loops, which require someone to reason about potentially
> infinite executions of a block of code.

What if we didn't have them? I've been thinking about making a language that's
primitive recursive, and not actually Turing complete. This doesn't even have
to look particularly alien: all the looping constructs are for-range and for-
each style, recursion (cycles in the function call graph) is a compiler error,
and otherwise it can be a typical programming language. The majority of code
can be ported to this world just fine.
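
A compiler for such a language could enforce the no-recursion rule as a cycle
check on the call graph. A minimal Python sketch of that check (the
representation and names here are mine, not from any existing compiler):

```python
# Hypothetical compiler pass: reject any cycle in the function call
# graph, so direct or mutual recursion becomes a compile-time error.
# call_graph maps each function name to the set of functions it calls.
def has_call_cycle(call_graph):
    WHITE, GREY, BLACK = 0, 1, 2          # unvisited / in progress / done
    color = {f: WHITE for f in call_graph}

    def visit(f):
        color[f] = GREY
        for g in call_graph.get(f, ()):
            if color.get(g, WHITE) == GREY:
                return True               # back edge: recursion
            if color.get(g, WHITE) == WHITE and visit(g):
                return True
        color[f] = BLACK
        return False

    return any(visit(f) for f in call_graph if color[f] == WHITE)
```

With that in place, for-range and for-each loops (whose bounds are fixed on
entry) are the only iteration left, which keeps the accepted programs
primitive recursive.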

Graph searches don't naturally look like for-each loops, but they do iterate
over each element in a domain at most once. We can add a for-search loop which
iterates over a domain in an order that's determined mid-loop (looks like a
for-each loop plus a queue command, the compiler generates the visited set and
queue/stack/priority queue). That can be used (a little awkwardly) to do
binary search as well, which is the main place people actually use recursion.
(I think this is worth stealing, btw.)
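
Desugared by hand, a for-search loop might behave like this Python sketch (my
guess at the construct, not an existing feature); the compiler would generate
the visited set and the queue:

```python
from collections import deque

def for_search(start, neighbors):
    # Visit each reachable element of the domain at most once, in an
    # order determined mid-loop; termination is guaranteed on a finite
    # domain because nothing is ever visited twice.
    visited = {start}
    queue = deque([start])          # a stack or heap would give DFS / best-first
    order = []
    while queue:
        node = queue.popleft()      # the loop variable for this iteration
        order.append(node)
        for n in neighbors(node):   # the body's "queue" commands
            if n not in visited:
                visited.add(n)
                queue.append(n)
    return order
```

For example, `for_search(0, {0: [1, 2], 1: [2], 2: []}.__getitem__)` performs
a breadth-first traversal of that little graph.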

Some programs like games and web servers are built around an intentional
infinite loop. Having more than one infinite loop on the same thread sounds
like an error, so let's say there's an actual infinite loop construct, and
only the main method is allowed to use it. This plus generator functions allow
dealing with input streams (though somewhat awkwardly). (This makes the
language as a whole Turing complete, but most of the code is still not.)
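
In Python terms the split might look like this sketch (names are mine): the
generator does a bounded amount of work per step, and the one sanctioned
unbounded loop lives in main:

```python
import sys

def tokens(stream):
    # Generator: each advance does a bounded amount of work, so code
    # outside main stays within the terminating fragment.
    for line in stream:
        for tok in line.split():
            yield tok

def main(stream=sys.stdin):
    # The only place an unbounded loop is allowed: it runs for as
    # long as the input stream keeps producing data.
    for tok in tokens(stream):
        print(tok.upper())
```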

------
qwerty456127
> We have no theory of language error-proneness

Language error-proneness is directly proportional to how heavily the language
relies on mutability and side effects. Immutable references to immutable
objects don't misbehave, and pure functions always do exactly the same thing
(usually exactly what you expect them to do).
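
A small Python illustration of the point (my example, not from the article):
the mutating version changes its argument behind the caller's back, while the
pure version can't.

```python
def dedupe_in_place(items):
    # Mutating version: the caller's list is silently altered, so any
    # other code still holding `items` may misbehave afterwards.
    seen = set()
    i = 0
    while i < len(items):
        if items[i] in seen:
            del items[i]
        else:
            seen.add(items[i])
            i += 1
    return items

def dedupe_pure(items):
    # Pure version: the input is untouched; same input, same output,
    # every single time.
    seen = set()
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out
```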

The first time I tried Scala (having no prior functional programming
experience) I was mesmerised by how well it worked: every successful build
would result in a program doing exactly what it was supposed to do (run-time
quirks or errors were extremely rare), and whenever a build failed it was
usually rather easy to find and fix the cause.

~~~
kbwt
If you consider performance as a correctness issue, then functional and/or
garbage collected languages have a very high incidence of bugs.

~~~
DonaldFisk
It's usually better to do the right things slowly than the wrong things
quickly.

In practice most speed improvements are made by improving the algorithm. With
high level languages, you can do this faster.
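
A throwaway Python example of the kind of win meant here (mine, not from the
thread): the speedup comes from the algorithm, not the language.

```python
def common_items_quadratic(a, b):
    # O(len(a) * len(b)): scans the list b for every element of a.
    return [x for x in a if x in b]

def common_items_linear(a, b):
    # O(len(a) + len(b)) expected: build a hash set once, then each
    # membership test is O(1) on average.
    b_set = set(b)
    return [x for x in a if x in b_set]
```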

C still uses malloc and free, functional programs which don't dynamically
allocate memory won't invoke the garbage collector, and there are fast
garbage collection algorithms for the ones which do. Functional languages
aren't intrinsically slow either: strong static typing and type inference
allow them to be compiled to very fast code.

~~~
kbwt
There is software which is already so optimized at the algorithm level that
you have to use a low-level language to fully exploit the hardware. Video
games, for example: if you can cram in more or better content, or achieve
higher simulation fidelity or tick rate, you improve the game experience, and
developers compete on that.

These are programs where selecting a different instruction to split a loop-
carried dependency makes a significant difference. Data is carefully packed
and reorganized between passes to optimally match the cache hierarchy. Good
luck selling these developers uncontrollable per-access indirections and
thunks.

Furthermore, this kind of code barely uses malloc/free, preferring instead
custom allocation patterns tailored to each use case. If you replicate that
in a higher-level language, you effectively do your own memory management
within a large block allocated by the runtime. So you are liable to write all
of the same lifetime/indexing bugs while using your shiny "safe" language.
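
A toy Python sketch of that point (the names and design are mine): a
free-list of slots inside one preallocated block, where a stale index is
exactly a use-after-free.

```python
class SlotArena:
    # "Do your own memory management" inside a managed runtime: one
    # preallocated block, and handles are just slot indices.
    def __init__(self, capacity):
        self.slots = [None] * capacity
        self.free = list(range(capacity))   # free-list of slot indices

    def alloc(self, value):
        idx = self.free.pop()               # IndexError once the arena is full
        self.slots[idx] = value
        return idx

    def free_slot(self, idx):
        self.slots[idx] = None
        self.free.append(idx)               # nothing invalidates old copies of idx
```

Hold on to an index after free_slot and you read whatever gets allocated
there next: the same lifetime bug, just wearing the "safe" language's
clothes.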

------
bringtheaction
> One can also reduce the formality of language abstractions, which may

If the author is here, I’d like to inform you that the above quoted text
stopped mid-sentence.

------
Xophmeister
If the author is here, this paragraph just ends on an unfinished thought:

> ...One can also reduce the formality of language abstractions, which may
