On the expressive power of programming languages (2019) (pwlconf.org)
208 points by fanf2 on Nov 14, 2020 | hide | past | favorite | 17 comments


This is the antithesis of a hand-waving opinion piece on expressiveness. It is, rather, a talk by Shriram Krishnamurthi introducing the key insights of Matthias Felleisen's 1991 paper of the same name. From the paper's abstract:

"The literature on programming languages contains an abundance of informal claims on the relative expressive power of programming languages, but there is no framework for formalizing such statements nor for deriving interesting consequences. As a first step in this direction, we develop a formal notion of expressiveness and investigate its properties. To validate the theory, we analyze some widely held beliefs about the expressive power of several extensions of functional languages. Based on these results, we believe that our system correctly captures many of the informal ideas on expressiveness, and that it constitutes a foundation for further research in this direction."

Highly recommended, FWIW.


Yeah, I almost didn't click on this because I thought it would be language-wars-y. Glad I took the chance.

Also, this talk seems wonderfully lucid.


Me too, my friend. I'm tired of those 5-minute articles that just propagate opinions from flame wars and ask you to follow the author's social media...


So stop clicking them.


Shriram did an excellent job being a lively, energetic and engaging speaker who distilled some of the core concepts of the paper down without getting tangled in the formalisms.

I do not have an academic background in theoretical computer science and was nevertheless able to follow along with his talk perfectly well.

For anyone interested in reading it, the paper by Matthias Felleisen that this talk is based on can be found on the author's website: https://felleisen.org/matthias/papers.html


I had the same reaction. I don’t know much computer science either, but this talk was really interesting and substantial.


Shriram Krishnamurthi is the real deal. When I started at Brown I was majoring in Mech-E and took a programming course with Shriram on a whim. It was the hardest course I took in undergrad. Needless to say, I switched my major to Computer Science right after.


I really liked two of the points he made when answering audience questions.

(1) In the expressiveness debate, only the upsides of increasing expressive power are discussed (avoiding global rewrites) but the downsides are not (breaking intuitive expectations).

(2) The theory side and the UX side of language design communities seem to operate independently, to the detriment of the results.


That was a great talk :)

The last example of an extension that increases expressive power of a language — adding truthy/falsey conditionals, over pure Boolean conditionals — is messing with my intuition! The talk obviously showed an example to prove this, and I follow the logic, but I still find it a surprising result. Maybe it chimes with one of the questions that was asked; the context required to show the distinction is engineered and quite “unnatural” (at the risk of being “hand-wavy”), which is at odds with the human meaning of expressiveness.


Your intuition is probably driven by the fact that the theorem isn't true in any popular programming language. The added feature that the proof relies on is that "truthy" conditionals allow the programmer to distinguish a function from a falsy value (such as false). Many other features might provide this capability as well: the ability to compare a function to a boolean directly, a typeof function provided you have the ability to compare its results, an "is function" query, or the ability to try arithmetic on the value or call it as a function and trap an error if that doesn't work.

So Shriram's enumeration of possible things you might do to the function, which is required to show the two expressions are equivalent in the starting language, is much shorter than it would be in a practical language, and as a result the proof wouldn't hold. On the other hand, I can attest that if you design a very minimal language and don't consider the question of how to tell what type a value has, you may well end up with a language that doesn't have that capability!
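To make the capability concrete, here is a tiny Python sketch (Python standing in for the minimal language; this is not the talk's formal construction): a truthy conditional lets a context tell a function value apart from the boolean false, whereas a strictly Boolean `if` would reject a function outright.

```python
# A context that "observes" its hole using a truthy conditional.
# Any function object is truthy in Python; False is falsy, so the
# two values become distinguishable.
def observe(x):
    return "truthy" if x else "falsy"

print(observe(lambda v: v))  # a function value is truthy
print(observe(False))        # False is falsy
```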


Excellent point; thanks for your clarification :)


He (Morris, actually) defines observational equivalence as:

    e1 == e2 iff
    
    for all contexts C,
    C[e1] halts iff C[e2] halts.
Which is a great trick to avoid having to compare contexts. Around 29:00 they discuss how to show 5 != 6.
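As a sketch of how such a proof of inequivalence goes (a hypothetical context, not the construction from the talk): to show 5 != 6 it suffices to exhibit one context that halts on 5 but diverges on 6.

```python
# C[.] halts when the hole is filled with 5, and loops forever
# on anything else -- a witness that 5 and 6 are observationally
# distinct under the halting-based definition.
def C(e):
    if e == 5:
        return "halted"
    while True:  # diverge on any other value
        pass

print(C(5))  # halts; C(6) would never return
```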

One question I have is: what if your language doesn't allow for ways to halt? For example, a language where you have no loops. That definition would imply everything equals everything, since all contexts eventually halt.


In most of the talk he’s using Ω to mean the program doesn’t halt, i.e. does not produce an answer. He also mentions when explaining Ω that it is often used as a stand-in for a run-time error. An uncaught exception can do the same job, for example.

(The reason for this is that if you are describing the semantics of a language using a collection of reduction rules, and evaluation gets to a point where no reduction rule applies, then the program gets "stuck". You would normally implement this as a runtime error, but semantically it has the same effect as an infinite loop.)


You can tweak the definition to take advantage of whatever your language DOES allow. He alludes to this during the Q&A at the end. For instance: two expressions are equal if (and only if), for all possible contexts C, C[e1] and C[e2] send the same bytes to standard output.
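A Python sketch of that tweaked definition (one hypothetical context `C`, with observation by output bytes rather than by halting):

```python
import io
import contextlib

# The context captures whatever its hole prints; two expressions are
# distinguished if some context makes them produce different bytes.
def C(e):
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        print(e * 2)  # the context may use the expression however it likes
    return buf.getvalue()

assert C(3) == C(1 + 2)  # this context cannot tell them apart
assert C(3) != C(4)      # but it does distinguish 3 from 4
```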


Awesome talk! Finally came around watching it this weekend. I can recommend this video to everybody!


Great talk. The definition of observational equivalence is very elegant. I instantly thought it made no sense or must be a partial definition, then the simple example given on the next slide explained it perfectly.


The definition of expressive power they invent is fine as far as it goes, but to me is not very useful. A useful expressiveness is one that makes it possible for one person to write a library that captures semantics in a way that another person can use the semantics by name.

My go-to example is the hash table. Coding in C, one is always cobbling up hash tables, because C hasn't the expressive power to code and publish a generally useful hash table library. Many other languages sidestep the problem by building hash "dictionaries" into the core language.

A few have enough expressive power to capture the hash table pattern (and numerous variations on it) in libraries that may then be used almost as if they had been built into the core language.

Whenever a pattern shows up in many different programs coded in a language, it indicates that the language is not quite expressive enough to usefully capture that pattern in a library. Some languages make a virtue of the weakness, and call it simplicity. Others pull patterns into the core language. A few add expressiveness until it becomes possible to capture the pattern in a library.

Clearly the last is best, for a general-purpose systems language, because it enables capturing an open-ended range of patterns in libraries, and each capability joins the others, multiplying them. But it is also much harder to invent and to get right. Usually, too, the syntax needed to use a library version of a feature cannot be made quite as nice as is possible for a language built-in.

So, in C++ we see core support for exceptions, virtual functions, and coroutines, with pattern matching and reflection coming, but the type system is also continually strengthened to make more kinds of libraries possible.

A more expressive language might not need built-in exceptions or virtual functions, because those patterns could be coded into libraries and used from there. Rust has the Drop trait in its library, so does not need destructors, as such, in the core language.

Downsides to expressiveness of this kind include that interoperability may suffer, as there are too many ways the same thing could be achieved, and therefore little regularity. Performance may suffer if work cannot be moved to compile time; or, if it can, compile time may balloon.

So, some patterns are naturally better to pull into the core: particularly, those with an obviously correct, unique form, or those needed to help integrate libraries from different sources. A standard library is the place for such features when they would not benefit much from adoption into the core.

Whichever course is taken, people will complain that the language is getting too big and complicated, and the standard library too big.



