
It's Time for a New Old Language – Guy Steele [video] - zengid
https://www.youtube.com/watch?v=dCuZkaaou0Q
======
cryptonector
Wow, this is a very nice talk. I particularly liked the Q&A, where Guy Steele
went back and explained the type checker shown earlier. And I also loved the
LISP macro backtick-comma inspiration for using underline to "escape"
overline. Brilliant!
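
As I understood it (paraphrasing the slides, so treat this as my
reconstruction rather than Steele's exact rules): an overline abbreviates
an indexed repetition, and an underline inside it marks a subterm held
fixed across that repetition, exactly the way comma escapes backtick in a
LISP macro:

    % My reconstruction in LaTeX, not a quote from the talk.
    % Without an underline, every subterm varies with the index:
    \overline{x : \tau}  \;\equiv\;  x_1 : \tau_1,\ \ldots,\ x_n : \tau_n
    % An underline "unquotes" a subterm, holding it constant:
    \overline{x : \underline{\tau}}  \;\equiv\;  x_1 : \tau,\ \ldots,\ x_n : \tau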

------
heinrichhartman
Reminds me of Donald Knuth's talk at MSRI in 2004 on "Mathematical Notation"
and the choices he made for TAOCP:

[https://archive.org/details/lecture10604#reviews](https://archive.org/details/lecture10604#reviews)

"The speaker presents numerous examples of situations where good notations
enhance mathematical problem solving, as well as a few that have been less
helpful."

------
TeMPOraL
I've finished watching the talk, and I'm not sure if this is serious or if
Guy Steele is just trolling people in a sophisticated way. The TL;DR as I
understand it is:

- Computer science papers have an ad-hoc, ever-evolving notation that
started out in mathematics, is not well defined (every paper ends up using
a different flavour, sometimes even a bunch of mutually contradictory
flavours at the same time), and is subject to weird evolutionary pressures
(like the page limits of CS papers leading to extreme conciseness).

- Particularly interesting are the evolution of overline notation and the
ubiquitous but never-defined use of the ellipsis.

- Guy thinks this is a mess and should be cleaned up. His first
contribution addresses the abuse of overline notation by introducing
_underlines_, which cancel out overlines, thus implementing quasiquoting in
mathematical notation. His second contribution formalizes the meaning of
the ellipsis by defining a transformation from ellipsis form into
over/underline form (sketched below).
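
Concretely, the transformation looks something like this (my reconstruction
in LaTeX, not his exact rules): take typing-rule premises written with an
ellipsis, where the reader has to infer the repeated pattern, and replace
them with an overline that makes the pattern explicit, underlining the part
that stays constant:

    % Ellipsis form: the reader infers the pattern
    \Gamma \vdash e_1 : \tau_1 \quad \cdots \quad \Gamma \vdash e_n : \tau_n
    % Over/underline form: the pattern is explicit, and the
    % underline says \Gamma does not vary across the repetition
    \overline{\underline{\Gamma} \vdash e : \tau}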

Maybe it's because of the way he presented it, or maybe because of my
unfamiliarity with the space, but overall this talk really felt like a
one-hour-long practical joke aimed at CS academics.

~~~
TuringTest
I have not watched the talk in depth, so I can't tell whether it's
trolling, but I have worked with academic CS papers, and I would welcome an
attempt to develop a new symbolism for describing CS theory, one closer to
modern programming languages, or at least to a LISP-like domain-specific
language.

It's true that the language of mathematics was optimized for a different
medium: written formulas on paper. Algorithms are better expressed in
pseudo-code, but to describe the exact effects of a language's instructions
we have to resort to formal semantics (whether denotational, operational,
or axiomatic), which basically describes the workings of a virtual machine
in terms of a temporal logic expressed with mathematical symbols.
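
To make that concrete, here's a toy example (mine, not from any paper or
the talk): a small-step operational semantics for a two-constructor
expression language, written as an executable Haskell definition instead of
inference rules. The rule names in the comments (E-Add and friends) are
hypothetical labels of the kind a paper would attach to its horizontal-bar
rules.

    -- A minimal sketch: small-step operational semantics for a toy
    -- arithmetic language, i.e. the "virtual machine" as plain code.
    data Expr = Lit Int | Add Expr Expr
      deriving Show

    -- One reduction step, mirroring the inference rules a paper
    -- would draw with horizontal bars.
    step :: Expr -> Maybe Expr
    step (Add (Lit a) (Lit b)) = Just (Lit (a + b))       -- E-Add
    step (Add (Lit a) e2)      = Add (Lit a) <$> step e2  -- E-AddR
    step (Add e1 e2)           = (`Add` e2)  <$> step e1  -- E-AddL
    step (Lit _)               = Nothing                  -- a value

    -- Reflexive-transitive closure: keep stepping until stuck.
    eval :: Expr -> Expr
    eval e = maybe e eval (step e)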

The ellipsis in maths is usually an ambiguous symbol whose meaning changes
depending on context. In CS, it typically represents either vectors in
memory or successive recursive calls. Trying to provide a precise meaning
for its usage is a legitimate endeavor; it appears that Steele is using
symbols from vector maths to achieve it. What I get from skimming the video
is that the combination of overlines and underlines is a way to represent
how you would combine nested loops in imperative programming to traverse
the data structure.
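
As a sanity check on that reading (my own sketch, not anything shown in the
talk): one overline is one loop over a sequence, nesting overlines nests
the loops, and an underlined subterm is held fixed across the iterations.
In Haskell, with a hypothetical f:

    -- Reading \overline{\overline{f(\underline{y}, x, z)}} as two
    -- nested loops over xs and zs, with y held constant throughout.
    nested :: Int -> [Int] -> [Int] -> [[Int]]
    nested y xs zs = [ [ f y x z | z <- zs ] | x <- xs ]
      where f a b c = a + b * c  -- hypothetical placeholder body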

~~~
throwaway7645
I'm guessing Ken Iverson's APL wouldn't work here? It can show algorithms
just fine. I'm not sure about type theory, though.

------
grabcocque
Extending the old adage "the best programming language is the one you know the
best" to "the best programming language is the one you created the best".

~~~
visarga
Reminds me of the "no free lunch" theorem from ML. To paraphrase, there is no
language that is better than every other language on all tasks.

[https://en.wikipedia.org/wiki/No_free_lunch_theorem](https://en.wikipedia.org/wiki/No_free_lunch_theorem)

~~~
TeMPOraL
I view this as shifting complexity around.

All Turing-complete languages are essentially equivalent, and there is some
essential minimum of complexity in any computation you'd like to describe.
Any given language makes trade-offs that let some programs be expressed
simply in it, at the cost of making some other programs much more complex
to express.

~~~
TuringTest
Maybe, but comp-sci does not study all possible programs with equal
probability. A notation should be optimized for the kinds of problems that
are studied most often, and should make expressing those easier.

If this pushes complexity towards the expression of randomly generated
programs, that's a good thing (except for the few guys who study randomly
generated programs, that is).

~~~
TeMPOraL
Yes. This is the general mental model I use when explaining that the design
goal of a good general-purpose language is to shift complexity into the
areas almost no one will care about. Your corollary from a sibling comment
can also be restated as: you can get extra simplicity for a particular task
by designing a DSL that shifts complexity outside of its domain.

