
Safely Composable Type-Specific Languages [pdf] - lelf
http://www.cs.cmu.edu/~aldrich/papers/ecoop14-tsls.pdf
======
mattfenwick
It's very interesting and neat and I hope they keep making progress, but one
big question that I'm left with is why this is better than using an embedded
DSL.

This is supposed to be addressed on page 2, but the example is kind of a
strawman:

    
      let webpage : HTML = HTMLElement(Dict.empty(), [BodyElement(Dict.empty(),
        [H1Element(Dict.empty(), [TextNode("Results for " + keyword)]),
         ULElement((Dict.add Dict.empty() ("id","results")), to_list_items(query(db,
          SelectStmt(["title", "snippet"], "products",
           [WhereClause(InPredicate(StringLit(keyword), "title"))]))))])])
    

Although they claim that it's "too cognitively demanding for comfort", there
are many ways it could be improved without resorting to syntax extensions, for
example:

    
      let webpage : HTML = HTML({}, [Body({},
        [H1({}, ["Results for " + keyword]),
         UL({"id": "results"}, to_list_items(query(db,
         ... etc. ...
    

Another reason to prefer an EDSL over syntax extensions for writing HTML is
that an EDSL won't require that you write the close tags!
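
The no-close-tags point can be made concrete with a tiny sketch, here in
JavaScript (all names are invented for illustration; this is not the paper's
code):

```javascript
// Hypothetical HTML EDSL: elements are built by plain function calls,
// and the serializer emits every close tag, so you never type one.
function el(tag) {
  return (attrs, children) => ({ tag, attrs, children });
}

function text(s) {
  // Escape interpolated text so markup can't be injected through it.
  return {
    text: String(s)
      .replace(/&/g, "&amp;")
      .replace(/</g, "&lt;")
      .replace(/>/g, "&gt;"),
  };
}

function render(node) {
  if ("text" in node) return node.text;
  const attrs = Object.entries(node.attrs)
    .map(([k, v]) => ` ${k}="${v}"`)
    .join("");
  const body = node.children.map(render).join("");
  return `<${node.tag}${attrs}>${body}</${node.tag}>`;
}

const [html, body, h1, ul, li] = ["html", "body", "h1", "ul", "li"].map(el);

const keyword = "widgets";
const webpage = html({}, [
  body({}, [
    h1({}, [text("Results for " + keyword)]),
    ul({ id: "results" }, [li({}, [text("...")])]),
  ]),
]);
```

Calling `render(webpage)` yields the whole document, close tags included, and
the `text` wrapper means interpolated strings can't smuggle in markup.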

This isn't to say they're wrong; it's that I think they need to do a better
job of comparing their approach with EDSLs.

------
vog
At first glance, I wondered whether the authors knew about OCaml's Camlp4,
which seems to solve exactly this problem, so my immediate feeling was that
they were just reinventing the wheel.

However, on page 22 they finally provide a short comparison of their approach
with Camlp4 and similar mechanisms in other languages. Now their approach
makes a lot more sense to me. I think they should have done this in a more
prominent place, e.g. in the abstract or the introduction.

~~~
mpweiher
That's sort of how academic papers work. Related work goes into the
unsurprisingly named "related work" section, which is almost always second to
last.

So people scanning papers will often use something like the following path:

(a) abstract (b) end of 1st section, "contribution of this paper" (c) related
work ("do I already know some of this")

-> if it still looks interesting, read the entire paper.

~~~
mjn
Conventions vary by subfield. In my field (AI) it's more common for the
related-work section to go up front, as the 2nd section right after the
Introduction. You first set up/motivate/preview what you're going to do, then
you survey what's already been done in this area, and then you present your
own advance, once you've laid the groundwork for how you're either building
on, or diverging from, the current state-of-the-art. Sometimes it does read
better to put it 2nd-to-last, but IME that's less common in AI. (To make sure
I'm not completely misremembering, I spot-checked 5 random papers from this
year's AAAI proceedings [1], and 5/5 had the Related Work section up front.)

[1]
[http://www.aaai.org/Library/AAAI/aaai14contents.php](http://www.aaai.org/Library/AAAI/aaai14contents.php)

------
chubot
This reminds me of quasi-literals, proposed (accepted?) for the next version
of JS:

[http://wiki.ecmascript.org/doku.php?id=harmony:quasis](http://wiki.ecmascript.org/doku.php?id=harmony:quasis)

[http://www.nczonline.net/blog/2012/08/01/a-critical-review-of-ecmascript-6-quasi-literals/](http://www.nczonline.net/blog/2012/08/01/a-critical-review-of-ecmascript-6-quasi-literals/)

The point is to eliminate content injection bugs (XSS). I suppose it's not
really the same because JS isn't statically typed, but the motivation seems
similar, and it looks similar.

Originally from the E programming language:
[http://www.erights.org/elang/grammar/quasi-overview.html](http://www.erights.org/elang/grammar/quasi-overview.html)
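
Tagged templates did eventually ship in ES6, and the injection-blocking idea
looks roughly like this sketch (the `safeHtml` tag is a made-up name, not a
built-in):

```javascript
// A tag function sees the literal chunks and the interpolated values
// separately, so it can escape only the values -- the template's own
// markup passes through untouched.
function escapeHtml(v) {
  return String(v)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

function safeHtml(strings, ...values) {
  // Interleave literal chunks with escaped values.
  return strings.reduce(
    (out, chunk, i) => out + escapeHtml(values[i - 1]) + chunk
  );
}

const name = '<script>alert("xss")</script>';
const markup = safeHtml`<p>Hello, ${name}!</p>`;
// markup: <p>Hello, &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;!</p>
```

Since the tag function runs at the point of use, a hostile interpolated value
never reaches the output as live markup.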

------
cpfohl
Rapidly goes over my head, but seems really neat. I'd love to see this for a
mainstream language (esp. one I use). Right now the solution seems to be to
write an API that sort of imitates the literal you want (in C#, using fluent
extension methods [i.e., static methods called with the dot operator on the
first argument] to imitate SQL, for instance).
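
A rough JavaScript rendering of that fluent-chaining trick (C# extension
methods replaced by ordinary methods returning `this`; all names invented for
illustration):

```javascript
// Fluent query-builder sketch: each method returns `this`, so calls
// chain with the dot operator and read roughly like SQL.
class Query {
  constructor(table) {
    this.table = table;
    this.cols = ["*"];
    this.conds = [];
  }
  select(...cols) { this.cols = cols; return this; }
  where(cond)     { this.conds.push(cond); return this; }
  toSQL() {
    const where = this.conds.length
      ? ` WHERE ${this.conds.join(" AND ")}`
      : "";
    return `SELECT ${this.cols.join(", ")} FROM ${this.table}${where}`;
  }
}

const sql = new Query("products")
  .select("title", "snippet")
  .where("title LIKE '%widgets%'")
  .toSQL();
// sql: SELECT title, snippet FROM products WHERE title LIKE '%widgets%'
```

Note that a string-building sketch like this only recovers the surface
readability; it doesn't get the injection safety or static checking the paper
is after.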

------
tempodox
Looks interesting, but Wyvern is just too obscure an example. That thing isn't
documented anywhere.

And what potions do you have to quaff to be able to read those Hindley-Milner
type formulae? I still find them incomprehensible.
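
For what it's worth, the judgment forms decode fairly mechanically. The
standard function-application rule (generic, not specific to Wyvern) reads:

```latex
% Premises above the line, conclusion below.
\[
\frac{\Gamma \vdash e_1 : \tau_1 \rightarrow \tau_2
      \qquad
      \Gamma \vdash e_2 : \tau_1}
     {\Gamma \vdash e_1\, e_2 : \tau_2}
\]
```

Read bottom-up: to give the application `e1 e2` type tau2 under typing context
Gamma, show that `e1` is a function from tau1 to tau2 and that `e2` has type
tau1, both under the same Gamma.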

~~~
peterfirefly
Glynn Winskel, "The Formal Semantics of Programming Languages", 1993 -- a bit
dated but still quite good. Covers basic semantics/type theory.

Benjamin Pierce, "Types and Programming Languages", 2002 -- y'know, just your
basic type theory.

Benjamin Pierce (ed.), "Advanced Topics in Types and Programming Languages",
2005 -- Chapter 10 is a juicy 128 pages on Hindley-Milner type inference, good
for both theory and implementation tricks. It looks at HM typing as a
constraint system whose rules are just right, so that any solution you get is
"maximally nice". It also describes variants that are perhaps not quite as
beautiful but are in some ways "better" (i.e., they can statically check more
of your code, or let the programmer express themselves more freely).

If you want to know more about unification, which is how HM type inference and
checking is usually/originally stated:

Franz Baader and Wayne Snyder, "Handbook of Automated Reasoning", 2001 --
chapter 8 is called 'Unification theory'.

Peter Norvig, "Correcting a Widespread Error in Unification Algorithms", 1991
-- many published algorithms were subtly wrong.

Unification is really an example of a term rewriting system:

Franz Baader and Tobias Nipkow, "Term Rewriting and All That", 1998.

It is amazing how much there is to say about a 10-line algorithm (Algorithm W)
/ such a short set of typing rules / such a small constraint system as the HM
type system.
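
For a feel of how small the core machinery is, here is a toy first-order
unifier with an occurs check, sketched in JavaScript (illustrative only, not
taken from any of the references above):

```javascript
// Terms: {var: "a"} for variables, {fn: "f", args: [...]} for
// constructors.  A substitution is a plain object mapping variable
// names to terms.
function walk(t, s) {
  // Chase variable bindings through the substitution.
  while (t.var !== undefined && s[t.var] !== undefined) t = s[t.var];
  return t;
}

function occurs(v, t, s) {
  t = walk(t, s);
  if (t.var !== undefined) return t.var === v;
  return t.args.some((a) => occurs(v, a, s));
}

function unify(a, b, s = {}) {
  a = walk(a, s);
  b = walk(b, s);
  if (a.var !== undefined) {
    if (b.var !== undefined && a.var === b.var) return s;
    if (occurs(a.var, b, s)) return null; // occurs check
    return { ...s, [a.var]: b };
  }
  if (b.var !== undefined) return unify(b, a, s);
  if (a.fn !== b.fn || a.args.length !== b.args.length) return null;
  return a.args.reduce(
    (s2, ai, i) => s2 && unify(ai, b.args[i], s2),
    s
  );
}

const v = (name) => ({ var: name });
const f = (fn, ...args) => ({ fn, args });

// arrow(a, int) ~ arrow(bool, b)  =>  a := bool, b := int
const s = unify(f("arrow", v("a"), f("int")),
                f("arrow", f("bool"), v("b")));
// a ~ arrow(a, int) is rejected by the occurs check:
const bad = unify(v("a"), f("arrow", v("a"), f("int"))); // null
```

Unifying `arrow(a, int)` with `arrow(bool, b)` binds `a := bool` and
`b := int`, while `a` against `arrow(a, int)` fails the occurs check, which is
exactly the kind of subtlety those references dwell on.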

~~~
tempodox
Cool, thanks for that list! I also found
[https://github.com/tomprimozic/type-systems](https://github.com/tomprimozic/type-systems),
which has OCaml implementations (including Algorithm W).

