I'm not sure it's just about the line between using and mentioning the syntax. Template Haskell uses a syntax element called Oxford brackets ([| and |]) to grab the AST for a piece of Haskell. Grabbing the AST that way is very easy, but Haskell still doesn't have the same compile-time macro programming experience as Lisp.
I think the reason Lisp metaprogramming works so well is that Lisp has almost no syntax: just about the simplest reasonable grammar. Lisp syntax is just the AST written out with nested brackets, which means that modifying the syntax of a Lisp program is the same thing as modifying its AST.
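To make that concrete, here's a minimal sketch in Python, using nested lists as a toy stand-in for s-expressions (the representation and the rewrite function are assumptions for illustration, not a real Lisp reader):

```python
# Toy model: a Lisp form as nested Python lists.
# ["*", ["+", 1, 2], 3] reads exactly like (* (+ 1 2) 3).
form = ["*", ["+", 1, 2], 3]

def swap_mul_for_add(form):
    """Walk the nested-list 'syntax'; rewriting the data IS rewriting the code."""
    if isinstance(form, list):
        head = "+" if form[0] == "*" else form[0]
        return [head] + [swap_mul_for_add(x) for x in form[1:]]
    return form

print(swap_mul_for_add(form))  # ['+', ['+', 1, 2], 3]
```

Because the printed form and the data structure coincide, there's no separate AST type to learn: list surgery is macro expansion.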
Fundamentally, you can make accessing the AST as easy as possible, but at the end of the day you're still accessing the AST so you can change the AST. The more complicated the AST, the more complicated the change will need to be.
I think they have somewhat different use cases. Lisp's approach works best if you don't really want to analyze a fragment of code deeply, but just want to specify a new construct by looking one level deep in the AST, like the classic example of implementing "cond" or "if" as a macro.
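A sketch of that one-level-deep style, again with Python lists as an assumed stand-in for Lisp forms (the error fallthrough is made up for illustration):

```python
def expand_cond(form):
    """Rewrite ["cond", [t1, b1], [t2, b2], ...] into nested ["if", ...] forms,
    looking only one level deep, the way the classic cond macro does."""
    assert form[0] == "cond"
    result = ["error", "cond: no clause matched"]  # assumed fallthrough
    for test, body in reversed(form[1:]):
        result = ["if", test, body, result]
    return result

expanded = expand_cond(["cond",
                        [["<", "x", 0], "neg"],
                        [["=", "x", 0], "zero"],
                        [True, "pos"]])
print(expanded)
# ['if', ['<', 'x', 0], 'neg',
#   ['if', ['=', 'x', 0], 'zero',
#     ['if', True, 'pos', ['error', 'cond: no clause matched']]]]
```

Note that the macro never inspects the tests or bodies; it only shuffles the top level of the list, which is exactly the case where Lisp's representation shines.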
But if you actually want to walk recursively down an AST, the fact that it's hard to get a usable AST in Lisp makes metaprogramming difficult and error-prone. For many uses, I don't really want the source code to a Lisp fragment, but the AST, in the sense of a semantically marked-up fragment that's resolved variable bindings and done all the other things a parser does.
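For contrast, here's what a recursive walk looks like over a parsed, typed tree, sketched with Python's ast module (which does some of that marking-up for you, e.g. distinguishing reads from writes via the node's ctx):

```python
import ast

source = "def f(x):\n    y = x + 1\n    return y * z"
tree = ast.parse(source)

# Recursively walk the whole tree, collecting every variable *read*.
# The parser has already tagged each Name node as Load (read) or Store (write).
loads = sorted({node.id for node in ast.walk(tree)
                if isinstance(node, ast.Name) and isinstance(node.ctx, ast.Load)})
print(loads)  # ['x', 'y', 'z']
```

With raw s-expressions you'd have to know the binding rules of every special form yourself before you could even tell a variable reference from a literal symbol.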
After thinking about it, I agree. Sometimes that semantic information or metadata is very useful to have while operating on code. In particular, having type information and pattern matching available can be a godsend.
Hard to get a usable AST in Lisp? The ClojureScript analyzer is 500 lines of code.
For all the meta-programming projects I've taken on - delimited continuations, embedded logic programming, and extensible pattern matching - I'll take Lisp data structures over the actual Lisp AST any day.
Isn't he basically just saying "quote"? In this hypothetical graphical interface, what is the color around letrec bindings? I'm presuming green again, which is an appropriate way of differentiating "here be dragons", but it doesn't really indicate what the semantics are. That green box could just as well be a Clojure vector. I think having separate colors to indicate specific well-known semantics would indeed be less Lispy (but this might not necessarily be a bad thing).
It occurs to me that most of what made/makes Lisp so powerful historically (expression-based and everything-is-an-object, for example) is probably quite different from what differentiates it today (which probably has more to do with the assumptions that Lisps don't force on you).
Imagine for a second that Julia didn't have the :() quote form, and if you wanted to build up some syntax you had to use constructors, something like this:
a + b - c
Expr(:-, Expr(:+, :a, :b), :c)
(obviously not valid Julia, but that doesn't matter for now)
This would still make Julia homoiconic, but the difference between "using" the expression (the first line) and "mentioning" it (the horrible Expr thing) is much greater.
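Python happens to illustrate the same gap nicely, since its ast module only offers the constructor style; building a + b - c by hand looks like this (real ast API, with made-up variable values just so it can run):

```python
import ast

# "Mentioning" a + b - c with no quote form: explicit node constructors,
# roughly analogous to the "horrible Expr thing".
tree = ast.Expression(
    body=ast.BinOp(
        left=ast.BinOp(left=ast.Name(id="a", ctx=ast.Load()),
                       op=ast.Add(),
                       right=ast.Name(id="b", ctx=ast.Load())),
        op=ast.Sub(),
        right=ast.Name(id="c", ctx=ast.Load())))
ast.fix_missing_locations(tree)  # the constructors don't even fill in line numbers

a, b, c = 10, 4, 3
print(eval(compile(tree, "<built>", "eval")))  # prints 11
```

Compare that to just writing a + b - c: "using" is one line, "mentioning" is ten, and that distance is exactly what a quote form collapses.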
Clojure does have more syntax than Common Lisp, but its syntax is still "near at hand" because it's still made up of the things you use every day. Functions use [] for arguments, but that's just a vector, which you use anyway. There's not really much syntax in Clojure where "mentioning" it is harder than "using" it.
But the only real difference between "using and mentioning" seems to be the presence of (quasi)quote. I haven't looked at Julia, but I'm presuming you can do :(a + b - c), which mentions the syntax. But as you point out, the manipulation is still annoying because the syntax tree is strongly typed. In fact, the typing strength of the syntax tree seems roughly equivalent to the amount of "syntax dedicated to semantics" a language has, which is the point that the author starts off trying to refute.