
> And DSLs done the right way, via macros, are much better at integrating with tools than any ad hoc interpreted DSLs would ever be. You can easily have syntax and semantic highlighting inferred, with auto-indentation, intellisense, and all the bells and whistles. For no extra cost.

No, you can't. If the macro is arbitrary code, then no tool can offer those things - there's no way to offer intellisense if you don't know which strings are meaningful in the language, and an unconstrained macro could use anything to mean anything.


The tools could have hooks for this.

It doesn't take much imagination.

You know how GNU Bash is customizable with custom completion for any command, so that when you're, say, in the middle of a git command, it will complete on a branch name or whatever?

Similarly, we can teach a syntax highlighter, completer or whatever in some IDE how to work with our custom macro.
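For instance, here is a minimal sketch of such a hook in Common Lisp - the registry and all the names are invented for illustration, not any real IDE's API:

    ;; A per-macro completion hook, analogous to Bash's per-command
    ;; completion functions.
    (defvar *completers* (make-hash-table)
      "Maps a macro name to a function that proposes completions.")

    (defun register-completer (macro-name fn)
      (setf (gethash macro-name *completers*) fn))

    ;; Teach the tools about one particular macro, the same way Bash
    ;; is taught about git:
    (register-completer 'with-transaction
                        (lambda (prefix)
                          (declare (ignore prefix))
                          '("read-only" "read-write")))

    ;; An editor would then call something like:
    (funcall (gethash 'with-transaction *completers*) "re")
    ;; => ("read-only" "read-write")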


Sure - but at that point we've lost a lot of the value of having a standardized language at all. The whole point of a language standard is that multiple independent tools can be written to work with it - that your profiler and your linter and your compiler can be written independently, because they'll be written to the spec. If everyone has to customize all their tools to work with their own code, that's a lot of duplicated effort. Better to have a common standard for how you embed DSLs in the language, so that all the tools already understand how to work with them.


It is a broken approach. A much better way is to have a standard protocol (see SLIME, for example, or IPython, or whatever else) and have your tools use the same machinery as your compiler, instead of reimplementing all the crap over and over again from the language standard.

I expect that few C++ tools that don't use libclang will remain.
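A rough sketch in Common Lisp of what such a protocol could look like - the message shapes here are invented (SLIME's real protocol differs), but the point is that the editor queries the live compiler instead of reimplementing it:

    (defun prefix-p (prefix name)
      (and (>= (length name) (length prefix))
           (string-equal prefix name :end2 (length prefix))))

    (defun handle-query (query)
      "Answer an editor query from the live image's own knowledge."
      (ecase (first query)
        (:complete
         (loop for sym being the symbols of *package*
               when (prefix-p (second query) (symbol-name sym))
                 collect (symbol-name sym)))))

    (handle-query '(:complete "WITH-"))
    ;; => completions drawn from the running compiler, not from a
    ;;    reimplementation of the language standard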


At that point you're essentially advocating treating libclang as the standard. All the usual problems of "the implementation is the spec" apply.


libclang is just an example, maybe not an ideal one. But, yes, I'm advocating for an executable language spec, one that you'd use as a (probably suboptimal, but canonical) implementation.

A good example of such a thing would be something like https://github.com/kframework/c-semantics


Yes, you can - as soon as you start wrapping your arbitrarily complex macros in a custom syntax. I do this easily with every language I add macros and extensible syntax to.


But if the syntax is arbitrarily customizable, you can't possibly have the tools understand how to highlight it, offer intellisense, auto-indent, etc.


Of course I can. If my tools are communicating with my compiler, they know everything it knows. I have a single tiny generic Emacs mode (and a similar Visual Studio extension) that handles all the languages designed on top of my extensibility framework.

It's trivial. Any PEG parser I add on top automatically communicates all the highlighting and indentation data to the tools (and it's inferred from the declarative spec; no additional user input is required). The underlying typing engines do the same, for the nice tooltips and code completion. The compiler core itself does the same with all the symbol definitions, dependencies, etc. Easy.
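As a self-contained illustration of the idea - this is not the author's actual framework; defsyntax-rule and the registry are invented - a declarative rule can carry enough structure that keyword highlighting falls out of it for free:

    (defvar *keyword-registry* (make-hash-table)
      "Maps a rule name to the literal keywords it introduces.")

    (defmacro defsyntax-rule (name &rest production)
      ;; Collect every (kw "...") literal from the declarative spec, so
      ;; a highlighter can query it - no separate editor config needed.
      (labels ((collect (form)
                 (cond ((and (consp form) (eq (first form) 'kw))
                        (list (second form)))
                       ((consp form) (mapcan #'collect form))
                       (t nil))))
        `(setf (gethash ',name *keyword-registry*)
               ',(collect production))))

    (defsyntax-rule if-statement
      (seq (kw "if") expression (kw "then") block
           (opt (seq (kw "else") block))))

    (gethash 'if-statement *keyword-registry*)
    ;; => ("if" "then" "else")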


This sounds very interesting :) And I think Colin Fleming is doing something similar in Cursive? In any case, I'd like to see more of what you're talking about - do you have any more documentation of it, a write-up, blog post, or video?


If I understand it correctly, Cursive is something different: they don't want to run an inferior Clojure image (unlike SLIME, for example), but instead reproduce a lot of Clojure functionality with their massively complex static analysis tools. But I might have it wrong - all the information I have about Cursive came from one of its advocates, who is very aggressively against the very idea of an inferior REPL for an IDE.

I've got some code published, but not that much in writing; I'm planning to fix that some time later. See the stuff at my GitHub account (username: combinatorylogic). Relevant things there are the Packrat implementation, the literate programming tools, and an Emacs mode frontend.


Exactly! You end up defining your macros in a particular restricted subset of Lisp, and your tooling for Emacs and Visual Studio has to know about that particular subset. Other people writing similar macros will no doubt have their own, subtly different subset, and their own integrations for it. But since your way of writing declarative specs for language customization isn't standardized, you can't use each other's tool integrations.

The way you express DSLs is something that needs to be understood by language tooling, so it belongs in the language spec.


No. Tools do not know anything about the restrictions. In fact, they work with a wide range of languages, not just Lisp. The only "restriction" is a protocol, built into the macro expander, syntax frontend, and compiler core.

So, in your rot13 example, the compiler would report all the new identifiers, with their origins, to the tools.


> So, in your rot13 example, the compiler would report all the new identifiers, with their origins, to the tools.

How can the compiler know which identifier connects to which origin, unless the macro complied with some standard/restriction/protocol? From a certain perspective, all I'm suggesting is making these protocols part of the language standard - that is, defining the DSL that's used to define DSLs, rather than allowing macros to consist of arbitrary code.
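To make that concrete, here is a toy "DSL for defining DSLs" in Common Lisp, in the spirit of Scheme's syntax-rules (define-template-macro is invented for illustration). Because a macro body is a pure template rather than arbitrary code, a tool can analyze it without executing anything:

    (defmacro define-template-macro (name params template)
      ;; The template is plain data; expansion is pure substitution, so
      ;; a tool can read the template directly instead of running code.
      `(defmacro ,name ,params
         (sublis (list ,@(mapcar (lambda (p) `(cons ',p ,p)) params))
                 ',template)))

    (define-template-macro my-unless (test body)
      (if test nil body))

    (macroexpand-1 '(my-unless (oddp x) (print x)))
    ;; => (IF (ODDP X) NIL (PRINT X))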


> Yes you can - as soon as you start wrapping your arbitrarily complex macros into a custom syntax.

Well, by that definition you get exactly the same thing if the host language of your DSL is statically typed and doesn't use macros. Custom syntax is custom syntax, and whether tools/IDEs understand it has nothing to do with the host language.


Sorry, I did not quite get what you mean.

Of course macro+syntax extension has absolutely nothing to do with what you can achieve in a language without macros.

And, no, you did not understand. Any custom syntax you add (if the right tools are used, like mine, for example) automatically becomes available to your IDE and all the other tools, because they reuse the same compiler front-end.


> And, no, you did not understand. Any custom syntax you add (if the right tools are used, like mine, for example) automatically becomes available to your IDE and all the other tools, because they reuse the same compiler front-end.

Just being able to execute the macro isn't enough for the IDE, though. E.g., if a macro is "rot13 all identifiers in this block", then sure, the IDE can run it, but it can't offer sensible autocompletion inside the block without understanding more about the structure of the macro.
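For concreteness, here is roughly what that thought experiment could look like in Common Lisp (a hypothetical sketch, not code from any real system):

    (defun rot13-char (c)
      (cond ((char<= #\a c #\z)
             (code-char (+ (char-code #\a)
                           (mod (+ (- (char-code c) (char-code #\a)) 13) 26))))
            ((char<= #\A c #\Z)
             (code-char (+ (char-code #\A)
                           (mod (+ (- (char-code c) (char-code #\A)) 13) 26))))
            (t c)))

    (defun rot13-symbol (sym)
      ;; Leave standard operators alone; rename only user identifiers.
      (if (eq (symbol-package sym) (find-package :common-lisp))
          sym
          (intern (map 'string #'rot13-char (symbol-name sym)))))

    (defmacro with-rot13 (&body body)
      (labels ((walk (form)
                 (cond ((symbolp form) (rot13-symbol form))
                       ((consp form) (mapcar #'walk form))
                       (t form))))
        `(progn ,@(mapcar #'walk body))))

    ;; (with-rot13 (defun uryyb () 42)) defines HELLO. The IDE can see
    ;; that expansion, but it has no way to know it should *suggest*
    ;; rot13ed names while you type inside the block.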


The IDE does not execute the macro - it gets the result of its expansion from the compiler. And the compiler keeps track of all the identifiers and their origins.
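A minimal sketch of what keeping track of origins could mean - the table and names here are invented for illustration:

    (defvar *identifier-origins* (make-hash-table)
      "Maps a macro-produced symbol to the macro that made it and its input.")

    (defun note-origin (new-symbol macro-name source-form)
      ;; Called by the expander whenever a macro mints an identifier.
      (setf (gethash new-symbol *identifier-origins*)
            (list :macro macro-name :from source-form)))

    ;; E.g. the rot13 expansion above would record:
    (note-origin 'hello 'with-rot13 'uryyb)
    (gethash 'hello *identifier-origins*)
    ;; => (:MACRO WITH-ROT13 :FROM URYYB)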


The IDE can autocomplete the rot13ed identifiers from outside, perhaps. But it can't possibly suggest rot13ed identifiers inside the macro block, because it has no way of knowing that that's what the macro does.


Why? You know which macro made the identifiers. You know what this macro consumed. In most practically important cases this is sufficient.

But, yes, you cannot do it with the Common Lisp approach, where macros operate on bare lists rather than Scheme-like syntax objects. The problem here is that the lists have been stripped of the important location metadata. For this reason I had to depart from simple list-based macros and use custom syntax extension with rich ASTs underneath. Still, on top of a Lisp.
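The difference is easy to demonstrate. In plain Common Lisp a macro only ever sees a bare list, with no provenance attached; the rich-AST alternative (sketched here with an invented structure) keeps a source location on every node:

    ;; Plain CL: by the time a macro runs, READ has already discarded
    ;; any file/line information.
    (defmacro inspect-input (&whole form &rest args)
      (declare (ignore args))
      (print form)   ; just (INSPECT-INPUT FOO (BAR)) - a bare list
      nil)

    ;; The rich-AST alternative: every node carries its origin, so an
    ;; expansion can be traced back to the exact text that produced it.
    (defstruct ast-node
      form           ; the underlying symbol or list
      file line column)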


Even with location information, if the IDE is going to offer autocomplete inside the macro, it would need to be able to invert the way the macro transforms identifiers, which is not possible for arbitrary code.

I agree that this is very rarely practically important - but if you think about it, that's precisely an argument that a more restricted alternative to macros should be adequate.


No, it would have to be able to invert ROT-13 to offer what I think is PP's point - sensible completion inside the block.

EDIT: Which is obviously impossible in general if you assume a Turing-complete macro expansion language.



