

Compiling JSX with Sweet.js using Readtables - danabramov
http://jlongster.com/Compiling-JSX-with-Sweet.js-using-Readtables

======
jannes
Isn't Facebook's own jstransform library (which is used by their JSX compiler)
supposed to be composable already? They have various ES6 transforms in their
repository. Plus, there are JSX transforms in the React repository. There are
also some third-party transforms on Github – for example es6-destructuring-
jstransform.

Could anyone highlight what the differences between jstransform and sweet.js
are?

jstransform:
[https://github.com/facebook/jstransform/](https://github.com/facebook/jstransform/)

es6-visitors:
[https://github.com/facebook/jstransform/tree/master/visitors](https://github.com/facebook/jstransform/tree/master/visitors)

jsx-visitors:
[https://github.com/facebook/react/tree/master/vendor/fbtrans...](https://github.com/facebook/react/tree/master/vendor/fbtransform/transforms)

es6-destructuring-jstransform:
[https://github.com/andreypopp/es6-destructuring-jstransform](https://github.com/andreypopp/es6-destructuring-jstransform)

~~~
jlongster
There's a big difference: that's basically just a pipeline for AST transforms.
That's not very hard; it's just a series of JS libraries that you pass an AST
through.

That's not actually that useful. In my post, I mentioned adding syntax for
literal persistent objects, using something like Mori data structures. The
syntax could look like this:

* #[1, 2, 3] <- persistent vector

* #{x: 1, y: 2} <- persistent map
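
A rough sketch in plain JavaScript of what such literals could expand to.
`pvector` and `pmap` here are hypothetical stand-ins for real persistent
constructors (e.g. Mori's `mori.vector` and `mori.hashMap`); these toy
versions just freeze shallow copies to suggest immutability:

```javascript
// Hypothetical expansion targets for #[...] and #{...} literals.
// A real implementation would delegate to a persistent-data-structure
// library such as Mori; these stand-ins only freeze shallow copies.
function pvector(...items) {
  return Object.freeze(items.slice());
}

function pmap(obj) {
  return Object.freeze(Object.assign({}, obj));
}

// #[1, 2, 3]      could expand to:
const v = pvector(1, 2, 3);
// #{x: 1, y: 2}   could expand to:
const m = pmap({ x: 1, y: 2 });

console.log(Object.isFrozen(v));  // true
console.log(m.x + m.y);           // 3
```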

An AST pipeline makes the "analyze" phase extensible, but it doesn't do
anything to the parser. You can't actually add syntax, ever. So it's just not
that useful for extending the language. It's good for something like
implementing various parts of ES6.

Actually, it's even better for something like types or modules, which really
need the whole AST. But in my opinion, everything else is better as macros.

sweet.js works on a tree of tokens, and it allows extending actual syntax.
It's also way easier to transform code because it has a pattern matching
language and you don't have to manually wrangle AST nodes.
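For instance, a rule macro in the sweet.js 0.x pattern language matches token
patterns and emits templates without touching AST nodes at all (a sketch;
`swap` is an illustrative name, not something from the post):

```
// Sketch of a sweet.js (0.x) rule macro: the pattern on the left is
// matched against the token tree, and $a/$b are spliced into the template.
macro swap {
  rule { ($a, $b) } => {
    var tmp = $a;
    $a = $b;
    $b = tmp;
  }
}

var x = 1, y = 2;
swap(x, y)  // expands roughly to: var tmp = x; x = y; y = tmp;
```

Hygiene means the introduced `tmp` can't collide with a `tmp` already in
scope at the call site.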

Lastly, sweet.js keeps track of scoping, and in the future it will have
modules, so you will be able to do something like:

```
import JSX from "jsx-reader";
import # from "mori";

// code ...
```

And those language extensions are only available inside the module. Everything
is scoped.

~~~
davedx
"Reliable source maps" - going to give this a try. If it works I'll buy you a
beer.

------
royjacobs
I would love all of this to work cleanly when debugging. A lot of these things
currently work very well, until you need to debug them. Even with tools that
don't do THAT much rewriting (e.g. Traceur converting arrow functions into
regular ones), source maps work pretty unreliably: I usually have to refresh a
few times before they work (and remember my breakpoints).

Is there any traction on this front, something like a system where the source
map actually embeds the AST or the like?

------
skrebbel
Wow! I cannot wait to convert my React code to this. Awesome!

As a complete macro noob, I'm starting to wonder whether readtables would also
allow all of TypeScript to be compiled with sweetjs. Or at least transformed,
without type checking.

~~~
dustingetz
I don't think you have to convert anything; it should just work by replacing
the React build step with sweet.js's build step, though he does say that it
isn't heavily tested yet.

~~~
skrebbel
Yeah, that's really all I mean. But our frontend build setup looks like it was
made by people who have no clue what they're doing (me, at the time), which
might make it slightly more involved than you might expect :-)

~~~
jlongster
Thanks! I personally prefer gulp, and you can use
[https://github.com/jlongster/gulp-sweetjs](https://github.com/jlongster/gulp-sweetjs).
It makes it really easy to integrate (that isn't the one published in npm; I'm
working on that). There's also a grunt loader:
[https://github.com/natefaubion/grunt-sweet.js](https://github.com/natefaubion/grunt-sweet.js),
although that hasn't been updated to support readtables yet.
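
A minimal gulpfile sketch of what wiring this up might look like. This assumes
gulp-sweetjs exposes a plugin function that accepts a list of macro/reader
modules; the exact option names are guesses, so check the plugin's README:

```
// Sketch only: assumes gulp and gulp-sweetjs are installed, and that the
// plugin takes a `modules` option naming the macros/readers to load.
// Option names here are hypothetical.
var gulp = require('gulp');
var sweetjs = require('gulp-sweetjs');

gulp.task('compile', function() {
  return gulp.src('src/**/*.js')
    .pipe(sweetjs({ modules: ['jsx-reader'] }))  // hypothetical option
    .pipe(gulp.dest('build'));
});
```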

------
lhorie
We can do that now w/ Sweet.js? That's really awesome.

I'm gonna port this to work with Mithril (
[http://lhorie.github.io/mithril](http://lhorie.github.io/mithril) ) when I
get a chance.

------
kasperset
Not exactly related to this topic, but I keep confusing React's JSX with the
JSX at [http://jsx.github.io/](http://jsx.github.io/).

~~~
moreati
Ditto, but with
[https://en.wikipedia.org/wiki/ECMAScript_for_XML](https://en.wikipedia.org/wiki/ECMAScript_for_XML)

------
samstokes
Slightly off-topic, but the OP's closing thoughts make great advice to anyone
commenting on / reacting to some new technology:

 _I ask that you think hard about sweet.js. Give it 5 minutes. Maybe give it a
couple of hours. Play around with it: set up a gulp watcher, install some
macros from npm, and use it. Don't push back against it unless you actually
understand the problem we are trying to solve. Many arguments that people give
don't make sense (but some of them do!).

Regardless, even if you think this isn't the right approach, it's certainly a
valid one. One of the most troubling things about the software industry to me
is how vicious we can be to one another, so please be constructive._

So often commenters seem to be people who've never tried the technology in
question reacting to imagined abuses ("Monkeypatching? I'd never allow that in
production!").

~~~
spion
The problem with jsx and sweet.js is the lack of tooling. For example, TernJS
(a static analysis engine I use in emacs) will never understand either, so no
auto-complete or hints.

Is such tooling even possible for something like sweet.js?

~~~
jlongster
Tern works perfectly well with sweet.js. It uses loose parsing, so it
basically just ignores the areas it can't understand.

For the most part, you still get all the info you used to have. Generally you
don't use macros that aggressively modify scope or change the basic rules of
JS that Tern looks for.

You will not get autocompleting on expressions that expand with macros, no.
But you could easily add a plugin to tern that tells it the rules for that
syntax if you really wanted to.

------
Kiro
A bit OT, but without knowing much about React, it looks like you need to put
your markup in your logic. What happened to the separation where markup lives
in its own template files? Seems like a big issue to me.

~~~
baddox
Pete Hunt addresses this specifically in this React talk starting at 3
minutes:

[https://www.youtube.com/watch?v=DgVS-zXgMTk#t=182](https://www.youtube.com/watch?v=DgVS-zXgMTk#t=182)

Basically, the idea is that most JS templating solutions separate technologies
(HTML, JS, templating languages) but don't actually separate concerns, because
the concern is simply generating and updating the DOM regardless of how many
technologies you use.

Most JavaScript frameworks or libraries that handle generating and updating
the DOM, like Ember's two-way binding with Handlebars and Angular's dirty
checking with directives, don't really separate markup from logic. They just
invent a new language to mix with your HTML, and that new language is
deliberately made extremely weak.

~~~
lukeholder
wow beat me by 60 seconds!

