
Show HN: Sucrase, 20x faster smaller-scoped alternative to Babel - alangpierce
https://github.com/alangpierce/sucrase
======
nicoburns
Does anyone have any idea how much performance is being left on the table by
writing these compile-to-js compilers in JS? Could we get another 10x
improvement by writing the transpilers in something like Go/Rust/C++? Frontend
compile performance is beginning to get painful with large codebases...

~~~
carlmr
Depending on what you're doing, yes I think there can be a significant
speedup.

We had a custom parser in our build toolchain in Perl that needed ~40min for a
normal build. I rewrote it in F# (similar performance to C#) and got it down
to 3min. Because I thought it was fun I ported it to Rust, and it now runs in
15 secs (mostly because of memory-safe, zero-copy string handling and
generally better control of memory allocation).

Probably in C# it could have been a bit faster than in F# if you use some of
the optimization capabilities. But I doubt you could get close to Rust. Even
reading one of the files in .NET takes as long as Rust takes for everything.

In C++ it would definitely be possible to reach Rust performance, it's just
way harder to keep your memory intact without the borrow-checking compiler.

Now I don't know how Perl and JS compare, but I'd guess they're in a similar
ballpark for performance.

~~~
ricardobeat
Modern JS engines can be significantly faster than Perl in many workloads.
Perl’s closest-performing neighbours are Ruby, PHP and Python.

~~~
carlmr
I don't know and I'll take your word for JS (on V8). For Perl vs Python my
experience is they are in a similar ballpark, but Perl has an edge on text
processing and Python on numerics (mostly because of better regex engine in
Perl and numpy being pretty good in Python). So for parsing stuff Perl is
still usually the faster alternative.

I still doubt JS/V8 is faster than one of the VM languages (.NET or JVM); they
don't need to be interpreted and have much less dynamic behavior, so they can
optimize better.

~~~
alangpierce
My general intuition is that you shouldn't expect JS to beat Java or .NET, but
you should expect it to be by far the fastest among similarly-dynamic
languages. Browsers have been under fierce competition for many years, and are
backed by well-funded engineering teams, so JS performance in particular has
repeatedly pushed the limits on what kind of optimizations are possible in a
dynamic language. Other dynamic languages haven't had the same incentives.

------
btown
Is there a good way to use this from Webpack?

Edit:
[https://github.com/alangpierce/sucrase/blob/master/integrati...](https://github.com/alangpierce/sucrase/blob/master/integrations/webpack-loader/)

~~~
alangpierce
Yep, looks like you found the plugin! Note that if you're using Webpack 4, you
won't need webpack-object-rest-spread-plugin anymore, since it uses the newest
Acorn, which should handle object rest/spread syntax without needing a plugin.
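For anyone else wiring this up, here's a minimal config sketch of how the linked loader might plug in. The package name and the `transforms` option are assumptions based on the integrations directory and Sucrase's core API; check the loader's README for the actual details:

```javascript
// webpack.config.js (sketch) -- assumes the loader from the linked
// integrations directory is published as "@sucrase/webpack-loader".
module.exports = {
  module: {
    rules: [
      {
        test: /\.tsx?$/,
        use: {
          loader: "@sucrase/webpack-loader",
          // Sucrase's core API takes a list of transforms to apply.
          options: { transforms: ["typescript", "jsx"] },
        },
      },
    ],
  },
};
```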

Annoyingly, Webpack still spends a lot of time parsing the JS files with its
own parser, so you cut down on the transpile time but still need to do a
relatively slow parse of all files. I've thought about making Webpack (or
similar) use a fast parser like the one Sucrase uses, although it certainly
seems like it would be a lot of work.

~~~
the_duke
Do you have any benchmarks showing how much time is spent on webpack
plumbing vs Babel, compared to Sucrase?

As in, how much does this actually buy you in a real world scenario?

~~~
alangpierce
I'm not very optimistic about webpack build times getting dramatically better
with Sucrase in its current state. I've seen more clear benefits when running
tests using sucrase/register. That said, I think it depends quite a bit on
your webpack config. Certainly, if you run Babel single-threaded and don't
cache the output, then you'll probably get a significant benefit, but that's
likely not a realistic scenario.

Some specific numbers I just measured for my codebase at work:

        Babel/TypeScript, cold cache: 66 seconds
        Babel/TypeScript, warm cache: 55 seconds
        Sucrase, cold cache: 52 seconds
        Sucrase, warm cache: 49 seconds

This is on a codebase of about 400,000 lines of code (some JS compiled with
Babel, some TypeScript compiled with tsc). Transpilation is parallelized using
happypack (and I'm running it on a 4-core machine), but webpack
parsing/processing is all single-threaded, so it ends up taking more time.
I've done a little prototyping on using a Sucrase-like approach to speed up
webpack (most of it is indeed parse time), but it would certainly be a project
to get it working in practice.

With a warm cache, it's not running sucrase/babel/typescript at all. I think
the running time difference there is because Babel and TypeScript are emitting
ES5, which is a little more code for webpack to parse. Probably a more correct
comparison would be to configure Babel and TypeScript to target newer JS, but
these should all be seen as rough numbers anyway.
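For reference, one way to do that "target newer JS" configuration on the Babel side would be something like the following sketch (assuming a preset-env version that supports the `esmodules` target; the exact options depend on your Babel version and the browsers you support):

```javascript
// babel.config.js sketch: target ES-module-capable browsers so Babel
// leaves most modern syntax alone, emitting less code for webpack to parse.
module.exports = {
  presets: [["@babel/preset-env", { targets: { esmodules: true } }]],
};
```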

------
bpierre
How does it compare to Bublé?

[https://buble.surge.sh/guide/](https://buble.surge.sh/guide/)

~~~
rich_harris
Bublé creator here. The obvious big difference is that this supports
TypeScript and Flow, which Bublé doesn't. Bublé is designed to compile a
subset of ES6+ into ES5 (the subset of ES6+ that transpiles well without
causing bloat, which excludes some features like generators), whereas this
seems to be focused on more modern targets.

Technically there are a few interesting differences under the hood, such as
Sucrase not wasting time generating a full-blown AST (which Bublé needs to do
in order to handle things like block scoping).
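To make that concrete, here's a toy sketch (emphatically not Sucrase's actual code) of the general idea of patching the source string directly instead of building and re-printing an AST:

```javascript
// Toy illustration only: strip simple TypeScript-style type annotations
// by editing the source string in place, rather than parsing to a full
// AST and pretty-printing it back out. Real tools track tokens and
// scopes; a bare regex like this breaks on object literals, generics, etc.
function stripSimpleAnnotations(source) {
  return source.replace(/([A-Za-z_$][\w$]*)\s*:\s*[A-Za-z_$][\w$]*/g, "$1");
}

console.log(stripSimpleAnnotations("const n: number = 1;"));
// -> const n = 1;
```

Sucrase's real transform works over a token stream with scope tracking; the point is only that the output is produced by editing the input text, so no tree is allocated or traversed.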

Anyway, I won't waffle on any further as I'm sure Alan can explain the
differences better — I just wanted to chime in to say that I'm excited about
Sucrase. I've written a [Rollup plugin][1] for it, and I'm planning to use it
with my TypeScript projects.

[1]: [https://github.com/rollup/rollup-plugin-sucrase](https://github.com/rollup/rollup-plugin-sucrase)

------
gymshoes
Would targeting modern JS runtimes yield faster performance in the browser as
well?

I'm guessing some loading time would be saved if converting code from ES2015
down to ES5 weren't required.

------
fiatjaf
Thank you for making this. Babel really needs some alternatives.

It would be great to have a Browserify plugin (I'm not asking for anything,
just speaking my thoughts).

~~~
alangpierce
Yep, hopefully this will motivate more of a performance focus in general. To
be clear, Sucrase's parser is a slimmed-down fork of Babel's parser, so
Sucrase wouldn't be possible without Babel. Also, Babel has a much broader
scope as a pluginizable code transformation tool, so it's much better suited
for things like prototyping future language features and doing other
nontrivial transformations.

Regarding Browserify support, PRs are welcome. :-)
[https://github.com/alangpierce/sucrase/tree/master/integrati...](https://github.com/alangpierce/sucrase/tree/master/integrations)

------
fold_left
I could imagine using the Jest plugin; it could be useful as a presumably
faster alternative to babel-jest.

~~~
alangpierce
That's the hope! To be honest, I haven't worked with Jest a lot, and the
plugin is sort of a first pass to get things working on the Apollo codebase
(which uses Jest), so e.g. it may not be as smart as babel-jest for things
like caching. Contributions welcome, though!

My specific use case so far has been running Mocha tests on a large codebase,
so it's replacing babel-register (which uses a cache) with sucrase/register
(which doesn't). For that use case, it seems to significantly speed up test
startup time with a cold cache and somewhat speed up startup time with a warm
cache. Just loading and saving the babel-register cache takes a fair amount of
time, at least in Babel 6.
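For anyone wanting to try that swap with Mocha, the change is roughly the following (a sketch assuming Mocha's `--require` flag and the `sucrase/register` hook from the README):

```shell
# Before: Babel 6's require hook compiles files as they're imported
mocha --require babel-register 'test/**/*.js'

# After: Sucrase's require hook does the same transform (no cache)
mocha --require sucrase/register 'test/**/*.ts'
```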

It's good to keep caching in mind when thinking about this stuff. In many
cases, Sucrase won't be faster than loading results from cache. Sucrase helps
with initial startup time and avoids the cache fragility issues that I've seen
with lots of caching systems. And, of course, you could cache Sucrase results
and get something at least as fast as cached Babel.

