
The Controversial State of JavaScript Tooling - lolptdr
https://ponyfoo.com/articles/controversial-state-of-javascript-tooling
======
chipotle_coyote
Something that's been concerning me about the current approach to web
development in particular is the accumulation of unacknowledged -- and
increasingly unmeasurable -- technical debt.

I'm not sure that's _precisely_ the right phrase, but here's what I mean: we
all (should) know about technical debt in our own projects, but we also know
that every project accumulates technical debt. When we build our projects on
top of other projects, our work now has the potential to be affected by the
technical debt -- bugs, poor optimizations, whatever -- in the underlying
projects.

Obviously, this has always been true to varying degrees. But we've reached a
point where modern web applications are pulling in _dozens_ of dependencies in
both production and development toolchains, and increasingly those
dependencies are themselves built on top of _other_ dependencies.

I don't want to suggest we should be writing everything from scratch and
constantly re-inventing wheels, but when even modest applications end up with
hundreds of subdirectories in their local "node_modules" directory, it's hard
not to wonder whether we're making things a little...fragile, even taking into
account that many of those modules are related and form part of the same
pseudo-modular project. Is this a completely ridiculous concern? (The answer
may well be "yes," but I'd like to see a good argument why.)

~~~
Touche
More simply put, stability is not valued in the front-end world.

It's interesting to compare this with the dominance of Microsoft in the
enterprise world, where IT runs the show: that dominance was established
largely on the back of stability. IT is run by managers who can get fired if
instability costs the company money.

In the front-end world managers can't keep up with the changing landscape and
therefore cannot make stability based decisions (in terms of software
stability). Instead, in my experience, they make decisions based on how easy
it is to hire (as cheap as possible) developers.

So, as a proxy, _popularity_ drives the front-end, because developers who have
the time to keep up with the latest trends (i.e. mostly young 20-somethings)
are the ones who get hired.

~~~
soft_dev_person
It's a bit ironic to praise Microsoft for stability in this context, when they
have probably played a major role in causing instability in the web front-end
world for more than a decade.

~~~
chc
I thought most people's complaint was that Microsoft caused stagnation, not
instability. Explorer was _too_ stable for most developers' tastes.

~~~
d215
Most people's complaint was that Microsoft was trying to push their own
proprietary standards (ActiveX, anyone?) upon the web at large. Also, their
adherence to web standards at the time IE6 came out was so bad that it caused
almost insurmountable instability.

------
lhnz
The problem with hypermodularization is that it takes time to make a decision
on which module to use for every task. If you have to do this with 100s of
modules, you're wasting a lot of valuable time.

Does anybody know a quick way of making the right decision?

NPM is like a crowded graveyard nowadays (or like .com domain names, where all
of the good module names were taken a long time ago and now all the
best-practice modules have completely irrelevant names). There are thousands
of buggy modules whose development has stalled, and it's easy to waste a lot
of time trying to separate the wheat from the chaff. I personally sometimes
have to open up 5+ GitHub repositories to check their last commit, number of
contributors, interfaces, code quality, unanswered issues, etc. Only after
doing so am I able to make a decision.

In terms of knowing what's cutting-edge practice it seems you have to watch
Twitter a lot and be careful not to follow every single bad idea.

I can't imagine what it'll be like to search for a module to handle something
as common as config in a couple of years. Even when you constrain yourself to
something like '12 factor config' there are many different implementations.

Don't even get me started on the insane assembly required to get webpack,
babel, cssmodules, and postcss to all work together.

The problem is only going to get worse.

~~~
wwweston
> I personally sometimes have to open up 5+ Github repositories to check their
> last commit, number of contributors, interfaces, code quality, unanswered
> issues, etc.

Only one of these, it seems to me, is likely to be consistently related to the
quality of the module.

~~~
davedx
Don't keep us in suspense. Which one?

~~~
wwweston
Honestly, I think it's better as a question/koan than as explained. Asking
yourself how good each of these signals really is and how it could fail is
probably much better than having a random internet commentator try to talk
you into it.

But:

- last commit: I'm not surprised to see this here. Almost the entire industry
is deeply invested in the idea that software can never be done, only
abandoned, so deeply it's almost invisible. And projects with enough surface
area likely are like the proverbial shark: either moving or dead. But for
modules with limited, well-defined functionality, reaching a steady state
where the project is actually done and updates are rare should actually be a
sign of quality.

- number of contributors: again, I suspect that where this coheres with
quality, it's probably correlated with size/surface area. Plenty of limited,
well-defined projects probably are good with one or a handful of contributors.

- interfaces: are we talking about presentation of the project? Or the UX of
an app? Either one could be a sign of overall craftsmanship, or it could be a
sign that the author is concerned with appearances/marketing.

- code quality: tautologically true (code quality is related to project
quality) but not helpful. One might easily mean "do the authors of this module
follow my favorite code style guide," in which case I think this is _extra_
likely to lead you astray.

- unanswered issues: unanswered issues seem like a great signal. If they're
present (and real), the author(s) either can't fix them, or the project is
abandoned while actually needing improvement. Inversely, if there are
_answered_ issues, the project is being used and attended to.

------
jordanlev
I think the JavaScript ecosystem has a fundamental difference from all other
languages/communities that came before it: it is universal. Hence I think a
lot of the debates raging about the right tools come down to different people
using it for different purposes.

I doubt that everyone can consolidate on just one set of tools, because in
javascript-land "everyone" means something different than other places. How
can a tool that is good for a front-end designer who needs basic DOM
manipulation also be the right tool for someone building an entire application
in js and only using HTML as a delivery mechanism for their app bundle?

I wish people would recognize in these discussions that their use case might
be different from others', and instead of talking about "the best tools", talk
about "the best tools for this class of applications".

So hopefully the toolset could be consolidated down to one clear choice _for
each class of usage_. Then the biggest decision to make is deciding which type
of application it is you're building.

~~~
davedx
This is definitely part of it. JavaScript tooling encompasses back end, front
end (progressive enhancement inside server generated web pages), front end
(large single page apps), and a bunch of language flavours (ES5, ES6, ES7,
TypeScript, JSX, to name a few). So it's understandable the tooling ecosystem
is large and diverse, and that plumbing it all together can result in
premature hair loss.

We're using ES5 with AngularJS at work, and it's like a breath of fresh air ;)

------
hodwik
The web community is flooded with negativity because it was swamped by kids
with overly idealistic, unrealistic, and heroic ideas about what the web was
going to be post-Facebook.

Those people then realized that the web is, like all things, both real and
imperfect. So now they're upset. This is all part of growing up.

People who have been involved with the web for a while are not in any way more
jaded than before. They already witnessed PHP, Perl, Java applets, Flash, the
browser wars, and so on.

~~~
bshimmin
I don't think I agree. I've been developing stuff for the web since the
mid-90s, when TABLEs were just about starting to be a thing and Perl was
definitely the preferred choice on the server. There has been plenty of
excitement along the way, undeniably, but it feels like HTML5 took forever to
get to where Flash was, JavaScript on the server still underwhelms me, and it
seems almost daily to get more complex and full of enterprise beans, browsers
are far from equal, the CSS3 spec isn't finished, developing for the mobile
web is often a pretty wretched experience... and my clients still ask me to
make their logos bigger and complain that important stuff is below the fold.
Am I jaded? You bet.

But hey, at least we can centre things vertically in CSS with Flexbox now
(apart from the dubious browser support, of course).

~~~
freshyill
Flexbox took forever to arrive but, in its defense, support is at 95% these
days.

[http://caniuse.com/#feat=flexbox](http://caniuse.com/#feat=flexbox)

~~~
darkmarmot
I love flexbox (compared to what existed prior) -- but just watched layouts
break on the release of Chrome 48. Yay! :)

------
alextgordon
> People take libraries like lodash – or jQuery, as we analyzed earlier – and
> insert the whole thing into their codebases. If a simple bundler plugin
> could deal with getting rid of everything in lodash they aren’t using,
> footprint is one less thing we’d have to worry about.

If you use the Google CDN, why does it matter how big jQuery is? If N people
each use their own "smaller" M-byte copy of jQuery, browsers have to download
M*N bytes, as opposed to 0 bytes if everyone uses the cached full version. A
profound waste of bandwidth.

My advice: use the Google CDN, and for less common stuff use cdnjs. Don't
adulterate libraries!

~~~
rapind
There are quite a few disadvantages to using a CDN like Google.

- Delay for DNS resolution and the new TCP connection could be non-trivial
(some tests show 300ms+).

- A JS CDN is also likely tracking your traffic, and you may not want it to.

- No offline dev environment (on the plane).

- Server-run (Phantom etc.) tests might run super slow if they have to pull in
a remote JS library.

- Probably issues when it comes to apps that are meant to work offline,
although I think this is solved with service workers proxying the CDN request.

And if the recommendation is to continue loading large libraries from a CDN,
never mind loading only what you need, well, there's still a memory cost. If I
have 5+ SPAs running in tabs that are each fairly complex (you know, like
Gmail), it does start to add up. Because of the way that tabs are sandboxed, I
_believe_ this may mean 5 copies of jQuery or React or Angular or whatever.

~~~
SapphireSun
I've also encountered problems with China blocking jQuery or fonts from
Google.

~~~
Grue3
This is solvable by also hosting a local copy and loading it if the CDN is not
available. Example (from my website):

        <!-- try the CDN first; if window.jQuery is still undefined
             afterwards, fall back to the locally hosted copy -->
        <script src="//ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js"></script>
        <script>window.jQuery || document.write('<script src="/js/vendor/jquery-1.10.2.min.js"><\/script>')</script>

------
justaaron
One should consider the possibility that the entire web development industry
is a giant busy-work factory making half-baked kludges for a fundamental
problem that will never be papered over: HTTP is a stateless protocol for
sending hypertext markup, meta-data, and context for resources in a folder
somewhere. It's a file-system thing. A web browser is a
declarative-markup-tree rendering engine with scriptable nodes, and the
language chosen for scripting is an accident of history. Using HTTP and web
browsers in ways they were never intended to be used is possible, albeit
painful. Now that we have virtual DOMs and isomorphic platforms like
Clojure/ClojureScript and we compile to JS, now that we have JS on the server,
now that we have our head so thoroughly up our ass that we forgot the point,
now we can consider the circle of ridiculous nonsense complete...

The world wide web took off, and we have to live with its technical debt,
or...

The solution is simple, bold, and risky: 1) pick a port (nowadays that's even
a joke; we tunnel everything over port 80); 2) pick a protocol with some
future (hey, let's just tack WebSockets on as an upgrade, gradually get
browser support, etc.); 3) keep moving...

I am not ultra impressed with the web of 2016... The web of 1996 was way
cooler. I want a VRML3 rendering engine for a browser.

~~~
justaaron
Oh, don't even get me started on build tools. Nowadays we have build tools to
build build tools.

I'm sorry, but when FRONT-END DEV considers build tools standard we are LOST
lost lost...

How many JS libraries and CSS scripts do we really need to embed in a page?
How many of those functions or classes are even being used in that page? Why
do I have to scroll to view source on a page with mostly text and a few
colored boxes?

Hand-coded HTML and CSS is not that hard, folks...

It's just the habits, the frameworks, etc...

1/4 of the web is powered by WordPress, wtf!?

~~~
justaaron
In a nutshell, I think, with some risk, that some innovators could literally
pick a port, pick a protocol, and build a different kind of browser... once
there is some content for that platform, it's a matter of time before early
adopters download one of this new class of browsers, and so forth... it's how
it happened the first time, and it's how pretty much everything happened...
even games (Second Life) etc.

and meanwhile, the web can go back to being a directory preview/ resource
hyperlink web...

------
stcredzero
_> Tree-shaking is a game breaker_

To my knowledge, this has never, ever, worked well enough in a dynamic
environment. Smalltalkers have spent over 3 decades trying to get this to
work. What became the Smalltalk industry standard? Some form of code loading,
often based on source code management.

Anyone who is doing tooling/library work in a dynamic environment needs to
delve into the history of Smalltalk and ask if it was already tried and what
the problems were. Chances are, it was already tried, and that there's useful
experiential data there.

~~~
arohner
All of Google, and most ClojureScript users would like to disagree with you.

Google Closure has been successfully tree-shaking since it was released, in
2009. ClojureScript uses Closure for an optimization path, so the majority of
CLJS apps in production (CircleCI, Prismatic, to name a few) use tree-shaking
on every deploy.
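
For anyone unfamiliar with the term, here's a minimal sketch of what
tree-shaking means in ES2015-module terms (any static-analysis bundler,
Closure included, works along these lines):

      // math.js -- two named exports
      export function square(x) { return x * x; }
      export function cube(x) { return x * x * x; }

      // main.js -- only square is imported, so a tree-shaking bundler
      // can statically prove cube is unused and drop it from the output
      import { square } from './math.js';
      console.log(square(4));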

~~~
Silhouette
It's not as simple as you're suggesting. With Closure you sometimes have to
rewrite otherwise correct and idiomatic JavaScript significantly to avoid it
breaking during compilation.
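
The classic example is property renaming under ADVANCED optimizations: dot
accesses get renamed while quoted string accesses don't, so otherwise-valid
JS that mixes the two breaks silently:

      var settings = { pollInterval: 1000 };
      settings.pollInterval;    // renamed by Closure, e.g. to settings.a
      settings['pollInterval']; // left as-is -- now reads undefined
      // code compiled with Closure must pick one style consistently
      // (or declare externs), even though both are idiomatic JS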

~~~
arohner
If you design your app to use Closure from the start, it's not a problem at
all, and as a sibling comment said, CLJS apps get it mostly for free.

The entire JS toolchain lost five to ten years by not adopting Closure. I
suspect that in a year or two, someone will release a webpack plugin that does
the exact same thing Closure did, with the exact same constraints.

------
TimJYoung
Speaking of controversial: I think things are going to keep being bad until
developers realize that the problem isn't technical, but rather economic. Back
before the current "give it all away for free" trend became a thing,
commercialization of software was a given and allowed "winners" to emerge from
the chaos. The merits of the "winners" aren't important here; it's the
stability that comes with them. As long as everything is given away for free
and there are no barriers to entry, you'll keep ending up with chaos and
unmanageable churn, whereby your job as a software developer has morphed into
software tester/evaluator for every single piece of functionality that you
need and don't want to write yourself.

When you have to actually put your money on the line and have an actual
business presence on the web, it's a completely different mindset from "I'm
going to write this small library and pop it up on GitHub for free, so who
cares if there's documentation or if the software even works as described".
The fact that there _are_ developers that are professional and thorough and
_still_ give away their software is a minor miracle. But, it's not wise to
count on the charity of such developers for the long-term because it isn't
realistic. As more and more one-offs are created, it becomes much harder to
distinguish one's software from the rest of the pack, so developers will
simply not even attempt to do so. The Apple App Store is a perfect example of
this problem.

~~~
cgh
Python and gcc are free software too. Lots of stability in the world of Python
and C. Hopefully you don't think free or open source software is somehow
limited to the world of Javascript.

Here's why there is so much churn in the JS world: everyone is attempting to
polish a turd.

~~~
TimJYoung
I don't think those two are good examples. They both had significant
_commercial_ ecosystem support, either in the form of a large company that
pushed it (Google with Python) or in the form of other commercial compilers
that promoted the ecosystem (lots of commercial C compilers over the years -
MS, Borland, Watcom, Metrowerks,....).

------
atemerev
Babel got it wrong.

Hypermodularisation is a good thing in user-facing code. There we should
strive to shave off every last byte.

But Babel is a tool for developers. We don't need configuration explosion and
endless plugins. We need all the batteries included. The very purpose of Babel
is "hey, I want to write hip code like the rest of the cool kids on the block;
now, let it run everywhere". Who needs to configure that?

~~~
LewisJEllis
I agree that Babel doesn't get every single thing right, in much the same way
that everything manages to get something wrong, but I don't think you're
giving due credit to the benefits of its modularity.

Another "purpose" of babel is "hey, I want to implement an upcoming ECMAScript
feature in an extensible and relatively self-contained way so I don't have to
go digging deep inside someone's codebase."

Or, it's also "hey, I only need these 3 features and I want my build times to
stay as low as possible."

For the use case you describe, there are presets provided to make that quite
simple.
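
For reference, with Babel 6 that route is a couple of npm installs
(babel-preset-es2015, babel-preset-react) plus a one-line .babelrc:

      {
        "presets": ["es2015", "react"]
      }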

------
sotojuan
It's interesting how none of the JS/tooling fatigue discussions mention Ember,
which has one public-facing tool.

~~~
anonyfox
So true. Ember is the antidote right now for the frontend land.

- MVC (Backbone's big thing)
- Two-way binding (Angular's big thing)
- Components (React's big thing)
- SSR like the others now (FastBoot)
- High performance with the Glimmer engine (should beat React in theory)
- Books.
- A dedicated package repository (Ember Addons)
- Actually community driven
- Stable + mature (+ seamless upgrade paths)
- And many more advantages

Plus, as you said, the ember-cli tool, which gets you up to speed fast. The
tooling makes the final success; just have a look at Go... the language
appears fairly weak/trivial, but the tooling is _excellent_.

I could freak out every time I have to set up a React project, weighing
everyone's opinions about every little package for every use case. IMO React
shifts the complexity from the UI to the tooling. Ember is just perfect and
the only real option I could recommend right now, and I'm just sick of
thousands of "me too!" approaches for everything; this extreme diversification
grinds actual progress to a halt.

~~~
realharo
Last time I had to use ember (about 2 months ago), the ember-cli tool had
_major_ performance issues. I remember rebuilds after a single line of code
was changed taking almost 10 seconds. It also generated almost 10 gigabytes of
temporary files over time. Plus I remember an issue with import paths not
matching real filesystem paths, which was messing with my IDE (and my brain
until I figured out what was going on).

It's a good tool when it works properly, but they really need to work on those
rough edges a bit.

~~~
orf
> I remember rebuilds after a single line of code was changed taking almost 10
> seconds.

Do you happen to be on Windows? My Ubuntu laptop compiles changes to our
pretty large app in under a second, but on Windows it does take up to 10
seconds.

~~~
realharo
Yes, it was a Windows machine. I did all the "make it faster" steps, like
running the recommended script that adjusts Windows Defender and search
indexing configuration, but in the end it was still very slow, even on a
fairly fast SSD.

~~~
hatsix
There are several teams at MS using Ember (and ember-cli), and they've been
PR'ing improvements. (Source: MS hosted the last Seattle Ember meetup:
[http://www.meetup.com/Ember-js-Seattle-Meetup/events/2278796...](http://www.meetup.com/Ember-js-Seattle-Meetup/events/227879633/))

What helped me the most was moving to Node 4+, which made memory management
MUCH better.

------
pbowyer
A fantastic post, if only for finding someone else talking about the downsides
of hypermodularization.

> Unfortunately, the spirit and praise of the web in Remy’s post isn’t shared
> by many of these articles. To the contrary, the web development community is
> flooding in skepticism, negativity, and pessimism.

I started building on the web a year or two after Remy. I believe one reason
for the lack of praise is the reason people come to the industry now. He and I
came because we loved the web; we loved to see what could be done.

Today, the web's an entirely different commercial being. People do this as a
job (as do I, which I'm very thankful for), they have less time to see what
can be done (constructive contributions take effort) and time is money, so
let's develop fast and move on, slagging off everything as we go.

I've been jaded, I'm guilty of being negative about everything.

But I do see hope.

~~~
kasey_junk
I heard similar opinions in 1998. I think it probably has less to do with the
state of the industry & more to do with the state of the career of the
observer.

------
gsmethells
Hypermodularization sounds like the latest euphemism for Dependency Hell (tm).

~~~
EvanPlaice
Dependency hell is caused by the inability to sanely identify and (if
necessary) support multiple versions of a dependency.

The primary cause of dependency hell comes from installing and using global
dependencies because there's no way to predetermine all of the dependencies
that every module and/or application on a system will use.

The JavaScript ecosystem actively discourages using global dependencies except
for CLI tooling.

The article is talking specifically about an optimization problem: how to
minimize source size by only importing code that is used directly in the
application.

AFAIK, no other platform attempts to solve this problem, because it's a
non-issue except when you're sending code over a network to be executed
remotely.

For example, imagine if you had to send the entire Java Runtime Environment
over the wire every time you loaded a website.

~~~
paulddraper
npm3 installs flat, right?

~~~
EvanPlaice
Yep, NPM is transitioning to flat in V3. Partly because of the nested-folder
path-length limits on Windows, partly to reduce the ridiculous amount of
dependency nesting that occurs under the current model.
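
Roughly, the difference looks like this (an illustrative layout, with made-up
package names):

      node_modules/          # npm 2: each package nests its own deps
        a/
          node_modules/
            lodash/          # a's private copy
        b/
          node_modules/
            lodash/          # b's duplicate copy

      node_modules/          # npm 3: deps hoisted flat when versions allow
        a/
        b/
        lodash/              # one shared copy; version conflicts still nest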

JSPM already uses a flat dependency structure. If you look at how it maps
dependencies (and specific versions of dependencies) in config.js it makes a
lot of sense. The only feature it's missing is the ability to easily search
and list multiple versions of the same dependency.

------
swanson
Are there any documented cases of a business failing because their JS payload
was too large? I get that smaller code is easier to understand/work with, but
I've never been able to internalize the desire for a small payload -- it just
doesn't seem like it ever matters outside of philosophical reasons for saving
users' bandwidth (especially if it comes with a steep tooling cost).

It strikes me as a technical pursuit in search of a problem -- but I'm
certainly willing to be convinced otherwise.

~~~
arohner
Google found a 0.5 second delay caused a 20% decrease in repeat traffic, which
persisted after the delay went away
([http://glinden.blogspot.com/2006/11/marissa-mayer-at-web-20....](http://glinden.blogspot.com/2006/11/marissa-mayer-at-web-20.html))

Amazon found that for every 100ms slower the site loaded, they lost 1% in
revenue:
[http://www.gduchamp.com/media/StanfordDataMining.2006-11-28....](http://www.gduchamp.com/media/StanfordDataMining.2006-11-28.pdf)

Walmart.com found a very large change in conversion rate based on page load
times
([http://www.slideshare.net/devonauerswald/walmart-pagespeedsl...](http://www.slideshare.net/devonauerswald/walmart-pagespeedslide),
slide 37)

My own customers have seen a 4x difference in conversion rate when grouped by
page load time (i.e. the same shape as the Walmart slide 37 above, where the
peak group converts 4x higher than the lowest-performing group).

Your business probably won't fail because it's slow, but it will _certainly_
make less money because it's slow.

~~~
draw_down
But in this context, the issue becomes how much JS bundle weight it takes to
cause a 0.5 second delay. And there are many other things that can slow down
page load time (and especially _perceived_ page load time) that either don't
involve JS at all, or are not directly caused by the amount of JS that gets
loaded.

~~~
arohner
Depends on your connection. I gave a talk about CLJS page speed and
server-side rendering at the recent Clojure/conj
([https://www.youtube.com/watch?v=fICC26GGBpg](https://www.youtube.com/watch?v=fICC26GGBpg)).
Trying to load my site on the conference wifi, 200kB took 3 seconds = 60kB/s,
so 30kB would have been enough.

"OK, that's not real world." Today, the cable guy is here fixing my home
internet, so I'm tethering my iPhone 5s. Loading the .js for my site took 40kB
in 143ms = 300kB/s, so 150kB would have been enough.

"Still not real world." Fine. Here's the waterfall graph of a visitor from
Canada to rasterize.io today
([https://s3.amazonaws.com/static.rasterize.com/canada-waterfa...](https://s3.amazonaws.com/static.rasterize.com/canada-waterfall.jpeg)).
Loading the .js took 40kB in 60ms = 666kB/s, so 300kB would have been enough.

So the budget is certainly less than the half a meg of JS that is very common
to see in SPAs.

Certainly, there are tons of other things that can slow down a site. But the
JS is one of the "easiest" to solve, because devs are responsible for it, and
it has well-known solutions that people don't apply consistently (reduce
dependencies, serve only one file, use webpack/Closure, use a CDN).
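
For what it's worth, the single-minified-bundle part is only a few lines of
config; a sketch against webpack 1.x (the entry path is made up):

      // webpack.config.js -- build one minified bundle from one entry point
      var webpack = require('webpack');
      module.exports = {
        entry: './src/main.js',
        output: { filename: 'bundle.js' },
        plugins: [new webpack.optimize.UglifyJsPlugin()]
      };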

------
szines
I've been using ES6 modules in Ember.js development for two years now. Using
Ember CLI has simplified the dev process enormously. App development is so
simple and fun with Ember. I don't understand why people cry about it; the
solution exists, they should just use it. That's all.

------
pspeter3
At Asana we now just use Bazel
([http://www.bazel.io](http://www.bazel.io)) and TypeScript for our front-end
code. It made the tooling dramatically simpler.

------
napperjabber
One of the major differences between a junior and a senior, IMO, is their
ability to tell you what each one of those dependencies solves. If they don't
know the sub-dependencies by heart, they didn't read the code, or at least
never looked it up beyond understanding the API.

Then you have the maintenance developer, and everyone loves 'em. He just
figures things out and helps you improve your code while you scream at him for
not knowing the 'bigger picture'. Fun times.

------
haberman
In my experience some aspects of the JavaScript experience are _great_, and
others are _terrible_.

npm is _great_. It gives you an easy way of specifying your dependencies in
your source tree, and makes it extremely easy for people who check out your
repo to obtain them. People can run "npm install" and now they have a copy of
all your dependencies in "./node_modules". It composes nicely too: "npm
install" also pulls the dependencies of your dependencies.
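
For the unfamiliar, all of that flows from a small manifest checked into the
repo; "npm install" reads the "dependencies" map and populates
"./node_modules" accordingly. A made-up illustration:

      {
        "name": "my-app",
        "version": "1.0.0",
        "dependencies": {
          "lodash": "^4.0.0",
          "react": "^0.14.0"
        }
      }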

Babel is _great_. Sure, I've heard some complaints lately about their latest
changes, but so far this hasn't affected me as a user. Babel for me means that
I get to write using the most modern ES6/ES7 features and then compile to ES5
for compatibility. For me it works great and mostly hassle free.
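
As a made-up illustration of the kind of transform involved (the exact output
depends on your Babel version and settings):

      // ES6 input:
      const add = (a, b) => a + b;

      // roughly what Babel emits as ES5:
      var add = function add(a, b) {
        return a + b;
      };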

The frameworks themselves are _great_. Not perfect, sure, but there are lots
of great ideas floating around in React, Angular, d3, moment.js, etc. and the
packages built on top of them. Whatever you want to do, there is a library out
there that someone has put a lot of love into. There is a lot of choice --
yes, maybe sometimes a little bit too much, but I'd rather have that than too
little.

Flow is _great_ (and I hear TypeScript is too, and getting better). I can't
tell you how nice it is to be able to declare static types when you want to
and hear about type errors at compile time. Maybe not everybody's cup of tea,
but I love it.
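
A tiny example of what that looks like with Flow annotations:

      /* @flow */
      function len(s: string): number {
        return s.length;
      }
      len(42); // Flow flags this at check time: number vs. string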

The build systems, minifiers, test runners, etc. are _terrible_. By far the
worst part of JS development for me is figuring out how to glue it all
together. When I try to figure it out it's like entering an infinitely complex
maze where none of the passages actually lead anywhere.

--

For example, let's say you want to run some Jasmine tests under PhantomJS.
Jasmine is a popular unit testing framework and PhantomJS is a popular
headless browser, scriptable using JavaScript. Both very cool technologies,
but how can you use them together? This is a real example: it's something I
really wanted to do, but in the end I literally could not figure out how and
gave up.

PhantomJS claims that it supports Jasmine
([http://phantomjs.org/headless-testing.html](http://phantomjs.org/headless-testing.html)),
though it gives several options for test runners: Chutzpah,
grunt-contrib-jasmine, guard-jasmine, phantom-jasmine. Time to enter the maze!

Chutzpah looks promising
([http://mmanela.github.io/chutzpah/](http://mmanela.github.io/chutzpah/)) --
it says it lets you run tests from the command line. It says it "supports the
QUnit, Jasmine and Mocha testing frameworks" and you can get it by using
"nuget or chocolatey". Dig a little deeper and it starts to become clear that
this is a very Windows-centric tool -- NuGet says it requires Visual Studio
and Chocolatey is Windows-only. Our maze has run into a dead end.

Moving on to grunt-contrib-jasmine. I don't _really_ want to use this because
I'm currently using Gulp (Grunt's competitor), but let's check it out. We end
up at this page
([https://github.com/gruntjs/grunt-contrib-jasmine](https://github.com/gruntjs/grunt-contrib-jasmine)).
This page is sort of a quintessential "JavaScript maze". It contains a lot of
under-explained jargon and links to other plugins. And it gives me no idea how
to do basic things like "include all my node_modules" (maybe I should list
each ./node_module/foo dir explicitly under "vendor"?)

Moving on to guard-jasmine, I end up at
[https://github.com/guard/guard-jasmine](https://github.com/guard/guard-jasmine),
and it's clear now that I've entered a Ruby neighborhood of the maze:
everything is talking about the "Rails asset pipeline", adding Guard into
"your Gemfile" (I don't have a Gemfile!!). I really don't want to introduce a
Ruby dependency into my build just for the privilege of gluing two JavaScript
technologies together (Jasmine and PhantomJS).

The final option in the list was phantom-jasmine, bringing us here:
[https://github.com/jcarver989/phantom-jasmine](https://github.com/jcarver989/phantom-jasmine).
It's been a while, so I don't remember everything I went through trying to
make this work. But I was ultimately unsuccessful.

~~~
__derek__
I don't know if I'm missing something, but why not use Karma?[1] It includes
Jasmine support out of the box, lets you run tests in PhantomJS, and offers a
CLI to make project initialization pretty simple.

[1]:
[https://www.npmjs.com/package/karma](https://www.npmjs.com/package/karma)
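
For reference, a minimal config along those lines (a sketch; it assumes the
karma-jasmine and karma-phantomjs-launcher plugins are installed from npm,
and the file globs are made up):

      // karma.conf.js
      module.exports = function(config) {
        config.set({
          frameworks: ['jasmine'],                      // test framework
          files: ['src/**/*.js', 'test/**/*.spec.js'],  // what to load
          browsers: ['PhantomJS'],                      // headless browser
          singleRun: true                               // run once and exit
        });
      };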

~~~
xirdstl
This is what we use also: karma + jasmine + phantomjs. Admittedly, when we
first started, I really did not understand what each of those individual
pieces was.

------
bsimpson
If tree-shaking is your jam, you'll be especially excited for Webpack 2:

[https://gist.github.com/sokra/27b24881210b56bbaff7](https://gist.github.com/sokra/27b24881210b56bbaff7)

------
mdavidn
Interesting how the industry has come full circle. Static linking and dead
code elimination are problems we solved in the 1970s for compiled languages.
Has the time come to adopt a proper linker for the web?

Google's Closure Compiler has supported dead code elimination [1] since 2009,
but the feature imposes some unpalatable restrictions. It never supported
popular libraries like jQuery. The process is also rather slow, as Closure
Compiler must analyze all code in the bundle at once.

[1]:
[https://developers.google.com/closure/compiler/docs/compilat...](https://developers.google.com/closure/compiler/docs/compilation_levels#advanced_optimizations)

~~~
virmundi
An interesting thing is how that applies to transpiled languages like
ClojureScript. There are a lot of efficiencies gained by using ClojureScript
with Google Closure. Even React is faster with Reagent.

------
draw_down
I thought this was a decent survey of where we're at, but I'm not sure what to
take away. I don't care for "opinionated" as a description of tools or code,
nor its opposite. It doesn't really mean anything, in my... opinion.

------
venomsnake
The modern JS ecosystem has that enterprise Java feel of 2010 to it.

------
dreamdu5t
Web developers need more homotopy type theory, and to curb their NIH
hypermodularization epidemic.

~~~
draw_down
What could this comment mean, I wonder.

~~~
jandrese
I was trying to figure out if it was some Markov chain built from the article.

------
EvanPlaice
Solution: The facade pattern.

Tl;dr: Y'all need better API designers.

JS modules are still bleeding edge, but they're a necessary prerequisite for
putting the facade pattern to work.

Each library should ideally be separated into multiple modules internally.

The main source file imports all submodules by default, making it easy to get
started:

      import 'rxjs';

As a project matures and it comes time to optimize dependencies for
performance, the main import can be swapped for feature-specific imports to
trim the fat:

      import { Observable } from 'rxjs/core';
      import { FlatMap, Debounce } from 'rxjs/operators';

Optionally, an additional facade layer can be included to import specific
classes of submodules:

      import 'rxjs/operators';

Or, alternatively:

      import { OPERATORS } from 'rxjs';

A facade is nothing but a js file that contains a bunch of imports that are
logically mapped to one or more exports.
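
In other words, something like this (a hypothetical file layout, not rxjs's
actual one):

      // operators.js -- a facade: re-export submodules as one logical unit
      export { FlatMap } from './operators/flatmap';
      export { Debounce } from './operators/debounce';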

The 'real' issue is that it wasn't really possible to use this pattern before
the ES6 module standard, because of the divergent implementation-specific
features of the existing module pseudo-standards (i.e. AMD, UMD, CommonJS) and
the over-reliance on bundling required by the expensive overhead of HTTP/1.

JS has a lot of baggage and it's going to be a while before the community as a
whole learns how to design non-sucky APIs.

If you want to see this in action, take a look at
[http://github.com/evanplaice/ng2-resume](http://github.com/evanplaice/ng2-resume).
In it, I use facades extensively at multiple layers to compose smaller
components/directives into larger ones. It maximizes reuse while allowing a
great degree of flexibility.

I still have freedom to break off chunks of the source for reuse elsewhere.
For example, I'm planning to move the models to a separate repo for use on the
server-side. It's trivial to bring those chunks back in via the ES6 module
loader and a package management utility like JSPM/Webpack.

~~~
EvanPlaice
Not sure why this touched a nerve enough to be downvoted into oblivion. If it
was the cheeky tl;dr, I guess I can make a point of avoiding any and all
attempts at sarcasm in my future posts.

As for the rest, I genuinely think it's a useful approach to consider:

1. It relies on future web standards, not tools.

Tools come and go. Web standards never go. If the goal is to avoid technical
debt, then don't write code in a way that will eventually become obsolete.

2. 'Abstractions are the solution to every problem, except the problem of too
many abstractions.'

Except when there's a viable strategy to provide access at multiple layers of
abstraction. If we're finally getting a truly universal module standard, why
not leverage it to improve our library APIs?

It's not like we have to waste the time/energy/effort supporting multiple
module non-standards anymore. The approach I outlined above provides much
better usability with very little effort on the part of the dev implementing
an API.

3. It doesn't rely on approaches that will eventually be obsolete.

Namely bundling. Tree shaking is great if the end goal is to analyze all
dependencies and create an optimized bundle.

Except for the fact that bundling will become an anti-pattern when HTTP/2
reaches widespread adoption.

