
Why I Don't Use React Router - jkkramer
http://jkk.github.io/dependencies-your-problem
======
WhitneyLand
Justin, I think you are conflating a couple of issues here:

    
    
      1). OSS critiques should be more polite
    
      2). People are stupid for not doing due diligence
    

First, I agree criticism should be constructive. Yet there is nothing wrong
with 100 people saying here they don't like React Router and in fact such
feedback is crucial for the authors and the community. Maybe you agree; but
ironically the tone of your post seemed to have a bit of the scorn and derision
that you are asking people to avoid.

Secondly, it's already obvious people are responsible for their own projects.
The motivation for stating the obvious seems to be to emphasize people should
seek to blame themselves before criticizing others. Wrong. These things are
orthogonal - Regardless of one's due diligence, it's perfectly acceptable,
indeed beneficial, to critique OSS, when done in a productive way.

~~~
jkkramer
Yeah maybe. To me it feels like many in the community complain and shirk
responsibility, and some stern words were needed to counterbalance that
position. If the community wants stability in its projects, it needs to learn
what it takes to achieve that.

~~~
jkkramer
Further thoughts: choosing an unstable library and then criticizing it for
being unstable seems silly. There are two sides: library authors need to value
things like careful design, real world testing, and backwards compatibility.
Library consumers need to advocate for the same things, plus learn how to
identify risk (semver isn't going to save you), and take ownership of their
choices.
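To make the "semver isn't going to save you" point concrete, consider a typical caret range (hypothetical snippet): it accepts any release the maintainer labels as non-breaking, so your protection is only as good as the maintainer's judgment.

```json
{
  "dependencies": {
    "react-router": "^2.0.0"
  }
}
```

Here `^2.0.0` matches any 2.x version, so a fresh install can silently pick up an accidentally breaking 2.5.0; pinning an exact version trades that risk for the burden of manual upgrades.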

~~~
lomnakkus
> Further thoughts: choosing an unstable library and then criticizing it for
> being unstable seems silly.

If so many people (apparently) missed the "expect this to be unstable" bit, I
wonder if it's just a question of it not being signaled effectively enough on
Router's home page. Which I can understand, since no developer actually
expects, let's say, their v3 to be completely revamped into a v4. If they knew
ahead of time, they'd presumably just have chosen the v4 design.

I guess it's kind of catch-22. Maybe the right thing here is to explicitly say
"we currently fully _expect_ this to be stable for the foreseeable future, but
cannot predict the future, and _are_ prepared to break everything if a better
design is discovered"?

EDIT: I suppose another way to alleviate the problems would be to pledge
support for the previous version for a period of time... but no developer
working in their spare time really wants to do that. (For very understandable
reasons.)

------
tptee
If you're looking to jump ship and your projects use Redux, you might find
[https://github.com/FormidableLabs/redux-little-router](https://github.com/FormidableLabs/redux-little-router)
to be a nice alternative. RRv4 still hoards URL state within a component,
while Little Router just puts it in the store. This makes deriving most of
your app from URL state a reality.

Check out [https://formidable.com/blog/2016/07/11/let-the-url-do-the-talking-part-1-the-pain-of-react-router-in-redux/](https://formidable.com/blog/2016/07/11/let-the-url-do-the-talking-part-1-the-pain-of-react-router-in-redux/)
for more on how we differentiate from the RR philosophy.
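The "URL state in the store" idea can be sketched generically. This is purely illustrative, not redux-little-router's actual API; the action type and state shape are made up:

```javascript
// Illustrative only: a Redux-style reducer that keeps the current
// location in the store, so any part of the app can derive state from it.
const initialLocation = { pathname: '/', query: {} };

function locationReducer(state = initialLocation, action) {
  switch (action.type) {
    case 'LOCATION_CHANGED': // hypothetical action type
      return { ...state, pathname: action.pathname, query: action.query || {} };
    default:
      return state;
  }
}

// Selectors can then derive UI state straight from the URL:
function selectIsUserPage(state) {
  return state.location.pathname.startsWith('/users/');
}
```

Once location lives in the store, "what should render" becomes an ordinary selector question rather than something a router component owns.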

~~~
rkwz
There's also Navigo, a minimal router -
[https://github.com/krasimir/navigo](https://github.com/krasimir/navigo)

------
brian-armstrong
It's great to see someone express this point. Though it's hard to beat the
convenience of dropping someone else's library into your codebase, each new
dependency adds more security surface area and bloat to your application. I
wish people considered this balance more carefully.

In general I think a little NIH is a good thing. Even if there exists a
library that does what you want, it might also include much more that you
don't need, and perhaps the kernel of what you want fits into a small function
you can write and vet yourself.

~~~
bcherny
One premise of open source is that your own implementation is very likely
buggier, slower, and more poorly specified than the state-of-the-art open
source implementation.

[https://en.wikipedia.org/wiki/Wisdom_of_the_crowd](https://en.wikipedia.org/wiki/Wisdom_of_the_crowd)
[https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect](https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect)

~~~
hinkley
Even if your code is better than the OSS alternative now, are you going to be
able to maintain it at that level, given all of your other responsibilities?

I'm dealing with a bunch of people now who did something like that. At the
time they made these decisions they might have had good reasons, but now
they're doing other stuff and the custom things they wrote are a huge
liability.

Don't write it unless you intend to own it.

~~~
Chris_Newton
_Even if your code is better than the OSS alternative now, are you going to be
able to maintain it at that level, given all of your other responsibilities?_

If someone was able to produce better code in some way before, despite any OSS
alternatives available at the time, why would anyone assume they could not
also maintain and develop that code more effectively than the same OSS
community and projects in the future? That makes no sense.

A lot of developers seem to make dubious assumptions today about the quality
of something they could import compared to the quality of what they could
build in-house. There’s little reason or evidence to support a lot of those
assumptions, but people continue to make them regardless, because hype and
inexperience are things.

If you’re talking about a huge OSS project with many contributors, quite a few
experts involved, well-established infrastructure and funding, and so on, then
sure, it’s a tough thing to beat. I couldn’t set up and maintain a new
operating system with the same capabilities as something like Debian or
FreeBSD, and neither could any other small team.

But most OSS isn’t like that. Lots of OSS projects have only a single main
developer, or maybe a small team of contributors, and those people may or may
not be as skilled, experienced, dedicated or simply available as your in-house
team. Lots of OSS projects effectively get abandoned, sometimes even well-
known ones with lots of contributors, major commercial backing and a large
user base. Lots of OSS projects are highly unstable, and if you depend on one
that needs constant updates to keep it working, that’s an overhead of its own.
There is precious little evidence that OSS quality is better _in general_ than
something a good team could have built in-house.

And even if none of those things were true, your own in-house development
would still be focussed on your specific needs and priorities, instead of
trying to be a generic tool with potentially a lot of functionality (and risk)
you don’t need, and potentially being steered in a future development
direction that doesn’t meet your needs as effectively.

_Don't write it unless you intend to own it._

The flip side of that is that if you do want to own it, writing it in-house
may well be a better option.

Obviously there has to be a balance, and reinventing the wheel (or a sports
car) for every project isn’t necessarily a good use of time and resources.
Bringing in a good external tool or library that solves a problem for your
project effectively can be a huge win, particularly if it’s in an area that
isn’t a core part of your own project.

However, I believe the current culture in some parts of our industry is
_crazily_ biased towards bringing in external libraries to do every little
thing. That is a very dangerous trend that we must challenge, because we’re
writing an awful lot of awful software as a result.

~~~
true_religion
True, but JavaScript libraries tend to be pretty small and deal with well-
known concepts. Many experienced devs have seen ten routers by the time they
come to React Router, and the entirety of the source code can be read in a
single day.

If you both (a) have experience and (b) have read and understood the entirety
of a library, then you are in the best position to claim that you can do it
better in house.

The prevalence of libraries doing the same thing in JavaScript shows that a
lot of people have different ideas, and due to culture decide to open source
them instead of keeping them in-house.

Other communities do the reverse. There are probably a million home-baked
Java frameworks that will never see the light of day, because people in Java
land don't think their specialized in-house MVC framework is so extraordinary
that it needs to be released.

~~~
douche
One of the weaknesses of Javascript is its absolutely terrible standard
library. In comparison to languages with solid, comprehensive, "batteries-
included" standard libraries, you spend a lot of time re-implementing basic
functionality. Or you import jQuery, or LoDash, or underscore, or use pieces
of Ember or Angular, or React, etc, etc. Or some bastard abomination of all of
those.
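As a concrete illustration of that gap: a helper as basic as groupBy, stock in many standard libraries, is something JavaScript projects end up either re-implementing or importing lodash for. A minimal sketch:

```javascript
// Group array items by the key a function derives from each item.
function groupBy(items, keyFn) {
  const groups = {};
  for (const item of items) {
    const key = keyFn(item);
    // Create the bucket on first sight of the key, then append.
    (groups[key] = groups[key] || []).push(item);
  }
  return groups;
}
```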

~~~
bcherny
I'd argue that's a strength. Note how Scala is modularizing its standard lib
in Dotty, and TypeScript is doing the same for stdlib typings.

Standard libs are great, but they need to be modular. If they're modular then
they are versioned separately from the language, and at that point there's no
difference between a well specified and maintained lib (eg lodash) and a
stdlib.

------
SwellJoe
I recently started working seriously with node.js (I've tinkered over the
years since it was launched, and we provide some support for it in our
products, but never actually built anything with it). I went looking for a
library to deal with logins, authentication, password resets, etc. Normal
stuff that most web frameworks have some solutions for.

I found a package on npm that sounded like it did everything I wanted (plus a
few extra things, but I figured I could ignore those). It took longer than I
expected to install...so, I did a little digging. It had installed over 53,000
files, and the resulting directory was 110 MB in size!

I was absolutely flabbergasted. I couldn't believe installing one package, for
something seemingly simple, could balloon up that large. I won't name names,
as I did a little more poking around, and realized that _most_ npm
installations pull in thousands of files via automatic dependency resolution,
though this one was a particularly egregious example. I've gotten to where I
only install stuff via npm when I'm on a free connection; I normally work on
mobile broadband, which is very expensive (and added up to almost $300/month
even before I started playing with npm).

Now, to be fair, it was pulling in a web framework...maybe Express or Hapi, I
don't remember which, and all of _its_ dependencies, so it was actually a lot
more than just the login module. The kind-of-annoying bit was that I already
had a global installation of both of those frameworks from following
tutorials, but it still seemed to insist on pulling in its own preferred
versions of stuff and putting them into the project directory.

I come from the Perl world, where if you don't spend at least half your time
looking for and evaluating libraries _before_ you start writing code, you're
not being very productive. I'm, frankly, overwhelmed by how big and unfiltered
the npm ecosystem is. I've found myself relieved to start tinkering with more
"all in one" libraries and frameworks, because I don't have the time or
knowledge to evaluate libs on my own. I ordinarily prefer a more a la carte
approach, where you just pull in what you need, and so big libraries and
frameworks don't fit that. But, I can't make sense out of the ecosystem
without some guidance. There are over 70,000 npm packages! Curation really has
turned out to be one of the big problems in computer science.

~~~
pyre
> I've gotten to where I only install stuff via npm when I'm on a free
> connection; I normally work on mobile broadband, which is very expensive

You could always use this:

[https://www.npmjs.com/package/npm-proxy-cache](https://www.npmjs.com/package/npm-proxy-cache)

It caches the package listings _and_ the packages that you download. It will
act as a pass-through with a limited TTL on the cache, but there is an option
to fall back to the cache if you can't connect to upstream.

Granted, you have to have already installed something for it to work as an
offline cache.

Also, part of the problem with _all_ those files is that npm allows packages
to install pinned dependency versions. If package-a requires lodash 2.x and
package-b requires lodash 3.x, then _both_ will be installed within the
respective package's directory. For example, let's dive into the node_modules/
in one of my projects.

    
    
      $ ls node_modules/**/lodash.js
      node_modules/cordova-lib/node_modules/lodash/chain/lodash.js
      node_modules/findup-sync/node_modules/lodash/dist/lodash.js
      node_modules/findup-sync/node_modules/lodash/lodash.js
      node_modules/globule/node_modules/lodash/dist/lodash.js
      node_modules/grunt-contrib-less/node_modules/lodash/dist/lodash.js
      node_modules/grunt-contrib-less/node_modules/lodash/lodash.js
      node_modules/grunt-contrib-watch/node_modules/lodash/dist/lodash.js
      node_modules/grunt-contrib-watch/node_modules/lodash/lodash.js
      node_modules/grunt-curl/node_modules/lodash/dist/lodash.js
      node_modules/grunt-curl/node_modules/lodash/lodash.js
      node_modules/grunt-legacy-log-utils/node_modules/lodash/dist/lodash.js
      node_modules/grunt-legacy-log-utils/node_modules/lodash/lodash.js
      node_modules/grunt-legacy-log/node_modules/lodash/dist/lodash.js
      node_modules/grunt-legacy-log/node_modules/lodash/lodash.js
      node_modules/grunt-legacy-util/node_modules/lodash/lodash.js
      node_modules/grunt-ng-constant/node_modules/lodash/dist/lodash.js
      node_modules/grunt-ng-constant/node_modules/lodash/lodash.js
      node_modules/grunt-protractor-runner/node_modules/lodash/lodash.js
      node_modules/grunt/node_modules/lodash/lodash.js
      node_modules/jshint/node_modules/lodash/chain/lodash.js
      node_modules/lodash/chain/lodash.js
      node_modules/phantomjs-prebuilt/node_modules/lodash/lodash.js
      node_modules/preprocess/node_modules/lodash/lodash.js
      node_modules/protractor/node_modules/lodash/lodash.js
    

That's 24 copies of lodash.js installed that could _all_ be a unique version
of lodash used only by said module.

~~~
vmasto
You're being unfair. You're obviously using npm 2.x; we moved to npm 3 a long
time ago, which flattens the dependency tree so this issue is avoided.

~~~
ksherlock
It's better, but it's still an issue when multiple versions of a module are
needed.

------
ianamartin
I really need to get my blog up and running so that I can write about things
like this.

My view is that dependencies are someone else's solution to my problem of
technical debt.

I'd be a straight-up liar if I claimed to be proud of every line of code I've
written, either for an employer or for myself. Sometimes you just have to
hammer a square peg into a round hole and be done with it because deadlines.
Or lazy. Or boredom. Or whatever this project is going nowhere anyway, so wtf?
Hack the shit out of it.

I always tell myself I'm going to get back to that later and clean it up, but
I often don't because, well, moar deadlines.

Dependency updates--particularly breaking ones--are things I love to hear
about. Dependency updates give me an excuse, both professionally and for my
side projects, to revisit stuff that I knew was janky and crappy and broke
when I wrote it, but have since come to accept.

Security updates are absolute gold in this game of not wanting to suck but
still having to meet deadlines.

"Sorry boss, but there's a vulnerability in lib x. We have to update. But it's
breaking. So now we have to refactor. Two weeks, at least. Maybe more."

I just got rid of a crap-ton of bad code while I was updating for that
dependency. Oops.

~~~
rexpop
I didn't get my blog up and running until I a) started keeping a journal on
750words.com and b) started writing in a spiral-bound notebook every spare
half-hour.

Those two habits and [https://blot.im](https://blot.im) made blogging nigh-
effortless. Like dandruff, I get it for free.

~~~
ianamartin
Oh my problem isn't that at all. I have no problem writing endless amounts of
crap that no one will ever read. It's just that I really care about my
writing, and I want it to have a perfect home. So I'm constantly writing and
rewriting blog engines.

In this case, the perfect is the enemy of no one. A blog engine is the one
side-project that I don't just hack. It's my one and only place for writing
pure, elegant code.

And I won't let myself write at length again on someone else's platform until
I get this exactly right.

Everyone wins: I don't clutter the internet with inane crap; you aren't
tempted to read it.

If I ever decide to start publishing my writing, the best part of it will be
the code that presents it.

~~~
dragon_king
I was (am?) in the same boat as you. I don't like using other platforms
because they don't do exactly what I want. I go through the same decision
points while trying to maintain a todo list. I guess that is the drawback of
being a developer: you know you can do a better (for your use case) job of
developing a great application, and eventually writing a blog or maintaining a
todo list becomes an exercise in yak shaving
([http://www.dailymotion.com/video/x2est2c_yak-shaving_fun](http://www.dailymotion.com/video/x2est2c_yak-shaving_fun)).

I am trying to get over this habit. Any advice/suggestions would be
appreciated.

------
n0us
As the person who wrote the top comment in the thread that you link to, I will
admit: You are right for the most part. The choice to use an unproven library
is my fault as it was my choice and it is also my responsibility to deal with
the consequences and costs of the instability. This is why I will be removing
React Router from my project as soon as I can find a stable and suitable
replacement.

Recognizing the responsibility for your own actions does not preclude you from
being frustrated and at times outright angry at the outcomes of those actions.
In this case I (wrongly) assumed that after a 1.0 release the library would
be relatively stable, and I have been repeatedly duped into believing that
this time the major version release would be the stable one. I don't really
see anywhere that people were issuing an outcry for v4, so I am still confused
as to why it was so urgent to release it. It wasn't perfect, but it was fine.
Unfortunately, in my frustration and confusion, I chose to write a very
strongly worded comment that apparently some people did not like.

> To be clear: developers that release libraries and then iterate the API in
> public do not deserve personal scorn for doing so

I have never gotten the impression that react-router was someone's personal
library, rather it is a community project that is maintained by notable
members of the javascript community and it is my belief that delivering half-
baked stuff to the people who counted on them, and who they led to believe
could count on them, is not a fair thing to do. I don't believe it is
unreasonable to be frustrated with fickle leadership from people who stepped
up to lead the project. If they can't deal with the criticism or don't have
the time/effort/inclination/whatever to lead in a way that is agreeable to
most of the community, then perhaps someone else could lead. When a project
has thousands of stars on GitHub, a rapid succession of breaking changes
throws all of those people for a loop.

If you do in fact view the repo as your personal project and want to make huge
changes all the time, like every 6 months, I don't think that the version with
thousands of stars is really the place to do that. Why not just do it on your
own, where it isn't going to affect so many people?

Note: neither this nor my original comment should be taken as a personal
attack on the maintainers, although I am aware that both are probably
overstated. I'm sure they are all extremely talented developers and kind
people.

~~~
jkkramer
Thanks for the thoughtful reply! I don't have a problem with strong opinions
and harsh words when warranted, with the understanding that we're all
learning.

What I wrote wasn't entirely in response to the HN thread. Similar thoughts
about the JS community have been brewing for a while and the discussion
prompted me to speak my mind.

------
jhgg
We use React Router on our project. Looks like 4.0 does break pretty much
everything.

My thoughts on the upgrade are pretty much:

- Do we actually need to upgrade? Old react router works fine.

- If we do need to upgrade, how long will it take? If it's just a few hours
to shift some code around, maybe it's not that bad. Especially if it's moving
to a cleaner, more "react" API.

At the end of the day, we could write our own routing class, or another. We'll
probably stick with react-router though - but save the upgrade for a day where
we've got nothing else to do, or if an engineer has some free time.

~~~
cle
> but save the upgrade for a day where we've got nothing else to do, or if an
> engineer has some free time.

Or when you stumble on a show-stopping bug right before a deadline, that is
only fixed in supported newer versions.

~~~
pavel_lishin
Is that more or less likely with an older version of something like React
Router, or a fairly new release?

------
lstamour
I'm reminded of [https://medium.freecodecamp.com/you-might-not-need-react-
rou...](https://medium.freecodecamp.com/you-might-not-need-react-
router-38673620f3d#.6i54dhtjb) which discusses how to write your own such
functionality from scratch. I often find that before adopting third party code
you really have to consider it and perhaps even leave it out and feel the pain
before accepting that it's necessary and adopting it. And even once you do,
you should be open to maintaining or replacing it. There's a cost to roll-
your-own though, you might need to write additional documentation. For routing
though, it's usually one function and easy enough to grok on its own.
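The "one function" claim is roughly true. Here is a minimal matcher sketch; the names and pattern syntax are my own, not taken from the linked article:

```javascript
// Match a path against patterns like '/users/:id'. Returns the first
// matching route's handler plus extracted params, or null.
function matchRoute(routes, path) {
  for (const [pattern, handler] of routes) {
    const names = [];
    const source = '^' + pattern.replace(/:([^/]+)/g, (_, name) => {
      names.push(name);
      return '([^/]+)'; // capture one path segment per parameter
    }) + '$';
    const m = path.match(new RegExp(source));
    if (m) {
      const params = {};
      names.forEach((n, i) => { params[n] = m[i + 1]; });
      return { handler, params };
    }
  }
  return null;
}
```

Wire something like this to the History API's popstate event (or hashchange) and you have the core of a client-side router; what libraries add on top is mostly nesting, transitions, and rendering integration.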

------
linkmotif
Don't get it. It's FOSS. npm install the version you want. A new version
released you don't like? Keep using the old version. Don't like the old
version? Write your own. Fork it. It's free and awesome global collaboration!

------
diggan
I might be missing something (after reading the article twice) but the author
says nothing about why he doesn't use React Router...

The only mention I can find is "But the API smelled funny to me, and had not
settled, so I continued to wait" that hints that the API is changing and looks
weird. But other than that, there is no constructive criticism here.

I'm all for being able to write freely about software we find bad, but
without any concrete examples or even pointers to what is bad, I don't think
this article has any merit (except that it's good to make sure you understand
the dependencies you have).

~~~
jkkramer
It looked like a risky dependency, simple as that. Despite the HN title, the
point of the article was not to talk about React Router.

EDIT: Additionally, I'm not saying React Router is bad per se, just that it's
not fully baked. I'm glad to see people working on the problem. I look forward
to leveraging the fruits of their labor in the future (in fact, I already do -
I use the history library on which RR is built).

~~~
diggan
I still feel that you're not giving reasons for _why_ it's a risky dependency
or not fully baked. And the title should probably be renamed if it doesn't
convey the article body...

Not that I am defending or even using React Router myself, I'm just saying
that it's usually better for everyone if the feedback is better explained than
just "It's bad" or "It's risky to use".

------
jameslk
If anyone is wondering what leads to a Not Invented Here mindset, this
probably would be a good example of it. Sure, it's not the fault of the open
source library author that you depended on their work. However, when OSS
fails to deliver, or we criticize others for counting on it, we only
discourage reliance on OSS and push people away from the community. If we care
about the continued adoption and growth of OSS, then we'll need to hold
ourselves to higher standards as well.

------
cle
This is a very important dimension to consider when making technical
decisions. I've never been able to find an easy way to assess this risk,
beyond hearsay and word-of-mouth.

Is anyone aware of an easier way to assess these longer-term risks of a piece
of technology? Things like API stability, community strength and
responsiveness, backwards compatibility, upgrade paths, etc.

~~~
biocomputation
A large margin of safety / margin for error / margin for refactoring is the
correct way to deal with uncertainty of any kind.

Humans are very poor at reasoning about low probability events that have
serious consequences, i.e. large earthquakes with very long return periods.

For this reason, it's difficult to assess the 'risk' of external code because
risk management processes don't work very well when high uncertainty is
involved.

If you decide to use external code that has a high degree of uncertainty, just
make sure to have a generous allowance of time/money/effort for future
problems.

------
wesleytodd
Yep, this is a thing I push with my team all the time. Always vet your deps.
In the long run it is WAY worth it. In the short run the cost is rather
minimal, unless you have devs who cannot actually deliver on the needs the
dependency covers.

~~~
dustingetz
> unless you have devs that cannot actually deliver on the needs the
> dependency covers.

You say this off-handedly like it's not a big deal. No single person or
company understands all the dependencies from the app all the way down to the
electricity. That's the real reason we use dependencies: for things that
aren't core to our business, that we don't want to have to understand.

~~~
wesleytodd
I guess I just assumed everyone understands that no team will have the
ability to cover all your needs. So you actually need to judge when your team
can't deliver, and then pick the dependency to pull in.

------
dustingetz
I just wanted to say that the Clojure ecosystem is littered with the
carcasses of half-baked libraries, including some of my own. It's not a JS
problem; it's the beauty of open source and low-friction sharing.

~~~
jkkramer
You're not wrong. But it feels worse in JS. In both cases lib authors would do
well to set proper expectations - if your lib is half baked, be up front about
it.

------
TheAceOfHearts
I generally agree. I've been using RR for a while, and I'm not the biggest fan
of their APIs. Nothing wrong with it, it just doesn't do what I want it to do;
or rather... It does too much stuff, most of which I don't use. Just my
opinion.

With that being said, they claim you'll be able to migrate slowly to v4, so
hopefully it's not so bad. If they just broke backwards compatibility without
a migration strategy, I'd certainly feel much more frustrated. I went with RR
largely because it's widely used. I also don't think it's an unreasonable
expectation for a widely used library to provide a migration strategy. Not
that they're under any obligation to do so, of course.

My personal bar for adding dependencies is asking myself: would I feel
comfortable debugging and fixing this? I recognize that it's not a free ride.

------
snippet22
That's why they call it Angular 2. It's finally out and ready to be used. It
works really well with routers. You don't get the constant updating, but you
do get codeless directives that are really easy to understand.

------
oceanswave
Every ecosystem is full of "half-baked" libraries. JS has more because the
ecosystem is bigger and the barrier to entry is very low. It's the nature of
open source; it's also in the nature of using open source that you have to
evaluate your dependencies.

