I loved Bundler's deterministic builds but chafed against the Ruby limitation of only having a single version of a dependency at once. npm solved this problem elegantly, but still struggles with non-determinism. I had resigned myself to thinking that maybe these were just fundamental tradeoffs in package management that could never be resolved.
Then I had the opportunity to use Cargo, the package manager for Rust. It synthesized what, in my mind, are the best features of npm and Bundler into a single package manager, with terrific speed to boot.
This also highlights a shrewd move on the part of npm: a clear decoupling between the npm registry and client, with a well-defined protocol between the two. The strength of Node is in the staggering size of its ecosystem; how those bits end up on disk is an implementation detail. This smart separation allows for this kind of experimentation on the client side, without causing the ecosystem fragmentation that happens when a new package manager requires a new registry.
I'm also happy to see that a significant amount of work has already gone into the governance model. Despite the largest contributors being Facebook employees, it looks like they've really outdone themselves making sure this is a community-run project. A BSD license, no PATENTS file, and an Ember/Rust RFC process; this has all the hallmarks of a community open source project. That's critical for open infrastructure projects like this, and they've nailed it.
I'm very much looking forward to using Yarn in my own projects because it looks like it solves a lot of real problems I encounter every day. Thanks for all the hard work!
And a shrewd move by FB: to not announce their new registry on the same day.
Time will tell whether they only want to proxy NPM or will allow direct pushing to their own registry. If they do, the JS ecosystem might see another big shift.
So I don't think interests will ever align to create a new registry. Nobody wants to do that. That would have serious consequences for the JS community and would take years to recover from, in my opinion.
For example, if I refer to 'left-pad', it would default to 'npmjs.org/left-pad'. If the author goes rogue, I think it would be great to enable people to publish and consume 'thirdparty.com/left-pad'.
Disclosure: I'm a FB employee with no knowledge of our plans in this regard
Doing this at the moment does nothing for the community other than causing a lot of pain points. E.g., now npm authors will have to publish to both registries so developers don't have to dig to find where a package was published, and then they also have to hope that someone else didn't register the module name in one of the registries.
There is just too much splintering if Facebook decides to become a competing registry rather than just using npm's registry and building on top of it.
Only to resurface again: http://status.npmjs.org/incidents/dw8cr1lwxkcr
And it will, no doubt, resurface again and again and again
Possibly not soon, at least from a technical perspective, but I could definitely see a PR fiasco (security, bias, etc.) causing a loss of confidence in its stewardship.
And I don't think it'd be that big of a disruption if it were to happen; for the hypothetical case of Facebook+Yarn, they're already proxying to NPM, so they could easily continue to do so while also accepting packages that are Yarn-only.
- strictly defined version update time intervals, e.g. you can't update your package more than once a week (or have to take some special actions for critical updates, e.g. contact support)
- "delayed" publishing, e.g. when you submit your package it will only be published in 24 hours, until then you can re-submit updates, etc.
- similar to the above, but your package won't be published until it has been tagged on GitHub (or elsewhere) for a certain amount of time
- published packages cannot be removed, but you can disassociate them from your account by marking them as "not maintained" and possibly assign new maintainers for them
- maybe introduce some way for developers to mark new versions as "backwards incompatible" if they do break backwards compatibility
I think there is definitely a "market" for some stricter node package repo.
According to http://blog.npmjs.org/post/151660845210/hello-yarn, it seems it doesn't work with private packages yet, which may or may not be an issue for your project. But it seems this is a complete CLI replacement for NPM.
I personally don't know much about either tool (don't do a ton of JS), but it's possible that fixing the existing client without either breaking backward compatibility or making it too complicated (multiple modes of operation) was too difficult or not worth it.
Also, I'm having a really hard time understanding the complaint about a new client. The value is in the repository of reusable code, not the client. That you can use different clients with the same repository is a feature, not a bug.
These issues have been raised a few times, along with the shrinkwrap/reproducibility stuff, and it didn't seem like it was a big priority for the core team. Understandably, I guess; they seem more focused on the enterprise/private repo side of things and just keeping things running on the client side.
npmjs.com = package repo which Yarn can use.
at least as far as I can tell
Also, npm isn't deterministic, and it got even worse with v3. Sometimes you get a flat list of libs; if a lib is used in multiple versions, the first usage gets installed flat and the rest in the directories of their parent libs, etc.
The npm-cli is basically a mess :\
You can do the same kind of version-range tricks in typical Java builds (Maven, for example), but most people hardcode the versions to keep builds as deterministic as possible.
For some reason, the JS community seems to prefer just trusting that new versions won't break anything. It's either very brave of them, really (or maybe just foolish).
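To make that concrete, the whole difference is one character in package.json (versions here are made up for illustration; a caret range floats within a major version, a bare version is an exact pin):

    {
      "dependencies": {
        "left-pad": "^1.1.0",
        "moment": "2.15.1"
      }
    }

Two installs of `^1.1.0` a week apart can resolve to different versions; `2.15.1` resolves the same way every time.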
Let's not pretend that we aren't all blindly tossing in random libs of dubious quality and origin we find on github into our package.json and hoping for the best anyway. My company talks a mean talk about "best practices", but, my god, the dependencies we use are a true horror show.
Say that I use yarn to depend upon some module X, specifying a fixed version number like the good boy scout that I am. Module X resides on npmjs and in turn depends upon the bleeding edge version of module Y. And then one day module Y goes all Galaxy Note and bursts into flames.
Can yarn shield my app from that?
You can do (and are supposed to do) the same with npm's own shrinkwrap, but people claim that it doesn't work as intended.
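For reference, the intended shrinkwrap workflow is just this (real npm commands; whether the result actually reproduces reliably is the disputed part):

    npm install       # resolve the ranges and install a concrete tree
    npm shrinkwrap    # write npm-shrinkwrap.json, pinning that exact tree
    # commit npm-shrinkwrap.json; later `npm install` runs replay the pinned tree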
I don't think Yarn solves any of these problems, tbh. It seems like what we really need is a package manager that tests the API of each package update to make sure nothing in it has broken expectations, in accordance with semver.
It's not that npm devs were naive enough to believe that unpinned deps would be safe for reproducible builds.
However, I've heard several people allude over the years to 'npm shrinkwrap' being buggy and not fully reproducible (though I've never experienced any problems personally). This is the aspect yarn claims to address, along with a supposedly more succinct lockfile format.
JS dev has been a minefield like that; the entire ecosystem reminds me of C++, except the lower barrier to entry means a lot of noise from newbies on all channels (e.g. even popular packages can be utter crap, so you can't use popularity as a meaningful metric).
This obviously doesn't fix anything, and I think the points in this discussion stand, but I've never understood why the defaults are not more conservative in this regard.
What we do currently is we lock everything to an explicit version - even libraries.
At least it's possible to get deterministic builds if you are willing to do a bit of work, carefully and manually updating all of your dependencies at once.
This means your library tests will not be deterministic.
Using the Cargo.lock file for libraries does solve this, but then every binary package you build that references your library will have to be specifically tied to the versions in the library.
We do this internally because it's the only way to provide deterministic builds over time.
Over time I suspect it will get harder and harder to keep this rigid web of dependencies working.
One thing we might try is to enforce the rule 'no library will contain any tests'. At first glance this kinda makes my skin crawl, but maybe if we could find a way where every project that used a library could somehow share the tests for that library it could actually work.
git submodules might actually be able to provide these tests for each binary package. If only git submodules would stop corrupting our git repos... :-(
Anyways you can put a lockfile with your library, and it shouldn't affect downstream.
And if you want to lock versions for your entire dependency tree, npm shrinkwrap is what you're looking for (It's essentially the same as lockfiles in other development package managers). Though for security reasons I prefer to keep things locked to a major version only (e.g. "^2.0.0"). Shrinkwrapping is useful in this instance too if you need to have predictable builds (and installs as it'll use the shrinkwrap when installing dependencies too if it's a library rather than the main app) but want to ensure your dependencies stay up to date.
It's not perfect by any measure, but there are ways to make it work the way you want.
From a build-tool perspective (we use npm scripts for basically everything [and some webpack]), I'm also not missing anything in particular.
Looking at other comparable solutions (from other languages) I'd say npm does a pretty good job.
but it's still far from perfect.
The problem isn't really fundamental. Bundler makes almost all the right choices already. Its major disadvantage is that it only works for Ruby.
Yarn does offer a `--flat` option that enforces the highlander rule, and I'm hopeful that the existence of this option will nudge the ecosystem towards an appreciation for fewer semver-major bumps and more ecosystem-wide effort to enable whole apps to work with `--flat` mode.
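For anyone who wants to try it, the mode in question is just a flag (behaviour summarized from the yarn docs):

    # resolve every package to a single version across the whole tree;
    # when two ranges can't agree, yarn prompts you to pick one
    yarn install --flat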
Plz send halp!
So go crazy and install every version of lodash! Nothing will break.
There was actually a high profile bug when someone ended up with two versions of react included in the same page: https://github.com/facebook/react/issues/1939.
I'm not a React expert, but I don't see why that situation would only affect multiple React copies with different versions, rather than any two copies.
It's placing your own release cycle at the whims of your dependencies' release cycles. In the corporate world that would not be a viable solution.
Edit: I haven't dug into this, but it might be possible to use Typescript namespaces to distinguish a and b's versions of x. https://www.typescriptlang.org/docs/handbook/declaration-fil...
If a depends on x v1 and exports this dependency then your application also imports x v1.
If b depends on x v2 and exports this dependency too that means your application is transitively importing both x v1 and x v2 which is not possible.
If a or b only use instances of x internally, and do not accept instances of x as parameters or return instances of x, they don't have to export their dependency on x.
If either a or b does not export its dependency on x, then there is no problem. Your application will depend only on x v1 or x v2 directly.
What it _won't_ let you do is pass a v1.0 type to something that requires a v2.0 type. This will result in a compilation error.
There's been some discussion around increasing Cargo's capabilities in this regard (being able to say that a dependency is purely internal vs. not); we'll see.
If so, different major versions of the same dep should be considered different libraries, for the sake of flattening. Consider lodash for example.
Personally I'm not really sure I like it. If I specify an exact revision of something, chances are I really do mean to install that exact revision. I don't see why I need an extra flag for that.
* LA: handles linear algebra and defines a matrix object.
* A: reads in a csv file and generates a matrix object using LA
* B: takes in a matrix object from LA, and does some operations on it
In this case, if B depends on version 5 of LA and the new version of A depends on version 6 of LA, then there's going to be a problem passing an object that A generated with version 6 to B, which depends on version 5.
* Figure out early on (before 1.0) what your base interface will be.
For example, for a promise library, that would be `then` as specified by Promises/A+
* Check if the argument is an instance of the exact same version.
This works well enough if you use `instanceof`, since classes defined in a different copy of the module will have their own class value - a different unique object.
* If instanceof returns true, use the fast path code (no conversion)
* Otherwise, perform a conversion (e.g. "thenable" assimilation) that only relies on the base interface
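A minimal sketch of that pattern, assuming a hypothetical MyPromise library with the usual constructor and a static resolve():

    // Fast path when the value comes from this exact copy of the library;
    // otherwise rely only on the Promises/A+ base interface (`then`).
    function assimilate(value) {
      if (value instanceof MyPromise) {
        return value; // same copy of the module: internals are safe to use
      }
      if (value && typeof value.then === 'function') {
        // a different copy or version: assimilate via the base interface
        return new MyPromise(function (resolve, reject) {
          value.then(resolve, reject);
        });
      }
      return MyPromise.resolve(value); // plain value
    }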
However, since A exposes outside methods that take a matrix as an argument, it should not assume anything beyond the core interface and should use LA-6's cast() to convert the matrix.
The problem is partially alleviated when using TypeScript. In that case the inferred type for A demands a structure containing the `normalize` method, which TypeScript will report as incompatible with the passed LA-5 matrix (at compile time). That makes it clearer that `cast` would need to be used.
Another solution is to provide version overrides and make B depend on version 6.
However if there are differences in the matrix class between different versions of the library then you're forced to write a compatibility layer in any case.
That alone has me super excited about Yarn, before even getting into the performance, security, and 'no invisible dependency' wins.
This is due to the way that Ruby includes code, where it's available globally, vs Node, where code is scoped to the including module. I'm not sure how Ruby could support multiple versions without changes to the language and/or RubyGems.
1. In the browser we have less control over the environment the code runs in, requiring tooling to achieve a stable, normalised environment which we can improve at our own pace, rather than the rate at which browsers are updated.
2. JS has become very widely used very quickly, which means that there have been lots of hastily built things, which have been replaced by better things (and for every 'better thing' there are many alternatives which fell by the wayside), and a larger pool of people to build them.
3. It's easier than ever to build new tools (or any kind of app, for that matter), because the npm ecosystem makes it trivial to combine existing smaller pieces of code into new combinations.
It does take some getting used to, but when you stop looking for one big tool that can solve 5 semi-related problems, and start looking for 5 little tools that each solve one of the problems, things get much more clear.
And yes, I know that this kind of reliance on dependencies can cause issues, but from my experience, those issues don't seem to bite that often, and many of them can be resolved with even more tooling.
Don't forget about mobile devices, with iOS having such a difficult-to-write-anything-new version of mobile Safari.
I'm writing a new app for myself, so I don't have to worry about any backwards compatibility issues, except that it needs to work on my iPhone, which uses mobile Safari webviews. That means I have to either use shims or restrict myself to ES5 and non-HTML5 features (or work around their HTML5 limitations).
Example: you cannot cache audio files to play using <audio> tags in mobile Safari. Safari just refetches them from your server (and also has to recreate the audio object, which causes a lag in the audio being played). Technically you _can_ cache if you convert all your audio files to base64 and use the Web Audio API. I'm probably going to have to go this route because I'd like the app to be usable offline.
So rather than spend time on my app, I now have to write a build step for converting audio to base64, then test that it works in iOS and desktop browsers (at least Chrome and Firefox). It all just keeps adding up.
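For the curious, the workaround looks roughly like this (a sketch; `clipBase64` stands in for whatever the build step emits):

    // decode a base64-encoded clip and play it via the Web Audio API,
    // so Safari never refetches the file
    var ctx = new (window.AudioContext || window.webkitAudioContext)();

    function base64ToArrayBuffer(b64) {
      var binary = atob(b64);
      var bytes = new Uint8Array(binary.length);
      for (var i = 0; i < binary.length; i++) bytes[i] = binary.charCodeAt(i);
      return bytes.buffer;
    }

    ctx.decodeAudioData(base64ToArrayBuffer(clipBase64), function (buffer) {
      var source = ctx.createBufferSource();
      source.buffer = buffer;
      source.connect(ctx.destination);
      source.start(0);
    });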
We're just watching a sped up version of a language maturing, and it's painful as individual developers trying to keep up, but I don't think it's as negative as HN often makes it.
> Are there aspects of the language or runtime that reward multiple layers of configuration management?
A significant fraction of the node community likes to split their project into really, really small packages and treat them as if they're independent even if they come from the same source repo and tend to get used together. As an example, the popular collection processing library Lodash publishes a package for each method and people actually do depend on individual methods.
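For instance (lodash.debounce is a real standalone package; the handler here is made up):

    // npm install lodash.debounce -- one method, one package
    var debounce = require('lodash.debounce');

    function onResize() { /* made-up handler */ }
    window.addEventListener('resize', debounce(onResize, 200));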
There are two major rationales for this. The first is that JS is a dynamic language and in dynamic languages, smaller chunks of code are easier to reason about than larger ones. The second is that the JS community as a whole cares more about artifact size than pretty much any other community. Having smaller packages allows you to produce a smaller payload without having to rely on techniques like tree shaking.
I find micro packages maddening but people give me their code for free and it mostly works so I don't complain about it that much.
Not really. The npm client is just flaky, very slow, buggy, has some major design flaws in its v2 incarnation, and has a completely different set of major design flaws in its v3 incarnation.
The core architecture works fine for small packages.
> Seriously up to now it was impossible to have a build server isolated from the internet if you didn't want to check-in all the dependencies
Of course not, although I admit the linked article implied it was if you didn't read closely. There are a number of solutions, including running a private registry, or a copy of the public registry, or a local caching proxy of the public registry, or a custom npm client that hits a local cache, etc. Some of these solutions work quite well, and in fact yarn itself is just a re-implementation of some existing solutions.
> I really can't understand how people can even think to use a dependency manager system that doesn't satisfy the essential requirement of having your CI server sandboxed.
As the article noted, Facebook was sandboxing their CI server and using NPM; they just didn't like the existing solutions. (Nor should they; they were a bit naff.) But that doesn't mean (nor is anyone claiming) that there were no existing solutions. Yarn looks great, but it's an extremely incremental change.
All of this is why people are excited about Yarn, but to me it's a band-aid on several architectural mistakes.
It really isn't. The NPM client is just really, really poorly written, with no parallelization to speak of.
Node libraries are typically several orders of magnitude smaller and more single-purpose than Java libraries. Java libraries like Guava and Spring are typically quite large. A practical example from unit testing: in Java, you might use JUnit+Mockito for your test dependencies. In Node, to get similar functionality, I have the following deps: Mocha, Sinon, Sinon-As-Promised, Chai, Chai-Moment, Chai-Things, Chai-Shallow-Deep-Equal, Chai-Datetime, Sinon-Express-Mock, Supertest.
It's a much newer technology than Java, and there is quite a lot of churn around tooling in general as the way applications are built changes.
Though I get that you are referring to the JS ecosystem (CommonJS, npm) compared to the much more mature Java ecosystem.
Similar with http://mithril.js.org/ over React, Angular and Ember. A single drop-in library with 13 functions in its API. The library itself is a single, understandable source code file. Mithril is pretty much the exact opposite of Angular and Ember.
A lot of front end devs are eschewing frameworks completely. With modern DOM and web APIs, you can do a lot with "just" vanillajs.
I didn't add sinon-as-promised for fun; I added it for in-memory sequelize integration tests. Same for all the other libraries I mentioned.
My point is that by keeping libraries small, you will end up with either more dependencies, or a lot more hand-rolled code. Neither of these things are inherently bad or good, and I'm not defending them. What I am trying to do is explain to the commenter why the node ecosystem seems to have such an emphasis on package management and the tooling around that.
As an example: you _could_ have one SpockJS framework that gives you React + Redux + ReactRedux, but instead each of these libraries is separate so you can use them without each other. This allows you to, say, move from React to Angular without changing the state management of your application.
With everything moving so quickly in JS, this separation turns out to be valuable enough to tolerate a lot of packages and dependencies. It also makes it easier for communities to exist for solutions to specific problems, which at least in theory results in better solutions.
It's not necessarily better, just a different approach.
Like I said, it's not better just different.
There are tons of libraries for C, many of which aim to do the same, only faster/smaller/scalable/whatever. There are many compilers, both open and closed source. And there are documentation tools, syntax checkers, unit testing frameworks and whatnot.
Of course there are also languages like C# where there’s a de-facto standard for almost everything: IDE, compiler, base library, GUI framework(s).
I think choice is great!
There are tons of tools, of course also including jslint, debuggers (e.g. IntelliJ, dunno about others). There are editors and even IDEs. Then there’s the problem of bringing (NPM) modules to the browser using tools like webpack or SystemJS or whatever. There are general purpose build/automation systems like gulp. There are transpilers that “compile” ES6 to ES5. For any given task, you can probably find at least five programs that do it.
And if you look at all the topics, you’ll find that library management makes up only a small fraction of everything.
Yes, thousands of them; npm alone has 1200 results for "linter".
Yes, as libraries. There are multiple linter options that can be installed with a project. Debugging is available, usually through an IDE or browser.
There are various packages that are extremely small (see left-pad and the associated controversy). They often wind up in a slightly larger package, which itself winds up in a slightly larger package, recurring until you finally wind up with one actual package of consequence that you're targeting. For example, Express (a very common Node.js abstraction) has 26 direct dependencies and 41 total dependencies.
A lot of this results from early Node.js' mantra of having small, focused packages. This could potentially be a good thing: instead of having three major dependencies that each have their own way of left-padding, they can all rely on one library, and thus code is efficiently reused. This can be bad, however, when they each need their own version of the dependency - or worse - the dependency is removed from the registry (see left-pad and the associated controversy).
One of the legitimate problems that I've seen is that there are various libraries that are just thin wrappers over functionality that is native to Node.js - but written to be "easier to grok". Thus, these thin abstractions become practical code duplication, out of some mixture of the Node developers' lack of effective verbosity/documentation and application developer laziness. But then they creep higher up the dependency chain because someone at a lower level used them.
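To caricature only slightly, the entire published source of such a package can be a one-liner over something Node already ships (a made-up but representative example):

    // index.js of a hypothetical "is-arr" package -- the whole module
    module.exports = function isArr(value) {
      return Array.isArray(value);
    };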
On one hand it can be quite simple (and rewarding) to write code while minimizing dependencies. On the other hand, abstractions such as Express make it very easy to write code and it feels like you only have "one dependency" until you look under the hood.
Babel, the most popular transpiler, is 335 total dependencies, and it doesn't even do anything out of the box. You need to add babel-preset-latest, for 385 total dependencies, if you want to actually transpile anything.
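For context, the minimal Babel 6 setup being described looks like this (real package names; the src/lib paths are illustrative):

    npm install --save-dev babel-cli babel-preset-latest
    echo '{ "presets": ["latest"] }' > .babelrc
    ./node_modules/.bin/babel src --out-dir lib   # now it actually transpiles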
Want a linter? ESLint is another 128 total dependencies.
For browser work, you can do almost everything you need with some jQuery, handlebars and moment if there are dates. A few other small libs here and there and you've got almost everything covered. Some edge cases may drive you to use some library that is heavily dependent on 32 different sub-libs but it's really not that often.
Server-side Node.js is the issue, not browser compatibility.
Yes, because the standard library provides practically nothing useful in itself.
Add those together, and you get the need for real complex tooling.
With that said, the tooling is nowhere near as complex as it is for languages like C++ or Java. People are just used to it on those platforms, and the tooling is more mature/understood because it already went through countless iterations, 15-20 years ago or more.
Java isn't too complicated either. I've used Gradle and Maven but doing things manually like in C++ can also work.
Crucially, they don't reinvent the wheel every couple of months so a C++ or Java dev can focus on building things instead of relearning how to build things.
C/C++ tooling is a plastic butterknife you get with airplane food.
- open, community governance that will support long-term evolution
- the technical details get a lot right out of the gate (decent performance, predictability, and security)
npm chooses the first version of a package it encounters to install at the top level. Every other version is installed nested. If you have N uses of version A and one use of version B, but version B is installed first, then you get N copies of the package at version A.
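Concretely, with hypothetical packages, the same package.json can yield either of these trees depending on install order:

    # app depends on X@2 directly and on A, which depends on X@1
    node_modules/
      X/              # whichever version npm encountered first gets hoisted here
      A/
        node_modules/
          X/          # the other version ends up nested
    # a different install order flips which version wins the top-level slot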
Absolute and utter hogwash.
I don't even need to get into versioning to make this argument, so I'll exclude it for clarity's sake. Simple scenario:
root -> A
root -> B
root -> C
A -> B
B -> C
npm@3 could not be further from the ethos of "engineering." It's an outright hack, and it quickly betrays its limitations. It could have been far simpler, far more considerate of pre-existing wisdom.
Just tried using npm3 explicitly and I stand corrected.
"$(npm bin)/bin-cmd" is getting really old
Are there any branches of ember-cli implementing yarn?
> unless you've actually shipped a product using this tool
(I don't work at Tilde so I can't tell you if it's been shipping with yarn, but it would shock me if it wasn't.)
I'm sure this will be added in the future, but it is currently a dealbreaker for me - I'm using a single package that's just straight hosted in a private git repo, as the package itself isn't ready to be published yet. I'm sure other people are using private packages for more normal reasons (closed source, pre-production, etc).
If yarn actually made it simpler to refer to a local package during development, I'd be on board with that. (I am developing the dependency, but want to also work on the thing that depends on it, so I'd rather just save locally and refresh npm on the outer project. That's hard to get right - file URLs don't always update; the easiest workflow seems to be to delete the package from node_modules and npm install again, with the package having a git URL rather than a version in package.json.)
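For what it's worth, the closest existing flow is `npm link` (a real npm command; the package and path names here are placeholders), though it shares the staleness problems described above:

    cd ~/src/my-dep && npm link          # register the working copy globally
    cd ~/src/my-app && npm link my-dep   # symlink node_modules/my-dep to it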
I agree local development with many packages can be hard. For Jest we adopted `lerna`.
See https://github.com/facebook/jest and https://github.com/lerna/lerna, which makes cross-package development in a mono-repo a lot of fun. Maybe that solution will work for you?
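A minimal sketch of the lerna flow (real commands; the layout is the default one lerna creates):

    npm install --global lerna
    lerna init        # creates lerna.json and a packages/ directory
    lerna bootstrap   # installs every package's deps, symlinking local ones together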
With npm I've been frustrated a few times: you can fork the repo and point package.json at the git repo, but after running npm install I usually get a bunch of compilation errors.
It would be wonderful to make this as simple as with Bundler in the Ruby ecosystem: point to the forked URL, and running yarn handles the rest.
That's insane. It wouldn't surprise me if yarn became popular, its proxy to npm slowly turned off, and boom: everyone would be migrated and npm would be left with little to no users. Yeah, that's probably unlikely for the immediate future, but Yarn is positioned perfectly to siphon off npm's users very painlessly.
I think it's a great example of moving a little too quickly, honestly. Especially since it bricked plenty of npm installs instead of being "just another bug".
I love the service they offer. Not a fan of the tools or pace. npm ships with the Node installer, so why do they even have to care about improving?
Just my thoughts.
So today, by default, yarn goes through its servers, which then proxy to npm's. As time goes on, Facebook can add more features to its own repo (custom submissions instead of publishing to npm, or maybe a federated publish to both at first; nicer-looking pages with community engagement tools built in; better metrics (npm provides me with almost worthless metrics); etc.). Then, when enough people are using Yarn, Facebook could simply...flip the switch. Then boom, zero npm usage.
I would be terrified if I were NPM right now. They've sat on their hands for years not improving npm. I mean npm3 had a silly CLI animation that made all downloads take orders of magnitude longer simply because of the animation!
P.S. if you're trying to use yarn with a custom npm repository just drop a .yarnrc file into your project's directory then add the following in that file:
registry "http://<my register>"
From their perspective it's very difficult to make breaking changes to the client because of the sheer number of people depending on them. (We've faced similar problems on Babel)
Yarn still makes use of the npm registry and all the high quality infrastructure they've built and support they provide for the ecosystem. They are a critical piece of this.
Granted, it's far too early to write npm off. But with how slowly they've moved over the years, I'm unconvinced that yarn won't take over the space unless it runs into some bad problems. npm 3 and its launch were an absolute mess, and ultimately it's almost just a package managing tool. I am unconvinced that breaking changes are much of an issue for them, if at all. They could abandon shrinkwrap for a better system and I think everyone would be happy for it; no need to keep propping up that awful system.
Would they, though? If Yarn turns out to be the better tool that doesn't change the way they work (beyond solving some pain points) then would they even care? Facebook could even offload the hosting to some open source group like Apache.
It's purely anecdotal, but a lot of people I have worked with would love to see an alternative to npm. It's just slow, and the features it gives to authors are pretty limited (I want better metrics, please!). I'm not even sure I would call it sketchy; it would just simply be phasing out something deprecated (I'm not talking about a swift takeover, more like a slowly-boiling-the-frog-in-the-water takeover).
It's only my opinion and if Facebook really doesn't want that to happen then I guess it won't but it's pretty easy to imagine it, IMO.
When did it start becoming reasonable for a front-end-only part of the MVC pattern to have 68 dependencies?
Or for a transpiler like Babel to add 100k+ files? I'm sorry I just find it ridiculous that instead of taking a look at the disease (unbounded complexity), we are looking to engineer our way out of the problem by creating a package/dependency manager that "scales". Are the frontend problems at Facebook of showing HTML forms and buttons on a page really that complicated to warrant such a behemoth of a system?
This harks back to my days at LinkedIn, where I spent my time in a misguided effort to create a distributed build system because our codebase had grown so massive that it literally took a distributed system to build it and still release within the day. I feel bad for people working on these systems, because while it is fun and "technically interesting" to solve "problems at scale", you're really just digging the ditch deeper for your company. You are putting lipstick on the 500 lbs pig, making room in the dump to pour in more technical debt.
Just boggles my mind.
Do any other software platforms out there face the same magnitude of complications?
On a happier note, these are good problems to have. This is another indicator of how successful the web platform has become. You can build some absolutely incredible projects these days, and they are instantly available to billions of people who are operating thousands of different hardware devices (with different specs and dimensions) running dozens of operating systems. No gatekeeper, bureaucrat, or corporate curator must approve your work. Just simply deploy it, and anyone with the URL can access it. It is a mind-blowing human accomplishment. I look forward to seeing how the web continues to progress over the coming decades.
Absolutely: this is overkill for your blog or portfolio website, but part of the rising complexity with the tooling and dependency system is due to the rising complexity of the applications themselves.
- Why didn't Facebook contribute the updates to NPM directly?
- They are coupling a package manager and registry proxy; the latter has many existing implementations already
- The differences between Yarn and NPM+Shrinkwrap do not seem substantive; NPM made the design decision to use a sometimes non-deterministic install algo in NPM3 to speed up install times - when the network is involved, there is a serious trade-off between idempotency and speed
In general, FB seems to love building their own versions of existing tools:
- Flow, introduced 3 months after TypeScript
- Nuclide instead of Atom/Sublime/VSCode
- Jest instead of Jasmine/Mocha
- DraftJS instead of (insert one of the existing 100 text editors here)
I get that these are useful internally at FB, but I don't think they help the community much. It would be better to work with existing tools and contribute back to them than to reinvent the wheel every time something doesn't work perfectly for their use case.
I get that FB uses these as recruiting tools, it's useful for them to have rights for these projects, and it gives their engineers something to do and be excited about, but I do not want a whole dev tool ecosystem controlled by big FB.
Also, I find IED's approach to speeding up builds far more novel and interesting - https://github.com/alexanderGugel/ied
Do you think Flow was built in 3 months as a reaction to Typescript? Not that it was a big enough problem that two different groups of people independently decided to try to solve it?
EDIT: disregard this; Flow was introduced much more than 3 months after TypeScript. The blog post introducing Flow mentions TypeScript, and is probably worth reading for background.
> Nuclide instead of Atom/Sublime/VSCode
Nuclide is an Atom plugin.
> Jest instead of Jasmine/Mocha
Jest provides a superset of Jasmine functionality, and uses Jasmine as its test runner.
Did Flow take 2 years to get implemented? I didn't think the initial release looked like a project that had 2 years of dev time on it.
They probably knew that TS was in progress when they started, but I don't have a window into that. I did ask some Flow devs a few weeks ago if they are interested in merging with TS, and they answered with a categorical no.
As a caveat, I am a daily TS user, and have only played around with Flow.
What this means in practice I don't fully understand but it seems to result in enough architectural differences that it apparently makes sense to have both despite the superficial similarities in syntax and behaviour.
Flow on the other hand solves one problem and solves it well. It behaves as any other tool would.
As I said, try to transpile this in TS: http://redux.js.org/docs/basics/ExampleTodoList.html
This is why they made Flow.
Object spread is not part of the ES7 spec. It is a proposal which has not made it to stage 3 yet, and might still be dropped. That said, there is a TS issue tracking adding support for it: https://github.com/Microsoft/TypeScript/issues/2103.
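For readers who haven't hit it, the construct in question is the object spread used pervasively in Redux-style reducers (illustrative snippet; `state` and `action` are assumed from context):

    // copy `state` while overriding one field -- this is what TS couldn't yet compile
    const next = { ...state, visibilityFilter: action.filter };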
Finally, test runners tend to get stale. Any large organization will run into trouble with any kind of tooling as companies scale up engineers or codebases. Testing is no different and we aimed to solve those problems for Facebook; getting to open source our solutions is an added bonus and we are excited to work with the JS community to build the best tooling we can possibly build.
So yeah, I'm definitely on the "keep reinventing" side of this debate.
Many of those are perfect examples of where "improving" a current tool with the changes they wanted would end up with a more complicated and difficult to use tool.
- NPM makes tradeoffs that the majority of developers in the JS world want right now. Like it or not, deterministic installs aren't something that most people want or need (at least, they don't know that they might want or need them). Making a new tool that anyone can use when that is needed is a great solution, and it avoids bloating NPM with a hundred little flags and tons of code to cover edge cases. It's the perfect example, because anyone can switch to using yarn when it's needed without any extra work.
- Nuclide is actually just a layer on top of Atom. They wanted a bunch of things to all work one way, so rather than try to get Atom to change to include what they wanted, or require developers install a ton of extensions, they packaged them up on top of Atom as one system.
- Jest had different goals at the start. Testing React components was ugly and difficult, and Jest wanted to solve that first (IIRC; I'm not very familiar with Jest, unfortunately). Again, it's something that can be used if needed, and can even be combined with other testing tools if you want (we almost did that at my work: use Jest to test the front-end components, and our current mocha+friends setup to test the "business logic").
Bullshit. *nix systems rely on stable global dependencies, not 50 versions of the same library installed locally. I don't buy the "unix philosophy" excuse. The Node.js community just doesn't care about API stability, which is why there is that inflated number of modules, with 90% never maintained for more than a year.
- Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new "features".
Which is what this thread is about.
- Expect the output of every program to become the input to another, as yet unknown, program. Don't clutter output with extraneous information. Avoid stringently columnar or binary input formats. Don't insist on interactive input.
JS has wonderful stream support, and it's used everywhere in many tools. This is probably where JS is the weakest out of these.
- Design and build software, even operating systems, to be tried early, ideally within weeks. Don't hesitate to throw away the clumsy parts and rebuild them.
I mean, if JS isn't this taken to the extreme, I don't know what is...
- Use tools in preference to unskilled help to lighten a programming task, even if you have to detour to build the tools and expect to throw some of them out after you've finished using them.
This also describes JS to a T. Tons and tons of tools, each often only usable by a small number of people in niche situations.
Now I'm not saying that JS is as well designed as Unix; I'm not saying that it's exactly the same as Unix; I'm not saying that Unix doesn't do things better, or that JS doesn't have its warts. I'm just saying that it's the "unix philosophy" taken to the extreme. It might not be good, it might not last, but it's working out pretty well for many people right now.
Are you seriously telling me you've never had a linker (static or dynamic) choose the wrong library?
Given npm's years of crap, I'm ready to give it a shot anyway. This might be one of FB's projects that actually gets some traction, especially given it's developed in collaboration with other big names.
They should mention this at the very beginning. Multiple big players investing in this package manager means that we should maybe inspect a little bit more before chanting xkcd.com/927.
Basically, yarn is the npm client done right, reusing the same npm package repo.
Exponent uses a monolithic code repository (rather than many small code repos) to manage our many interdependent projects. Our repo is actually set up very similarly to Facebook's mono-repo, though obviously considerably smaller in size. We were encountering _many_ of the same problems as Facebook with our NPM setup -- long CI/CD build times, indeterminate node_modules directories that caused our projects and tools to work for some people on our team but not for others, and the inability to do "offline-only" npm installs in CI.
We actually talked with our friends at Facebook about these problems, and tried many of the same approaches they did -- shrinkwrap everything, checking in node_modules, uploading the node_modules folder to a separate place (we used another completely separate Git repo), etc. All these approaches either didn't work well or were difficult to maintain, especially on a small team.
Yarn has fixed all these issues for us.
One not-yet-super-publicized feature of Yarn (though I'm told there is a blog post coming) that has been super useful at Exponent is the ability to build a distributable offline cache of dependencies. We have a "node_modules-tarballs" folder in our mono-repo that contains tarballs of all the dependencies for every single project that we maintain at Exponent. Yarn will pull from this cache first before fetching from the NPM registry. This offers HYPER-predictability...it's a level of confidence over and above the "yarn.lock" file, and lets me sleep well at night knowing that my team can get up and running quickly no matter where they're working, how reliable their internet connection is, etc. No more worrying about "did I forget to update the shrinkwrap" -- things just work.
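Roughly, and subject to whatever that blog post ends up recommending, the mechanism is a .yarnrc setting plus an install flag (the directory name is just ours):

    # .yarnrc
    yarn-offline-mirror "./node_modules-tarballs"

    # installs can then run with no network at all
    yarn install --offline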
As we've started to use Yarn at Exponent, both on our engineer's machines as well as in our continuous integration and deployment environments, we've at least had one less tool to fight with. Yarn works, works fast, uses the great parts of the NPM ecosystem that we know and love, and most importantly, is _predictable_.
Definitely looking forward to that blog post from the Yarn team, but I'd definitely be interested in your own comments on how to use that "checked-in tarballs" approach.
That's exactly what we need! How do you tell Yarn to build and use this offline cache?
Yarn is a framework for doing X in web programming, and this is a framework for doing Y in web programming. Not sure if dependency management and cluster system management are far apart enough.
Search "go" on google, and the first link is the programming language. Search on any other search engine, and as one might expect, the definition or the game of go is ahead of the language.
warning electron-prebuilt-compile > electron-compilers > jade@<version>: Jade has been renamed to pug, please install the latest version of pug instead of jade
in npm, that would have just said the part after "jade@<version>:", which was really vague and didn't really make you want to "fix" it, because which npm module do you have to go into? Who knows, because npm (the package manager) didn't tell you.
One interesting little tidbit I found from diving into the source:
Oh! And open governance: https://github.com/yarnpkg/rfcs !
npm, Inc. strongly encourages people to build and use mirrors. For example, Cloudflare (where the mirror lives, and where seb used to work) has been running https://npmjs.cf/ for a long time.
Nix and ied (which borrowed from Nix) got these problems pretty much solved.
I don't understand what the argument against these approaches was.
I mean, okay, Nix has its own language, which is probably a turnoff for JS devs, but ied?
(Disclaimer: I'm the author of ied.)