I've worked on enough large projects that relied on prayer and optimism instead of determinism when resolving dependencies, and eventually you end up losing hours or days trying to get the app running on a new server or a new developer's box.
Not requiring a manifest is a bug, not a feature. If people can specify dependencies in a way that loses information about the versions used, you can be sure that loss will happen in practice. Code designed for "proofs of concept" has a funny way of making it into production.
As we know from studies of things like organ donation rates, even the smartest humans get tired or distracted and make bad decisions. The only guard we have against it is choosing sane defaults. (See Yehuda Katz' RailsConf keynote for some interesting insight into how this interacts with convention over configuration: http://www.confreaks.com/videos/3337-railsconf-keynote-10-ye...).
While the simplicity of DuoJS allows it to win the Pepsi Challenge against other options, where the first sip tastes very sweet, I would never again willingly choose a dependency manager that makes reproducibility anything other than mandatory.
@tomdale We're on the same page. I've worked with many large JS codebases as well and consistency and repeatability are a must.
However, as with any tool, its effectiveness comes from how you use it. There are plenty of NPM modules that use "*" in their manifests, which is not very robust. Having a manifest isn't itself a solution; what matters is how you use it.
In Duo you can pin down any dependency right in the file via require('some/repo@tag'), which is the same as having a manifest with a pinned version.
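For illustration, that specifier shape ('user/repo', optionally with an '@version' and a ':path') can be parsed mechanically. This is a hypothetical sketch of such a parser, not Duo's actual resolver:

```javascript
// Parse a Duo-style require specifier: 'user/repo[@version][:path]'.
// Purely illustrative; Duo's real resolution logic is more involved.
function parseSpec(spec) {
  const m = /^([^/]+)\/([^@:]+)(?:@([^:]+))?(?::(.+))?$/.exec(spec);
  if (!m) throw new Error('invalid spec: ' + spec);
  return {
    user: m[1],             // GitHub user or org
    repo: m[2],             // repository name
    version: m[3] || '*',   // pinned tag/version, or '*' if unpinned
    path: m[4] || null,     // explicit file inside the repo, if any
  };
}

console.log(parseSpec('some/repo@1.2.0'));
// → { user: 'some', repo: 'repo', version: '1.2.0', path: null }
```

The point is that the pin travels with the require itself, so a reader of the source sees exactly which version is in play.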
With that being said, I'm open to any and all contributions or ideas on how to make Duo more consistent and repeatable. I love what Docker does for its image builds, and I think if we could get that kind of robustness going on the frontend, we would all benefit.
That will do nasty things for your ecosystem if people start doing that in libraries. You'll end up with version lock on shared dependencies:
- My app depends on foo and bar.
- foo and bar both depend on baz.
- They both specify different tags.
Now, at runtime:
- my_app calls foo.giveMeAWidget()
- foo calls new baz.Widget() on its 1.0.0 copy of baz and returns the widget
- my_app receives that widget back
- my_app calls bar.doSomethingWithAWidget(widget), passing it in
- bar calls baz.flummoxAWidget(widget) on its own, differently-versioned copy of baz
- baz.flummoxAWidget() starts doing stuff with a widget built by a different version of itself
Note that this isn't statically detectable. It's based on how objects flow around in memory at runtime.
Now, in cases where you don't pass objects around like this, you'll avoid this problem. But it's really hard to tell when that's the case and when it isn't.
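To make the failure mode concrete, here is a minimal, self-contained sketch. The module and function names are invented to match the scenario above, and the two "installs" of baz are simulated with a factory function rather than a real package manager:

```javascript
// Simulate a package manager installing two separate copies of "baz":
// each copy defines its own Widget class, so instanceof checks fail
// across copies even though the code is (nearly) identical.
function loadBaz() {
  class Widget {
    constructor() { this.flummoxed = false; }
  }
  function flummoxAWidget(widget) {
    if (!(widget instanceof Widget)) {
      throw new TypeError('widget is from a different copy of baz');
    }
    widget.flummoxed = true;
  }
  return { Widget, flummoxAWidget };
}

const bazForFoo = loadBaz(); // foo's nested baz (think baz@1.0.0)
const bazForBar = loadBaz(); // bar's nested baz (think baz@2.0.0)

// my_app gets a widget from foo...
const widget = new bazForFoo.Widget();
// ...and hands it to bar, which passes it to *its* baz:
try {
  bazForBar.flummoxAWidget(widget);
} catch (e) {
  console.log(e.message); // widget is from a different copy of baz
}
```

Nothing in the static dependency graph flags this; it only surfaces when the object crosses from one copy's world into the other's at runtime.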
What you're describing is an inappropriate intimacy antipattern.
What's going on in the pattern described is that you're getting a `BazWidget1` object from the `foo` module. Then, you're passing that object you got from `foo` into `bar`, which passes it to `Baz2Flummox(BazWidget2 widget)`.
Why are you flummoxing widgets that you don't know the origin of?
What kind of program needs to mutate state on an object by passing it from one object-mutator to another? Why do you not have clearly defined ownership of objects, and clearly defined functions that take arguments and return values?
"Inappropriate Intimacy" is a code smell where one class is delving into the inner workings of another, depending on things that ought to be private. The `BazWidget` should be a private implementation detail of `foo` and `bar`, but instead, we are passing this implementation detail from one function call to another.
More specifically, we have `BazWidget1` and `BazWidget2` objects being used interchangeably. It's tempting to blame this on the package manager or module system, but it is just a badly designed program.
Personally, I have no strongly held opinion about who's best to handle this responsibility: the compiler, the caller, or the callee. There are benefits to "loosely" typed languages as well, and I'd rather not make this about that.
I've referred to this sort of thing as a "gumby baton". You're passing an object from one worker to another, and each one mutates it a little bit, like runners in a relay race where the baton is made of clay, so it picks up the fingerprints of each worker in turn.
This is a terrible antipattern! This is how we end up with middleware depending on other middleware having been executed in exactly the right order. It is terrible for reasoning about program behavior, and results in unexpected behavior when workers are combined in novel ways. Making programs harder to reason about makes security virtually impossible, and increases the cost of maintenance and re-use. Even up front, it is a challenging pattern to use in building an application, though it sounds appealing in principle if you've never been handed a warped and mangled baton.
So, when I say "Doc, it hurts when I do this", I'm implying that the proper response is "Don't do that".
Ultimately, it's not the compiler's fault. Gumby batons exist in C++ and Java and C and are even possible in pure functional languages. Be on the lookout for it. The compiler won't protect you. The module system won't protect you. You have to use your human brain.
Another caveat just to avoid any "tu quoque" responses: I've made this mistake (and sworn to never do it again!) many times. The most egregious offender in Node is mutating the req and res objects. But, it can be very subtle and hard to spot in the initial design. We just fixed a bunch of really subtle bugs in the lockfile module by changing how it was handling the options object, because it had taken on a gumby-baton behavior internally.
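A minimal sketch of the pattern being warned against, with invented middleware names. Each function quietly mutates the shared req object in place, so correctness depends entirely on call order, and neither function's signature reveals that:

```javascript
// "Gumby baton" in miniature: each middleware mutates the shared req
// object, and later middleware silently depends on earlier mutations.
function parseToken(req) {
  req.user = req.headers.token === 'secret' ? 'alice' : null;
}

function requireUser(req) {
  if (req.user === undefined) throw new Error('parseToken never ran');
  if (req.user === null) throw new Error('unauthorized');
}

const req = { headers: { token: 'secret' } };
parseToken(req);   // works only because this runs first
requireUser(req);  // depends on parseToken's side effect
console.log(req.user); // alice
```

Swap the order of the two calls and requireUser throws; combine these with other middleware in a novel order and you get exactly the hard-to-reason-about behavior described above.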
# Get the current time in the PST time zone (returns a 1.0 object)
now = timezone.now_in_zone("PST")
# Format the date for display (accepts a 1.0 object but depends on 2.0)
formatted = dateformat.format(now)
(Switching month representation is a rather drastic example, but it's also clear. Substitute a more subtle data format change if you'd like.)
There's no mutation here, and the dependency graph is trivial. I just received a datetime from one library and passed it to another library.
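Here is one way that scenario could play out, sketched with made-up stand-ins for the two library copies. The assumption (mine, for illustration) is that the datetime library switched from 0-based to 1-based months between versions:

```javascript
// Hypothetical "datetime" library, two copies at different versions.
// datetime 1.0 (used by the timezone lib): stores months 0-based.
const datetime1 = {
  make: (year, month, day) => ({ year, month: month - 1, day }),
};
// datetime 2.0 (used by the formatting lib): assumes months are 1-based.
const datetime2 = {
  format: (dt) =>
    `${dt.year}-${String(dt.month).padStart(2, '0')}-${String(dt.day).padStart(2, '0')}`,
};

const now = datetime1.make(2014, 8, 28); // what timezone.now_in_zone returns
console.log(datetime2.format(now));      // 2014-07-28: silently off by a month
```

No exception, no mutation, trivial dependency graph; just a value that means one thing to one copy of the library and something else to the other.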
It's possible that I'm missing something, but I've been asking Node users about this for a couple years and I usually get blank stares. I also have no horse in this particular race. Vendoring seems like a great idea to me, but I fear the uncertainty of this version mismatch situation.
This case actually has nothing to do with nested dependencies. Your time zone lib is returning a datum with one type, which you're passing to a formatting lib that expects a datum with another type. This can happen when your libs depend on totally different libraries, not just on different versions of the same library. It can also happen if your libs have no dependencies at all. This is not an issue of dependencies but of you not understanding your libs' APIs.
When using NPM, I don't have that guarantee. The docs for these libraries could clearly state that they'll integrate around the datetime type, but that may be false in practice. And I'll only know that ahead of time if (1) I know that they both use datetime internally, (2) I know exactly the versions of datetime that they use, and (3) I know exactly how datetime changed in between those versions. With non-vendoring package management, I don't have to know any of these things.
Sure, but what are the types of those arguments and return values?
If you call what I describe an anti-pattern, you're basically saying that packages can only interact using primitive types defined in the language. You can reuse code, but not data structures.
I think that's too much of a limitation. I want to reuse code that defines matrices, and vectors, and interesting collections, and business model objects like mailing addresses and currencies. I want to make games that use a mesh type defined in one package and pass it to a collision engine in another.
Saying "you can't use any user-defined type in any public API" is an incredibly harsh limitation, and what do you get for that in return? The ability to bloat your application with multiple versions of the same library?
(That's not to say that the bundle size isn't important -- just that I would like to know if it's the only drawback)
Your coworker is having problems with Baz. How many collisions are in your brain right now when you think about Qux.GiveAnother?
Am I missing something?
If a component.json is present, it'll use that to get the versions and you just require('emberjs/ember-component')
Duo falls back to the manifest if it cannot resolve the path on its own.
For proof of concepts and hacks though, adding a manifest is a waste of everyone's time.
npm install foo --save-dev
Does this have any more to offer other than deducing what to download and install from the dependency tree? Because if that's "all", it could easily be added to e.g. webpack, I suppose. Is it really worth making yet another dependency and build tool just for that one feature? If you build on top of webpack, you get a lot of stuff for free, like speedy and dependable file watching, hot-reloading development servers, support for nearly any imaginable frontend language, and a remarkably decent extensible architecture. All of this has to be made again for Duo.
Both dependency management and frontend building are highly complex tasks. I'm not saying that therefore it couldn't be done better, but I do honestly, without judging, wonder whether the authors seriously considered existing solutions and ran into impossible problems, or whether this is just the Not Invented Here syndrome at work.
- Transparently supports modules from CommonJS, AMD, ES6 or globals.
- Enforces a manifest (config.js) that lets you pin dependencies (incl. transitive dependencies) to exact versions. Unlike RequireJS config, jspm automatically manages that file for you.
- Supports multiple package providers, e.g. NPM, on top of GitHub.
- Based on SystemJS, a polyfill for the upcoming standard System loader. This hopefully makes it future-proof.
- Does not require a compilation step: dependencies can be pulled dynamically from a CDN over SPDY. Alternatively they can be cached locally as well. A compilation step (jspm bundle) is still available.
- Works both in the context of Node and the browser.
We've been successfully using jspm and SystemJS in production at the Guardian. It's still early days, but the devs are very active and responsive.
This isn't meant to distract people away from taking a look at Duo and making up their own mind, but I noticed nobody mentioned jspm in this thread and thought people may want to look at both and compare.
npm install --save dep
With this, I either have to be satisfied with not locking down the version, or go lookup the current version manually before adding the reference to my code.
It also looks like upgrading a dep would mean changing every require.
In the case of Go, it's more meaningful to note that most larger Go projects have been developed for quite some time without needing any of these tools.
If you need specific versions, the standard, idiomatic Go approach is to vendor your dependencies. Google, for example, does not use any Go-specific tool for vendoring IIRC. The last tool you mentioned, Godep, is one Go-specific approach at vendoring. It's also the one that I have anecdotally heard recommended, but this comes with absolutely zero personal experience and is solely hearsay.
But most projects don't even need to vendor - I've been writing Go as my primary language for almost 2 years both at work and for personal use, and I have never once needed godep. YMMV, obviously.
(Looking at tools that exist isn't meaningful per se, because people will try to build libraries to replicate patterns from other languages whether or not they apply. They will look for what they're familiar with, see that it's "missing", and then decide to write one, rather than questioning whether that idiom is actually appropriate for the new language.)
Anyway, thanks for sharing your experience! Your comment seems to suggest that you haven't needed to fix the versions of any of your dependencies. Is that really true? I would find that quite surprising given that you've been coding in Go for a couple of years.
Who doesn't need this? Do you really want different people on your team accidentally using different versions of dependencies?
> the standard, idiomatic Go approach is to vendor your dependencies.
How does that work with transitive dependencies? Do you vendor those too?
> Google, for example, does not use any Go-specific tool for vendoring IIRC.
How much third-party code does Google actually use?
> Do you vendor [transitive dependencies] too?
https://github.com/pote/gpm && https://github.com/pote/gvp
It would be interesting for someone with more expertise to do a compare/contrast further down the line of all three.
"...I show you how deep the rabbit hole goes" - Morpheus
My Gulpfile has 200 lines and does everything from asset-manifests to css minification, js-reloading etc.
Dude, you're so behind the times. It's Yeoman and Bower now. Wait, Broccoli and Duo.
For example, I created a quick index.js file and required angular and restangular. It immediately errored out because the angular.js repo doesn't use semver on its master branch. OK, fine, switch to angular/bower-angular; nope, it looks for index.js, so my require now has to read: require('angular/bower-angular:angular.js'). Run again, error, same issue with restangular: it's looking for index.js. That require now reads: require('mgonto/restangular:dist/restangular.js'). I had to actually find where the files I wanted to require live and explicitly state them in the require. Shouldn't this automatically parse bower/component/package.json files for this info, especially if you're touting the 'No Manifest' thing?
More seriously: do you feel like it incorporates all the lessons of its predecessors?
Typically, let's say for an app, you'll run a watcher which will trigger the builder, so you can just spin it up and leave it alone. For smaller components that are meant for consumption, a simple Makefile is more than enough and running the command isn't too terribly bad.
Maybe it could.
Maybe it could also work with `require('email@example.com')` and other sugars like this.
var uid = require('matthewmueller/uid');
as in the home page example, what gets bound to the uid identifier?
The point is that a package manager does not only need to fetch dependencies, but also to specify relations between modules. This is why, for instance, Bower only does half the story (fetching) and it has to be coupled with a tool like Require.js to actually provide modules.
In the frontend world there just isn't such a standardization on paths, and this is what makes it difficult to locate a module even when the Github repo for it has been downloaded.
How does Duo solve this issue?
1) Download https://github.com/matthewmueller/uid from github
2) Find its entry using https://github.com/duojs/main
3) Parse its entry, and extract the dependencies
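Step 3 can be sketched roughly like this. It's a toy scanner (a single regex over the source), not Duo's actual implementation, which would need to handle comments, strings, and build a real AST:

```javascript
// Toy dependency scanner: find require('<slug>') calls in a source
// string and collect the slugs. Illustrative only, not Duo's parser.
function extractDependencies(source) {
  const deps = [];
  const re = /require\(\s*['"]([^'"]+)['"]\s*\)/g;
  let m;
  while ((m = re.exec(source)) !== null) {
    deps.push(m[1]);
  }
  return deps;
}

const entry = `
  var uid = require('matthewmueller/uid');
  var ng  = require('angular/bower-angular:angular.js');
`;
console.log(extractDependencies(entry));
// → ['matthewmueller/uid', 'angular/bower-angular:angular.js']
```

Each extracted slug then feeds back into step 1, so resolution recurses through the whole dependency graph.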
If so, manifests could be written in a very neat way. See: https://gist.github.com/aymericbeaumet/22c3a9deba54549821e3
Although there's no real benefit of doing that over using the manifest other than it's kinda cool :)
Also, can you host your own "duo registry" (if the concept makes sense here). And if so, can you point to the private registry instead of using the public GitHub?
Bower and npm support these, and that's what I'm using currently.
Point is, just use the git repos instead of needing that registry at all.
And considering that any real project needs a manifest, the versioning syntax doesn't seem especially compelling.
I'd like to be proven wrong, because this looks cool... but I don't see the value over npm and browserify other than some syntactical sugar (which could probably be achieved with an npm wrapper). Am I missing something?
here's a rabbit hole of additional information: https://github.com/npm/npm/issues/3014
I can't see why this is browser only, am I missing something?
On large frontend codebases, these tasks can take an annoyingly long time. Hence, some tools cache (e.g. see browserify vs. watchify).
Oh my. You've essentially killed Bower and NPM in one swoop then haven't you?
An exciting time to be a web developer to say the least, this looks simply amazing.