Show HN: Duo – a next-generation package manager for the front end (duojs.org)
168 points by matthewmueller on Aug 21, 2014 | 100 comments

One question I always ask when looking at a package manager is, "Will this help me have reproducible builds?" (http://martinfowler.com/bliki/ReproducibleBuild.html)

I've worked on enough large projects that relied on prayer and optimism instead of determinism when resolving dependencies, and eventually you end up losing hours or days trying to get the app running on a new server or a new developer's box.

Not requiring a manifest is a bug, not a feature. If people can specify dependencies, and information about the version used is lost, then you can be sure that it will happen in practice. Code designed for "proofs of concept" has a funny way of making it into production.

As we know from studies of things like organ donation rates, even the smartest humans get tired or distracted and make bad decisions. The only guard we have against it is choosing sane defaults. (See Yehuda Katz's RailsConf keynote for some interesting insight into how this interacts with convention over configuration: http://www.confreaks.com/videos/3337-railsconf-keynote-10-ye...).

While the simplicity of DuoJS allows it to win the Pepsi Challenge against other options, where the first sip tastes very sweet, I would never again willingly choose a dependency manager that makes reproducibility anything other than mandatory.

Author here.

@tomdale We're on the same page. I've worked with many large JS codebases as well and consistency and repeatability are a must.

However, as with any tool, its effectiveness comes from how you use it. There are so many NPM modules that use "*" in their manifests. That's not very robust. Having a manifest isn't a solution in itself; what matters is how you use it.

In Duo you can pin down any dependency right in the file via require('some/repo@tag'), which is the same as having a manifest with a pinned version.

With that being said, I'm open to any and all contributions or ideas on how to make Duo more consistent and repeatable. I love what Docker does for its image builds and I think if we could get that kind of robustness going on the frontend, we would all benefit.

> In Duo you can pin down any dependency right in the file via require('some/repo@tag'), which is the same as having a manifest with a pinned version.

That will do nasty things for your ecosystem if people start doing that in libraries. You'll end up with version lock on shared dependencies:

    - My app depends on foo and bar.
    - foo and bar both depend on baz.
    - They both specify different tags.
Now your solver has a sad, and your user has a sad.

You can require two different versions of the same library and it works just fine. It uses a similar process to browserify.

This isn't a problem on server side javascript where packages can have different versions of the same dependency.

This may not be a problem on client side javascript if your dependencies don't export to the global object.

Sorry, I think I'm missing something because I don't see the problem -- why can't foo and bar just use different versions of baz? Is it not possible to do that?

Let's say your app my_app uses foo and bar. They both use baz. You give them each their own version of baz: foo gets baz 1.0 and bar gets baz 2.0.

Now, at runtime:

    my_app calls foo.giveMeAWidget()
    foo calls new baz.Widget() on its 1.0.0 version of baz
    my_app gets that back
    then it calls bar.doSomethingWithAWidget(widget) and passes it in
    bar calls baz.flummoxAWidget(widget) and passes the widget in
    baz.flummoxAWidget() starts doing stuff with the widget
The last step is bad. You have baz 2.0.0 code that assumes it has a baz 2.0.0 widget, but it's actually a 1.0.0 one. It could work, crash, or fail in some subtle way.

Note that this isn't statically detectable. It's based on how objects flow around in memory at runtime.

Now, in cases where you don't pass objects around like this, you'll avoid this problem. But it's really hard to tell when that's the case and when it isn't.
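A minimal sketch of this failure mode (the module and class names are illustrative; `flummoxAWidget` here just checks class identity explicitly, the way version-specific code does implicitly):

```javascript
// Two bundled copies of "baz" each define their own, distinct Widget class.

// baz 1.0.0, as bundled for foo
const baz1 = {
  Widget: class { constructor() { this.version = '1.0.0'; } }
};

// baz 2.0.0, as bundled for bar
const baz2 = {
  Widget: class { constructor() { this.version = '2.0.0'; } },
  flummoxAWidget(widget) {
    // 2.0.0 code assumes it was handed a 2.0.0 widget
    if (!(widget instanceof baz2.Widget)) {
      throw new TypeError('not a baz 2.0.0 Widget');
    }
    return widget;
  }
};

const foo = { giveMeAWidget: () => new baz1.Widget() };
const bar = { doSomethingWithAWidget: (w) => baz2.flummoxAWidget(w) };

const widget = foo.giveMeAWidget(); // a 1.0.0 Widget crosses the app boundary
try {
  bar.doSomethingWithAWidget(widget);
} catch (err) {
  console.log(err.message); // prints "not a baz 2.0.0 Widget"
}
```

The two `Widget` classes are separate objects in memory, so `instanceof` (and any version-specific assumption) fails even though both are "baz Widgets" to the programmer reading the code.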

"Doctor, it hurts when I do this."

What you're describing is an inappropriate intimacy antipattern.

Because people keep pinging me on twitter about this, I feel like I ought to explain a bit more.

What's going on in the pattern described is that you're getting a `BazWidget1` object from the `foo` module. Then, you're passing that object you got from `foo` into `bar`, which passes it to `Baz2Flummox(BazWidget2 widget)`.

Why are you flummoxing widgets that you don't know the origin of?

What kind of program needs to mutate state on an object, by passing it from one object-mutator to another? Why do you not have clearly defined ownership of objects, and clearly defined functions that take arguments and return values?

"Inappropriate Intimacy" is a code smell where one class is delving into the inner workings of another, depending on things that ought to be private. The `BazWidget` should be a private implementation detail of `foo` and `bar`, but instead, we are passing this implementation detail from one function call to another.

More specifically, we have `BazWidget1` and `BazWidget2` objects being used interchangeably. It's tempting to blame this on the package manager or module system, but it is just a badly designed program.

This is one class of errors that a "strongly" typed language can often detect and prevent at design time. However, I've seen C++ and Java programs with the same problematic antipattern, just with more layers of Interface wrapping. And, whatever, JavaScript is what it is, and is not "strongly" typed. It lets you pass any value as any argument to any function, and leaves it up to the callee to decide what to do with it.

Personally, I have no strongly held opinion about who's best to handle this responsibility: the compiler, the caller, or the callee. There are benefits to "loosely" typed languages as well, and I'd rather not make this about that.

I've referred to this sort of thing as a "gumby baton". You're passing an object from one worker to another, and each one mutates it a little bit, like runners in a relay race where the baton is made of clay, so it gets the fingerprint of each worker in turn.

This is a terrible antipattern! This is how we end up with middleware depending on other middleware having been executed in exactly the right order. It is terrible for reasoning about program behavior, and results in unexpected behavior when workers are combined in novel ways. Making programs harder to reason about makes security virtually impossible, and increases the cost of maintenance and re-use. Even up front, it is a challenging pattern to use in building an application, though it sounds appealing in principle if you've never been handed a warped and mangled baton.

So, when I say "Doc, it hurts when I do this", I'm implying that the proper response is "Don't do that".

Ultimately, it's not the compiler's fault. Gumby batons exist in C++ and Java and C and are even possible in pure functional languages. Be on the lookout for it. The compiler won't protect you. The module system won't protect you. You have to use your human brain.

Another caveat just to avoid any "tu quoque" responses: I've made this mistake (and sworn to never do it again!) many times. The most egregious offender in Node is mutating the req and res objects. But, it can be very subtle and hard to spot in the initial design. We just fixed a bunch of really subtle bugs in the lockfile module by changing how it was handling the options object, because it had taken on a gumby-baton behavior internally.

Mutation isn't necessary to demonstrate the problem. Consider three libraries: first, there's a basic, widely-used datetime library. There's also a timezone library depending on datetime 1.0 and a dateformat library depending on datetime 2.0.

  # Get the current time in the PST time zone (returns a 1.0 object)
  now = timezone.now_in_zone("PST")
  # Format the date for display (accepts a 1.0 object but depends on 2.0)
  formatted = dateformat.format(now)
Now the problem: imagine that datetime 2.0 switched from one-indexed months (1 is January) to zero-indexed months (1 is February). The timezone library depends on datetime 1.0, so it used "1" to indicate January, giving me a datetime value with month=1. The dateformat library depends on datetime 2.0, so it incorrectly interprets that month=1 as February. All of my January dates will now be incorrectly formatted as February.

(Switching month representation is a rather drastic example, but it's also clear. Substitute a more subtle data format change if you'd like.)

There's no mutation here, and the dependency graph is trivial. I just received a datetime from one library and passed it to another library.
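The same skew can be sketched in plain JavaScript (the `timezone`, `dateformat`, and `datetime` libraries here are hypothetical, as in the example above):

```javascript
const MONTHS = ['January', 'February', 'March']; // abbreviated for brevity

// timezone depends on datetime 1.0, where months are one-indexed
const timezone = {
  now_in_zone: (zone) => ({ zone, month: 1 }) // month=1 means January in 1.0
};

// dateformat depends on datetime 2.0, where months are zero-indexed
const dateformat = {
  format: (d) => MONTHS[d.month] // 2.0 convention: month=1 means February
};

const now = timezone.now_in_zone('PST');    // a "1.0-shaped" value
console.log(dateformat.format(now));        // prints "February" -- should be January
```

No mutation, no shared state: the plain data value simply means different things to the two library versions.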

It's possible that I'm missing something, but I've been asking Node users about this for a couple years and I usually get blank stares. I also have no horse in this particular race. Vendoring seems like a great idea to me, but I fear the uncertainty of this version mismatch situation.

The problem is here: "(accepts a 1.0 object but depends on 2.0)"

This case actually has nothing to do with nested dependencies. Your time zone lib is returning a datum with one type, which you're passing to a formatting lib that expects a datum with another type. This can happen if your libs have dependencies that are totally different libraries instead of the same library with different versions. It can also happen if your libs have no dependencies at all. This is not an issue of dependencies but of you not understanding your libs' APIs.

In most languages, a type mismatch would always correspond directly to a type name mismatch. In e.g. Python (since it has a clear module system), if I know that f() returns a datetime.datetime, and I know that g(t) takes a datetime.datetime, then I know that they will compose. (Modulo bugs of other types, of course.)

When using NPM, I don't have that guarantee. The docs for these libraries could clearly state that they'll integrate around the datetime type, but that may be false in practice. And I'll only know that ahead of time if (1) I know that they both use datetime internally, (2) I know exactly the versions of datetime that they use, and (3) I know exactly how datetime changed in between those versions. With non-vendoring package management, I don't have to know any of these things.

You've correctly identified a guarantee that you don't have with npm. In practice, anecdotally, I've never run into this issue, while I have many times and with much pain dealt with conflicting deep dependencies using bower and bundler. Which may explain the blank stares. It's a tradeoff I am personally happy to make.

> clearly defined functions that take arguments and return values?

Sure, but what are the types of those arguments and return values?

If you call what I describe an anti-pattern, you're basically saying that packages can only interact using primitive types defined in the language. You can reuse code, but not data structures.

I think that's too much of a limitation. I want to reuse code that defines matrices, and vectors, and interesting collections, and business model objects like mailing addresses and currencies. I want to make games that use a mesh type defined in one package and pass it to a collision engine in another.

Saying "you can't use any user-defined type in any public API" is an incredibly harsh limitation, and what do you get for that in return? The ability to bloat your application with multiple versions of the same library?

In this situation, "foo" or "bar" or both should not depend on baz, but have it as a peer dependency. There is nothing broken, just a very specific case not so easily identifiable, but very easily fixable.

Sadly, this isn't really a solvable problem. On one extreme you force all transitive dependencies to use the same version. On the other extreme, you bundle n different versions of a library. Organizations such as Google have strict policies to enforce the former extreme - only one version of a library is allowed at a time. Unfortunately, this isn't possible for the OSS community where there's no enforceability.

With Duo you can actually have multiple versions of the same JavaScript dependency. And then you can use the CLI to figure out what duplicates you have if you want to slim them down.

Please forgive my ignorance, but what is the problem with bundling n different versions of a library? Is it just that the bundle size increases? Or are there additional problems?

(That's not to say that the bundle size isn't important -- just that I would like to know if it's the only drawback)

Foo calls Bar and Baz, Bar calls Qux:0.01 and Baz calls Qux:0.05. Your project is Rumba, which calls methods from Bar and Baz and also needs Qux:0.2, although it turns out that you can use anything from Qux:0.03 through 0.3 and you just specified it as Qux, unversioned.

Your coworker is having problems with Baz. How many collisions are in your brain right now when you think about Qux.GiveAnother ?

Depends on the language.

Talking about reproducible builds : is there something better than nix http://nixos.org/nix/ ? (honest question)

Nope. Nix is the best.

It sounds like this means that if I wanted to use a specific version of Ember, I would have something like the following line in many files:

Doesn't this mean that to update an Ember package, I have to change every file that references that package? That would mean having to touch the JS file for every component in your project to bump the version.

Am I missing something?

You'd use a component.json to lock it down for an app. The inline versioning is really just a way to quickly write code and get something up and running. Manifests are a pain when you're just getting started with a project or a small script.

If a component.json is present, it'll use that to get the versions and you just require('emberjs/ember-component')
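For illustration, a minimal component.json along these lines might look like this (the package names come from the examples in this thread; the versions are hypothetical):

```json
{
  "name": "my-app",
  "dependencies": {
    "emberjs/ember-component": "1.0.0",
    "matthewmueller/uid": "0.0.2"
  }
}
```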

Ah. I would probably clarify the language to make it clear that having a manifest approach is the intended approach for real-world apps, but that it can be left out when playing around.

The manifest is optional, so for big projects I would probably recommend using a manifest to keep all your versioning in one place.

Duo falls back to the manifest if it cannot resolve the path on its own.

For proof of concepts and hacks though, adding a manifest is a waste of everyone's time.

I personally haven't had much trouble with:

  npm install foo --save-dev
When you go beyond a single developer playing around, having a quick, well-known place to look to learn what dependencies are being used is very useful.

People mentioned the manifest file, but I went one step further to try and get duo to adopt a lock file: https://github.com/duojs/duo/issues/220

I like the ideas, but it feels like it's just a small incremental step over established solutions like webpack and browserify.

Does this have any more to offer other than deducing what to download and install from the dependency tree? Because if that's "all", it would be easily added to e.g. webpack, I suppose. Is it really worth making yet another dependency and build tool just for that one feature? If you build on top of webpack, you get a lot of stuff for free, like speedy and dependable file watching, hot-reloading development servers, support for nearly any imaginable frontend language, and a remarkably decent extensible architecture. All this has to be made again for Duo.

Both dependency management and frontend building are highly complex tasks. I'm not saying that therefore it couldn't be done better, but I do honestly, without judging, wonder whether the authors seriously considered existing solutions and ran into impossible problems, or whether this is just the Not Invented Here syndrome at work.

People interested in Duo may also want to have a look at jspm.io. It solves a similar problem, but with a few differences which to me are advantages:

- Transparently supports modules from CommonJS, AMD, ES6 or globals.

- Enforces a manifest (config.js) that lets you pin dependencies (incl. transitive dependencies) to exact versions. Unlike RequireJS config, jspm automatically manages that file for you.

- Supports multiple package providers, e.g. NPM, on top of Github.

- Based on SystemJS, a polyfill for the upcoming standard System loader. This hopefully makes it future-proof.

- Does not require a compilation step: dependencies can be pulled dynamically from a CDN over SPDY. Alternatively they can be cached locally as well. A compilation step (jspm bundle) is still available.

- Works both in the context of Node and the browser.

We've been successfully using jspm and SystemJS in production at the Guardian. It's still early days, but the devs are very active and responsive.

This isn't meant to distract people away from taking a look at Duo and making up their own mind, but I noticed nobody mentioned jspm in this thread and thought people may want to look at both and compare.

I'm not sure this actually improves the flow for me. I (like most sane devs), like to lock my deps to specific versions (or vendor them). Currently, it's just a matter of running:

npm install --save dep

With this, I either have to be satisfied with not locking down the version, or go lookup the current version manually before adding the reference to my code.

It also looks like upgrading a dep would mean changing every require.

Personally I think I'd much rather have my dependencies be explicit rather than inferred from what I require across my code base. The idea of a tool which supports both Bower and Component packages is quite nice though.

I used to think this too, but the more I use Go the more I enjoy not needing to maintain a silly manifest.

I keep meaning to spend some time playing with Go, but I haven't yet so I don't know how that feels. However, I wonder what it says that several dependency management tools have emerged for Go.

> However, I wonder what it says that several dependency management tools have emerged for Go.

In the case of Go, it's more meaningful to note that most larger Go projects have been developed for quite some time without needing any of these tools.

If you need specific versions, the standard, idiomatic Go approach is to vendor your dependencies. Google, for example, does not use any Go-specific tool for vendoring IIRC. The last tool you mentioned, Godep, is one Go-specific approach at vendoring. It's also the one that I have anecdotally heard recommended, but this comes with absolutely zero personal experience and is solely hearsay.

But most projects don't even need to vendor - I've been writing Go as my primary language for almost 2 years both at work and for personal use, and I have never once needed godep. YMMV, obviously.

(Looking at tools that exist isn't meaningful per se, because people will try to build libraries to replicate patterns from other languages whether or not they apply. They will look for what they're familiar with, see that it's "missing", and then decide to write one, rather than questioning whether that idiom is actually appropriate for this new language.)

Fair enough that the existence of tools isn't really indicative of anything. Although at least the three I mentioned have a few hundred stars on GH which suggests some people are actually using them.

Anyway, thanks for sharing your experience! Your comment seems to suggest that you haven't had the need to fix the versions of any of your dependencies. Is that really true? I would find that quite surprising given that you've been coding in Go for a couple years.

> If you need specific versions,

Who doesn't need this? Do you really want different people on your team accidentally using different versions of dependencies?

> the standard, idiomatic Go approach is to vendor your dependencies.

How does that work with transitive dependencies? Do you vendor those too?

> Google, for example, does not use any Go-specific tool for vendoring IIRC.

How much third-party code does Google actually use?

    >  Do you vendor [transitive dependencies] too?
Yes. Here's an example: https://github.com/soundcloud/roshi

That sounds like a nightmare to maintain. :(

It's trivial.

Then when you need to pin down versions and such, it's a massive pain (in terms of not being able to have a manifest file in Go's native case).

Pote's GVP/GPM works really well here. It's such a small system to learn and use, but does everything you could ever need when versioning godeps:

https://github.com/pote/gpm && https://github.com/pote/gvp

I think the classic solution of vendoring your dependencies would be the best strategy here.

Just to be clear, you can actually pin down the dependencies by creating a `component.json` manifest and adding the specific versions you want. You'd want to do this when publishing your own components, or when building a large app, but for quickly sketching out ideas you can just require them inline. Basically the manifest is optional, so you can choose when it makes sense to lock things down.

That's cool :) I assume the manifest can be auto-generated by examining the code? In that case, that makes me much more interested.

Not quite yet, but it totally could. We've been talking about trying to find a way to directly pin in the source with the help of a nice CLI instead. So that we can keep having no manifests, but get pinned deps at the same time without having to manually go through them all.

So earlier this summer I learned how to use RequireJS + a smidgen of Grunt, then I felt the need to move towards Gulp and Browserify (which I've just recently started), and now I'm excited about Duo.

It would be interesting for someone with more expertise to do a compare/contrast further down the line of all three.

"...I show you how deep the rabbit hole goes" - Morpheus

Also, does someone know how the Closure compiler fits into all of this? It might be totally unrelated but I'm trying to learn more about JavaScript application architecture, and I'm not sure where that fits in.

I tried a lot and use RequireJS + Gulp now (same Gulpfile on every project). Works for me, I don't see a compelling reason to change something.

My Gulpfile has 200 lines and does everything from asset-manifests to css minification, js-reloading etc.

> I felt the need to move towards Gulp and Browserify

Dude, you're so behind the times. It's Yeoman and Bower now. Wait, Broccoli and Duo.

Yeoman isn't a build tool though, it just generates the application skeleton.

Maybe it's just me but having no manifest makes using this tool quite painful. It has taken me 10 mins just to figure out how to structure the require()s so that it doesn't error out, and that's with only requiring two packages.

For example I created a quick index.js file and required angular and restangular. It immediately errored out because the angular.js repo doesn't use semver on its master branch. Ok fine, switch to angular/bower-angular, nope, it looks for index.js so my require now has to read: require('angular/bower-angular:angular.js'). Run again, error, same issue with restangular, it's looking for index.js. That require now reads: require('mgonto/restangular:dist/restangular.js'). I had to actually find where the files are that I wanted to require and explicitly state them in the require. Shouldn't this automatically parse bower/component/package.json files for this info, especially if you're touting the 'No Manifest' thing?

index.js as the entry point for a module is pretty standard with projects built around Component and npm. It's bower and the lack of the standard there that creates the issue.

As an author of a (small) number of npm+bower packages I must admit that nope, that's not "standard" in any meaningful way.

It's definitely standard in Node land and also Component(1). Bower, on the other hand, does not and thus is the culprit.

index.js is only 'standard' in node insofar as it is the fallback entry point if a package.json file is not found or does not specify a 'main' file.

It's only moderately common. Just took a look at a project of ours, and of 565 packages, only 261 (46%) had an index.js.

But, but.. I _just_ decided to use webpack :(

More seriously: do you feel like it incorporates all the lessons of its predecessors?

I think I'm missing something. I see how this would be great when it's build time but during development I don't want to have to keep running build commands each time I want to use a new package I'm developing. Using Grunt and Bower may be a little time consuming up front but once things are set up it's very easy to keep a separate dev and prod environment in sync between different contributors. I only see the value at build time with Duo which is why I'm sure I missed something.

You inherently need a build step when working with remote dependencies or even any dependency. Especially when you're working with various assets like templates, css, images, etc...

Typically, let's say for an app, you'll run a watcher which will trigger the builder, so you can just spin it up and leave it alone. For smaller components that are meant for consumption, a simple Makefile is more than enough and running the command isn't too terribly bad.

Don't you already have to run some command to install the new package anyway? Not really sure I see how this is different in that respect.

You need to run a build script for Browserify as well, which IMO everyone who is using Bower/RequireJS should switch to.

It looks like the cli utility has a built in watch flag that can be passed to automatically rebuild on changes

I imagine it would be possible to wire up a simple watcher in gulp/grunt that would do this automatically.

Does a simple `require('package')` without the slashes fetch the thing from NPM?

Maybe it could. Maybe it could also work with `require('package@0.5.1')` and other sugars like this.

require('package') will look into the manifest to see if there's a remote path and version to use, otherwise it will be set as unresolved and ignored in the build.

I am a little confused about how Duo actually transforms dependencies into JS values. When I write, say,

var uid = require('matthewmueller/uid');

as in the home page example, what gets bound to the uid identifier?

The point is that a package manager does not only need to fetch dependencies, but also to specify relations between modules. This is why, for instance, Bower only does half the story (fetching) and it has to be coupled with a tool like Require.js to actually provide modules.

For clarity, let me make a comparison with the JVM. On the JVM, when one requires a class in a package, it is up to a classloader to find it. The default classloader will look up the class code in a directory structure based on the package name. Where to locate this directory structure is decided based on environment variables, but usually it ends up having multiple entry points, either on the filesystem or inside zipped files called JARs. A separate tool, like Maven or SBT, can help you fetch JARs from repositories. But what actually makes everything click is the fact that once you have fetched the JAR, you know where to locate classes inside it based on their name.

In the frontend world there just isn't such a standardization on paths, and this is what makes it difficult to locate a module even when the Github repo for it has been downloaded.

How does Duo solve this issue?

The flow is...

1) Download https://github.com/matthewmueller/uid from github

2) Find its entry using https://github.com/duojs/main

3) Parse its entry, and extract the dependencies

4) Recurse
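Those four steps could be sketched roughly like this (not Duo's actual implementation: `fetchRepo` is a hypothetical stub standing in for the GitHub download and entry lookup, and a real tool parses an AST rather than regex-scanning the source):

```javascript
// Extract require() specifiers from a module's source.
// Naive regex scan for illustration; real tools walk the parsed AST.
function extractRequires(source) {
  const deps = [];
  const re = /require\(\s*['"]([^'"]+)['"]\s*\)/g;
  let match;
  while ((match = re.exec(source)) !== null) deps.push(match[1]);
  return deps;
}

// 1) download the repo / 2) find its entry (both hidden inside fetchRepo),
// 3) extract its dependencies, 4) recurse into each remote dependency.
function resolve(repo, fetchRepo, seen = new Set()) {
  if (seen.has(repo)) return seen;            // already visited
  seen.add(repo);
  const source = fetchRepo(repo);             // steps 1 and 2
  for (const dep of extractRequires(source)) { // step 3
    // 'user/repo' specifiers are remote; './lib' style paths are local files
    if (dep.includes('/') && !dep.startsWith('.')) {
      resolve(dep, fetchRepo, seen);          // step 4
    }
  }
  return seen;
}
```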

I have a hard time seeing what exactly the point of having another package manager is. We already have a very established way of managing front-end packages: NPM and Browserify. I don't see why you would want any of the features that are advertised in this package manager, namely using Github repos vs. npm packages, and manifestless packages. All it gives you is a more fractured ecosystem and code that won't run on the server.

Is Duo smart enough to remember when it finds a library requirement with a specific tag? So that it is not necessary to indicate the tag on the other places where this same library is required.

If so, manifests could be written in a very neat way. See: https://gist.github.com/aymericbeaumet/22c3a9deba54549821e3

You could also just export from each of the requires in that manifest.js file you have there.

Although there's no real benefit of doing that over using the manifest other than it's kinda cool :)

One could indeed export its requires. However, my idea was to stay close to the require caching system existing in Node.

As a personal taste, I by far prefer to configure my application in a JavaScript file rather than a JSON file. And I consider dependencies as a part of the configuration process. It agrees with the Duo philosophy which aims to simplify the building process from A to Z.

That is a pretty cool idea! I wouldn't have even thought to do that

Is there support for symlinking for easy development? I mean the equivalent for 'npm link' or 'bower link'.

Also, can you host your own "duo registry" (if the concept makes sense here). And if so, can you point to the private registry instead of using the public GitHub?

Bower and npm support these, and that's what I'm using currently.

For private packages, you'd just make the repo private on Github and make sure you have auth details in your .netrc file. So you don't really need your own "registry". There is talk about supporting more than just Github too.

That's a rather centralized solution. There are many companies who host their own npm/bower registries, and they won't move to Duo.js until that's possible.

But if you're at that point you probably have your own private git hosting somewhere too (or just private repos somewhere else). When there is support added for different remotes you could just require from there instead of Github.

Point is, just use the git repos instead of needing that registry at all.

Yeah, that's a working solution, once support for that is added. Assuming versioning works with these git URLs.

Just released a new library so Sass users can now use Duo.js with duo-sass (https://github.com/stephenway/duo-sass). Check it out!

Reading the copy on the page, I wonder if the author has heard of npm init and npm install --save. I haven't had to edit package.json in quite a while.

Reading the copy on the page, I get the impression that the author dislikes the idea of a manifest (at all) rather than just the act of editing it.

npm supports git repositories as dependencies already (git://github.com/...), so that feature doesn't seem like much added value to me.

And considering that any real project needs a manifest, the versioning syntax doesn't seem especially compelling.

I'd like to be proven wrong, because this looks cool... but I don't see the value over npm and browserify other than some syntactical sugar (which could probably be achieved with an npm wrapper). Am I missing something?

I don't remember the details; for simple repos it's fine. I think it's a dependency-of-dependency problem that Isaac didn't want to support for some reason.

here's a rabbit hole of additional information: https://github.com/npm/npm/issues/3014


Things are moving fast in the front-end world. Just as we've stopped using RequireJS and accepted Browserify as the new king, DuoJS comes and overthrows everything again.

I love simple things, this gets my +1, look forward to learning more.

I can't see why this is browser only, am I missing something?

I'd imagine doing this for node.js would require modifications to node itself since the behaviour of require would need to change. On second thought, I suppose you could just rename packages and mess with their path in node_modules to make things work. But that would mean moving out of the npm ecosystem which seems rather bold.

Duo.js could implement duo_require for Node.js. Every "Duo.js Node.js" app would need to be bootstrapped with Node.js require, though.

It would be beautiful to have such a simple and open package manager for node. Git FTW :-)

The Go link should point to http://golang.org

Is it fast during development on large codebases? (e.g. by compiling incrementally)

The slow part is fetching dependencies. The "compiling" isn't compiling in the traditional compiled-language sense, where that would most likely take a significant amount of time.

The tool might have to recursively parse each file into an AST to find the `require` statements, and potentially apply various transforms (e.g. coffeescript).

On large frontend codebases, these tasks can take an annoyingly long time. Hence, some tools cache (e.g. see browserify vs. watchify).

Yup, it will cache the files automatically and check mtimes so that only the changed files need to be reparsed for dependencies. It ends up being really fast to build once dependencies are installed.


This looks superb. Great work.

You can require dependencies and assets from the file system or straight from GitHub:

Oh my. You've essentially killed Bower and NPM in one swoop then haven't you?

An exciting time to be a web developer to say the least, this looks simply amazing.

well, except for the `npm install -g duo` bit :-)

npm already does this from any git repo. I have no idea why anyone would want to tie themselves to github only.

Does Duo do the same as gulp+browserify?

