I'm amazed more people haven't heard of webpack, it solves a ton of these problems.
I'd say that nearly all the tools listed in your comment serve a purpose, solve specific problems, and ease certain pain points, so I consider them useful assets in my workflow.
It feels a bit like going to an all-you-can-eat restaurant, eating too much, and insisting on trying everything on the menu. As a result, you might get sick from overeating and from mixing foods you shouldn't mix. And while such behavior is human nature, and while it can be argued that this alone is a good reason to avoid such a restaurant in the first place, it can also be argued that much of the problem could be avoided by applying a little more restraint.
Gulp might be great in some cases, but why not use the 'npm script' approach until you need it? Installing packages for everything is seductive and just one command away, but it's not all that different from the cut-and-paste-from-stack-overflow programming that most of us learned to avoid.
Now I'm not saying that the problems in the 'front-end ecosystem' aren't real. I'm sure they are as I still run into them even though I try to be very conservative these days, and I'm sure many of the complaints are from people who are better coders than I am.
But at least for myself, my Node.js work has become significantly more fun, and has involved significantly less 'grunt work' (heh) by simply avoiding the urge to add yet another dependency, API or tool "just because it's there".
And they choose them because they follow fashion.
I seriously find this to be very implausible.
Why? If they're already using gulp and have people in-house that understand gulp, why is it so unreasonable that they don't want their consultants adding a new build system to the mix?
As other HNers have already replied, most of the projects are existing ones.
Our customers are the enterprise, not startups.
The consultants are expected to bring value within the customer's IT stack.
Usually even the dev environments are provided by IT, to be returned on project termination.
If the customer hires us to assess their stack and provide feedback on what to change, that is another matter.
Which still needs to be approved by their IT anyway.
Besides, I never liked them that much.
> Package.json also doesn’t support variables
Not sure what he means by this; npm scripts do support env vars from package.json config values:
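For instance (a minimal sketch; the port value and `http-server` tool are illustrative, not from the comment), anything under the `config` key in package.json is exposed to scripts as an `$npm_package_config_*` environment variable:

```json
{
  "config": {
    "port": "8080"
  },
  "scripts": {
    "serve": "http-server -p $npm_package_config_port"
  }
}
```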
Consequently, it takes several lines of awkward boilerplate just to run a simple Browserify job that is a one-liner at the CLI. I've also seen Gulp adapters for popular tools, such as ESLint, that break suddenly when something apparently unrelated in the NPM set-up is updated, even though ESLint itself also still runs just fine with a one-liner at the CLI.
In fact, those were the two examples I used when I put my foot down over build tools on one project I work on a little while back. Within 20 minutes we'd replaced well over 100 lines of Gulp code with about 10 one-liner CLI jobs that did all of the same things. The new version has the added advantages of actually working, which the Gulp-based code used to but didn't any more, and of requiring a total of 0 changes since then to remain working, other than when the tools themselves changed their interfaces.
Also streaming stuff around inside a Gulp process is good in terms of speed, but imo quite complicated conceptually, and doesn't match the simplicity of piping stdout between one-liner CLI commands.
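To illustrate what I mean (the tool choices and paths here are my own assumptions, not from the project described above), the same streaming idea expressed as an npm script that just pipes stdout between one-liner CLI tools:

```json
{
  "scripts": {
    "build:css": "node-sass src/main.scss | cleancss -o dist/main.min.css"
  }
}
```

`node-sass` writes the compiled CSS to stdout when no output file is given, and `cleancss` reads from stdin, so the whole pipeline is one line with no temp files.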
This statement is sensationalist and I don't think it presents a useful argument. Sure I could write a script that uses one of those packages but that has nothing to do with whether or not any of those packages actually helps me achieve my goal.
That said, I have nothing against Gulp and it's quite possible I might use it in the future. For example, the approach of using npm scripts can be problematic if you need to work on different operating systems. And I'm sure there are plenty of cases where the complexity is best managed with Gulp or its ilk.
I'm now working on the next step. It's not quite ready to be announced yet, but I'm cooking up modd, a similarly focused tool for monitoring the file system and responding to changes (https://github.com/cortesi/modd). Modd has already supplanted gulp entirely for many of my use cases, and has replaced supervisor and a bunch of other tools to boot. Many of the actions triggered by modd for front-end projects are precisely invocations of npm scripts as described in the article. A few more features (desktop notifications with Growl/notify, and script access to the list of changed files), and modd will be ready for me to ask for public feedback.
Both modd and devd are small, single-purpose tools written in Go, released as statically compiled binaries with no external dependencies. I've tried to make them tight and focused, and if I get it right, they will hopefully be a refreshing change after gulp and grunt.
I guess, if your end goal is to move everything away from the node ecosystem. To each their own.
Second, not everything is node. Devd has lots of uses outside of node and even outside of front-end development.
Third, I probably wouldn't have written it if I didn't believe that devd did at least some things better than the current tools (at least for some people). You should try it - maybe you'll like it, and if you don't then that's fine too. As you say - to each their own. ;)
Live-server is capable of a lot more than I mentioned. There's also lite-server which is geared more toward SPAs.
Anyway, if you enjoy hacking on devd I won't fault you for it.
I'm not really looking to switch to go. I'm one of those weirdos who actually really likes JS. The massive NPM ecosystem just makes everything that much more fun.
Using npm scripts to build an asset pipeline - Modulus
I haven't found a good build process using npm as a task runner or build tool, and I haven't seen a good example in any article I've read so far. Gulp was built for this, and it's much clearer in my opinion!
Use JSPM for module management, transpiling (Babel, Traceur, Typescript), and bundling
live-server for LiveReload. With zero config it injects the LR stub and sets up a file watcher on the directory it's run in.
BrowserSync can be loaded via an npm command:
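For example (the files glob is illustrative), as a package.json script using BrowserSync's own CLI:

```json
{
  "scripts": {
    "serve": "browser-sync start --server --files 'dist/**/*'"
  }
}
```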
Sass can be set up with:
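Something like this (paths are illustrative), using node-sass's built-in watch mode:

```json
{
  "scripts": {
    "watch:sass": "node-sass --watch src/scss --output dist/css"
  }
}
```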
Here's what I use to build my Angular2+ES6 app:
Using npm as the default task runner is becoming more common, so it's not too difficult to find good examples through Google.
$ jq .scripts package.json
{
  "start": "npm-run-all --parallel client server live",
  "client": "beefy client/index.coffee:index.js $npm_package_config_beefy_port --cwd client -- --transform coffeeify --extension '.coffee'",
  "server": "BEEFY_PORT=$npm_package_config_beefy_port LIVE_PORT=$npm_package_config_live_port PORT=$npm_package_config_port nodemon --watch server --watch template --ext coffee,jade server/app.coffee",
  "live": "live-reload --port=$npm_package_config_live_port --delay=3000 client server template"
}
I ran into a strange problem: if you use node-sass to watch a directory, adding a new partial and its @import statements doesn't quite work as it should. If you change something in the imported file, node-sass won't recompile, so you don't see your changes. The only way around it is to re-save the file that does the @import, or restart the process.
The solution I found is to use the more generic 'watch' package to watch the directory, and make it simply re-run node-sass without the watch flag. Aside from solving the problem, it also ended up feeling cleaner to use 'watch' everywhere I needed it instead of relying on all the different watch approaches of the different tools.
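A sketch of that setup (paths are illustrative): the `watch` package simply re-runs the plain, watch-free build command whenever anything in the directory changes:

```json
{
  "scripts": {
    "build:css": "node-sass src/scss/main.scss dist/css/main.css",
    "watch:css": "watch 'npm run build:css' src/scss"
  }
}
```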
The one downside is that certain tools' own watch capabilities are faster or do partial updates. In the few cases where that is true I would just use the built-in watch functionality of course. But it's rarely been an issue for me.
I also wrote a related blog post about this a few months back. You don't need gulp.
Leave Gulp to be Gulp and do the heavy lifting, while using npm scripts to run the gulp tasks and anything that doesn't live inside gulp.
It keeps things simple: people unfamiliar with the setup can look at the package.json file and no further if they just want to use it, and it still allows all the "plug and play" style plugins from gulp.
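A minimal sketch of that hybrid (the task names and the deploy script are illustrative assumptions): npm scripts stay the single entry point, delegating to gulp only where it helps:

```json
{
  "scripts": {
    "build": "gulp build",
    "test": "mocha test/",
    "deploy": "npm run build && ./scripts/deploy.sh"
  }
}
```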
- You need to run both the server and the watcher from the same shell, and the proposed way is to use parallelshell, which is not as robust a tool as Gulp. Specifically, trying npm run dev as suggested leads to an error that, after termination, leaves 4 processes running in the background that you have to kill manually, or else you can't access the same ports. Not fun.
- It requires you to "hijack" your source files with script tags that I don't feel belong there:
Why are Gulp plugins better?
- Fast: they use streams, with no temporary files.
- Uniform API: stream in, stream out; no massive command-line options.
- A convenient and expressive node-glob abstraction for selecting the files/directories to be watched.
- Less magic, more control and understanding of what is going on, and less exposure to bugs.
Here is the absolutely basic LiveReload setup that I wasn't able to achieve with Npm scripts:
This argument of dropping Grunt/Gulp seems the same to me. "Don't use a pre-coded builder or task runner, just build it yourself". I'm not sure I agree that's necessarily the best route for every project and/or coder.
Right now I've been playing with node and gulp, just for fun. So far I've found the most comfortable route for me is to have gulp kick off the process and then use non-gulp packages when it feels right. For instance, I don't use a gulpified version of Handlebars, I just use the actual npm package. I can't say this is the "best standards" way to go about it, but it feels right for me.
I respectfully disagree.
Using jQuery is helpful because it makes things work reliably where raw JS sometimes runs into browser quirks, and because it makes the code for various common functionality cleaner and more concise.
The trouble with tools like Gulp in the current ecosystem is that while they were supposed to do something similar, in practice they often seem to have the opposite effect: they introduce quirks and instability even when the tools they are running work just fine, and they make things that should be simple one-liners take a bunch of boilerplate and actual thought to set up.
This argument of dropping Grunt/Gulp seems the same to me. "Don't use a pre-coded builder or task runner, just build it yourself".
But here your default assumption seems to be that this kind of tool is necessary, and that not using one is the active change.
Why do I need a task runner to automate running something like
browserify src/main.js -o dest/main.js
eslint --parser babel-eslint --plugin react src
I've heard similar arguments about jQuery as well. But are you saying there's a fundamental problem with gulp itself, or how people are using it?
As for your browserify example - keep in mind I'm not heavy into node/gulp at the moment - I would say that it is rather simplified compared to what gulp actually offers. From my way of thinking, your example only handles a single file; what happens when you complicate the matter a bit more? Do you end up recreating gulp in the end?
As far as I can see, Gulp relies on plugins to be useful, and too many plugins for important tools seem to be either unstable or overly complicated.
The Gulp ecosystem is also a smaller illustration of the general Node/NPM culture problem that almost anything, no matter how simple, seems to get its own package to install. This very fine-grained packaging results in its own kind of dependency hell and a sort of artificially forced update schedule where updating one package to fix a bug or get some useful new feature has a knock-on effect that forces updates of numerous other things to remain compatible (or not, if those updates themselves break things or introduce new bugs).
From my way of thinking your example only handles the single file, what happens when you complicate the matter a bit more? Do you end up recreating gulp in the end?
Yes, that kind of command would be for a web app with a single entry point JS file generated from a single starting source file. But even that case, the simplest possible, requires a lot of boilerplate for no benefit at all in Gulp. Worse, the required boilerplate has changed several times within a year or two just to keep the Gulp infrastructure working.
I'm not sure what you have in mind for "when you complicate the matter", but I have yet to see an example related to Browserify that Gulp made any easier, even on projects with multiple targets each with their own starting point for building. So no, I don't see how any greater complexity I've encountered on a real project would wind up recreating a Gulp-like tool in the end (though of course that's just my own experience and doesn't mean no-one else has such a project).
Could you not say the same for any system that relies on or offers third-party additions? If I use an npm package that's badly coded, do I blame node or npm?
As for the rest of your comment, I appreciate the feedback. But by "complicate the matter", I mean multiple files, with multiple tasks to perform on those same files, resulting in a single output for each original file. A simple example from a gulp task I've been playing with: start with a folder structure of markdown files; iterate over each, reading the front matter for options and then removing it; convert each markdown file to HTML; compile it with the Handlebars template listed in the front matter; rename the file to match the front matter title (although I'm thinking of skipping this step and keeping the original markdown file name, which means I do nothing); and output to a single file in a build folder while keeping the original folder structure. I also have a watch task that looks for any markdown file being updated and runs this task for me. Now, I'm assuming all this can be done without gulp, but does using gulp not help with this task?
Of course. Any external dependency is a trade-off between the extra value you get, typically through saving time and/or solving a problem better than you know how to do yourself, against the overheads.
The key point in this case is that I just don't see much extra value in Gulp and its plugin ecosystem. As shown in my examples above, it's a whole load of extra mechanics to replace literally one-liner CLI operations. On most projects I've seen try to adopt Gulp, it wasn't clear that the Gulp-based tooling was actually more effective even when everything did work properly, which is why as you've probably figured out by now I'm not much of a fan.
But by "complicate the matter", I mean multiple files, with multiple tasks to perform on those same files, resulting in a single output for each original file. [...] Now, I'm assuming all this can be done without gulp, but does using gulp not help with this task?
I don't think I can give an informed opinion about your specific example without knowing more about the details. All I can say with confidence is that in the more complicated multi-stage build processes I've worked with, I've yet to find Gulp an advantage over plain old scripting of the kind we've been using for a decade or five. Shell scripting was made for this kind of automation work, and if you prefer more general programming tools or need cross-platform portability then you can write similar logic straightforwardly in any number of programming languages, without all the complication that these "task runners" introduce.
At that point, you have far more flexibility as well. That makes a noticeable difference if you want to do things not quite within the target area of a tool like Gulp, such as processing of two different types of file together. How do you wrangle Gulp's stream-based approach into, say, hashing CSS or JS filenames for cache-busting purposes and then incorporating the appropriate hashed filenames into your HTML files' headers? You can do it -- it's ultimately just JS code, after all -- but is Gulp's framework really helping at that point, or are you just coding your way around it? In my experience, it's been the latter in 100% of cases, but again, YMMV.
Plus, I've never heard npm/node described as a task runner. I would appreciate an explanation of what you mean by that.
But you're right that it's maybe not immediately obvious unless you've already played around with the scripts functionality and task runners in general. And I suppose you do need to have a basic familiarity with the concept of piping data from one unix command to another, running commands in sequence or parallel, the difference between "&", "&&", ";", and "|" etc. Not to mention the difference between STDERR and STDOUT which I still don't fully master. Thankfully this has almost never been an issue in practice.
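For anyone less familiar with those primitives, here's a quick demonstration of each one in a standard POSIX shell (the commands themselves are trivial placeholders):

```shell
# The shell primitives npm scripts lean on:
echo "step 1" ; echo "step 2"        # ';'  runs commands sequentially, regardless of success
true && echo "ran"                   # '&&' runs the next command only if the previous succeeded
false && echo "never printed"        # ...so this echo is skipped
printf 'b\na\n' | sort               # '|'  pipes stdout of one command into the next
echo "to stderr" 1>&2                # sends output to STDERR instead of STDOUT
sleep 0 & wait                       # '&'  backgrounds a command; 'wait' rejoins it
```

npm scripts hand each command string to the shell, so all of the above works inside a package.json `scripts` entry on *nix systems.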
That said, I strongly suggest you take this opportunity to continue diving into this issue and maybe trying out both approaches yourself. My own experience getting into the node ecosystem and back-end stuff has been that I wasted countless hours on figuring out task runners, solving weird bugs and issues, switching from grunt to gulp, where I could have avoided all that by learning how to use the npm scripts functionality instead.
As a bonus, becoming more familiar with *nix approach to stringing together commands has been incredibly useful for many other things! For example, I've been able to replace certain scripts (backups, ftp synchronization, data conversion) with much simpler solutions that rely on this OS-wide functionality.
Where your analogy does make some sense though, is cross-platform support. In the same way that jQuery solves cross-browser issues, a task runner solves the problem where your npm script commands might not work on non-*nix systems, or vice-versa. But that's a bit like using jQuery exclusively to solve XHR incompatibilities, instead of a library like superagent.
I think it is the case that most groups simply write lazy Grunt code in ways that make it hard to test. This is a chronic problem throughout the devops space; the code is frequently terrible. So don't do that. Don't drag the Grunt instance out into your code. Separate your concerns. Write testable functions and classes.
Or do it in Python like I did - http://kopy.io/yvEGl - it works amazingly well and gives you endless flexibility without requiring bash.
Also, minor nit: the author claims `&&` is "cross platform". I regret to inform him it is not. It may well work for bash and cmd.exe, but it has been disallowed as a command separator in PowerShell for a few versions. Aiming for cross-shell compatible commands is more of a minefield than you'd expect - I actually knew someone who'd rewired a Ruby DSL to use as a shell. I've found it best for cross-platform work to write as much in JS as possible and just let npm tasks call out to scripts, using as little of the shell as possible.
Check out my postversion hook!
By the way, even though JSON does not support comments, sometimes I add a "#TODO:fooscript" prop in the scripts and use it for comments as well.
Another interesting trick is that a script can call other npm scripts.
Also, another nice feature is that commands in the node_modules/.bin folder are added to PATH.
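Putting those three tricks together in one sketch (the tool names here are examples, not from the comment): `eslint` and `browserify` resolve from node_modules/.bin because npm puts it on PATH, the dummy key holds a pseudo-comment, and the last script calls the other npm scripts:

```json
{
  "scripts": {
    "//": "#TODO: split build into dev/prod - JSON has no comments, so notes live in a dummy key",
    "lint": "eslint src",
    "build": "browserify src/main.js -o dist/bundle.js",
    "check": "npm run lint && npm run build"
  }
}
```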
Yes, npm scripts rock!
I'm a little unclear on why you'd try to integrate Webpack into Gulp - Webpack is more than sufficient by itself. And I've tried using NPM-only scripts, but they haven't yet matched the dev server, hot reloading, production-readying functionality inside Webpack.
Webpack configuration files are absolutely gross, though.
BTW, if you're not into bash for writing long scripts, it's pretty easy to write shell scripts in Node.js. Node 0.12+ has a native `execSync` that makes it nice to write build code in JS in a synchronous way. I wrapped it with two helper functions to get something like `execAndPrint` and `execAndReturn`:
It's not as powerful as a bash script (it doesn't support the | syntax for redirecting, etc.), but if you need that, you can extract it into a helper bash script.
See also https://github.com/shelljs/shelljs
With so much conflicting wisdom to sort through - "yuck, monolith!", "omakase!", "YAGNI!", "not enough abstraction!", "too much abstraction!" - we should admit that we often end up leaning on instinct/emotion when making these decisions.
Just remember that you don't have to pick one philosophy and stick with it for all time exclusively. Often down the road you'll see the value in something you thought you had sworn off.
The main problem with this line of reasoning is that as things get more complex and expansive over time (and they will, as night follows day), the developer will likely resort to building abstractions over the npm scripts to hide all the tangled wires and simplify the dev process - and then we're back to square one.
Abstractions are a necessary evil, but they should be built and managed efficiently and wisely to avoid the common pitfalls and pain points and to make the development process easier and less annoying.
The real consequence of having the community diverge on build tools is the added work required of tool creators to support them. I created a tool that included a grunt wrapper just before gulp became the new 'hotness' and quickly became fed up with the whole mess.
Soon after, I discovered that npm scripts can call the bin scripts of locally installed dependencies. I haven't looked back since.
NPM scripts make composing tasks nice... and there's nothing stopping you from adding custom configs to the package.json object.
It's quick, easy to debug (arguments to commands can be dumped in debug mode, run manually etc).
Also for certain tasks Python annihilates Gulp plugins.
This is a production example (not pretty code per se) http://kopy.io/yvEGl when combined with intellij watchers this makes it very easy to rebuild on change.
Using virtualenvs I can also keep one project isolated from another in terms of jumping around the filesystem.
The convenience is a major win and since I use python for everything other than the web framework which is PHP it actually ties together nicely.
Unconventional but it works really well.
The author fails to acknowledge that gulp is also asynchronous. This is what makes it "fast". Streaming is just a nice abstraction for passing data through a pipeline.
Sure, you can do async on the command line, but that's not trivial, and the author doesn't address that.
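For what it's worth, parallelism at the CLI is often a one-liner too - either with the shell's `&`, or with `npm-run-all --parallel` as used earlier in the thread (the task names and paths here are illustrative):

```json
{
  "scripts": {
    "build:js": "browserify src/main.js -o dist/bundle.js",
    "build:css": "node-sass src/scss/main.scss dist/css/main.css",
    "build": "npm-run-all --parallel build:js build:css"
  }
}
```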