The Ultimate Frontend Build tool: make (rdio.com)
176 points by jmtulloss on March 24, 2014 | 117 comments



God I hate make. I hate make so very, very much. Compiling code isn't that hard. I swear to the programming lords on high it isn't.

Here's a wonderfully useful open source project - Google gperftools [1]. It includes TCMalloc amongst other things. The Makefile.in is 6,390 lines long. Configure is 20,767 lines long. Libtool is 10,247 lines long. That's fucking insane.

Compiling open source is such a pain in the ass. Particularly if you're trying to use it in a cross-platform Windows + OS X + Linux environment. One of my favorite things about iOS middleware is that one of their #1 goals is to make it as easy as possible to integrate into your project. Usually as simple as: add a lib, include a header, and call one single line of code to initialize their system (in the case of crash report generation and gathering).

I work on a professional project. We have our own content and code build pipeline. It supports many platforms. I don't want anyone's god damn make bullshit. I want to drop some source code and library dependencies into my source tree that already exists and click my build button that already exists.

</rant>

[1] https://code.google.com/p/gperftools/


What you are describing is autoconf, not make. Make by itself is actually a very handy tool for performing tasks that have a dependency graph.

Autoconf... Well, I can't disagree. It's a hack built on top of a hack and should probably be rethought. Once autoconf is done generating Makefiles, make itself is generally trouble-free.

http://freecode.com/articles/stop-the-autoconf-insanity-why-...


Everyone says that, until they're put in charge of babysitting an old HPUX or AIX box and it's time to install something. Then no one complains about Autoconf again (though they can't bring themselves to praise it, either).

Automake, on the other hand...


I'm okay with autoconf. Automake however is an abomination.


mk-configure is a "lightweight" autotools replacement

<http://sourceforge.net/projects/mk-configure/>


That's a GNU autotools setup, not a pure Makefile setup. A simple Makefile setup is much, much easier. A current C project I'm working on has around 150 lines worth of Makefile stuff, and that covers compiling the source, running flex over the lexer files, an option for creating a .tar.gz of the current source, and an option to run tests on the compiled object files. It's also dead simple to maintain (the only time it requires a direct update is when adding test cases, and that's because you have to specify which source .c files to test).

Now to be fair, I am sacrificing some options here. The biggest is that autotools runs tests on the installation environment, checking the available functions and standards compliance, which in theory allows compiling the source on any system that autotools supports; that's why it's so huge. You can't do that with standard Makefiles. I just stick close to the standard and avoid any non-standard extensions that I don't need.
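
For a sense of scale, the skeleton of a setup like that is tiny. A sketch with invented file names (not the actual project):

    CC     = cc
    CFLAGS = -std=c99 -Wall
    SRCS   = main.c parser.c
    OBJS   = $(SRCS:.c=.o) lexer.o

    # (in a real Makefile the recipe lines must start with a literal tab)
    prog: $(OBJS)
        $(CC) $(CFLAGS) -o $@ $(OBJS)

    lexer.c: lexer.l
        flex -o $@ $<

    dist:
        tar czf prog-src.tar.gz $(SRCS) lexer.l Makefile

    .PHONY: dist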


Autotools is crap. Configure on its own isn't godawful -- it scans for a LOT of Unix variants, and does sniff for features reasonably well. On my more cynical days, I'd say it does a good job of making things portable from unix to unix. Everything else about autotools is unmitigated crap, though.

I've written my own generic make library. The library itself is 95 lines of script, and handles all the dependency sniffing, library and binary building, and so on that I need in a reusable way.[1]

The makefiles themselves just list the sources and object files. They're typically only a few lines, like so[2]:

    BIN = mybinary
    OBJ = foo.o bar.o baz.o
    include ../mk/c.mk
That's it.

[1] http://git.eigenstate.org/ori/mc.git/tree/mk/c.mk

[2] http://git.eigenstate.org/ori/mc.git/tree/6/Makefile


And yet most contemporary hipster build systems are all just shiny make reinventions.


Actually, most contemporary hipster build systems are bad reimplementations of make. Yes, make is a PITA, but every other build system is worse.


I have yet to see a 'hipster build system' that mixes shell and make language or uses punctuation for variables.

Are all things made since 1977 hipster?


Yes.

Get off my lawn, hippie.


Most new things are inspired by something else to the extent that they can be viewed as reinventions. So that's a moot point, and doesn't contribute at all.


My preferred hipster build system (CMake) actually leverages make on UNIX platforms and nmake on Windows :). It just replaces most of the autoconf mess.


> It just replaces most of the autoconf mess.

With a non-standard mess of its own ...

CMake is pretty bad at doing things (standard paths, install targets etc.) that the GNU folks solved a long time ago. Yes, the Autotools are a royal PITA but at least a pain that one knows how to deal with.


With a non-standard mess of its own ...

Perhaps, but not any that I have had problems with.

E.g. an application that we distribute uses Qt, Boost, Berkeley DB XML, libxml2, libxslt, etc. Producing signed application bundles for OS X, MSI installers for Windows, and packages for Ubuntu has been nearly painless. And that's with clang on OS X, Visual C++ on Windows, and gcc on Linux. If it's easy to produce binaries on the three most popular platforms, with three different compilers, I don't see the problem.

We have tried autotools before, but it's a pain on Windows with Visual Studio. Not to mention that with CMake I can quickly generate a Visual Studio project to do some debugging.


Sure, if you do the packaging for a restricted set of environments yourself, CMake certainly works fine. I do the same for a lot of projects and know what CMake is capable of.

But when it comes to the differences between all those Linux distributions, the respective packager will be very glad to see that he can customize install prefixes (no, CMAKE_INSTALL_PREFIX is not enough) and use standard make targets.

Your list of dependencies shows libraries that are well covered by the stock CMake modules but try getting a build variable that is not LIBS or CFLAGS from a library that can only be queried with pkg-config. Impossible.


But when it comes to the differences between all those Linux distributions,

That's a fair point. However, most often, I am more interested in accommodating the 99.8% of the population that uses Windows, OS X, or one of the major Linux distributions, than the tiny group that runs Sabayon and is able to get things compiled themselves if necessary.


CMake certainly has its own set of peculiarities, but sometimes it can work to let you build stuff on Windows with VC++ or plain MinGW without having to use MSYS/Cygwin.

Creating portable software and then distributing it with a POSIX-only build system seems wasteful.


Reinventions that are cross-platform (this actually matters).


What platforms doesn't make support? AFAIK GNU make supports pretty much everything and there are versions of make for Solaris/BSDs.


Make exists everywhere, but you have to explicitly write separate rules for every system.
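
In practice that tends to mean a uname switch at the top of the Makefile rather than truly separate rule sets. A sketch, assuming GNU make; the flags are only examples:

    UNAME := $(shell uname -s)
    ifeq ($(UNAME),Darwin)
        LDFLAGS += -framework CoreServices
    else
        LDFLAGS += -lrt
    endif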


Auto* is still the most cross-platform build system I've ever used. It let me compile a project from 2003 on Windows Vista; how many other systems can you say that for?


Good enough for me.


Count me in as a Makefile hater. I'm even using it to manage a portable /home directory and it's just making me hate it even more. Why did all the alternatives have to fail or be worse than Make?


If you're managing dotfiles with Make (which I attempted once...) then may I direct you to GNU Stow instead? It's much easier to manage a bunch of files centrally and just symlink to them all:

http://brandon.invergo.net/news/2012-05-26-using-gnu-stow-to...
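
The workflow is one subdirectory per "package" of dotfiles, and stow does the linking. A sketch, assuming a ~/dotfiles layout with invented package names:

    $ cd ~/dotfiles   # contains vim/ zsh/ git/
    $ stow vim        # symlinks ~/dotfiles/vim/.vimrc -> ~/.vimrc
    $ stow -D vim     # deletes those symlinks again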


The problem I keep having with various dotfiles tools is that they do too much, where a script with a bunch of calls to ln -s would do.
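
For what it's worth, the whole "tool" can be something like this (a sketch; the file names are invented):

    #!/bin/sh
    # symlink each dotfile from this repo into $HOME
    for f in vimrc zshrc gitconfig; do
        ln -sf "$PWD/$f" "$HOME/.$f"
    done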


May I introduce you to dircombine? Take multiple directories full of dotfiles and symlink them all into another directory.

http://git.kitenet.net/?p=joey/home.git;a=blob_plain;f=bin/d...


I've come to think that build systems are a very personal utility. Everyone has their favourite. Mine's fabricate.py, for example. Over time I've built up a library of script snippets and shortcuts and so on which I'm familiar with, comfortable with, and exactly fulfil all my use cases. It's all very clever. :)

Yet when I download some random project's source code, I groan at any sophistry in the build process at all. I'm not interested in your build system - I'm interested in the application itself. Maybe I want to try and fix a bug, or have a half-baked idea for a new feature. I don't need dependency checking, incremental rebuilding, parallel building, and all that stuff you get from a fully operational build system at this point. I only need to build the project - once - as I decide whether to stick around. Sure, if I start working on it for serious, rebuilding over and over - then I'll bother to learn the native build system, and read any complicated scripts. Build systems are an optimization for active developers. They're a utility that is supposed to save time.

Of course, you're never going to get everyone in the world to agree on the same build system. We all have different desires and needs for what machines it should run on, how automated, how much general system administration it should wrap up, how abstractly the build should be described, etc. It's a bit like one's dot files or choice of text editor - my ideal build is tailored just for me but I wouldn't expect it to satisfy anyone else.

So now I wish that everyone who distributes software as source code would do this: include a shell script that builds the project. Just the list of commands, in the order that they are executed, that carries out a full build on the author's system. That's what it comes down to, in the end, isn't it? Your fancy build system should be able to log this out automatically. (Of course then you still include all the fancy build stuff as well, for those interested.)
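
To be concrete, the script I'm asking for is nothing smarter than this (paths and flags invented for the example):

    #!/bin/sh -e
    # build.sh - the literal commands that build this on the author's machine
    cc -O2 -Iinclude -c src/util.c -o util.o
    cc -O2 -Iinclude -c src/main.c -o main.o
    cc -o myapp main.o util.o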

Of course it's extremely unlikely that your shell script will work on my system without modification. There's probably machine-specific pathnames in there for a start. We might not even use the same shell! It's basically pseudocode. But if I'm faced with a straight list of imperative shell commands that doesn't work, and a program of some sort with its own idiosyncratic syntax and logic and a hundred-page manual and the requirement for me to install something - which also doesn't "just work" - well, as long as you know how to call your compilers and linkers and so on - which you should - the former is going to be easier to tweak into submission, to get that first successful build. After all, if I need much more than that I'll probably just recreate the build in my favourite system anyway.


Thank you for making me feel a little bit vindicated. This is precisely how I build my current side project, I essentially mask all of the build/runtime options/tweaking behind a shell script and call all of it through that. ('./run remake', for example, or './run with valgrind' if you're in a debugging mood)

For me, the makefile itself wasn't the problem. I've been rather aggressive about keeping it as pretty much just a dependency enumeration with flag lists and (arrogantly) it is rather clean, but the runtime flags and the wrapping I need around the executable pushed me to the script.

(I honestly worried that this was sloppy, since it does exactly what it looks like: mask really ugly complexity behind a shiny frontend, which always makes me wonder if that complexity wasn't undue. But it does give the advantage that your last paragraph mentions, that it gives a more modular pseudocode of the various components of building/running.)


With the theme of ageism and NIH being brought up a lot lately, I'm glad to see that the neck-bearded unix philosophy (tm) of composition and single purpose are winning over these complicated object oriented frameworks.


Exactly my thoughts too, after reading the ageism article. An "old guy" (like me) would not even think twice and would just put "make" to great use, as he always has and with great success, without the time-consuming need of learning a possibly "unpolished" tool that does not have all the bells and whistles. This is very similar to the NoSQL trap, and countless other technology short-cuts you have to run into before fully understanding why they might be a trap for your org. And it's the same reason why we old guys are sometimes disliked - for being critical about all these shiny, new "reinventing the wheel" tools. A tool that is not a decade old in many cases simply isn't battle-proof. The problem with us "old ones" maybe starts if we make this a rule rather than a principle of caution, though...


Why not drop the "neck-bearded" bit? Evi Nemeth had more Unix chops than 99.86% of the people on HN and no neck beard.


Neck-Beard is not about the beard, it's a state of mind.


Your 'inner' neck beard, as it were.


Yes, but that sounds even worse.


Nobody was talking any shit on women.


I'm not sure I would go as far as calling Grunt a complicated OO framework, but the first thing that crossed my mind when looking at Grunt was 'why not use make?' It may not be perfect, but at least you get incremental builds.


Incremental builds are possible with Grunt using grunt-newer: https://github.com/tschaub/grunt-newer


The problem with make is not that it's bad, it's that it's only really good at doing two things:

1) mapping a source pattern to an output pattern

2) managing dependencies between rules

To be fair it's good at those, and often the sorts of things you can do with a rule are quite complex (being basically shell scripts).

However, the problem is that 1) it's an obscure DSL and 2) it is really rubbish at doing more complicated things.

For example, grunt-contrib-clean lets you delete any files that match a regex while leaving alone any files that match a different regex. Grunt also has a built-in templating language that can be used to expand configuration files from a submodule into local build scripts without copying the entire gruntfile. grunt-open launches a browser to a dev url cross-platform. The list goes on and on and on.

Make is terribly terribly bad at complex tasks like this, that's the problem.

You can write a custom shell-script / ruby-script / python-script for these tasks, but why would you? Someone else already has. Don't repeat all the things every time with your own code.

If all you need to do is map .c to .o, or .scss to .css and .coffee to .js, use make, totally. It's good at that.

Otherwise, stay away.
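
For reference, that entire "good case" is a handful of pattern rules. A sketch, with the tool invocations assumed (and recipe lines need a literal tab):

    %.o: %.c
        $(CC) $(CFLAGS) -c $< -o $@

    %.css: %.scss
        sass $< $@

    %.js: %.coffee
        coffee -c $<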


There are other benefits to make. First, you'll need to ask users to install grunt, whereas make is standard and is just there (except for Windows, probably). Second, make provides a language that is optimised to express the information it needs, whereas grunt uses a JSON file. It gave me headaches when I needed to edit grunt config at my last job, and I haven't touched grunt since. Third, all the tasks you've specified can be done with standard UNIX utilities, whereas with grunt you depend on third-party libraries. Also, I do not think that any of those tasks are complex.

I'd rather rewrite a 4-LOC shell script in my new project, instead of depending on a build tool that depends on a non-standard, infant runtime itself, and also depends on third party libraries for deleting files.
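
E.g. the grunt-contrib-clean example from upthread is one line of POSIX find (the patterns are invented):

    # delete *.tmp under build/, leaving keep-* files alone
    find build -name '*.tmp' ! -name 'keep-*' -exec rm -f {} +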


I was going to say something in response to the article about dependencies from a positive standpoint for grunt, but I think it makes sense to put it here.

Rather than having a unix development environment for the browser platform, thanks to lighttable, grunt, and others, it's possible to use node/web-browsers as a development environment as well. I guess node isn't as venerable as unix or even the JVM for that matter, but is "infant" really a fair characterization? I've heard that node can actually do some types of string processing faster than the JVM.

I totally agree on starting with shell scripts, though. I tend to put short scripts all throughout repos, even if they just run a single command with a few arguments. Sometimes they grow into longer scripts, and sometimes they get changed to run different programs (e.g. grunt<->make), but having a few consistent names for tasks like build, run, test, and deploy (.sh) across projects and languages goes a long way to cut back on cognitive load.


> ... but is "infant" really a fair characterization?

The point I wanted to make was that one needs to install node to run grunt; I don't believe there's any OS that distributes node in the base distribution, whereas make and sh are part of the POSIX standard, and perl/python are available by default on most OS's.

Also, node is indeed young; don't consider "infant" a bad characterisation, I do enjoy playing with node. My plea is that one should not need to install some software to run the build scripts in a project they download.


    grunt uses a JSON file
Actually grunt uses a JavaScript file you can drop node code into (not a stupid DSL, an actual programming language no less).

    I'd rather rewrite a 4-LOC shell script in my new project...
Anyway, if you want to throw away your precious developer hours writing yourself a new makefile build system for every project and throw away the nice principles of code reuse, you can.

...just maybe don't tell who ever is paying you that you're doing it that way.

(or wait, you can reuse code with make can't you? It's called autoconf or something like that...)


I'll offer my own input here, goaljobs[1]. (By the way, I don't recommend that people use goaljobs unless you're prepared for a lot of assembly and are interested in understanding what it does -- it's not easy to use at all.)

You can break complex tasks like "Has my software been delivered through the app store?" down into goals that have to be fulfilled by carrying out (recursively) many layers of rules, like "did it pass human evaluation?".

It's a generalization of make / build systems.

[1] http://people.redhat.com/~rjones/goaljobs/


I'm glad to see the resurgence in make's popularity among front-end developers. It really is a great tool for not just building apps, but generating files that depend on other files in a declarative way. I manage my website with just make, pandoc, and rsync; and I manage my various ssh configurations with make and m4 (ssh config doesn't have an include directive!). A while ago I wrote an article to help out some front-end dev friends get acquainted with make, maybe someone else will find it useful: http://justinpoliey.com/articles/make-for-front-end-developm...
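
For the curious, the core of a setup like that is only a few rules. A sketch, assuming GNU make, with file and host names invented (recipe lines need a literal tab):

    PAGES := $(patsubst %.md,%.html,$(wildcard *.md))

    all: $(PAGES)

    %.html: %.md
        pandoc -s -o $@ $<

    deploy: all
        rsync -avz *.html example.com:/var/www/site/

    .PHONY: all deploy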


It's also very handy for anything you write in LaTeX: add figures etc. as dependencies, make targets that run R or gnuplot to produce the figures in the right format, etc.

make is great for a lot of build tasks.
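
e.g. a sketch, where the gnuplot script is assumed to set its own terminal and output file:

    paper.pdf: paper.tex results.pdf
        pdflatex paper.tex

    results.pdf: results.gp data.csv
        gnuplot results.gp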


redo[1] is almost never mentioned in these discussions. Has anyone tried it in anything significant? It looks good and is a djb design.

[1] https://github.com/apenwarr/redo


I've (ab)used apenwarr's redo for a couple big data processing projects, with mixed results.

One was a news recommendation engine. We pulled down and parsed RSS feeds, crawled every new link they referred to, crawled thumbnails for each page, identified and scraped out textual content from pages, ran search indexing on the content, ran NLP analysis, added them to a document corpus, ran classifiers and statistical models, etc.

Every step of the way took some input files and produced an output file. We used programs written in many different languages -- whatever was best for the job.

So a build system was the obvious way to structure all of this, and we needed a build system we could push pretty hard. Our first version used make and quickly ran into some limitations (essentially, we needed more control over the dependency graph than was possible with the static, declarative approach) so we turned to redo, which lets you write build scripts in the language of your choice.

One thing we needed almost immediately was more powerful pattern matching rules than make's % expansion. No problem: invent a pattern syntax and a special mode where every .do script simply announces what patterns it can handle. Collect patterns, iteratively match against what's actually in the filesystem, and then you've got the list of target files you can build. (This already differs from make, which wants you to either specify the targets explicitly up front as "goals," or enumerate their dependencies via a $(shell ...) expansion and then string transform them into a list of targets which are ALSO matched by some pattern rule somewhere...okay you get it, it's make, it's really disgusting.)

Another thing we needed was to say, here's a list of target files that appear in the dependency graph, give me them in topologically sorted order. This allowed us to "compact" datasets as they became fragmented, without disturbing things downstream from them in the dependency graph. Again, this was not difficult with redo once we had some basic infrastructure.

Now, was all of this maintainable, or was it just kind of insane? I think in the end it ended up somewhat insane, and most importantly, it was an unfamiliar kind of insane. The insanity that you encounter in traditional Makefiles is at least well understood. And treatable.

With redo, you can do almost anything with your build. You can sail the seven seas of your dependency graph. It's awesome. It's also terrifying, because there is very little to guide you, and you may very well be in uncharted waters.

But give it a shot anyway. YMMV.
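
For anyone who hasn't seen it: a redo rule is just a shell script named after its target, which is what makes the "write build logic in whatever language" point concrete. A minimal default.o.do for C, following apenwarr redo's $1/$2/$3 convention (a sketch):

    # default.o.do: builds any foo.o from foo.c
    redo-ifchange "$2.c"        # $2 is the target name minus the extension
    gcc -O2 -c -o "$3" "$2.c"   # $3 is the temp file redo atomically renames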


Inspired somewhat by redo, I wrote "credo" as a set of small command-line build tools, so build description files are in the shell language rather than a standalone DSL.

https://github.com/catenate/credo

(I don't like how makefiles have so many features that reimplement what you can do in the shell. I also don't care for big languages with build-tool DSLs--though you could say credo is a build-tool DSL for the shell, like git is a version-control DSL for the shell. With only language directives, no constructs.)

I wrote it in the Inferno shell to take advantage of some nice OS features and its cleaner shell language. One of these days I should port it to bash, so other people might use it.


FWIW, I implemented redo in Go and released it a couple of months back.

https://github.com/gyepisam/redux


Thanks for pointing this out! I've been hoping djb would get around to releasing redo, but this is the next best thing.


I couldn't agree more here. After reading the source code of the Twitter Bootstrap CSS makefile a few years ago I got inspired and wrote my own rewrite of what they had... I was blown away by the gains it gave me. Of note, I bound make to cmd+b in Sublime Text, and I just compile when I see fit.

I've now tweaked my makefiles so they are almost unrecognisable from the Twitter ones: they can run PHP and JS unit tests, fire up PhantomJS to test individual modules, and release minified for production, or not, for debugging purposes. I can't stress how useful it is being able to just add a line and get such powerful support.

I haven't had time to add git hooks yet, but that's the next stage. I plan to set the hooks to run tests and clamp down on poor-quality code (I work with interns quite often... sometimes I'd cry for even average-quality code coming in).

For a story of the production gains I've had: I moved the whole company CSS into a CSS preprocessor and cleaned up all the existing structures to fit the makefile release procedure. It paid back a thousandfold when a rebranding came along, and I had everything done in two days. I was blown away by that alone; I've been involved in so many rebrands over the years that go on for months... not hours.

Do it, and if you're a bit unsure where to start, check the Twitter Bootstrap build file and muck around with what they have done.


They've used Grunt for builds for a while now[0].

It seems most of the twitter-style web developer community who happily replaced `make` with `rake` when developing for Rails simply followed that with `grunt` when they moved to JavaScript.

[0]: https://github.com/twbs/bootstrap/commit/0d33455ef486d0cf06c...


Cool, I've read good things about grunt but haven't had the time to look into it. It's the direct access to the CLI that's the killer for me with make... and to be frank, if I was going to script in a third-party language it would probably be python and not javascript, but to each their own.


Don't rely too much on Grunt; only use Grunt/Gulp when npm scripts don't do the trick. npm scripts have the same direct command invocation that make has, so no need for plugins.


While I'm a big fan of the Unix Philosophy, I've found gulp to be incredibly satisfying as a build tool. I cd into my project directory and run gulp, then all my SCSS/CS gets compiled, watched, and re-compiled on change, with LiveReload pushing style changes straight to the browser; and whenever I make a change to my server (Node.js) code, it gets compiled and run against my tests. Then when I'm ready to push to production, I run gulp build and then push it up :) Oh, and my whole gulpfile is <100 lines, a fair amount of which is whitespace/stylistic. It's incredibly readable and flexible: composing complex supertasks is easy.

I don't have anything against Make, I've never used it; judging by the code samples in the article, however, and contrasting that with my gulpfile (which, it's worth noting, didn't require me to learn a new language/DSL, just a dead-simple API), I feel much more empowered by gulp than make.

Somewhat relevant: I've also found gulp much easier to use and maintain than Grunt.


I like Gulp a lot, but I'd hardly say it's perfect. It's weird, it doesn't really make sense that everything is a stream. For example, why is watching something a stream?

With that being said, it's miles above Grunt.


Yeah, once you commit to any sort of "everything is ______" philosophy, you're bound to find some edge-cases where things get weird. The file-watch in particular does make sense to me, albeit in an odd way: basically, the file just gets polled on an interval, then it gets streamed into whatever tasks you've attached to the watch. Thinking about the act of watching itself as a stream does seem hinky.


The thing I like about Gulp is that all it needs is npm to install all its stuff. With Make you have to worry about different system libraries, etc. Also, with gulp I can leverage other node-based libraries, so for one project, I can get frontend guys set up with a dev server which automatically both proxies API requests to a backend server, and also watches and livereloads files (as they like), with three damn commands - npm install && bower install && gulp dev. Bam. Working dev server running on port 8000 that does all that they need it to.


I would use npm scripts as the first way to execute build steps in a package and only fall back to Gulp when it's not powerful enough https://www.npmjs.org/doc/misc/npm-scripts.html


For simple builds with dependencies, I find makefiles hard to beat. The only thing that has me pulling out my hair every time is the whole tabs vs. spaces thing, especially when on a machine that I haven't pulled my .vimrc onto yet.
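
One escape hatch: GNU make 3.82 and newer lets you swap the tab for a character of your choice via .RECIPEPREFIX:

    .RECIPEPREFIX := >
    all:
    > echo "recipe lines can now start with > instead of a tab"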


Count me in as a fan of makefiles too. Haven't done much with them for frontend stuff but will have to give it a shot. My biggest problem with make is that, kind of like javascript, it's been around for so long it's really hard to find good information on it. At least the core functionality really hasn't changed all that much so even very old information is still quite relevant.


Me too.

I use make together with a markdown compiler and the m4 preprocessor to keep devdocs up to date, both as one huge document where everything is included and as the various sections as stand-alone docs. The markdown version of the section files is almost uncluttered by m4 and html. I link from any external doc to any other via links in the toc in the main doc index.html, to keep everything as simple as possible. It's sweet.


I love make too, but still settled for Gulp (and previously Grunt). Writing a Makefile just isn't for the faint of heart and can be very frustrating for front-end folks.

It's a trade-off between simplicity and convenience.


I just couldn't get comfortable with grunt, with gulp I had no issues.


FYI, this only loaded after disabling ghostery and adblock.


It gets blocked by easylist+easyprivacy in adblock:

  Filter: /logger.js

  Exception: @@||algorithms.rdio.com^


Although make does have a lot of weird quirks, its winning strength is that it is installed virtually everywhere.


That's true, but it's the same argument that justifies Javascript :)

The thing I don't like about make is the poor debuggability when something goes wrong. I have been using make for 15 years and I'm ready for something that gives me more traceability.

A poster above had it right - it's perfect for mapping input files to output, but in between and above that, it stumbles pitifully.


The problem I found when I was trying to port my installer to FreeBSD was that their make doesn't like GNU Makefiles. I'm not sure when they diverged from one another, so having make everywhere doesn't necessarily mean your Makefile will work everywhere.

I liked how security-centric FreeBSD was, but they seem unfriendly to enterprise software. The process to install Oracle Java was painful and EnterpriseDB didn't even bother creating a Postgres installer for it. Maybe the freedom that the BSD license offers is making their environment too stagnant when compared to GNU and Linux.


> The problem I found when I was trying to port my installer to FreeBSD was that their make doesn't like GNU Makefiles. I'm not sure when they diverged from one another, so having make everywhere doesn't necessarily mean your Makefile will work everywhere.

As with many other utilities, you can always run "gmake" for GNU make. If you can get what you need to do done with portable make, by all means do so, but if you need to depend on GNU make, it's widely available.


A cursory google search should have shown that gmake (GNU Make) is widely available and used on FreeBSD.


Yes it is available and I guess I sort of boxed myself in when I said not available. It's not available as a default install ... at least not on the FreeBSD 10 install disk I used.


FreeBSD tries not to install much third-party software with the default install. Everything is available in ports/pkg. The things included in the default install are usually BSD-licensed/BSD versions. Personally I prefer having third-party software disassociated from the core operating system. For me at least, it makes tracking critical updates a lot easier.


There is Premake, which is a Lua-based alternative; it can be configured to do anything you want, and will run on anything.


Premake is pretty nice. Much better than CMake with its weird homegrown macro language.


I think a lot of FE devs aren't fully aware of npm scripts https://www.npmjs.org/doc/misc/npm-scripts.html - it's a handy way of calling commands just like you would in make. Quite a lot of the tasks that people use Grunt/Gulp for could be executed with it.
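
For anyone who hasn't looked: they're just named shell commands in package.json, e.g. (task names and tools invented for the example):

    {
      "scripts": {
        "build": "sass css/main.scss css/main.css",
        "test": "mocha test/"
      }
    }

Then `npm run build` (or `npm test`) runs them with node_modules/.bin on the PATH, so locally installed tools work without a global install.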


I've been using make for front-end builds for a while. Two tools I've found to be indispensable are jq and mustache templates. I made a PHP-based mustache CLI implementation here:

https://gist.github.com/Breton/7556390

This enables you to put a lot of your configuration in a JSON file, such as actual lists of files (which make is actually terrible at), have a decent way of getting JSON data out into make processes, and build files templated out of that JSON configuration, such as SSI files that set whether to load built JS or separate script tags (also generated out of the same JSON config).

So you get the declarative stuff in the declarative JSON format, and the stuff that make is good at, incremental builds, stays in make.
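
e.g. pulling the file list out of the JSON config at Makefile parse time (a sketch; the key names are invented):

    # ask jq for the list of sources kept in build.json
    JS_FILES := $(shell jq -r '.scripts[]' build.json)

    bundle.js: $(JS_FILES) build.json
        cat $(JS_FILES) > $@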


OK, I get the simplicity of using make, but, yikes, creating your own "little programs" that parse AND EDIT (?!?!) code is a simple solution? IMO the author glossed over that part very smoothly without even bringing up the potential pitfalls (bugs in the author's "tiny little programs", invalid html causing the parser to barf, etc etc etc). Really I think you'd have to be pretty stubborn to not see the value of Grunt when you decide that you need to implement an HTML parser as a substitute for if-statements. Lol geeze.


Is anyone familiar with SCons? I stumbled across it the other day when I was playing with gpsd.[^1]

[^1]: ESR lovefest: http://esr.ibiblio.org/?p=3089


I found it to be simple & quick to use, while providing great results.


I last used it 7 years ago. It was nice, but extremely slow, and required more tweaking than it originally seemed. I don't know if either thing has improved in the last 7 years.


I've started using Make to build and run docker containers in the dev environment. It's not the perfect tool, and my Make skills aren't that great either, but it sure gets the job done and enables docker commands to be shared between devs. For example this Makefile (still under development) for PostgreSQL: https://github.com/GlobAllomeTree/docker-postgresql/blob/mas...
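
The general shape of such a Makefile (a sketch, not the linked file; image and container names invented):

    IMAGE = mycompany/postgresql

    build:
        docker build -t $(IMAGE) .

    run: build
        docker run -d --name pg -p 5432:5432 $(IMAGE)

    clean:
        -docker rm -f pg   # leading '-' tells make to ignore errors

    .PHONY: build run clean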


You can't write a blog article about make while comparing it to a bunch of Node.js build tools and not mention the word Windows once. Make is good for you because you're not an open-source project built on a cross-platform development platform. All of your employees use Macs and your servers are Linux. For open-source Node projects make is simply out of the question, because it is janky on Windows and every makefile ever written is full of bashisms.

Next.


Certainly you can. In fact, the OP did exactly that and his article is at the top of HN.

My hope is that we can just carry on with our dev work and Windows will slowly fade away as a platform for developing anything but SharePoint intranets...

[Note: I don't have a neckbeard, but I do run Debian testing.]


But... but... some of us are doing our development on Windows and loathe SharePoint.


You will save so much time developing node if you install Ubuntu with VirtualBox and Vagrant. All are free, and then you can really participate in the OSS world without having to hack everything to work on Windows.


Now this is going to sound weird, but I use Make for my PHP development. Seriously. It's perfect, and is installed basically everywhere I could want to run it, and it's super lightweight but just powerful enough.

For my cross-platform game development I've moved to CMake, as it takes care of some annoying bits for me. Not a massive fan of it though, and am tempted to go back to Make for my C++ game dev stuff. Any recommendations for it?


90% of the time, if a project uses CMake I'll look for an alternative, as I can count the number of times I've had hassle-free compilation with it on both hands. It's a perfectly fine build system that does handle a lot of weird cases, but I've found that, reliability-wise, it's far flakier than standard make.


I'm using https://github.com/rags/pynt right now. It's very simple!

I have used http://www.finalbuilder.com/, a great tool that put me onto the automated-build mantra.


One thing I should note - the difference here is that make enforces working in a Unix-like environment...which cuts out Windows users. I know, most devs will use OS X or their favorite flavor of Linux anyway, but at least personally speaking, there are occasions when I do some development on Windows.


I'm tethered on a mobile connection and all I can see when I try to view this is a blank page. :(


It's not the most robust blog in the world, sorry about that :)


Heh, it's okay. I'll read it when I'm not browsing on a potato! :)


It appears to be a lot of Javascript.


What is the purpose of blending the text with the background and making it barely readable?


For those wanting to check out more about the GNU flavor of Make, there is this little book: http://shop.oreilly.com/product/9780596006105.do


I have seen this idea floating around… personally I would rather write JS than learn another weird file format that I will have to Google every single time I mess with the Makefile, but the idea is decent.


When you are learning that new JS framework, do you not keep going through their docs and googling?

I would say the documentation and google results for a decades-old tool would be more available when you try to do something a bit complex.


> When you are learning that new JS framework, do you not keep going through their docs and googling?

Yes -- and then I proceed to use the thing every day. I wouldn't be adjusting a Makefile very often so I would keep forgetting whatever I "learned" every time I changed it.


Let's say the JS framework is Grunt.

If you are using Grunt everyday, doesn't that mean you are modifying it? Then you would be in the same exact place as with a Makefile.


The point is that the JS syntax of the gruntfile is a syntax I'm using every day. Whereas make's syntax rules are bizarre and used nowhere else (IIRC it makes a distinction between tabs and spaces?)


Here: https://coderwall.com/p/aawcnq - my point is about not having to install dependencies globally, while keeping the benefits of tools like gulp/grunt...


For my own projects, I have found make's alternatives (eg. scons) to be a much superior choice. Make is just too complex for what it is intended to do (at least as used with autoconf, etc.).


It's shit like this, HN. Why you guys make this nonsense end up on the frontpage is honestly beyond me.

So this guy re-implemented the most useful grunt tasks in his own language. Grats. You've wasted time instead of using Grunt. Saving time and not re-implementing things was the point all along. Very few people want to write "small programs" to chain together build tasks. They should be readily available, just work, be continuously updated, and work cross-platform. I don't want to go look into the intricacies of compiling Handlebars templates just to do that shit.

Also, a makefile syntactically looks like garble compared to a well-honed Gulp / Broccoli file.


I spent two days trying to get grunt, nodejs, and ruby working on my PC some months ago, all with the help of seasoned front-end devs. To no avail.

So, well, your argument can be reversed: why try to install a big "let's do it all" machinery which depends on the latest versions of very recent tools, when a simple Makefile, which will run everywhere, would do the task?


That's because, I may presume, you have never (really) used make. For somebody who does, exactly the opposite will be true.


No, read his comment carefully.

Grunt comes with a set of existing (and maintained by the grunt authors) plugins for many common tasks that lets you get away with having someone else do all the complex shell-script-custom-language-scripting you need to do with make for the 'complex logic bits'.

Writing those by hand is a terrible and tedious burden in using make.


Making small shell scripts and the appearance of the syntax is a matter of taste, I guess. I personally quite enjoy knowing exactly what a script does but the appeal of make to me is that I can use it in non-JS projects, too.

I recently had the need to automate a batch conversion of XCF files to PNG files. Writing and publishing a Grunt module just for that seems a bit over the top, and it would end up a wrapper for a few lines of bash anyway.
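
In make the whole job is a file list plus one pattern rule. A sketch - I'm assuming xcf2png's -o flag here, so check your version:

    XCFS := $(wildcard *.xcf)
    PNGS := $(XCFS:.xcf=.png)

    all: $(PNGS)

    %.png: %.xcf
        xcf2png $< -o $@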


You can use Grunt and gulp for any sort of project, not just ones involving JS.


I've only used Grunt briefly so I could be completely wrong, but I was under the impression you still need to create a node module for your task if it doesn't exist? You need to package it up somehow, link it in the gruntfile and so on.

In my use case, I was using a CLI utility called xcf2png. Writing a wrapper module seems like loads more work than a bash one-liner that calls xcf2png, no?


grunt-shell lets you call out to the shell. Very easy to set up:

https://github.com/sindresorhus/grunt-shell


The hipster rage is strong in this thread.


Couldn't agree more


IIRC make has a lot of macros.


More and more I notice everything eventually comes back to Unix and C or looks and acts like it.

And that's a good thing.


This is the end of the tech industry right here. make, really?



