Using Makefile(s) for Go (danishpraka.sh)
137 points by prakashdanish 44 days ago | 101 comments



I use Makefiles for Go projects all the time, but not in the way the article describes. First off, in a pre `go mod` world, if you had dependencies to check before running the build, then a Makefile was the easiest way to manage that. But even in a post `go mod` world, there are good reasons to use one that the article totally overlooks:

* Makefiles impose a topological ordering on build steps. This is the reason to use one instead of a build shell script: independent steps can run in parallel, order is guaranteed by dependency (which is also the clearest way to read build steps), and file freshness becomes trivial to check for each step, something Go projects with multiple subpackages still need.

* Go projects usually need more than .go files to produce an executable. If you run a web server and bundle static pages into your executable, a Makefile is the best way to handle that.

* If you are building for multiple architectures, or want to encode the git tag/branch into the executable, it is better to have the Makefile bake the necessary options into the build step and keep them uniform across builds.

* If you write a Go program and bake it into a Docker image, I find it best to drop the image and container hashes into files so that I can get to them easily for docker exec/attach/rm/rmi commands.
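A minimal sketch of the kind of Makefile these points describe (the target names, paths, and the `go-bindata` tool are illustrative assumptions, not from the comment):

```make
GIT_REF := $(shell git describe --tags --always)

# The binary depends on both Go sources and bundled assets,
# so make rebuilds it only when one of them changes.
bin/server: bindata.go $(shell find . -name '*.go')
	go build -ldflags "-X main.version=$(GIT_REF)" -o $@ ./cmd/server

# Bundle static pages into a Go file; regenerated only when assets change.
bindata.go: $(shell find static -type f)
	go-bindata -o $@ static/...

# Record the image hash in a file for later docker exec/rm/rmi commands.
.image-id: bin/server Dockerfile
	docker build -q . > $@
```

`make -j` can then run independent steps in parallel, and the dependency graph itself documents the build order.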

But there is one bigger reason Makefiles work for our entire team: we standardize on Makefiles as the entrypoint for our builds. We have a polyglot environment at work, so it sometimes gets confusing to figure out how to build a project. By standardizing on running 'make' we are all on the same page. Have a JavaScript project to webpack? Run make and have make call yarn. Have a Python wheel to construct? Run make and have make call python setup.py. Have a Java project that requires a sequence of Maven commands to build? Run make and have the Makefile call Maven.

Is that inefficient? You bet it is. Does it make it easier to sort out what to do to build a project for the first time? Yes it does. Does it make it 100% easier for our CI/CD framework to work with multiple languages and scan for the necessary compilers and dependencies? Heck yeah.

[edited for lousy formatting]


That was an excellent comment!

I use make for almost all my projects (regardless of language) and I have a system where "make init" sets up the environment (install packages, set up containers, and so on) and "make run" runs it and "make test" tests it.

Now I can come back to projects from 5-10 years ago and get them running with minimal effort, since all the magic is in the makefile and not in my forgetful brain.
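A sketch of that convention (the recipes here are placeholders; a real project would substitute its own setup, run, and test commands):

```make
.PHONY: init run test

# One-time environment setup: packages, containers, and so on.
init:
	docker-compose up -d
	npm install

run:
	npm start

test:
	npm test
```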


Do you have a tutorial or book you'd recommend for pursuing this kind of workflow? I'm definitely interested, as I dabble in more languages and am beginning to struggle with some of these environmental details.


The GNU make documentation is the best introduction to make I know of, and a stellar example of technical writing.

https://www.gnu.org/software/make/manual/make.html


Managing Projects with GNU Make from O'Reilly is a good start. Make is much less mystifying once you get through all of its quirks. Make is not great in many ways (significant tabs are annoying, for example), but it's no SBT.


Also, Make exists in pretty much any Unix-based environment. Any alternative requires installing something first.

Although, I work in a team where a lot of devs are on Windows, and they complain about it.


I'm pretty sure macOS and most of the headless Linux distributions require you to install it (from a package manager or similar). It's not hard, but usually the alternatives aren't hard to install either.


I've never had that issue. Make has existed on all Mac systems I've worked on. Perhaps it got pulled in via other tools, but I recently ran into an issue with Make on my Mac, and only because the out-of-the-box version is a 3.x from 2006.


I might be mistaken about macOS, but it definitely doesn't exist on CentOS, Ubuntu, Debian, or Alpine--at least not their headless/server variants.


You can run into issues when BSD Make is provided but GNU Make is needed. Not hard to work through, though.


> Although, I work in a team where a lot of devs are on Windows, and they complain about it.

Interesting. Is WSL somehow not up to running make compatibly?


What do Windows devs prefer instead of Make?


From my experience, clicking the "build" button (or using the equivalent keyboard shortcut) in Visual Studio.


Pain.


> This is the reason you use it instead of build shell scripts: it allows build steps to run in parallel, it guarantees order by dependency which is the best way to read build steps, and it makes file freshness an easy element to check for a build step

This is indeed true way beyond Go. Alternatives/replacements to make (rake, scons, bespoke shell scripts, whatever) make this anywhere from painfully non-obvious to downright impossible.

For all its limitations and reputation for complexity, a Makefile can achieve a form of simplicity here: it renders all of this very simple, outrageously self-documenting, and language-independent.


I've seen a lot of developers, especially developers with C backgrounds, reach for Makefiles when approaching Go development, and I think it's a harmful practice. So, I'd like to offer a respectful but firm rebuttal to this article. :)

I dislike using make(1) with Go for two reasons.

The first is that make was developed for building C projects, and therefore is oriented around that task. Building C projects is a lot different than building Go projects, and it involves stitching together a lot of pieces, with plenty of intermediate results.

make(1) has first class support for intermediate results, which are expressed as targets.

If you look at the article, the author has to use a workaround just to avoid this core feature of make(1).

The second reason I dislike using make(1) for Go projects is that it harms portability.

A Go project should only require the Go compiler to build successfully. Go projects that need make(1) to build will not work out of the box for Windows users, even though Go is fully supported on Windows. For me, this puts Makefiles into the "nonstarter" category, even though I do all of my own development work on Linux. There is just no reason to complicate things for people who don't have make(1) installed.

For code generation and other ancillary tasks, Go includes the 'go generate' facility. This feature was created specifically to free developers from depending on external build tools. (https://blog.golang.org/generate)

For producing several binaries from one project, use several different main packages in directories named what you want each binary to be.
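For example, with a hypothetical layout like this, `go build ./cmd/...` (or `go install ./cmd/...`) produces one binary per directory, each named after its directory:

```
myproject/
  cmd/
    server/main.go    # package main here -> binary named "server"
    worker/main.go    # package main here -> binary named "worker"
```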

Edit: corrected some terminology.


I think there’s a distinction to be drawn here between a couple use cases for Makefiles (specifically for building software):

* Makefiles can act as shortcuts for common existing functionality of the build toolchain

* Makefiles can add new functionality that is not part of the build toolchain

* Makefiles can add new functionality that replicates existing functionality in the build toolchain

An example of the first case is one of the first examples in the article: using `make build` to run `go build`. The second includes things like the later example for `make docker-push`. The third includes things like makefiles that generate intermediate files or other things that `go generate` could do.

Only the third can really meaningfully harm portability, but in my experience it's the least common usage of `make`. A Makefile that wraps `go generate && go build` into `make build` seems fully outside the scope of the portability concern, since a user without Make could just run the same commands themselves. Likewise, a Makefile that adds `make release` to upload the build artifact to GitHub Releases or similar isn't replacing something the go toolchain could do, so it also doesn't affect portability. The user without Make couldn't have used docker-push anyway, since the go compiler doesn't support pushing release assets.
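A wrapper of the first kind is nothing more than (a hypothetical sketch):

```make
.PHONY: build
build:
	go generate ./...
	go build ./...
```

A user without Make loses nothing: the two commands inside are exactly what they would run by hand.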


I use make for the first case a lot. If there was a tool for running bash functions from a predefined file just as easy and ubiquitous as make, I would switch in a heartbeat.


Put "$@" at the bottom of the file. Then type ./filename function.

You can extend that to a fancier function dispatcher, argument checker, etc, if you want.
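A minimal sketch of that dispatcher pattern (the task names and bodies are placeholders):

```shell
#!/bin/sh
# tasks.sh: each shell function is a task; invoke as "./tasks.sh build".

build() {
    echo "building the project..."
}

deploy() {
    echo "deploying..."
}

# Dispatch: run the function named by the first command-line argument.
"$@"
```

Called with no arguments it is a no-op; `./tasks.sh build` runs the `build` function.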


While technically simple to do, that misses the point.

That is neither ubiquitous nor, as a result, trivial for a newcomer to understand.

If I clone a repo and see a makefile, I know what to do.

If I clone a repo and see './hack.sh' or './runme' or './do' or whatever you chose to call it, I have no clue whether I should invoke it or not.

There are few alternatives to make that have the same level of mindshare, and thus the same ease of use for a given newcomer to grok what to do.

Most alternatives are language specific (e.g. "rake" in rails did a good job of building programmer expectations that you use 'rake ...' to do various common tasks).


If you name it "configure" then most people will run it; you can put the instructions there.


You make it sound like `go install` and `go test` are the only things you're ever going to run in a Go repository. This is blatantly untrue. For example, these are the invocations for the test suite for one of my Go programs:

https://github.com/sapcc/limes/blob/364317fa9a25065bcf9384c8...

Why should I have to enter all of this manually every single time?

(And before you argue that gofmt, golint and go vet run in the editor if you've set it up properly: That's true, and that's how I have my editor set up. That part of the test is to catch the external contributors that don't.)
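The kind of multi-step test target being described might look like this (a sketch; the exact tool list is an assumption, not taken from the linked repo):

```make
.PHONY: check
check:
	@test -z "$$(gofmt -l .)" || { echo "files need gofmt:"; gofmt -l .; exit 1; }
	go vet ./...
	golint ./...
	go test ./...
```

One `make check` then runs the whole gauntlet, including for contributors whose editors aren't set up.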

> The second reason I dislike using make(1) for Go projects is that it harms portability. A Go project should only require the Go compiler to build successfully.

For many of my projects, a Makefile is the main reason the repo works with `go get` at all. I use `make`, which prepares all the generated files and non-Go artifacts (typically bundled into `bindata.go`), so that I can commit them to the repo. Then when a user comes along, they can `go get` the application because all the bespoke compilation steps have already been done by me via my Makefile. An example of this: https://github.com/majewsky/alltag/blob/df161b55fa4c7eba0abe...


>A Go project should only require the Go compiler to build successfully.

But they don't. The go compiler doesn't support yarn, npm, protobuf, OpenAPI generators, doc generators like md2man, go-bindata-assetfs, gox, or anything else you need to complete the code generation done in modern Go applications.

So how do you orchestrate this? People use Makefiles, bash scripts, go scripts and everything in between and combined. It gives you a plethora of bewildering and confusing build options which can't be solved with `go build`, and neither with `make` in a straightforward fashion. Add `go get -u ... && go mod vendor` with some `npm install` in the Makefile, along with some overriding and/or ignoring of `$GOFLAGS`, `$LDFLAGS` and `$CGO_LDFLAGS`, and you've got yourself an ecosystem hostile to packaging and compilation.

A go project can't use `go build` by itself - but it can't really use the Makefile either as people overengineer the process.

But let me stress this. Always use plain Makefiles over any other methods. It's there and has been used for decades for a reason.


I am generally able to keep things so that during development, "go build" works. This is great for quick turnarounds during dev and early testing.

Since I consider it necessary for production executables to explain where they came from, and I therefore embed the Git commit hash and other such information into the executable, it is simply a non-starter to ship my production executables coming from a bare "go build". So I have a shell script-based release process for all my projects that handles all that. In the spirit of Go, it's actually something I've been copy/pasting from project to project, because it always turns out that each one deviates so far from the "base" for its own individual reasons that there's hardly any reason to try to extract out any sort of "base" script. Since my Go projects are generally small-ish (I don't necessarily do "microservices" but I don't do massive monolithic exes, not because I'm super-awesome but just due to the domain I'm working in), make doesn't bring a whole lot of value since the entire final compile for me is under 5 seconds. YMMV. This script also handles tagging in a coherent way and some other basic software engineering maintenance tasks.

I really recommend this approach, and if necessary, doing the work necessary to maintain the ability to quickly do just a "go build". It helps "go test" keep working properly too since the rules for having "go build" work are pretty much the same as having "go test" work.

(In fact, as appropriate, I recommend it out of the context of Go, too. It just isn't always as easy. But prioritize keeping that dev turnaround down. If you're sitting there staring at a build process, use that time to think about how you can cut it down. It isn't just about the raw temporal efficiency... it's about your human brain and the way it stays motivated. The time loss of a one minute build is utterly insignificant next to the slowly-drained motivation and enjoyment the one minute build costs you.)


Let me stress that packaging software and deploying software are two different concerns. Your script might work wonders for deployments and shipping to your infrastructure, but it might be completely broken if anyone wants to package up the code and redistribute the software in a Linux distribution.


Absolutely it would be. But I'd be in a lot of trouble if the software in question showed up in a Linux distro. :)


I went through a phase of using Grunt, Gulp, npm scripts, etc. for web projects.

Recently, after reading this article on make[1], I decided to give it a try and I've been pleasantly surprised with how much easier using make has been vs. some of the FUD I've read about it.

That's not to say there aren't some issues, but that's true of anything that's been around since the '70s, including Unix itself.

But that also means it can address almost any situation where files need to be generated and there are dependencies involved, including what I'm doing with generating a static website, compiling and minifying CSS files (or not minifying but including a sourcemap if it's a dev build) and deploying to a production or staging server.
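As a sketch of that CSS piece (the paths and the `cleancss` minifier are assumptions):

```make
CSS_SRC := $(wildcard css/*.css)
CSS_MIN := $(patsubst css/%.css,public/%.min.css,$(CSS_SRC))

.PHONY: site
site: $(CSS_MIN)

# Pattern rule: only stylesheets whose source changed get rebuilt.
public/%.min.css: css/%.css
	cleancss -o $@ $<
```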

I don't love the syntax but it’s not that bad once you get used to it. Actually, I prefer it to the seemingly endless nesting of JavaScript objects in a Grunt or Gulp configuration file. My next step is to take advantage of Vim’s built-in support for a make-based workflow.

[1]: https://www.olioapps.com/blog/the-lost-art-of-the-makefile/


I agree with your point, but make solves the problem rather poorly. Besides the poor UX, it only knows the thing it built most recently, as opposed to maintaining a cache of things it has _ever_ built. And since it doesn't have a cache, it definitely doesn't have a distributed cache, although you could conceivably try to shoehorn NFS or something similar. Lastly, it's not reproducible. It's not going to fail a build if you accidentally let it depend on a file that isn't a formally specified dependency. These are all important concerns, especially for a CI environment.

Something like Bazel theoretically solves this problem, but it's poorly documented and non-trivial to use or operate. Other Bazel/Blaze derivatives are worse with respect to usage/operation/correctness. Nix improves on correctness but is even harder to use/operate. There's a lot of room for improvement in this space.


I agree with the observation. But the alternatives are grossly over-engineered or a case study in NIH. I'm sticking with Makefiles until someone can present something better.

>Something like Bazel theoretically solves this problem, but it's poorly documented and non-trivial to use or operate. Other Bazel/Blaze derivatives are worse with respect to usage/operation/correctness.

I have night terrors from listening to two of our packagers fighting against bazel, tensorflow and 10 hour compile times.

>Nix improves on correctness but is even harder to use/operate. There's a lot of room for improvement in this space.

I don't think Nix improves anything when you are stuck writing a weird JavaScript derivative. This comes from a packager writing bash for a living.


Maybe I'm unusual, but I avoid Java-based tools (like Bazel) because my Java environment often seems to be broken for any given piece of Java software, for one reason or another, and I don't want to add "make sure my Java environment is OK for all the Java tools I'm using" to the getting-started steps for any non-Java projects I'm on.

And yes, I mean the JVM, not the Java build tools. It's one of the reasons I go "ugh" when I realize I'm gonna have to run something written for the JVM. There's a decent chance I'll lose time configuring it to get it to work.


> I agree with the observation. But the alternatives are grossly over-engineered or a case study in NIH. I'm sticking with Makefiles until someone can present something better.

I agree that they are complex beasts, but that complexity is incidental, not essential (some might argue "a lack of engineering" rather than "overengineered"). The documentation for these tools is also pretty atrocious on average. However, I don't think they're NIH insofar as their direct ancestor (Blaze) was never publicly available and Bazel didn't exist when those original Googler pilgrims brought their build-system ideas to Facebook, Twitter, Foursquare, etc. But nevertheless, there are half a dozen shitty tools instead of one decent tool. Worse, they're pretty much all designed for use in large organizations' monorepos--organizations who can employ people who are specialists in operating/maintaining these tools.

> I don't think Nix improves anything when you are stuck writing a weird javascript derivative. This comes from a packager writing bash for a living.

The improvements are certainly not uniformly distributed, nor are they sufficient to really justify its mainstream use, IMHO. :)


>It's there and has been used for decades for a reason.

I have always had a problem with statements like this when the reasons aren't given.


Either the parent edited their comment or you haven't read it; you're quoting the only sentence that doesn't have a reason in it.


None of those reasons explain why a Makefile is any better than a scripting language.


Here are three reasons:

- Makefiles are a de-facto standard. People know them and those who don't can read how they work in 2000 tutorials. They shouldn't have to read your (and everybody's) custom (and different) script to build a project they've downloaded.

- Makefiles are specialised to the tasks of building, running tests, etc. A scripting language is general purpose. As such, it encourages adding all kinds of crap, from overcomplicated steps, to security issues.

- A Makefile just needs make, which is part of the core set for any distro and works fine on Mac and Windows (WSL or elsewhere) as well. Users shouldn't have to install a scripting language (or even a specific version of one) just to build a project.


Make is better than de facto; POSIX specifies it.


Thank you. I had to reread the comment two times to make sure I didn't miss something. Glad to see I'm not the only one.


"For code generation and other ancillary tasks, Go includes the 'go generate' facility. This feature was created specifically to free developers from depending on external build tools. (https://blog.golang.org/generate)"

Please please please don't use go generate! While I respect your position, go generate is the worst and I hope they eventually deprecate it in future go versions. We tried go generate in some of our code and it went very badly:

* go generate is placed in a comment line. Comments should never be executable, they should be used for explanation. If I am trying to trace execution, I shouldn't be forced to scan through comments looking for side effects.

* from the go generate man page: "Within a package, generate processes the source files in a package in file name order, one at a time." You can't order your generate commands in a way you want, you have to order them using file order, or keep all of your go generate lines in the same file, which defeats the purpose of go generate.

* go generate will run every time, regardless of file freshness. So if you need to run protoc or some other protocol buffer compiler, you have to regenerate every single time whether it's needed or not, which makes the build way slower.

* What are the dependencies of this project? If I use go generate, I have to run some clumsy grep command to (hopefully) find all of the go generate comments in the package.

Sorry, go generate is to golang what COMEFROM is to INTERCAL. Please avoid if you can help it. If that means a shell script or heaven forbid a makefile, so be it.
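For contrast, the freshness problem in the third bullet is exactly what a two-line make rule solves (the file names are illustrative):

```make
# protoc reruns only when the .proto source is newer than the generated file.
api/api.pb.go: api/api.proto
	protoc --go_out=. api/api.proto
```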


Go build constraints (build tags) also go in the comments and I think it works well enough. The arguments against are similar to those against struct tags. Without a properly defined pragma paradigm in Go comments at the top of a file work Well Enough™ IMO.

I've also never seen anyone call out to protoc from a go generate directive. Sure, you can do that, but it hasn't been common in the Go shops I've worked at.

> You can't order your generate commands in a way you want, you have to order them using file order, or keep all of your go generate lines in the same file, which defeats the purpose of go generate.

I've never had a problem with confining generated code to a single file. Do you have more details on why this is a problem? Given that all the files in a package are "flattened" I don't see why this causes a problem...


Tags at least affect the file they're in and no other. Generate lives in one file and does stuff elsewhere.

If I want to know whether a file is built, looking inside is a reasonable thing. If I want to know where generated code comes from, where do I look? How do I know?


I agree if you end up using Make like it's used in C projects: compiling intermediate objects, linking them, etc. In that case the Go compiler should suffice.

I almost exclusively use Make for projects in any language nowadays as workflow automation; this includes Go, Python, Terraform, and Docker builds.

In this way Make is an indispensable tool for me. For me it's portable where it matters (macOS, Linux, WSL), it's ubiquitous, it has a stable API, and its behaviour is well known to me. Sometimes I have to work around some shortcomings of Make, e.g. things that don't produce a file as a result, where you `touch` a fake artifact file. But this is a minor annoyance compared to what Make brings in terms of how simply and declaratively I can automate my workflows.
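That workaround looks roughly like this (the image name and prerequisites are placeholders):

```make
# A step with no natural output file gets a stamp file so make can track freshness.
.docker-build: Dockerfile $(wildcard *.go)
	docker build -t myapp .
	touch $@
```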


IMHE, a language that fights against 'make' is generally poorly designed in that regard. Typically its functionality gets replaced/reinvented by a bespoke and buggy behemoth, which becomes yet one more thing to learn.


Make is available everywhere that matters, and is a simple declarative way to encompass build actions. What are the alternatives?

Bash? Not declarative, and requires lots more code.

Some go rewrite of Make? Not universal, possibly not maintained in the future.

Rake? Ugh, Ruby.

I strongly believe that make is the least worst way to build go projects, but please change my mind by suggesting some alternatives, not by complaining about the shortcomings of make.


This is patently untrue. Make is not available by default on Windows, which - whether you like it or not - matters as a platform for a large number of developers.

_Fortunately_ some CI images (certainly GitHub Actions and Azure DevOps) install GNU Make on their windows images by default, but it absolutely cannot be assumed for average developers.


For the sake of this argument, windows support doesn't really matter.

If you're forced to use Windows for a job- that sucks. You're probably used to doing lots of silly things just to get a reasonable development environment.

It's like saying Make is a bad system because it doesn't support right-to-left languages like Hebrew or Arabic.


While I have in the past shared this view (and vocalised it at Microsoft events...), I’m not so sure that dismissing the majority operating system is such a good idea if you want a project to gain traction.


"Available" and "available by default" are two different things, and GP didn't claim the latter.


Well, by default, nothing is available on Windows.


> Bash? Not declarative, and requires lots more code.

Most uses of Make outside of C are not declarative either. They tend to just be full of phony targets and would be better served by a "$@" Bash script.


> The first is that make was developed for building C projects

This is sort of a misconception. The C compiler was developed for building C projects. Make exists because those projects had to build other stuff and needed a way to stitch the files together. Make's only built-in support for "C" amounts to some default rules for building .o files out of .c files.

If all you want from your build system is to compile a big unified blob of source in a single language into some kind of output file (like the examples you cite) you don't need make, just use whatever it is that your local language provides.

When you have requirements that go beyond that, where you have programs (often themselves built locally, and often in variant languages or runtimes) generating custom intermediates and need to track that madness, that's when you need a more complicated build manager than your compiler provides.

And that's when you start to understand why, despite four decades now of attempts to replace it, some of us still reach for make.


And when you distribute the project, how do you "document" all possible build / packaging / release / test options? A shell script? A readme?

I look at things like Jaeger and all I see is a Makefile with all possible operations for that project neatly placed in a single portable, actionable format. If I have no make, sure, I’ll copy paste the command and run manually. But why would I?

edit: spelling


Why do people on this website use "make(1)" instead of "make" in writing? And I know it's in the man pages but what is this number even for?


The (1) in make(1) corresponds to which section of the manual[1] make is in. This is useful in some cases to distinguish between things that might be in multiple sections like printf(1) the user command and printf(3) the C library function. When everyone knows what's being discussed, I think it's mostly people trying to give a shibboleth that they've read the fine manual.

[1] - https://en.wikipedia.org/wiki/Man_page#Manual_sections


Unlike “ls” and “cp”, “make” is a real word, so to help us human readers parse the document’s prose, adding the section number in parentheses gives our brains a quick clue as to what’s going on.


The number is the "section" of the manual:

https://www.kernel.org/doc/man-pages/

My guess is make(1) is to distinguish from make(1p) - http://man7.org/linux/man-pages/man1/make.1p.html


> My guess is make(1) is to distinguish from make(1p) - http://man7.org/linux/man-pages/man1/make.1p.html

Or it's reflexively included just in case: you encounter potentially ambiguous situations often enough that you disambiguate by default, even when there's no ambiguity.


I guess to avoid the confusion with the verb and make the sentence more pleasant to read. I think the more usual way to deal with that type of problem is to use italics.


As the previous replies stated, the "1" is the section of the manual the page you are requesting occurs in.

If you have trouble remembering what the sections are, I recommend running "man man" in the terminal. The section numbers are explained near the beginning.


Irix (IIRC) used to respond to the argumentless command "man" with the supremely snotty "apropos what?", which made me laugh out loud the first time I saw it.


They're showing off their geek creds. "I know what man page sections are" essentially. There's absolutely no practical reason to use it in a comment.


As soon as you have a semi-complex project, makefiles or some other custom scripts are required. Go tools alone won't do. There is this attitude that sees Go as the center of the universe; it is not. When people say "idiomatic", it makes me laugh.


Who cares, really? I use Makefiles for everything from eliminating 8284738 random bash scripts to orchestrating global infrastructure deployments with Terraform in a docker container.

None of the above fits your "correct" view of make. But it works fine and has for years.


I avoid using makefiles, unless I need them :). Yes, plain Go projects which are supposed to produce an executable probably won't need a makefile and I haven't used any for those. But if your project should produce a shared library, it is nice to wrap the build command in a makefile, as it is easier to type "make".

Also, when integrating into larger projects or when other tasks in addition to building the Go project are required, unifying them with makefiles can be helpful.


If you download some random tar file with go code in it, I agree that you should expect to be able to build it with only go installed. "go get" depends on this, and largely works well!

But the day to day act of developing a system that uses go involves more than just building a go binary. Your go program might depend on things like generated protocol buffers. You need some way to regenerate those when you edit the definitions. That then involves having the right version of protoc installed and also having the right version of protoc-gen-go. The go compiler can't help you there. go generate suffers from the same problem; it's not automatic, so you can pass it the wrong dependencies (generation tool flags, version of the generator, etc.).

People are using makefiles as a convenient place to write down all these extra instructions. "What flags do I pass to protoc?" "What flags do I pass to docker build?" Why document it when you can "make protos" or "make container"?
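A typical rule of that sort might look like this (a sketch; the paths are assumptions). It is convenient, but nothing in it pins the protoc or protoc-gen-go versions, which is exactly the drift problem described next:

```make
.PHONY: protos
protos:
	protoc --go_out=. api/*.proto
```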

Unfortunately, make isn't actually good at this. It doesn't version the generation tools (or itself), so you will end up with vastly different results on different machines. The result is things like a 300 line diff to a generated protobuffer because the second engineer to work on that file happened to have protoc 0.7.7 instead of protoc 0.6.42, or they installed protoc-gen-go@master instead of protoc-gen-go@v1.2.3. Make doesn't care. It exited with exit status 0, so it must have worked.

What started as a nice way to write down some instructions for hacking the code has now become a giant mess. Reasonable makefiles can only ever work for one person on one computer at one point in time. At that point, they might as well be a README.md. At least the README can mention the version numbers of the dependencies, and, most importantly, can wish the reader luck.

There are two long-term solutions. One is to only use go. Write a program that reads the protos at runtime. Write a program that runs your TypeScript through a hand-written compiler whose source code lives in your project at runtime. Now you only need "go build". This is... impractical, though. It's a nice ideal, but you'll never get anything done in the real world.

So what you really need is a real build system that captures every dependency, knows about high level tasks ("make foo.proto available to a go program"), and knows every dependency between files like the go compiler does. With such a tool, you can get a working build on every computer with no instructions or manual setup. And since it is carefully written to understand what it's doing, you can get reliable incremental builds. (The full build and your incremental build should have the exact same md5sum of the resulting binary.)

Such a tool does not exist. bazel is close. If you have to build more than just go files, you probably want to look into it. It's crazy. It's a lot of work. Don't do it if you're the only developer on the project. But if you want 10 random people to be able to build a project that's written in more than one language, you have to invest in some sort of tooling. Make is good for a one person team. Make can be scaled to do crazy things poorly (hi, Buildroot!). But it's probably not what you want to be using. If a README isn't good enough, you need a real build system.


It's not just Go, loads of projects use Makefiles as a collection of Bash scripts. I've never really been certain what it actually buys you...


If you're using windows for development you're doing it wrong.


I agree. They claim `make` is simple, but it really isn't. PHONY targets are one example.

Unfortunately I've looked for an alternative and didn't really find anything very good. I eventually settled on a Python 3 script. Python 3 is reasonably nice to use with type annotations. It doesn't require compilation and its speed is fine if you're using it to drive other build systems, rather than as a build system itself. Way more people understand Python than Make, and it is a full programming language so you don't get stuck when you want to do something complicated.

It doesn't have a built-in DAG task system, but I'm sure there are a million libraries for that. I haven't had need of one yet, but a quick search turned up https://pydoit.org/ which looks ok.


If you think Python 3 is a better tool than a Makefile you're entirely missing out on what makes "make" a useful tool.

The entire point is that it's not a general-purpose programming language, so that you're forced to separate the DAG aspect of your build process from any complex logic.

Sure, some things you can't do in a Makefile, and you generally shouldn't try. The Makefile should call some Python / Perl / whatever helper script to accomplish that particular task, separated by a process boundary.

By doing it like that you can seamlessly run your build process in parallel if you haven't screwed up in declaring your dependencies. E.g. some script that needs Python to fetch something from a database for the build will run concurrently with the build of the documentation.
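For example (target names and commands are illustrative): the database-fetch step and the documentation build declare no dependency on each other, so `make -j2` runs them concurrently:

```make
all: docs/manual.html data/seed.json

# Built by a helper script that talks to the database...
data/seed.json: scripts/fetch.py
	python3 scripts/fetch.py > $@

# ...while the documentation builds at the same time under -j2.
docs/manual.html: docs/manual.md
	pandoc -o $@ $<
```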

It also encourages you to structure things in such a way as to have incremental and resumable compilation. E.g. Ctrl+C-ing in the middle of a build in a project with a well-maintained Makefile won't require you to run the whole thing from the beginning; that restart-from-scratch behavior is typical of someone's home-grown "I can do it better" monolithic "./build" shell or python script.


Read the last paragraph of my comment. I am entirely aware of what Make does.


If you've tried `redo`, what issues did you find that put it into the "not good" bucket?

[1] https://github.com/apenwarr/redo


PHONY targets are GNU extensions to make. They are not part of the make specification itself. POSIX make is really quite simple and it allows phony targets by depending on a target that has no dependencies or commands in it.
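That POSIX-portable idiom looks like this: depend on a target (conventionally named FORCE) that has no prerequisites and no commands, so anything depending on it is always considered out of date:

```make
FORCE:

# "clean" is not a real file; depending on FORCE makes it run every
# time, without the GNU-only .PHONY declaration.
clean: FORCE
	rm -f myapp
```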


This... isn't even using the `make` part of Makefiles at all.

If you look at the final example, every [1] rule is marked as `.PHONY`. `make` bundles 2 capabilities: a dependency graph and an out-of-date check to rebuild files. This demonstration uses neither.

The author would be better served with a shell script and a `case` block. The advantages:

- Functions! The `check-environment` rule is really a function call in disguise.

- No 2 phase execution. The author talks about using variables like `APP`, but those are make variables with very different semantics than shell variables (which are also available inside the recipes).

[1] Yes, there's a `check-environment` "rule" that isn't marked, but it likely should be since it isn't building a file target named `check-environment`.


I disagree. Make is more than a build system, it's also an automation tool. It gives you a fairly flexible format for managing different tasks with shared variables and autocompletion and more.

You can do it with a bunch of shell scripts too but I prefer having everything in a single file.


> Make is more than a build system, it's also an automation tool.

I agree with that, but I don't believe that the GP said otherwise. Make is an automation tool, but what it aims to automate is exactly dependency tracking. Other features, like actually performing the tasks, are off-loaded to the shell and other tools.

> It gives you a fairly flexible format for managing different tasks with shared variables and autocompletion and more.

You could as well be describing a shell here, which already has these features.

> You can do it with a bunch of shell scripts too but I prefer having everything in a single file.

What stops you from using a single shell script? Likewise, you have a Makefile delegate tasks to other Makefiles (as the author has done for the build-tokenizer rule).

Anyway, you should of course use them in any manner that suits you, but for a guide on "using Makefiles for Go" I think it's an oversight to ignore the main selling point of make by just using it as you would a switched shell script with no dependency tracking. It just introduces another syntax and new caveats to the problem, adding little value.


I think people kind of talk past each other in these discussions because there isn't a standardised vocabulary. I would call that kind of anaemic Makefile a "task runner" rather than a build system. And yes indeed, a task runner can easily be written as a single POSIX/Bash shellscript that has conditional behaviour based on its first argument. The `case ... in ...` statement in shell is rather nice!


I'm more confused as to why use .PHONY in so many places. Golang builds from .go files whose modification times change when they're written to, same as .c and .cpp files, so make is able to know when the go compiler needs to be called, or not.
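A sketch of that kind of rule (the binary name is illustrative):

```make
GOFILES := $(shell find . -name '*.go' -type f)

# Rebuild only when some .go file is newer than the binary.
myapp: $(GOFILES)
	go build -o $@ .
```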


Make absolutely does not understand all the scenarios for rebuild, and go build has much more complicated logic than that. For one, it's content-based not mtime-based. For another, a command needs rebuild if any of its dependencies change, not just its own source file. Make absolutely does not get that right for Go.


It's really frustrating seeing so many Makefiles that don't _make_ anything.

Make syntax is really odd. I see so many folks go out of their way to deal with quirks of make when they really just need a shell script. You can see this anti-pattern very quickly when you see `.PHONY` targets for everything.

I think make is useful for some aspects of go. GOPATH is becoming less relevant now, but still helpful when you want to have build-time dependencies in $PATH

    $(GOPATH)/bin/some-dependency:
        go get -u ...
I still use make when building artifacts, especially in CI. But as a default, I almost always try to talk folks out of using make for this sort of stuff.


Is there a way to easily have something like make targets in a shell script, without a ton of boilerplate?

> Make syntax is really odd

I don't find it particularly strange, except my biggest peeve - the insistence on tabs!


Here's a named task runner shellscript (as compared to a Makefile full of .PHONY being used as a named task runner).

    #!/bin/sh
    set -e

    case "$1" in
        build)
            ;;
        run)
            ;;
        clean)
            ;;
        *)
            echo "unknown: $1"; exit 2
            ;;
    esac
Someone else ITT hinted it could be done like the following. But the last line is dubious because it will happily run anything on PATH.

    #!/bin/sh
    set -e

    test $# -gt 0

    build() {
        :
    }

    run() {
        :
    }

    clean() {
        :
    }

    "$@"
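One way to keep the function style while avoiding the PATH hazard is to whitelist the task names before dispatching. A sketch (task bodies are placeholders):

```shell
#!/bin/sh
set -e

build() { echo "building"; }
run()   { echo "running"; }
clean() { echo "cleaning"; }

# Default to "build"; dispatch only names from the known task list,
# so "$task" can never resolve to an arbitrary command on PATH.
task="${1:-build}"
case "$task" in
    build|run|clean) "$task" ;;
    *) echo "unknown task: $task" >&2; exit 2 ;;
esac
```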


> (as compared to a Makefile full of .PHONY being used as a named task runner)

Personally, I've never used PHONY, or had a need to.

That said, your bash examples are pretty simple - I especially like the 2nd example, as it would trivially allow having targets that ran other targets, e.g. an "all" target from a makefile:

    all: build push

    build:
        @docker build --tag ${IMG} --tag ${UNSTABLE} .

    rebuild:
        @docker build --no-cache --tag ${IMG} --tag ${UNSTABLE} .

    push:
        @docker push ${NAME}


This Makefile could as well have been a shell script. It doesn't track changes to dependencies even when it's obvious how to do so. For example, the build rule has an obvious dependency (main.go) and an obvious target ($(APP)). Instead of tracking these, which IMO is the primary advantage of using Make, it deliberately destroys the existing build. docker-build always necessarily rebuilds the binary as well.

Presumably, Go has some kind of build cache making such dependency tracking relatively useless anyway, and maybe Docker does too, but if you aren't tracking dependencies and rebuilding only when necessary, why use Make instead of a big switch in a shell script?

Personally I'd only use Make for Go if I introduce some task that takes significant time and isn't already handled by the go toolchain.

Another couple of notes: there are two docker-push rules. The first seems like it was meant to be docker-build. The other is that the docker build rule will tag the build with the HEAD hash, regardless of whether it's building from a clean checkout or a dirty repo.
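The dirty-checkout problem can at least be surfaced in the tag itself. A sketch (the image name is illustrative; the build is echoed rather than run to keep the sketch side-effect free):

```shell
#!/bin/sh
set -e

# "--dirty" appends a marker when the working tree has uncommitted
# changes; fall back to "unknown" outside a git checkout.
GIT_REF="$(git describe --always --dirty 2>/dev/null || echo unknown)"

echo "docker build --tag myapp:${GIT_REF} ."
```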


rm -rf ${APP} is a code smell. If ${APP} is not a directory, -r should not be in this command. At best it is possibly confusing, and at worst if somehow ${APP} accidentally becomes a directory it will just remove it and you will have no idea that it was a directory, whereas just rm -f ${APP} will fail because it can't unlink a directory. Build success is an important factor in a CI/CD pipeline, therefore builds should fail immediately under unexpected behavior.

Also, on .PHONY on a single line:

  But for Makefiles which grow really big, this is not suggested as it
  could lead to ambiguity and unreadability, hence the preferred way is
  to explicitly set phony target right before the rule definition.
If your Makefile grows really big, it's going to become a nightmare to maintain. Either split up your codebase + builds into sub-directories, or figure out some other way to structure your builds so that it's not super complicated to reason about or maintain them.


I find that complexity of a Makefile isn't necessarily a function of its size. Ideally, one should be able to reason about each target individually, specifying its dependencies without consideration for how they are generated, whether they already exist etc. In such an ideal situation, it doesn't matter how large the Makefile is. Maintenance problems IMO happen when you can't trust tasks to fully specify their dependencies or that task commands only generate the target output.

Over all I agree with your argument, though.


I’ve been using Makefiles for Go development basically since I started with the language. It’s really effective for me and makes compilation and, in my case, deployment to AWS Lambdas via CloudFormation commands (also invoked by Make) really simple. It’s also easy to bring someone up to speed with how building and deploying works.


It might make it easier to bring someone up to speed for your project, but he won't learn a damn thing about building other Go projects - I guess that's one of the arguments that the opponents of make, er, make...


There's no standard for building Go projects (beyond "go build" itself). Make is common enough that learning how to use it is a worthwhile use of time for a new Go developer.

I also use a Makefile for my projects, for all the above reasons, but mostly so that I can hand it to a new team member and say "clone the repo, install make if you haven't already, then choose either make docker_init or make localdev_init" and know that they'll have a working installation about 15 minutes later.

Not supporting Windows is a furphy - we're developing a Linux-based web server. There's no point trying to develop this on Windows. Yes, you can do it with Docker, but you'll always be wondering if that error was from your code, or from some misalignment of stuff in your tech stack.


Great article, but I'm not sure it's a good idea to segment your Docker images by environment. Part of Docker's appeal is that you can be sure your staging & production containers are bit-for-bit the same. I use a workflow like this:

* For all commits on all branches, run tests. If tests don't pass, don't push containers to registry.

* For all commits on all branches, build and push a container `{branchname}-{commitsha}` (assuming tests pass).

* Code review, etc.

* Merge pull request to `master` branch (tests will run, and only push a container if they pass).

* Deploy `master-{commitsha}` to staging.

* Do your final testing on staging.

* Deploy the same `master-{commitsha}` to production.

Now you're deploying to production from the master branch, which passed tests, and the container is the same one as you tested on staging.

Plus, you can always deploy your non-master `{branchname}-{commitsha}` images to a separate environment, or to staging, if you need to do a bit of experimenting.
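The naming scheme above is easy to compute in the CI script itself (the variable names here are stand-ins for whatever your CI system provides):

```shell
#!/bin/sh
set -e

# In CI these would come from the environment (checked-out branch and
# commit SHA); the defaults just keep the sketch runnable anywhere.
BRANCH="${BRANCH_NAME:-master}"
SHA="${COMMIT_SHA:-0123abcd}"

IMAGE_TAG="${BRANCH}-${SHA}"
echo "$IMAGE_TAG"
```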


I noticed you didn't mention ".DELETE_ON_ERROR". AFAIK, it's recommended to always use it (according to the GNU make manual: "[...] 'make' will do this if '.DELETE_ON_ERROR' appears as a target. This is almost always what you want 'make' to do, but it is not historical practice; so for compatibility, you must explicitly request it.")
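For anyone unfamiliar, it's a one-line addition (the rule below is just for illustration):

```make
# Without this, a recipe that fails halfway leaves a stale, half-written
# target behind, and the next run considers it up to date.
.DELETE_ON_ERROR:

out.bin: main.go
	go build -o $@ .
```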


Alternatively, you could use Bazel[1], and automatically generate most of your build rules with Gazelle[2].

This would allow you to extend your build system beyond what's available via "go build", while avoiding the well-known pitfalls of Makefiles (config complexity, reproducibility, etc.)

[1] https://github.com/bazelbuild/rules_go

[2] https://github.com/bazelbuild/bazel-gazelle


I tried Bazel for awhile on a bunch of projects and just found it to be more headache than it's worth. I like the promise, but the tool is... meh


I suggest reading “recursive make considered harmful”. It is a wonderful introduction to make, and explains how to avoid a few pitfalls most make users (including this article) run into.

In particular, the targets in the subdirectory makefiles can and should be auto-generated using make itself. There’s no need for the makefiles in the subdirectories (there is also no need to use an external tool to generate them, which is the other mistake people often make).


I think the example discussed in this blog is selling Make short. A more complete example with: protoc(protobuf), Docker, DynamoDB local, go modules (and vendoring), and testing can be found at https://github.com/rynop/abp-sam-twirp/blob/master/Makefile


Nice article. I use makefiles a lot, for almost all projects I work on, both frontend and backend, mainly to have the same commands independently of the framework/platform I'm using. For me it's helpful to just run `make` to build and run both a go project and a react project.

Another thing you can add is:

    .DEFAULT_GOAL := start

    start: fmt swag vet build run

Helps define your default command, so you just need to run `make` and it will run everything inside `start`.

Since most of us use `.env` files for environment variables, you can use something like:

    # this imports everything from the .env file into make
    include .env
    export

And it will inject all of the .env file's variables into the running `make` process.

I also have some other shortcuts (variables):

    GOCMD=go
    GOBUILD=$(GOCMD) build
    GOCLEAN=$(GOCMD) clean
    GOTEST=$(GOCMD) test
    GOFMT=gofmt -w
    GOGET=$(GOCMD) mod download
    GOVET=$(GOCMD) vet
    GOFILES=$(shell find . -name "*.go" -type f)
    BINARY_NAME=my-cool-project
    BINARY_UNIX=$(BINARY_NAME)_prod


Going to throw out my 2 cents here:

1. I don't like multiple makefiles. Icky with lots of duplication and high maintenance. Bad article.

2. When possible I use target expansion to generate targets in the main makefile:

    APPS := app1 app2 app3

    $(APPS:%=build.%)

3. I prefer to use makefile functions rather than reaching for "bash" where possible: https://www.gnu.org/software/make/manual/html_node/Functions...

4. If something is really complicated - extract to bash
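A sketch of how the target expansion in point 2 can drive per-app targets (the directory layout is illustrative):

```make
APPS := app1 app2 app3

# Expands to: build.app1 build.app2 build.app3
build: $(APPS:%=build.%)

# One pattern rule covers every app; $* is the matched app name.
build.%:
	go build -o bin/$* ./cmd/$*
```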

Make has been written in 10K languages and the original is still the best


I don’t mind Makefiles, and usually use them for basic command configuration.

However, when the logic gets even a little complicated, I almost always reach for bash. Everybody knows a little bash, and it's available on almost all systems.


Not related to the contents of the article, but I love the font on that site. I also love the simplicity of the site as well. Nothing takes away from the ability to read. It is so clear and concise.


There really is no perfect build tool, but in my experience, nothing touches invoke http://www.pyinvoke.org for building and automation.

Any project will eventually have build and deployment scripts with non-trivial amounts of logic in them.

The question then becomes whether you want all that complex logic in shell scripts, makefiles, or Python.

For me, it's a no-brainer. I'll take the latter every time.


You don't use makefiles in go! You just take your code, copy go.mod and go.sum into a Docker image, then RUN mod download, then re-copy the rest of the code and run bui...

Shit... Docker is a makefile...



For projects that use tags to turn on/off compilation options, Makefiles and shell scripts make sense.



