
Golang: Don't be afraid of makefiles - sohlich
https://sohlich.github.io/post/go_makefile/
======
jgrahamc
Two things I try to get people to do with Makefiles:

1. Use the := syntax

:= means that a variable is immediately expanded; = means expansion is delayed
until the variable is used, which on a large Makefile has a speed penalty. If
you know that a variable is fully defined (i.e. all the $(...) references in
its value are fixed) at the point where the variable is defined, then use :=.
That's the case here, where the Makefile starts:

    
    
        GOCMD=go
        GOBUILD=$(GOCMD) build
        GOCLEAN=$(GOCMD) clean
        GOTEST=$(GOCMD) test
        GOGET=$(GOCMD) get
        BINARY_NAME=mybinary
        BINARY_UNIX=$(BINARY_NAME)_unix
    

which can be written

    
    
        GOCMD   := go
        GOBUILD := $(GOCMD) build
        GOCLEAN := $(GOCMD) clean
        GOTEST  := $(GOCMD) test
        GOGET   := $(GOCMD) get
    
        BINARY_NAME := mybinary
        BINARY_UNIX := $(BINARY_NAME)_unix
    

2. If you have a one-line recipe then use the `target: prereq ; command`
syntax to avoid tabs. For example,

    
    
        build: 
            $(GOBUILD) -o $(BINARY_NAME) -v
    

is the same as

    
    
        build: ; $(GOBUILD) -o $(BINARY_NAME) -v

~~~
wruza
While it is not so important in simple cases, '=' means you don't depend on
the order of definitions, so you can group variables in a logical order rather
than as a sequence of statements. You can look at := as a variable definition
and = as a semi-function definition. The penalty of delayed-but-repeated
expansion of variables that don't contain shell code is questionable. Also,
with := half of them may not be used by the invoked rule at all, actually
losing performance.

There is also ?=, which only sets a variable if it was not set before, whether
earlier in the Makefile, on the command line, or in the environment (make
inherits the current environment variables, IIRC).
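
For example, ?= lets a Makefile supply a default that the command line or
environment can override (the variable and binary names here are invented for
illustration):

        BUILDFLAGS ?= -v

        build: ; go build $(BUILDFLAGS) -o mybinary

Plain `make build` uses the default `-v`, while `make build BUILDFLAGS=-race`
(or an exported BUILDFLAGS environment variable) replaces it.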

~~~
dom0
> and = as semi-function definition.

Variable binding.

~~~
klodolph
Well, = doesn't really behave like traditional variable binding, since you can
do stuff like:

    
    
        build = cc -c -o $@ $<
        file1.o: file1.c
            $(build)
        file2.o: file2.c
            $(build)
    

(This is meant to illustrate the difference, this isn't a good makefile.)

To reinforce the notion that variables are really functions, you can even
supply arguments when you call them:

    
    
        # reverse is a function
        reverse = $(2) $(1)
    
        # eventually expands to 2 1
        a = $(call reverse,1,2)

------
rgbrenner
There's another good use for Makefiles with go.

GOPATH is one of the dumbest ideas I've seen. Multiple large projects under
the same directory? Who thought that was a good idea? (And yes, I know you can
create subdirectories, and mix all your packages together to create a giant
mess.)

So I separate the projects into their own directories and use a Makefile to
set GOPATH and run the go commands.

Still trying to figure out why golang tried to be innovative here. Why
couldn't they use the cwd like every other compiler I've ever seen?

~~~
leggomylibro
Yeah, that is what turned me off of Golang. It was kind of a "straw that broke
the camel's back" thing, though. Go is just an EXTREMELY opinionated language,
and if you don't like that then you won't like it. The Best Way to do
everything is decided for you ahead of time, which irks me because it isn't
always true. But I can see the frustration it grew from when I use a language
like Ruby, which has a half-dozen nearly-identical ways to accomplish most
simple tasks.

Anyways, you could muck around with symlinks and crap, but come on. If I can't
simply mkdir/vim/compile/run a 'hello world' program, there's probably
something wrong with the language.

~~~
weberc2
> If I can't simply mkdir/vim/compile/run a 'hello world' program, there's
> probably something wrong with the language.

Like this?

`cat $HELLO_WORLD_PROG > /tmp/hello.go && go run /tmp/hello.go`

~~~
leggomylibro
Yeah, like that!

But you know what I mean; multiple scattered workspaces, project folders on
removable storage, etc. That sort of thing should be painless.

~~~
weberc2
I've definitely had some other minor pain points with GOPATH, and I hate when
veteran users of another language trivialize my pain points with their
language, but I think all of the use cases you've described are well within
GOPATH's wheelhouse:

export GOPATH=$HOME/go:/media/USB-DISK:/foo/bar/baz

Now Go can find any projects in any of those workspaces. The only caveat is
the projects have to be in a workspace, which is a directory with a src folder
containing your project. I agree that this is a minor nuisance and I would
design it differently, but Go's tooling is still far easier than pretty much
anyone else's (it's actually what brought me to Go).

~~~
vog
_> but Go's tooling is still far easier than pretty much anyone else's (it's
actually what brought me to Go)._

Really?

Maybe that's true if you compare it to PIP (Python), Gems (Ruby) or NPM
(JavaScript/NodeJS).

But I think that Cargo (Rust) got it almost perfectly right, and
Opam+Ocamlbuild (OCaml) is very close.

~~~
weberc2
Some other languages that make Go's tooling look awesome: C, C++, Java

Cargo is comparable to Go's tooling in my mind (in terms of ease of use; I
think it's better in other categories), but figuring out how to use libraries
(mod vs use) and organize a multi-file project is a notorious pain point with
no analog in Go. That means it's harder to get started with Rust than Go
(language features notwithstanding), which is pretty much what we're
discussing here.

I don't really know what to say about Ocaml. I don't have much experience, but
most of it has been bad. It seems like you're supposed to use Makefiles, and
an Opam file. The more I learned the less I liked, and it seemed to be the
consensus that the build tooling was one of the language's weak points. I will
say that installing packages from Opam was a painless experience. The build
tooling wasn't the only thing that drove me away, but it was right up there
with the poor standard library, ecosystem fragmentation, and low-quality
documentation. Which is a shame because the language features look great.

With Go, all you need to know to get started is "go get".

~~~
ernst_klim
> poor standard library

What's wrong with containers, batteries and core?

> go get

Wow, when I tried Go the package management was utterly broken; even cabal
hell looked better than this. What, no package versions? Downloading directly
from a git repo, are you kidding me?

~~~
weberc2
> What's wrong with containers, batteries and core?

Well, between those three and the actual std lib, there are four standard
libraries, which defeats the purpose of having a _standard_ library.

> Wow, once I've tried go the package management was utterly broken, even
> cabal hell looked better than this. What, no package versions? Downloading
> directly from git repo, are you kidding me?

`go get` isn't a package manager. Go's solution to dependency management is
vendoring, which has its own suite of problems, but my comment was about
getting newbies off the ground, and for that `go get` is suitable.

------
clra
Nice article!

In my opinion, this level of abstraction may be going a little far:

    
    
        GOCMD=go
        GOBUILD=$(GOCMD) build
        GOCLEAN=$(GOCMD) clean
        GOTEST=$(GOCMD) test
        GOGET=$(GOCMD) get
    

It might seem like a good idea to constantize everything, but in practice,
those commands are _never_ going to change.

By just invoking them directly in the targets you'll come out with something
more readable and no less maintainable:

    
    
        build: 
                go build -o $(BINARY_NAME) -v

~~~
kqr
But you may want to add a compiler flag, or change the path of the binary. If
the binary is called in multiple places it does make sense to constantize it.

~~~
clra
Possibly, but by moving flags up to the constants you also run the risk of
obfuscating things further. A little bit of repetition when it comes to
compiler flags (especially in Go where there are relatively few) is a good
trade for the added clarity it conveys.

Invoking a particular binary is valid, but even there, you only need one
constant, not one for every subcommand.

------
kodablah
In my recent cross-platform projects, I just make a build.go script that can
handle multiple commands and then have builders use only that. E.g.

    
    
        go run build.go build release
    

This crosses platforms, is easy to read and contribute to, and is a much
better language than make + CLI utils (or shell scripts). It's not too much to
ask contributors to have "go" on the PATH. A similar approach could be taken
with Python too. For most of this, there really isn't much code reuse needed,
but it's often better to embed it than to rely on external deps. Complex build
script example:
[https://github.com/cretz/doogie/blob/8c53a266f35146a3c0d8f14...](https://github.com/cretz/doogie/blob/8c53a266f35146a3c0d8f14fa22d830141f1497b/src/build.go).

~~~
0xdeadbeefbabe
camlistore does this too, but a Makefile calls "go run make.go"
[https://github.com/camlistore/camlistore/blob/master/Makefil...](https://github.com/camlistore/camlistore/blob/master/Makefile)

------
williamdclt
You're missing half of what makefiles are for: only building what you need
based on dependencies.

I never used Go so the following may not be technically correct, but in
principle:

- The `build` rule should be `build: *.go` so that it builds only if a file
has changed.

- The `run` rule should have the `build` rule as a dependency, so as not to
rebuild if nothing has changed. Plus, it avoids repeating the almost-identical
build command (which is not very DRY).
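
In principle that might look something like this (an untested sketch; as the
reply below notes, the go tool does its own dependency tracking, so take it as
illustration only):

        $(BINARY_NAME): *.go
                go build -o $(BINARY_NAME) -v

        run: $(BINARY_NAME)
                ./$(BINARY_NAME)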

Plus, I'm not convinced of the usefulness of having a $GOCMD variable (or
$GOGET, $GOBUILD...). But again, never used go, no idea if the tooling is as
tweakable as C (for which this kind of variable is used a lot in makefiles)

~~~
jgrahamc
You really don't want to do that. I know what you mean but the go command
handles the dependency stuff. Trying to shoe horn that into a Makefile is a
disaster.

~~~
zeveb
It's doable though: 'go list -f "{{.Stale}}" $PACKAGE' can be used to
determine if a package needs to be rebuilt; this can be used in a make rule
which touches a stamp file, and then the stamp file may be relied upon for
other make rules.
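
A rough sketch of that stamp-file idea (the package path, binary name, and
downstream rule are invented for illustration):

        # phony, so the staleness check always runs; the stamp is only
        # touched when 'go list' reports the package as stale
        .PHONY: build
        build:
                test "$$(go list -f '{{.Stale}}' ./mypkg)" = "false" || \
                        { go install ./mypkg && touch build.stamp; }

        # other rules can then depend on the stamp file as usual
        release.tar: build.stamp
                tar -cf release.tar mybinary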

My quibble with the example Makefile as written is that it doesn't utilise 'go
install,' which I much prefer to 'go build.'

------
chimeracoder
There are a lot of problems with this approach and a few errors in the post,
but I'm just going to highlight the most significant one:

> If the project uses CI/CD or just for consistency, it is good to keep the
> list of dependencies used in packages. This is done by the “deps” task,
> which should get all the necessary dependencies by “go get” command.

This is a very strong anti-pattern. Everyone should be using a dedicated
version management tool (ideally dep[0], but there are others), which handles
this seamlessly and behind the scenes. Introducing another place where
dependencies are tracked and can be installed is a recipe for problems down
the line.

[0] [https://github.com/golang/dep/](https://github.com/golang/dep/)

------
mfer
Makefiles are great for *nix systems. But Go is a first-class language on
Windows. For open source projects that want cross-platform contributors...
how can they use Makefiles effectively?

~~~
asveikau
Install gnu make. I personally typically go with msys since it is slightly
less intrusive and more windows friendly than cygwin. There is also the new
Linux subsystem...

~~~
wtetzner
If you build under the new Linux subsystem, you'll get a Linux binary. That's
not a good solution if you want to build a Windows binary.

~~~
asveikau
Note I didn't say anything about what compiler will be used or commands
actually run. Make just calls the shell. If the shell runs stuff that produces
Windows binaries you get Windows binaries.

(Assuming they fixed the Linux subsystem in a way that execve(2) works on
Windows binaries, which I heard in the first version they didn't have working,
then there should be no problems.)

------
humanrebar
Quick feedback:

There should be a BINARY_NAME target that 'run' depends on instead of
repeating the build command.

The source files should be named as dependencies when they are inputs to a
build step.

Anything that isn't a file should be listed as a .PHONY target. Otherwise,
make gets confused if you, for example, create a directory named 'test'.

If the goal of your makefile is to make your workflow simpler and easier to
discover, consider adding a 'help' target that at least describes the
interesting targets. It might also direct users to a more involved explanation
if there are interesting things about your project they need to know about.
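
Putting those suggestions together, a minimal sketch (binary name invented for
the example):

        BINARY_NAME := mybinary

        .PHONY: run test clean help

        $(BINARY_NAME): *.go ; go build -o $(BINARY_NAME) -v

        run: $(BINARY_NAME) ; ./$(BINARY_NAME)

        test:  ; go test ./...
        clean: ; go clean && rm -f $(BINARY_NAME)
        help:  ; @echo "targets: $(BINARY_NAME) run test clean help"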

------
bacongobbler
My only gripe with Makefiles is the lack of support on Windows. I work on
several Go projects, all of which use a Makefile. It makes for a great
experience on MacOS and Linux, but on Windows I have to break down the
Makefile logic into powershell scripts or run raw `go test/build`. Both are
fine, but just pointing out the fact that it's not a silver bullet if your
project is intended to work across multiple platforms (like a CLI).

~~~
rpeden
If you use Cmder[1] as your terminal, and you install the full package, make
is bundled with it and works exactly as you'd expect it to. I do this with a
couple of cross platform Go projects, and make works nicely on macOS, Linux,
and Windows.

Of course, this is less useful if it's a team project, because then everyone
will have to install Cmder (or at least msys) to use the Makefile on a Windows
system.

[1][http://cmder.net/](http://cmder.net/)

~~~
bacongobbler
Sure, I have `make` installed and I'm using cmder, but some things do not
translate well in Windows.

[https://cognitivewaves.wordpress.com/makefiles-windows/](https://cognitivewaves.wordpress.com/makefiles-windows/)

------
didip
This is gonna get me in trouble here, but I like using Python as a build tool,
because Python plays nicely on all 3 major platforms.

~~~
corpMaverick
Fair question. What do makefiles have that makes them better than a good
scripting language like Python?

~~~
ThatGeoGuy
Make will check if your dependencies need to be rebuilt, and will only rebuild
them if they have changed / been updated.

This is hugely valuable because you can save build times by not needlessly
recompiling the whole world. Linking times are always going to remain the same
regardless, but you can definitely make the process more efficient. An example
from C/C++ is not generating new object (*.o) files for source files that
haven't changed (and whose dependencies haven't changed).

Make also has a pretty standardized syntax and has a bunch of open source
implementations. This may not seem to matter since Python has many
implementations (as an example), but remember that make is just a program, not
a full platform. Python may execute differently depending on the version of
Python, the version of any library you're using in the process, or even on the
architecture (okay, maybe not in Python so much, but in some scripting
languages it could matter). Make only depends on the version of make that you
install.

Lastly, make comes with parallelism in your build mostly for free. Calling
`make -j4 all` will run up to 4 rules at a time in parallel. In some cases
this won't work because your dependencies are tangled in a mess and
parallelism can't be guaranteed, but it's a lot easier to get parallel builds
for most projects with make compared to something like Python, which either
has to use something like the multiprocessing library, or pray that the GIL
doesn't bite. In my opinion, this is not something you should be baking
yourself into your build system. Make has done it and it has been done right
for years.
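
For instance, with independent rules like these (a C-style example with
invented file names), `make -j4` can compile the two objects concurrently
because neither rule depends on the other:

        prog: a.o b.o ; cc -o prog a.o b.o
        a.o: a.c ; cc -c -o a.o a.c
        b.o: b.c ; cc -c -o b.o b.c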

------
curun1r
For tasks like the one the author describes, I prefer using just [1] over
make. It's designed to be a command runner rather than a build tool, so you
lose things like considering whether files have been modified when executing a
target, but it includes some features that make it better than make for
keeping a list of commonly-run commands.

In particular, it can:

- list/show available commands
- pass through command line arguments
- run commands in a different directory
- run with a different interpreter
- give better error messages

[1] [https://github.com/casey/just](https://github.com/casey/just)

~~~
sjellis
> For tasks like the one the author describes, I prefer using just [1] over
> make.

Just really ought to be better known.

------
Walkman

        # Go parameters
        GOCMD=go
        GOBUILD=$(GOCMD) build
        GOCLEAN=$(GOCMD) clean
        GOTEST=$(GOCMD) test
        GOGET=$(GOCMD) get
        BINARY_NAME=mybinary
        BINARY_UNIX=$(BINARY_NAME)_unix
    

A little bit of Clean Code education is needed here :) Why on earth would you
want to introduce such unnecessary abstraction? If those commands used a lot
of switches that you didn't want to duplicate and possibly wanted to change in
one place, I would say OK. But you don't even use them that way:

    
    
        $(GOBUILD) -o $(BINARY_NAME) -v ./...
    
        CGO_ENABLED=0 GOOS=linux GOARCH=amd64 $(GOBUILD) -o $(BINARY_UNIX) -v
    

So this doesn't make sense at all. You should write the commands as they are;
that would make the already hard-to-read Makefile syntax a lot easier to read
anyway.

~~~
striking
I'd say "GOCMD" would be sane if it were phrased "GO" instead. That's the
convention for most commands in makefiles, stemming from the fact that if you
need to invoke `make` recursively, you use `$(MAKE)` (provided automatically)
to do it so you don't break nice things like -j.
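
A sketch of that convention, with a recursive invocation thrown in (the
directory and binary names are invented):

        GO := go

        build: ; $(GO) build -o mybinary

        # use $(MAKE), not a literal 'make', so flags like -j propagate
        subproject: ; $(MAKE) -C subproject build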

------
gabrielcsapo
I really liked makefiles and thought it would be interesting if new features
could be added to them, so I made
[https://github.com/gabrielcsapo/build.sh](https://github.com/gabrielcsapo/build.sh).
It isn't completely done, as you can only run the entire pipeline, but I
really like the ergonomics that makefiles provide. (Written in Node, also
provided as a binary, so Node is not necessary.)

------
sigjuice
Aren't all the targets in the OP's Makefile phony? Why not just have a simple
shell script instead and not involve make at all?

------
tptacek
I like Makefiles and Makefiles for Go are especially simple, but they don't
play well with "go get", do they?

~~~
insertnickname
Just because you have a Makefile in your repository, it doesn't mean the
package(s) aren't `go get`-able. As long as the package can be built with `go
install`/`go build` it should be fine.

~~~
tptacek
Right, but then the value of the Makefile is pretty marginal.

~~~
jeffinhat
The value of the Makefile is in automating repetitive dev tasks. Consumers of
a Go CLI should use `go get`, but a Makefile makes things easier for
contributors who run tests, cut releases, and so on.

The author's example is clean and simple. Here's a not-so-clean-and-simple
example where we needed to do more complex stuff, like building for all target
platforms[1].

[1] [https://github.com/cloudfoundry-community/stackdriver-tools/...](https://github.com/cloudfoundry-community/stackdriver-tools/blob/master/src/stackdriver-nozzle/Makefile)

------
rcarmo
Been doing this since I started using Go, with one twist: I set $GOPATH and
have a vendoring target for each project, because Go _really_ needs
per-project environments and saner dependency management... (dep is OK, but I
need more control)

------
cbhl
How do large Go projects outside of Google get built, anyway? Is it idiomatic
to use Makefiles?

Inside Google, there are BUILD rules for Go, but it looks like they haven't
been open-sourced in Bazel yet.

~~~
zlalanne
There are some released bazel rules for Go:

[https://github.com/bazelbuild/rules_go](https://github.com/bazelbuild/rules_go)

~~~
cbhl
Nice! Are there any plans to document their existence in the Build
Encyclopedia[0]? Or is the idea that each company should vendor the build
rules for languages like Go?

[0]
[https://docs.bazel.build/versions/master/be/overview.html](https://docs.bazel.build/versions/master/be/overview.html)

------
lobster_johnson
Don't forget to use "go build -i", otherwise everything is compiled every
time. With the -i flag you get caching, which can significantly speed up
compilation.

------
ruskimalooski
I've seen many articles lately about using Make for NodeJS, Go, and other
modern languages. I really do not understand it. Lots of these languages (Go
and NodeJS) were designed to be cross-platform or at least easily compiled on
different OS's.

Why use a platform-specific tool and restrict yourself to Linux? I'd prefer
people find a similar tool that is designed to work on any platform.

~~~
planteen
Huh? I use make all the time on Windows. You don't even need Cygwin, MSYS, or
Windows Subsystem for Linux. There are make binaries that work well, for
example:

[http://gnuwin32.sourceforge.net/packages/make.htm](http://gnuwin32.sourceforge.net/packages/make.htm)

~~~
grogenaut
What do you use for all the shell scripting and commands that usually go along
with make?

------
EGreg
Is the word "be" missing in the title, or is it a joke?

~~~
maccard
Did you read the article? It's fairly clear that the author isn't a native
speaker.

~~~
StavrosK
We may have been too hard on the GP, as I think he was just wondering whether
it was accidental or the popular "doesn't afraid" meme.

~~~
EGreg
Poe's law strikes again!

------
diebir
Cool! A variant of the C language, makefiles... Now all I need is a tower PC
with Windows NT 4.0 on it and I'll be all set!

On second thought... no. I'll stay in 2017 with modern tools and languages.

~~~
titanomachy
However you feel about go, it's not really a variant of C. The syntax is a bit
similar, but there are a lot of fundamental differences.

~~~
diebir
This is a fair point. What I meant is that I think the usability niche for Go
is similar to that of C. Is that a fair statement?

I was unfair to Go, and that's because I seem to see many cases where Go is
brought in where it does not fit.

