Go: Best Practices for Production Environments (bourgon.org)
222 points by ConceitedCode on Apr 27, 2014 | 44 comments



What an interesting article, thanks for the writeup.

Are you using Docker for containers?

As a language neophyte I found this far more persuasive than arguments over emergent behaviour[1], because it talks about actual practice, as opposed to talking in abstractions (which rarely leads to fruitful discussion). The attraction of Go to me is its commitment to simplicity: at the expense of some power or terseness, it becomes easy to read, easy to extend, and hard to use to create monstrous complexity.

It feels like an improved C rather than building on the traditions of, say, C++.

The package system, import system, composition and no inheritance, and extensive use of interfaces all encourage simplicity above all.

Composition - it's harder to create huge taxonomies which all depend on a common root.

Packaging - encourages very simple small components rather than huge frameworks which are hard or even impossible to understand all at once or separate into components. Godep[2] looks interesting here, thanks for the link.

Interfaces - similar to many other languages, but without the boilerplate of declarations (this sort of decision is where controversies start - what some see as boilerplate others see as important contracts; the same goes for header files). There's a small sketch of this after the links below.

It is of course missing a few things which would be nice, but nothing should stop people trying it out today - if you value simplicity above all, Golang will probably appeal.

1. https://news.ycombinator.com/item?id=7653164

2. https://github.com/tools/godep
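
For the interfaces point above, a tiny sketch of what "no boilerplate of declarations" means in practice; the types here are invented for illustration, and both satisfy the interface purely by having the right method set:

    package main
    
    import "fmt"
    
    // Speaker is satisfied by any type with a Speak() string method;
    // there is no "implements" declaration anywhere.
    type Speaker interface {
    	Speak() string
    }
    
    type Dog struct{}
    
    func (Dog) Speak() string { return "woof" }
    
    type Robot struct{ ID int }
    
    func (r Robot) Speak() string { return fmt.Sprintf("beep %d", r.ID) }
    
    func main() {
    	// Both types can be used as Speakers without ever mentioning the interface.
    	for _, s := range []Speaker{Dog{}, Robot{ID: 7}} {
    		fmt.Println(s.Speak())
    	}
    }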


> It feels like an improved C rather than building on the traditions of, say, C++.

I do feel that Go is an improved C (perhaps Java-lite). If you want a better C++, you might want to check out Rust.


Please don't call it Java-lite. Yes, it targets many of the same projects you might write in Java, and it has garbage collection, but it's very different from Java in feel and in the resulting code, and there's nothing "lite" about it.


Not to get into an argument, but I find Go and (modern) Java almost identical. Not in exact features (they treat interfaces and exceptions differently), but in the general philosophy of the language (i.e. "blue collar") and the level of abstraction. Go is simpler than Java, so I agree with the "lite" as well. I would use Go for simple, relatively small, self-contained projects, and Java for bigger, heavy-duty stuff. The languages are so similar that I hardly feel it when I switch from one to the other.

Obviously, you can't compare small Go projects to huge Java monstrosities that were written in the old Java style (lots of XML configuration etc.), but modern Java and Go are hardly distinguishable (in approach; not syntax). Maybe I'm particularly prone to this view because I've implemented true fibers (goroutines) and channels for the JVM, so I sometimes transliterate Go code to Java (the resulting code is always of similar length, perhaps with a slight advantage to Java, now with Java 8's streams and such).


I work on Juju, a 240k loc Go project which spans 4 executables across multiple machines all working in concert, coordinating through a replicated Mongo database. It's pretty big.

Go was designed by people at Google for large projects. Saying it's only good for small ones is selling it short.


> so I sometimes transliterate Go code to Java (the resulting code is always of similar length, perhaps with a slight advantage to Java, now with Java 8's streams and such).

How does performance compare?


As usual, that depends. Java performs better on function calls (because of the JVM's inlining JIT) and garbage collection. But because fibers are not native to the JVM, they do incur some overhead. If the fibers perform no work, then you're essentially just measuring the overhead, and Go would perform better. The more work is done, the more the advantage goes to Java.

With regards to IO, fiber-blocking IO currently uses the JVM's asynchronous NIO (which uses epoll/kqueue) under the covers, unchanged. But Java's async NIO is intended to be thread-safe, so there is lots of unnecessary synchronization going on there. The result is about 7% slower than Go. Again, the more computation work you do, the more the performance swings in favor of Java. We can do much better with IO by plugging in our own NIO provider (Java's dynamic linking everywhere is awesome), but because the performance is really good as it is, it's not a top priority for us at the moment.

EDIT: the reason I would use Java over Go for all the "serious stuff" isn't primarily performance, though. It's the incredible monitoring and profiling capabilities, hot code swapping and other stuff that's possible with full dynamic linking.


Thanks for the insightful reply. :)


Go's simplicity didn't stop me from ending up with a messy, bloated and monolithic project on my first try, but static typing and good test coverage make refactoring relatively painless.


I reckon there is no language that can prevent a bloated and messy code base. Avoiding that comes from understanding both the solution and how to express it in the language.


Indeed: languages without bloated and messy codebases are languages without serious users.


Nice article. As for keeping stable dependencies while remaining go get compliant - I opted for a slightly simpler solution. We fork dependencies to our internal git server (gitlab in this case, sometimes we just keep our stable fork on github), and go get them from there in the build process.

It's not much different, in that it still requires rewriting the internal imports of the libraries, but it doesn't require copying of files or working with git submodules, and provides the same stability.
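
To make the import-rewriting step concrete, here's a rough sketch of doing it mechanically for one file with only the standard library; the file name, the upstream prefix, and the internal host are all placeholders, and a plain sed over the tree works just as well:

    package main
    
    import (
    	"go/parser"
    	"go/printer"
    	"go/token"
    	"os"
    	"strconv"
    	"strings"
    )
    
    func main() {
    	const filename = "main.go" // file to rewrite; placeholder
    	fset := token.NewFileSet()
    	f, err := parser.ParseFile(fset, filename, nil, parser.ParseComments)
    	if err != nil {
    		panic(err)
    	}
    	// Swap the upstream prefix for the internal fork's prefix (both hypothetical).
    	const oldPrefix = "github.com/upstream/"
    	const newPrefix = "gitlab.example.com/ourteam/"
    	for _, imp := range f.Imports {
    		path, _ := strconv.Unquote(imp.Path.Value)
    		if strings.HasPrefix(path, oldPrefix) {
    			imp.Path.Value = strconv.Quote(newPrefix + strings.TrimPrefix(path, oldPrefix))
    		}
    	}
    	out, err := os.Create(filename)
    	if err != nil {
    		panic(err)
    	}
    	defer out.Close()
    	if err := printer.Fprint(out, fset, f); err != nil {
    		panic(err)
    	}
    }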

I think godep certainly looks promising and I have it in my TODO to try it out, but working with forks for the relatively small codebase we have is enough for now.


"Dependency management! Whee!" indeed.

I totally share the author's feelings that fetching master to do deployments is pretty rough for repeatability. Vendoring fixes the core repeatability problem, but leaves a lot of other questions. What if I want to use this library in several projects without essentially forking it for each one? What if I want to reuse two of my libraries in another project, and they both vendored the same things? And so on.

I use git submodules for golang source dependencies with reasonable success. I also asked myself "What if I need this to be backwards compatible with `go get` semantics?" and found it's hackable: create a dir in your project which you'll add to your $GOPATH, add git submodules there with fully-qualified dir names matching their public urls (i.e. `myproject/.gopath/github.com/whoever/whatever/`), and to tie it off make a symlink from your own project's fully-qualified dir name back up to your repo root.

With that symlink, your own project has the same package names regardless of whether the code is being referenced internally to that project, via the public go-get'able urls, or via being submoduled in another project. Nobody's left out.

Example here https://github.com/polydawn/pogo/commit/c6fac440c99c00c2db65... (though I've since switched to putting the symlinks in a .gopath dir, as that seems more popular).

Submodules can introduce complexity if inserted into the middle of a project where abstraction layers aren't finished yet (all the complaints about merging being totally undefined are true). But when a relatively stable library API either already exists or is a firm intention, submodules stay pretty well out of the way in normal development and can actually cross over into being helpful, by drawing attention to when you should think twice about API shifts. (Also, it might be worth mentioning that git's CLI for submodules has gotten radically better in the last few releases.)

It's certainly not the only way to do things, and if you're happy with other solutions like vendoring, by all means carry on (godep is great too, as the article mentions; it doesn't vendor, and still gets you a hash for precision and integrity of your deps)... I just thought it was interesting that it's possible without any special tools.


While I have in general sworn off Java, and am rather enamoured of Clojure's lein build tool, I enjoyed this recent post (which was on HN as well):

http://blog.paralleluniverse.co/2014/05/01/modern-java/

It does appear Grails manages to expose many of the good parts of Ivy/Maven while keeping build files reasonably sane -- and crucially, it appears to offer a pretty solid story in terms of "predictable future deployment" (can I restore a service in two years and deploy it on a contemporary system?).


I really like Go, and have been getting into it quite a bit. It's powerful, quick, and fits in nicely as a "native code" utility language, which I dig. However, the first thing the OP talks about, $GOPATH, really fricken annoys me. I know that's sort of silly, but hey.

Are there any good fixes for this at all? What other ways of tackling the dev env are there for Go?


Use the symlinks, Luke.

My $GOPATH is in a standard place: $HOME/.local/share/go (aka $XDG_DATA_HOME/go) and all my projects (whatever language it is) live in $HOME/dev.

I use github for all my projects, so the public path for my go projects would be $GOPATH/src/github.com/rakoo/petproject. So all I have to do is

    $ cd $GOPATH/src/github.com/rakoo
    $ ln -s ~/dev/petproject petproject
and everything works flawlessly. I get to keep using my ~/dev dir for all my projects, and the Go environment is well defined and works as expected.

You do have to do this once for each project you start, but that should scale quite well.

TL;DR: Your projects don't need to be in $GOPATH; they only need to be accessible from $GOPATH


Ah hah! I wish I could give you more than just an upvote, as that's awesome, and I really wish I'd thought of it first. I'll give it a try, thanks very much!


That's neat!


Not sure what you mean by other ways to tackle the dev environment. You need GOPATH, it's totally non-optional for any nontrivial code. Try it for a bit before you look for an alternative, it's actually quite elegant.


Don't know why you were downvoted :/

It's quite possible that I will be better off just embracing it, but I do so much development on this machine with close to a dozen languages that I've developed my nice project setup to handle it all, and it seems a little frustrating to have to change it all, I guess. I sort of prefer per-project stuff overall.

I may end up using a dedicated VM for it, perhaps, to get around it :)


Heh, I was wondering that myself, but I try not to get too hung up on votes :)

Per project is just going to be a pain in the butt, honestly. It means you'll need to constantly swap out the gopath, and there's really very little harm in having all your go code in one giant gopath. The only time it is a problem is if you use different branches of a package for different projects. That has been very rare in my experience, hopefully it will be in yours as well.

However, I do know some people that have a gopath per project... you certainly can do it, it's just more work. But what you do need is some sort of gopath. It's fundamental to the language. In theory you could probably build without one if all your imports were relative, and you checked out code without go get... but you'd end up with non-idiomatic go code that no one else would touch with a ten foot pole, and you'd lose a lot of the cohesiveness of the tooling.


What is elegant about the use of an environment variable? From what I can see on the net, the requirement for all Go projects to exist within $GOPATH mainly seems to confuse and annoy people.

A couple of examples:

- https://groups.google.com/forum/#!topic/golang-nuts/9WCKPKTO...

- http://stackoverflow.com/q/17780754

UPDATE: Please tell me what I said wrong here so that I can avoid being downvoted in the future. I was not trying to be offensive. Sorry.


"Elegant" wouldn't be a word I would use.

"Standardisation" would be more like it.

Though you could have a separate $GOPATH for each project. Note that $GOPATH can be colon-delimited (though `go get` will download packages to the first directory in the list).

You could use something like http://swapoff.org/ondir.html to switch out $GOPATH depending on which directory you are in in your shell.


It's elegant, because all code generally lives in a repo online somewhere, and by having a canonical place for that code to live on your local disk, it's trivial to have code that references other code that lives online somewhere else, and you can get all of it with a single command.

If I go get your package, and your package uses someone else's package, I get their code too. It's quite elegant. And all it needs is one little environment variable that says "store my go code here".


Don't try to work around it, but embrace $GOPATH instead. As soon as you put your project inside $GOPATH/src (or ${GOPATH%%:*}/src if you want to be pedantic), things work flawlessly, esp. all the tooling around Go that expects to work with $GOPATH.

In my company, we have the convention for our internal Go projects to be checked out below $GOPATH/src in a $company_name/$project_name hierarchy, and that's also what you'd use as import path.


You don't have to use go get; you can set GOPATH and then mostly ignore it, using whatever paths you want. go run and go build will happily build no matter where the files are (it's go get that actually relies on GOPATH). You can define import paths any way you like (e.g. relative to a project pkg dir) and go build /path/to/my/project with the -o flag to specify the output binary; as long as dependencies are where you specify, it'll happily build. So per-project dirs are perfectly possible with the standard tools, but when sharing code people will expect to be able to go get it, and relative import paths are frowned upon.

So you can pretty much ignore GOPATH if you're not go getting dependencies, but if you're sharing code it's probably best to bend to the consensus, which is to have one GOPATH and put all your projects there (they can of course live in project dirs within that GOPATH, or be symlinked from elsewhere).


I use GOPATH only inside of a wrapper script that sets GOPATH to the directory for the current project. All libraries are stored alongside my project code in source control. I use go get when adding/changing libraries, but remove the .git folder when doing so.


Vendoring is the way to go, but it's done in a very awkward way here. I find it is better to just fork the repos you depend on and reference your fork. This makes it very easy to update them from upstream and eliminates any need for third party code in your repo.

Using the command line args is great for programs with small configs, but we find good old INIs and JSON to work better for large complex configs.
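
For what it's worth, a minimal sketch of the JSON side using only encoding/json; the struct fields and file name here are invented for illustration:

    package main
    
    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )
    
    // Config mirrors the JSON file; the field names are hypothetical.
    type Config struct {
    	ListenAddr string   `json:"listen_addr"`
    	MongoHosts []string `json:"mongo_hosts"`
    	Verbose    bool     `json:"verbose"`
    }
    
    func main() {
    	f, err := os.Open("config.json") // path is an assumption
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()
    
    	var cfg Config
    	if err := json.NewDecoder(f).Decode(&cfg); err != nil {
    		panic(err)
    	}
    	fmt.Printf("%+v\n", cfg)
    }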

I think the deploy step outlined should be running the unit tests with the race detector enabled.


Since an import cannot reference a specific tag or commit, even with forking you're exposed to the situation where a team member may update the fork and break the project. Keeping dependency information external to the project is a bad idea.

So with forking, you still have to reference a "vendor" path, as described in the linked article. You could use Git submodules or Git subtrees, or something homegrown, all of which add complexity and are not ideal.


The way you describe it ends up being more awkward, because you have to change code to reference the new path, both in your own code, and potentially in the library you've forked and other dependencies that use it. Vendoring by just maintaining a GOPATH for your dependencies that you control is the preferred way.


This is a great article. I have also been vendoring my dependencies. I have recently been bitten by projects that I rely on (config parsers, things of that nature [and for a couple of my projects, I have too much config to use command line flags]) moving or doing silly things. The old code works fine, but now it is a pain to set up a new dev environment because I have to clone from a different repo than the one its import path points at and check out the appropriate commits.

To solve these problems I have started an experimental thing and I would love some feedback on it. I am calling it `gopkgr`, and the idea is to have a thing that lets you make nice tarball packages and install them.[1] Eventually, we could have something like godoc hosting versioned packages. I want to be cross-compatible with `go get`, but I also want to support more general situations.

For instance, right now it is really difficult to write polyglot Go systems. In my research, I have a system that ties together Go, Java and Python. It's super annoying to have such a polyglot system, but I have different needs and each ecosystem offers the path of least resistance for one of those needs. With the current Go situation it is not possible to create a reusable package for the Go code which is usable by other projects without breaking it out of the repo (which doesn't really make sense in this case).

In any case, I don't want to solve dependency management with gopkgr. I want to solve packaging dependencies for repeatable builds without having to worry about code moving from one source code host to another (e.g. Google Code to GitHub or vice versa). So if you have good ideas in this regard, let me know about them.

[1] https://github.com/timtadh/gopkgr


How is this different from godep? https://github.com/tools/godep


I aim to do packaging (putting things in tarballs). Godep is trying to solve the harder problem of creating metadata and systems to say what versions of libraries to (transitively) depend on. I actually want to utilize the Godep stuff to solve the dependency management problem for the packaging portion.

You can think of what I am trying to make as a language-specific version of .debs or .rpms. Combined with proper dependency management, hosting, code signing and verification, we could build a language-specific apt-get. This is what Python's pip is like. There are a lot of tricky problems between where Go is at today and apt-get, especially if we want to be resistant to things like evilgrade.[1]

[1] http://www.infobyte.com.ar/down/isr-evilgrade-Readme.txt


Nice article. For those of you working on an SOA project, can I get some feedback on how people manage data sources? For instance, I am in the process of breaking up a monolithic application into small services, but the problem is that most of these services rely on the same data (for instance, a search service and an autocomplete service will work with the same data). What are your strategies to deal with this? I mean, is it simply a matter of relying on the same database? How about models built on top of this database and their tests?


Nothing wrong with two services using the same data source. If data locality is a concern, deploy both services alongside one another.

Or you could place the data behind an API. It would speak HTTP/Thrift/whatever and would be solely responsible for handling the data. Front the API with a cache if latency becomes an issue.

For your specific example, consider pushing data to autocomplete and search instead of having them pull.
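
To sketch the "data behind an API" option in Go terms: the record type, route, and in-memory store below are stand-ins for illustration; a real service would own the database access and the other services would only speak HTTP to it.

    package main
    
    import (
    	"encoding/json"
    	"log"
    	"net/http"
    )
    
    // Record stands in for whatever shared data both services need.
    type Record struct {
    	ID   string `json:"id"`
    	Text string `json:"text"`
    }
    
    // A toy in-memory store; in practice this service alone talks to the database.
    var store = map[string]Record{
    	"1": {ID: "1", Text: "example"},
    }
    
    func main() {
    	http.HandleFunc("/records/", func(w http.ResponseWriter, r *http.Request) {
    		id := r.URL.Path[len("/records/"):]
    		rec, ok := store[id]
    		if !ok {
    			http.NotFound(w, r)
    			return
    		}
    		json.NewEncoder(w).Encode(rec)
    	})
    	log.Fatal(http.ListenAndServe(":8080", nil))
    }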


Interesting. Are you aware of any high profile examples of this architecture?

For my own case, search and autocomplete use the same modeled data, but autocomplete for instance runs lots of precalculations on this data, so simply putting the data behind an API is not really an option.


Check out NSQ, designed (in part) for exactly this use case (and also written in Go):

http://bitly.github.io/nsq


> That means tabs for indentation, and spaces for alignment.

Not a go user (yet), but this one caught my attention as being a little ugly. I'm a fan of only using spaces.


That is just the gofmt default. That same gofmt utility also has this option:

    -tabs=true     : indent with tabs
which, when set to false, will use spaces instead of tabs for all indentation.


Tabs are just customizable whitespace. If you like 6-space indents and I like 2-space indents, we can both be happy if it's tabs, and just configure our editors the way we like. With gofmt, there's no danger of mixing spaces and tabs (not that it's actually dangerous, since whitespace isn't significant in Go).


Excellent article, thanks for sharing! I wish I could attend Gophercon but I'm way down here in South America.

What book would you recommend for someone who wants to learn about Go practices and professional tips? I'm not interested in a reference book that teaches me how to loop, but a book that teaches me applicable knowledge in a bit more advanced subjects.

I'm aiming to make Go my primary workhorse language.


Why the advice against `make` and `new`? Is it only because make sometimes requires you to know sizes in advance?


It's less noisy just to take the address of an automatic variable, like foo := &Bar{}
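
To make that concrete, a small sketch; Bar's fields are invented here, and note that make remains what you reach for with channels or to pre-size slices and maps:

    package main
    
    import "fmt"
    
    type Bar struct {
    	Name string
    	N    int
    }
    
    func main() {
    	a := new(Bar)        // *Bar pointing at a zero value
    	b := &Bar{}          // the same thing, with less noise
    	c := &Bar{Name: "x"} // and it extends naturally to initial values
    
    	// make is still the tool for channels, or for sizing slices and maps up front.
    	ch := make(chan int, 1)
    	buf := make([]byte, 0, 64)
    
    	ch <- len(buf)
    	fmt.Println(a, b, c, <-ch)
    }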


It's just their style, to keep things consistent.



