I'm working with a bunch of people who occasionally start new Golang projects that are eventually maintained by just about the whole group. Everybody has their own idiosyncratic way of setting up new projects, so every time I switch to another project I have to spend some of my cognitive capacity figuring out and remembering what the natural way to nest directories, name config files, wire up build/deployment scripts, etc. was for this particular project.
It's not the end of the world, but I think the effort required outweighs the benefit of natural growth, and any arbitrary, ill-fitting consistency is preferable to no consistency. imo, at some point, the worn-out "cattle, not pets" mantra applies to codebases too, not just infrastructure.
So I don't have any extremely strong opinions on project layout, and if all of the unimportant decisions are already made for me, all the better. That said, I would never want to tell anyone "you must blindly follow this one particular structure". Rules are meant to be thoughtfully broken.
There is one thing I find very nice in Rails and other MVC frameworks, and that is a standardized project setup. You can still bend and break the rules, but the mental effort is greatly reduced and the programmer can concentrate on code instead of having to think about structure.
Let's assume there's a spectrum, with "Waterfall" on one end and "Agile" on the other:
A) The less you know about something in advance, the more you want to iterate in small increments in order to learn and fail fast, pivot, and have many small gains and something to show early (Agile).
B) The more you know about something in advance (e.g. re-implementing Z on platform Y instead of X), the more you want up-front design and reuse of already established patterns. Since the end result is mostly known, you don't want to waste time and energy hunting down clever solutions, but rather spend a shorter time on pure implementation (Waterfall).
For a typical Go project you have some idea of the scope, so out of convention or convenience you could decide to set it up with a minimal structure. Even though we know Java programmers will want to implement their "architecture" at the project filesystem level, we don't have to go all the way to the other end of the spectrum either.
A plain main.go file would require more refactoring later, but holds the most promise in terms of freedom, innovation, and creative input. But the Go community has already started to follow some minimal conventions, so as to rein in the worst and unnecessary cases of divergence.
Doing the most minimal thing is often the right answer in Go. I recently went down a rabbit hole before finding out that something supported timeouts natively. I was glad to delete the code.
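(I won't claim this was the exact API in my case, but as a rough illustration of "supported natively": the standard library's `http.Client` has a `Timeout` field, so a hand-rolled goroutine-and-channel timeout wrapper is unnecessary.)

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Native timeout support: no custom timeout plumbing needed.
	// The URL is only an example.
	client := &http.Client{Timeout: 2 * time.Second}

	resp, err := client.Get("https://example.com")
	if err != nil {
		fmt.Println("request failed (or timed out):", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```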
I think it's important to note that this is a very opinionated way to set up Golang projects. It relies on a lot of tooling not explained in the post, and uses some processes that may or may not be standard (e.g. the versioning approach is one of many ways to do it). In future posts you might want to break down why you do things the way you do and explain why you chose those solutions over others.
You are right, this is quite an opinionated solution, as everybody has their preferred ways of coding. Not everything is explained in this post because there are 4 parts, which are all linked at the top of the README in the repo you linked.
The page does load quite slowly, that's because of the traffic coming in right now... The 1 GB RAM VM is having a hard time serving all the requests...
I would hardly say this structure looks "perfect". For a start:
- Use of Docker is a serious problem. I have no intention of ever running Docker on my machine to build some random project for which I have the toolchain already installed. Wrapping Docker with a Makefile is even more irritating.
- Use of "package pkg" to keep the version number in is bizarre - I've never seen that before. Some use `package version` _inside_ a `pkg` directory which contains other public packages also, and use of `package version` at the top level is also fairly common.
- Multiple modules in one git repository present many challenges for people who want to reference them at different commits in other projects.
- A `package app` under a `cmd` directory is sometimes fine, but if there is only one app and no library, keeping it in the root should be preferred so that `go get` works properly. Furthermore, nesting other packages under it is often not great, since they are highly likely to be reused by other `cmd`s in the same repository.
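For reference, the `package version` pattern mentioned above commonly looks something like this (the module path and variable are illustrative, not taken from the linked project):

```go
// version/version.go (or pkg/version/version.go)
package version

// Version is usually overridden at build time, e.g.:
//   go build -ldflags "-X example.com/project/version.Version=v1.2.3"
var Version = "dev"
```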
Personally, for Go I've never needed a "starter" project, since not much needs configuring out of the box as the toolchain is quite reasonable.
If I were evaluating a project structured like this as a potential dependency, I'd knock off major marks for the structure and investigate alternatives. I guess if things came down to the wire, the structure wouldn't be a deal breaker, but I'd certainly aim to avoid it.
> People often say "It works on my machine (and not in cloud)...", to avoid this we have simple solution - always run in docker container. And when I say always I really mean it - build in container, run in container, test in container. Actually I didn't mention it in previous section, but the make test really is "just" docker run.
I've had nothing but bad luck with Docker for Mac. It's been super slow. If any container is running at all, the com.docker.hyperkit process eats 160% CPU even if `docker stats` shows the process hovering near 0% CPU. This has been the case for our entire team, and we're in the process of moving away from it.
> I've had nothing but bad luck with Docker for Mac. It's been super slow.
I'm not sure if this affects every Mac user.
I have a Docker course with over 15,000 sign-ups, of which a little less than half use macOS, and of those only a handful of people have mentioned really poor volume performance.
I don't have enough data points to know exactly what the cause is. It's definitely a legit problem and it does affect a decent number of people, because I've seen the posts in other places too, but I don't know how widespread the problem really is. It might be some type of hardware / OS version / tech stack combo (some languages and frameworks require dealing with many more files than others).
For reference, I happen to use Docker for Windows and it's really, really fast too, and I've never had anyone mention slow volume performance in large apps. Native Linux is as good as it gets as well.
I hope one day the folks at Docker figure out why the performance is so bad for some Mac users.
Yeah, it affects everyone working on our project. I'm certain it's volume performance, but we've tried all of the workarounds, docker-sync, and several other suggestions. Also (and respectfully), I wonder if your incidence rate is artificially low, given that many courses tend to use toy-esque projects (although perhaps yours isn't?).
The main app in the course is based on the original cats vs. dogs example provided by Docker. It's a combination of .NET, Node, and Flask apps all wrapped together in Compose. It's not a huge project, but it does touch a few different web frameworks.
Most of the people who take the course go off to Dockerize their own applications using whatever framework they use. Most of the people who have mentioned slowness were working with Laravel apps, but there were also a few using large Node apps too. In all cases they were using a Mac.
No one has mentioned slow volume performance issues on Windows that weren't fixable by tweaking a Windows setting or two. Most of the time it was Windows Defender (or some other anti-virus tool) drastically slowing down I/O.
This used to be very common with Docker for Mac, but for most people it's been fixed for years at this point. You might want to try a fresh install of DfM, just for curiosity's sake.
> But to add a serious note, why is every dependency on the environment so complicated that we just wrap it in some container to just get rid of it?
Bad build/dependency-management tooling and imperfect abstractions over system APIs probably account for most of the issue.
Containers are conceptually useful; it's nice to have ephemeral, isolated environments. But if it literally eats all of the CPU it can get its hands on, then it's not useful.
My default is to use environment-based configuration (12-factor) while running in a scratch (or, at largest, Alpine-based) image. This precludes a lot of things like shell-scripted startup, but then again, why are we running Go if not to generate one binary as the entry point?
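A minimal sketch of that style of configuration (the APP_ADDR and APP_TIMEOUT variable names are made up for illustration); the resulting static binary is all a scratch image needs as its entrypoint:

```go
package main

import (
	"log"
	"net/http"
	"os"
	"time"
)

// getenv returns the value of key, or fallback if it is unset or empty.
func getenv(key, fallback string) string {
	if v := os.Getenv(key); v != "" {
		return v
	}
	return fallback
}

func main() {
	addr := getenv("APP_ADDR", ":8080")
	timeout, err := time.ParseDuration(getenv("APP_TIMEOUT", "5s"))
	if err != nil {
		log.Fatalf("invalid APP_TIMEOUT: %v", err)
	}

	srv := &http.Server{
		Addr:         addr,
		ReadTimeout:  timeout,
		WriteTimeout: timeout,
	}
	log.Fatal(srv.ListenAndServe())
}
```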
Because we've chronically underinvested in universally useful dependency management tooling, so the state of the art is a jungle of partial solutions that don't compose.
I like Linux as much as the next guy, but the Linux desktop ecosystem (hardware and software) doesn't get the investment it needs to be on par with the Mac. It's cheaper and easier to jettison Docker than macOS.
Not the person you're responding to, but I'd like to add a data point. I try to use Linux where I can (I use it at home and for personal projects). Alas, I work in an environment where macOS users are prioritized, so sometimes I cannot use tools or access services controlled by other people who still have "Linux support" somewhere in their backlog.
Depending on how critical these apps are for my work, putting up with e.g. a sub-par Docker experience on macOS may be the lesser evil.
Hmm, interesting. I had assumed /cmd/ was used for CLI-ish applications that use a 'command pattern'. I didn't think people also did this for HTTP APIs.
That is my understanding as well, and having a directory tree that goes like "projectname/cmd/projectlibrary" seems super weird and non-idiomatic to me.
cmd/ directories are normal. Hosting all the business logic for the whole package under cmd/ seems less normal.
You kind of need cmd/, because you can only have one main() in a package, and a package with a main() can't easily be imported into another package. That's why the directory exists, not (as I understand it) because there is some merit to burying code 3 directory levels deep.
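As a rough sketch of that constraint (the module path and names are hypothetical): the importable library sits at the repository root, and the only main() lives one level under cmd/.

```go
// Hypothetical layout for module example.com/project:
//
//	greet.go            -> package project (importable library code)
//	cmd/project/main.go -> package main (shown below)
//
// greet.go at the root might be just:
//
//	package project
//
//	func Greeting() string { return "hello" }

// cmd/project/main.go:
package main

import (
	"fmt"

	"example.com/project"
)

func main() {
	fmt.Println(project.Greeting())
}
```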
My response was to your comment that 'having a directory tree that goes like "projectname/cmd/projectlibrary" seems super weird and non-idiomatic to me'.
I'm on mobile and didn't check all your examples, but Grafana has pkg/cmd and Traefik has cmd next to pkg. That's not having the business logic below cmd, whereas cmd/pkg is.
Most projects I see put the source at the root so it's straightforward to import, and if there is a command-line utility you can build, it ends up in ./cmd/main.go or something.
One thing I dislike about Go is that, because of the way imports work, you often see source code mixed in with stuff like Dockerfiles and Makefiles.
You generally use a `cmd` directory if you have more than 2 binaries in your project. Or, in the rare case where you ship a library and a binary (so you can still import the library by the root package path).
> People often say "It works on my machine (and not in cloud)..."
That's definitely not a problem I have with Go. Building everything with Docker is wayyy overkill.
Also, why is everything under the cmd/ package? It doesn't make sense.
As for Go modules, using `go mod vendor` might not be the best suggestion, as it requires using `go build -mod=vendor`. If you use Go modules, just use `go get` and drop the vendor folder.
I use the cmd directory for my main files, setup, etc., but use project-root-level packages for the bulk of the business logic. I kinda got the impression here that he uses cmd for all source.