Any other language that I work with understands that dependencies should be self-contained within the project folder. Each project has its own set of dependencies that have been tested to work together. The user can select where to check out the project, or even have multiple copies of the same project lying around.
To achieve the same thing with Go, one has to set a different GOPATH per project and then check out the project deep inside that root. This is inconvenient both as a developer and as a software packager. The alternative is to give in and resolve a set of dependencies that works with all current projects at once. Given that Go doesn't really do package versioning, finding the right set of commits that all work together is an exponential nightmare.
Now that, since Go 1.6, each project can have its own vendor/ folder, GOPATH shouldn't be required. I should be able to check out the project where I want and then have the tools look into the vendor/ folder to resolve any dependencies. Please change it that way. The reason govendor and all these tools are complicated is this GOPATH madness. Syncing dependencies between GOPATH and vendor/ should be a problem only for Google, not the rest of us.
It's pretty easy to declare a GOPATH that's local to your project. I've been quietly doing it for years. The linked repo is just one short shell script that helps do it (on any project; you don't even have to commit the script, and it doesn't change how anyone else develops).
Frankly, I think every highly-productive gopher outside the Google offices uses some form or another of this. It's unpopular to say it and goes against "the community" zeitgeist, but many people will admit it in private if you ask around.
My biggest complaint about the current state of Go dependency management is actually the vendor dir. We never needed it; a project-local GOPATH was enough in the first place. Now we just get headaches when both exist and conflict, or when one provides something that should have been in the other (not a problem until you push, and someone else on the team has to tell you that you didn't vendor enough...).
At the root of the project I just have to do `echo 'github.com/username/package' > .gopkg` and after that, for everything I do under that directory tree, GOPATH will be automatically set to something like `export GOPATH="$(dirname .gopkg)/.gopath"`.
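A rough sketch of the lookup such a hook needs, in plain POSIX shell (the function name is made up, and the `.gopath` suffix just mirrors the commenter's convention):

```shell
# Walk up from the current directory to the nearest .gopkg marker
# and print the project-local GOPATH to use there.
find_gopath() {
  dir=$PWD
  while [ "$dir" != "/" ]; do
    if [ -f "$dir/.gopkg" ]; then
      printf '%s/.gopath\n' "$dir"
      return 0
    fi
    dir=$(dirname "$dir")
  done
  return 1  # no .gopkg anywhere above us
}

# Typical use from a shell hook (cd wrapper, precmd, etc.):
#   GOPATH=$(find_gopath) && export GOPATH
```

The same walk-up trick is what tools like git use to find their repository root, so it composes fine with nested checkouts.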
I don't use zsh anymore though.
Not that it justifies any pain, just found it to be a pleasant side-effect.
Sometimes it's worthwhile to just give in and try something new, even if you don't like it. You might discover unexpected advantages. Then go back to what you prefer and apply those lessons there.
Everything would be much better if we all kept an open mind and learned from each-others experiences. It might not always be obvious, but things are usually a certain way for a good reason. Might not be the best or right reason in the long run, but there's always some lesson to be learned.
> It's distasteful of Go to impose a filesystem layout.
> To achieve [self-contained dependencies] with Go, one has to set a different GOPATH per project, and then checkout the project deep into that root.
> Given that Go doesn't really do package versioning...
I think the oddness is around the directory structure. It may also be because Go was developed by UNIX old hands. When UNIX was designed, it was not just for ordinary users but also for developers. When developing on UNIX, you can use all of UNIX as an IDE (grep, find, etc.). Most Unices have a folder called /usr/src where sources are stored. When you want to build a package, you cd into it (say `cd /usr/src/make`) and then run `make && make install`, which builds and installs the package to your /usr/bin.
I am assuming Google's internal monorepo, which is derived from Perforce, encourages you to keep /usr/src under source control. Since Piper is not released to the public and most people now use git, GOPATH was probably conceived as a way to relocate /usr/src to another location. This is probably why `go install` (which probably mimics `make install`) installs everything to $GOPATH/bin. If this is true, I think we have a mental model of Go's directory structure.
Hopefully someone can tell me if I am off base.
What exists in the public world of Go is simply what the Go team wanted for Go, and not a compromise to fit in with other systems at Google. I think the fact that you end up with what looks like a monolithic repository is just a coincidence.
In the end ~/go doesn't look a lot different than the @INC path that points to something in your home directory, or /usr/include, /usr/lib, and /usr/bin in a pure C system. It's just the fact that you also put your own code in the same directory as third-party packages that confuses people... but in the end, someone else will consider your package a third-party package someday... so I don't think there's much value in treating it any other way.
Well, yes and no, and that's the only thing I don't like about GOPATH: that your source code is not in GOPATH, but in `$GOPATH/src`.
I'm not against GOPATH per se, because I'm already used to other PATHs; I'm just against it not behaving like those other common PATH variables: PATH, LD_LIBRARY_PATH, PYTHONPATH, etc. GOPATH introduces intermediary directories (src, pkg, bin) instead of going straight to the point and being only for source code.
I mean, there's `GOPATH` and also `GOBIN` which is just `$GOPATH/bin`. IMO it would make much more sense for GOPATH to be only source code, GOBIN to be only executables, and something like GOPKG for the current `$GOPATH/pkg`.
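To make the wish concrete, here is roughly how that split would look as environment variables (GOPKG is hypothetical and does not exist in the real toolchain; the paths are just examples):

```shell
# Today: one root with mandatory subdirectories
export GOPATH="$HOME/go"           # sources must live under $GOPATH/src
export GOBIN="$GOPATH/bin"         # this is already the default

# The proposed split: each variable points straight at its contents
export GOPATH="$HOME/code"          # source code only, no src/ layer
export GOBIN="$HOME/bin"            # executables only
export GOPKG="$HOME/.cache/go-pkg"  # hypothetical home for compiled packages
```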
(Yes, I know venv is a thing; I'm just mentioning PYTHONPATH to illustrate my point.)
/usr/src within the Linux community hasn't really been a normal thing for many, many years.
Go certainly has old hands involved and the /usr/src example makes sense, but the context is one requiring years of knowledge.
That more or less remains how BSD ports systems work.
The default Go Docker images include everything needed to compile Go. But once you have a binary, you really only need an image capable of running it. I've seen image sizes reduced by over 95%.
Revising the example in the article:

    # build stage -- golang:1.10-alpine assumed as the base image
    # (the FROM lines were missing from the snippet)
    FROM golang:1.10-alpine
    RUN apk add --no-cache ca-certificates
    WORKDIR /go/src/bitbucket.code.company-name.com.au/scm/code
    COPY . .
    RUN CGO_ENABLED=0 go build -a -installsuffix cgo -o dist/main .

    # run stage -- an empty image plus the static binary and CA certs
    FROM scratch
    COPY --from=0 /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
    COPY --from=0 /go/src/bitbucket.code.company-name.com.au/scm/code/dist/main /
Those tools aren't optimized for running arbitrary binaries - hence containers.
It feels so good to skip over all the Docker tools, config and services to build a Go app.
I wrote up more details here:
I wrote up some documentation about this for my use case just in case I ever have to go back and change it.
You can see it here: https://fn.lc/post/docker-scratch/
Please don't run untrusted code in Docker, it's not designed for security.
I run with the flags: --net=none --cap-drop=all --cpus=1 --read-only --tmpfs=/tmp:rw,size=1g,mode=1777,noexec
This gives it no way to communicate with the outside world and drops all capabilities, so it's not allowed to interact with the kernel at all. Setting the CPU limit to 1 also prevents internal DoS attacks.
I also run the process under the nobody user which entirely avoids the "container root is system root" issue.
I'm also only sort of running "untrusted" code. I'm running tensorflow models which can do arbitrary computation but are more secure than just running raw code.
Also, port and volume mapping, so the container/go binary does not need to know physical/host pathnames to data files.
I misunderstood the parent; they meant to use `golang:1.10-alpine` as the build image, not the run image, as in:

    FROM golang:1.10-alpine
    COPY . /go/src/bitbucket.code.company-name.com.au/scm/code
    # WORKDIR added so the build runs inside the source tree
    WORKDIR /go/src/bitbucket.code.company-name.com.au/scm/code
    RUN go build main.go

    # run image -- alpine assumed, since apk is used below
    FROM alpine
    RUN apk add --no-cache ca-certificates
    COPY --from=0 /go/src/bitbucket.code.company-name.com.au/scm/code/main .
As a Python developer, I am no stranger to slightly ridiculous packaging/environment woes. However, Python has 19 years on Go. I understand that they are different languages and ostensibly serve different purposes, but it feels like a shortcoming to me, based on the little Go experience I've had.
Speaking of which, what is the best way to start a python project in 2018, now? Python's version and dependency management is my least favorite aspect of the language.
Pipenv and flit. Together, these do most of what npm/yarn provide for a Node.js project.
Pipenv for installing packages. It’s a virtual environment manager (like pyenv, virtualenv, venv) that’s aware of and stays in sync with the package requirements list, and supports repeatable builds. It claims to be “officially recommended”, and Travis and Heroku, for example, detect its configuration file.
Flit for publishing a package. It’s driven from a declarative project file.
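For reference, a minimal Pipfile of the sort pipenv maintains for you (the requests dependency and Python version are just examples):

```toml
[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"

[packages]
requests = "*"

[dev-packages]

[requires]
python_version = "3.6"
```

`pipenv install <pkg>` edits this file and pins the resolved versions in Pipfile.lock, which is what makes builds repeatable.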
EDIT: Added details and a link to .
C++, though, still a mess to get an isolated build environment there. Meson and Bazel are close, so close...
Out of morbid curiosity, what's the current answer for "how to pack up a Python program into a single executable/directory for Windows" now?
The Hitchhiker's Guide to Python (HHGTP) encapsulates a lot of standard tools and best practices. I've been recommending it to students for a few years now. Last year it listed virtualenv, venv, pyenv, etc.; now it recommends just pipenv.
HHGTP also has sections on "Packaging Your Code" and "Freezing Your Code". I think what you're asking about is referred to there as "freezing". The "Freezing" section of HHGTP lists some Windows tools, and contains a table comparing them. There doesn't look to be a single accepted one.
A G^nP post did criticize Go compared to Python, but I don't believe the author speaks for “the Python community”.
What does your comment add to the discussion other than snark and aggression?
You aren't obligated to do anything; the question is open to anybody.
Pipenv has only existed a year or two, qed.
Well, so does C (and basically everything else). Alternatively, you can pass tons of -I and -L flags to your compiler calls but you can do that with Go, too.
The point of GOPATH was to make this unnecessary because everything can be deduced from the source code and its location within the file system.
"Set that up manually" usually consists of an m4 script that takes 15 minutes to run as it tries to look for all the dependencies ("./configure"). It's flexible, but it's not great.
Go not having version locks for dependencies by default is a major failing. Fortunately it looks like that will be changing.
I've got to figure out which version was out during the last commit and then pin it. Absolute nightmare.
Go explicitly dismisses everything learned by the programming language (and tooling) community at large because its authors "know better".
There really isn't much to say about Go anymore in my opinion, other than answering the title of this post with "Don't".
A proper package manager, like cargo, npm, pip, composer, maven and the like, for one.
And I mean a dominant one -- there's a few attempts for Golang. Default doesn't have to mean "core-team built". All the package managers I've mentioned are de-facto standards for their respective languages.
Surely not in the discussion here, which is about "how to start a Go project in 2018" -- or the subthread, about the kind of tooling Go lacks.
* GOPATH is going to go away soon, once vgo gets fully baked into the language -- probably by 1.12: https://github.com/golang/go/issues/4719.
* There is a one-line installer called "getgo" which installs Go. However, it needs further polish which is being worked on - https://github.com/golang/go/issues/23381, https://github.com/golang/go/issues/21277.
- IIRC package management was historically left to the community to solve, rather than the Go maintainers mandating how package management should be done. Many languages have followed this model. I've never heard of it having anything to do with our monorepo model. The community never ended up standardizing on a model; since then, the Go team has endorsed dep as the official experimental package manager, whose learnings were used to design vgo. vgo is very new and still being iterated on; check it out though!
- vgo removes the need for GOPATH
- There is a one-line installer to set up your env. It's very useful and simplifies a lot of what this blog talks about.
Sorry I don't have links - on phone at airport! All the above should be easily googleable though! :)
The official Go issue: https://github.com/golang/go/issues/24301
And Russ Cox's Tour of vgo: https://research.swtch.com/vgo-tour
Which is part of Russ Cox's vgo series: https://research.swtch.com/vgo
I currently have a project that uses Node and React for the front-end and Go for streaming and scraping large .xml + .json data due to its great standard library.
With Node I can start a project wherever I would like — e.g. D:/work/node_project. Go seems to be much more opinionated on that? I opted to follow Go’s preferred directory structure and include npm related packages there, but feel I am not following best practices. Ideally, I would like to have one directory for my projects.
I will take a good look at vgo this afternoon. It removing the need for GOPATH sounds interesting. Thank you.
Here is a boilerplate Go Lambda app with a Node Lambda function and a static Vue app:
I've avoided changing the GOPATH by using the following set up:
(standard Windows GOPATH)
Then symlink in WSL:
ln -s /mnt/c/Users/<user>/go ~/go
Works pretty well. Then you can install Go on Windows if you need it, otherwise your dev environment is isolated and your Go code remains on Windows if you remove/refresh WSL. Also if you use file history, the code will be backed up this way in your Windows home as well.
Has this been solved? I really want to use Go because of everything I've heard, but I can't stand how $GOPATH forces a directory structure on me.
Keeps everything organized.
The second is a problem. The first is bike-shedding (and Go doesn't really force anything -- you could just use e.g. a Makefile, and have your Go files wherever you want. You only need to follow the directory structure convention if you want go build, go install etc to work out of the box).
The biggest pain with Go is the dependency management. I initially started to commit the whole vendor folder for each project, but lately I only commit the Gopkg.lock and Gopkg.toml
Both those approaches bothered me over time and I'm still undecided about what's the best way to do this. I know some projects only commit their Gopkg.toml (and not lock) with some explicit dependencies.
This really bothers me too; I appreciate Go's liberal application of convention over configuration, but I feel like this part of dep is a break from the simplicity afforded by the aforementioned principle. I often appreciate this in other Go tooling (e.g. gofmt), and I would love if they just made everyone do dep one way (I don't care which.)
I've just started to check in vendor/ for my current project. It's nice to know that all the code is "locked" in git and that a git clone is all you need to get all the dependencies. OTOH, it creates more work and forces you to take a more active role in dealing with dependencies. But it gives you more control.
Once you set it up, document exactly that setup in the readme. If you use something specific for dependency management, document it. If not, document it. Document how to run tests. Document everything...
At work I often run into issues with "random project X on GitHub" and do a bit of drive-by fixes. But many times when I find some Go project, I have no idea how to get from a fresh repo to running tests. And if that requires a lot of time to figure out, you're getting a vague issue that I may not want to spend time on, rather than a ready PR. Unfortunately this happens for go projects much more than other languages in my experience.
I'm not sure that's particularly true. Getting started is easy: 1. Install go. 2. mkdir -p ~/go/src/yourproject 3. Start writing Go in yourproject. go build will build your code. go install will put your executable in ~/go/bin. When you need some code from github, run "go get" and don't worry about where it goes. You're just starting out.
Compare getting started with Python: 1. Install python. 2. Create your project directory anywhere you like. 3. Start writing python. Python will run your code. pip will install libraries if you ask for them. Don't worry about where they are going, you're just starting out.
Other than "choice of directory" that's not very different.
Industrial-strength dependency management is harder than that, but you can retrofit it on later quite easily in Go. Both dep and vgo (from personal experience, I assume others too) can examine a project that is a mess of hand-vendored directories and references to the general package space and extract out a manifest file for you, which is probably exactly what you want, since it will reflect the current project.
It may actually even be easier than Python here, since Go source can be examined and the exact packages you are using confidently and accurately plucked out; I don't recall if pip has a "just examine this directory and freeze its exact dependencies" command and blew out my "HN comment time budget" trying to google the answer. In Go, you can just cruise along for a long time, using dozens of packages, and drop dep or vgo in at the last minute and get functional dependency management on your existing project in one command. I've converted two projects that ended up using over 30 packages (at least a middlin' size in the Go world) each to dep in the ~5 minutes it took "dep init". (It checks things out fresh, so that's mostly source code retrieval.)
Well, not that good. Decent would be the best one could say. It has many special cases and warts, and lacks a few very standard features. But let's not get into that here.
>because getting started with it is downright PAINFUL compared to Ruby or Python
If you mean regarding package management, maybe. In other terms, not really painful at all. It's one of the easiest languages to just download, code, and build your code -- you can even cross-compile with like 2 extra ENV settings.
Is this about being used to dynamic programming and not groking types and "pointers" in Go?
About the tooling?
About the lack of certain features like generics?
But for all the good this attitude brings (and I would definitely regard it as a net benefit) there have been some pretty huge missteps too, and the Go team's prioritisation of things like type aliases or the half-finished plugin architecture over those missteps leads many to ask reasonable questions about how those priorities are decided.
GOPATH is a long-acknowledged mistake and a lot of community effort and consensus-building work has gone into finding a solution. vgo was a bit of an ambush and has left a lot of people confused and bewildered, especially after a huge amount of community goodwill and momentum was built up around dep. That was a price Russ Cox decided to pay with vgo, which was his prerogative. Now we just need to wait and see whether it was worth it, which will take some time.
If you haven’t yet read it, start here: https://blog.golang.org/versioning-proposal
vgo is the only reason I am willing to give Go the time of day in 2018.
This seems to be a recurring chestnut, but I would have expected the author to actually test it. If I search for [go open file], 8 of the first 10 results on Google show useful information.
| |- config/
| |- db/
| |- server/
| |- router/
| |- handler/
| | |- get/
| | |- put/
At various points you'll probably find that sub-packages become general enough that they can be easily broken out into their own repos and maintained separately. This can wind up being a good goal to shoot for when designing the APIs for each sub-package.
Btw what kind of info would you like to see in a write up about this?
Example: MVC was one of the go-to choices some years ago, with folders /models, /views, and /controllers where everything was put. But nowadays we know that it can get messy, especially when there are too many domains. Besides that, MVC isn't appropriate for the REST world (the V is not suited for it).
In these environments, you usually end up with an opinion anyway, so they built it right in, so people could skip the debate stage.