Go's default toolchain is fine, everything else is optional. Some questionable advice in the article:
- Vendoring dependencies using "go mod vendor" is not a good default workflow - it bloats the repo, the checked in code is impossible to review, and is generally a pain to keep up to date. Don't, unless you really have to.
- There's no point in stripping a binary or even using UPX on it unless you're targeting extremely low memory environments (in which case Go is the wrong tool anyway), all it'll do is make it harder to debug.
I'm on the vendor bandwagon; always have been. I don't want a github outage to dictate when I can build/deploy. Yes, that happened. That is why we vendor :).
Now you can set up a proxy server; however, I don't want to do that. I'm pretty sure I have a few vendored packages that no longer exist at their original import path. For code reviews, we put off checking in the vendor path til the end if possible.
I have to strongly agree. Third party repos move, code on the internet disappears or silently changes, connectivity goes away at the most awkward time. You always want a point-in-time copy of your code and all dependencies under your control. Sometimes even for legal or security reasons.
Always vendor your dependencies in your private Git repo or a proxy you control. Or heck, even in some long term backup solution if you must. Experience trumps theory.
> I don't want a github outage to dictate when I can build/deploy. ...I'm pretty sure I have a few vendored packages that no longer exist at their original import path.
Golang now has an automatic, transparent caching proxy at proxy.golang.org (the default GOPROXY). If your build has ever worked, it should continue to work even if the original source goes away. Your build should only break if proxy.golang.org goes down and the upstream source is unavailable (down or moved) at the same time.
I do all my vendoring through a "cache-proxy" thing (for lots of vendors). That box always runs; I only need upstream the first time I fetch a package. It doesn't bloat my code, makes sure the package stays available, and makes audits of vendored stuff easy.
UPX only means smaller files on disk, but it comes at a cost: it tends to increase memory requirements, because the compressed binary on disk can no longer be mapped into memory directly, unless it's decompressed somewhere in the filesystem first.
Worse, if you run multiple instances of the same binary, none of them can be shared.
A bit simplified: without UPX, 100 processes of a 100 MB binary require only 100 MB of RAM for the code, but with UPX it's 10 GB.
Edit: In reality, likely only a fraction of that 100 MB needs actually to be mapped into memory, so without UPX true memory consumption is even less than 100 MB.
All true, but I think a compressed iso9660fs can actually support dynamic paging - the pages are decompressed into memory, obviously, but can be demand paged without staging them to media.
Can you expand on this a bit? I use upx at work to ship binaries. Are you saying these binaries have different memory usage upx’d than they do otherwise?
Normally the operating system simply maps binaries, executables, and loadable libraries (.dylib, .so, .dll, etc.) into memory. The cost is approximately the same whether you do this once or 1000 times. The code is executed from the mapped area as-is.
However, when a binary is compressed, this cannot work, because in the file the binary is represented as compressed data. The only way to work around this is to allocate some memory, decompress the binary there, mark the region as executable, and run it from there. This results in a non-shareable copy of the data for each running instance.
Also impacts startup time. Really it's only appropriate for situations like games where you're very confident there will be just one instance of it, and it'll be long-running.
And even then, it's of dubious value when game install footprints are overwhelmingly dominated by assets rather than executable code.
I'm curious, was the practice of using upx there before you got there? We generally A/B test changes like this pretty thoroughly by running load tests against our traffic and looking at things like CPU and Memory pressure in our deploys.
While there are valid arguments against vendoring dependencies, I’m not convinced this is one of them in the typical case. It’s exceptionally easy to ignore certain directories when reviewing PRs in GitHub (although I still wish this was available as a repo-level preference), and I’d hope at least this would be the same in Gitlab, BitBucket, etc. I don’t review vendored dependencies, and I wouldn’t expect anyone else to, although the utility of that is admittedly domain-dependent.
Go also has the benefit that its dependencies tend not to be in deep chains, so the level of repo bloat when vendoring is usually not too terrible, at least relatively speaking.
Yeah, if you have a problem with it split it into two separate commits to review separately.
But WTF is this about not reading your dependencies. Read your dependencies! It is the most amazing superpower: someone says "Uh, I don't know how Redux handles that" and you can just tell them, because you have read Redux. And that's also how you'll know: hey, do they have tests, are they doing weird things with monkeypatching classes or binaries at runtime, "oh the request is lazy, it doesn't get sent unless you attach a listener to hear the response," what would it look like for the debugger to step through their code, and is that reasonable for me to do or will I end up 50 layers deep in the call stack before the code actually does the thing.
I get it, this dependency is 100,000 LOC and if you printed it out that's basically 5 textbooks of code, you'd need a year to read all of that and truly understand it... Well, don't use that dependency! "But I need it for dependency injection..." I mean, if that's all, then use a lightweight one, or roll a quick solution in a day, or explicitly inject your dependencies in 5 pages, or or or. My point is just that you have so many options. If that thing is coming in at 5 or 50 textbooks or whatever it is, what it actually means is that you are pulling in something with a huge number of bells and whistles and you plan on using 0.1% of them.
In this context, what would be useful is something like a linker-pruning at the source level.
That is, when your code is compiled, the linker can prune code that is never called. Then a feedback mechanism could show which part of the code is actually used (like looking in the .map of the linker).
Google's Closure compiler was doing this for JavaScript, where it matters because network bandwidth is a limited resource in some places. There it was called “tree shaking” if you want the jargon name for it.
There is a benefit to using "go mod vendor". Some corporate environments lock down their CI/CD pipelines. By vendoring everything, the CI/CD does not need to make external HTTP calls.
So, I don't bother with vendoring my dependencies ( usually ), but you have it the wrong way round.
Vendoring would make it more likely you're gonna review the changes, because you can quickly eyeball whether or not changes look significant, which is something you often won't get out of a go.sum change.
That's not totally without cost though, as it can break workflows that cherry-pick commits between branches, e.g. a main/master branch vs. stable release branches.
I don't think anyone is saying it's without cost, just that there are certain circumstances where you might want to bear the cost.
There's a generic question of how you build confidence in your dependencies not being compromised, and there are steps you can take to mitigate that without reading code, but if everyone adopted that stance then we'd likely have no mitigations.
If the problem is distribution, what's wrong with gzip? All the upside of UPX and none of the downsides. If your distribution method is HTTP, then you don't even have to write any code other than setting a Content-Encoding header.
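For the HTTP case, a minimal sketch of what that could look like (paths and route names are made up): compress the binary once at build time and let the header tell the client to decompress.
    package main

    import (
        "net/http"
        "strings"
    )

    func main() {
        http.HandleFunc("/download/mybin", func(w http.ResponseWriter, r *http.Request) {
            if strings.Contains(r.Header.Get("Accept-Encoding"), "gzip") {
                w.Header().Set("Content-Encoding", "gzip")
                w.Header().Set("Content-Type", "application/octet-stream")
                http.ServeFile(w, r, "dist/mybin.gz") // compressed once at build time
                return
            }
            http.ServeFile(w, r, "dist/mybin") // fallback for clients without gzip
        })
        http.ListenAndServe(":8080", nil)
    }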
I don't really believe that; at modern NIC speeds it makes pretty much zero difference even on 30k servers. Shaving a couple of ms, at worst a few seconds, vs. modifying a binary, definitely not worth it.
The servers are not all on gige. Many are on 100mbit and yes, that saturates the network when they are all updating. I learned through trial and error.
The updates are not pushed, they are pulled. Why? Because the machines might be in some sort of rebooting state at any point. So trying to first communicate with the machine and timeouts from that, would just screw everything up.
So, the machines check for an update on a somewhat random schedule and then update if they need to. This means that a lot of them updating at the same time would also saturate the network.
I’m curious why you’ve got servers on 100Mb. Last time I ran a server on 100Mb was more than 20 years ago. I remember the experience well because we needed AppleTalk support which wasn’t trivial on GbE (for reasons unrelated to GbE — but that’s another topic entirely).
What’s your use case for having machines on 100Mb? Are you using GbE hardware but dropping down to 100Mb, and if not, where are you getting the hardware from?
Sounds like you might work in a really interesting domain :)
Not the GP but edge devices on wifi/m2m are another scenario where you're very sensitive to deployment size.
Which can also be solved with compression at various other stages of the pipeline as mentioned by other commenters, but just to say that that's an easy case where this matters.
For large-ish scale distributed updates like that, maybe some kind of P2P type of approach would work well?
IBM used to use a variant of Bittorrent to internally distribute OS images between machines. That was more than a decade ago though, when I was last working with that stuff.
Another issue with that is that the systems I was running can go offline at any time. P2P, which could work, kind of wants a lot more uptime than what we had. It would just add some complexity to deal with individual downtime.
CI would run, build a binary that was stored as an asset in github. Since the project is private, I had to build a proxy in front of it to pass the auth token, so I used CF workers. GH also has limitations on number of downloads, so CF also worked as a proxy to reduce the connections to GH.
I then had another private repo with a json file in it where I could specify CIDR ranges and version numbers. It also went through a similar CF worker path.
Machines regularly/randomly hit a CF worker with their current version and ip address. The worker would grab the json file and then if a new version was needed, in the same response, return the binary (or return a 304 not modified). The binary would download, copy itself into position and then quit. The OS would restart it a minute later.
It worked exceptionally well. With CIDR based ranges, I could release a new version and only update a single machine or every machine. It made testing really easy. The initial install process was just a single line bash/curl to request to get the latest version of the app.
I also had another 'ping' endpoint, where I could send commands to the machine that would be executed by my golang app (running as root). The machine would ping, and the pong response would be some json that I could use to do anything on the machine. I had a postgres database running in GCP and used GCP functions. I stored machine metrics and other individual worker data in there that just needed to be updated every ping. So, I could just update a column and the machine would eventually ping, grab the command out of the column and then erase it. It was all eventually consistent and idempotent.
At ~30k workers, we had about 60 requests per second 24/7 and cost us at most about $300 a month total. It worked flawlessly. If anything on the backend went down, the machines would just keep doing their thing.
Sounds like an interesting problem to have. Would something peer-to-peer like BitTorrent work to spread the load? Utilize more of the network's bisection bandwidth, as opposed to just saturating a smaller number of server uplinks. I recall reading many years ago that Facebook did this (I think it was them?)
> Vendoring dependencies using "go mod vendor" is not a good default workflow - it bloats the repo, the checked in code is impossible to review, and is generally a pain to keep up to date. Don't, unless you really have to.
Vendoring dependencies is a nice way of using private Go repositories as dependencies in CI builds without importing any security keys. Vendor everything from dev machine, and build it in CI. You don't even need an internet connection.
Sure, it makes sense. But that's another moving part in the machinery that you have to configure and maintain. It also makes sense to just keep things simple and vendor dependencies, sacrificing some extra space for simplicity of configuration. It just depends on what tradeoff you're looking for.
A Go vendoring pattern that I've found very useful is to use two repositories, the first for the main "project" repository, then a second "vendoring" repository that imports the first as a module, and also vendors everything.
This may require a few extra tricks to plumb through, for example, to make all cmd's be externally importable (i.e. in the project repository, transform "cmd/foo/%.go" from being an unimportable "package main" into an importable "cmd/foo/cmdfoo/%.go", then have a parallel "cmd/foo/main.go" in the vendoring repository that is just "func main() { cmdfoo.Main() }", same as you have in the project repository in fact).
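A rough sketch of that plumbing (module path and package names are hypothetical):
    // Project repository: cmd/foo/cmdfoo/main.go (importable)
    package cmdfoo

    // Main holds the actual CLI logic so other modules can import it.
    func Main() {
        // ... real work ...
    }

    // Project repository: cmd/foo/main.go
    package main

    import "example.com/project/cmd/foo/cmdfoo"

    func main() { cmdfoo.Main() }

    // Vendoring repository: cmd/foo/main.go is the same one-line wrapper,
    // with example.com/project required in go.mod and `go mod vendor` run.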
Vendoring aside, this is also a useful pattern if you're "go:embed"ing a collection of build artefacts coming from another source, like a frontend HTML/JS/CSS project.
At this point, why not do the clean thing and have a forked repo per dependency. Setting up your "monorepo" like construct is as easy as a gitignore and a json file listing your dependencies and the specific hash, then have a script pull them and do a checkout.
This lifecycle is vastly cleaner and easier to update/control than vendoring, and also forces you to actually have explicit copies of everything your build needs in the same way that vendoring does, but in a cleaner, separated, traceable, manageable way.
> - Vendoring dependencies using "go mod vendor" is not a good default workflow - it bloats the repo, the checked in code is impossible to review, and is generally a pain to keep up to date. Don't, unless you really have to.
Go's setup is that if you don't vendor your dependencies then your build might break at any time, no?
> proxy.golang.org does not save all modules forever. There are a number of reasons for this, but one reason is if proxy.golang.org is not able to detect a suitable license.
If you're vendoring something without an appropriate license, you're skating on thin ice legally.
That's just one possible reason. The disclaimer does not specify all the possible reasons the proxy would drop a saved version. Treating it more like a cache seems appropriate.
Unless you're doing something stupid like "create a clean virtual environment for every build" then yea your build might break if you lose the internet or the packages disappear. Just don't ever do that stupid thing.
You're not expected to review the committed dependencies any more than you're expected to review the external repositories every time you update go.mod/sum. If you don't care, just ignore those parts - if you do care, you were already doing it.
I'd go way farther than "a bit of a project smell." I literally cannot think of a single instance in which vendoring a dependency for any reason other than, say, caching it for CI so you don't have to worry that the maintainer pulls a `left-pad` on you, has gone well.
If the package has bugs, you're far better off either waiting for upstream fixes, working around the bug in your application code, or just switching to a different library. That goes double if the library you're using is missing a feature you need, even if it's scheduled for the next version release.
Unless you're prepared to maintain a full-on fork of the dependency (and, if you do, please make it public), everything about vendoring for these reasons is 100% bad for you for very little incremental benefit. It's like the joke about regular expressions ("You have a problem and think 'I'll use regexes to solve it.' Now you have two problems"), except it's not a joke, and it sucks way more.
TL;DR: Vendoring to cache for CI/build servers, yes. Any other reason, just don't; it's not worth the headaches.
I've built many CLIs with Cobra and haven't found it all that intense. I've built incredibly simple, single function CLIs up to some incredibly advanced CLIs that the entire company relies on daily.
I like Cobra because it gives you a great place to start, with tons of stuff "for free". Things like spellcheck on commands: if you type "mycli statr", you might get a response "mycli statr command not found. Did you mean mycli start?". This is out of the box; I don't do a single thing to create this functionality. Really nice help pages and automatic --help flags with autodocumentation all come for free, just by running a simple init script to start the project. It speeds up my ability to make a reliable CLI because I don't need to worry about a lot of the little things. I basically just create the command I want, choose the flags, and then start writing the actual code that performs the task, and don't have to write very much code to manage the tool.
I usually organize my project so all the Cobra stuff is in the main module. Then I write my own custom module that contains the application code that I am building. It creates great separation between the two. The main module just has Cobra setup, flags, configuration, documentation, etc. Then for each command, all I do is call a function from my module, where all the logic resides.
This makes it easy for me to switch between a "Cobra" context and my "Application" context depending on the module. It also makes it portable. If I want to use a different CLI framework or make this into a backend job, I can pull my module into a different project and then call the functions from that other project that reside in my module. The module is unaware of Cobra entirely, but it performs all my logic. The Cobra module (the main module) contains only Cobra stuff and then offloads all the logic to my application module.
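A minimal sketch of that split, with made-up names (app.Start stands in for whatever function holds the real logic):
    // main.go: only Cobra wiring lives here.
    package main

    import (
        "os"

        "github.com/spf13/cobra"

        "example.com/mycli/internal/app" // hypothetical package with the application logic
    )

    func main() {
        var port int
        rootCmd := &cobra.Command{Use: "mycli", Short: "Example CLI"}
        startCmd := &cobra.Command{
            Use:   "start",
            Short: "Start the service",
            RunE: func(cmd *cobra.Command, args []string) error {
                return app.Start(port) // all logic lives in the app package
            },
        }
        startCmd.Flags().IntVar(&port, "port", 8080, "port to listen on")
        rootCmd.AddCommand(startCmd)
        if err := rootCmd.Execute(); err != nil {
            os.Exit(1)
        }
    }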
Cobra has all the power you could want from even the most advanced CLIs (I think github's CLI and Kubectl, kubernetes cli are both built on it for example). But you don't need to use any of the advanced stuff if you don't want. It means there is a lot of confidence to build a project in cobra and if it grows you won't be limited, but it also abstracts the complexity away when the project is simple.
I don't have a dog in this fight, just a fan. It is a tool I really appreciate. I will check out subcommands though, it looks like a good project. Reminds me of "click" for python.
hmm, I recently finished making a CLT with cobra and it wasn't too bad. Granted, it's extremely basic, but I like it for my use case (cloning all the repos in an org):
What did you find complicated about it? What I struggle with is creating man pages automatically, although I did just find where in the repo this is explained.
subcommands does look neat tho, I'll likely use it for another tool I have in mind.
One thing I did not realize is that cobra and charmbracelet/bubbletea are not compatible. It may be my inexperience in making CLTs vs TUIs, but I was disappointed I couldn't easily use, say, the loading spinner from bubbles in my tool (I opted for briandowns/spinner instead).
i don't like this post because it makes golang feel overwhelming when the stdlib + default tooling is plenty good for most use-cases. it's as if someone made a post called "how to go hiking in 2023" and spent 10 pages linking to gear on amazon.
how should you actually start hiking? grab a water bottle, get outside, and hike.
here is how you should _actually_ start a go project in 2023:
$EDITOR main.go
go mod init hello   # any module name will do; `go run .` needs a go.mod
go run .
everything else you should add as needed. don't overcomplicate things.
Yeah, for a basic Go service or tool I don’t think you usually need anything besides the standard lib. Maybe you will want to use a client library but most of the time it’s only thinly wrapping various http functionality. I work on some go binaries used at massive scale that have little/no dependencies.
On the other hand, the article contains information about things you are likely to need. This is the exact article I would want to present to someone new to the language. It isn't recommending every tool in existence (well, until you reach the very end) -- it's simply giving information about what you'll almost certainly need to know
> the article contains information about things you are likely to need
Then maybe write something on the specific topic in detail rather than bundling them into a How-To tutorial. A How-To tutorial is supposed to show "how to" do something correctly; it's meant to teach rather than showcase.
I do understand that the author was writing this article with best intentions in their mind, but the resulting article is not a How-To, rather, it's a How-Do-I which is opinionated.
I think when reading articles like this, it is important to remember that, sometimes, more is less.
You import this many tools into your project; many of them are unnecessary and won't help you complete the project. Now they've been downloaded and installed, maybe they're even interfering with other tools. You look at them and start to think "hey, maybe I should learn to integrate and utilize them". Then you waste an afternoon trying to utilize a tool, only to realize by the evening that in order to use it correctly, you must restructure your project.
This is not how you finish things, you know? If you want to write a new project, just `go mod init` it and write the code. And during the writing, if you find the need for some tool, introduce those tools one by one to fulfill the need. Don't download tools or create "project layouts" just because some tutorial said so.
Along with the issues listed here you will run into issues with editors not building/linting your tests files because they have build tags that the editor is unaware of.
You can also put the environment variable in a TestMain[1] to cover an entire package of integration tests:
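Something like this, assuming an INTEGRATION environment variable as the gate:
    package mypkg_test

    import (
        "fmt"
        "os"
        "testing"
    )

    func TestMain(m *testing.M) {
        if os.Getenv("INTEGRATION") == "" {
            fmt.Println("INTEGRATION not set; skipping integration tests")
            os.Exit(0) // skips every test in the package
        }
        os.Exit(m.Run())
    }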
Personally I like putting them in the tests, with t.Skip (frequently further in the code, in whatever sets up test dependencies that makes it an "integration" test, so it's automatically skipped).
That way you can blend unit and integration in the same package.
Well, you can configure your editor to lint your test files with build tags and that's the end of the issue; at least on VSCode and Goland you can. I think it's way cleaner to have your integration tests behind a build tag rather than this extra piece of code.
If you don't have tests all over the place and you are moderately organized with your folder structure, there is no reason why build tags should represent a discoverability issue.
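For comparison, the build-tag approach looks roughly like this (the tag name "integration" is just a convention):
    //go:build integration

    package mypkg_test

    import "testing"

    func TestAgainstRealDatabase(t *testing.T) {
        // Only compiled and run with: go test -tags=integration ./...
    }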
Pretty good article. One comment: it recommends zerolog for logging, but recently slog [1] has started to become part of the standard lib. I guess it’s the future.
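For anyone who hasn't tried it yet, a minimal sketch with the go1.21 log/slog API (handler and attributes chosen arbitrarily):
    package main

    import (
        "log/slog"
        "os"
    )

    func main() {
        logger := slog.New(slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{Level: slog.LevelInfo}))
        logger.Info("request handled", "method", "GET", "status", 200)
    }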
I used slog for a project a few months ago; then I stopped working on it and continued on it a few weeks ago and there were all sorts of incompatible changes.
That's completely fair; it's still in development so this isn't a complaint! But just saying, at this point you need to be prepared to have to deal with that.
FWIW: slog has been pretty stable for a month or two, and should be officially standard library in go1.21
There was a last round of changes mostly revisiting use of contexts a few months ago - hats off to jba for taking a lot of time to work out the best fit
Interesting. Though bleh, I would LOVE it if Go would stop releasing things like this with global default values - it leads to tons of libraries not building a way to pass in specific loggers. Better to cut that off at the head.
Note that statically linked go binaries work in a docker image from scratch. This can be created with a multi-stage build where the builder image uses whatever OS you prefer, with any required packages. A build line such as
RUN CGO_ENABLED=0 go build -o mybin -ldflags '-extldflags "-static"' -tags timetzdata
And the second stage like
FROM scratch
COPY --from=app-builder mybin mybin
ENTRYPOINT ["/mybin"]
The builder can create users and groups, and the final image can import necessary certs like so:
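A sketch of that part, assuming a Debian-based builder stage named app-builder:
    # builder stage: create a non-root user the final image will reuse
    RUN useradd -u 10001 appuser

    # final stage: copy over the user database and the CA bundle
    COPY --from=app-builder /etc/passwd /etc/passwd
    COPY --from=app-builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
    USER appuser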
The article mentions GOW[0] for a file watcher. If anyone is looking for a non-go specific one, I've really enjoyed reflex[1]. Makes it super easy to reload different parts of a project based on what type of file has changed.
One minor thing: I've skipped using build tags for integration tests because those tests will get out of sync with your main code one day, even with Goland (?).
Instead I use a regular test and check whether an environment variable is set; if not, then
t.Skipf("env var %q not set, skipping integration test",envVarName)
or you can use an additional CLI flag, e.g. in `feature_test.go` write
var flagIntegration = flag.Bool("test.integration", false, "run int tests")
additionally, if it's an integration test, you may want to always run with `-count=1` at least. e.g. if you use a DB, you certainly want to not skip any cached tests when the schema changes, etc.
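Putting the flag variant together, roughly (flag name is arbitrary; `-args` hands everything after it to the test binary):
    // feature_test.go
    package mypkg_test

    import (
        "flag"
        "testing"
    )

    var integration = flag.Bool("integration", false, "run integration tests")

    // Run with: go test ./... -args -integration
    func TestFeatureAgainstRealDB(t *testing.T) {
        if !*integration {
            t.Skip("-integration not set, skipping integration test")
        }
        // ... hit the real database here ...
    }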
Go is pretty mainstream now, so if your editor is mainstream too, just keep using it and add its Go support; you don't need to pick a Go-optimized editor. The basic thing I think you really want is something like `goimports` (which you can just install), which automatically manages your imports for you and takes out 80-90% of the pain of Go's library usage policy.
I see the note in the article around using -ldflags="-s -w" - is there any other useful tool for binary size analysis/reduction? I was surprised when my binary size doubled when incorporating the K8s client package to get a secret; just using the HTTP secrets API manually without referencing the client package shrank the size by many MB. It would be nice to find similar opportunities for size reduction that aren’t as obvious.
# show the impact of cutting any package
goda cut ./...:all
which prints a sorted ASCII table with stats like 'size:4.4MB loc:134171' for each package, which is an estimate of the savings you'd get if you eliminated that package from your binary. That is a great way to see what is unexpectedly large compared to its value.
goda has a bunch of other capabilities around dependency analysis, and was written by long-time Go contributor Egon Elbre. The examples in the README are the best way to get started after 'go install github.com/loov/goda@latest'.
The go k8s packages are pretty bloated - this may also just be a niche case. If you are looking to get secrets with hot reloading, you might also consider mounting a file or setting env vars and coupling it with this reloading operator: https://github.com/stakater/Reloader
go-binsize-treemap[1] is the best tool for this by a large margin. I came across it because of the exact same reason as you did actually, k8s client bloating my binary massively.
A better answer would be some recommendations for component pieces. e.g., I will need most of the essential things in https://github.com/go-chi/chi, so why bother rolling a version myself? The same goes for things like sqlx. I'm averse to leaning on a "framework," but do find good value in targeted libraries.
This is just generally bad advice. The code that gets written at every single project that "doesn't need" web frameworks is just a worse, less secure, more error-prone version of the code that comes in every well-supported web framework.
That’s quite true, although for routing / muxing I do tend to use a third party one. The numerous Go web frameworks are solely created for the glory of the authors.
And in general, one does not need frameworks in Go to the same degree as say Java or C# - it is an easier language to build things from scratch with I think.
Having worked on a large golang project that did not use any "frameworks", it gets clumsy quickly. There's nothing special in golang that makes it not need a framework.
We ended up moving the project to a DI framework with an ORM-ish library since things got out of hand.
I see $GOPATH is no longer strictly needed, which is nice. But what's the deal with $GOROOT? I seem to always need it set but I don't know if that's just my workflow or force of habit.
It's either habit or you are doing something out of the ordinary with your Go installation. The standard installation of Go has not required GOROOT for a very long time.
One thing for profiling HTTP services specifically, you can attach handlers for pprof data easily [0]. I usually only mount the routes if I've set a flag for it, usually something to indicate I want to run in debug mode. This does everything "for free", i.e. it starts profiling memory and CPU and then exposes the data on routes for you to visualize in the browser.
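Roughly what that looks like with net/http/pprof and an explicit mux (the flag is made up):
    package main

    import (
        "flag"
        "net/http"
        "net/http/pprof"
    )

    func main() {
        debug := flag.Bool("debug", false, "expose pprof endpoints")
        flag.Parse()

        mux := http.NewServeMux()
        mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("ok"))
        })

        if *debug {
            // Mount the pprof handlers only when explicitly asked to.
            mux.HandleFunc("/debug/pprof/", pprof.Index)
            mux.HandleFunc("/debug/pprof/cmdline", pprof.Cmdline)
            mux.HandleFunc("/debug/pprof/profile", pprof.Profile)
            mux.HandleFunc("/debug/pprof/symbol", pprof.Symbol)
            mux.HandleFunc("/debug/pprof/trace", pprof.Trace)
        }

        http.ListenAndServe(":8080", mux)
    }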
I would like to mention Magefile. We recently have been using it over makefiles and it has been amazing. Removes more non-go dependencies. You write your files in Go, and it all works really well.
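For anyone curious, a magefile is just Go behind a build tag; a minimal sketch (target names and commands are made up):
    //go:build mage

    package main

    import "github.com/magefile/mage/sh"

    // Build compiles the binary. Run with `mage build`.
    func Build() error {
        return sh.Run("go", "build", "-o", "bin/app", ".")
    }

    // Test runs the test suite. Run with `mage test`.
    func Test() error {
        return sh.Run("go", "test", "./...")
    }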
Genuine question.. I'll often see Golang pretty heavily criticized here on hacker news. Either that or people say it's a boring language and not worth learning when there is something more interesting (usually referring to Rust or Zig).
Why does it have such a bad image?
Personally i like it as an alternative for python because:
1. It can build binaries that just work for most architectures, quickly.
2. It has nice C-like syntax without some of the headaches.
3. It seems to be really nice for creating APIs or backends, especially for web projects. Lately I use it as an alternative to PHP, to build MPAs which are enhanced with htmx.
4. It seems very beginner friendly, easy to start with, and has a non-gatekeepy community.
There are also some things i don't like so much such as:
1. Goroutines and other Go-specific stuff.
2. The dependency system requiring full import paths with URLs.
3. The strictness about syntax etc. The fact that saving a file with an unused import will remove it in the IDE.
But it overall seems much nicer than running node/js or python on the server side, no?
I think it appeals to cynical devs who have seen projects misuse more powerful languages, and who don't want to debate style guidelines or linter settings for any more than 5 minutes. I count myself among them.
Aside from offering nothing new, its design was wilfully, explicitly anti-intellectual. Once you've used an expressive language, having to copy-paste boilerplate becomes very painful. And there's no real USP except Google backing, so it's pretty disappointing to see it beat out better-designed languages.
I didn't mean globally (though on a quick search I'm seeing it place above e.g. Swift, Kotlin, Dart, and Ruby), I meant it's frustrating to see Go be chosen over a better-designed language "in the small", for a specific product or SDK.
There are a lot of foot guns. A nil slice? Fine. Writing to a nil map? Panic. Loop variables captured by closures. The list goes on.
Generics seemingly split the community. Maybe some libraries won't get used because they picked the wrong side.
It’s surprisingly weak at modeling data. Union types would really help out.
The community is so anti-design that it's hard to play with them. Most want to make a big ball of mud and call it agile. When you point out simple patterns, they call you an Architect Astronaut. Check out r/golang. Also look out for people telling you how dumb you are for wanting generics.
In many cases it’s a step backwards but it has the positives you posted. That is often a reason to grin and bear it. Eventually Stockholm Syndrome kicks in.
> Generics seemingly split the community. May be some libraries won’t get used because they picked the wrong side.
I haven't really observed that at all.
One thing that is going on is there hasn't been a massive disruption while everyone stops to rewrite the world in generics, and generics are not suddenly everywhere, which is what some people had predicted would happen. I think part of the reason is that in some cases another solution (closures or interfaces or whatever) can be a better fit, and the evolutionary approach to generics that Go took means you can use generics in conjunction with non-generic libraries or other pre-existing approaches without suffering from an ecosystem split.
Golang is IMO the best applications language. Most of my criticisms of it would be that it makes some systemsy things clunky, and because of garbage collection it just isn’t ideal for some systemsy stuff.
I personally hate the empty interface and definition shadowing of Go but that could be just me not “getting it”. Fortunately at work we don’t use that too often
I think most of the criticism is from people like me coming from C++. I am continually baffled that people write web backends in Python and Node at all, to me they seem so inappropriate that criticizing them would be a waste of time. I would consider Go to be much much better overall, and thus worthy of actual criticism
> I think most of the criticism is from people like me coming from C++. I am continually baffled that people write web backends in Python and Node at all, to me they seem so inappropriate that criticizing them would be a waste of time
Care to elaborate? I'm curious what's wrong with either for web backends
It is boring, and that is the appeal for most of its proponents, I think. Obviously that doesn't appeal to the nerdier side of programming, but if your goal is to "write programs" as opposed to "do programming for programming's sake", it gets out of your way most of the time to enable that, in my experience.
My favourite languages are Common Lisp and OCaml but I've found Go surprisingly useful professionally. It's easy to onboard new team members, even those without prior experience, and some made contributions the very first day. The language handles complex workloads without much need for tuning or premature optimization and has sensible garbage collection defaults for latency. My main concern is an ongoing trend towards "Java-ification", like watching your favorite punk band start to sound suspiciously like Nickelback, but I think it's still a pretty good language. Go is like one of those "so bad it's good" movies.
> But it overall seems much nicer than running node/js or python on the server side, no?
No, it's just trade-offs.
I think you are making the same mistake by looking for validation on HN that you're making some sort of Better Choice, but you're just making a normal choice. You just don't yet have the experience to see all the trade-offs nor how they compare to, say, Node or Python.
For example, there are various ways Node is "nicer" than Go on the server. Just compare things like Promise.all or a concurrency-limited Promise.map to Go's WaitGroups.
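For the comparison's sake, the Go side of a "Promise.all" usually ends up looking something like this (a sketch with a stand-in fetch function, error handling simplified):
    package main

    import (
        "fmt"
        "sync"
    )

    func fetch(url string) (string, error) { return "body of " + url, nil } // stand-in

    func main() {
        urls := []string{"https://a.example", "https://b.example"}
        results := make([]string, len(urls))
        errs := make([]error, len(urls))

        var wg sync.WaitGroup
        for i, u := range urls {
            wg.Add(1)
            go func(i int, u string) { // pre-1.22: copy loop variables explicitly
                defer wg.Done()
                results[i], errs[i] = fetch(u)
            }(i, u)
        }
        wg.Wait()

        for i, err := range errs {
            if err != nil {
                fmt.Println("failed:", urls[i], err)
            }
        }
        fmt.Println(results)
    }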
But I'm not really making a choice. I'm open to anything, i don't want to restrict myself. I have and do use the things i mentioned a lot (node/python).
I was just curious what the people have to say since i noticed this recently. I don't really need the validation since I'll try out all available options due to curiosity anyway.
HN loves playing "obligatory contrarian" so often commenters will go to lengths to find faults
there is nothing wrong with Go; it delivers on its promise, you don't need to be a genius to use it, has good community support, and you can get access to a large and decent job market
Rust is a great tool but isn't as purpose-suited to network services as Go
Zig is even less purpose-suited to writing network services and won't be at Go's level of maturity for years, if ever
If a backend dev could only know one language in 2023, it would be hard to go wrong with Go
+1. In terms of development speed, Node + Axios is lightyears ahead. It's like 5 lines of code to send a JSON payload via http, vs 15+ in Go. The Javascript version is much likely to be correct as well, since it doesn't let you forget to check any of the three errors, or forget to check the http status of the response.
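The Go version the parent is describing, more or less (URL and payload are placeholders):
    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "net/http"
    )

    func postJSON(url string, payload any) error {
        body, err := json.Marshal(payload) // error #1
        if err != nil {
            return err
        }
        resp, err := http.Post(url, "application/json", bytes.NewReader(body)) // error #2
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode >= 300 { // the status check that's easy to forget
            return fmt.Errorf("unexpected status: %s", resp.Status)
        }
        var out map[string]any
        if err := json.NewDecoder(resp.Body).Decode(&out); err != nil { // error #3
            return err
        }
        return nil
    }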
It's like WhatsApp vs BlackBerry Messenger: WhatsApp wins, and you start asking yourself what's making it win even though it's "inferior". Then you hear its secret weapon is "simplicity", try to mimic that secret weapon, and still wonder why things aren't going as expected. That's what's happening here: no matter how Go improves, people will still talk about its past failures along with its present ones. You don't have to worry; people will still use the language, and its competition.
Yeah, it's pretty much optimized for junior programmers to write babby's first enterprise network service in. It's got a lot of features junior programmers think are nice and easy to work with, but as you mature as a developer its verbosity becomes annoying and its shortcomings become apparent.
Using Go as a PHP alternative is pretty much the use case most aligned with its niche. So go nuts if you like doing that. But stray too far from that use case and Go will start to provide pain without adequate justification, especially when compared against Zig, Rust, or even TypeScript.
The post I'm replying to is downvoted and I should probably simply move on but there's key phrasing here I'd like to point out:
> optimized for junior programmers to write babby's first enterprise
This is the (toxic) attitude Go strives to distance itself from. There is no magic, we can all be equals in this place. It's humbling. I'm not aware of any other mainstream project that captures this essence so well.
There is power in a language equalizing things. If you aspire to wring elegance out of complex or esoteric language features then by all means have fun with that but I have no interest in working with you on that. Your definition of pain could not possibly be more diametrically opposed to mine.
I'm tired of people saying that there is no magic. How can you say that? Do you have any basis for it? Named returns, the compiler not ensuring that non-pointer receivers don't modify a property, bare-bones dependency management, a laughable implementation of errors; the list goes on...
> There is no magic, we can all be equals in this place.
...in the Harrison Bergeron sense.
The fact that Rust has attracted relatively inexperienced coders to do bare-metal, real-time programming shows that you don't need to nerf the language in order to appeal to interested developers of all skill levels.
Who said anything about attracting inexperienced developers? If anything I'd argue that's a negative for a healthy ecosystem, and I'd argue it has been a negative for Rust.
It's the junior engineers that most often struggle with trying to devise a way to use every language feature under the sun when solving a problem, not the other way around.
We created generic HTTP handlers that enable developers to write functions, methods, and data structures that operate on any type, rather than being limited to a specific type. https://blog.vaunt.dev/generic-http-handlers
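A sketch of the general idea (not necessarily the API from the linked post; names are made up):
    package main

    import (
        "encoding/json"
        "net/http"
    )

    // Handle adapts a typed request/response function into an http.HandlerFunc.
    func Handle[Req, Resp any](fn func(Req) (Resp, error)) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            var req Req
            if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
                http.Error(w, err.Error(), http.StatusBadRequest)
                return
            }
            resp, err := fn(req)
            if err != nil {
                http.Error(w, err.Error(), http.StatusInternalServerError)
                return
            }
            w.Header().Set("Content-Type", "application/json")
            json.NewEncoder(w).Encode(resp)
        }
    }

    type greetReq struct{ Name string }
    type greetResp struct{ Message string }

    func main() {
        http.HandleFunc("/greet", Handle(func(in greetReq) (greetResp, error) {
            return greetResp{Message: "hello " + in.Name}, nil
        }))
        http.ListenAndServe(":8080", nil)
    }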
Any de-facto service framework in Go to recommend, something like Dropwizard but supporting both gRPC and HTTP APIs? Such a framework should also have ready-to-use integrations for things like metrics, logging, tracing, etc. And God, please don't just support Prometheus. A pull-based `/metrics` endpoint is really not the best solution, at least not always.
In my own experience coming from a Java background, I find Go much easier to build from scratch with since the control flow is so plain and the standard library API is simple and well designed - worth trying.
I absolutely love the integration in VSCode with the official Go extension. I can debug a running web server with delve with minimal config. Same for tests. Just experiment with the options; there are quite a lot, and unfortunately some are not very well documented, like the gopls ones, at least last time I checked.
What's "most modern languages"? Go is better than Python's native tooling (only Poetry and other similar 3rd party tools compare). Javascript I find unruly and fragmented. "Modern" C++ is still a nightmare.
Java I haven't touched but I don't exactly hear rave reviews or angry rants about, so I expect it's middling.
Rust, Scala, and Haskell are definitely better experiences, but they are definitely in the minority in terms of industry usage.
Go is not "quite bad", in fact far from it. I'd say it's better than average.
> What's "most modern languages"? Go is better than Python's native tooling (only Poetry and other similar 3rd party tools compare). Javascript I find unruly and fragmented.
Exactly the scale I had in mind, thanks. When I saw 'go get <package>' rather than dependencies added to the equivalent of a Gemfile / cargo / pom file, I had concerns.
There's nothing stopping you from adding to go.mod though, you just have to update the sumfile, no different than using pyproject.toml directly vs adding with CLI.
I left Golang for a while, and when I came back it felt a bit complicated to figure out how to get started. I feel rustup.rs really got Rust to a much better spot than Golang in such a short amount of time.