Go best practices, six years in (bourgon.org)
440 points by krat0sprakhar on May 1, 2016 | 202 comments

The stages of Go enlightenment:

1. Holy crap, this is like coding in early Java days, what the heck were the language designers smoking?! They just ignored everything! Where's my testing framework! DI?! Build system, dependency management?! WTF.

2. Holy crap, this is like coding in the early Java days! This is awesome! I can understand all golang code I read! Everything is so simple and easy. I finally get "less is more", and "worse is better"!

3. (Many months later) Oh god, I'm getting so sick of writing this test assertion in 4 lines over and over, and writing 100s of lines of mock structs. My tests are 5% substance and 95% setup. Okay let's write a junit/mockito assertion library. Oh crap, I'm having to manually wire everything everywhere and it's such a pain. Let's write a DI library. Ugh my project is getting too big and `go get` just doesn't cut it - time to use a dependency management library. Ugh, makefiles only go so far, time to write a build tool. Ugh code generation and metacoding is nonexistent, let's write our own code generation tool so we don't have to hand build 100s of types. ETC.

4. (Distant future) As golang's stubborn designers finally accept some flaws in their original thinking, they add features to make the language look a lot like another widely used language at the Googleplex. Mission accomplished?

P.S. This is somewhat tongue in cheek - I do enjoy using golang day to day. It is a refreshing change :).

Java was:

1. Wow, this program runs on any platform!

2. Wow, no need to free memory!

3. Wow, threads and locks are so easy to do!

Then many years later

4. Who really cares about cross-platform code if all I do is on Linux?

5. Why does this program require 4GB of ram?

6. Must I really download and install the JVM all the time, and what is it good for anyway?


1. Wow, I can write stuff that previously could only be done in C/C++, and I can do it almost as easily as writing Python or Ruby.

2. Wow, I don't need anything installed for my program to run, it just runs!

> 4. Who really cares about cross-platform code if all I do is on Linux?

Deployment happens mostly on Linux, but development happens mostly on OS X and Windows. Having guaranteed behavior, compiling it only once, being able to deploy as a simple JAR file, the same JAR file that you tested locally, is awesome.

> 5. Why does this program require 4GB of ram?

Java is actually very memory efficient, all things considered. There are Ruby developers choosing JRuby for deployment because of that efficiency. By the time Go's GC reaches the same maturity, it will behave similarly.

> 6. Must I really download and install the JVM all the time, and what is it good for anyway?

The JVM ensures consistent behavior across all platforms and is able to do optimizations based on runtime profiling, with JVM bytecode acting as a very efficient ABI, making the JVM an efficient target for multiple languages, including Scala, Clojure, Groovy, Ruby, Kotlin, Ceylon and others, languages that have been able to bootstrap easily and have a huge ecosystem at disposal because of the JVM.

> 1. Wow, I can write stuff that previously could only be done in C/C++, and I can do it almost as easily as writing Python or Ruby.

Go is not a replacement for C/C++ because it has a GC. In fact it's impossible to use Go for the same use-cases where C/C++ is required. And if you can use Go, then you can use Java as well. And in fact, because Java is more mature and has alternatives like Azul's pauseless GC, it's more amenable for projects with soft real-time constraints.

> 2. Wow, I don't need anything installed for my program to run, it just runs!

That's cool sometimes, but Java is almost as ubiquitous as POSIX and HTML5. On Linux it's basically one "apt-get install" away, just like everything else. So the advantage of not requiring a runtime for me is, you know, meh.

>> 5. Why does this program require 4GB of ram?

> Java is actually very memory efficient, all things considered.

Huh? Considering what? Compared to what?

> There are Ruby developers choosing JRuby for deployment because of that efficiency.

Yeah, Java != JVM, and Ruby is not Go, so this is kind of moot.

>By the time Go's GC reaches the same maturity, it will behave similarly.

Says who? You can't just state that as a fact without backing it up somehow.

>Java is actually very memory efficient, all things considered

That makes me wonder about your universe of things to consider. Java's terrible memory usage was set in stone the moment they decided not to include structured value types.

Besides being planned for Java 10, there are AOT and JIT compilers for Java that are able to convert simple Java types that follow a specific value pattern into value types.

An example would be how the IBM J9 JIT converts a final Point class, just with getters and setters, into a value type.

Also there are language extensions from JVMs, like IBM Packed Objects or Azul Object layouts, that are exploring how to bring value types into the language itself.

It sucks that they didn't follow the route of Mesa, Modula-3, Eiffel and Oberon in regards to value types, but they aren't holding still.

If the compiler can prove that an object is final and immutable and is never used in any other context that requires reference semantics, it could potentially use that optimization.

But then you also need data structures that can be parameterized to store such value objects as values. Otherwise you're back to square one.

Which is the point of IBM and Azul's research.

Value Types are being worked on now for Java 10+.


The Java language is definitely not set in stone.

No, the Java language is not set in stone. Its terrible memory usage is set in stone as long as the language doesn't have value types.

If (and that's a big if) they introduce value types in 2018, it will have been almost 25 years since that fateful decision that has done so much damage.

While it sucks that Java does not support value types like other GC-enabled languages do, apparently Go's owners are fine with that decision, to the point of fighting to keep using Java in their products instead of replacing it with Go.

That's like saying you're fine with every decision taken by a government you voted for.

Go owners have millions and millions of lines of existing Java code and they employ many people whose entire careers were made on top of Java.

I'm not saying there are no rational reasons to prefer Java over Go. But Google is hardly the yardstick for that.

With a >3-year release cycle, by mid-2017 we are hoping to get Java 9, so to me Java 10 seems like 2020 at the earliest.

You're in stage 2, then.

> threads and locks are so easy to do!

I don't think concurrent programming was ever particularly easy in Java. If you're talking about `synchronized` methods, well, they are of limited usefulness (to put it mildly) when you need to lock objects in an interleaved fashion: "lock A, lock B, unlock A, lock C, unlock B, lock D, unlock C, etc.", which isn't too uncommon a use case.

> stuff that previously could only be done in C/C++ (...) almost as easily as writing Python or Ruby

Metaprogramming, any? C++ has templates. Python and Ruby have metaclasses. What does Go have?

> What does Go have?

Go generate, AKA C macros but even worse, since there is no macro syntax: you have to write your own code generator yourself.

2. Yea, right. Obviously you haven't run enough Go binaries

Really? I also find that to be true for the go stuff I run.

The last example is nsq[1]: Download, unzip and run the binaries .. aahh :)

Obviously this can be true for non-golang programs as well, but I can't think of any right now.

1: http://nsq.io/deployment/installing.html

> 5. Why does this program require 4GB of ram?

The biggest issue with Java IMHO; 6 not so much, since deployment is automated.

> 1. Wow, I can write stuff that previously could only be done in C/C++

Not true, like at all. Everything Go does can be done with Java.

> 4. Who really cares about cross-platform code if all I do is on Linux?

Most developers write their code on Windows and MacOS and then deploy on Linux. I'd say this "Write once run anywhere" thing was and still is a pretty big deal.

> 5. Why does this program require 4GB of ram?

Oh no! I need to buy a $5 piece of hardware to run this Java code? What kind of nonsense is that?

> 6. Must I really download and install the JVM all the time, and what is it good for anyway?

Once or twice a year, if you really want to. You update your browser much more often.

> Oh no! I need to buy a $5 piece of hardware to run this Java code? What kind of nonsense is that?

4 GB per program absolutely destroys VM consolidation. I love the look on managers' faces when they ask for a 1 GB VM and I ask them, "Is this for a Java program?" :)

Just adding some ram is also not so easy in an embedded system.

But the Java JRE update installer wizard for Windows touts the fact that Java is "in" thousands of embedded systems -- parking meters, toasters, etc.

There are dozens of Java VMs, some of which are optimised for embedded systems:


Imagine, Java is run on your phone's SIM card. Enough?

On the other hand, on truly constrained embedded systems where real time is essential, no GC-based language has any business, no matter how good the GC is. Even C++ is sometimes avoided.

You mean like missile radar control systems?


Or do you prefer the Aegis battleship Weapons System?


These are my favorite examples for Java being used in life critical situations.

Well, two things stand out for me from the links you mentioned:

> The applications use a combination of Thales proprietary middleware written in C combined with Java code enabled by the Java™ Native Interface (JNI).

> AONIX has additionally delivered professional services to help PERC Ultra users at Thales optimize their applications and improve execution performance including, a customer-specific training course handled the needs of coding in Java™ with hard real time constraints.

Seems like with the help of C and consulting on 'How to do Java right' they made it work.

> End-to-end distributed thread timing constraints measured from stimulus to response are typically under 100 ms.

That is only slightly faster than typical (per-char) typing speed on a keyboard. It appears nice, but not a terribly impressive real-time Java use case.

For me it is impressive, because if the system doesn't work, the wrong guys die.

> Imagine, Java is run on your phone's SIM card. Enough?

Not Java, not even close. JavaCard is an extremely stripped-down version of Java. I've used it and it feels like C.

From https://en.wikipedia.org/wiki/Java_Card#Java_Card_vs._Java:

> However, many Java language features are not supported by Java Card (in particular types char, double, float and long; the transient qualifier; enums; arrays of more than one dimension; finalization; object cloning; threads). Further, some common features of Java are not provided at runtime by many actual smart cards (in particular type int, which is the default type of a Java expression; and garbage collection of objects).

[EDIT] But I am not arguing with the overarching argument that Java proper can run efficiently on very constrained devices. It all depends on the implementation.

> What kind of nonsense is that?

.. and then someone gets the idea that your application should now run natively on Android phones, since they all run Java anyway, right?

Yes, they all run Java.

What's your point?

I'm guessing his point is that "4 GB" is a big problem on a phone.

Yes it is, but his number was an exaggeration in the first place.

There are hundreds of millions of phones running Java today, what does that tell you?

The point is that memory still matters, not that Java is inherently a problem. You're shifting the goalpost here - earlier your claim was that 4GB is not a big deal since we can just buy more memory, now it's that 4GB is a ridiculous number anyway.

It's not, if you meet one of these typical "don't do any premature optimisations (even in the design phase of my data structures), because some important CS person said so" designs. There are all too many programmers who think they should never have to care about hardware anymore.

> Once or twice a year, if you really want to.

I really hope you actually mean 7+ times a year. https://en.wikipedia.org/wiki/Java_version_history#Java_8_up... - most versions contained security updates.

None of these updates are mandatory. You don't have to upgrade anything if you don't want to.

If you looked at the issues and verified they're either not applicable in your case or don't affect your application, you already spent a lot of time on it. If you do install, you should verify your app still works. If not, you already spent time on analysis. The actual installation time is next to nothing compared to everything else around the "new jre came out" event.

What I'm saying is - you should care 7+ times a year. Doesn't matter if you only update twice.

> Oh no! I need to buy a $5 piece of hardware to run this Java code? What kind of nonsense is that?

Funny. That's Microsoft's line of thought with Vista.

I'm sure this comment was written in good faith, but it doesn't belong at the top of this thread. It could be attached to virtually any post about Go. It barely engages with the actual post at all. Practically every other top-level comment below it is better.

And, of course, it spawned a completely useless thread litigating Java vs. Go. Perhaps that's why it's at the top of the thread --- which means voting had a perverse effect for this topic.

I don't know what to do about this tendency in language threads on HN, but it's a real problem.

I think having a ui widget to let us fold sub threads would be great. I really don't enjoy trying to scroll past some giant thread I think is not interesting to me.

There are chrome extensions that do this :)

Appreciate it was written as tongue in cheek, but I am glad that today a developer can start with Go and test, write code, build, and deploy all with Go, whereas nearly every other ecosystem requires additional tools, often external competing tools. Then life becomes about having to learn a whole new toolbelt before even coding.

Go still keeps it lean, and is great.

The fact it does not have hipster dev approved status is also an added bonus. The minute we see a bloated testing suite with yet another DSL, for Go, we are in trouble.

> The fact it does not have hipster dev approved status is also an added bonus.

What are you talking about? The hype is strong with Go. So strong that devs are persuaded they need to use Go at all costs, then complain Go has a garbage collector (just see the go-nuts mailing list).

So strong there are countless articles on the net about "How we moved from X to Go..." just like in 2007/8 with Rails.

> The minute we see a bloated testing suite with yet another DSL, for Go, we are in trouble.

GoConvey and co ...

> So strong there are countless articles on the net about "How we moved from X to Go..."

Writing about Go is a great way to hit the front page of HN. I once saw 5 articles about Go on the HN front page at the same time!

I don't think that's true anymore (?). HN has a well defined, very short, hype cycle.

Except you can skip GoConvey and still write a quality test suite.

Plus GoConvey wisely separated the UI visualizer from the BDD, we use the former and not the latter.

> just like .. Rails

Uncanny, isn't it?

> The fact it does not have hipster dev approved status is also an added bonus

Huh? In my experience Go is the most hipster dev language around at the moment.

The insight here is that the "hipster" devs are always the ones who have strong opinions that you do not share.

Seems more like Rust is the new hipster target, if I count the recent HN posts about it...

I don't think you can equate press coverage with hipster-ness.

How many articles a day about Rust on HN? That's right, not as many as about Go.

I would say it is second after node.js

Rust has that built in, but managed to be a much more solid language (feature wise).

I don't know, I want to like Rust, but every time I pick it up I feel like I'm relearning C++. Its learning curve is steep, and the sorts of applications I write benefit more from solid development velocity and a good concurrency story. I might be wrong, but I get the feeling that Rust really only shines where performance and meticulous control are paramount. I want to like it, but it feels ill-suited to the applications that interest me. :(

I don't think there's anything wrong with that. Rust is designed for a specific market, and it sounds like you aren't part of it.

I agree; but I think I'm still allowed to be disappointed that my use cases aren't a good fit for so thoroughly-heralded a language. :)

Don't be disappointed. If you don't need the performance Rust offers, then use another ML-ish language. More expressiveness, none of the ownership issues (just GC).

Which would you recommend for someone who prefers a C-like syntax, a good concurrency story, a comprehensive standard library, Java-esque speed, dead-simple tooling (e.g., Go's tooling), and compilation to a static binary?

C# and F# can both compile to a static binary (Mono's AOT compiler), and meet your other requirements. I find it curious that "safety" wasn't one of the points.

If you're looking for C-like though, you are probably not in any modern-featured language designer's aim.

> I find it curious that "safety" wasn't one of the points

It was implied by the context of "ML" languages. That was my intention at least; I thought ML languages were characteristically safe.

Regarding "C-like", I mostly mean syntax. I'm not too familiar with the ML family, but many FP languages have bizarre syntax which amplifies the difficulty of learning new concepts IMO.

>I feel like I'm relearning C++

Because you are. You likely wrote wildly unsafe things which are perfectly legal in C++. Rust is really just enforcing RAII, which C++14 already has, and you've likely avoided.

C++ has had RAII for decades already. Actual Rust innovations are the borrow checker and dynamically sized types.

Yes, we were already using it on Windows to program COM in the mid-90's.

The borrow checker is predated by Cyclone and other work on region systems. Rust added more polish, but none of the basic ideas are new.

Of course, I'm aware that work on region systems dates back to the mid 90's, which is why I called Rust's borrow checker an “innovation” (the first time a product form factor or feature successfully makes it into a market) rather than an “invention” (coming up with a product concept that didn't exist before).

Wait, people avoid RAII? With how useful it is in every circumstance, I can't imagine C++ without it.

It's not just the safety features; it's the syntax, the build system, the numerous kinds of strings, the difficulty of finding up-to-date documentation, etc. Rust just has a steeper learning curve than many newer languages. This is not to say that the learning curve is unjustified; only that it exists and is significant.

Regarding safety, we don't even need to call into question my competence with writing safe code :) ; Rust's borrow checker currently precludes a swath of perfectly safe programs. This isn't a knock on Rust; it just means it's not yet intuitive.

It's not just about performance, but also (perhaps primarily!) correctness: making sure that resources (not just memory) are freed in the right places, that your code won't try to use resources that have already been freed, that the same object can't be mutated by two or more parties at any single point in time, etc.

Rust really aims to be a replacement for C. It allows you to program at a low level, but you have much more powerful type checking, and it keeps track of ownership, which reduces the number of errors that plagued the C language.

Yes, Go also claims to be a C replacement, but probably the only similarity is its simplicity; unlike C, it is not as expressive.

I haven't used Rust extensively, more Go for sure. But it seems to me that Go was aiming for simplicity more than being a true innovation. It's basically a compiled Python with type annotations and unfortunately a GC to spoil it all.

Rust was aiming at innovating and advancing the PL space with meaning. I can't say looking back the time I invested with Go was well-spent.

Rust is probably worth the effort if you have the need for what it offers. It just requires more out of you than Go does.

I don't know a whole lot about C++, but I've worked on projects using C. Seems to me Rust is straightforward and just enforces what you have to do in your mind otherwise.

Once IDEs start coming around it should be quite competitive. I'll admit to thinking the same thing: I'll use Rust for the absolute critical parts, then F# for the rest. But more and more parts I'm writing are easy enough to do in Rust, and I get the perf as a bonus.

> The minute we see a bloated testing suite with yet another DSL, for Go, we are in trouble.


You forgot 5. (In a more distant future) After a plethora of features, long time Go developers become disappointed with how much bloat the language acquired over the years, decide to start from scratch and design the definitive language to replace it. Go to 1.

Well, hundreds of lines of mock structs is probably a sign that there is something very fundamentally wrong with the code. Go, as a language, has nothing to do with it. Testing is one of its nicest features. I only had to improve my testing experience a bit: stringifying complex outputs to compare them easily, comparing all outputs with expected outputs in a single if statement, and issuing a single verbose t.Errorf on failure, showing what functions produced what outputs on what inputs and what was expected.

100s of lines. It feels very much like mocking objects in Java before mocking libraries came about. The explicitness is nice, but the verbosity increases very quickly as you start adding state to your mocks.

At some point mid #3 I finally hit the "I think it's time to just learn Elixir for every situation where I don't need a portable binary..."

So far, looking like a solid decision.

I'm looking to transition from Rails to something more performant. Phoenix is at the top of my list, but these benchmarks worry me:



In both Phoenix has very high error rates, and in the first it seems really slow. Here is their Phoenix test app:


So what is going on there? Is it less of a perf win than I've heard? Is there something bad with their code?

Yea, I had the same concern until I heard the reason. There are other benchmarks out there that compare to Go/gin that are more accurate.

Edit: Here.


The benchmark for Phoenix was done at the last minute, and had a lot of problems which they didn't have opportunity to fix. The next round will be better.

There were problems with the benchmarks and nobody had time to fix them. This is usually the case when the benchmarks have high error rates and this has also happened in the past with Haskell, C# and F# libraries.

This is helpful to read because I've been debating between learning Elixir vs Go

Counterfeiter is a pretty serviceable mock generator:


What do people use mocks for?

It will get extremely meta when golang's infrastructure gets good enough for other languages to want to use. Then they'll split it out into a 'GoVM', or I guess 'GoRT', and then you'll have a 'GoVM' competing with the JVM...

The article sort of glosses over IntelliJ with the golang plugin as an "other" IDE, but it's best in class hands down. At one point in my past I had sworn off of Java-based IDEs, and used Sublime Text for years with primarily Python and Go. Something convinced me to try Go in IntelliJ, and really it's fantastic.

It also covers "important-to-me" features the author mentions. I haven't tried VSCode yet, but have tried all the others listed. IntelliJ is fantastic for Go.

In my experience most Go developers seem to favor a terminal-heavy workflow, sometimes frequently shutting down and re-opening their editors in new paths and projects. The barebones simplicity of Go-the-language seems to be a nice match to this workflow. It's a different thing altogether to how large IDEs like IntelliJ expect you to work: opening up a project and staying in the IDE for a long period of time.

Both approaches are totally valid, of course, but I suspect Go developers are biased toward lighter-weight editors.

I concur. Go syntax is dead simple, and even the most basic tools like https://play.golang.org/ can be used without hesitation for some quick and dirty POCs. In discovery mode, when you're learning a new library, any tool that supports code completion and provides documentation links is nice to have.

Surprises me it doesn't have syntax highlighting.

Possibly because Rob Pike is not a fan of syntax highlighting, calling it juvenile.


Wow, his second response is really demeaning. Or maybe it's just a shock to me after hanging out in really friendly communities (Elixir etc.) for a while...

On the whole I've found the golang community to be quite positive and enforcing good norms about crappy behavior. I wouldn't assume too much from a post Rob made 4 years ago.

Pike literally quoted a passage from the bible:


I won't argue that it was in good taste.

If by "literally quoted", you mean paraphrased... Or is this just another case where literally doesn't mean literally?

No. Sorry. I should have linked to the NASB translation, which was the source of his (literal) quote.


I feel bad for the person who wanted it on := lines because of sight issues.

Maybe it has something to do with the fact that Go's grammar is modest in size, having only 25 keywords. Compare that with C99 - 37, C++11 - 84, Rust - 52, etc.

Oh me too - my setup is usually split screen IntelliJ and iTerm2. But as projects grew larger, (and I wasn't a vim-er) a lot of the tools that come with a full fledged IDE became nice-to-haves (definition-on-hover as source code, click-to-definition, tree and object graph always open, and so on).

I know my setup isn't for everyone, but if you're looking for a good Go IDE experience, especially if your projects are of any size, don't pass over trying IntelliJ. I spend 95% of my time in Go now, and IntelliJ has been surprisingly good.

As a Java developer that uses Intellij exclusively this is great news. Maybe I'll check out Go more. I really hate using languages without good IDE integration.

I think you'll have a good experience with the integration. Go is the only language I know of where tooling is mostly painless. If you can manage to grok the notion of GOPATH, everything else pretty much just works--even in vim, all you have to do is install vim-go and you get code completion, automatic formatting, jump-to-definition, test execution, coverage reporting, compilation/compilation-error-reporting, etc. etc., all out of the box (vim-go just wrangles a bunch of programs that provide these facilities, which means they can also be wrangled for other editors as well).

I second this. I've given GoSublime (and more recently VSCode) a solid try on several occasions. Neither holds a candle to the IntelliJ Go plugin. Being able to give solid rename refactoring and code navigation, even when your code is in a partially spaghetti, non-compiling, mid-development state, is hard. And the IntelliJ plugin does a decent job at it. Not quite Java-level good. But pretty darn good.

The existing command-line Go tooling (while great) just doesn't do a good job of providing that kind of support to your IDE.

go-plus for Atom is pretty good. I haven't used the IntelliJ plugin, so can't compare them.

There's also Goclipse for Eclipse.

> Only func main has the right to decide the flags that will be available to the user.

This one applies to every language. I was working with a python package that decided that, since `sys.argv` was available from anywhere, it should parse `sys.argv` to configure its own behavior. For a long time, their suggestion was to first parse `sys.argv` yourself, then to modify it before going into the library.

Reminds me of when systemd parsed the kernel command line for the kernel debug parameter, making life hell for kernel devs.

It was so ridiculous, someone ended up sending in a patch for the kernel to modify the command line once it had been parsed to remove the debug parameter.


I agree but find it bizarre that this tip comes in a list of best practices for the major language invented at Google, when the same company's "gflags" library, also heavily used, advertises right at the top that it allows definition of flags in any source file! https://gflags.github.io/gflags/

This defeats the purpose of a good flags library though.

Where flags shine is when you have some obscure tunable deep in the dependency tree that you need to tweak (particularly in an emergency). Plumbing potentially thousands of flags through to main is nonsensical in that scenario.

Which works great, until you want to modify that parameter on the fly, or adjust it based on the current load of the system. Then you're sunk, because you had that tunable parameter read from some global state, instead of being set from its parent.

The flag libraries I've used always make this easy, but the problem comes in when you try to reuse that dependency in another binary as a library.

Maybe you've seen a better flags library than I have, though.

No, the flags libraries I've used had exactly that issue.

When that came up what you'd do then is refactor that to be a class parameter or config option or whatever (and we'd usually ask that the flag be kept in some form).

Until then though you get the benefits of a quick and easy way to both expose and use tunables, which is much better than not having it at all.

After working with .NET/Java/Node.js/Ruby/Python etc the move to Go involved a larger investment in time. I found this really informative and it's great to have the learnings condensed down.

Check out YouTube conf. talks from the same author, where he shared his experience developing services in Go, while working at SoundCloud. Also, very helpful.

Could you please provide the link?

I'm assuming they are the ones listed here: https://peter.bourgon.org/talks/

Checking these out now. I use SoundCloud.. cool to hear about their infrastructure. It is a bit weird how you can't see his hands in the shot:


Yes, thank you, appreciated.

I have run a coding-dojo type project internally at work at ADP. We had two PRs in Go, so while we are on this topic, are there any improvements that can be made to the Go submissions?

Does anyone want to submit a best-practice Go solution?


Good article on the whole, but I have a few quibbles:

> If your repo foo is primarily a binary, put your library code in a lib/ subdir, and call it package foo.

IMHO, that's just ugly, uglier than foo/foo or foo/foolib or foo/libfoo.

I also think that anything which has commands other than a single-command project which will always be a single-command project (there are fewer of these than one might think …) should put all commands, even a single one, in cmd/.

I think that inline config objects should be used with useful zero values to try to emulate Lisp's or Python's keyword arguments: if one always needs to provide a value for each config object member, then just use arguments after all.

I think a testing library can be a great addition, since it can turn a three-line if check into a one-line assertion.

> IMHO, that's just ugly, uglier than foo/foo or foo/foolib or foo/libfoo.

> I also think that anything which has commands other than a single-command project which will always be a single-command project (there are fewer of these than one might think …) should put all commands, even a single one, in cmd/.

It's this kind of inflexibility in the directory structure of one's repo that really turned me off of Go. It's such a trivial thing, and yet the fact that Go lacks a level of indirection (package.json, Cargo.toml, pom.xml, etc) to map a chosen directory structure to the standard build tooling causes problems that get really annoying. For instance, there's a ton of Go repositories out there that are not go gettable because the author wanted to put source code under a src directory and hacked that together using make. And good luck with organizing any repository that contains multiple languages where Go is one of them... Go's "code lives at the repository root" doesn't play well with others.

It all makes me sad since Go's decentralized dependency management (i.e. go get being able to pull/build based on a meta tag) is brilliant and I always hate having to rely on a single, centralized repository to deal with dependencies.

> For instance, there's a ton of Go repositories out there that are not go gettable because the author wanted to put source code under a src directory and hacked that together using make.

Are there? If someone's that ignorant of the language, I don't think I'd want to run something written by him …

> And good luck with organizing any repository that contains multiple languages where go is one of them...go's "code lives at the repository root" doesn't play well with others.

I've actually built libraries which involved both Go & Python without a problem.

> Are there? If someone's that ignorant of the language, I don't think I'd want to run something written by him …

Yes, there are. I don't know any off the top of my head, but I see them all the time. So, just for fun, how about a little test:

Step 1: Google 'notable applications written in golang'

>> first result: http://www.infoworld.com/article/2843821/application-develop...

Step 2: Click through to github repos

>> Result: 9/10 of them won't work with go get.

So, are Docker, Kubernetes and Etcd the kind of software you wouldn't want to run because their creators are too ignorant of the language?

The etcd binaries are go gettable. The others are large enough to easily justify their own build process.

> I think a testing library can be a great addition, since it can turn a three-line if check into a one-line assertion.

Totally concur with this. I feel like all these people complaining about "bloated test frameworks" either haven't written a lot of tests, or are just fine with repeating themselves in test code, or they end up writing their own version of a test framework anew in every project. So much simpler and sane to grab an off the shelf solution for test code.

It was a great talk at QCon London earlier this year - the video is due to be published towards the end of next month. I'll try and come back to this thread with the URL when it is.

(Disclaimer: I was the track host for Peter's talk)

"those parameters should be part of type constructors"

I'm not sure if this is a nit, a misunderstanding on my part, or a difference in terminology, but I think what is meant here is "value constructors" (or more commonly, just "constructors"). As I understand the term, "type constructors" construct types.

Well, there are no type constructors in that sense in Go.

That was my understanding, but being new to Go I wanted to be sure (and be sure there wasn't some other specialized use in which "type constructor" meant something different than "constructor").

Not even for pointers, maps and slices?

Oh, good catch!

Libraries using log.Printf are incredibly tough to work with in a production environment if you want anything more interesting than looking at stderr. Libraries that let you provide a logger API at time of construction are better than log.Printf, but they still fail at letting you include contextual information inside method calls.

We inject a logging interface into all our methods that take a context.Context object [1]. This allows us to push contextual information onto the stack of context objects, and then when we log we can do it at the point of failure and have access to an immense amount of useful information. Check the attached gist to see an example of how this works in practice [2].

Given that the context library originated out of Google and is now part of the stdlib in 1.7, I would love to see other libraries embrace it instead of relying on much less flexible solutions.

[1] https://godoc.org/golang.org/x/net/context

[2] https://gist.github.com/justonia/f81eead323d2b23eca1c485ed8e...

> No statements go by where the object is in an intermediate, invalid state.

This seems kind of misleading. Per my understanding of Go, omitted fields in a struct initialization are defaulted, not reported as errors. So you're equally likely to be passing invalid state in both situations.

One pattern I like in Haskell, for this kind of thing, is to define a defaultConfig value that contains typical defaults and which can then be tweaked as desired.

One big advantage this has is that when some functionality is made newly configurable, you set the default to be the old behavior and existing code continues to work correctly unmodified without any additional effort.

Hopefully you'll combine that idiom with "make the zero value useful" so that omitted fields still give valid behavior.

Reading on, I see that.

It still breaks down when a zero value is meaningful for a configuration parameter, but means something different than what was done before the parameter was configurable. I think that's probably not super often.

In light of all this, I still don't really see what harm we're avoiding by preferring the inlined struct initialization. I do agree it looks a little prettier, but the article seems to give it a greater import.

Looks a lot like a language designed by a committee, but I generally like it.

I don't love it, like I love Python, but it does its job and it is fast.

What I really like is the defer(), the channels (having the option for asynchronous channels would be awesome), and the goroutines (async callback hell is getting old).

What I find completely awkward, however, is the enforced first-letter capitalization (don't tell me how to live my life, Go!), the interface{}, and the GC.

And how could I forget, the multiple return values. Give me tuples, Go, don't give me multiple return values!

I get - and agree with - the complaints about interface{}, but what's wrong with the GC?

No control over it. It will run when it decides to run, stalling my app.

> That advice still holds today: vendoring is still the solution to dependency management for binaries.

This might be a stupid question, but can someone explain to me what this means? Thanks!

To vendor a dependency is to store a copy of all the source code you're trying to use inside your project itself. That way, when you compile your project, you also compile the code it depends on.

Updating code you depend on is a semi-manual process - you choose to copy the latest version of the code you depend on into your project again.

This is in contrast to stacks like Java, where I can give you a pre-compiled JAR file that you can link your code to.
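For illustration, a hypothetical layout - the dependency's source is committed under vendor/, which the go tool checks before GOPATH (behind the vendor experiment flag in 1.5, on by default in 1.6):

```
myproject/
├── main.go                    # imports github.com/pkg/errors
└── vendor/
    └── github.com/
        └── pkg/
            └── errors/
                └── errors.go  # vendored copy, committed with the project
```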

I have found that using git-subrepo is a great help with vendoring as single

    git subrepo pull --all
is enough to update all dependencies.

Is this different from statically linked libraries (as opposed to dynamically linked)?

This is about code, not libraries. Vendoring the code means copying it into your own repo... that way you're guaranteed that everyone who builds your project is building the exact same code, and if one of the third party repos disappears, your project doesn't all of a sudden fail to compile.

"Vendoring" means maintaining local copies of third-party libraries and other dependencies in your local repo so exactly what you depend on is a part of your internal history. Git provides ways of doing this easily, such as subtrees.

Just to add: the name comes from the convention of storing all those dependencies in a subdirectory named "vendor".

Is vendoring strictly storing the source code, or can it include storing compiled libraries?

Go doesn't have support for using compiled libraries (although support is coming). It's always using the source code.

A point of clarification for anyone skimming the original article:

> Top Tip — Libraries should never vendor their dependencies.

Peter goes on to clarify:

> You can carve out an exception for yourself if your library has hermetically sealed its dependencies, so that none of them escape to the exported (public) API layer. No dependent types referenced in any exported functions, method signatures, structures—anything.

I think this is the way to go if you're writing a library which has its own dependencies. You get a repeatable build and free yourself to change which dependencies you rely on without impacting users of your package.

There are exceptions, such as if your dependency has an init which only makes sense to run once. Loggers come to mind, where the setup should be determined by the main package. The f.Logger point in the article is friendlier to users of your package than just using log.Printf, and frees you from having to vendor logrus, for example, if you want to support structured logging.

What you need is... a [goat](https://github.com/mediocregopher/goat)

Slightly tangential, but, could someone share their experience using Go specifically for building websites?

How does Go (including Go frameworks specifically geared towards web development) compare in terms of performance, ease of development with RoR, Laravel?

Is building websites using Go a good use case or is Go better suited for building high performance microservices?

I've done a lot of ruby and go. Honestly, I would think twice before building a major website/product API on go. Development time is markedly slower in go, some causes off the top of my head:

- testing is way harder (ruby has some amazing mocking/stubbing/testing tooling).

- debugging is way harder. Hope you like printf.

- JSON: the strict typing & struct serialization rules make dealing with JSON a pain in the arse, and heavy on boilerplate

- interfacing with SQL (if you have to do a lot of it) is a pain in the arse, for similar reasons to JSON.

- having no stack for errors is insanely frustrating.

- templating in go is primitive

Go is amazing for services. The parallelisation primitives are fantastic, and allow you to do things that would be hard/impossible in ruby/Python. Go is undeniably much faster, execution wise. (Although our super high volume ruby product api is at 50ms median so, it matters less than you think)

Decide what you care about most; if your prime goal is to get a product in front of customers as soon as possible, I'd pick a higher level language. If you want to make a rock solid service that sips on resources & scales to the moon, use go.

Median latency is a completely useless number. There is literally no use for it, ever. Satan invented median latency to confuse people, because it lies to you and whispers sweet nothings in your ear. Averaging latencies in your monitoring leaves the outliers out in the cold because you will never see them, and tail latency is way more important for user experience.

Quantile your latency. I suspect you'll find 99th tells an interesting story between Ruby vs. Go. The best latency graph has five or six lines at different percentiles (yes, including 50) stacked in a latency sandwich. I'm willing to bet your app has a rough 99th. Maybe even a rough 95th. Nothing against Rails (honest, this applies to most interpreters), but most scripted stuff has a pretty rough latency tail, and you ignore that with median. (Go is not off the hook either; GC pauses can blow out high percentiles if you're memory-heavy.)

I'm pretty passionate about this, sorry. I'm on a mission from God to purge our mortal plane of average/median latency, because it's one of those misapplications of statistics that everybody does without a second thought. Don't even collect it. Average latency is unimportant and misleads people, and misleads people more as you get more popular. It's that 1% of clients sitting there for 2sec that impacts perception of your UX.

Sure, and I thought twice before putting that in :) We of course monitor our 95th & 99th primarily, for the reasons you said. But I didn't quote those because they're usually blown out by slow db queries, n+1 selects, n+1 cache gets, etc. - issues that happen regardless of language. Often they're on rarely used endpoints, where the cost-benefit of optimising isn't worth it. I feel median gives a reasonable proxy for the language/framework execution capabilities.

That response actually illustrates my point, oddly enough; median has comforted you to organizationally disregard the super interesting data you're getting from the percentile aggregations. Those blowouts are far more interesting than you're saying, I wager.

p99 blowout is generally an operational smell, even on low-RPS endpoints.

I think we might be talking past each other a little :)

The median has absolutely not "comforted" us, nor do we disregard the percentile data. In fact, we have a whole perf team basically dedicated to looking at 95th/99th & other slow transaction trace data, and fixing these.

In big projects, serving tens of millions of users with a handful of devs, you've gotta pick your battles; you can't fix everything always. We do our best, just like anyone else. So obviously we agree with the importance of percentiles. Don't make assumptions :)

And all of this is rather irrelevant to the main point now, which is overall _language_ speed.

It's not, actually, and I'm not sure why you think I'm talking past you. Your claim was that Ruby is comparable performance-wise to Go in your application, and you cited median latency to make that case. I pointed out that interpreted languages often have a long tail and that the long tail is more interesting and undermines the comparison. You replied that I am correct and, indeed, you do have a long tail but you wrote it off as "stuff that happens" regardless of language. I am replying that the "stuff that happens" is the interesting stuff that undermines the flawed comparison you attempted to make, and I disagree that all of it can be written off to language-independent concerns; if you investigate it further, you might find some of it comes from the choice of language itself.

I have not wandered off topic into irrelevancy nor talked past you. This is still addressing the point you attempted to make by citing median latency in defense of the performance characteristics of your chosen language, in order to assuage general hesitance to adopt Ruby over performance concerns. I think you'll find that your metrics do not support your data if you dig into that p99.

(I share that experience from handling tens of millions of users myself, which is all I can admit to publicly, so that request re: assumptions goes both ways.)

To recap: claim submitted with flawed supporting data, thread addressing why the supporting data is flawed. No talking past taking place.

Never claimed ruby was as fast as go. Our go services run at about 5-15ms (excluding 95/99 etc), so they're definitely faster (they do a lot less, too!). I'm just saying ruby (or other dynamic langs, I don't mind - I don't have a "chosen language") can make a decently fast product API, and that's all.

And of course we've investigated further... (this is anecdotal, I know), but of all the degenerate 99th cases we've investigated/solved, I can only think of 2 or 3 that were lang-related, rather than logic, db, n+1s, etc. If you're curious, those were:

- degenerate GC behaviour in a specific circumstance

- JSON serialization (specifically crossing the C/ruby boundary shiteloads of times to serialize an object)

Sure these issues sucked, but we mitigated/sidestepped them for the most part.

It's all a trade-off when you're trying to decide what to use for your project; right tool for the job, etc. I'm not evangelizing for or against anything.

Curious, what would you propose instead to roughly compare speeds of a product API? (Keeping in mind my anec-data that nearly all of our 99ths were logic/db issues.) Comparing 99ths (from my experience) would be more like comparing which codebase has more bugs, because that's how we treat degenerate cases.

> Never claimed ruby was as fast as go.

Fine, you claimed the performance difference "matters less than [one thinks]," which is pretty much the same thing if you really step back and think about it. It's also wrong for a whole cornucopia of reasons in general, but I'm choosing to focus on the supporting data you used to make that claim.

> Our go services run at about 5-15ms(excluding 95/99 etc),

See, nobody can resist middle-ground aggregations to describe things, even in a thread about middle-ground aggregations! They are such a cognitive trap. You should say "our Go services run at about 5-15ms half the time," because that'd be more correct if my assumption about where that aggregation is coming from is correct (and I'm guessing it's a gutstimate of median). And, again, 95th and 99th are super interesting, particularly when describing the performance characteristics of a latency-sensitive service, and it's a disservice to omit them.

I will absolutely say "about ___ms" and refer to my 99th and let people think I'm telling them average. To me, 99th is my average. (Normally I'd ignore this as pedantry, by the way, but it's the subject of the thread...)

> - JSON serialization (specifically crossing the C/ruby boundary shiteloads of times to serialize an object)

All of the terrible code in the world that handles JSON is one of my favorite "make this app go faster" targets. Its ease and ambiguity is its downfall, because people write genuinely awful code to interact with it. Most of that code is in language standard libraries. I will stand by that remark no matter how much you challenge me on it.

I'm of the fun school of thought now where I treat JSON as an external hot potato and throw it in a sane bag at the edge like protobuf or Thrift internally. If your internal services are communicating in JSON you are wasting a lot of cycles and bandwidth for pretty much no reason. You can switch to MsgPack and get an immediate win if IDLs aren't your thing, or CapnProto and get an even bigger win if they are. If you like being on the fun train, protobuf3+grpc is a pretty fun environment. This complaint even applies to Go even though Go shipped JSON in the standard library with clever reflection, so now everybody wants to lazily expect JSON configuration files which map cleanly to their internal config struct (please, stop doing this and write configuration formats that don't completely suck; looking at you, CoreOS).

Does serialization really matter, you ask? Profile your application and watch how much time it spends dealing with JSON. I've seen switching away from JSON remove the need for entire machines at scale. Whole machines. Because that many cores were freed up by not making every single instance spend 5-6% of its time marshaling and unmarshaling data.

> Comparing 99ths (from my experience) would be more like comparing which codebase has more bugs, because thats how we treat degenerate cases.

In other words, I was correct about organizational comfort with how you interpret 99th percentile latencies.

99th are not your degenerate cases. The poor souls in the 1% are your degenerate cases, and they are users too. A bad 99th percentile latency is bad, no matter how you justify it. Most folks write off 99.9th latency; 99th is a bit strong, especially if we're talking about your 10MM+ (M?)AU app. 1% of requests is a shitload of requests if your volume is as high as your audience description implies. I'm weird in that I consider a strange 99th+ as interesting data worthy of investigation, but I think that should be the norm, too.

As for comparing the performance characteristics of two separate apps, the metrics I'd start with are going to be RPS, TTFB, and TTLB. For the times I mentioned, σ (my personal favorite) as well as 50th, 75th, 90th, 95th, 99th, and 99.9th percentiles. Those are the externally-interesting ones. I also want to know how many cores are running it, how much RAM it consumes, and a whole bunch of other stuff on the inside. But it's a poor comparison at all, really; no two codebases are directly comparable, which I think you already know.

Grandparent's point is that 99th percentile latency is, for their application, unrelated to the ruby-vs-Go comparison.

> Average latency is unimportant

That feels like an overly-bold claim. Improving your 99th percentile latency from 2 seconds to 1 second may not be worth it if you bring your average latency up from 0.05 seconds to 0.99 seconds.

> It's that 1% of clients sitting there for 2sec that impacts perception of your UX.

Well, surely that depends on what endpoints those are and what people are expecting from them. A 2s wait for a whole rendered dashboard page of your entire organisation may not really be a concern. A 2s wait for 'find out more about our fast CDN' might really harm sales.

Now, if your point was "you shouldn't only look at average latencies" then you're entirely right, but I cannot see how they're irrelevant. The overall distribution is important and I'd actually recommend that people look at this shape. Just picking one percentile is always going to be misleading because you're throwing away a vast amount of data.

That actually was my point, and at no point did I say only look at one percentile. I in fact said look at five or six, stacked, including median/p50. I implied that median latency is only useless by itself, or so I thought, so I apologize if that was unclear. It is perfectly fine in concert with other aggregations. This is the type of graph I mean:

       |                 ====   p99
    ms |+++++++++++++++++++++   p95
       |.....................   p75
       |`````````````````````   p50
        t ->
That outlier jump might be a production emergency, such as a database server dying or something. Yes, really. Had you only graphed median here, you would have missed it until some other alarm went off.

You gain a lot from this. Visually, you can see how tight your distribution is as the rainbow squeezes together. Narrower the better. Every time. In fact, very often the Y axis is irrelevant, and here's why:

Reining in your p99 that far at the expense of a higher average is, oddly, a win. That might surprise you but it is borne out in practice, because at scale only the long tail of latency matters. A wide latency distribution is problematic, both for scaling and for perception. A very narrow latency distribution is actually better. If you can trade off a bit higher latency for less variance/deviation, it will be a win every time. Weird, I know. User perception is weird and, as you point out, the rules change per-endpoint. Perception tends to evolve, too. As a rule of thumb, though, tighter distribution of latency is always better and how far you can push median to get there is your business decision and user study environment.

To borrow one of Coda Hale's terms[0], the scenario you presented demonstrates the cognitive hazard of average and is actually my root point. The average came up, yes. That is not necessarily bad (at all!), but "average" intuitively tells you that it maybe should be. In this case, it is misleading you, because the exact scenario you presented might be a strong win depending on other factors. A 99th of 1000ms with a 990ms average is a really tight latency distribution so it is fairly close to ideal for capacity planning purposes. It blew my mind when I discovered that Google actually teaches this to SREs because, like you, how unintuitive it is threw me off.

It's hard to swallow that a 990ms average might be better than 50ms. Might be. Average doesn't tell you. That's why it sucks. Not just for computing, either; average is pretty much the worst of the aggregations for cognitive reasons and is really only useful for describing the estimated average number of times you fart in a day to friends, because it is quite overloaded mentally for many people without experience in statistics.

[0]: https://www.youtube.com/watch?v=czes-oa0yik

Your cause is just. Keep fighting the good fight!

> Median latency is a completely useless number. There is literally no use for it, ever. Satan invented median latency to confuse people, because it lies to you and whispers sweet nothings in your ear. Averaging latencies in your monitoring leaves the outliers out in the cold because you will never see them, and tail latency is way more important for user experience.

Umm, you do not get the median latency by averaging anything. Median latency is just the 50th percentile. It is definitely not one of the interesting ones to reason about or care about improving, but it's not valueless to measure. It is useful to have if you are graphing your latency curves, to give an example.

I think if you reread my comment you'll find that what you're saying does not actually disagree and is a restating of what I said.

It is not. Your comment confuses the median with the mean latency. The comment you replied to did not mention using averages - it mentioned only using the median. You introduced scathing critique of using averages when those are only used for means and therefore totally irrelevant here (even if I totally agree that people that use them should be educated. But that wasn't the case here).

I actually don't, and you're reacting to the use of "averaging" as a verb. That's why I encouraged you to reread. When you're discarding 50% of the samples in a 50th percentile median, I think "averaging" is an acceptable verb to proxy the situation in English since "medianing" isn't a word. I could have said "computing the median," but that's just tedious.

Notice later in the comment I say average/median, implying that they are separate but related concepts. I think it's safe to assume that someone who can conversationally use the word "quantile" is not confusing median with mean. You're assuming that my (intentional) selection of a lower-fidelity term to describe a concept, which I carefully illustrated with ancillary points to give specificity to said concept, demonstrates a misunderstanding of the very field I'm explaining. That is pretty obviously wrong and a bit condescending.

We agree, which is what's frustrating. You're just latching on to a pedantic correction of my point, and rather than belabor that correction I encouraged you to reread to see that we do actually agree. Now, I do see average latency far more often than I'd care to admit, which is why I got lazy and just said "average" at the end there once I started referring to generality instead of specificity, but I think it's pretty clear that I understand the distinction regardless.

In statistician-land, the median is simply one way of averaging, as is the mean. Introducing (or at least using) the extra term takes care of some ambiguity.

> debugging is way harder. Hope you like printf.

You have other options: https://github.com/derekparker/delve

> having no stack for errors is insanely frustrating.

Check out https://github.com/pkg/errors

If you want more rich control of the output of the stack trace there's https://github.com/go-stack/stack

Yeah, we've started using that. Kind of amazing that you need an add-on lib & Wrap() everywhere to make errors useful. Just more boilerplate! :)

> - debugging is way harder. Hope you like printf.

Can't you use gdb with Go? https://golang.org/doc/gdb

The second you launch a goroutine, gdb becomes useless.

How do you debug then? (I'm not a Go coder, though I'm a bit familiar with it)

Oh, that's simple. We gophers don't make any mistakes. Ever. It makes our lives easier.

This is wrong. The answer is obviously "Go is debugged in production with extensive logging," so I question whether you're actually a gopher.

See another answer in this thread, https://news.ycombinator.com/item?id=11607829.

I really enjoy using Go for building websites. But I use Go for everything, including games, frontend (instead of JavaScript), etc. There's nothing else I'd rather use. Heck, I even rewrote my resume in Go that renders HTML on frontend [0], hehe.

[0] view-source:https://dmitri.shuralyov.com/resume

My experience says "don't use Go for building web applications".

Compared to something like Ruby on Rails, the development story with Go is much less batteries-included. You'll find yourself gluing together lots of components, and they're not really that friendly. Plus, for any front-end code, you'll end up using Node or whatever anyway to build assets. It's big and messy and not a good match.

Go does excel at two kind of web applications however:

- Simple single-page applications that use Websockets or simple AJAX to perform a basic task. In this case, the limitations around templating and such are less obvious, and Go is a brilliant match for Websocket clients. Think of things like status dashboards.

- JSON APIs with no front-end. Build your front-end using whatever other technology, and just fire requests at a Go app. It's pretty good at handling this, though JSON marshalling/unmarshalling is a bit stupid in Go.

what would you use go for then?

People use Go to write "infrastructure services", such as REST APIs, caches, proxies, queues ... and command-line tools. That's what Go does best, since it requires minimal investment upfront (the language is tiny, the standard library has a lot of network-oriented code...).

Whether one can effectively write a classic website with complicated HTML templates and web forms like Rails, JSF and co allow is, however, questionable, since Go's type system lacks expressiveness. Go fans' answer is "use client-side javascript", which is ironic, since javascript is the antithesis of Go in terms of design.

Go is great for what is described as "plumbing" - things like proxies, infrastructure tools or simple microservices. The simplicity is a great benefit in these cases.

I've also found it unexpectedly useful for some embedded applications. Deploying code to the Raspberry Pi is super simple, and that makes it quite attractive - there are projects that provide for things like I2C, GPIO and SPI, and combined with the built-in HTTP support it's a pretty compelling platform.

> Go is great for what is described as "plumbing" - things like proxies, infrastructure tools or simple microservices. The simplicity is a great benefit in these cases.

Those reasons aren't really convincing to add another platform into the stack for me. I'd just continue using the many good-enough technologies to build those things on.

I was getting into Go some years ago and came to the same conclusion as you stated. It's good of course if its someone's favorite language to use, then it doesn't matter what it is and isn't great at. But in the end for me it didn't do enough, or do enough better than existing broad PLs that I already knew.

I think Go is still in need of a "killer usecase" to justify its usage for people like me. Rails was that for Ruby. Linux entrenchment and Django was that for Python (now numerics and scientific computing). Node for JS. That said, I'm a Go fan in theory as a "simple is better" type.

I worked with both Go and Elixir but found Elixir and Phoenix better than Go for building backends for web services. I was more productive and all the awesomeness of Erlang VM made me choose Elixir. I would say give Elixir a try.

The other answers provided plenty of info on Go, so if you're looking for a new language and framework for web dev I suggest checking out elixir and Phoenix. They're inspired by ruby/rails (without some of the problems) and have many of the benefits of erlang.

>>How does Go ... compare in terms of performance, ease of development with RoR, Laravel?

The available web frameworks for Go are more comparable to microframeworks like Flask. If comparing directly to RoR or Laravel, you may find some gaps. Not that they can't be solved, but there's less available out of the box. Things like:

- multiple file upload

- session handling without cookies

- built in support for a variety of authentication types and backends

The communities are also significantly smaller, so there's less experience to draw on.

Not to say you can't build websites just fine with golang, but it doesn't seem to be a strong suit at the moment.

RoR is an awesome framework that has (almost) everything you need to build monolithic web apps.

Go is a different story. You need to join the pieces together to make something functional. You will probably need:

- A router like [Gin](https://github.com/gin-gonic/gin)

- A [XSRF token generator and validator](https://github.com/golang/net/blob/master/xsrftoken/xsrf.go)

- An ORM like [Gorm](https://github.com/jinzhu/gorm) or [XORM](https://github.com/go-xorm/xorm)

- A template lib like the one that comes with the stdlib

- Node for front-end (ugh, this is the hardest part)

- The testing library in stdlib plus maybe [testify](https://github.com/stretchr/testify)

- etc...

Many nice things are missing. Some were [shamelessly copied by other people](https://github.com/go-testfixtures/testfixtures). Some you may need to implement yourself.

The big advantage is that Go has much better performance and runs on Windows, etc.

err... RoR is a full-size framework. Ruby is the language. Go is a language. Go has frameworks.

You are comparing apples to oranges. With go you can then select what functionality, libraries, toolkits or frameworks you want to use with your system.

ps. why would you mention node? would you use node with a php site?

> Go has frameworks

I don't know of anything monolithic like Rails for Go, do you? What I was trying to say is that you have to use libs instead of a big monolithic framework like Rails, which doesn't exist for Go (AFAIK).

I don't like Node and NPM, but it would be the alternative to Asset Pipeline (e.g. for concatenating and minifying JS and CSS, etc).

> I don't know of anything monolithic like Rails for Go, do you?

Revel is pretty monolithic: https://revel.github.io/

I've used it to build a web application, and here are some of the problems it addresses (this is just the table of contents from their manual):

    - Controllers
    - URL Routing
    - Request Parameters
    - Validation
    - Session & Flash
    - Results & Response
    - Templates
    - Interceptors
    - Filters
    - Websockets
    - Internationalization
    - Cache
    - Database
    - Debugging

Can you explain more about using Node for the front end? Do you mean you use `npm` for front-end package management, or are you running a Node server process along side your Go process on the backend?

Yes, I am talking about asset management. Rails has the Asset Pipeline. In Go, if you want a build step for JS and CSS, you have to play with Node.

You don't actually need most of that.

Go is pretty good for anything backend related.

My experience over the years is that Go serves a specific niche very well: boring, long-running services that do simple tasks in unison with other services. It's better to see it as a language with which you can build little self-contained machines that work well together. I have a system made up of such little machines, and it quietly works away on some low-end server.

No, that is definitely not correct, unless your "anything" is merely that which is fully defined & composed at compile time.

(And yes, you can go the IPC route to introduce dynamism that you get out of the box from a VM based language. But here is the classic case of the Go [tail] wagging the architecture [dog]..)

[edit: correct tail endian]

As you probably know (and as you can probably tell from some of the responses here), sometimes choosing languages can be like choosing a religion :-)

As a backend for a web application, Go's performance is pretty great. The Go standard library comes with a fast parallel-enabled HTTP server that already makes use of all your cores in recent Go versions. No need for things like gunicorn to take advantage of cores. There's no interpreter overhead, and there's plenty of benchmark comparisons with other languages out there on Google. I've never heard of someone not using Go as a server-side language because it wasn't fast enough :-)

Go has very robust support for creating web applications. Of course, for the frontend you will still need HTML+CSS+JS (maybe + a JS framework like Angular depending on your needs). There is a crazy experiment called GopherJS to run Go in the browser/frontend, but currently, a crazy experiment is all it is IMO...

For serving dynamic content, the Go standard library comes with its own templating language and library: https://golang.org/pkg/html/template/

The standard library also comes with most of the tools you'll need for defining HTTP routes (see the server examples): https://golang.org/pkg/net/http/

Personally, I've found the following third-party packages to be quite helpful with putting together HTTP server-side apps in Go:

  - Gorilla mux for defining templated routes: https://github.com/gorilla/mux
  - Negroni for HTTP route Middlewares: https://github.com/codegangsta/negroni
For testing HTTP server-side Go applications, the httptest package is quite helpful: https://golang.org/pkg/net/http/httptest/

As you can see, most of what you need just comes with the standard library, which is quite professionally designed and coded (reading through their source is a pleasure). The two external dependencies I mentioned are helpful, but not a strict requirement to build a server-side app.

The main obstacle people have with using Go, IMO, has little to do with its ability to be used as a server-side language, and more to do with the nature of the language and the consequences of its design choices. I think people who strongly prefer terse dynamic languages have trouble adopting/enjoying Go and its way of doing things.

For instance, someone in this thread said that Go's structs and static typing make it harder to work with JSON. You could also make the argument that this is a strength: all I have to do is declare a struct to specify exactly what I want to read from / write to JSON.

Some people have complained about Go's verbosity. Sometimes they are right: certain things in Go take a little more text to express. I'm convinced that Go is a language optimized more for readability than for writability. You typically can't write "expressive" koans in a line of code... but your teammates probably have a higher chance (in my subjective opinion) of understanding the Go code you wrote a year after you've forgotten what you were thinking about.

I could go on and on, but now I'm just fighting the eternal language holy wars :-)

I hope I've given you a starting point for more research if you were considering Go for a server-side web application.

The cost and pain of developing software is approximately zero compared to the operational cost of maintaining it over time

not my quote

The site blocks Tor users; ugh.

Try a different exit node; the website uses Cloudflare, so if you can access HN, you should be able to access that one as well.

Tor creates new routes every X minutes (usually 15). It also uses a different route for every IP it routes to. So they'll always go through a different exit node.
