Practical Go: Real-world advice for writing maintainable Go programs (cheney.net)
688 points by ra7 on Feb 21, 2019 | 226 comments

> Naming the Config parameter config is redundant. We know it's a Config, it says so right there.

> In this case consider conf or maybe c will do if the lifetime of the variable is short enough.

This seems petty. Is it really that problematic to type out a few extra characters?

Yeah, I used to go by this advice, and I found the maintainability of my code dramatically increased when I typed out full names. I don't even use "i" for loop variables anymore. If the length is a problem, invest in an editor with autocomplete.

Well-known abbreviations are fine, like "iter" and "prev", but single-letter variable names notoriously impede readability for me.

I feel like I have the opposite problem.

If the name is more than a few characters long, it starts to become non-instantaneous to recognize it. Things get much easier to follow with visually-instantly-recognizable symbols.

So in conditions where a variable is used over a short area in the code (or where it's used _constantly_ over a wide area), I prefer short variables.

I think you're identifying the core issue here, that is, frequently vs infrequently read code. I'd argue that if you've spent enough time in go, variables like `i`, `conf` and `ctx` will become very recognisable and easy to skim over, if they're always used in the same context.

If it's less "boilerplate" code, I'd go for more descriptive names. I think.

I like the sorta described "rule" in this document; the longer the variable is used, the more descriptive it should be.

Totally agree. If you have to scroll up to be reminded what some variable means, a longer name is good. But if its usage is a few lines away from its declaration then there's no reason to add visual noise to your code.

It's easier said than done, though. We started following this a few years back, but while the usage was "a few lines away" originally, the code evolved and now there are parts where it's 20+ lines away. This means that short var names need to be continuously "enlarged". Well then, why not start with a "medium" name and avoid all that headache? ctx is a good enough compromise between c and context. (c can be client, config, context, certificate ... you get the point.)

Rarely do you have a case where a variable outgrows its simple name (and value) and you keep it the same. Even when a function adopts new characteristics, it's advisable to change its name or make a new one. If you initially had a variable `i`, created and destroyed in 5 lines of code, but that has grown to 20+ lines, I'd recommend you (1) not add the extra 15+ lines if they could be put in a function (you get to name that section of the code), or (2) rename the variable. But 5-line code that becomes 20+ lines is both strange and interesting.

Well as the article states, one-lettered variable names should only really be used in tight loops; `ctx` is a better name for e.g. a function argument.

This likely wouldn't be hard to write a linter for: is the name 1-2 letters (other than a method receiver, "i", or maybe "j"), and is it referenced more than 5 lines from where it's declared? Lint warning.

I tend to agree here but I limit that to block scope as a general rule, I also only bother with really common concepts.

id: identifier

i: loop counter

j: inner loop counter

c: general-purpose non-loop counter

a, b: comparator methods

k, v: key/value setters, array iteration

e: event, error, or exception. Contextual rule, but rarely collides.

Other than those I pretty much always write entire full-word names. If it's abbreviated or shorthand, it likely sounds funny in my head, or it leads to ambiguity. If I need to even ask / recall for a split second, it's not good enough.

Programmers incorrectly place far too much weight on time to type. I write like ten lines of code on a good day. It's just not important. I want readable code that is clear and concise as possible. Every branch, block or shorthand variable raises a question I have to think about before moving on.

Clever code is why I call ten lines of code a good day. It sure as shit isn't my typing speed.

Dense code that benefits from dense variables (which describes a lot of "pure algorithms" work) I often approach by aliasing the variables to shorter ones. It's all in the same body, so the context isn't easily lost.

IMHO short is OK as long as the name makes sense and is easy to connect to the real meaning. For instance, "msg" is a perfectly valid name for a message var, and even "m" in some short snippet of code is obvious, but calling it "a" or "x" or something completely unrelated like that is a bad idea. As a matter of fact, calling it something too neutral like "data" can be just as bad if it's not clear which data you mean. Context is everything in making it easy to understand.

For me Config just isn't one of those cases, HackerNewsOnYCombinatorMessageBoard on the other hand becomes an opaque mess to eyes when it is mixed with other variables of similar length and complexity.

Regarding single-letter variables, mathematical functions might be an exception. I think writing func gcd(a, b int) int {...} is better than other alternatives. There is simply no need to assign any more meaning to the arguments other than their type.

Maybe if you - and more importantly, everyone that will read the code in the future - are comfortable with domain-specific expressions like that. It depends on the audience really.

As an extreme example, scalaz is similarly a very specialized DSL. Or in my personal experience, functional constructions like map, flatMap, foldLeft and reduce (which I never learned in school).

No, it's not. The bigger deal is that single-letter vars make it much harder to search for variable usage in files using arbitrary editors.

Don't use arbitrary editors, or editors that don't support "search for standalone identifier" (e.g. surrounded by spaces, in parentheses, with a ; after, etc.).

What would be the signature of your gcd function?

Numerator, denominator?

For GCD? I would find those names very misleading since semantically the order of the arguments to GCD is irrelevant (even if in the implementation you typically mod by b there's no reason you couldn't mod by a).

This is not mathematically correct. Those names would be pretty misleading, actually.

Good luck with any bigger equation then

I generally find that if you need multi-letter variable names, it means your function or scope is too long, or manipulating too many things. It's a nice little red flag for me.

Up to four or five single-letter variables is pretty trivial to remember. Especially when three of those are i, j, and k. More than 6 or 7 rapidly becomes painful. But if you are manipulating 6 or 7 variables _in the same scope_, you are doing too much.

I also agree with Pike's comments about typography here: http://doc.cat-v.org/bell_labs/pikestyle

I could not disagree more. When I'm skimming code, I want to immediately know what a variable means. I don't want to go cross-reference elsewhere.

If you have to go "cross-reference" elsewhere, you are modifying something too far away from you. That's a giant sign of spaghetti code.

Also, names vary with their contextual scope. Larger scopes generally mean longer names. Russ Cox gives a succinct description here: https://research.swtch.com/names

  A name's length should not exceed its information content.
  For a local variable, the name i conveys as much information as 
  index or idx and is quicker to read. Similarly, i and j are a 
  better pair of names for index variables than i1 and i2 
  (or, worse, index1 and index2), because they are easier to
  tell apart when skimming the program. Global names must convey
  relatively more information, because they appear in a larger
  variety of contexts. Even so, a short, precise name can say more
  than a long-winded one: compare acquire and take_ownership. 
  Make every name tell.
Variables should generally have short scope (we don't want a lot of global, or even package level variables). So _variable names_ in particular should be short.

> For a local variable, the name i conveys as much information as index or idx and is quicker to read

This is only true because i is a specific, common abbreviation for index. When writing arbitrary glue code, a single letter variable would be a meaningless abbreviation without shared context. If you encounter "i" and it doesn't mean "index of a for loop", you're going to be taking additional time parsing meaning.

An example where I disagree, and use 1-letter names daily: arrow functions in both Java and JS.

`usersList.stream().forEach(u -> someSet.add(u))`

It's immediately obvious that u is a user in usersList. I realize that it's debatable if u is really more readable than spelling out user, but I prefer it, and I don't think anyone is going to be confused by it. If the chain does get really long, also, I will spell it out explicitly.

Shouldn't you be able to write that as `usersList.stream().forEach(someSet.add)`

For Javascript at least that would work.

I haven't written Java since like Java 5, but I believe Java 8 has syntax for this as well: `usersList.forEach(someSet::add)`


Sure, for a lambda I think it’s arbitrary. For larger functions / classes it matters

I disagree. I am pretty sure there's some code that is short and simple but where you want descriptive names anyway, like financial computations.

>I generally find if you need multi letter variable names it means your function or scope is to long, or manipulating to many things. It's a nice little red flag for me.

Many times your function should be longer, rather than shorter.

Short methods and functions used just for the sake of being short move the complexity of understanding into the interactions between them, making the logic harder to follow when it could have all been in the same place (for related functionality, of course; I don't advocate having a function do 2 different irrelevant things).

Readability is a problem -- long variable names are harder to skim, and rapidly begin to blur together for me.

Agreed. Step through the code in a debugger if you really want to understand what it's doing. Even unreproducible problems with production code can succumb to raw understanding gained from stepped test cases.

In short Go functions I tend to like single letter variable names. Maybe because it makes me closely consider the behavior of the program instead of trying to assume by the variable names.

Well, for most things OK, but i is so well established that if you don't use it for loop indexes, you probably impede the readability for those reading your code.

I think 'k' and 'v' are fine for iterating over maps.

I am fully there with you; the only exceptions are still using "i" in short for loops, "e" for WPF/Forms event handlers due to convention, and typical math symbols like x, y, z.

I stopped using "i" when I had it in a context where it could be easily confused for "imaginary constant".

> I don't even use "i" for loop variables anymore.

Yeah, I noticed that for myself, too. So instead of i I might use frame_index, or whatever it "actually is". Up to a certain length it seems faster to just read what is there, without an additional mental translation step. But to be honest, I just do it because I like it.

I do that off and on, mostly because with the editor I use, it's harder to highlight single letter variables.

And this is worse advice:

> Functions should do one thing only. ... In addition to being easier to comprehend, smaller functions are easier to test in isolation, and now you’ve isolated the orthogonal code into its own function, its name may be all the documentation required.

Using single-caller functions as a substitute for comments makes the workings of a specific operation much harder to follow, as you have to jump around the source to understand its effects.

A long function is easier to understand than an exploded one.

Also tests should target specific operations (aka functional tests), not every single function in the program.

EDIT: Every function you add becomes part of your internal API. Any API, exported or not, should comprise a cohesive collection.

> A long function is easier to understand than an exploded one.

This is a pretty controversial position, and quite situational in my opinion. I absolutely agree that having to hop all over the source to understand something is frustrating, but that doesn't mean that very long functions are the right solution. Some combination of reasonably named helper methods and a function flow that makes the logic easy to parse should be the goal; either end of the spectrum is a problem.

The summary is great:

> If a function is only called from a single place, consider inlining it.

> If a function is called from multiple places, see if it is possible to arrange for the work to be done in a single place, perhaps with flags, and inline that.

> If there are multiple versions of a function, consider making a single function with more, possibly defaulted, parameters.

> If the work is close to purely functional, with few references to global state, try to make it completely functional.

> Try to use const on both parameters and functions when the function really must be used in multiple places.

> Minimize control flow complexity and "area under ifs", favoring consistent execution paths and times over "optimally" avoiding unnecessary work.

I took issue with this because it's conventional wisdom, and it does a fair bit of damage.

Single-caller functions attract other callers over time, gain backwards-incompatible features, and result in regressions.

This is one reason I like nested functions. They’re not available to the surrounding scope so they don’t succumb to these weaknesses, while also allowing you to organize your very long function internally by task. I use ‘em in Python all the time.

It’s a bummer that more languages don’t support them, though you can get there with lambdas too, sometimes at the cost of more syntax.

Go has closure functions; great feature.

One of the (few) things I like about Javascript is the ability to define a closure anywhere in the containing function, so it appears in the order of operations:

  function f() {
    setTimeout(fDing, 2000);
    function fDing() {
      console.log("ding");
    }
  }
Disclaimer: I have no Golang programming experience.

I am curious about this approach, though... you're nesting behavior and/or logic; doesn't this further obscure the meaning of the code and contribute more to the need to jump around the source to figure it out?

Go has anonymous functions too.

The Go Programming Language book (Kernighan and Donovan) has some examples of them.

Now write a unit test for

    function fDing() {

Honestly, with modern JS I am not sure this feature is that great. Looks more like a code smell these days, imo.

Edit: Formatting

Now write a unit test for the inline block of code within the longer function which would have become the nested function???

You write the test (where any is needed) for the outer function.

Edit: the start of the discussion was about using nested functions to decompose what otherwise would have been “unpartitioned” long functions. Such blocks of code nested within a long function would not be possible to unit test, either.

Unit tests, rather than integration tests, are usually bogus, anyway, though.

You write unit tests for units of code. A function with nested functions inside of it is a single unit of code; that's essentially what those functions being nested, and hence not directly invokable from the outside, indicates.

Make private functions that are only visible to the current module. Then write your unit tests directly in that same module, next to the functions, so they can access them even when the rest of the world cannot. Of course, this requires sensible language support.

Hierarchy vs list. I don’t want a list (of subroutines). I want a tree of self contained routines. Only your containing routine uses you. Which “private“ routines use which? (I know the IDE will tell me about this routine, and that routine, but I don’t want to have to ask)

Your employer doesn’t want you testing getters, anyway, but rather features. Unit test fanatics need to stop.

I guess unit tests were useful for C++, when it was constantly crashing everything :-(

C++ and its legacy need to ride off into the sunset, already.

A bit of our “heritage” in programming destroyed by the C family of programming languages

Pascal, like Algol, had nested subroutines for decomposing longer operations without leaking the details.

Nested functions is one of the things I like about JavaScript as well.

C has static functions for decomposing longer operations without leaking the details.

For better or for worse, taking advantage of that forces you to keep source files fairly short and cohesive in functionality.

You are still leaking the details of the function that is the sole caller of those other functions. It's not leaking across the translation unit boundary, but that's not the only boundary that matters.

ML derived languages, Julia, D and C# support them.

GNU C supports nested functions as an extension. I use them for the exact reasons you mention.

That extension results in executable stack memory, which is going to be a non-starter for many developers.

I believe that that only happens if you pass a nested function as a function pointer. Then gcc emits a trampoline.

If you just call it from the enclosing function it's fine.

Yeah, agreed. I do this in Rust a lot too, and I picked it up while writing a lot of Python.

C# has inner named functions.

An excellent balance is to try to make a function operate at only one level of abstraction at a time. It's a bit of a flexible guideline, but basically you shouldn't call `isUserActive`, do some complex arithmetic, extract data from a complex data structure, and call a templating engine all in one function, since those are all at different levels.

As long as what you are doing is approximately the same type of thing, it is fine to do a lot of things without breaking readability.

I believe you've misunderstood this. A function can be very long and do only one thing or very short and do many things.

The function:

  func ManyThing(i int) int {
    return i + 1
  }

does two things, and it's two lines long. The function tcp_send_message_locked (https://github.com/torvalds/linux/blob/master/net/ipv4/tcp.c...) does one thing, and it's 261 lines long.

Shorter code is _indicative of_ orthogonality (what he asks you to break on), _but_ it is not the same thing.

My critique is of single-caller functions as a documentation device, not shorter functions.

But he doesn't advocate that... anywhere. He advocates factoring out orthogonal code to get shorter functions.

Inclined to agree, expounded upon here (“Classes should be deep.”): http://alex-ii.github.io/notes/2018/10/07/philosophy_of_soft...

> Functions should do one thing only

Well, that's extremely good, and standard, advice.


Longer functions are much more prone to causing errors, and errors that are harder to find. It is honestly much better to have functions that do one thing and one thing only. Might not always be possible, but it is always the best way to code.

It seems that belief is unfounded. See the link posted by @xtian.

I think “doing one thing” and “getting called one time” are getting confused.




Those could do one thing but could be called at several points within a larger program, which I think you are fine with. Or they could only be called once, in which case I think you are saying it might make sense to just inline them.

If I understand you correctly, then I agree.

I don't necessarily agree with "one thing only". At least, for things that start simple and grow as needed, I split things into functions either to not have to copy and paste the same code, or for readability/structure purpose. But not out of principle and always, until I can't divide any further. I can still split things out into functions later, should I need it, but doing it "just in case" and then not even having a use for it doesn't save me time and just adds overhead.

Though it also depends on whether I'm doing something familiar or something new, if I'm doing something new I might split things up more to help me conceptualize the problem. But when I'm just making a quick CLI tool, I might put it all in main first and only split it up as needed.

The argument is that it is not fundamentally better to use the longer name in the given context, so why make it longer? I'd say it isn't about how long the variable takes to type either, but how long the code takes to read and parse.

I'm wrestling with this at work right now, and the short names don't really bother me; they're right that if you have the type, the shorter name can make the code slightly easier to read. What annoyed me is that with those single-letter names I always got collisions, so my naming was inconsistent. I personally ended up using medium-length names (like conf), except where there was only 1 local variable (and sometimes two or three), because in that case there were no collisions and what that variable was is very, very clear.

It is problematic for readability reasons when you are writing complex expressions:

    configuration := NewConfiguration()
    configuration[word] = parameters[values[line][column]] - parameters[values[column][line]]

    conf := NewConfig()
    conf[w] = params[vals[i][j]] - params[vals[j][i]]

It takes your brain more time to parse the first version. Now, there is an obvious limit; this is probably too much:

    c[w] = p[v[i][j]] - p[v[j][i]]

unless maybe the scope of the vars is very limited.

In your example I would compromise with:

    conf[w] = params[vals[line][column]] - params[vals[column][line]]

or even:

    conf[w] = params[vals[l][c]] - params[vals[c][l]]

if you really want to save a couple of characters. Confusing which loop index is indexing which dimension is far too easy.

The first version was too long to fit on the screen for me. Reading the second, I missed the swapping of `i` and `j` -- only noticing it when I went back to the first version to figure out which parts of the line I would assign to something if I were to rewrite the code.

It says so in the signature, but Go code tends to have long functions, so it's kind of a fallacy to say that it's going to be easily recognizable 50 lines in because it's in the signature. Now if this were Haskell and it was a single small expression, it would be different.

The issue w/ calling it "config" is that you end up with the confusing scenario where "config" is the object and "Config" is the type, differing only in casing.

Why is that confusing? Local variables are, idiomatically, never capitalized in Go, so the distinction is obvious at a glance.

Sorry. To clarify: it's not that I'm confused by the distinction between capitalized vs uncapitalized, it's that visually, "config" and "Config" look quite similar at a glance, whereas "c" and "Config" are clearly visually distinct.

As are unexported objects.

Yeah, getting to idiomatically capitalize names has been pressure to move them to their own package in the past, which feels awkward.

This is a fairly go-specific issue due to its unexported vs exported convention. Many languages solve this through case conventions

They don't use any IDEs or modern editors, but vim as a bare-bones text editor, without syntax highlighting or word completion.

Also, in this case Go makes you put types in the name, since it doesn't allow function overloading.

It can be a problem when Config comes from package `config`.

Coming from the Java world, I find Go comments and godoc really limited. We can't link between functions and types. We don't have a standard way to declare input and output, and there's no distinction between a normal word and a Go identifier. Refactoring using tools (e.g. GoLand) usually leads to unexpected text replacements.

Take following functions, for example:

  // Auth checks if the user credential exists in bla, bla...
  // ... (many more explanation)
  Auth(user User, authProvider AuthProvider) bool

  // AuthDefault checks the user against the default **AuthProvider**
  AuthDefault(user User) bool {
    return Auth(user, new(DefaultAuthProvider))
  }

If only the AuthProvider in the second function's godoc could be a link to the first one, we wouldn't have to repeat the explanation. Devs would be able to discover the explanation easily via their IDE. This alone would be very helpful for the maintainability of any big project.

Flip-side is less complexity and the doc comments in raw code look better. Go's generally all about, "how much can we strip away and still have things work".

Your criticisms re refactorings and links are on point though.


The problem with the Java way, as I see it, is that it puts more of a burden on the programmer to the point where a majority simply won't do it. Getting programmers to document their code is an uphill battle to begin with. But the bureaucracy that Java(doc) imposes makes it even harder to win in all but the most elite institutions. What I see a lot, is IDE-generated documentation which can be automatically inferred from the code, and is therefore redundant. While it superficially looks like documentation, it really is just line noise. I would agree that Java documentation at its best can be better than Go documentation. But I suspect that the average Go codebase is better documented than the average Java codebase.

I guess it's hard to get around to the advanced features when so few people even bother to take advantage of the basic ones.

Documentation for most Go libraries I've come across has been pathetic compared to similar things in Python or PHP.

edit: Compare to something like Racket: https://docs.racket-lang.org/plot/intro.html?q=graph#%28part...

Disagree hard on Go vs Python. Python is my day job, but documentation is rough. You're likely not going to get documentation for all the types, and if you do it's usually in one giant page and you can't tell which class's `__str__()` docstring you're looking at. Often you'll have a method with some terse description for parameters that don't completely describe the types accepted for a parameter. Most of this stuff comes for free with godoc even if you don't take the time to add a comment, and most libraries add at least some minimal comment (the linter complains if you don't). Godoc also has really nice support for executable examples, and examples run as part of the test suite so you know they're up to date.

EDIT: Racket shares many of the same problems as Python. Dynamic languages need more documentation than static languages, yet they (or rather, their users) often fail to deliver.

I'm mostly comparing this:


to this:


If the source weren't hyperlinked, I'd be lost on the latter.

edit: I love Racket's documentation. I've been using it for so many in-house things because I can just read what the function does in English--instead of having to run experiments or dig through someone else's code.

First of all, https://godoc.org/net/http is the page to visit; it serves documentation for all public packages, not just the standard library, and it's generally a little nicer. That everything is on the same site is a big deal because maintainers don't have to link to their dependencies and readers don't have to deal with the inconsistent presentation of documentation across the ecosystem.

The distinction I see is that someone put a lot more care into Flask's documentation (including the theme, which is actually a little hard to read since it renders certain things in small italic font), which is great but not indicative of the broader ecosystem. The most significant distinction wrt readability is that the Go page has types for _every_ parameter with links to the type definition. For all of the care put into the Flask doc, you still see things like this all over:

> view_func – the function to call when serving a request to the provided endpoint

I have no idea what the signature of that callback is. Meanwhile Go has https://godoc.org/net/http#HandleFunc which clearly shows the type for the handler callback (with links to types):

    func HandleFunc(pattern string, handler func(ResponseWriter, *Request))
> I love Racket's documentation. I've been using it for so many in-house things because I can just read what the function does in English--instead of having to run experiments or dig through someone else's code.

I tried to use Racket (it looks really neat), but I just spent so much time trying to figure out what to pass into each function. It was nearly impossible since types are often absent. That's just not how I want to spend my free time. :(

That sounds like a stronger defense of types than Go's documentation.

It largely is. Formalized types allow a documentation tool to do a lot of heavy lifting. But Go's documentation system is still quite a lot nicer than javadoc, doxygen, etc. for other statically typed languages.

> Godoc also has really nice support for executable examples, and examples run as part of the test suite so you know they're up to date.

Ah yes, the nice godoc feature which only got added to the Python stdlib in *checks notes* 2001.

I didn't mean to convey that it didn't exist in Python; I was just remarking that it was a nice feature. No need for the snark.

I started using Python with version 1.6, and never used it for anything more than scripting tasks, so my experience is mostly limited to the official docs.

The manuals provided with Python are great when compared with quite a few alternatives, some of which require you to go buy a book to properly learn them.

You absolutely want to add documentation linters to your build process as early in any project as you can. Just requiring that something be in the function docstring is usually enough to get people to put at least a minimal amount of description there.

I'd argue there's a failure of the code to some extent if it needs such extensibility in documentation. I've often found that when I need more info than the godocs give, simply clicking through and diving into the source provides me with all the answers I need.

This attitude, which I'll paraphrase as "needing any feature that I personally consider unnecessary is a code smell," is endemic to the Go community and a big reason why I still find myself frustrated by it on a daily basis even after a month of using it for a greenfield project. I don't like the language itself, since I'm generally in favor of a language offering more affordances rather than fewer, but it's fine to write in and I'm ultimately concerned with making the pragmatic choice. When I go to look for what the idiomatic way to do something is, though, wow! It's rarely just "this is the way to do it, since the language optimized for a particular use case by making the tradeoff of omitting the usual features for this": instead, it frequently goes on to assert that those features are unnecessary in nearly every case and languages that provide them are wrong, or old-fashioned, or too Academic, or the people that use them don't care about Getting Things Done. Also, throw in a few cargo-culted potshots at Java for good measure! I've used a good number of languages professionally at this point, and the Go orthodoxy is easily the most off-putting I've ever encountered. I don't know why it's become so aggressively totalizing, but it's not a good thing.

I couldn't disagree more. I think it comes down to personal preferences, mostly.

I get so much more done in Go, have fewer maintenance issues, and more frequently collaborate with other folks/contribute to other projects.

> I've used a good number of languages professionally at this point, and the Go orthodoxy is easily the most off-putting I've ever encountered.

I could -- and do -- say the same about the Java ecosystem.

For what it's worth, I don't particularly have a cross to bear with respect to Java. I cited it just because its invocations as a Bad Language bogeyman are common in the Go-focused writings I've found, and those are far out of proportion to its inherent flaws (of which I believe there are many!). I completely agree that it's a matter of preference, which I think is supported by the fact that I found Go to be the best choice of language in the context that I'm operating in whatever my reservations.

I'm confused about your assertion that the Java ecosystem has a similar problem, though, because my sense is both that it has a much less identifiable orthodoxy by virtue of being so widely used across so many domains, and that the new guard, such as it is, is very much in favor of creating libraries and utilities that favor simpler implementations and interfaces contra its "architecture astronaut" baggage.

You make a fair point; I'm not adequately exposed to the Java ecosystem, I mostly end up reading Apache projects.

I guess my remarks could be rearranged as: in general, I find Java, and its JVM friend Scala, worship abstraction over simplicity. Complexity is constantly confused for convenience, and that makes me sad. The number of files I need to read to understand _any_ piece of Go I could likely count on one hand, if that. For the JVM-based approach: dozens, if not more.

Abstractions _in theory_ are great: they reduce the cognitive burden, they simplify behavior, make it easier to rationalize and cast judgement; But in practice, that just isn't true. You _always_ need to peel away the abstraction.

In Go, countless times, I've found myself reading the standard library implementation. Is this good? No, of course not. Ideally, as a consumer, I never need to look under the curtain. Things should just work. But that tends to never be true, and looking under the curtain is an important aspect of computing (c. 1970-1999) that permeates everything we do.

Go makes it really easy to look under the hood, see what's happening, and more-or-less instantly achieve clarity. The only other languages I've ever used that came close were C and C++, and history has demonstrated how well that's worked out.

To add to your point: last week I cleaned up a part of a Java project sprawling across 30 or so files and 5 KLOC, down to 2 Java files and under 1 KLOC. The main cleanup was using a library properly (it had already been present as a dependency for years) and removing innumerable levels of indirection.

This code was written in a really roundabout way, in the Java tradition where e.g. setName() would call getName() -> initName() -> initClassName() -> getDefaultClassName() -> initDefaultClassName() just to set one damn string value. And believe me, this getName() function would get called a grand total of one time in the whole project.

And it was possible for me to do so because learning and writing Go made me confident that straightforward code is not a sign of a junior/inexperienced developer. Because in Java it will be seen as exactly that if you do not bury the actual logic in ten levels of abstract crap.

GP isn't saying that Go is not productive, but that the community has an irrationally hostile attitude towards anything that is not possible, or easy, in the language.

And I agree - it has become the prime example of "you're holding it wrong" school of programming language design and apologetics.

To the point that if someone put in the hard work to fork the language and implement the features some consider missing, they would be burned at the stake instead of praised for their work, which is why anyone with the skills to do it just goes elsewhere instead.

Do you really think Java or any other professional language would accept a language feature developed prior to explicit approval from the committers?

If someone thinks that because they developed a feature and put a lot of hard work into it, the language maintainers therefore have to merge it, I am sure they would have no option but to invent their own language.

Yes, that is how a large majority of contributions work.

Language improvement proposals aren't taken in just with words on paper.

What I find off-putting is that they lie about what they are doing by claiming there are 'technical' reasons[1] it has to be that way, when other languages prove that's utter BS.

I'd have more respect if they just came out and said: this style, because we don't want conflicts over style, and since it's our language, it's our way or the highway. And: this feature is abused a lot in other languages (we think), so we omitted it.

[1] I hate that programmers habitually lie about technical issues, especially to management and others, because those people aren't as dumb as programmers think. It's why your non-technical manager doesn't trust you.

Yeah, every time I need to know why my car isn't working correctly, I take it apart. All the answers I could possibly want.

Fixing a car in everyday situations is a false analogy to writing code or using library code.

The authors have chosen to make it as limited as they have on purpose, as I understand it.

I disagree with them, but here we are.

I've been slightly tempted at times to make something with markdown but never got enthusiastic enough to put in the time. Plus, markdown is technically a bit too powerful, so I'd want to cut it back down, and before you know it I'm inflicting the 3,124th custom markdown dialect on the world. Really I just would like bold, italic, monospace, identifier linking as you say, and numbered and bulleted lists not in the unformatted text wrapper.

I'd just like to be able to toggle javadoc, phpdoc, godoc visibility on and off entirely (not collapsed to a line, or a marker): just make them vanish until I summon them back.

I find them distracting when I'm reading the code but I don't want them not to exist either.

EDIT: Another thing I'd love to be able to do is 'tag' a variable with a synonym and have that appear next to it everywhere.

so $strCstAcctRec (I wish that was fake but it's pretty much straight out the codebase I inherited..) would show up as $strCstAcctRec (CustomerAccountRecord).

There are a bunch of ergonomic features like that I've considered over the years, I might at some point try implementing some of them as an intellij plugin.

I've used an IDE that was aware of doxygen comments. Seems like that's close to what you want. You could right click and it'd show you the comment. I loved that IDE and it's long gone.

You should put the docs for AuthProvider on AuthProvider, not on Auth or AuthDefault.

You didn't get my point. Of course the doc should go where it belongs. In this case, I want the doc for my base function to be easier to discover from the helper functions.

The docs for Auth() will be right next to the docs for AuthWithDefaultProvider(). Like, right above one another.

I agree. If the documentation seems duplicated, chances are the code is too. Worded oppositely, if the code is duplicated, chances are the documentation is duplicated or repetitive. Both the documentation and the main logic it describes should live in one place.

Would love to see better separation on the page between "Bad" and "Good" examples. While a good read, it's difficult to quickly distinguish between the two.

For example: https://github.com/airbnb/javascript

True. Google will index it, and I can see many people copying and pasting the bad code, considering it the best practice for doing something.

I'm not a Go user myself, but I skimmed through the document. What I found was a lot of good programming advice that is generally applicable, rather than being limited to the Go language. (It seems most of the Go-specific stuff is in section 5.) It's always good to see robust coding practices being promoted, regardless of language.

About half-way through and I think this is a great article, in particular the quotes and I also agree that the first 4 sections are generally applicable.

One thing I disagree with is the remark about having fewer, big packages. Though conceptually I agree that avoiding having too many public APIs that aren't widely used makes sense, in practice--at least on the types of projects I tend to work on--I find that directing people to split things into a few packages forces them to think about a decoupled design with good APIs between the components. This could certainly be done with discipline inside a single package, but unless everyone working on the codebase is very diligent about this it's easy for abstraction leaks to creep in.

Ultimately it's a judgment call, but I think an earlier paragraph (copied below) is far more important than optimizing for fewer packages or fewer exported types and functions, especially since (as is also pointed out in the doc) you can use `internal` subdirectories to make APIs project-private if you are writing a library that is consumed by other projects, as opposed to a service.

> A good Go package should strive to have a low degree of source level coupling such that, as the project grows, changes to one package do not cascade across the code-base. These stop-the-world refactorings place a hard limit on the rate of change in a code base and thus the productivity of the members working in that code-base.

I agree with you. I like the way packages are isolated from each other, meaning that to understand a package it is usually by definition a good start to simply read what is there. Smaller packages mean more bite-sized chunks of the program. And I think the discipline of slicing up your program this way is very, very good for the design, and often makes the tests easier to write to by significantly shrinking the surface of what your tests have to "fake" in order to test your code. I think it's just a whole heapin' helpin' o' benefits.

However, I feel myself to be in the minority on this one. To which I basically shrug and write my code with lots of relatively small packages. It really only affects code you're working on, or that your team is working on. Things you pull in as libraries and have no direct interaction with don't matter too much on this front.

If packages are too small it can get hard to understand the code if you're not familiar with the structure yet. Working from the bottom to the top level can work, but it might also be difficult because you are missing the context for the low-level packages to make sense.

In general I found larger packages tend to produce more direct / pragmatic code with less indirection which is usually easier to understand, even though it also feels wrong to me from a theoretical standpoint.

"If packages are too small it can get hard to understand the code if you're not familiar with the structure yet."

I solve that with describing the context of the package in the opening prose section. I think this section is underused in every language community I've seen, even though all the automated doc systems support a top-level summary/contextualization/etc.

I think that an advantage of small packages is precisely that it is easier to understand if you're not familiar with the structure yet, by isolating how much structure you have to understand. Large packages, or languages with loose barriers, force you to eat huge swathes of the project at once to understand the code. Small packages are both bite-sized on their own, and also the packages that use the small packages often allow you to gloss over the used package while you're learning that package.

I don't end up with much indirection caused by the package boundaries. (Where there are interfaces I would usually have them anyhow for testing purposes.)

This tends for me to be one of those places where I wonder if I'm just doing something really different than most people. Another example is all the many people over the years who have tried to convince me, with varying level of politeness, that testing code should only use the external interfaces, or dire consequences like having to rewrite all the testing code if I tweak the package will happen. All my testing code uses private interfaces unless there's a really good reason not to, and maybe once in ten years have I had a serious rewrite of the test code come up. The threatened problems don't seem to happen to me. (And I am pretty sure I'd notice them if they did, although I guess I can't completely discount the possibility that I'm just too oblivious somehow.)

Certainly, if they cause you trouble, either because your style is different, or your problem domain is different, or whatever reason, don't use small packages.

I often find myself wishing Go either allowed import cycles between packages, or allowed namespaces within a package. Because they can become unwieldy.

For example, a common convention is to avoid redundancy. Let's say you have a package "builder". You're encouraged to have "builder.New()" as a constructor, not "builder.NewBuilder()". Fine. Now let's say you need two types of builders: one for building "schemas", one for "objects". Your original constructor now needs to be something like "builder.NewSchemaBuilder()" ("NewSchema" would be confusing, since the function creates builders, not schemas). Or maybe turn it into a verb: "builder.BuildSchema()", "builder.BuildObject()". Part of this is due to the lack of statics, or we could've had "builder.Schema.Build()" or something.

You can also split the package up into two packages, one for schema building and one for objects. (Naming here can be tricky, too. Will it be "schemabuilder.New()", or "schemas.NewBuilder()"?) But if the two builders need to share types, you may end up refactoring the common types into a common package that exists only because of the split.

I have a concrete example of this right now for a small query language. The language has data types (common interface Type) and values (Value). Values can tell you their type. Types can construct values. But they're two distinct sets of declarations. The type implementations and the value implementations can't easily be split across separate packages without having a common package that exists only to hold the shared types. It'd be nice to have "types.String" (in types/string.go) and "values.String" (values/string.go), not "lang.StringType" (lang/string_type.go) and "lang.String" (lang/string.go).

Having a namespace option would help here. Everything could be in one package, but under separate namespaces.
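One partial workaround I've seen for the lack of statics is to hang constructor methods off package-level struct values that act as pseudo-namespaces. A sketch (all names here are invented for illustration):

```go
package main

import "fmt"

// Hypothetical sketch of faking "builder.Schemas.New()" style statics
// inside a single package: package-level struct values serve as
// namespaces for related constructors.

type Schema struct{ Name string }
type Object struct{ Name string }

type schemaNS struct{}
type objectNS struct{}

// Exported "namespace" values; in a real package "builder", callers
// would write builder.Schemas.New("users").
var (
	Schemas schemaNS
	Objects objectNS
)

func (schemaNS) New(name string) *Schema { return &Schema{Name: name} }
func (objectNS) New(name string) *Object { return &Object{Name: name} }

func main() {
	s := Schemas.New("users")
	o := Objects.New("account")
	fmt.Println(s.Name, o.Name)
}
```

It keeps everything in one package while avoiding "NewSchemaBuilder" style names, though it's a bit unusual and godoc groups the methods under the namespace types rather than as plain constructors.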

Could you work around this somehow using a combination of internal packages and type aliases in public packages?

Not sure. But sounds messy -- type aliases aren't really intended for that.

It was more a sincere question than a suggestion. I haven't written any serious Go since aliases were added and have never had any need for them so far.

Yes, and I think the quoted paragraph has so much more to do with coding around interfaces (behaviour) than with abstraction using non-exported package symbols.

This has got to be one of the best and most useful post i’ve seen about go (and code design in general) in a long time.

> A good Go package should strive to have a low degree of source level coupling such that, as the project grows, changes to one package do not cascade across the code-base.

I wonder though why there is so little emphasis on how important interfaces are in Go. I mean, section 4.5 talks about it a bit, but in my experience, this mistake is made far too often.

This is encapsulation, really, and simply makes the point that the principle applies to packages.

> 2.1. Choose identifiers for clarity, not brevity

This is a real problem in Go these days; people use one- and two-letter vars quite a bit, which makes reading code you're not familiar with practically impossible. On one project we simply switched to semi-Java-length names since our customers couldn't read the code.

Since when are variables names a language issue?

There is no such thing as "Java-length names", or whatever. There are good practices and bad practices, and they are universal.

I disagree with this because I personally change my coding style based on the language I’m writing. When I write Go I do use one letter variable names like u := User because I’m modeling what I see from some of the top Go developers (core team members, etc). When I write JS I usually follow the popular airbnb style guide and order my code differently than I would in any other language. When I write ruby I look for naming conventions that read like English. I’m always trying to write idiomatic code that will feel familiar to readers who write that language.

It's really more of a language community issue, but communities of a language are commonly referred to by the name of the language itself.

I'm sure there are many actual professionals using Go. Not just script kiddies.

I happen to be learning Go coming from a C background. I find "The Go Programming Language" by Donovan and Kernighan exemplary and the decades of experience that went into the language really show.

No need for insinuations, that's not a point of contention.

If you look past the book, and directly at the standard library, you'll find a common example: the "fmt" package, shortened to save three letters. Or compare the verbosity of these two examples from the language docs, and the length and descriptiveness of the variable names in them: https://docs.oracle.com/javase/tutorial/essential/io/cl.html to https://golang.org/pkg/bufio/#example_Scanner_emptyFinalToke...

I agree, it's a great language. These naming conventions are part of its developers and community.

Naming conventions are part of your own practices and processes.

There is no issue with having short names to describe well-known entities part of the language, or trivial code.

Your example from pkg bufio is trivial, but the variable names are still meaningful. There is a difference between clear and meaningful, and verbose.

Your example from Java is not very different.

I feel that the point is not being understood here.

If you’ve seen any amount of Go and Java code (starting with, say, the standard libraries), you’d know these communities take very different approaches to variable names.

The point is that you do not choose the way to name variables based on a sheepish "it's the way the community does it". You're responsible for your own code, and your organisation's code. It has nothing to do with the language.

I know the point you're making, but I don't think it's very convincing.

First off, OP is talking about a trend in the community to favor shorter and shorter names. He didn't say you need to code that way, but there is an increasing chance that the programmers you work with, or the libraries you depend on, will have adopted this convention.

Second, unless you are coding in a vacuum or starting a brand new project, you're usually expected to follow the conventions of an existing codebase. These were likely influenced by conventions in the community, which often are set by conventions in the standard library, which have much to do with the language.

But then you end up with a non-idiomatic coding style, which may be an issue if your code is open-sourced. I'd be very wary of a Java program where variable names are snake_cased, for instance: did they do it for a good reason? Are they just beginning Java developers? Will I run into other ad hoc coding practices when trying to understand/maintain/extend their code?

Again, there is nothing "idiomatic" in poor naming, nor do I see what open source has to do with it...

"Poor naming" is subjective. What is the correct name for a variable that associates a user's name with its configuration profile? userNameToConfigurationProfile? user_name_to_configuration_profile? userToConfig? profiles? userCfg?

If there was one true way to do it, everybody would adopt it (unless you consider some communities are just plain dumb).

We're discussing 'good naming' as defined in the article and first comment of this comments thread:

> 2.1. Choose identifiers for clarity, not brevity

Cosmetic considerations like snake or camel case are irrelevant.

The advice above is standard good practice that is not dependent on the language used.

Good naming is almost all that matters, in fact; that's how you express your intentions to the reader. You could dispense with all other constructs in a programming language as long as it's Turing complete, but no one would be able to understand your program then.

They usually aren't; however, they can be an issue if the idiomatic style trends towards bad ones.


The article says basically the opposite.

Local, short-lived variables are short, this is nothing new. See: the ephemeral loop variable i.

Says who? This sounds like teenagers at school... Not like the people who actually created the language and who are notable experts.

> 2.3. Don’t name your variables for their types

What's the best way to handle a situation where you use two different types for the same data? e.g.

  var usersMap map[string]*User
  var usersList []*User

How about describing the intended use of the collections? E.g.

    var usersByUsername map[string]*User
    var usersToEmail []*User

I’d call the former usersByID (if that’s what the string is)

You could use a struct with private fields, e.g.

  type Users struct {
      dict map[string]*User
      list []*User
  }

and attach methods to both access the data and update both forms in a coordinated way, e.g.

  func (us *Users) byID(id string) *User { ... }
  func (us *Users) sortedByID() []*User { ... }
  func (us *Users) addUser(id string, nu *User) {
      us.dict[id] = nu
      us.list = append(us.list, nu)
  }
Personally I'd use usersByID and users - I nearly always name my maps thingsByKey, and my default is that a plural thing on its own is nearly always a list, so adding a suffix of List doesn't add anything much here for me (Cheney's "Don’t name your variables for their types").


> 8.3. Never start a goroutine without [knowing] when it will stop.

100% agreed with the concept, but even the final example is flawed. That'll unblock and continue immediately after `close(stop)`, without being able to do two important things: it can't tell you when it's done shutting down, and it can't tell you if it encountered an error. Fixing this makes it even more complex.

Both of those are common and often necessary things to do - e.g. if you need to drain traffic, you need to wait until it's actually done before killing your process. Same goes for closing files (you might need to flush / sync first), OS resources that might survive your process, etc. This example will just kill your process as soon as everyone has been informed that it should stop, not when they're done.
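A minimal sketch of closing that gap (names invented): give each worker a way to signal both "I have finished draining" and "here is my error, if any", which a bare close(stop) alone cannot convey.

```go
package main

import (
	"fmt"
	"sync"
)

// Sketch: a worker that, on shutdown, reports both completion and an
// error, so main exits only after the drain work is actually done.
func main() {
	stop := make(chan struct{})
	errc := make(chan error, 1) // buffered: the worker never blocks on send

	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		<-stop // shutdown requested
		// ... drain traffic, flush files, release OS resources ...
		errc <- nil // report the result of the shutdown work
	}()

	close(stop) // ask the worker to stop...
	wg.Wait()   // ...and block until it has actually finished
	if err := <-errc; err != nil {
		fmt.Println("shutdown error:", err)
		return
	}
	fmt.Println("clean shutdown")
}
```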


Yeah it's a bit nitpicky, but it's even called out as "asking a http.Server to shut down is a little involved, so I’ve spun that logic out into a helper function". Except that the helper function can't ensure the involved server shutdown process actually shut down the server. That's a dangerous pattern to encourage, and it's an over-simplification that's absolutely rampant in go tutorials and code I encounter.

The workgroup lib he links doesn't (arguably?) have this flaw, but it has some strange / incomplete error handling details... E.g. if you Add 5 funcs, you'll only receive the error result of the first func to complete, not the first err or anything more predictable. If the first succeeds but all the rest fail, you'll see no error. Go's errgroup does "first non-nil error" at least, but you can still only get one: https://godoc.org/golang.org/x/sync/errgroup

> 100% agreed with the concept, but even the final example is flawed. That'll unblock and continue immediately after `close(stop)`, without being able to do two important things: it can't tell you when it's done shutting down, and it can't tell you if it encountered an error. Fixing this makes it even more complex.

The final example actually does both of these things already, but maybe you're confusing the purposes of some of the channels?

The loop at the end of main will wait until it receives one (possibly nil) error value from every task that was launched. Upon receiving the first error, it will close the done channel, which will then cause any remaining tasks to run their Shutdown function, which will attempt to gracefully shutdown the server. The error of each shutdown function is then returned and printed if non-nil.

There is no immediate termination, and all errors are communicated. It doesn't let the process die until every task has sent its error value.

Obviously, there are two semantics not enforced at compile time: each task must send exactly one error value, and each task is responsible for its own shutdown. If a task sends more than one error, there will be early termination. If a task doesn't shut itself down properly, that is its own fault.

Shutdown blocks until draining is complete, but ListenAndServe returns immediately when Shutdown is called: https://golang.org/pkg/net/http/#Server.Shutdown

>When Shutdown is called, Serve, ListenAndServe, and ListenAndServeTLS immediately return ErrServerClosed. Make sure the program doesn't exit and waits instead for Shutdown to return.

So no, this writes to the "done" channel essentially immediately once `close(stop)` is called, and ends the process prematurely.
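The pattern suggested by the net/http docs handles this: after calling Shutdown, wait for Shutdown itself to return before letting the process exit. A sketch (the real stop-signal wiring is elided; here Shutdown is simply triggered immediately):

```go
package main

import (
	"context"
	"log"
	"net/http"
)

// Sketch of graceful http.Server shutdown: ListenAndServe returns
// ErrServerClosed as soon as Shutdown is called, possibly before
// draining is complete, so we wait on a separate channel.
func main() {
	srv := &http.Server{Addr: "127.0.0.1:0"} // ":0" picks a free port

	shutdownDone := make(chan struct{})
	go func() {
		// In a real program this goroutine would first wait on a
		// stop signal (e.g. SIGTERM) before calling Shutdown.
		if err := srv.Shutdown(context.Background()); err != nil {
			log.Printf("shutdown error: %v", err)
		}
		close(shutdownDone)
	}()

	if err := srv.ListenAndServe(); err != http.ErrServerClosed {
		log.Fatalf("serve error: %v", err)
	}
	<-shutdownDone // only now has draining actually finished
	log.Println("server drained and stopped")
}
```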

This is so fortuitous as I started writing my first real Golang service and as a python dev I have no idea what I am doing.

But one refreshing thing is how opinionated the language and the frameworks are, as there's only one acceptable way to do many things.

I agree so much with this. I recently attended a Golang meet up and since I was one of the few who had been using the language regularly for over 2 years now i was asked to let the newcomers know my favorite "feature". I replied with the same point you make above and was met with a look of refreshment on people's faces.

Go lets you write One True Code for the most part.

I have found this advice spot-on for more subtle questions, over my last two years of learning and building with Go full time:


Also see back issues of the Go team blog for insight into concrete implementation realities of slices, interface types, GC and more.

Good collection of best practices overall. I would not say they are Go-only, as I read about them 5 years ago in Clean Code by R. Martin and another part can be found in Code Complete by Steve McConnell.

Once again, a good collection if you have no time to read the books. Anyway, it does not cover other parts like contracts/interfaces, the absence of comments (which can be a good sign), etc. I would recommend checking both CC books if you want to write better, more maintainable code.

PS. Anyway, Dave, thank you for the popularization of best practices.

I've always found Go's 'guiding principles' to be highly subjective, the annoying thing about it is that they are masqueraded as objectivity. Simplicity and readability for whom?

If you've been writing C and Java for years, all simple/readable means is 'familiar'. There are other definitions of simplicity

Go developers have a sectarian mind set. They obey the cult of The Three Creators, and some other minor advocates/evangelists like this australian dude.

> That’s a lot of repetitive [error handling] work. But we can make it easier on ourselves by introducing a small wrapper type, errWriter.

> errWriter fulfils the io.Writer contract so it can be used to wrap an existing io.Writer. errWriter passes writes through to its underlying writer until an error is detected. From that point on, it discards any writes and returns the previous error.

this is a cute trick.

In some cases it might produce a great result. For example, floating point NaN values work roughly like this: a special error value included in the type that can be computed upon without writing special control flow, until you explicitly check the final computed result to see whether you got an actual value or NaN.

but I fear the applicability of this trick in Go is limited, as it relies on a heap of idiosyncrasies belonging to this example:

The chain of potentially erroring operations being performed all involve a value of the same type, which can be wrapped. In a less contrived example there would be a variety of different types involved in the sequence of possibly erroring operations.

The code processing the wrapped error-hiding type must, roughly, not perform side effects when processing an apparently non-erroring computation, since the error wrapper type is now stopping the code from aborting early. It's completely non-obvious that this transformation is correct (i.e. results in a program with behaviour equivalent to the original) without reading through the implementation of everything that touches the error wrapper. This isn't helped by Go not having a way to tag functions as pure.

The new code now superficially looks wrong, as it isn't bothering to check the error values of the functions it calls in the usual way. This will probably cause a spray of linter warnings in your CI build about failing to handle possible errors.

In other languages a much more general way to address this kind of problem could be:

error monads ; or exception handling

Which can be used to short circuit the entire normal control flow.

> In other languages a much more general way to address this kind of problem could be: error monads ; or exception handling

errWriter is quite literally "let's do monadic error handling, except not just without monads but without generics or reified errors". And thus it can only handle a single source or error type.

Also FWIW monadic error handling != error monads. For instance Rust does the former, but its type system can't express the latter (so there is no monad or functor abstraction on top of Result).

I don't understand something.

> 5.1. Consider fewer, larger packages

> 5.2. Keep package main as small as possible

If you have one package, which is the main package, then how are you supposed to keep it small, and at the same time not creating more packages because somehow that is the work of the devil? He is telling me to try to fit everything into one package, but at the same time keep it really small. It is not going to work. Maybe I misunderstood?

In golang packages named "main" are special. They're not importable and they correspond to a single executable.

If you want to be able to reuse code in different packages in the future or have more than one compiled binary, you won't be able to put everything into a single "main" package

Right now I use many internal packages. It makes my code more organized. By many, I use 4-5 and the number probably is not going to change. I dunno if this is a good idea, but it seems like the best I have got that suits me.

Your main package can sit under `cmd/foo/*.go` and import a few larger packages

That's what it means.

Love Mr. Cheney, hate stuff like this.

Someone who has put the hours in will probably stray from all of this advice, and still end up with something beautiful. Meanwhile, for all of the other schmoes who haven't put the hours in, this is just more fuel for screeching about how this name isn't right or that comment is too long.

Dijkstra said GOTO was bad, and now we have callback hell and 10-layer inheritance hierarchies instead.

If you're writing the kind of thing where you need callbacks, GOTO isn't going to make it more clear what's going on...

Never said it would. I did imply that things haven't significantly improved.

Throw away goto and globals, replace them with anonymous functions and closures, and the new thing looks an awful damn lot like the old thing.

All I know is that we used to have retrained housewives writing physics programs. Now we expect 10,000 hours to write a halfway decent web form, and halfway decent doesn't happen nearly as often as it should.

This book is a great read for writing maintainable code: https://www.amazon.com/Clean-Code-Handbook-Software-Craftsma...

My advice would be this: If you have a specific reusable component.. Make a package to contain it.

Use init() to compile any regular expressions and store them as variables within that package, so that they're only compiled once rather than on every call.

Split related code out into a folder, remembering that files in a package are processed in alphabetical order - so if a file holds shared functions for the rest of the package, name it so it sorts first alphabetically.

You can register components of a larger system by using the init() function to call a function in the base package, and in the consuming package (usually main) importing the "plugin" with an underscore prefix, i.e. a blank import: _ "path/to/plugin" (SQL database drivers use this method).

Use the new Go modules system; it's great. But sometimes it doesn't pick up the latest version of a package - just edit the go.mod file, remove everything after the package name, and put master instead.
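For what that edit looks like: a hypothetical go.mod where one dependency is pinned to a branch name instead of a version (module path and packages here are made up). The go tool resolves the branch to a pseudo-version on the next build.

```
module example.com/myapp

require (
    github.com/pkg/errors v0.8.1
    github.com/some/experimental-lib master
)
```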

One last thing on error handling. Everybody writes if err != nil. I tend to go the opposite way: if err == nil, and nest these; if err isn't nil I drop out and deal with the error. If everything is ok, it returns ok.

> One last thing on error handling. Everybody writes if err != nil. I tend to go the opposite way: if err == nil, and nest these; if err isn't nil I drop out and deal with the error. If everything is ok, it returns ok.

Don't you get "staircase of doom"?

    err := unsafe1()
    if err == nil {
        err = unsafe2()
        if err == nil {
            err = unsafe3()
            if err == nil {
                err = unsafe4()
                if err == nil {
                    err = unsafe5()
                    if err == nil {
                        // ...

> Use init() to compile any regular expressions and store them as variables within that package

No need to resort to init() for regexps; simply use an unexported package-level variable:

    var validFoo = regexp.MustCompile(...)

Slightly offtopic: Does anyone know which template was used to generate this document? I'm assuming the source is Markdown, and I really like that the generated HTML has a table of contents, supports footnotes, and is responsive!

It’s Asciidoctor, and it’s indeed really nice.


I can't offer a solution to what it was made with, but the italicization on this paragraph makes me almost sure it was a markdown source.

  The go tool also supports a special package declaration, ending in test, ie., package http_test. This allows your test files to live alongside your code in the same package, however when those tests are compiled they are not part of your package’s code, they live in their own package. This allows you to write your tests as if you were another package calling into your code. This is known as an _external test.

Nice write up, could use a little simple editing / spell check!

One more piece of advice - implement generics in your programming language. It's 2019 already, not the 1970s!

Really glad to see more pragmatism and less dogma in the Golang community.

Great article, as always, by Dave Cheney.

I took a lot of his advice when designing V [1].

It's very similar to Go, but it has

- No global state

- Only one declaration style (a := 0)

- No null

- No undefined values

- No err != nil checks (replaced by option types)

- Immutability by default

- Much stricter vfmt

- No runtime

- Cheaper interfaces without dynamic dispatch

[1] http://vlang.io

Just wanted to say...as both a language nerd and opinionated snob, I'm really excited by what I see with V thus far. I'll be following your progress for sure and wish you the best.


This is interesting. Go proclaims itself as a modern spiritual successor to C, but I think what you've done is much closer to that goal.

BTW, if you're going for immutability by default, perhaps swap := and =? Reason being, if variables are immutable by default, assignments should be far less common, so the language should discourage them with more verbose syntax over initialization of immutables.

Besides, C originally used = instead of Algol's := for assignments for the exact opposite reason - because assignments were more common in C code than comparisons - this would be a nice opportunity to fix that mistake.

I agree with you, but I want the language to be not too different from C/C++ and Go, to make the transition easier.

This seems to address basically all of my frustrations with go. Well done!

Hi, never heard of V before. Looks great. What are the cons?

No runtime and such a small language!

How does it do threads? Networking? Any story on cross-compiling?

The main con is it's at a very early stage.

It supports only traditional threads right now (but with automatic locking).

Networking is supported. It uses curl/windows api.

Cross compiling will be top notch, just like with Go.

Does this mean it doesn't support standard linux socket operations, or am I misreading it?

It does. I was describing the HTTP library. My bad.

Just a note: the .v extension is already used for Verilog and Coq files. Even though V seems more eligible to use it, that might confuse LoC counters like tokei or scc, I guess.

I'd never heard of this language, and it looks impressive on paper. Keep up the great work! I'm waiting for the open source release to give it a try.

Well it's pretty new. I launched the website today :)

The binary size and compilation speed look amazing.

The "Detailed comparison of V and other languages." link on the homepage doesn't seem to work.


How long was this language in development? What language was the compiler originally written in?

It was written in Go, my favorite language at the time :)

About 2 months later I rewrote it in V. Bootstrapping is fun. Did it 4 times (V1 => V4).

> - No err != nil checks (replaced by option types)

Where can I find code illustrating V Option types?


They are very simple and combine Rust's Option<T> and Result<T>.

  // `http.get()` returns an optional string.
  // V optionals combine the features of Rust's Option<T> and Result<T>.
  // We must unwrap all optionals with `or`, otherwise V will complain.
  s := http.get(API_URL) or {
    // `err` is a reserved variable (not a global) that
    // contains an error message if there is one
    eprintln('Failed to fetch "users.json": $err')
    // `or` blocks must end with `return`, `break`, or `continue`
    return
  }

This looks really handy. Is there somewhere else (outside of V) where I can read about Option types (in the real world, or in theory).

What's the difference between "break" and "continue" inside an or-block? Or do they apply to the outer enclosing loop if there is one?

Yes, just like in Swift.

Then I don't get it - how do you make the or-block exit normally and yield a value, instead of exiting the outer scope one way or another?

So I don't know how to fly a commercial airliner, but I could probably figure my way around a small single-prop airplane. That's basically the difference between Go and a language like Rust or C++ - or any language that requires a lot of up-front investment but then lets you work at power level 9000.

So yeah, Go will get you there. It will take you a lot longer when you aren't going 600MPH, and you can carry a lot fewer people, but at least you aren't driving a car like the people using Python. Sure, it's probably not going to be as rigorously inspected as the Boeing borrow checker, but Go is at least getting an inspection, whereas your Python code is just going to break down on the side of the road when the problem surfaces, because you didn't check the oil light.

To me, Go is a better Python. It's easier to learn, faster, more scalable, and safer, and just as easy and quick to write. It's just not a language for big projects.

I'm sorry but most analogies for programming languages just don't make any sense. This is why no one takes software "engineers" seriously.

What the hell does "power level 9000" even mean for a programming language?

Big, successful projects have all been written in Go, Rust, C++, Python, and many more languages. The key is that they've been chosen in cases when it makes sense to choose them. And it has nothing to do with childish "Rust is fast jet. Python is slow car." analogies.

> What the hell does "power level 9000" even mean for a programming language?

Power level 9000 is a reference to Dragon Ball Z. I haven't watched it but I gather it's up there on the power scale.

I think the analogy makes sense. Sometimes it is better to take a car than a Cessna or a jet, so it doesn't necessarily fail as you think it does.

The analogy might "make sense" but it's not useful. All it does is drive discussion towards arguments framed in the context of the analogy. Except we're not talking about public transportation! We're talking about programs! Points about "roads are everywhere" and "Are you driving across the Pacific?" are just snark.

As an example, let's take this quote:

> So yeah, Go will get you there. It will take you a lot longer to get there when you aren't going 600MPH, and you can carry a lot less people, but at least you aren't driving a car like the people using Python.

If Go was the name of a Cessna, C++ the name of a commercial airliner, and Python the name of a car, then that statement might make _some_ sense. But no, they're programming languages and you can't just throw this analogy out there and then try to have a serious discussion based on it.

Sometimes a Go program performs faster than a C++ or Rust program, and is written in less time. For certain domains, you might get something working (and performing faster) in less time in Python than writing it in Go. Python (and others) can call out to C libraries. Oh, I don't recall your car being able to summon a commercial airliner to pick it up in the middle of highway traffic. But that's how you'd have to explain FFI in this make-believe world of programming language cars.

All metaphors and analogies break down. No need to get pedantic about it.

I've been riding in that car for nearly a decade and never needed to pull over. You can't fly a plane without airports, but you can travel anywhere with a car - because roads are everywhere.

Driving across the Pacific anytime soon? :)

But Go will never be a data science language like python, because Go is too limited to express that domain.

Python may leave you in the lurch when it comes to maintainability, but for some domains python is infinitely better than Go.

Go fanboys won't get it. They are too busy proving their language doesn't need Generics.

Hmm. Weird. Almost my entire data science pipeline is in Go.

Data engineering is not data science

I agree with you, but data science is a separate field, so I don't normally factor it into my discussions of tools for software engineering.

tell that to the NumPy folks!

if you want to fly anything other than a small plane you're doing it wrong. think about it! we could get a new pilot and have them fly this in no time. it's simple and can do these 2 tricks really well. everything else is fluff and not for professional pilots. and google flies a couple of them so how could it be wrong?

No wonder most Go developers come from Python, not C++. Go is a faster Python with concurrency. Go is not suitable for serious projects, but is ok for mediocre devops stuff.
