3.5 Years, 500k Lines of Go (npf.io)
326 points by NateDad on Mar 24, 2017 | 238 comments



I'm 3.5 years into using Go exclusively as well, and this rings very true to me with regard to generics:

> Interfaces are good enough 99% of the time.

Generics would be really nice for generic data structures (heaps, trees, etc), but code generation is ok at that.

Generics could allow for some nicer (and safer!) nil handling and error checking, but I think the benefits-vs-complexity are less clear than with generics for data structures.

I think it's hard to quantify the massive benefit Go's simplicity brings to onboarding developers -- especially for a new language, where hiring experienced devs is next to impossible. The ease of onboarding is reason enough for businesses to consider Go: over an org's life, a lot of time will be spent (wasted?) onboarding.

Any language features which increase cognitive overhead had better offer some extremely compelling benefits to outweigh an increased learning curve.

> And I don’t mean interface{}. We used interface{} rarely in Juju, and almost always it was because some sort of serialization was going on.

This has been my experience as well. Whenever I hear someone complaining about interface{}, I wonder what they're doing. I think it's often people used to having generics or a dynamic language trying to follow similar patterns in Go and not considering alternative patterns.

Every language I've used lacks type information at the edges (where serialization occurs). Even using strongly typed serialization (eg Protobufs) in a language with strong type features and patterns (eg Java), I always see a fair amount of glue code converting from "weaker" serialization types to stronger internal representations.

As long as those edges are architected to be easily testable (even fuzzable!), I don't see it as a problem. You have to convert from bytes-on-the-wire to typed variables somehow.


>Generics would be really nice for generic data structures (heaps, trees, etc), but code generation is ok at that.

Either code generation is an implementation detail of generics, or this is an afterthought that adds incidental complexity and was almost assuredly better off being a core language feature.

>I think it's hard to quantify the massive benefit Go's simplicity is to onboarding developers

This is really only applicable to small applications using new languages where the main cognitive onboarding is learning a new language/paradigm. As soon as you reach a certain size of application, the frameworks, architecture and domain knowledge heavily outweigh the cost of learning even the most esoteric of language features.

>Any language features which increase cognitive overhead had better offer some extremely compelling benefits to outweigh an increased learning curve

I wonder what kind of language feature you're talking about? The whole idea of generics in a more functionally oriented language is to reduce cognitive overhead. The beauty of parametricity is that it takes things away from the developer to help them reason about the code!

Our (very large) team of engineers uses this to great effect throughout our codebases. There is no such thing as a silver bullet, but this comes as close as I've seen in the 12 years I've been an engineer (from startups to a Fortune 10 company).


> Any language features which increase cognitive overhead had better offer some extremely compelling benefits to outweigh an increased learning curve.

This is a common misunderstanding that occurs (I think) because most people only know C++-, Java-, and C#-style generics, which can be conceptually complicated due to the often tricky interactions with the type system (or, in the case of C++, their use for metaprogramming).

In contrast, module-based genericity (SML, OCaml, Ada, Modula-3) is conceptually pretty simple, as it avoids complicating the type system. Its main downside is (relative) verbosity, but Go has never really eschewed verbosity.


It struck me the other day that when using Go codegen tools which generate packages based on a template, e.g. [0], the Go developer is doing manually what the OCaml compiler will do with functors at compile-time. The concepts are fundamentally the same...the effort just happens in different places.

[0]: https://github.com/cheekybits/genny


They're very different. Generics in languages that have them properly are typed—you have to declare up front the methods that your generic code supports. That's the entire thing that makes ML functors functors.

By contrast, generics in languages like C++ and D, as well as the code generation tools for Go, are untyped—you can call whatever functions you want on your types, and if it doesn't work it fails at template instantiation time. This causes the confusing template errors people see in C++.


You are, of course, absolutely correct: compile-time awareness of semantics vs just syntax is an important difference.

I guess the point I was getting at – though improperly phrased – was that if current Go tooling is similar to templates, and templates are similar to functors (in that you're instantiating both but the latter have additional checks for correctness), then perhaps module genericity wouldn't be as tough a sell to the Go team/community.


>>> I think it's hard to quantify the massive benefit Go's simplicity is to onboarding developers -- especially for a new language where hiring experienced devs is next to impossible. The ease of onboarding is reason enough for businesses to consider Go as over an org's life a lot of time will be spent (wasted?) onboarding.

Agree. I feel it so much.

I stopped counting the companies I couldn't join because they had exotic languages or just the latest fad of the year.

That's a very effective way for a company to stay away from having any experienced engineers while making it very hard to recruit at all (note that both effects amplify each other!). The harder and newer the language, the worse it is.

As an employee, that's a painful way to have a company cancel a decade worth of experience. I'm not interested in starting fresh again. Bye.


According to your logic, then, we should have no language other than C, because everyone had decades' worth of experience in that language and was not interested in starting fresh again. From my point of view, I was very relieved when imperative languages and their adepts finally seemed to embrace better tools after Y2K. Apparently now there is Go, which follows up with that style (one that was perfectly fine for a 10-year-old ages ago), and it is again winning adepts. For me this new spring of imperative languages seems more like the winter of programming. The only "new" language that has captured my interest is F#, but obviously all the Go proponents will be horrified by the simplicity of type providers and all the nice "magic" things that are not easy unless you spend at least half an hour of your time to understand them.


Actually, yes, that's a very good example.

Go uses the C syntax. That is the most known syntax in the world. It's in the same family as C++, Java, C#, and, in a different league, PHP and JavaScript.

That's the kind of VERY IMPORTANT DETAIL that makes a language reasonable. Go didn't attempt to throw a decade of muscle-memory coding under the bus.

By comparison, F# or Ruby are total aliens in every regard and not just the syntax, even when compared to the most similar languages.


F# is completely not an alien in any regard; it is basically an SML (a 90s-era predecessor of OCaml and Haskell) for .NET. I can say in the very same manner that Go is a total alien, since it does not use a Hindley–Milner type system or higher-kinded types and throws a decade of muscle-memory coding under the bus.


> The first was assuming forward slashes for paths in tests. So, for example, if you know that a config file should be in the “juju” subfolder and called “config.yml”, then your test might check that the file’s path is folder + “/juju/config.yml” - except that on Windows it would be folder + “\juju\config.yml”.

Wait what? I thought this was solved ten years ago https://en.wikipedia.org/wiki/Path_(computing)#MS-DOS.2FMicr...


Go gives you the tools to handle this, but you have to do it yourself. filepath.Join(), filepath.ToSlash() and filepath.FromSlash() are used for this.

I find myself very productive in Go; I've written a lot of it now, but I do also find myself writing more code than I thought I would, having had some expectations set by Python and Java. Go's philosophy seems to be to avoid doing anything which can be done incorrectly, and instead punt it to developers. It's not always as simple as replacing forward and backward slashes.


> Go gives you the tools to handle this, but you have to do it yourself. filepath.Join(), filepath.ToSlash() and filepath.FromSlash() are used for this.

You're missing the point—it's unlikely that you need to do anything at all; the Windows APIs handle forward slash as a path separator just fine.

And on that note, for any MS employees reading now and who write READMEs and other docs for projects with publicly released code in this "New Microsoft" era, please default to using forward slash for the paths in your code snippets, instead of backslash. There's hardly any reason not to. (Unless you're using the command prompt—but you guys are a pointy clicky bunch and don't like the CLI anyway, right? Even so, you have PowerShell.)

So you can drop the practice of writing up separate "For Windows users" and "For Mac and Linux users" instructions that differ only in the type of path separator used. Just use plain ol' slash.

Please tell your colleagues to do the same.


One reason I think that folks think they need to do this is that it's not very publicized AND most Go developers come from a Unix background, not a Windows one.

Also, I've recently discovered Go code has a lot of trouble with case sensitivity in filepaths, i.e. people tend to compare filepath strings directly instead of taking case sensitivity into account. The `path/filepath` package has a `HasPrefix` function which has been deprecated for 4 years for this exact reason[1], but in my experience it is still widely used (even in some newer semi-official Go projects, like the dependency management tool[2]).

[1] https://github.com/golang/go/issues/18358 [2] https://github.com/golang/dep/issues/296


This is about what the Windows API reports when you ask it what the path is of a file. Sure, inputs work with slash. Comparing outputs, which was the example I gave, does not.


I take it you're referring to the part that VMG quoted. Thanks for providing context.

This doesn't change my response to oppositelock.


That works for input, not output.

If you ask what the path of a file is on Windows, it'll give you the backslash version.


I happen to believe that a language should have one style.

You want a functional language? Use it.

You want a procedural language? Use it.

You want an OOP language? Use it.

They're all good, but not in one language.

For example, you like FP idioms, and program everything with maps.

I like for loops.

You have to fix my code one day I'm on vacation.

You think that it's ugly, and rewrite it as a map.

I get back, bug comes up, I rewrite it back to for loops.

Repeat.

Go showed how to finally end indentation wars, maybe it can show how to end style wars.


Even though I've written more ruby than anything else over the last decade, I have to agree with this. Ruby is a nice OO language, but it's unabashedly multi-paradigm, and that leads to a lot of mess: when you get a team with varying backgrounds, the result is potentially a byzantine mix of procedural, object-oriented and functional styles.


When you are in a big project a multi paradigm language seems to be the only way for introducing new styles. In the stuff I am working on I can't just switch to a new language. I agree that this can create a mess though.


Absolutely, but it's such a double-edged sword. All else being equal, I love ruby, and when it comes to shitty legacy codebases, you can do worse than ruby because of the power at your disposal to work around problems. On the other hand, it's not that hard in ruby to make something that is so thoroughly fucked that there's no reasonable option but to nuke it from space.

The beauty of Java, Go and Haskell (never thought I'd use those 3 in the same sentence) is that you really have to work to fuck things up that badly.


Since you can write for loops in anything other than a pure functional language, this is equivalent to saying that, unless your language is pure functional, it shouldn't have any FP features in it. I think this is obviously false: Ruby is not any worse for having the map method.


> You have to fix my code one day I'm on vacation.

> You think that it's ugly, and rewrite it as a map.

Well, if your colleagues do that, there is a social problem in your company, and I'm not sure a language has anything to do with that. Your colleagues could also rename all your variables at this point…


I completely agree, and it's one of the reasons I've liked Python. From the zen of Python:

  There should be one-- and preferably only one --obvious way to do it.


This is my problem with modern day Java.

Java is an OOP language.

OOP languages communicate through message passing.

So why do modern UI frameworks use callbacks?


Because, as a Zen master would point out, there is no message.


I'm looking forward to my first project with Go. It appears to offer a lot with minimal complexity.

> Because Go has so little magic, I think this was easier than it would have been in other languages. You don’t have the magic that other languages have that can make seemingly simple lines of code have unexpected functionality. You never have to ask “how does this work?”, because it’s just plain old Go code.

That lack of magic and his comparison to C# sounds like a really good mix.


> It appears to offer a lot with minimal complexity.

Actually I think it offers little with minimal complexity.


Here's a blog post from Rob Pike about the design philosophies inherent in Go, and how that affected adoption from C/C++ developers vs. Python, Ruby, etc.

https://commandcenter.blogspot.com/2012/06/less-is-exponenti...


> That lack of magic

I have a really hard time understanding what people mean when they say magic. In every language I've ever worked in I spend a fair bit of time saying "how does this work". Go doesn't seem any different in that regard to me.


Rails is the epitome of the magic philosophy. Stuff "happens" through inference because you touched some part of the code (check out routes, foo_{url,path} methods, url_for, and passing some models as argument to those), or the database schema (defining accessors from DB fields or adding features when magic-imbued names are used, such as type, version or foo_id/foo_type). This makes one feel fast and powerful at first, but as the application grows one has to memorise all sorts of conventions and DSLs of Rails's as well as one's own, and this is more and more stuff the developers have to remember instead of being explicitly stated in the code. As the application grows in scope, it is bound to veer ever so slightly away from the Holy Conventional Way and trip onto something lurking in a dark corner, and that's where things start to break for seemingly no reason at all unless you want to dive deeper into the cave where dragons born out of someone's eagerness at being smart lie asleep.

IOW "Magic" is wanting to achieve extreme generalisation through combined use of conventions and dynamic features of languages, which inevitably leads to gotchas†, corner cases, and pitfalls[0] as well as significant cognitive dead weight due to the very nature of its implicitness.

† ever tried to mix STI, polymorphism and url_for?

[0]: http://urbanautomaton.com/blog/2013/08/27/rails-autoloading-...


I've been pondering this topic lately, as I come back to Rails (seems every few years I'll write a Rails app on the side, and my brain has totally forgotten everything since last time): one other way to look at this "magic" is that it makes programming feel "intuitive". Not sure how to do something, I often find I can just "guess" the right and most natural way, and the code will just work. For that reason, I always feel like I'm most productive when writing Ruby (I really love Go too, just for different reasons).

I can totally see how the situation you describe would be frustrating too, I felt the same about Java annotations when they came out, and on massive code bases it could become a nightmare when used to the extreme. My own experience has been that Go scales very well to large code bases, I've never wanted to try the same with Rails.


I've had this same experience bouncing back to Rails for contract gigs. You always feel this temptation like you're missing out on 'real' programming, performance with low level code or super clever languages like Haskell or Clojure. But ultimately Rails is just a great programming experience for getting the job done.

Despite its faults and the problems with using 'magic' frameworks like Rails, it's a really great language/framework for what it's meant to do. And it still is in 2017, despite what some people say (although Elixir/Phoenix is getting there, if it can reach the scale of adoption of Rails).

That's the end-of-the-road lesson: there are right tools for different jobs. There is no 'perfect' solution, no rabbit to keep chasing.

Either way though it's still good to get exposure to as many different languages as possible (low level ala C, easily parallel-based languages ala Erlang, some lisps, typed languages like Haskell, dynamic FP ala clojure, etc).


For example, properties in C# can be method calls: while they appear to have the complexity of accessing a field, they could actually be arbitrarily complex algorithmically. This leads to a programmer down the road calling one in a tight loop, expecting field-access overhead and getting someone's complex property-method logic. That is an example of magic.

IMHO magic is when the runtime or space complexity of code isn't obvious from its on-screen representation.


So by that definition of magic, something like ranging over a channel is magic?

It looks like a simple for each loop, just like over a slice or map, but under the covers involves locking semantics.

If so, I guess I'll buy that definition of magic; I'm not sure that's any different from knowing what the language does.


In go, there are a minimal amount of primitives (like channels, slices) to learn. Once you know them, they're fairly intuitive.

In C#, properties can be arbitrarily complex. You can't just know how properties "work" and then do mental shorthand on them. Every time you look at a new codebase you might have to dig through several files to find out what one line does.


But the "magic" in this case is that properties can be methods. Once you know that, how is it any different from methods?

In Go, methods can be arbitrarily complex. You can't know how they work without digging through several files to find what one line does.

Another example of magic in go would be method names. You have no idea if it is safe to change the name of a method because it could be satisfying an interface far away from the definition site (or in the case of exported methods nowhere you have access to).

We could go back and forth all day about what is and is not magic, but it still just seems like "language differences" to me. If the claim is "go does a lot less for you than other languages, so has a lot less opportunity for magic", I could probably concede that.


>But the "magic" in this case is that properties can be methods. Once you know that, how is it any different from methods?

It's different because you have to apply that general knowledge in every single case when reading code that you are not familiar with.

In Go or Java, the information on whether a.x is a constant-time variable access or a function call of arbitrary complexity is available at the call site. You don't have to look it up. It's one less thing to do when reading code.

And when you do have to look up what an expression means, how straightforward is it? Consider this expression:

  f(x)
In Go f(x) means whatever the function f does, and f is exactly one function in the current package.

f(x) in C++ (and to a slightly lesser degree in Java, C# or Swift) is one of a set of functions called f. Knowing which one actually gets called requires knowledge of tens of pages of name lookup rules plus knowledge of possibly large swaths of the codebase.

It is often claimed that languages more powerful than Go just have a steeper learning curve. But it's not true. Even if you know all the name lookup rules of your favorite language (do you?), you still have to apply them every single time you read unfamiliar code.

In my view it's pretty simple. If you have to read a lot of unfamiliar code all the time then Go is great. If you can know both a more powerful language and your codebase inside out, then Go will be frustrating for its lack of abstraction features.


> Another example of magic in go would be method names. You have no idea if it is safe to change the name of a method because it could be satisfying an interface far away from the definition site (or in the case of exported methods nowhere you have access to).

Is this true? If you changed the name of a method, it would no longer satisfy that interface and your code would not compile.


I don't think that's quite right. E.g. if you do call a method, its complexity is unknowable with only local context, so it can't be obvious. I would rewrite that to:

> Magic is when the runtime or space complexity of code is misleading by its on-screen representation.

An apparent field access that is actually a method is misleading. Calling a method explicitly just directs you to check that method to know for sure.


I think we are on the same page.


Yes, I'm just being pedantic about how you say it. :)


So these will be made into functions with uncertain O complexity. How's this situation preferable?


Most of the time in Go people use fields directly, so there is a clear difference between struct.Field and struct.Method(), struct.Field is preferred and you only have to worry about uncertain complexity if you see struct.Method(). The parent is saying in C# struct.Field might be a simple access or it might be a complex method.


When the programmer sees a function call, they know its complexity is O(?) and will (hopefully) vet it before calling it in a tight performance-sensitive loop.


When a C# programmer sees any call to code they don't know - property or function - they should know its complexity is O(?) and will (hopefully) vet it before calling it in a tight performance sensitive loop.

I get that someone could say "Go has fields, and they're always fast" and that seems like a great facility of the language, but any C# developer that says similar about instance members is wrong, and has some invalid assumptions about the language they use.


A corollary to this is that C# property accesses can have side effects. I recently started working on a legacy code base where reordering accesses to a set of properties on an object changed the results!

And I'm not saying that properties are strictly a negative; in some cases, they can be very useful for refactoring an underlying implementation without having to change the API exposed to callers. But just like any magic, it needs to be applied thoughtfully and judiciously.


Take a look at a java codebase that uses:

  * Complex DI frameworks (Spring Bean*Processors, event listeners, XML config)
  * classpath scanning-based autowiring (See Spring @Component)
  * aspect weaving-based autowiring (See Spring @Configurable)
  * Code littered with annotations that invite aspect-based pointcuts
  * Complex ORMs like hibernate that are incredibly difficult to use properly
And you'll start to get an idea of how ridiculous things can be. Golang is making a huge mistake by not adding generics. 99.9% of the complexity in a typical Java codebase has zero to do with generics and everything to do with the insane abuses of the JVM classloading system that the Java community has subjected itself to, as well as abuses of overly complex libraries like Spring and Hibernate.

If the Java community allowed itself to write simple golang-like code the majority of the time, there'd be much less defection to golang in my opinion.


There is nothing language-specific or magic about those things. You could write those things in Go (and you will see people do it) as the language starts getting more adoption.

Go goes further and encourages code gen, so that will probably be the way you start seeing terrible frameworks being built.

In any case, "configuration as code" doesn't seem like a good definition of "magic" to me.


My point exactly. The issue isn't java the language. The issue is the flexibility of the JVM runtime and how people are abusing it.

Also, if load-time aspect-weaving and classpath-scanning-based autodiscovery don't count as magic to you, then not much will. Code generation at least has the huge, huge, humongous advantage that you have code on disk that you can read and debug.

I also admire Racket's macro system for coming with IDE support for introspecting and debugging the code generated by macros. Macros are a much better design because they generally run at compile time and they generally only make local code transformations that are much easier to reason about, as opposed to the sweeping global changes a weaver will make.


When people say "magic", what they often mean is "code over here can affect the execution of code over there in an implicit way". Like in Ruby, I could conditionally monkey-patch a function into an object someone way over there was using, causing code to break.

Other languages, like those with stronger type systems, will not allow this to happen.


Yea, monkey patching is helpful when dealing with a 3rd party library that needs to be tweaked 10 layers up the inheritance chain without having to change the object type all over the whole system.

If it gets overused it causes problems but there are times when it is close to a miracle. That said, there is a reason ruby devs are so test conscious.


Yeah - of course monkey patching has good uses :) The problem is that when you're trying to debug an issue, it's another thing that you'll have to remember - "is anyone monkey patching something in here?"


Yea, I can't work on Ruby codebases without something like Rubymine where I can jump straight to the declaration for that exact reason.


one of the things that go eschews is operator overloading.

    a := b + c
What's the runtime complexity of this statement? How much memory will it cause to be allocated? In Go there are only two possibilities for what this code is doing: either this is string concatenation, or it's adding two numbers. Both are immediately comprehensible in their impact on run time and memory.

In C#, you can overload operators, so the + could in theory do anything. And what's bad about that is that it is deceptive. It's easy to miss the fact that this line might actually be doing something complex.

It also means that if someone is looking at your code, they can't make any assumptions about what any particular line of code is doing, without complete understanding of a vast amount of code.

This is one of the pieces of magic that I'm glad go doesn't have.


Hogwash. In a language without operator overloading, this would be something like `a.assign(b.plus(c))`, which really doesn't tell you more about what goes on under the wraps than the operator form.

What can be confusing is what the meaning of `+` (or `plus`) is. In some cases it can be fairly obvious (e.g. concatenating sequences), while in others not so much. Operator overloading is nice, but it has to be used tastefully (like every abstraction or language tool).


> In C#, you can overload operators, so the + could in theory do anything.

> What can be confusing is what the meaning of `+` (or `plus`) is

QED


Any function can do anything. I can write a function called "read_from_file()" that doesn't read any files.

Amazing, I know.

Also please actually read the comment before replying:

> Operator overloading is nice, but has to be used tastefully.


The point the OP made was that operator overloading is not nice - it means that any operator (not just function) can do anything. It makes code harder to read and reason about.


    a := Sum (b, c)
How can you be sure that Sum actually does a sum without looking at its implementation?


I think you are missing the point. Of course you can't assume what a function will do with certainty.


From a CS point of view, + is just a function name like any other.

It's a concept used in lambda calculus, and it has been in computing since Lisp.

It's also part of abstract mathematics, where operator symbols get defined for proofs.


> From a CS point of view, + is just a function name like any other.

From a Go point of view, it isn't.


Just because Go eschews decades of CS knowledge in the name of "easy to hire programmers" for Google[1] doesn't make it less true.

[1] - According to the language designers own words


What you wrote isn't a universal truth. In Go, + is not a function like any other. There's no argument to this.


What is the difference, apart from the notation?


The point is in most languages operator overloading is no more complex than a method call.

It's not exactly magic.


Overloading + could be magic if you want it to be. In go, + is exactly what you think it is. In languages with operator overloading, I literally could make + do whatever I wanted to.


Just like you can make a function do something totally unrelated to how it is called, do what it is actually in the name, wipe out the hard drive, launch missiles, whatever.


> How much memory will it cause to be allocated?

In Go, it's impossible to tell because "a" might be captured by a closure, in which case it will be heap-allocated. But if escape analysis promoted it to the stack or a register, then it will not allocate memory.


I honestly don't understand the difference with ' a := add(b,c) '. Does it really make that much difference? And the lack of overloading makes maths-heavy code horrible to read.

Of course, you can go to the C extreme, no overloading, specialisation or anything, every function name means one thing. That does add some nice features, but is a pain in the ass when naming things!


The difference is more apparent in the other (non-overloaded) case. When in Go (or C, …) you see an expression "x << y", you know immediately that this is just a shift operation, mapping to at most a couple of machine instructions. In certain other languages it's most likely an integer shift, too. Still, you have to carefully consider the context, lest this simple shift expression cause synchronous I/O to some space probe near Mars.


"they can't make any assumptions about what any particular line of code is doing, without complete understanding of a vast amount of code."

Yes, this is a problem with many designs. More importantly, they can't easily look up what an operator does. But this problem can be easily avoided if the set of operators used in a particular scope had to be explicitly specified. For example, if you want "+" to mean bigint addition from a set of bigint math operators, you would have to import that set into the scope, kind of like this:

  import_operators "bigint"
  a := b + c
Now you still have overloading, but it is very clear where to look up an operator and which set of operators is used in this scope.

"In go there's only two possibilities for what this code is doing..."

There should be only one possibility, though. Dual-meaning operators provide no value, apart from familiarity with design mistakes of the past.


I agree, I wish + wasn't string concatenation either.


One of the few good decisions PHP made is not using + for concatenation.


> In go there's only two possibilities for what this code is doing... either this is string concatenation, or it's adding two numbers

That is overloading! What go eschews is user-defined overloading.


One way to look at it: with some languages (I would suggest C, C++, Clojure), when you start learning them and then compare the code you write to the code of popular major or standard libraries, they look completely different.

They say that Go is different and special this way, in that advanced Go code doesn't look all that different from average Go code.

Anyway, I can't really judge; I've never looked at or tried to learn Go, hence the "they say".


I don't know about this axis of "magic vs. muggle", but for me the phenomenon seems best described by the ease with which you can find the code that implements specific behavior. Go (and Java) are pretty good at this. Python is OK. Ruby is awful.


This is true except in the case of structural satisfaction of interfaces. You can't trust a rename function in an IDE in Go, for instance, due to this.

And in the case of channel select behavior...and ranges over a channel...and the context object, etc.

My point being that "magic" seems to be code for "familiarity with the language and its idioms". Which I'll grant might be easier in Go because of how limiting it is.


Magic is when you access struct pointer members w/o the asterisk. ;-)


I think the Go language has taken a position along the lines of "reusability and abstraction are overrated". I certainly think there is some truth to that; I am really enjoying working with Go on smaller projects, and it is this philosophy that has made understanding the code much easier.

But I wonder how it really scales on a large code base like this? Some of the best projects I've worked on leverage reusability more effectively to create a sort of vocabulary. They're far more concise, there is rarely more than one source of truth, and they're far easier to change and improve. Does this hold true for 540,000 lines of Go code?


> But I wonder how it really scales on a large code base like this? Some of the best projects I've worked on leverage reusability more effectively to create a sort of vocabulary. They're far more concise, there is rarely more than one source of truth, and they're far easier to change and improve. Does this hold true for 540,000 lines of Go code?

Doesn't this article speak to this? It mentions juju has over a million lines.


Yeah, but there is no comparison to the same project done in Lisp, Haskell, Java, etc.

All the author is doing is relating their success using Go, which is great, but there is no comparison to how it would have fared in another language, except his previous frustrations with C#, on other projects, I guess.


It would be ridiculous to expect Canonical (and even moreso, the author alone) to rewrite juju in another language as a simple comparison. Even if it were feasible, the comparison would be polluted by the experience gained by building the application initially (or you could rebuild with a completely new team of developers, but then you're introducing a whole new set of variables). The best we can reasonably do is compare applications aggregating on language.


Juju was originally written in Python. They ported it to Go 4 years ago.

https://groups.google.com/forum/?fromgroups=#!topic/golang-n...


Functions and methods are the original abstraction. Interfaces allow you to abstract over implementations of sets of methods.

I fail to see how that is saying that abstraction and usability are overrated. They're not. They're important.


Yeah, I guess some need to review the definition of abstraction.


Sometimes I feel like the OO world has redefined abstraction to only mean an interface.



And Go cannot express abstract functions and methods.

I cannot write a function dealing with arbitrary values.

Something I’d do in Java with

    public static <T extends IIncrementable> T increment(T val) { val.increment(); return val; }
is impossible to do in Go.

You cannot abstract over types, so you write metric fucktons of duplicated code. I’ve tried porting some of my Java code, and it quickly grew to some classes being duplicated hundreds of times, once for every type. Fixing bugs became a nightmare.
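A sketch of the limitation being described, using a hypothetical Incrementable interface: the closest Go equivalent compiles, but the concrete type is erased to the interface on the way out, so the caller needs a runtime type assertion to get it back.

```go
package main

import "fmt"

// Incrementable mirrors the Java IIncrementable from the comment above.
type Incrementable interface {
	Increment()
}

// Without type parameters, this is as close as Go gets: the concrete
// type of val is erased to the interface in the return type.
func increment(val Incrementable) Incrementable {
	val.Increment()
	return val
}

type Counter struct{ n int }

func (c *Counter) Increment() { c.n++ }

func main() {
	c := &Counter{}
	got := increment(c)
	// got is an Incrementable, not a *Counter: recovering the concrete
	// type requires an assertion that can fail at runtime.
	back, ok := got.(*Counter)
	fmt.Println(ok, back.n) // true 1
}
```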


> I’ve tried porting some of my Java code, and it quickly grew to some classes being duplicated hundreds of times

Who would have thought. Maybe a rewrite would have been more appropriate.


Porting meant rewriting here.

But I still ended up with the same classes replicated hundreds of times.

Polymorphic code is not possible in any other way in Go: you always have to duplicate the code.


I'm sorry it went badly then.

Your example intrigues me, isn't an interface enough?

Is what bothers you that the return value won't be typed T but IIncrementable in Go?


Yes, exactly. This is an actual issue.

My system is usually designed so that I provide a function that generates values of type T, a function to filter Ts (T -> boolean), and a way to display Ts.

So, the library now gets these functions, and does all the filtering on other threads.

So either I have to replicate the entire async code for every type, or I can’t keep type safety.


Don't keep type safety then. Think of Go as half-way between Python and Haskell in that respect. Types are great when they're useful, but they're not required.


If you’re serious, that’s... the worst solution I’ve heard yet.

That’s the same mistake C made with void, and Java made with Object (and corrected with generics in 1.5), just called interface{} this time.

And while with Python and Java, even if I circumvent the type system, I can still use annotations (see typed Python, or JetBrains’ @Contract, Google’s @IntRange, etc.), with Go I have nothing of that sort.

Types are useful to provide safety; if your type system has to be turned off, then I am losing all that safety and might as well code in PHP (although they also saw that mistake, and are fixing it now, with 7.0 and later).


To actually answer the question - yes it holds true, yes it scales (in my experience). There are a lot of abstractions that we built for Juju. And we absolutely tried to ensure there was only a single source of truth for everything. It would have been totally unworkable if we couldn't reuse logic etc. We may not always have chosen the best abstraction, but that's a problem in any language.


Well, it'd definitely be easier to change than something that had the wrong abstractions.


Great point about time. At work we've adopted github.com/WatchBeam/clock and it's helped a lot.

Thanks for blogging about your work on juju! Despite Go already being five years old, many of the patterns around building large applications are only emerging now.


This is a minor nit, but it seems to me that it would have been easier to configure the unit with a shorter timeout duration rather than mocking out the time functions. Am I mistaken?


Mocking out the time functions means you don't get any race conditions. If you try to just twiddle with how long the time functions run, you still never quite know how long to set them. Bear in mind that even rather generous timeouts like a full millisecond to set a map entry in another goroutine can still fail if your system's CPU is loaded or the system is in swap, and spurious test failures are spurious test failures regardless of their cause.

Often with this sort of code you're testing in one goroutine and the test code is in one or more other goroutines. If your test code can emit something down a synchronous channel, then your test code can also be sure it is staying in sync. I mean, even to test with "extra special small test timeouts" still means you're writing in extra test structure; you might as well just use one of these time-mocking things.

(In fact, you should generally always use time-mocking libraries. In all languages, direct access to the system clock that cannot be easily redirected for testing is at least a code smell and could possibly even rise to the level of "antipattern".)

I have a lot of places in my Go test code where my test code is deliberately pushing things down channels. It's one of the top reasons my test code often still ends up being in the same package, so it gets private access to those channels for safe, properly-sync'ed testing. (Not that I'm really all that concerned about trying to test only the public interface; I don't have a lot of troubles with that anyhow. YMMV.)

I also have some places in my code where I have channels in the private interface of some goroutine server whose sole purpose in life is to sync with the tests. This generally appears when I have some server that I am sending a message that I expect to change the state of the server, but for which there is no reply. In order to verify that the proper changes have occurred in the data structures of the server, I need to sync with the change before I do the check. Having a simple struct{} channel whose sole purpose in life is to synchronize works well enough for that.
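The struct{} sync channel described above can be sketched like this (a toy server, not Juju code):

```go
package main

import "fmt"

// A toy goroutine server that mutates internal state in response to a
// message with no reply. The synced channel exists purely so tests can
// wait until the state change has definitely happened before checking.
type adder struct {
	msgs   chan int
	synced chan struct{}
	total  int
}

func newAdder() *adder {
	a := &adder{msgs: make(chan int), synced: make(chan struct{})}
	go func() {
		for m := range a.msgs {
			a.total += m
			a.synced <- struct{}{} // test-only sync point
		}
	}()
	return a
}

func main() {
	a := newAdder()
	a.msgs <- 5
	<-a.synced // no race: the update happened-before this read
	fmt.Println(a.total) // 5
}
```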


> Mocking out the time functions means you don't get any race conditions.

This is a common misapprehension. Actually, even if you fully mock out time, you can still get race conditions, because goroutines can remain active regardless of the state of the clock, and there's no general way to wait until all goroutines are quiescent waiting on the clock. This is not just a theoretical concern - this kind of problem is not uncommon in practice.

I think clock-mocking can be very useful for testing hard-to-reach places in leaf packages. But at a higher level, I think it can end up producing extremely fragile tests that depend intimately on implementation details of packages that the tests should not be concerned about at all. In these cases, I've come to prefer configuring short time intervals and polling for desired state as being the lesser of two evils.


OK, correction acknowledged (no sarcasm): mocking out time functions means you can write test code that doesn't have any race conditions.

"and there's no general way to wait until all goroutines are quiescent waiting on the clock."

Hence my semi-frequent usage of "sync" channels which I described in the previous post.

"But at a higher level, I think it can end up producing extremely fragile tests that depend intimately on implementation details of packages that the tests should not be concerned about at all."

I'd rather have a test that reasonably verifies that a package is correct (or at least "passes the race detector consistently") and reaches into some of the private details than fail to test a package. Too many bugs I've found that way.

It may also help to understand my opinion when I point out that I tend to break my packages down significantly more granularly than a lot of the rest of the Go community, which in my opinion is a little too comfortable having the "main app" directory contain many dozens of .go files. My packages end up way smaller, which also mitigates the issues of excessively coupled tests. I have a (not publicly published) web framework, for instance, that is broadly speaking less featureful than some of the Big Names that are all in one directory (though it has some unique features all its own), but is already broken up into 16 modules.


> I'd rather have a test that correctly reasonably verifies that a package is correct (or at least "passes the race detector consistently") and reaches into some of the private details than fail to test a package. Too many bugs I've found that way.

I agree with this, with the caveat that if you can test a package with regard to its public API only, it is desirable to do so because it gives much greater peace of mind when doing significant refactoring.

The difficulty comes in larger software where the package you're testing uses other packages as part of its implementation which also have their own time-based logic. Do we export all those synchronisation points so that importers can use them to help their tests too? If we do, then suddenly our API surface is significantly larger and more fragile - what would have been an internal fix can become a breaking change for many importers.


It really depends. You might want to test something that requires days/weeks expiration.

I would agree that the Clock interface is a bit large. In personal projects I prefer to just have something simple like timer func() time.Time


I don't understand your argument. What's special about a days/weeks expiration that would prevent you from shortening this value for your tests? In other words, if I want to test my timeout logic, it should be independent of the timeout value, so I should be free to use a shorter value in my timeout tests.


Go was released in 2007; it's 10 years old. Rust is closer to being 5 years old, though it's 7 according to Wikipedia.


Go was announced to the world on November 10th, 2009.


It only became stable in mid-2015.

https://blog.rust-lang.org/2015/05/15/Rust-1.0.html


Comparing 1.0's is a much better comparison, except in cases where there was huge adoption before 1.0.


I wouldn't say rust was hugely adopted before 1.0 (not that it's hugely adopted after, either).


Yes, sorry, wasn't saying that either Go or Rust were an exception, just trying to forestall the inevitable "but what about X thing that was used by everyone and is still 0.9" :)

For Go vs. Rust, I think a comparison of 1.0 releases is perfect.


I think gp meant that for some platforms it can make sense to count even 0.x releases (nodejs is the best example I think), even though for most it doesn't (including Rust and Go).


Wasn't released to public at all until late 2009.


    always always checking errors really makes for very few nil pointers being passed around
If your developers always always remember to write error checks, they probably also would have always always remembered to write NULL checks.


Are generics really that big of a thing if you've got structural typing?

I mean, you don't have to implement all the interfaces explicitly, you just have to get your structure right and be done with it.

Am I missing something?


Implement a heap that works for any type with a user-defined ordering relation (in particular, you should be able to implement both max heaps and min heaps over the same type by changing the ordering). The heap should allow for efficient operations to add an element, retrieve the minimum element, and to merge two heaps; merging should be able to make use of specialized bulk operations rather than just adding elements one by one. The data structure should be opaque, so that you can (e.g.) switch out binary heaps for Fibonacci heaps later on. The interface should be typesafe.

Heaps are useful, inter alia, to define efficient priority queues.

Other examples:

* Implement directed graphs using arbitrary types for nodes and edges. Graph algorithms are useful in a number of application areas.

* Implement a parser combinator library that works for various types of tokens, semantic values, and states.

General problems with lack of parametric polymorphism (where subtyping is not enough) are:

* The necessity for the client of a service to cast the result to the desired type.

* Difficulty in implementing binary operations efficiently where both operands are of the same or related types (such as the heap merge above), because you can't know for certain that they are of the same type just because they conform to the same interface.

* Lack of type safety guarantees when you're mixing incompatible instances (such as merging a min heap and a max heap over the same type).


Yes they are. And actually they would play nice with generics! Imagine

    type Ord a interface {
      Compare(a) int
    }
Also it would be nice to do something like

    func Max[a <: Ord a](a, a) a
(Syntax stolen from Scala)

Right now, interfaces are too opaque. You can't take a value of interface type X and return the same type. You have to return an opaque X, which gives you no guarantees about its concrete implementation. Parametricity is an extremely well-motivated and solved language feature!

I think if you add parametricity to Go functions (not even parametric types. Just type variables that play nice with all the builtins) you can write this and get guarantees about its implementation (assuming it follows the functor laws)

    func map[a,b](func(a) b, []a) []b


It can be annoying not to have generic collections. So if I build a heap, I need to build it for a specific type or use interface{} as the value type.

That being said, it's never been a deal breaker for me, and I don't end up missing generics THAT much.


I think generics would mostly be useful for container types, like putting methods on slices (like .map) without caring what's inside.


You might be interested in my side project: https://github.com/lukechampine/ply

IMO when people say they want generics in Go, they really just want a few functional-style methods and functions on slices (e.g. map, filter, reduce methods, and functions like sort, reverse, repeat). So I wrote a compile-to-Go language that allow you to write "Go code" that also has those things.

One of the things I've found, though, is that Go's first-class function syntax really harms the elegance of these sorts of constructions. For example, compare:

    // square a slice of ints in Go
    for i := range xs {
        xs[i] *= xs[i]
    }

    // square a slice of ints in Ply (morph == map)
    xs.morph(func(x int) int {
        return x*x
    })
When really, we want something like:

    xs.map(x => x*x)
I think it's possible to achieve the latter without having to write a new lexer/parser/typechecker from scratch, but for now I'm avoiding adding any new syntax.


Wouldn't

    xs.map((x int) int => x*x)
be simple desugaring without messing with the type system?


Perhaps, but you aren't saving a ton of typing if you still need to specify all the types. I think you can get away with

    xs.morph(x => x*x)
because you know that xs is a slice of ints, so x must be an int, and that means x*x is an int, so the lambda has the right type signature.

As far as I can see, there are two potential annoyances with this approach: you have to explicitly cast constants sometimes (e.g. uint64(3) instead of just 3), and you have to explicitly cast values if you want to return an interface (e.g. io.Writer(file) instead of just file). But these cases are rare enough that I think the tradeoff is worth it.


It prevents anyone outside the Go core team from writing type-safe containers.


What do people think about the future of Juju?


Been a few years, and it was cool in its day, but very complex. With k8s now, not sure why one would choose Juju first. They are slightly different problem domains, but not exactly.


For automating deployments we prefer Ansible (e.g. it's agentless).

For container orchestration it's clear that Kubernetes is the winner (if you have k8s deployed you don't need Juju).

For managing the whole thing, ManageIQ has much to offer.


I like Ansible and use it for deploying Vagrant setups and such, but I've found quite a few things broken enough that I'm wary of using it to deploy production stuff. For quite a while I had issues with apt, so much so that I ended up using shell. The last time I wrote a new playbook they had resolved that, though, so things do get fixed.


3542 files

540,000 lines of Go code

65,000 lines of comments

So on average only 170 lines per file, including 17 lines of comments.

Are these normal ratios?


I think so? Assuming a regular function (a method should do one thing and one thing only) is about ~20 lines, we're looking at ~6 functions (170 lines - 17 comments - about 15 lines of newlines, imports and so on = 130-odd).

Six functions sounds very reasonable to me.


The comment ratio is going to vary heavily depending on the problem the project is trying to solve. If it was a library, I would expect the ratio of comments to go up to 30% or more.

The LOC per file average seems slightly high for my taste but okay.


This sounds normal in Go. In Go a minimal import (package) is a whole directory, not a single file, so it's normal to split packages into smaller files, where each file implements a smaller set of features.


Using cloc against the Go source code, I found that it has 934025 loc with 3080 files. This averages to about 300 loc per file. There are 149688 lines of comments, which averages to about 50 per file.

I'd say those ratios are very similar.


I try to keep my files under 100 loc, but I mostly build CRUD applications.


Does anyone know where this developer is going after Canonical or why they left? It seems like they got to work on a nice project while there. Maybe that's another blog post though. It piqued my curiosity I guess.


Is it too late to queue the joke about it being 500k lines because the lack of generics and the resulting code-generation?

Reading about projects with 100+ types of collections can certainly lead you to think so.


Wonder how long it takes to build Juju?


42 seconds.

Just tried it with go 1.8 and a clean gopath (after downloading). That's on a 3 year old quad core i7 laptop w/ SSD. That's `go install github.com/juju/juju/...` which actually builds two binaries - the client and the server.


Dave Cheney has been tracking Juju build times since the regression in the compiler when it was rewritten from C to Go:

older article here: https://dave.cheney.net/2016/04/02/go-1-7-toolchain-improvem...

spreadsheet: https://docs.google.com/spreadsheets/d/1mczKWp3DUuQvIAwZiORD...


Wow the latest revision is down to 140% of 1.4.3. I'm impressed.


This entire piece sounds like the Blub Paradox made real.

http://paulgraham.com/avg.html

It's written with knocking down a very specific set of straw men in mind, but rather carefully avoids coming anywhere close to addressing the legitimate criticisms of Go as a language. One of the things that's most irritating about Go enthusiasts is the way they try to close ranks on legitimate critique and reframe their language's warts as "simplicity".

Also: 20 bonus blub points for pulling the old 'I don't need generics, therefore NOBODY does' gambit.


> rather carefully avoids coming anywhere close to addressing the legitimate criticisms of Go as a language.

The criticisms of Go are addressed every time the language is discussed: in practice, generics are rarely missed. They would be nice to have, but their absence is outweighed by other benefits of the language.

Critiques of Go tend to be principle-based: "Go doesn't have features X, Y, and Z, therefore it cannot be a good language. QED." Praise of Go, on the other hand, tends to be pragmatic: "We built something in Go; features X, Y, and Z weren't missed, and we enjoyed features A, B, and C, which the language's detractors oddly refuse to acknowledge." Which view carries more weight is left up to the reader.

> http://paulgraham.com/avg.html

I know this is heresy, but has this article really held up well over time? Since 1995, when ViaWeb was founded, how many other companies have been able to run rings around their competitors by using Lisp or something similar? Are we really going to base our arguments on a sample size of 1? How many counterexamples are there?


It's clear that if a powerful language was such a competitive advantage, languages like Haskell would rule the world - instead they're hardly used for business projects.

Along comes Go and in 8 years people have built more production code with it than probably all the functional programming languages of the world combined. If that's not true yet, it will be soon the way the trend is going. And those languages have been around for many decades.

I think this means Paul Graham was wrong: a powerful language isn't a killer advantage. Other things matter more. Like simplicity, which means being able to read other people's code and cooperate on a large code base. Strong language tooling. A batteries-included standard library. Being able to hire developers not skilled in language X and get them up to speed quickly.

At the end of the day it seems pragmatic wins.


I've got another heresy coming up:

Outside of very specific fields, language doesn't matter.

I was quite strongly attacked on another thread for implying that WhatsApp is just another CRUD app.

The thing was that I wasn't bashing their dev team. They could have done a crazy amazing job, and it helped their company take off, but what was their secret sauce?

The ability to have an (almost) free SMS/MMS app which worked the same across all devices and worked with numbers rather than names/user IDs, followed by network effects.

They could have written it in PHP and have it take off. And a competitor could have written it Haskell and had it fail.

(FB is written in PHP, MySpace was written in CF, Friendster was written in JSP, so logically PHP > CF > JSP. Therefore, PHP > Java. QED?)

Twitter didn't fail because it was written in Ruby. It failed because of business.

Diaspora isn't FB not because of tech, but because of business.


I don't think you deserve to be attacked for implying that about WhatsApp. You're right, in that the truth is that in the vast majority of cases, the programming language doesn't matter all that much.

Perhaps people disagreed with you because it seems to me that WhatsApp is an outlier where a specific language and runtime (Erlang & OTP) provided a clear advantage in a specific domain (realtime networked messaging). In this specific instance, the language provided facilities that made it easier to solve a specific type of problem than it would have been in other languages. It helped them get to market more quickly, and in that way could have directly contributed to the application's chance of success. I suppose what I'm getting at is that I think there's a decent case to be made that WhatsApp is one of those very specific instances you mentioned where the language is part of the company's secret sauce.

That's still no reason for anyone to attack you, though! It's easy enough to disagree in a friendly way. :)


The problem is that WhatsApp really isn't just another CRUD app and it's very unlikely that you could pull off an engineering feat of that scale with a different technology. It is the exact counterexample of what you say.


The thing is that it is a CRUD app. You send it data (text, images or video) and get back data (text, images or video).

That's it.

It's not that different from Facebook.

Now, to get it to scale it may pay to use a high-reliability language. To make it secure you may not want to write it in C. But it would have taken off the same had it been written in Perl, PHP, Lisp, C, Assembly or Erlang.

Maybe it would have needed more hardware.

Maybe it would have crashed a few times.

But it would have taken off nonetheless.


It's literally an XMPP server that doesn't connect to other XMPP servers. I downloaded it and I was very disappointed. It literally does nothing else.


To be fair

> Along comes Go

Really means

> Along comes Google

It's nothing against Go, but we have to be realistic that having Google's weight behind it significantly increases its marketing while also making business owners a little more comfortable (a modern "nobody ever got fired for buying IBM").

There are a lot of frontend frameworks out there but React and Angular dominate largely because of Facebook and Google's influence.

That said, every language has tradeoffs. All you have to be able to do in order to have a rational conversation is to acknowledge those tradeoffs. Go has some tradeoffs but there's a general feeling that the resulting balance is worth it.


> we have to be realistic that having Google's weight behind it significantly increases its marketing

It also increases the number of expert engineers contributing to the project full time.


Yes. A toolchain that's easy to code in, easy to test, and easy to hire for provides more business value, especially as teams get larger. It's too bad this is assumed to be at odds with advanced or clever programmers, because in my experience the same toolchain actually lets an advanced programmer keep more of the solution in their head than a complicated toolchain does.


It does seem like Rails -- which relies heavily on Ruby features for speed of development -- has been a big part of the strategy of Silicon Valley startups for many years.


Or Haskell is just too different from what most programmers are familiar with. It's not really Go versus Haskell; it's Haskell/OCaml/ML/Lisp versus mainstream languages.

It's also not like Go is the only popular language. Other popular languages like Javascript, C# or Python have plenty of features and magic. Also, Elixir seems to be doing pretty well, so maybe that's a way for functional languages to gain traction.

Elixir, being newer, was made with modern environments and tooling in mind. Go has the same advantage.


And really, it's Go versus C++, C# and Java that's the actual comparison, not Haskell. That's what Go competes with, and then Python, Ruby, PHP and Node on the web server side.

Lauding Go's success over Haskell is not really saying much.


To be fair, you'll never see me compare Go to Haskell :)


> I know this is heresy, but has this article really held up well over time?

One immediate thought that I have: programming is now much more accessible. High-quality compilers are much easier to get, and good documentation is much easier to find. Far fewer people have done nothing but work on one language, one technology stack, etc. They still exist, of course, but they're much fewer in number.

This means that people are more likely to choose the right tool for the job, which kinda defeats the point of the Blub paradox - that people's perspectives on problems are constrained by the languages that they know.

Seeing as how a lot more people are dicking around with Haskell and Lisp in college and in toy projects, I don't think that this is as much of an issue. You know what generics are, and you've used generics in other code, but you're making the conscious decision not to use a language that has them in order to get better traits in other areas.


Assuming the number of programmers choosing Go eclipses the number choosing languages with generics, like C# or Java, or duck-typed languages.

I don't think it does. So maybe the people picking Go have a preference for minimalism, or they pick Go despite its lack of generics and other features, because of performance, or popularity, or tooling.


This is ridiculous. Where are the straw men? The article makes very few value judgements about the Go language at all! It's recounting one person's experiences writing Go, but it's not defending any specific features of Go.

He makes one generalization (in the "Overall Simplicity" section) about Go's simplicity being a useful trait, and he's right. You could make a similar observation about Java, but about very few other languages — C, C++, Scala, Ruby and Python all have a lot of pitfalls, hidden side effects etc. that Go doesn't suffer from.

The rest of the article is about how they did testing, how it's one monorepo, how they manage dependencies, etc.

> 'I don't need generics, therefore NOBODY does'

The author said no such thing. He wrote: "Only once or twice did I ever personally feel like I missed having generics". That's all. No generalizations about other people.


I can't believe that I read a nice article about working on a massive project in Go over a couple of years - a meaningful experience that we could probably all learn from - and the top comment doesn't build on the content of the post at all, but is rather a thinly veiled accusation of "Go is a terrible language".


Really? Because your profile says you've been a member for over 2700 days, so you should be used to HN comments by now.


Well I can definitely believe that the first reply to my comment is something pedantic and unrelated to the meaning of the comment. ;-)


LOL, you made my day.


Over time, I've come to notice that once any online forum reaches a certain critical mass of programmers, nearly any mention of a specific programming language will devolve into a flamewar about that language.

I find it frustrating, but I also suspect that it's been that case since approximately forever ago. One day I plan to don a flame retardant suit and wade into Usenet archives to see if programmers were as prone to language flamewars 30+ years ago as they are now.

I'm guessing the answer is probably yes. :)


  ... if programmers were as prone to language
  flamewars 30+ years ago as they are now.

  I'm guessing the answer is probably yes. :)
Your guess is spot on. Not only languages (C vs Pascal, "why doesn't anyone use Ada?"), but probably best remembered for the vi vs emacs crusades. Though about 20+ years ago there were some pretty heated exchanges regarding OOP which seem to still smolder to this day (see HN history for evidence :-)).

There's a reason why Godwin's law[0] came about :-D

0 - https://en.wikipedia.org/wiki/Godwin%27s_law


I think it's the whole bikeshedding thing, where people will jump to talking about the most superficial thing they have an opinion about.


> but is rather a thinly veiled accusation of "Go is a terrible language".

That's because it is.


> '... therefore NOBODY does'

The author never said that, just 'I don't need generics' and they very rarely missed them.

What other criticisms did they dismiss/reframe?

Lack of stack traces in errors is mentioned and mostly dismissed, and error handling in general is reframed I guess... but these claims are backed up by evidence that it works. Crashes and NPE are extremely rare, and for those standard errors are sufficient to locate problem areas.

But then package management is listed as an honest pain point.

Anything else?


Package management is in the works though


Right. My point is that it's acknowledged and not dismissed or reframed at all.


The Blub Paradox argument basically states that you must use the most powerful language because it's the only one that'll give you a broad enough perspective to judge all the other programming languages. Ironically, this kind of perspective is very narrow itself. You gotta consider that the ultimate goal of writing software is to produce solutions that successfully solve users' problems. For complex problems that require lots of people working together, the effectiveness of that collaboration is usually the hardest problem of all, well above the technical problems. Almost all modern programming languages can solve the same problems, in different ways. So the criterion for selecting a language is not power, but how well it helps existing and new members of a team make progress.


I agree with your framing that the collaboration aspects of large projects are probably harder than the technical aspects.

However, I think there are still lingering questions about whether the "power" of a language has noteworthy interaction with the difficulties of collaboration. For example, some might argue that the guard rails put up by FP languages allows them to reason better about interaction with colleagues' code. Talking to colleagues about code interaction is a human collaboration problem too.

If that's the case, then part of choosing a programming language is also in service to mitigating the difficulties of collaboration.


Conversely, though, if moving to a higher level language / paradigm makes you more productive, you may not need so many people in the first place.

Modern-day parallel would probably be WhatsApp, with Erlang taking the place of Lisp.


The mentioned paradox is simply not true. How would being fluent in Haskell make you appreciate the need for assembler programming in, say, math optimizations in Go crypto packages, or any low level stuff?


I would like to hear this claim with more specificity, like if the claim is that learning Clojure will help you predict how the language atoms of Go might lead to some structures, or more narrowly that if Clojure has more supported concurrency models, then it helps you judge Go vs Elixir from a concurrency perspective, or if learning a multi-paradigm language helps you judge a language with a paradigm focus.


This is a case study; no one is proposing that this case applies to all or many other projects. You're the only one extrapolating here.


You're now the third person in the past couple of weeks I've seen insisting that the only explanation is that Go language users must be ignorant of all these other wonderful features.

But I'd say that in order for that to be true, you must believe that there is a large number of Go programmers who either learned Go for their first language, or their other languages they know are all bereft of these features.

That sounds superficially more appealing in the abstract, but when you try to concretize what languages the Go programmers could possibly be migrating from that don't have these features it gets a lot harder. When I tried this recently on /r/programming, I had at least one commenter suggest that maybe a lot of people learning Go might be coming from Visual Basic. I'm not sure how serious they were being, because I honestly couldn't tell if they were being funny or seriously reaching for the argument that Go programmers must all be coming from Visual Basic.

Because the only practically-likely language that fits the bill is C, and I still don't think there are all that many Go programmers who came to Go from a position of knowing only C.

Personally, I use Go a lot at work, tend not to be too pissed off about it (though certainly every once in a while, I am), and I know Haskell. I don't just mean "I can write the fib function", I mean, I wrote code that used conduits, lenses (the "real" ekmett version, and a non-trivial use where I passed lenses themselves around as first-class objects, not just as a field-accessor library), forkIO, STM, a custom monad, typeclasses, and a type family, so not just a casual "I can write the hi/low guessing game in Haskell".

The key is that what I'm looking for in a work language diverges from what I'm looking for in a personal fun language. I've actually learned (from experience!) to be really nervous about the code that FP enthusiasts and snobs produce in a professional context... FP code that strives to use every cool feature possible is often quite poor by most objective standards. Since I actually understand the paradigms in question, I can often see that they are either using features where they don't belong, or even at times outright misunderstanding and misusing them. On a couple of occasions I've even had to clean it up, so if I sound a bit bitter here, well... it's earned. There is a time and a place for code that uses the minimal feature set possible to get the job done, even if that means a few more lines of code and even if it means some programmers might find what they are writing distasteful to their refined palates, and generally large-scale collaborative code is such a place.

I know what masters can do with functional programming. I've seen their work in the Haskell community. (I fall into the set of people who looooove the ekmett lens library, which even a good chunk of the Haskell community finds "a bit much".) There's a lot more people who can create a snarled complicated ball of "features" than there are those masters, though, and if you work with more than a couple of carefully-selected people, you'll be working with them, not the masters.

I'll submit another explanation for your consideration: A lot of us do understand those other features, and we actually, truly do find that where we are using Go, it is not catastrophic that they are missing. Instead of ignorance, what if instead it is a matter of different priorities and different problems resulting in different optimal solutions?


>... Go language users must be ignorant of all these other wonderful features.

All of your comments are reasonable but I don't think it addresses what grabcocque was complaining about.

It seems grabcocque was criticizing the writing about Go but others seem to be interpreting it as an insult to Go programmers. They are 2 different things!

(My guess is that the word "blub" triggers the misinterpretation.)

All of the following can be true: 1) Go is productive and helps get real world work done; 2) Many Go programmers are happy with it; 3) Go's feature set matches the type of work Go programmers are using it for

However, all those positives are orthogonal to writing flawed arguments about Go. For example, the author writes: "Because Go has so little magic,..."

Whenever I see a writer talk about "magic", "simplicity", "spooky action at a distance", etc it usually turns out that the author picked adjectives that sounded good but not well-defined enough for readers to learn anything useful from it.

Consider the well-known C Language function "printf()". Is that "magic"?!? No? Why not? On MS DOS, its assembly language source code calls int21h function ah9[1].

If we want to be simple & explicit with no magic, why don't we insert int21h-ah9 in-line assembly whenever we need to display a string? What about the "return" keyword that "magically" moves that x86 stack pointer (ESP) back?

Why are those abstractions not derided as "magic"? What is the computer science theory that classifies these things over here as "explicit" and "simple" but those other things over there as "magic" and "complex"?

If an evangelist describes programming languages with words like "magic", you're not educating me because I have no idea what your threshold for perceiving it as such is.

tldr: Golang is a great language but that doesn't excuse the flawed intellectual writing about it. (And not to pick on just Go because some of the Rust essays also suffer the same flaws.)

[1] https://montcs.bloomu.edu/~bobmon/Code/Asm.and.C/Asm.Nasm/he...


"Whenever I see a writer talk about "magic", "simplicity", "spooky action at a distance", etc it usually turns out that the author picked adjectives that sounded good but not well-defined enough for readers to learn anything useful from it."

A valid objection.

Here's my personal definition for "magic", which is still a bit fuzzy around the edges, but much more solid than most loose definitions of it: A bit of code is magic when it is not sufficient to examine the tokens the code is made up of and go back to the static textual definitions of those tokens to understand what is going on.

In Python-esque pseudo-code, this is magic:

     class Something:
         def method(): return 1

     s = Something()

     s.WhereDidThisMethodComeFrom()
and the last line does something useful, rather than crash with method not found. You follow s back to its definition, you see it's a "Something". You follow back to the Something class... and there's no "WhereDidThisMethodComeFrom". Something came along later and added it. Who? Where? Oftentimes these are so magical that grepping over the entire code base for "WhereDidThisMethodComeFrom" may not help because the name itself may be constructed.

In more Pythonic Python, the following is middling magic:

    @somedecorator
    class Something: # entire rest of code example pasted here
Following back to "Something" you can at least see that something has decorated it and it's a good guess that that must be involved. Still, it's a bit subtle and decorators aren't generally supposed to do that.

Not magic at all:

    class Something(Superclass): # rest of example follows
Ah, WDTMCF comes from the superclass. Inheritance is defined as doing that in the language spec so you can follow it up the inheritance hierarchy. (But note this holds for statically-coded inheritance; the fancier your dynamic setting up of the hierarchy gets, the more magical it gets.)

Go is entirely non-magical by this definition. The two closest things to magic are having an interface value and calling a method on it, where you can't statically determine exactly which method will be used, and struct composition making methods appear on the composing struct, but both are in the spec and can still be traced back. (The only tricky thing about the struct composition is if you compose in multiple things you might have some non-trivial work to figure out which thing is providing which method.) Haskell, perhaps surprisingly given its reputation, is mostly unmagical. (The OverloadedStrings and friends extensions make it a bit magical, and there is some syntax you can bring in via extension which can be tricky. But otherwise you can, if you work at it, pretty much just use term rewriting by hand to understand anything Haskell is doing.) Python can be magical, though the community tends to avoid it. Ruby and certain chunks of the Javascript community can be very magical. (No non-esolang mandates magic that I can think of. INTERCAL's COME FROM operator/statement/whatever it is may be the epitome of magical.)


I really appreciate this definition of "magical"... reminds me of some code I was reviewing that was something like this:

HTML:

    <button class="does-stuff" id="foo">Foo</button>
    <button class="does-stuff" id="bar">Bar</button>
    <!-- etc -->
JS:

    var handlers = {
        fooClicked: function() { /* stuff */ },
        barClicked: function() { /* stuff */ },
        /* etc */
    };

    $('.does-stuff').each(function() {
        $(this).click(handlers[this.id + "Clicked"]);
    });
And like... OK, that's clever, but when I'm debugging the fooClicked() method 4 months from now, I can't just do a Find All for "fooClicked" and track down where it's being called from.


I suspect you would have the same problem as you see in Python, in any situation where you use code generation with Go.


If they came from any scripting language they probably have never worked with generics -- that doesn't really strike me as an outlandish scenario.


Ask a dynamic language enthusiast about that. Dynamic languages don't have "proper generics" but what they do have is all the generic use cases covered... so... which is more important, having a particular feature, or being able to do all the things that the feature can do?

A generic enthusiast can argue that the type safety is a fundamental difference. A dynamic language enthusiast is obviously going to take issue with that. I'll let the two of you fight it out.

From a Haskell perspective, dynamic types and the sort of generics that Java implements are probably closer together than you might think. Haskell expects your "generics" (which are actually the type classes; what haskell calls "generics" is more like what other languages call "reflection") to be able to provide mathematical guarantees like "If I have an A, I am guaranteed to be able to produce a B for that" which is much stronger than what imperative generics tend to be able to guarantee.


>Ask a dynamic language enthusiast about that. Dynamic languages don't have "proper generics" but what they do have is all the generic use cases covered... so... which is more important, having a particular feature, or being able to do all the things that the feature can do?

Well, the "things that the feature can do" in a dynamic language you can do in Go with interface{}.

It's the one thing you can't do (in Go or a dynamic language) that Generics are all about: type-safe generic code.

So, you've made the parent's case for them...


Well what's more important that dynamic languages don't give you is knowing what the hell the type is supposed to be able to do when you maintain the code.


I worked in C++, C#, and Java for 15 years before my work in Go. I've used generics. I will freely admit to not having a lot of experience in functional or ML style programming languages a la Rust. I'd love to spend some time getting up to speed on Rust, it seems like an interesting language, and at the very least, a great learning experience.


I would definitely recommend looking at Rust (and the ML family).

I have spent most of my programming life in C++, Java, and Prolog (probably in that order). Two years ago I switched to Go for most of my work, because I needed a native language, C++ was just becoming too much effort, and Rust was still changing monthly. I have written some substantial projects in Go, including a neural net natural language parser and a part-of-speech tagger, both used to annotate web corpora.

Since 1.0 has been released I have slowly transitioned into Rust and am now using it for most new work. What I strongly like about Rust above Go: parametric polymorphism (which I do use on a daily basis), limited operator overloading, sum types, RAII, the borrows checker, Cargo, quickcheck [1]. What I strongly dislike compared to Go: compile times.

Deep in my heart, I like ML more than Rust ;), but having a quickly-growing ecosystem is also important.

[1] There is property-based checking for Go, but in my experience its usefulness grows with the strength of the type system.


I really want to learn Rust. It's been on my next-to-learn list for a while now. I have limited time because of young kids and getting embroiled in politics (actual politics). Someday :)


Rust is not a functional programming language :-) it has features which are ML-like, and it has first-class functions, but it doesn't really encourage functional idioms like the ML languages do.

If you want a great intro to ML-style statically typed functional programming (without the Haskell-style monads and functors jargon), check out https://www.coursera.org/learn/programming-languages/ . You'll get a lot out of even just going through the (10 minute each) lecture videos, Prof. Dan Grossman is a great explainer and it's a treat finding out the cool syntax, semantics and idioms of an ML language.


If you want to learn an interesting language, try Swift.

But if you really want a "great learning experience", you should look at Haskell or F#.

Finally, if you want your mind blown, learn yourself a Lisp.


You're missing his point. Maybe you should consider that people who don't share your opinion (Go doesn't have this feature, so it sucks) hold theirs for valid and rational reasons, not just out of ignorance.


Maybe so, but I reject the argument put forward in favor of that position.


I actually lean in your direction. I've strongly headed back to static typing in the last 3-4 years after a lot of dynamic typing. I'm just not in the mood to have that argument with dynamic typing enthusiasts right now. :) I've done my time on that battlefield.


I write Go all the time. The number of times I've said "I wish I had generics" is 0. It's absurd how goddamn often this gets said ON EVERY GO THREAD EVER.


There is absolutely 0 chance in the time you've written go that you haven't encountered a problem that would be best solved by generics. Chances are you're just implementing them in a different way. I say this as someone who uses Go.


If your project has high turnover (say 20%+) then a blub language makes a lot of sense.

Any gains from using a richer language are dwarfed by extra time spent getting new developers up to speed.

I've worked on projects with an average tenure of 9 months and projects with an average tenure of 4 years. The approach you have to take is completely different.

There is just so much variation in our industry.

As an example the most important work I did on one project was setup and maintain a prebuilt dev environment image. It saved many dev years worth of effort. We were adding ~15 new devs/month.

On the other hand I've also worked on projects where setting up and maintaining a prebuilt dev environment would be a complete waste of resources. We were adding a new dev every ~18 months.


The blub article assumes that the computer is the ultimate arbiter of language utility:

But Lisp is a computer language, and computers speak whatever language you, the programmer, tell them to.

A program, especially a large one, spends far more time being read by humans than being written. Computer languages need to be readable by humans, they need the right abstractions, and the minimum of those, so that there is little to learn, little to forget, little to be confused by.

If your language lets you build abstractions only your present-day self can grok, it is less useful than a more limited one which lets you build something you and your colleagues can easily understand and modify in the weeks and months ahead.


Doesn't Graham in his Lisp book talk about how using macros properly leads to more readable and maintainable code, and that Lisp is great at producing that kind of code in general because you can mold it to closely fit the problem domain?


Well exactly, this is why languages like Lisp are so seductive and powerful - they let you write languages within the language. When you have a really flexible language like Lisp, it's tempting to produce a meta-language to describe your problem which is specific, terse, and powerful. However there's an interesting trade-off here.

At first this seems like a wonderful solution, but if you ever work with a few other people, and you don't want to be constantly code-switching between your own thoughts today, Bob 6 months ago, Alice 5 years ago, and Bob last year, you don't really want your language to have so much expressive power. It's exhausting and ultimately fruitless in the long term and across teams; what feels powerful and expressive today as one person writing code can change to being a heavy burden when everyone is inventing their own language and forcing you to speak it, or worse when you attempt dialogue with your past self of 2 years ago and have no idea what you were talking about...

Now that's not an argument for lower expressive power always being better, or against generics (for example), but it does behove language designers and users to consider the limits of complexity, as well as the limits of simplicity.


That's a legitimate concern. But then again, the world is full of programming languages. Sometimes you really appreciate what R gives you when you need to do a lot of statistical computing and data wrangling, or what C or Rust offers for systems programming, and so on.

And speaking of DSLs, the Pandas and Numpy libraries in Python are super useful, and they're possible because Python offers enough metaprogramming facilities to create such libraries, which go a long way toward offering the kinds of programming you find in R or Matlab.

So maybe the Lisp way wasn't wrong? Instead of a bunch of Lisp DSLs we end up with a bunch of programming languages.


I think the point is that Lisp lets you create an ad-hoc, poorly-specified version of any language you like, or perhaps several mashed together, which feels great at the time, but feels horrible when you come back to it later.

DSLs/Jargon/New Languages are constructing a new world, and you need to be really sure the costs of that abstraction are outweighed by concrete and lasting benefits in the domain (sometimes they are, oftentimes they are not), crucially, this sort of language is often best used for a very specific domain and nowhere else (say R for statistics).


The DSL doesn't go away. Your only choices are to implement it, or else read and write the boilerplate you get by imperfectly compiling it in your head.


I agree we're just shifting complexity around, and sometimes it's a matter of taste where it goes, but very flexible languages do allow you to construct a world that is very difficult for others to understand or trace execution in.


That same argument could be applied to functions (or procedures, if you prefer) as well as macros, and yet I don't believe anyone sane is arguing against functions/procedures. Bad code is bad code, just as bad metacode is bad metacode.

I don't believe that Lisp macros are particularly tricky for a professional-level programmer. Yes, students can use them improperly — but students can and will use anything improperly! One grows out of it eventually.


As with many programming language discussions, it's an experiment without a control.


Sadly yes, today I learned C# is a programming language for 10x programmers.


Why is this a surprise? Microsoft employs a fair chunk of the researchers currently working on functional programming languages in industry. The ideas get exchanged between Haskell, F#, and C#. Look at C# 7.0, where many of the new features seem to be an effort to make the language friendlier to Haskell programmers.

At least, that's my perspective as someone who uses Haskell.


I was being sarcastic regarding the blub programmer, JVM and .NET languages are my daily tools.

Given the usual line of reasoning for Go's "simplicity", even JavaScript is for 10x developers.


edit: there are other more useful responses


Actually, I can't deny that I'm a language snob. I also don't think that's a bad thing. I find Go's magical maps and slices, its null pointers, its verbose error handling and its lack of generics to all be in fairly poor taste.

So, yeah, I'm a snob.


+1 for admitting to snobbery; it is an important step!

Different problems (and people) have different solution domains. For instance, the Union-Find algorithm [1] is straightforward to implement in an imperative language with mutation. It is significantly harder to achieve an optimal immutable version, as demonstrated by a paper from 2007 [2].

There are real trade-offs between features in terms of what is easy to express. Your favorite way to program may be more difficult in Go. You think Go's features are in "poor taste." However, it is totally reasonable that different people with different problems might actually like the language. I myself have programmed in many other languages including SML and Scala and believe that for my current problems Go is a good fit.

That said, whenever I have to do even a little bit of numerical work (as I currently am doing) I miss a language with good numerical options (Python, R, Matlab, Julia, ...). Go stinks for numerical work and none of your criticisms have anything to do with why it stinks. Not having a nil pointer would not suddenly make Go a great language for numerical work.

Language choice is once again about trade-offs. I will happily take the trade-off of poor numerical support (less than 1% of my code) for concurrency primitives, compilation to native code, easy C integration, memory safety, and garbage collection. There are things to like about go and things to hate. I do hate the way errors are dealt with, poor support for writing collections, etc... But, just because I don't like those things doesn't mean it can't "get the job done."

[1] (https://en.wikipedia.org/wiki/Disjoint-set_data_structure)

[2] (https://www.lri.fr/~filliatr/ftp/publis/puf-wml07.pdf)


Few people would accuse airline pilots, surgeons, construction engineers of snobbery for demanding high standards.

When it comes to software, challenging this "everything goes" culture is called "snobbery". People are called "senior engineer" after 2 years of copypasting javascript from stack overflow - and become "proficient" in a language in a week.

And yet we wonder why so much software is bloated, unreliable, insecure, overly expensive to develop.


> and become "proficient" in a language in a week.

I'd argue that this might not be a stretch in some situations. If you have a lot of software development experience in varied environments (back end, front end, desktop, command line tools, embedded, etc.) and with various dissimilar programming languages (static/dynamic, compiled/interpreted, Algol-inspired/not-Algol inspired, etc.), you should at some point be able to pick up a language at a decent rate.

And even though you won't master its idioms, unlike a total newbie, you'll be aware that you don't know its idioms. A sort of "known unknowns", if you will.


There's having standards and there's snobbery. Rejecting a language simply because it doesn't fit your conception of what a language should be is snobbery.

I've chosen Go for projects specifically because it produces better software (within certain contexts). A fast simple binary, a simple language (so co-workers can look and hack on the code), etc.

To act like people are picking Go because they are ignorant or stupid or lazy (which is what the notion of "blub" always is) is unfair and lazy.


> I find Go's magical maps and slices

Magical? What's magical about a hashmap or a pointer into an array? There are a lot of valid criticisms of Go; "magic" is not one of them.


Go has special syntax for channels, maps and slices. It's impossible to define your own data structures using the same syntax.

An argument can be made for array subscripts, certainly, but why can't you use 'make' for your own code?

They're also magic in that Go does have generics, but only for those three types.


I think you and I have different definitions of 'magic'. I agree that Go does not have user-defined generics. I don't think of a few blessed data-structures as 'magic', but I won't argue semantics.


I disagree with the concept of "blessed" syntax in the first place. I want users to write libraries just as powerful as the standard library.


Philosophically, I agree. Pragmatically, generics seem to tempt programmers into writing really weird code in pursuit of some purist notions of terseness or reusability. After having used Go as well as lots of languages with generics, I will say that most Go programmers write very similar code for very similar problems, and it does make it easier for programmers to understand each other's code. There are a few times that user-defined generics would be handy, but those times are really very rare and I'm not convinced that adding generics to the language would be a net positive. In fact, I think the case for algebraic data types is stronger than the case for generics.


I need a tree. Sorted lg(n) structures are incredibly useful, and maps aren't sorted, but as it stands I can't have one without risking runtime type errors.

Static typing that doesn't cover common use-cases is worse than useless. It gets in your way.


> I need a tree. Sorted lg(n) structures are incredibly useful, and maps aren't sorted, but as it stands I can't have one without risking runtime type errors.

I agree. See my previous comments.

> Static typing that doesn't cover common use-cases is worse then useless. It gets in your way.

I agree. Fortunately Go's static typing covers the most common use cases. If your primary deliverable is typesafe tree algorithms, Go might not be for you.


I think algebraic data types and not having nil would be a huge improvement to Go. I would agree with you about generics if Go had a good macro or templating system. Having to generate code feels a bit silly.


Grabcocque is not advocating for less in the marketplace of ideas. He's advocating for quality criticism.

But what are you trying to say? That grabcocque doesn't think there's more than one way to do things? That he's a snob? That he should get over himself? Why do you hide your derogatory speech behind implications and snideness?


Grabcocque is not advocating for anything, but attacking the article and go enthusiasts. There is no criticism to Go as a language presented, only criticism of those that claim to be productive in it.

In fact in its original form the comment contained only the link to the blub article and nothing else. This isn't enlightened discourse, this is a knee-jerk reaction :)


I just found it problematic that aaron-lebo was specifically and snidely attacking a user's personality, whereas grabcocque was attacking an article author.

At this juncture the conversation had shifted from Go to a question of conduct.


I've edited the comment and see that this discussion is not productive, but to defend myself, I see no difference between a simplistic "this is blub" response and "attacking a user's personality".

The discussion had nothing to do with the article which is why I responded the way I did.


And half are from dependencies? ;)


Read the article:

As of this writing, the main repo for Juju, http://github.com/juju/juju, is 3542 files, with 540,000 lines of Go code (not included in that number is 65,000 lines of comments). Counting all dependencies except the standard library, Juju is 9523 files, holding 1,963,000 lines of Go code (not including comments, which clock in at 331,000 lines).


It was a joke.


I think what the author really wanted to say is: choosing Go was a mistake.


<_< >_>

No.

Go was a good choice. The project benefited from it, both from a hiring perspective and from a technical perspective. If I were in charge, I would make the exact same choice again.

Juju lives exactly in Go's sweet spot. A networked server worked on by more people than you can fit around your dining room table. Not every project would benefit from Go, but this one certainly did, IMO.


this appears to be incorrect.


Off-topic rant: I don't know much about the details of the godeps hash file, but I do wish there were better infrastructure for merging the contents of various file formats, either built into git or shipped as a separate tool. I've wasted too much time merging vcxproj.filters files just because, in their XML representation, one item in a sequence of folder assignments occupies three lines (opening tag, contents, closing tag) instead of everything being on one line. Similar problems arise with JSON files when new items are added concurrently to the end of an array.


I found a program someone wrote called sortxml that will prettify and sort XML files. It's written in C#. It might be possible to run it on Linux as well, or to find a similar program for Linux that does the same thing. I'd investigate further if it weren't for the fact that I rarely work with XML files.

https://github.com/kodybrown/sortxml

You may be able to write a small wrapper script that does the following: for each filename passed as an argument to your script, run sortxml on it and store the result in a temporary file; then, once you've done so for all files, pass the sorted files to an existing merge tool such as meld, kdiff3 or vimdiff.

Let's say that you name your script mergexml.bash and that you put it in ~/bin/. Then it should be possible to first git merge as usual but when a conflict arises in an XML file, you will run something like

  git mergetool -t ~/bin/mergexml.bash example.xml
And your script will have simplified the process.

For bonus points, you might write an unsortxml program that your script will call at the end to rearrange the nodes and attributes to be in the same order as they were in one of the original XML files -- the user would be prompted to select which of the original XML files to use for ordering the nodes and attributes.

I haven't tested this of course, otherwise I'd probably have actual code to share for it but I think something like what I've outlined above should work.


Here's an extended example.

  mkdir -p ~/tmp/xml-merge-example
  cd !$
  git init .
  
  cat > example.xml <<EOF
  <?xml version='1.0'?>
  <hurr durr='hello'/>
  <durr hurr='world'/>
  EOF
  
  git add .
  git commit -m "Initial commit."
  
  git checkout -b alternate-reality
  
  cat > example.xml <<EOF
  <?xml version='1.0'?>
  <durr hurr='world'/>
  <hurr durr='hello'/>
  <foo reality='alternate'/>
  EOF
  
  git add .
  git commit -m "Alternate reality."
  
  git checkout master
  sed -i '' -e 's/hello/foo/' -e 's/world/bar/' example.xml
  
  git add .
  git commit -m "Hello foo to the world bar."
  
  git merge alternate-reality
The attempted merge will result in the following:

  Auto-merging example.xml
  CONFLICT (content): Merge conflict in example.xml
  Automatic merge failed; fix conflicts and then commit the result.
For a quick PoC, let's create a little script that will echo the arguments that were passed to it and then exit with failure. The real script would work as outlined in the parent comment and would exit with success at the end as long as the merge worked out.

The -t argument to mergetool works a bit differently than I thought, but not by much -- the tool must be referenced by name, and the corresponding mergetool.<tool>.cmd must have been set. In the command you use the variables $BASE, $LOCAL, $REMOTE and $MERGED as explained in https://git-scm.com/docs/git-mergetool#git-mergetool--tlttoo... . This allows tools with different argument ordering, flags and such to be used.

Let's write the PoC script

  mkdir -p ~/bin
  cat >~/bin/mergepoc.bash <<'EOF'
  #!/usr/bin/env bash
  
  echo "$0: WARN: Proof-of-Concept" 1>&2
  echo "$0: INFO: Arguments: $@" 1>&2
  exit 1
  EOF
  
  chmod 755 ~/bin/mergepoc.bash
and define the mergetool cmd

  git config mergetool.mergepoc.cmd '~/bin/mergepoc.bash --base "$BASE" "$LOCAL" "$REMOTE" -o "$MERGED"'
and tell git to trust the exit status of this tool

  git config mergetool.mergepoc.trustExitCode true
Then we run it.

  git mergetool -t mergepoc example.xml
Result (with an additional line-break inserted in order to avoid having the contents shown here scroll sideways):

  Merging:
  example.xml
  
  Normal merge conflict for 'example.xml':
    {local}: modified file
    {remote}: modified file
  /home/erikn/bin/mergepoc.bash: WARN: Proof-of-Concept
  /home/erikn/bin/mergepoc.bash: INFO: Arguments: --base ./example_BASE_4020.xml
    ./example_LOCAL_4020.xml ./example_REMOTE_4020.xml -o example.xml
  merge of example.xml failed
  Continue merging other unresolved paths [y/n]? n


The godeps file is just one dependency per line, tab-separated values, deliberately so that it's easily amenable to shell script processing.

Aside: the conflicts mentioned in the article should never be a real problem because you can always resolve the conflict by just recreating the dependences.tsv file (you should never be editing it manually anyway).
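Since it's one tab-separated line per dependency, tooling over the file stays trivial. A hedged sketch in Go (the column layout -- import path, vcs, revision, timestamp -- and the example values are assumptions based on typical godeps output, not taken from Juju's actual file):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseDeps reads godeps-style TSV data and maps each import path to
// its pinned revision. Assumed columns: path, vcs, revision, time.
func parseDeps(data string) map[string]string {
	revs := make(map[string]string)
	sc := bufio.NewScanner(strings.NewReader(data))
	for sc.Scan() {
		fields := strings.Split(sc.Text(), "\t")
		if len(fields) < 3 {
			continue // skip blank or malformed lines
		}
		revs[fields[0]] = fields[2]
	}
	return revs
}

func main() {
	// Illustrative data only; real revisions would be vcs hashes.
	tsv := "github.com/juju/errors\tgit\t1b5e39b83d18\t2014-05-26T03:36:36Z\n" +
		"gopkg.in/mgo.v2\tgit\tc6a7dce14133\t2015-01-24T10:50:23Z\n"
	for path, rev := range parseDeps(tsv) {
		fmt.Printf("%s pinned at %s\n", path, rev)
	}
}
```

The same job is a one-liner in awk, which is presumably the point of the format.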


That's not exactly true, Rog. If two people have changed the file, which causes a conflict, then you have to resolve the conflict manually. Recreating the file with godeps would populate it with whatever commit happens to be in your gopath right now, which has no relevance to what is conflicted in the file and could be a completely different hash.


I tend to check out one branch, run godeps -u, then check out the other one and run godeps -N -u. Then you've got the newest deps from both branches. I still wouldn't resolve the conflict manually.


TIL about godeps -N.... a month too late :) Cool, though, that definitely would help with that problem.



