A good twelve years. Go really changed the way I think about programming. I used to be excited by programming language features instead of what problem I was actually trying to solve with programming. I'd spend hours condensing 10 lines of perfectly working code into 1 line of the most concise text possible; Huffman-encoded better than even gzip could imagine. And I'd feel great about it. I'd imagine people reading it and thinking "wow, there's no way I'll ever be as smart as jrockway, I should nominate him for the Extreme Excellence And Awesomeness award!" Sadly, there is no such thing. Go taught me that it's just structs, for loops, and if statements, and if you want someone to be impressed, they should be impressed by what the program does for them, not what language features you used to implement it.
It has also ruined other programming languages for me. "go get" doesn't print 30 lines of text telling me that a new minor version of "go get" is available, and that I should stop what I'm doing, rm -rf node_modules, upgrade it, and then resume what I'm doing. It just pauses for a bit, and then I have the library. (I also love reading about the node community's push to make installing libraries not run arbitrary code on your machine. Yeah, I've been doing that for years with Go. It's great. I can use a library and it can't print a "hire me!!!" ad at install time. What an innovation!)
I also remember struggling for years with not being able to run other people's software. There was an RPM package, but not a Debian package, so I had to build it from source. ./configure; make; oh no, the C compiler I have can't compile this code. With Go, you can just emit a binary for every supported platform, dump the binaries somewhere, and everyone in the world can download one and run it. No installing packages, no finding the right version of Go (which doesn't exist in your Linux distribution, because they never ship up-to-date versions of programming languages). Just software, running, forever.
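As a sketch, emitting a binary per platform is just a matter of setting GOOS and GOARCH before building; the output names and dist/ layout here are illustrative, not a convention:

```shell
# Cross-compile the current module for several platforms from one machine.
# No target-specific toolchains needed for pure-Go code.
GOOS=linux   GOARCH=amd64 go build -o dist/myapp-linux-amd64 .
GOOS=darwin  GOARCH=arm64 go build -o dist/myapp-darwin-arm64 .
GOOS=windows GOARCH=amd64 go build -o dist/myapp-windows-amd64.exe .
```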
It's really good stuff. Go is the tool that lets me make software, and not worry about bullshit. And that's revolutionary, even 12 years later.
Agreed. It still surprises me that so many other languages fail at the fundamentals (minimal learning curve, static binaries, fast builds, reproducible dependency management, great tooling, great stdlib + ecosystem, etc) and yet many devotees of those languages have positively hyperventilated about Go's error handling and type system for 12 years. Go is finally getting generics and I'm sort of cautiously excited about it (like I was when Apple added the TouchBar to MacBook Pros), but any net benefit is going to be positively negligible in comparison to the degree to which Go raised the bar on the fundamentals.
I’m not arguing that there are languages where the state of tooling is bad to say the least, but how did Go raise the bar compared to something like Java or C#, the actual “blue-collar” languages?
There's a lot to like about Java and C#, but here are a few things that Go improved upon that spring to mind. Note that in some cases, Java and C# may have caught up in the interim. Note also that some things (e.g., low-latency GC) understandably weren't available from day 0 in Go, but followed quickly:
* Static binaries by default
* Single, straightforward build system (no DSLs)
* Single, ubiquitous code formatter
* Minimal learning curve
* Zero-work package publishing
* Zero-work documentation generation and publication
* Testing framework out of the box
* Production-grade HTTP server out of the box
* Low latency GC out of the box
More generally, C# and Java have always seemed to have a philosophy of "make everything as abstract and configurable as possible, and make the user understand every configuration option in order to do anything (overwhelm the user with configuration)". Granted, I haven't used C# or Java seriously since Go was open sourced, so it's possible that these are among the things that improved in C# and Java following Go's release.
> how did Go raise the bar compared to something like Java or C#, the actual “blue-collar” languages?
For me, Go's combination of being well-thought-out, opinionated, and batteries-included took away the futile fanboi flamewars/bike-shedding: JBoss or WebSphere? Tabs or spaces? Struts or Spring?
For these reasons and more, Go offers a more pleasant experience when working in a team. When reading code others wrote, I encounter fewer surprises in the logic and project structure. Go codebases are easier to grok, IMO.
> I'd spend hours condensing 10 lines of perfectly working code into 1 line of the most concise text possible
Which is also the hardest part of convincing people to use Go for me: "Why can't I just .map()/.filter()/.find()?" followed closely by "Why do I always have to check for errors?"
What is odd to me is that while yes, my Go code is more verbose than my node services - I always end up writing less Go for the same thing.
My non-Go code is starting to look more like my Go code.
One of the things Go taught me is that I was not being as careful about my errors as I should be. It can be argued that exception-based handling provides you a nice baseline default, but it makes it way too easy when doing network or system-type programming to thoughtlessly default to that, when you need to be thoughtfully defaulting to that.
With sufficient care, exception-based programming and errors-as-values converge in the end anyhow in "code that treats errors correctly". But my exception-based code is a lot more informed by the errors-as-values approach now; a lot more try statements with catch statements that actually do something.
Even Haskell's very nice Either monadic handling can make it too easy to be in the heat of the moment and not thinking about what the errors actually mean and what I can do about them.
I don't think it's appropriate for every domain, which is why I qualified the code I tend to work on. But in those domains I'm thinking a lot more about how every single line of code can go wrong rather than leaning on default exception-based handling and expecting it all to work out.
> One of the things Go taught me is that I was not being as careful about my errors as I should be.
I identify with this so much. Especially when dealing with external things (file system, database, network) things can go wrong at nearly every step. And yeah, that means you have to check errors at every step, but it forces you to think about how you want to handle them, and what message you want to propagate when they happen. As a result, my Go code has very few unexpected errors in production.
Hard to overemphasize this. Handling errors is similar to the benefits that writing tests provides - slower upfront, but a more stable product gets shipped.
Handling errors everywhere means problems are already solved before they happen. No 1 am pagerduty alerts because we didn't consider what would happen if a DNS server hung until we timed out and thought just wrapping in a try/catch and crashing was a good idea.
Things can go wrong at every step, which means each collection of fallible steps carries with it a combinatorial explosion of successes and failures. Do 5 fallible statements in sequence and it's 5+4+3+2+1 = 15 mock expectations you need to write. This tedious enumeration exercise is the majority of the time that goes into writing Go programs, for me at least.
Most of the time, you'll return early after an error.
To get into a situation where you'd need to handle n! cases, you'd need to keep running the statements after a failure, and then collate and return all the errors at the end.
Not sure why you'd want to do that. You'd definitely need to go out of your way to do something like this, and probably start asking yourself why you're doing this pretty early on. Plus it would flat out refuse to compile in some cases where there are dependencies between the statements.
I have some code where I use multierrors and collect everything into one big error (I highly recommend this for "validation"-type code, you should generally return all errors not merely the first), but what code are you writing where you're running 5 statements unconditionally but need to handle nearly-arbitrary combinations of them failing, and multierrors aren't what you need?
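A minimal sketch of that collect-everything pattern for validation code, using errors.Join from the standard library (added in Go 1.20, so after this thread; earlier code used third-party multierror packages for the same effect). The validate function and its rules are invented for illustration:

```go
package main

import (
	"errors"
	"fmt"
)

// validate reports every problem rather than stopping at the first,
// which is what you want for "validation"-type code.
func validate(name string, age int) error {
	var errs []error
	if name == "" {
		errs = append(errs, errors.New("name must not be empty"))
	}
	if age < 0 {
		errs = append(errs, errors.New("age must be non-negative"))
	}
	return errors.Join(errs...) // nil when errs is empty
}

func main() {
	if err := validate("", -1); err != nil {
		fmt.Println(err) // both messages, one per line
	}
}
```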
If you have 5 statements like "if err := doThing(); err != nil { return err }" then you need to simulate the success of all 5, success of the first 4 and failure of the last, success of the first 3 and failure of the 4th, success of the first 2 and failure of the 3rd, etc.
An automated test suite is table stakes. If you can achieve correct, maintainable code better with functional or integration tests than unit tests, you should.
There is no hard technical line between "unit tests" and "integration tests"; it's a semantic question of how you define your units. Even if you do take a tiny, restrictive definition of your units, you could still use a fake or stub instead of a mock.
To get very specific for Go, we use testify suites, tend to set up fully functional stubs in `SetUp`, and then test cases either use them to verify happy paths, or `s.BreakStep1()` in one test, `s.BreakStep2()` in the next, etc. So in this case we would write a total of five extra lines for the five possible ways to break.
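Stripped of the testify scaffolding, the idea is roughly this: a stub whose steps can be broken one at a time, so each failure path is a one-line test case. The steps type and breakAt field are invented names standing in for the real suite:

```go
package main

import "fmt"

// steps is a stub pipeline in which any single step can be made to fail,
// the "BreakStepN" idea from the comment above, in miniature.
type steps struct{ breakAt int }

func (s *steps) run() error {
	for i := 1; i <= 5; i++ {
		if i == s.breakAt {
			return fmt.Errorf("step %d failed", i)
		}
	}
	return nil
}

func main() {
	// One happy path plus one case per breakable step: six cases total.
	fmt.Println((&steps{}).run())
	for n := 1; n <= 5; n++ {
		fmt.Println((&steps{breakAt: n}).run())
	}
}
```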
I think that is a fair opinion and my life at work would be better if others shared that opinion. Go however does officially stake out the opposite position: coverage only "counts" if the test is in the same package as the unit under test.
Go’s error handling is a strange thing to praise considering it is entirely possible to ignore it without warning, and the way to check error kinds was bolted on through type assertions when errors are supposed to be values in Go. Error kinds are important, especially in network programming where the error can be dozens of different things. Compare this to something like Rust or Haskell, which force you to handle errors and their variants.
Once again, Rust does not force you to handle errors; you can ignore errors in Rust and it will compile. I can't speak for Haskell, but I doubt there is a language that forces you to handle errors.
    let _ = std::fs::File::open("config.txt"); // Result silently discarded

Error ignored; it compiles, and it will probably end badly.
Pretty much every standard Go workflow warns you when you ignore an error. If you only choose to run "go build", yes you can ignore it, but a quick pass with a linter like errcheck or staticcheck and it'll be plain as day.
Certain things need to be available out of the box.
The whole point of a programming language is to make the programmer's life easier.
By now, decades of exposure to programming languages have made try/catch the standard way of working with errors. Forcing the programmer to guess issues when the compiler/runtime could be doing it is, IMO, just a plain waste of human labor, and for even medium-to-large applications a pointless exercise.
In the first and the last case, it seems obvious to me where the error is. An obvious code smell caught in review. The second error, yeah, but lots of languages don't enforce error handling around printing to stdout, and even though Rust does in theory, in practice everyone just unwraps it and moves on with their day.
> but lots of languages don't enforce error handling around printing to stdout
They do, they throw exceptions.
> In the first and the last case, it seems obvious to me where the error is
The last case is especially insidious. I've seen it in production several times now. Very easy to miss. Particularly when refactoring/copy pasting code (which you have to do a lot of in golang).
This is exactly my experience. Being forced to handle so many more errors showed me how much error handling I was inadvertently eliding in other languages!
I've used Scala in production codebases and it was common to just "let the EitherTs handle the exceptions". What ended up usually happening was swallowed exceptions that nobody bothered to handle and log because it was much less idiomatic Scala/FP to actually log what happened underneath than it was to return the first error that happened by short circuiting the EitherTs in for/yields (Maybe and do notation, for Haskellers). So we'd get 5xxes and log lines like "row not found" and have absolutely no idea why.
I see a lot of the same issues in Rust code today FWIW. People just keep bubbling up errors and at the toplevel say "fuck it" and dump them, without _actually_ building a reasonable error chain to offer the programmer a story behind how the error happened. The fact is, error handling in any language is tedious. Whether you do it with a sum type, a product type, or Go's error values, either way there's going to be lots of verbosity and boilerplate.
To be more precise, it is the language that makes error-as-values so easy that it actually strays over into "as easy to forget about errors as exception-based handling".
You can successfully build huge chains of code that are just blindly passing errors up without adding any context about where they happened or anything in a way that even Rust can't compete with.
Haskell mitigates this issue by also having one of the more clever ways of dealing with errors, if you start getting into monad transformers or some of the advanced stuff, so it's not all bad. But it definitely can make it so easy to "handle" errors that you just forget about them.
I'm a huge go proponent, but I do think the lack of map/filter/find etc. is a big downside to the language. I know how to write for (if item == myItem...) or a for (if item > max...) but it feels like a colossal waste of time every single time I write one of these loops.
Go would benefit a lot more from some basic slice manipulation tools compared to features like generics that have actually made it into the language.
The error handling I don't find so frustrating - it's great that it explicitly marks at the call site which functions can potentially error. It's like a working version of an inverse noexcept from C++. I do wish they would take a leaf out of Swift's book though, with the try keyword. In practice the error handling flow in my Go programs is identical to using exceptions; it would be nice to have some helpers to make this default less verbose without losing the ability to distinguish at the call site which calls are error-y.
I imagine they'll add map/filter/find after generics are in. It's pretty easy to define some slice types though which include those in the meantime, type Slice []string and add some functions then just use your new type for collections.
I suspect but do not know that multiple return values in a function combined with the inability to write functions against multiple return types like (T,error) will limit the usefulness of traditional iterators in Go. I'd love to be proved wrong as they'd really clean up my codebase though.
Depends what you're doing I guess, not everything needs to return an error, and errors can be accumulated and stored during certain operations and dealt with at the end.
Most of the perceived verboseness of Go comes not from the language or libraries but from the formatter, which does not allow you to compress the 3 lines of an error check down to a single one.
The actually important bits are hidden in the middle of line noise. In the "common case" where `actually_important_bits` is just a simple function call it's not necessarily as bad, but the problem is when you have ten successive instances of this and one is slightly different. It's impossible to notice the important difference at a glance.
For an industry that is just starting to understand that code is read hundreds of times more often than it's written, golang fails at the few things we actually know for certain about what makes it easier to understand code at a glance.
If you're using `res`, then you aren't going to scope it to the `if` statement. That is maybe another problem, sometimes you have to do this:
    res, err := actuallyImportantBits(...)
    if err != nil {
        return nil, fmt.Errorf("actually important bits: %w", err)
    }
And other times you have to do this:
    if err := actuallyImportantBits(...); err != nil {
        return nil, fmt.Errorf("actually important bits: %w", err)
    }
The problem here is that you really don't want "err" to leak to the outer scope, so the second case is preferable from an absolute reliability and least-surprise perspective... but you can only do that in certain arbitrary cases. I think it's a bit of a wart.
You can certainly handle "res" in an else block, or even write "err == nil" and handle it there, but that is surprising. I would say it's simply not done, ever, but the Go codebase itself does it (src/go/parser/interface.go ParseDir was the first example I found; but my search returned many screenfuls of candidates so there are probably more cases lurking in there).
The fact that you have a choice is not ideal, basically.
But, having actually important bits in if err := ...; err != nil {} blocks is not detrimental to readability. You will know how to read that after 5 minutes of reading any Go program.
I'm far from a Go expert, but I feel like this line is a bit pointless, especially if you write it a lot. At this point, why not just not handle the error, or panic?
Because when you're writing a reusable, self-contained function, you often don't know enough of the context to "handle" the error.
Let's say you've got a function that reads from a file. If the file doesn't exist, what do you do? That entirely depends on the context the function is called in. If it's in a web server, you might return a 404. If it's a background task polling for some data, you might sleep for a while until the file is uploaded. Or you might want to create the file yourself.
I think a panic is still better than a chain of if err != nil { return err } that bubbles up and does nothing. Of course the best solution would be proper error handling but not everyone does that (and it's not always obvious what to do).
And most of the time you can't handle it anyway. Consider a database query that fails because the connection errored (network split). What to do now? Restart the network switches and wait? Of course not; in HTTP you will just return a 5xx error and hope it comes back. In Go you need to bubble these errors up to your middleware and handle them there.
If this is seriously what you do when something underneath you fails, then you aren't writing in the domain of software where errors really matter, and you're probably better off in a language that has a gigantic try/catch handler around everything. In that case you may as well just log (or not) and ignore the error in Go as well then.
At high scale, you want to perhaps try your query a few more times, backing off exponentially, maybe even round-robining (or more complicated load balancing) among different targets. Throwing up your hands and 5xx-ing a request that easily is not something you can afford at high scales of traffic where small outages happen _all the time_.
so basically you mean that this only applies to like 0.001% of all websites/apps.
thus it's stupid to do it. I mean, even then you would have an error that you need to bubble up or circuit-break somewhere, but probably not at this layer.
also KISS and MVP, never ever overcomplicate something when you do not know anything yet, so most people don't need to built for scale at day 1 and it also is stupid to do it.
You can get off-the-shelf libraries that do this for you with zero-effort sane defaults. Most of them let you tweak the settings. I'm talking about Python's `requests` and Rust's `reqwest` here btw, two of the most popular HTTP client libraries out there. Python's `urllib3`, which `requests` uses, exposes options for backoff strategies. `aiohttp` has a lot of these parameters as well. Unfortunately Rust's equivalent, `hyper`, isn't nearly as ergonomic/easy to use.
That’s something we handle as close as possible to the outbound client, which is several layers in from the inbound request. Much of it is in gRPC middlewares, so by the time a gRPC client returns an error to the gateway it’s already over.
Our coding convention (and I happen to agree) is that the outermost layer to touch the error is the one to log it. In this case the HTTP handler, or perhaps a gRPC middleware. You don't want to be logging errors as you propagate them, or the logs will show the same error at a bunch of different call sites vs. one complete picture of what happened.
Seriously. I see so many functions in my code base that consist of 1 line of the thing I actually care about followed by 3 lines of boilerplate if err not nil… over and over again.
IMO if Go didn’t have its tooling, no one would care about it.
Every if err != nil { return err } lets me mentally draw a line in the sand and not worry about exception handling for code above that line. It lets me start fresh and restart my mental model with the lines of code below the error handling block.
It doesn't take years of writing Go to understand this; all you need is an open mind about how Go does things.
> It lets me start fresh and restart my mental model with the lines of code below the error handling block.
My code is almost exactly:
    err := doThing()
    if err != nil {
        return err
    }
    err = doAnotherThing()
    if err != nil {
        return err
    }
etc.
There's no need for me to "start fresh". Each line that actually does something might return an error, which needs to be returned. That's it. It's a complete waste of space and inhibits readability to absolutely no benefit. And this is extremely common across our code base.
If you think there's some virtue in writing verbose and inexpressive imperative code, what does Go provide in that department that you couldn't have got from Java 1.4?
I have often thought that if you could travel back in time and make Java's interfaces work like Go, there would be no Go today. The first-order effects of that change may not seem like much, but the second-order effects of that change is profound. In Java, interfaces must temporally precede their implementations; in Go they don't have to. This turns out to be huge in practice. It turns out that huge swathes of all that boilerplate and frameworkitis that Java is so well known for are just trying to get around the consequences of that mistake, because all the interface-based structure has to be laid down in advance.
It isn't the only difference between the two, but I think it would have consumed enough of the oxygen of the niche Go is currently in that Go might never have been born.
In practice, I find Go to be just slightly harder than working with dynamically typed languages, rather than a huge step, and the benefits I get in return make me prefer it for almost any task above 200-ish lines. I definitely can not say the same about Java, because of this difference and the second-order effects it has on the entire ecosystem.
It is still completely practical to create a production-level Go project by just cracking open a text editor and typing "package main" into your editor and typing away. I get the impression most people would not consider that a practical way to start a production-level Java project.
Java’s nominal typing is both better and worse at the same time - but go is not novel in that area, plenty of similar languages existed before as well, so I don’t think having it contributed to go’s success.
As for your last paragraph, I feel like we often mean something different between a production-level go and java project. Non-IDE java development is entirely possible, but I think a prod project usually means something much larger than it does in case of go.
"plenty of similar languages existed before as well, so I don’t think having it contributed to go’s success."
I struggle to think of one that has risen to the heights Go has risen.
I'm a computer language polyglot and a bit of a language tourist, though not as much as some people. There's a huge churn of features out there in some language somewhere, that has never manifested in any top-level language like C++ or Java. (For instance, array-based programming has been experimented with quite a bit, but has never been in a top-grade language. The closest that I know of is NumPy.) Whether Go is a top-level language or not at this point is a matter of legitimate debate (depends a lot on your exact definition), but it's certainly knocking on it.
The closest I can name are things like Python, but there is a fundamental difference between run-time duck typing and compile-time interface conformance, although they're certainly related.
TypeScript hasn't risen to the level of Go. If you live in the JS world that may not be obvious, but in the general landscape it's not as prominent.
A lot of people make a lot of fuss over how fast the programming world moves, but I'm a bit of a contrarian on that. There's a huge amount of churn at the very small scale, but when you get to that top-level set of languages, we're actually very conservative. For pete's sake, C is still a top-level language in 2021! It's finally on its way down, but it's got a long way to go to fall out of that criterion, unfortunately.
This is why I didn't even necessarily label Go as a "top-level" language. It's only 12 years old, after all. It's a whippersnapper of a top-level language. 25 years old makes for a young top-level language!
> This turns out to be huge in practice. It turns out that huge swathes of all that boilerplate and frameworkitis that Java is so well known for are just trying to get around the consequences of that mistake, because all the interface-based structure has to be laid down in advance.
I'm a little lost by what you mean here? In my mind the only difference between a Go interface and a Java interface is Java lets you declare them in the class signature.
Java does not let you declare them in the type signature... Java requires you to declare them in the type signature. That means the interface must precede the implementation, temporally, as in, it must exist first. In purely local development, you may write the implementation before the interface, but until you at least have them together, the compiler won't let you claim you've implemented that interface.
That means you can't just take an image class from some JPEG library and declare an interface around its already-existing methods. That means the JPEG library has to know all the interfaces that may be useful, and declare them all first. If some of those interfaces are from other libraries, it must depend on those libraries to do it. This pressures image libraries to get together and have to declare "image processing" frameworks, which creates pressures to create all-singing, all-dancing image frameworks because these new interfaces being declared have to work for everything up front. If you get the interfaces wrong, the end-user of these libraries can't just fix them up on the fly. The framework then is pressured to undergo significant churn as it keys in on the correct all-singing, all-dancing set of interfaces to provide, either breaking backwards compatibility or having to drag along every bad decision it made for extended periods of time.
Granted, if a set of libraries does successfully run this gauntlet, the end result can be quite impressive, but it's a hell of a gauntlet to run!
This is also why it's at least a modestly acceptable Java practice to pre-emptively declare an interface for anything you think you might need an interface for later, because if you don't do it now, you'll have a harder time coming back and doing it later. I have a Java code base where darned near every class is basically duplicated, as both an interface and an implementation. I understand not everyone thinks this is good practice, but the language still pushes you in that direction even so. In Go, it simply neither good practice, nor something the language pushes you towards.
In Go, if I need to parse PNGs, I get a PNG parsing library. It just parses PNGs. The author of the library can make the best PNG library they care to, and the author of the library has no obligations to pre-declare any interfaces. If someone wants to weave this into a metalibrary, they can, and they don't have to fork the PNG library to do it, they can easily declare interfaces as they like, wrap things if they like, whatever. It's all more flexible. You're less likely to end up with that best-of-breed, all-singing, all-dancing awesome framework that is integrated and does everything in the end... but in the meantime you also get to use all the code that would have failed to finish the Java gauntlet.
In Java, because interface declarations must temporally precede all implementations, there's this huge pressure constantly pulling things into huge, all-encompassing frameworks, because all requirements are constantly being pushed up into the interfaces... the same interfaces that also have to exist first, before they can use them.
On paper, this is such a small little difference. It even sounds good... "why, of course we should have to declare conformance to interfaces? What if we accidentally implemented an interface and all hell broke loose? [1]" In reality... it's been a huge mistake.
Java's interfaces (nominal typing) are superior for both readability and IDE performance. Working on a large golang code base, it's always a struggle for both the programmer and the IDE to find out what interfaces a given struct implements. Coding is also more tedious, as it's not straightforward to add a new function to an interface and get immediate feedback at the struct declaration site about the missing functions. It's just a big mess overall.
This is of course not to mention the dangers of unintentionally implementing an interface and having bad things happen (this happened in the golang stdlib of all places, as I recall).
Languages like Scala with HKTs or Kotlin with delegates (if I'm not mistaken) solve the issue of delegating to interfaces without much boilerplate. It's just that golang authors have not been exposed to other languages since the 70s.
Go is constantly being improved and refined compared to Java 1.4, there is a strong community of third-party packages, you aren't shoehorned into the OOP box by basic language design, and you get green threads in the form of goroutines while in Java land you are dealing with native threads and the issues they involve (curiously, I just learned Java <1.3 actually did use green threads; I suppose they were a casualty of the efforts to improve performance). The Go stdlib is much leaner but simultaneously much more useful than the Java 1.4 equivalent (even the latest Java stdlib is missing basics like JSON!), date/time handling is sane compared to the pre-Java 8 equivalent, etc. etc.
Java is getting green threads by means of Project Loom, and value types by means of Project Valhalla.
Not to mention it has sum types and pattern matching, something not available in golang. golang doesn't even have proper enums, quite astounding really.
Comparing a moving thing to a static one is quite meaningless, but regarding third-party packages the JVM is parallel only to the python and node ecosystems.
Well, at some level you could say Go (pre 1.18) is Java 1.4 but with speed as a differentiator. Development speed, build speed, deploy speed, and runtime speed.
If Russ Cox ever proposes to rename Go to JavaScript, I guess no one would complain. ;)
Than Java 1.4? Yes. Lighter is true of current Javas as well, but faster depends on the program. Go can often prevent garbage from being generated in the first place, but when creating garbage is a must, Java will happily handle heaps up to a terabyte in size, and its GCs are simply the state of the art.
Also, Go most definitely has a runtime, what runs the GC otherwise?
But it's also faster to compile and use as a developer. And by runtime I mean there's no JVM. If you're used to Java you should probably just stick with it, but using Go is a huge relief for me.
I'm happy to entertain the idea that Go is a sweet spot for many, but please let's not pretend language features are universally bad. After all, Go benefits enormously from a garbage collector and its language-level support for Hoare's CSP. Two very important features that e.g. C and C++ do not have (they do, however, have structs, for loops and if statements).
IMHO Go certainly chose wisely when it left out many OOP features of questionable value. And perhaps in doing so, it avoided the whole OOP culture, which has produced a lot of over-complicated software and tools. The culture of simplicity in Go is a really good thing and worth defending. However, we should be open to the idea that one or more additional language features might help to further this goal.
The nice bit about Go is that this maturity is baked into the philosophy and thus the language, the tooling, and the ecosystem where it benefits even younger developers. For example, your own hard-earned wisdom is great but it doesn't automatically make your language's package manager (or any other software you depend on, including libraries) better. But when that kind of wisdom is idiomatic, everything is better even when it's built by more junior developers.
Oh, I'm not necessarily suggesting that my lack of curiosity about new language designs is a good thing. In fact I think it's kind of a mixed bag. I'm only clarifying because I don't want to seem to be implying something negative about younger developers.
I didn't think you were implying anything about younger developers, I was noting the rather significant difference between "I've matured as a programmer" and "maturity was baked into the language I use and its idioms, tools, ecosystem, etc".
Yeah, my point was that "matured" and "wisdom" are value-laden words and that's not necessarily the connotation I wanted to convey. Ideally I would get excited about both the problem and the tools. I completely understand the point you're trying to make about the ecosystem though.
> I used to be excited by programming language features instead of what problem I was actually trying to solve with programming. I'd spend hours condensing 10 lines of perfectly working code into 1 line of the most concise text possible (...) they should be impressed by what the program does for them, not what language features you used to implement it.
Interestingly, what made me go through similar evolution was the very language in which I was trying to do all those things, namely Scala. After a few years of trying to be "smart", I realized that the problem was usually bigger than the language.
So perhaps it wasn't Go, nor Scala, that helped us to our realization, but life and experience?
Our individual lives and experiences don't make the package manager better, though. Rather, Go's package manager (and its overall philosophy more generally) is good because Go was developed by people who had a lot of "life and experience". And it seems intuitive to me that someone who uses a language with a strong, mature philosophy would influence even more junior users.
I believe he has a point. Go was from day one trying to walk a different path. It was wisely dumb, pragmatic, simple (even though, yes, verbose on many fronts). It shifts your focus to external value rather than internal value.
Until your users realize that when a security vulnerability in a dependency is fixed, a system upgrade is not enough. They will have to either hope that you haven't moved on, or build things themselves.
But then, that's not a problem with Go specifically, that's a general issue with "static link all things".
Exactly. Which is why I don't get the "static linking is the best solution for deploying applications". You need to be able to do both. For example when building a C/C++ application you can decide if you want shared libs or static builds.
Go’s package and dependency system is actually the only reason I have never started using it more. The lack of true versioning, the unspoken assumption of GitHub, the package system, and all of the confusion between go mod, go get, go install, etc. just hurts my brain. I think Rust actually has this perfect. It’s clear: you just specify the name, version, and features in the Cargo.toml file and it is pulled from crates.io. No installation-time execution either.
As far as binaries go, I think Rust beats it here too. Take path separators, for example. It’s completely possible to have a binary in Go that uses entirely the wrong path separators for the OS. Rust forces you to handle this.
There is versioning through Git tags, which I don’t consider true versioning because you can always edit those tags to point somewhere else or remove them entirely. This is breaking. I suppose there is not an assumption of GitHub, but rather Git. I probably got that impression from all of the Go documentation using GitHub as an example (perhaps to seem more familiar to readers).
No, I _like_ crates.io because Rust is very explicit that that is where the packages come from. We can get into the benefits and risks and whatever about centralized vs decentralized package managers, but that’s not what I’m talking about. I just want to know how Go does what it does so that I can code confidently in it. A lot of Go packages are specified like so:
go get -u github.com/username/repo/version
But this is not enforced (you don’t have to specify a version), this is also not a valid Git repo path, and where exactly is this dependency stored? I can go on and on. When you import this package it’s usually
import “github.com/user/repo/version/module”
But then there is not always a <module> directory in the repo and… you get the idea. This sort of stuff hurts my head, and I get that Go is maybe trying to appear simple, but this black-box handwavey stuff can and will burn you (this very reason is why the Go community is generally against frameworks). I just want a simple, clear answer, and it seems no one else in the Go world has any of these questions. This turns me off.
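For what it's worth, with modules the flow being described is roughly this (the repo path below is the hypothetical one from above, not a real package):

```shell
# Create a module; the module path is just a name, often a repo URL.
go mod init example.com/hello

# Fetch a dependency at a specific tag. go.mod records the version,
# go.sum records a content hash, and the source is cached under
# $GOPATH/pkg/mod rather than inside your repo.
go get github.com/username/repo@v1.2.3
```

Omitting the `@v1.2.3` resolves to the latest tagged version, which then gets pinned in go.mod, so subsequent builds are reproducible.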
I’m confused by your last bit. Neither Rust nor Go have installation time execution and this is good. I was referring to how you can’t just say “Oh Node has it and this is bad but Go doesn’t and this is good” because Rust doesn’t have it either.
Re: tags, that’s true but builds will detect this and break (since the hash is encoded in the sum file). No one should move tags, ever (for many reasons), and Go is simply requiring this.
Re: git, there is no assumption of git. Go also natively supports svn, hg, bzr and fossil. You also have the option to vendor things.
It’s fine to favor crates because it’s closer to what you’re used to, but I think you’re just complaining about ergonomics. They both have semver and implement satisfiability in similar ways (AFAIK).
It seems like the source control mixin is what’s causing the most frustration, and I get it — that’s where you have ergonomics you’re not used to — however, this is also where Go shines since it essentially gives you supply chain integrity without the need to trust any users uploading code (module owners) or third party vendors (crates).
What is the question for which you are looking for a simple, clear answer? Honestly, the documentation is quite excellent [1], but I’m happy to do my best to point to the best place for answers.
I’m not sure I follow. I assume “as it should be” refers to crates, with no mechanism for name ownership other than first-come, first-served.
You name any Go package based on a URL you control (and you can refer to any underlying source control by returning appropriate metadata from that URL). Basing package names on URLs is an arbitrary decision, but it seems like it at least solves that problem. Personally I’m much more a fan of that, since otherwise you need to put far too much faith in a single provider (crates), and enable a whole class of attacks and ownership squabbles. Better to just punt this to the already-solved DNS and web ecosystem. I just can’t agree that a single centralized repository is “as it should be”.
Coming from other languages, the most interesting thing about Go for me (in my limited experience of a few other languages) was all those things they left out:
No inheritance - no more digging through the massive world-tree of objects to find the code that actually does things.
No churn - I have not seen a Go update break my code in about 8 years of use.
No complexity - I like the culture of simplicity and eschewing dependencies in favour of writing the minimum code required.
No dynamic libraries - deployment is easier and apps more stable
No declared interfaces - they are defined at the point of use, not declared elsewhere
No header files - why C++, why?
No implicit type conversion - of the kind that plagues JS (see WAT), this rarely makes it more verbose.
There are of course a few gnarly corners - nils, errors, panics, struct tags are not very satisfactory IMO at the moment.
There are also lots of great positive things about it – the GC, tooling, fast compiler, stdlib and docs are a great example for other languages IMO.
I'll be really interested to see where they take it next, while hopefully keeping the culture that has made it so pleasant to use. Thanks to everyone working on Go from me, and here's to another 12 years of Go.
"No inheritance - no more digging through the massive world-tree of objects to find the code that actually does things."
You still have interface methods? I had problems navigating a new codebase and finding the places "actually doing things". Some object was passed in somewhere which mysteriously implemented a one-method interface defined on the spot in another Go file. I.e. the implementation had no obvious relation to the interface, so it took a lot of searching to find the right implementation.
Also, passing in functions (callbacks?) all over the place is a little messy, at least for a newbie. It was hard to find where the functions were called, at what point, and how the program flowed.
(this is hard to explain but maybe somebody gets the point...)
edit: somebody else touched on what I also meant: " duck typing make refactoring and understanding new codebases error prone"
Also, the module and dependency management surely is a joke? Pulling stuff from GitHub willy-nilly? Quite bad in any case when compared to Maven, where you have a local repository with all the dependencies (so they don't change or disappear from the internet, and you can actually build your software 5 years later, in exactly the same way).
There are certainly ways to write confusing code in Go too, no inheritance is just one cause of confusion subtracted.
IMO interfaces are best used sparingly and in a minimalist way like io.Reader (but I prefer them to Java interfaces and have not found them confusing nor felt compelled to find all concrete types conforming), callbacks are best avoided if possible, and yes dependency management has only recently improved.
I really do think Go has a discovery problem. Function signatures hardly tell you anything: does this need something deferred, what types does it implement, etc.
> No inheritance - no more digging through the massive world-tree of objects to find the code that actually does things.
That's not 100% accurate; as a concrete example, tell me which files (to say nothing of the actual downstream types!) contain the implementations of this interface method: https://github.com/kubecost/cost-model/blob/v1.88.0/pkg/clou... (err, without using GitHub's fancy new SourceGraph-lite integration, of course; that'd be cheating)
I find the sibling "No declared interfaces - they are defined at the point of use, not declared elsewhere" similarly suspicious, but suspect we're having a nomenclature mismatch
> That's not 100% accurate; as a concrete example, tell me which files contain the implementations of this interface method
But why? I've often wanted to know what a method does in Ruby and have had to resort to .method(:x).source_location because it is so dynamic only the compiler knows once it has finished running it. I've never had to find all the places that conform to an interface in Go (a very different question) or Java, because that's what an interface is for, so that you don't need to know, and new people can come along and conform too: your interest should be limited to what they can do, which the interface tells you already. Maybe this is a problem in getting to know a large complex codebase I guess? I've never encountered it in the real world.
The second point is linked: interfaces are used the way you describe in a few other languages, Java among them (find me all the implementors of x), but not in Go. The whole point is you have no idea who the implementors of an interface are; new ones will arrive, and that's ok.
It's certainly possible to write bad, confusing and enterprise Java flavoured code in Go, but the lack of inheritance at least does away with a whole bag of hurts related to overabstraction.
> your interest should be limited to what they can do, which the interface tells you already
That's a very idealistic perspective, and my sincere congratulations that every codebase you've worked with so far has been so great as to completely and unambiguously document every edge case and pre/post condition.
If we stick just to the cited example:
Features() string // Features are a comma separated string of node metadata that could match pricing
great ... so I'm guessing if there are no such features it returns "" then, but otherwise it ... just contains the metadata keys? key=value as CSV? It escapes any "," found in the values with \\, does it? What's an example? Well, normally I'd go look at the implementations but in this awesome world of fully specified godoc I guess I shouldn't worry myself with such details
I'm almost sorry I replied to this, because we are clearly living in such different universes, but I am actually genuinely interested in reading the URL of the godoc you've experienced that is so perfect that one need not ever bother with how many disparate implementations there are
I sincerely have never had a problem with interfaces and never wanted to answer this question, but I suspect I use them far less than a lot of people who seem to use interfaces for almost every function argument in order to use them for testing or out of habit (or use code that does).
This interface is not great as it asks for ambiguous info and three methods.
I'd look at the docs, and what the callee does with .Features() for guidance (which in well written code will usually be in the same package as the interface, ideally the same file), not what one implementation happens to do - otherwise what's the point of the interface as you're coding to one implementation?
So I guess my answer is a variant on 'No true Go programmer would use interfaces this way', genuinely sorry about that, as you say we probably live in different worlds - I don't work on kubernetes or things inspired by it. If this is a big problem for you though, I reckon tooling could solve it for you, as you pointed out.
Most interfaces in Go code have exactly two implementors that will ever be plugged in: the production code and the mock. You want to be able to delve down through the layers of production code, e.g. handler to controller to gateway, to trace what's going on in an RPC request.
They do have a few little DSLs, which I dislike: struct tags (optional, I prefer to avoid), magic comments which provide build directives (this seems icky to me, but avoids breaking Go1 promises I guess).
I don't want to be too contrarian, but I'm not aware of any library which uses struct tags to encode anything that anyone might call a DSL. At most, they're used for key-value pairs (e.g., `foo:bar`), which is pretty easy to get one's head around. I don't use build directives and ideally we wouldn't need them, but most (all?) mainstream compiled languages have them. Maybe the complaint is that they don't get their own dedicated syntax, in which case I don't see the issue beyond "in $MY_PREFERED_LANG, build directives have their own syntax, and that's how I like it!".
In all cases, these issues seem positively trivial compared to "you can't parse JSON or send an HTTP request until you learn some DSL or cargo-cult someone else's build file".
"I'm not aware of any library which uses struct tags to encode anything that anyone might call a DSL."
There's a couple of struct validation libraries that got IMHO a bit overexcited with what you can jam in a struct tag and then interpret, but they don't seem to have gotten very popular, so it doesn't factor into the language much.
Have a look at some of the bugs related to struct tags (420 open ones) - they can control marshalling and have lots of little directives in them like omitempty,attr,-,set plus they combine too (for xml,json,asn1 etc) and you can stuff your own little language in too if you want, the possibilities are endless! They are a set of limited yet unvalidated translation DSLs stuffed into a string.
Personally I think the language would be better without them, but it's too late now.
Re build directives - the complaint is they are comments which do things and change code/compilation, the syntax I don't care about, but I do care that they've abused comments to do this.
I agree, these problems are pretty trivial, I don't lose sleep over them.
To me, the greatest advantage of Go is the build time, which is almost instant. So you get the joy of programming like in Python/Js, being able to test very quickly your code, without having to deal with stupid errors coming from type mismatch or function parameters.
Plus, you get the performance of a compiled language, with a simple syntax. Sure, you get slightly better results with C/C++/Rust, but then you deal with complicated OS-level calls, memory management, and library deployment issues (problems Go fixes). Yes, a few benchmarks show C#/Java almost as fast, but at the price of highly optimized code you won't see from the average programmer. The average developer gets superb performance with Go, without complications, from day 1.
And the absent object-oriented features are quickly forgotten.
I think a lot of that is that Python doesn't launch a server that handles code analysis/formatting requests, so every time you want to do another round of things-to-do-on-save, they all have to be done more or less from scratch (some info can be cached between runs, but if you're launching a Python process, that's like 100ms right there...). I imagine if there were a Python code server that handled formatting and type checking without ever shutting down, it would be quite speedy.
I have some sort of Python language server running with Neovim to highlight type errors. It is definitely not fast to surface errors, but being asynchronous it's palatable.
C#/Java are almost as fast, but not for immediate execution environments. If you have one off invocations of the code, Go executables will beat the pants off Java as you wait for the JVM to fire up and for HotSpot to kick in.
I write a variety of CLI tools for my own use in Java.
On this low-end Chromebook (Samsung Chromebook 3 with a Celeron N3060 @1.60GHz) I can start the JVM, load the classes for my CLI program, and print a help message,
jpavel@penguin:~$ time rcr -h
Usage: RCloner <config path> get|put <entry> [args]
RCloner <config path> list
real 0m0.316s
user 0m0.208s
sys 0m0.126s
in less than 1/3 of a second. And this is with stock openjdk version 1.8.0_302, which doesn't include the startup time improvements of Java 10 & 12.
Sure, I wouldn't write utilities meant to be piped one into another, but for standalone CLI tools, the JVM startup time isn't prohibitive at all.
I've heard smart people argue convincingly that the JVM doesn't actually take long to fire up, and that this is something of a myth. It may take a bit of time for the JIT to optimize, but that shouldn't impact basic CLI tools (un-JIT-ed code is basically just interpreted code, and that runs quickly enough). I'd be curious to hear from people who have looked into this.
For smallish programs, a recent JVM will start up in 100ms. This is enough to make some CLI tools (like eg. git, with many separate invocations that return almost instantly) slightly bad to use, but longer running CLI tools are perfectly fine with it. There is also Graal to AOT compile classes and that way the startup time is truly negligible.
What actually increases JVM startup time is class loading, as it has to do byte code validation and the like for each new class. This is only apparent for largish frameworks though.
As for JIT, there is not much point for short-lived programs. Like, most Python scripts are run without one and people write prod code in it.
I’ve only picked up Go recently and I must say that the beginner experience is superb. Way easier to get started, learn new concepts and put them in use compared to some other languages.
So it's not a video lecture on Go specifically, but I first started learning it because I saw how it looked and how fast it ran while watching a Dynamic Programming series that happened to use Go for its implementations.
And I've been using it and loving it ever since. There are one or two things that trip me up (slice manipulation can sometimes get me if I'm not paying close attention to my capacity and just pointing around rather than copying), but for the most part, it's a fairly elegant language, and amazingly fast! For me, it's also been incredibly intuitive to start working on asynchronous features.
The new inclusion of Generics is just icing on the cake.
I can attest to the fact that programming in Go taught me a lot of good engineering. I certainly find myself better trained at handling errors, for example.
However, I always feel exhausted when implementing real-world systems with Go, given the lack of $things (whose absence everyone feels is a virtue). Over time I realized that a lot of people just aren't as lazy (or scared of breaking things) as I am: they find it easier to copy code dozens of times and then fix it all in one go using their ninja refactoring skills. Just a different kind of tradeoff, in the end.
So given how laborious it is to make anything more than simple programs (because of the amount of copy-paste-driven-development required), I generally avoid Go unless I'm writing a webservice.
For me, Go's virtues are static typing and a wide variety of community packages to do things. The whole "oh, not having features is a feature" thing is just an opinion for the sake of having an opinion. OTOH, I'm damn excited to finally have generics support soon :)
If you are or your coworkers are constantly copying and pasting code, stop, and think a bit more. I do a lot of Go development and almost never copy and paste anything anywhere. (And even in those cases it's usually justified. I just a few minutes ago copied and pasted a big struct... but it's because the first struct was defining a JSON message, and the second struct was defining a very similar, but not quite identical, JSON message in another file. This doesn't seem like a big deal to me, because it's defining a separate external data format and while they are superficially similar they are not the same and really shouldn't share code.)
Either you're not using the tools Go has to their full effect, or, possibly, you shouldn't have chosen Go. But I make that last concession not because it probably fits your case, but because it is technically true. (In particular, don't take Go for heavy math code where you need a type system that is ready for lots of mathematics.) But it's probably not applicable.
You should not be copying and pasting all the time.
There are several communities; I happen to hang out on the reddit /r/golang. If you've got something that you'd like to see how to refactor to not be copy and paste, consider posting a question there (ideally with a running version in the playground of whatever you're asking about). It is true that not everything can be improved, but most things can.
I agree with the poster you're replying to. The large golang projects I've been involved with have been extremely tedious to work on, even on projects started from scratch.
The language is extremely weak (it's not expressive), which translates to overly verbose code that is difficult to traverse. Logic that can be expressed in a couple of lines in Java becomes 10+ lines in golang, with code scattered everywhere. Not to mention that golang lacks enums or sum types (the latter have now made it into Java), which are a huge safety and productivity booster.
I work in Go all the time. While there are occasionally things that come out a bit more verbose, and they may stand out in your mind, if you are always writing things that much more verbose, stop, think, and make sure that there isn't some way to do it in Go correctly. Because there usually is.
The problem is, a lot of people are used to working in languages which have an abundance of features, so when they have a problem, they have learned to reach for the feature that solves it. In Go, you have fewer tools, but they are sharp, and generally well-chosen and work together well.
"Go doesn't have the feature I'm used to using to solve this problem" is not the same as "Go doesn't have a decent solution to this problem". If you are constantly copying and pasting or spreading things out in a way you don't think you should have to, run through the tools that Go does have again. There are several techniques that aren't necessarily the first things you'd reach for coming from another language, but work just fine in Go. This is not a complete list but it gives several examples of such techniques: http://www.jerf.org/iri/post/2945
There are some things it really can't do. (Again, I just can't understand the people who are trying to jam their mathematical code into Go. It's just so bad at that.) But that set is smaller than the critics think, because they confuse missing features for missing solutions. It's just another variant of "don't write X in Y", which is never a good idea.
And again, I invite you to post any such issues you may have to /r/golang. If I happen to pick it up, I won't be afraid to tell you straight out there isn't a good Go solution to that. I've done it before. But that happens less often than you might think. I also remind you my goal will be to come up with a good Go solution to the core problem, not "the closest approximation to the feature I expected to use" or anything like that. Write Go in Go, not anything else.
Sum types with exhaustive checking, and records are two extremely serious omissions in golang. Several times now we've run into issues because of this in the current project I'm working on.
Congratulations on those 12 years! Generics and supply-chain security seem like a very nice focus for the next year. That's one point where JavaScript seems to be stagnating a lot, so I'm glad other people are taking it seriously.
12 years and still no proper error handling. World stars. ;) Seriously, the error handling is a big problem with Go. Not the way it works, but the way it affects how people do control flow in general.
Error handling is easily one of my favorite things about Go.
No 'oops I forgot to put in a try/catch and now my code died with no explanation'.
No 'I forgot a finally ( or didn't realize I needed one ) and now I'm leaking resources'.
No 'should I return Null or false or an error code?'.
No '20 log messages for the same error because every function is reporting it'.
The Go model
- Return err, always as the last parameter
- When you need more context, add it to the error before you return it
- There is a calling function that reports errors. Maybe main(), maybe a subscriber, maybe a goroutine. There can be other situations of course but the point is that there's a clear ownership. If someone is logging errors in a higher up method, they should be prepared to explain why in code reviews.
> No 'oops I forgot to put in a try/catch and now my code died with no explanation'.
This is so backwards. First, I want my program to immediately crash if I have a bug. Go is like shell in that it keeps going even if there's an error. Second, I'm used to getting stack traces, Go is the language that will give "no explanation" by comparison if there's a failure.
In practice often people let the exception eater in main handle it, and it either logs somewhere nobody looks or returns this sort of generic error and exits.
I don't know what scale of projects you've worked on. When you're dealing with 100K requests/second, you very much do not want to immediately crash. It is absolutely the worst thing your code can do.
1. You have something that catches and logs all uncaught exceptions.
2. Defer is nice. As easy to forget as using in C#.
3. Always null/nil. Uncle Bob is plain wrong here.
4. Stack trace. But you should keep things wide and shallow. No matter what technology you use.
In Go it is about as easy to swallow errors as with try-catch. But my post was more about how all the if-statements generate more unnecessary if-statements.
The if-statements are not unnecessary if you want to handle errors in place and make the code highly readable. Just because you don't like it doesn't make it wrong.
These are great designs that have been completely ignored by developers in every large scale codebase I've ever worked on.
"That's the developers fault, not the languages"
Look, the old grey mares of the programming world are doing the best they can. I'm not going to blame C because its developers usually make it up as they go.
But we've learned things over the past 50 years, and one of the things we've learned is that company defined 'best practices' and code reviews can not catch all the mistakes that developers make and all of the things that can bring a service to its knees.
Go has the benefit of experience. It knows what mistakes developers make and it is going to prevent them from making them. There is a Right Way To Handle Errors in Go.
How many times have I seen uncaught exceptions? I saw one a week ago.
How many times have I lost resources because I didn't realize something I was calling could throw exceptions? Dozens.
Null/Nil exceptions. I couldn't count the null pointer errors I've had to fix in my time. Probably in the hundreds.
"should" keep things wide and shallow - somehow when Java 8 came out all the Lambda programmers lost this message.
None of these practices scale. We know that because we've seen it
>But you should keep things wide and shallow. No matter what technology you use.
This is literally the exact opposite of what I believe, and the bane of my existence at work. Wide and shallow code is spaghetti. Too many APIs = continuous confusion and relearning.
I agree with this. What I like about the Go method is that all errors are equal -- they look the same if it's a stack of microservices running at different companies, a bunch of goroutines communicating state with each other, or if you're just calling a local function that fails. It's the same every time, and the programmer can cut through the noise and make a complicated error from a complicated system simple, concise, and easy to understand.
People seem mad that they have to plumb around `err`, and that they have to provide useful context with fmt.Errorf. I see that as letting good programmers make good systems. The default in other languages is useless -- line numbers and arguments is all you are allowed to have, and when something goes wrong it takes up your entire screen with noise. Not as good as everyone thinks it is.
Go doesn't have platform-specific ifdefs sprinkled throughout the source. Platform-specific code is separated into files guarded by build tags as recommended in 'The Practice of Programming'. Compile-time control flow is not mixed with runtime control-flow.
What you learn at school when it comes to programming is to avoid if-statements for control flow. It is error prone. If-statements are mainly used for various types of guards.
And what you learn after a few years in the real world is that, when school tells you "this is the right way", they're almost always wrong. Or they're at least wrong in many circumstances. The real world is a lot more complicated than they teach you in school, and the correct answer is almost always "it depends".
Should you avoid if-statements? It depends. What are you going to have to do instead? You're going to have to do something. Is that something going to be more understandable for your co-workers for the lifetime of the code? Maybe, depending on your co-workers. Maintainability over the lifetime of the code far outweighs any "should" they tell you at school.
'if' is the epitome of control flow (next to looping). It's fundamental to computer programming. It's honest. Don't be ashamed of control flow, don't hide your control flow as if it didn't exist.
I really think Go’s opinion here arises from a sense that some abstractions of control flow should just be verboten for concurrency. For example, exceptions dumping a stack trace in a concurrent program is often OK and occasionally extremely cursed.
>>the way it affects how people do control flow in general.
This.
Most people are used to thinking in terms of try/catch, with programs exiting on uncaught exceptions. Like garbage collection, it's one of those things a modern programming language should just do.
The analogy is a bit like automatic transmission cars. As much as you can sing the praises of manual transmission cars (manual control, mileage, etc.), everyday programming is like driving on roads with heavy traffic: whatever perceived poetic beauty a manual task offers, the automated one works better when you have to do it for hours every day.
In many ways a language without try/catch semantics these days is dead on arrival for most shops.
Plenty of great initiatives listed here. Integrated supply-chain security tooling is going to be increasingly important. For Java we use third-party tools to identify vulnerabilities in libraries.
Congrats on turning 12! I started using golang 6 years ago and it's been so easy maintaining our projects and upgrading to the latest versions. Cheers to the team.
The thing about Go that pulled me in around 10 years ago was that I could inspect the source code of an open source Go project on GitHub and actually tell what was going on. And on top of that I could easily cross-compile and deploy binaries in a way that blew automake/CMake away. And of course the goroutines... no more pthreads. It's a real case study in opinionated language design.
I'd guess some of the contenders for that crown are docker/containerd or kubernetes, but (obviously) the answer will depend a lot on the "we" in that sentence.
1Password is also a heavy golang user, as a more silent entry into that race
Go is my go-to language when I can get away with it (otherwise it's RoR for me). The only thing I don't grok, besides the aforementioned filter/map/reduce (probably coming soon), is the name itself. It is really hard to Google by itself, so now I always default to "golang".
Would you say Go is suitable for web services and apps?
I was trying to compare it with a rapid development framework like Rails, but felt Go web frameworks aren't as mature and ready as Rails.
Any insight would be appreciated, thanks!
I've used Rails extensively before writing some larger sites in Go. Quick comparison of Go vs Rails:
Pros: Better performance, an extremely capable multi-threaded web server built in, culture of simplicity and low dependencies, no breaking changes, goroutines, ideal for small services, static binaries (all deps included) so no need for rbenv, docker, etc., no included JS, great stdlib (far better than Ruby's IMO), NO INHERITANCE.
Cons: No standard structure others are familiar with, you'll need to find/write code for things like migrations, auth, rendering views, skeleton code generation.
For things you need to find/write this may seem intimidating, but it's an opportunity to explore the bits you liked about Rails and jettison the bits you didn't like or need, and the stdlib includes a lot of what you need at a low level (e.g. html templating, crypto libs). I cannot emphasise enough the importance of no breaking changes; it's so refreshing compared to other language ecosystems like Ruby or JS.
Overall I'm really happy with Go for web apps and would choose it again in a heartbeat, particularly over Ruby and Rails (which I also like, but has performance issues and cultural issues IMO).
Not really, no; the lack of generics has hampered this, as you can't easily return slices of arbitrary objects from an ORM, so existing ORMs are pretty ugly. One way round this at present is code generation.
From your comment I think you're talking about a query builder/executor though, which is a specific part of an ORM; they're pretty simple to build if that's your thing, and there are a few examples of that out there. You're really just building up an SQL string, storing params, adding a few helpers for things like joins, and then executing.
I personally use a query generator to generate consistent queries, and generated code per resource to create models from results.
There are ORMs available for Go, but my experience has been better with packages like sqlx [1] or dat [2]. We've since been using sqlx for pretty much all DBMS-related work, and regret nothing. In my experience sqlx gave us the right balance between abstraction and control.
Definitely suitable. Depends if you want to work within a framework or not.
If you want a framework, use Rails/Django/Node as they are very mature. Huge plugin ecosystems, great for prototyping non-trivial features like authn, because a plugin certainly exists for it.
If you want more control (fewer dependencies), roll everything yourself with Go. The built-in libraries are fantastic, you can spin up a CRUD app without any external libraries. There are also great third-party libraries, but I would stay away from framework-y stuff like GORM.
Thanks. How about a framework which has all these things stitched together already? For example, a good templating engine and an ORM that supports DB migrations out of the box?
Do we have one for Go?
But it is still a framework, which means it is making a lot of decisions for you and forcing you to learn its own way of doing things. Ultimately it is a relatively thin layer over builtin Go libraries like `html/template`, `net/http`, `database/sql`; generally speaking, the Go philosophy is to build systems yourself from these components and therefore keep the architecture simple, focused, and maintainable.
Unless you are stuck with Go for a good reason, if you want a framework use one of the industry standard ones mentioned previously.
AFAIK nothing has the plug-n-play library availability of Rails, as far as things that are highly relevant to stitching together "web apps". Not even Django, which may be its closest competitor. That single aspect of it is so good that even though I hate Rails (but I think Ruby's alright—go figure) it's hard to recommend against it for some projects.
I only use Go for two things: CLIs and web services. There are a lot of frameworks out there to help you develop web services in Go, but what really impresses me is that you don't really need them - you can get surprisingly far just using the standard library.
You should also consider Elixir with Phoenix, as it's a comparable experience with Rails (although not as many libraries) while giving you much better performance. And Phoenix LiveView is just an incredible productivity boost, there's nothing quite like it.
I can't believe they are moving forward with "generics". Programmers on average already tend to make things more complicated than they should be. Now every single module in the ecosystem is gonna use more abstractions, generics, to fit their social environment. It's well known that abstractions are the devil. How many modules are gonna use generics when they should not? Most? Programmers are bad at programming and they are social, herd-like beasts - they are human. Their code must follow the social norm. The consequences of that could be terrible for the language.
I don't understand why they are moving forward with this. Go is absolutely awesome, it's a gem; adding generics is too risky. This is such bad news for me (I didn't know). I'm truly affected.
If you want to write unmaintainable code in Go, it's already very easy; just use interfaces incorrectly. Go lets bad programmers write bad programs. If you have a solution to that problem, your programming language will be the one that kills all current programming languages.
Generics will be similar; people will misuse them, and you'll curse their names when you have to dive in and debug it. But it will also let good programmers write very good programs and libraries, and that's going to be a huge benefit for everyone. As we've seen with interfaces, they can be misused, but they can also be used correctly. Just look at the standard library for examples of how they work well. io.Copy() was written once and can work on buffered streams, disk files, HTTP bodies, ... anything! That's a good use of interfaces. We're going to see the same with generics, and it will make a lot of people's lives easier and more enjoyable.
Everything you say reinforces my sense that this is going to be bad. Everything I read in the article about generics does too.
Example: first you agree with me that "it's already easy to write bad code"; well, then you agree it's gonna be easier. Your example about io.Copy: yeah, bingo, absolutely, io is the exception, the only case I know of in 25 years of programming that is made tasteful with generics. Actually it's the only case I know of in 25 years of programming where a diamond-like multiple-inheritance structure is okay. IoBase, IoBaseRead:ioBase, IoBaseWrite:ioBase, Io:ioBaseRead, ioBaseWrite. You get the idea.
Do you realise the language grammar is being modified? They are modifying Go's grammar. Man, they've gotta have some balls to make what is virtually a new language after 12 years of success.
I'm open. I am. But the idea is bad on paper, and now the signs are too.
So in your opinion, should Go also not have generics for maps and channels (as they effectively do today), or are those also one of a kind data structures that are worthy of generics?
The designers of Go recognized that generic type parameters are necessary.. it’s just that they decided to only bestow them on a few types in the standard library instead of designing a general solution.
> So in your opinion, should Go also not have generics for maps and channels (as they effectively do today), or are those also one of a kind data structures that are worthy of generics?
That's exactly the proper dichotomy. Those choices were made by great language designers, not by ordinary programmers. Generics are some type of macro; of course they are useful, in some rare, very sensible cases where the added complexity is really worth it. In the hands of programmers eager to show how smart they are, they are deadly. Code reviewers all around the world are gonna be like "why don't you use generics here?" just because they can. Bloated code, everywhere.
Go was a niche language suitable for 10x programming. It was modern C. Not C++, not Java, not C#, not F#. It was C. The extraordinary, unexpected return of C. You could write a modern server in C. You could write a modern application in C. It was amazing. It's now a victim of its success and might become an industry language. As I mentioned, the grammar is being changed; it's literally a new language that's being born. Go++ is being born. Why would you terminate a language which in 12 years has become one of the top 5 choices on the server?
This is not a sensible choice. This is not the choice of the hackers. This is the industry putting its dirty paws on a great piece of technology.
> Go was a niche language suitable for 10x programming.
At first I read this as "suitable for programmers who are 10x more productive than average ones". But they must be able to judge when/how to use generics.
Do you mean programming with 10x more code, or 10x more developers?
Elm kinda had such guardrails, but they are a hindrance, so it's all about balance (or letting them off purposely - which makes language implementation hard, as you need to compile half-broken programs). That's why I think interpreted code will win due to productivity gains, or we'll get something that compiles even faster than Go, or real hot-reload.
Generics are not that complicated or hard to understand, and most of the time developers don’t even need to interact with them, except to say what type of data will be used in a collection.
Woah there. No, it is not "well known", nor is it correct. Over abstraction is very problematic (and very common), true. But under abstraction is also problematic. There is a sweet spot, and it's hard to find. But the correct answer is emphatically not "no abstraction".
That's what I meant, in compressed form. Plus, we know you're better off erring on the side of under-abstraction in practice. So what I meant is: we know abstraction is where the devil lies, because in practice programmers tend to over-abstract to look smart. Generics being a case in point.
I think there are many very simple and understandable situations where having the option of generics gives you a lot of power. Like any programming concept, it's just another tool in the box, if you don't want to use it, don't use it. If you disagree with how others are using it, wouldn't that mean these programmers are bad in your view and you shouldn't be using their modules anyway?
Programming is always going to be complicated as long as the domain is complex. Go as is has traded having no learning curve for having nothing to offer.
This hasn't been my experience. Go is incredibly difficult to read because all Go code looks so similar, lacking intentionality or style, and it's so verbose that it takes a very long time to grasp what any given thing does in totality.
But code you didn’t write yourself yet still have to wade through is still harder to read and understand than code that was not required to be written due to a better abstraction.
Depends entirely on how long ago I wrote it, though.
Maybe it's a style thing, but I tend to have a lot more trouble with "WTF does this (leaky, inevitably) abstraction actually do and where does that actually happen?", when reading code, than with too much code.
It's really good stuff. Go is the tool that lets me make software, and not worry about bullshit. And that's revolutionary, even 12 years later.