All of the server backends at my company are written in Go. This was a result of me writing a couple of servers in Python a few years back and ending up with lots of problems related to hanging connections, timeouts, etc. I tried a couple of different server libraries for Python but they all seemed to struggle with even tiny loads. Not sure what was up with that, but ultimately I gave Go a swing, having heard that it was good for server applications, and I haven't looked back. It has been bulletproof from day one and I am overall happy with the development experience.
That was the good. The bad? Garbage collection, dependency management, and lack of first-tier support in various libraries. Garbage collection makes the otherwise lightweight and speedy language a memory hog under heavy loads. Not too bad, but I have to kick the memory up on my servers. Dependency management is a nightmare; honestly the worst part about it. The lack of first-tier support in various libraries is a close second. AWS's API libraries had relentless, undocumented breaking changes when we were using them, all on the master branch of their one repo (breaking Golang's guidelines for dependencies). Google itself doesn't actually have any real API libraries for their cloud services. They autogenerate all API libraries for golang, which means they're not idiomatic, are convoluted to use, and the documentation is a jungle.
We continue to use Go because of its strengths, but it just really surprises me how little Google seems to care about the language and ecosystem.
We run a cluster of P2P, GPU-heavy machines that use Go to ingest byte streams of raw radar data, store that info in btrees, and render, cache & serve map tiles that are drawn on the fly in response to HTTP requests. We are not using much outside the stdlib (opengl and gdal bindings).
Garbage collection has become very fast in recent versions of the language. It's quite painless now, and we did struggle with it in the past. I'd take the current GC over dealing with other strategies unless you are in a very specific, >60fps kind of situation. We literally don't think about the GC anymore.
Random stats from one of our busier nodes, over the past 20 min:
40GB currently in use, 391GB allocated (and freed) in the last 20min
average of 2.6ms GC pauses every 70.6s
95th percentile on GC time is 7.26ms
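If you want to pull similar numbers out of your own process, here is a minimal sketch using the stdlib runtime package (for reference only, not our actual monitoring code):

    import (
        "log"
        "runtime"
        "time"
    )

    func logGCStats() {
        var m runtime.MemStats
        runtime.ReadMemStats(&m)
        lastPause := time.Duration(m.PauseNs[(m.NumGC+255)%256]) // most recent GC pause
        log.Printf("in use: %d MB, allocated over process lifetime: %d MB, last GC pause: %v",
            m.HeapAlloc>>20, m.TotalAlloc>>20, lastPause)
    }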
For dependency management, there are a lot of tools that make things more automated and I think probably offer people what they are expecting/used to from other environments. This has been a sticking point for sure. I've settled on a bare-bones solution that seems to work well: just using git submodules in the vendor directory within the project repo. It does everything I need (pin to a version of a fork, build immediately after a clone and a submodule update --init). As of Go 1.5, the vendor directory is supported by the compiler, and it will look there first to resolve dependencies. While this does work with recursive dependencies, you will want to flatten the dependency graph as much as possible if types are being used across multiple deps, because types are distinguished by their full import paths. This encourages breaking things into modules, so I'm ok with it.
First-tier support is a problem I have no doubt, but not one we've suffered from personally.
I'd say that debugging comes up a lot too, and Go doesn't have an official, reliable option for debugging, though it looks like the Delve project is getting close. I personally debug with the stdlib pprof package (provides an HTTP server that prints stack traces for all goroutines currently active and allows cpu/memory/blocking profiles to be requested), print statements, general testing/experimentation and New Relic.
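For anyone who hasn't used it, the net/http/pprof hookup is a one-line import; a minimal sketch (the port is arbitrary):

    import (
        "log"
        "net/http"
        _ "net/http/pprof" // registers the /debug/pprof/ handlers on the default mux
    )

    func main() {
        // goroutine dumps and cpu/heap/block profiles are then served at
        // http://localhost:6060/debug/pprof/
        log.Println(http.ListenAndServe("localhost:6060", nil))
    }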
I don't have great measurements for this, but we have done optimizations to reduce the data flowing over the Go<->C interface. One of our key measurements that does make a big impact on performance is how often we need to upload data (that's not already over there) to the GPUs. So that's something we have worked on reducing (buffer reuse, compression). We also have a series of caches on the other side, so we aren't drawing more than we need to. It's hard for me to tease apart how much of these optimizations (and others) are ultimately aimed at addressing the cgo overhead, and how many are just typical stuff. The data we work with is cumbersome and my intuition is that there's probably a lot of room for optimization in our drawing even still, regardless of cgo. I wouldn't be surprised if a direct C/C++ port of the rendering pipeline was significantly faster than ours at getting data into and out of the GPUs, but a big part of the project is data storage/networking/serving/caching as well, and Go has bridged the gap for us (a small team that needs to build reasonably fast things reasonably quickly :)).
That's interesting. The cgo overhead was the only thing holding me back from considering it for games, since I didn't want to write a lot of C wrappers around the C libraries I want to use just to have them be more efficient, which is a shame, since Go is pretty nice, barring the C interop in some cases.
When I used to frequent gonuts, I raised the issue of why they didn't go the FFI route the way D, Rust, .NET, Delphi, and FreePascal do, but sadly they'd rather use cgo as the solution.
Yes, that is the purpose of the vendor directory. You copy the source of each of your dependencies into vendor/ and commit it to your own source control so it can't change unless you do so yourself. I personally use git submodules to manage the contents of the vendor/ tree, but the go compiler/toolchain itself doesn't really care how you get the files into there.
Edit: other comments below have mentioned some of the tools available to manage your vendor/ tree. Using git submodules manually can be crude, especially when adding dependencies that themselves have other dependencies.
We've been running splice.com on Go for 3 years now and handle 5TB of audio/binary data per day. Our memory usage is around 10-15MB per server and the GC pause time has been really low. You do need to stream your IOs instead of reading everything into memory. In regards to dependency management, we honestly had no issues, and now with vendoring it's even easier. We do use a main repo with lots of smaller packages and only a few 3rd party dependencies that are vendored or available via private git repos.
We don't use Google cloud but I heard they have 2 repos, one that has auto generated code and one that has hand written code (but less complete).
Yes, buffered I/O: you use readers to read/write small chunks at a time instead of loading everything at once. Go offers ways to do both, since in some cases loading an entire file into memory can be fine/better/faster.
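A minimal sketch of the streaming style (file name and handler are made up for illustration):

    import (
        "io"
        "log"
        "net/http"
        "os"
    )

    func serveTrack(w http.ResponseWriter, r *http.Request) {
        f, err := os.Open("track.wav") // hypothetical file
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        defer f.Close()
        // io.Copy moves the data in small chunks; the whole file never sits in memory
        if _, err := io.Copy(w, f); err != nil {
            log.Println(err)
        }
    }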
> We continue to use Go because of its strengths, but it just really surprises me how little Google seems to care about the language and ecosystem.
Go is certainly a language that is used at Google, but AFAIK a lot of "Googlers" don't really like it and don't use it. It's certainly not the "official language at Google", given the weight of C++ and Java there. But that's the consequence of being opinionated. Using Go means having Rob Pike over your shoulder telling you how to write code. And he made sure you can't escape that fact, since there is no place for "ninja coding" with Go.
Erm. StackOverflow was built by folk who used C# heavily, and cargo-culting is amazingly prevalent on both the C# and Java "traditional IT" communities, so _of course_ StackOverflow is biased towards those runtimes...
This slideshow clinches it for me. Go has some specific strengths that match low level, network infra types of problems. Beyond that, Go is not a good fit. This sounds like a criticism, but I don't think so. It's a compliment: it's a sharp tool for a specific kind of cutting. It's not trying to be some all-singing all-dancing language, which has gotten us all in to quite a bit of trouble.
I think it refers to the Perl-era idea that good code should be somehow "clever" rather than maintainable. It's how bad programmers who spend hours agonizing over how to reduce their line count (presumably to save disk space?) justify their behavior.
Personally, I think "cowboy coding" is the best derogatory name for this. IMO, what all of these ninjas have in common is that they don't realize that software development is a team exercise. Berkeley did a study on BSD and found that a file was opened 10x more often for reading than writing (i.e. people read code 10 times for every time they make a change). To my mind, "cowboy" conveys the proper amount of ignorance of the other people on your "team."
"Cowboy coding" already has a distinct meaning - it refers to writing code as fast as possible without concerns for technical debt.
The archetypal form of cowboy coding is the copy-and-paste: faster than any code reuse technique, but a booby trap for the future.
"Ninja coding", on the other side, refers to coding for cleverness' sake, at the cost of legibility and ease of use. No self-respecting ninja would simply copy-and-paste.
I totally agree with this sentiment. I think it's pretty well established at this point that in many cases clarity can trump optimization. Even more so when the optimization isn't performance-based but rather line-count or "code golf" based. Also, do you happen to have a link to that Berkeley study? I can't seem to find it with a brief search.
Or, it is how some programmers continue to find motivation and fulfillment in the activity as an intellectual pursuit, yes at the cost of maintainability, but as opposed to being good little corporate drones writing a zillion dull lines of kindergarten-obvious code in Golang and Java that even the lowest decile of programming dunce can understand. Which is what Golang is designed to do. That's what they mean by optimization for large code bases: dumbing down. That's not a criticism by the way because that's optimal for large teams in large companies with large code bases and no room for too much creativity. But programming as art/fun, where appropriate, is also not (yet) completely to be dismissed as stupid, IMO.
For optimal code maintainability, there is a happy compromise between excessively terse and excessively verbose code. There is a lot of code in the wild that's too spread out.
Well sure, but erring on the side of verbosity at least ensures that someone can follow your thinking as long as the code is well structured. I would prefer to read 25 lines of decent Python over 3 lines of regexes in Perl.
Sure, but at least I would prefer 25 lines of decent Python over 125 lines of overly verbose Java. Nothing is black and white, it is easy to make code too verbose and thus very taxing for the reader.
Have a look at Elixir and Phoenix. GC is per-process so no global slowdown. Extremely fast response times and uptimes. Many other features that mesh nicely with webserving (some courtesy of Erlang's VM).
And a completely opposite philosophy from Go when it comes to error handling. Erlang/Elixir embraces failure (and immediately logs and restarts the process); Go seems to ignore it (unless you explicitly check, which to me seems... insane. For reasons why failing fast is better than failing silently, read https://blog.codinghorror.com/whats-worse-than-crashing/ .)
But you still have to check for errors after every line of code where they are possible (if you're covering all the bases), no? Due to lack of a traditional exception model?
Elixir/Erlang at least have pattern-matching which makes checks like that (from functions that return a non-OK value on errors) basically inlined:
`{:ok, val} = call_some_func(with_args)`
If it doesn't match on the :ok (i.e., an error occurred), you get a match error which gets logged, and the process typically gets killed and restarted by a supervisor process in a millisecond.
Is it that bad now in 1.6 with vendor support? And a tool like `govendor` makes it easy to stick things inside of vendor.
> it just really surprises me how little Google seems to care about the language and ecosystem
To be blunt, Google's priority is Google, not the open source community or other companies using golang. Dependency management wasn't a priority because of their mono repo. Having said that, they are pretty good about improving things for everyone, but it will never be like a company such as Typesafe, whose product is the language and tooling itself.
I switched to Glide, a vendor-focused package manager, on 1.5. I think the community has already decided in favor of this approach. Post 1.5, Godeps' workflow just feels weird and unnecessary.
To be fair, Go's GC got A LOT better over the past few versions and it's getting much better still.
What I would really like, though, is to be able to write unmanaged blocks when I need them and know better than the compiler, rather than having to "run my own heap" on top of a buffer like I typically have to do in managed languages that don't have that opt-out.
That is, you do this if you want a block to run without the GC interrupting it. That is, it's a really high-priority (realtime, or close) block. When it needs to run, it's the most important use of the CPU. But if you demand that the GC makes progress, you're saying that it also is important. And it is, but it's less important than the critical block. If you don't say that, then you wind up with "everything is important", and that way lies madness.
But you may be able to do it. You need enough CPU bandwidth that you can run your critical sections and spend enough time outside them that the garbage collector keeps up. (And then, every time you add to your code, you need to make sure that the CPU still has enough time...)
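For what it's worth, the closest thing Go offers today is turning the collector off around a critical block with runtime/debug; a rough sketch, assuming you can tolerate the heap growing unchecked for the duration:

    import "runtime/debug"

    func criticalSection() {
        old := debug.SetGCPercent(-1) // a negative value disables the collector
        defer debug.SetGCPercent(old) // restore the previous GC target afterwards
        // ... latency-critical work; no new collection cycle will start here ...
    }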
What does it mean for a GC to "make progress"? My understanding was that the OP wanted to stop the GC for a particular code block, so it would seem that any "progress making" would be undesirable.
> Google itself doesn't actually have any real API libraries for their cloud services. They autogenerate all API libraries for golang, which means they're not idiomatic, are convoluted to use, and the documentation is a jungle.
Are you sure? Google offers 2 Google Cloud APIs for Go:
- Google API Client Library for Go, which is auto-generated like you wrote;
- Google Cloud Platform Client Library for Go, which is intended to be idiomatically-designed.
> They autogenerate all API libraries for golang, which means they're not idiomatic, are convoluted to use, and the documentation is a jungle.
Are idiomatic API libraries really a good thing? If an API is used heavily throughout your application, doesn't that mean it's time to wrap that part of the API in some idiomatic wrapper code that is meaningful in your domain?
Functions are idiomatic in all languages. Personally, I'd rather APIs mostly stick to simple functions, and let the application developer build idioms around them if they like. And if the API call involves a web request, I am happy to just make the HTTP call myself. Half the time you need to understand the interchange format to debug your code anyway. My experience is that wrappers rarely protect you from that.
I guess if your application is 90% calls to an API, and you use a huge range of different functions in that API then you'd want a good, complete library. But how common is that?
That's right. It's production-quality, but the API surface might change.
If you're OK with changing your code sometime in the future, then I'd recommend giving it a try for this or your next project. Changes will likely be minimal.
> Functions are idiomatic in all languages. Personally, I'd rather APIs mostly stick to simple functions, and let the application developer build idioms around them if they like.
Idiomatic APIs can be a real PITA, especially when you call them from a different language. KISS should reign supreme. (With bad 80's special effects in a made for TV movie filmed at a carnival.) If you leave it up to the client developer to build the idiomatic facade/interface, then the API is simpler, the code is more properly idiomatic in the end, and everyone is happier.
Just a quick note about dependency management: we use "go get" to download / update dependencies, integrate / test, and then just include them all in the project's git repo. Never had an issue.
Another major issue I have had with Go is database connectivity. The db drivers are really lacking for Go. Makes it tough for those of us who use Teradata or Hive.
Source: I've written many servers in both C and Java and given the choice will never use a GC language again for a server. Holding out some hope for Rust..
There are definitely some pain points building a server in Rust, but I think on the whole the tradeoff is worth it, and will become moreso as the ecosystem matures.
Presumably he is complaining about the GC implementation's characteristics, not its existence. You can read a description of the team's design decisions here: https://blog.golang.org/go15gc but a simple summary would be: memory is cheap and getting cheaper, so focus on making GC fast rather than small memory footprints.
And in my opinion, this is a great tradeoff. Ultimately, adding more RAM to a server (or clicking a button on the AWS console) is a very easy fix. However, reducing latency is almost never easy. If they can prevent GC stuttering, I'll take the higher memory overhead every time.
> I tried a couple different server libraries on Python but they all seemed to struggle with even tiny loads. Not sure what was up with that
Could the problem have been the "global interpreter lock"? The GIL makes it impossible for multiple threads to simultaneously perform certain basic operations, effectively making the system single-threaded.
Slide 13 (https://talks.golang.org/2016/applicative.slide#13) is interesting. I was expecting Go to be very close to C/C++ on the X axis (fast/efficient) as it doesn't use a VM, but it is closer to Java?
Ideally, Go should be pretty close to C, but currently there are reasons for it lagging behind (though there are also Go programs which perform better than their C counterparts):
- The Go compiler certainly isn't as sophisticated as the better C compilers. It looks like the code produced by Go 1.7 is quite improved vs. 1.6 with the SSA compiler, and I would expect further gains in future releases.
- The Go compiler does not offer optimization levels and overall is optimized for compilation speed too, so these tradeoffs could be limiting.
- While the Go GC is very good and improving, so that the pauses are very low, it does use CPU, even while running in parallel with the program. How much exactly depends very much on the allocation behavior of the program, but it should be accounted for.
It's true that Go compiles down to an executable and doesn't run on a VM like Java, but it does have its own runtime compiled into it, which manages the GC and the goroutine scheduler to name two, so there is definitely some more overhead to Go than just straight C/C++.
In addition, the language isn't designed for zero-cost abstractions like C++ is. You see this in the design of things like defer, which requires allocating records on the heap in certain cases, as compared to exceptions in C++ which can be implemented in a zero-cost manner.
After a well optimized implementation, speed comes down to manual vs GC memory management, static vs virtual function dispatch, dynamic vs static typing and heap vs stack allocation.
Compile speed is usually slowed down by any sort of type inference or templating/generics, which is probably one reason why Go resists implementing generics or subclassing.
Your rules of thumb are probably mostly related to CPU bound tasks. If database i/o is the bottleneck, then the differences across language probably don't matter much.
And, runtime performance isn't always the most important thing. In many cases, "time to market" or "cost of development" might be a higher order priority. In those cases, a higher level language might be more attractive.
Sorry, but I feel like I'm a little confused by your post. How can it be 6 times slower to program in JS or Python than C? Isn't this very much contrary to the conventional wisdom?
Dep mgmt, at least, is poised to improve a lot in the coming months. We've got growing consensus around some metadata files, and I'm nearly done with my SAT solver (github.com/sdboyer/vsolver), which will be in Glide, and maybe others, soon.
Go has been great for me at providing things like simple microservices, network plumbing, CLI tools and that kind of thing. The C integration is also super simple and makes it easy to wrap up third-party libraries.
It's also a bit tedious to write in practice. It's dogmatic, and that's obviously a benefit in some ways, but it comes with the cost that quite a lot of time, in my experience, is wasted fiddling around with program structure to beat it into the way Go wants it to work. Dependency management is better with Glide but still not perfect. The type system is quite annoying, and although it's a cliche, the lack of generics especially so. Lots of silly casting to and from interface{} or copy-and-pasting code gets old quickly.
Still, it's a great tool for its niches and I really think everyone should pick it up and use it - the idea of simplicity it promotes is actually kind of interesting, in contrast to the "showy" features one might expect of a modern language.
Last night I was beating my head against the desk on some highly concurrent code that involved each goroutine satisfying a simple rate limit, among other things. Two hours later I had it working, and the final implementation ended up half the size of the original (~1k LOC -> 500 LOC).
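A sketch of one common way to do that kind of per-goroutine rate limiting (the interval and work items are invented for illustration, not my actual code):

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    func main() {
        jobs := []string{"a", "b", "c"}              // hypothetical work items
        limiter := time.Tick(200 * time.Millisecond) // hypothetical rate: 5 ops/sec

        var wg sync.WaitGroup
        for _, j := range jobs {
            wg.Add(1)
            go func(j string) {
                defer wg.Done()
                <-limiter // each goroutine waits for its turn on the shared ticker
                fmt.Println("processing", j)
            }(j)
        }
        wg.Wait()
    }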
I would upgrade your phrasing to: "frustratingly dogmatic in an ok way".
> When writing code, it should be clear how to make the program do what you want. Sometimes this means writing out a loop instead of invoking an obscure function.
For example instead of the obscure function
a.reverse()
you can use the clear for loop
for i := len(a)/2-1; i >= 0; i-- {
opp := len(a)-1-i
a[i], a[opp] = a[opp], a[i]
}
A lot of the time it is clearer as an explicit loop, especially for other people to read. But it is annoying a lot of the time. I still think they should add generics.
I think one of the simple differences between functional and imperative code is just that functional code is less likely to name its intermediate values. Imperative code is probably going to put something in a local variable, which hopefully has a useful name, but functional code is probably going to chain a bunch of generic methods together.
This isn't a big deal when what I'm looking at is "students.filter(...MALE)" because that's obviously the "male students". But it quickly gets more confusing. What is "students.filter(... == year - 3)"? Is that "freshman students" or "sophomore students" or something else entirely?
Of course there's nothing stopping functional programmers from naming their intermediate results, and nothing stopping imperative programmers from choosing useless 1-letter variable names. But the two different programming styles seem to encourage different things.
The irony in your smug reply is, it echoes the broken leftpad mentality of javascript programmers.
Go is statically typed, and this either requires generics or a new built-in just for reversing an array/slice. And I can't see how a trivial operation such as array reversal is worth it, and there's just no end to adding such trivial operations.
If you need a slice with reverse, just add a typedef and define reverse on it; it's just a trivial loop.
What is it that you want to say? That array reversal is too much for mortals and better left to array experts?
Array sorting is both an important and a nontrivial subject, and its place in the library is justified (similar to the situation with C).
BTW, I don't understand why your for-loop runs backwards (which makes the code look awkward to me) or why you have to define opp in the loop body. Just define len(a)-1 as a variable in the initialization and be done with it.
Guilty as charged on the smugness, sorry about that. This reply is meant to be smugness-free.
My example above is the semi-official one from https://github.com/golang/go/wiki/SliceTricks. The `opp` expression contains a reference to i, and so it needs to be in the loop body. I'm not sure why they opted to do it in reverse though. (Maybe they don't want to compute `len(a)/2` every time, and don't want to use a temp variable? Not sure if it would get optimized away.)
In my head there are three reasons (apart from my own laziness) that I like having reverse() in a language's standard library:
- Code is read more often than it's written. A for loop like this takes time to read, if you don't already know what it does.
- Tricky loops hide bugs. If we forgot the `-1` term in the initializer, I'd probably miss it in a code review, and the bug would only show up for even-length lists.
- Expert attention turns out to be useful here! In languages that check array bounds by default, you can gain a little speed by avoiding bounds checks in reverse(). Rust does this (https://doc.rust-lang.org/src/core/up/src/libcore/slice.rs.h...), but I wouldn't want an `unsafe` block like that in code I had to copy-paste around.
Yes, you can miss anything in any piece of code in a review. No, this doesn't mean you should use someone else's code for trivial stuff a la leftpad. However, if you're saying that you don't have confidence that you can properly review an array reversal, then it's something personal and you're extrapolating, and maybe you shouldn't do code reviews at all (as harsh as this may sound, it's not meant to be a shallow insult; take a second and think about it in the context you've given: if you're going to miss that -1 or the fact that array indices start from 0 rather than 1 in Go, how can you expect anyone to trust you with reviewing their code that involves arrays?).
Using 3rd party code for "basic" stuff only makes sense to me when it's a tricky algorithm to get right (such as quicksort) and the author of the 3rd party code is a renowned/reputable top developer. I definitely don't trust the b+-tree implementation of some random dude on github (which is what javascript people mostly do).
No idea who wrote that piece of code, but you can simply define it without the -i part, and write a[i], a[last-i] = a[last-i], a[i] in the loop (and better not call it "opp"; whatever that means). That way, it's very readable and understandable to me:
for i, last := 0, len(a)-1; i < len(a)/2; i++ {
a[i], a[last-i] = a[last-i], a[i]
}
I still don't see anything in what you wrote that justifies adding a new built-in function for a trivial operation such as reverse, or generics to the language as a whole.
If profiling shows that bounds checking in reverse() is a bottleneck in your program (although it's very difficult for me to imagine that this will happen in a real-world program), then you probably still don't want to use that unsafe code that went through "expert attention".
While being a mortal without any specific degree in array handling myself, in the unlikely event that I find reverse() to be a bottleneck in my program, I'd rather write an assembly function which makes use of SSE2/AVX2 to get a real speed-up (note that LLVM can't autovectorize that Rust code for you; that code needs a rewrite if you want vectorization).
It has become the default Go-To (pun intended) language for me for almost anything that needs to be small and portable.
However, I don't see myself writing a full server with it; I would still prefer a dynamic language like Ruby/Python for that and use Go for micro-services, CLIs and the rest.
For example:
Our main application is Rails, it communicates with SOLR as the search index, and in between the application and SOLR there's a proxy server that backs up the documents to S3 and also does round-robin between slaves.
One other thing is that we use Go to communicate with all external APIs of 3rd parties, the application code is rails and it communicates transparently with a Go server that fetches the data from 3rd parties and responds to the main application.
Go's type rigidity makes Go code tedious to write. Instead of thinking "How can we solve that problem", developers writing Go end up thinking "How can we make the problem fit Go's type system". I'm not even talking about concurrency here, I'm talking about types. Saying otherwise would be dishonest, unless one has never used anything but C... Anybody who doesn't believe me just has to look at the reflect package. Reflection packages are usually a good indication of language capabilities when it comes to statically typed ones.
It doesn't make Go a bad language, it has some good perks; it's just frustrating that its authors conflated simplicity with rigidity. Also I hate when languages have hidden APIs, i.e. things the language can do that the programmer can't. Go is full of these (for instance append, which is a parametric function since it knows the proper return type no matter what type of slice you pass it, but you can't write your own?).
It's a good thing that it requires very little investment to get started, but it becomes highly frustrating when one stumbles on its limitations.
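To make the append point concrete, a small illustration (pre-generics, a user-written equivalent has to fall back to interface{}):

    package main

    import "fmt"

    func main() {
        // append is effectively parametric: it returns a slice of whatever
        // element type it was given
        ints := append([]int{1, 2}, 3)     // []int
        strs := append([]string{"a"}, "b") // []string
        fmt.Println(ints, strs)

        // user code can't express that relationship; the usual workaround is
        // interface{}, pushing the type checking to runtime
        var anything []interface{}
        anything = append(anything, 1, "b") // callers must type-assert on the way out
        fmt.Println(anything)
    }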
I've been writing Go daily for almost two years now and outside of wishing for generics a few times I've never struggled to fit a solution into the type system. I've certainly never considered going back to a duck typed language like Python or Ruby. Not once. Not ever. We have slowly replaced even our glue scripts that are written in Python with Go versions because maintenance and understandability trump any perceived speed advantage of writing something in Python.
As soon as a system reaches a given size, not having static types becomes unwieldy. Go's type system is great. Though my code still uses the var type declarations.
The further I get away from Python the smaller that given size limit becomes. After two years? It's at about 100 lines...
To me, the power of Go's simplicity is almost always underestimated by the language's detractors. I can look at code my team wrote two years ago and with a few <leader>gd's in Vim I know what's going on. Obviously Python fails this test, but even a high-level static typed language like C# can suffer greatly from all the magic one can invoke (Linq being a great example).
What it comes down to is that Go doesn't give you many ways to be clever. Younger me who loved template metaprogramming in C++ would scoff at this statement, but if you go back and have to reverse engineer your own cleverness enough times you really, really start to dislike the practice.
Yeah, I find that Go has influenced my code structure for the better in other languages. I'm much more likely to think carefully about a problem and consider alternative approaches for simplicity before just dropping a `template` keyword in C++, for instance. In Ruby, I am far less inclined to reopen classes. Etc.
Exactly what I tell people I like most about Go, even when there are magic things happening (looking at you, Kubernetes source code), it's not NEAR the level that a language like Ruby makes the code look like arcane magic.
Python is in between; I can find myself pretty damn well in a big Python codebase after a while, but Ruby... Ruby is a pleasure to write at first and to get lost in later. I've done too much of it to like it; all of the metaprogramming is gonna come back and bite your mind chunk by chunk when the codebase gets big enough. Few times have I been more frustrated in my life than when debugging Ruby and thinking "where the hell is this function defined?" only to find out it was some sort of generator stuff spitting out code.
All languages require investment. Zero sum game, with some opting for syntactic simplicity that entails a long term investment of "idioms", subtle semantics, post-processing, etc., and others present a high initial investment and subsequent clarity, regularity, and possibly robustness. C/Go/Java/C# are in the former category. Scala, Haskell, Rust, for the latter.
All of my lambda functions are a thin Node wrapper on top of Go applications.
The go application is simply a command line accepting JSON via stdin (from the Node wrapper). It is a Joy to test, you can run it locally/independently etc...
Likewise. I use it for set-and-forget services. Once a Go application has been properly tested, it runs quietly like a mainframe in the corner of a datacenter.
Same experience here. Writing tiny, focused, small footprint services with Go has been a blast. It really has its niche. Especially since learning to write (and read) Go is a matter of hours, more than days (or even weeks).
I have been running multiple programs on my DO vps; my uptime is 305 days. One program was updated in Feb to accommodate an API change, another was updated in Dec 2014.
It's amazing how well they perform with minimal memory usage.
What about debugging? This is the major pain point for me. I've tried using GDB, but...
> GDB does not understand Go programs well. The stack management, threading, and runtime contain aspects that differ enough from the execution model GDB expects that they can confuse the debugger, even when the program is compiled with gccgo. As a consequence, although GDB can be useful in some situations, it is not a reliable debugger for Go programs, particularly heavily concurrent ones. Moreover, it is not a priority for the Go project to address these issues, which are difficult. In short, the instructions below should be taken only as a guide to how to use GDB when it works, not as a guarantee of success.
I've only used it for simple cases, so I don't know if it helps with your criticisms, but it was designed for Go. I mention this because it seems like many gophers aren't aware of Delve.
It is strange you ask this, cause it was one of my major issues with diving into golang.
The reality is that I don't miss it any more, for two reasons.
1. I have been forced to write solid, comprehensive testing. It's a pain, but the payoff has made it worthwhile.
2. Because you never find all the bugs anyway, instrumentation and logging has become part of everything I do. A framework like go kit can do this out of the box for you or function as a guide for you.
There are actually a couple of native Go debuggers these days. I don't remember what they're called off the top of my head, but the couple I've played with are very good, bordering on awesome.
There's also a go frontend to GDB that's okay, but going native is going to be a better choice in the long run I think.
What just happened?
In just a few simple transformations we used Go's concurrency primitives
to convert a
- slow
- sequential
- failure-sensitive
program into one that is
- fast
- concurrent
- replicated
- robust.
No locks. No condition variables. No futures. No callbacks.
It's the ability to make these kind of transformations effortlessly at any level, whenever I need to, that make me appreciate choosing Go when solving many tasks.
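For readers who haven't seen the talk, the replicated-query idea it's pointing at fits in a few lines of channel code; a rough sketch (not the talk's exact code):

    // ask several replicas the same question and take whichever answers first
    func fastest(query string, replicas ...func(string) string) string {
        c := make(chan string, len(replicas))
        for _, replica := range replicas {
            go func(r func(string) string) { c <- r(query) }(replica)
        }
        return <-c // the fastest reply wins; slower ones are simply dropped
    }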
I'm sure the author means there's no explicit locking done by the programmer, but readers should be aware that channels are actually implemented internally using locks (which are 4x slower than using a sync.Mutex yourself).
I found switching from channels to plain ol' queues to be an enormous performance improvement in a program sending hundreds of millions of messages, though I do agree in general that most programmers won't need to care about it.
The program was a financial model backtesting framework which I ended up rewriting in Rust because Go was simply too slow for what I wanted to do.
Funny, I've seen more locks in Go than in my time with most other popular languages (excluding C). This is probably because there's no built-in concurrent hashmap or generics to implement one, so it's common to see a lock accompanying every map.
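The pattern looks roughly like this (a made-up counter type, not anyone's actual code):

    import "sync"

    type counters struct {
        mu sync.Mutex
        m  map[string]int
    }

    func (c *counters) inc(key string) {
        c.mu.Lock()
        defer c.mu.Unlock()
        c.m[key]++
    }

    func (c *counters) get(key string) int {
        c.mu.Lock()
        defer c.mu.Unlock()
        return c.m[key]
    }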
> Also perl is apparently as fun for humans as python, which I also find suspect after maintaining perl 5.x code.
When people talk about a language being 'fun', they need to distinguish between writing something new for the first time vs maintaining something or figuring out someone else's code.
Yeah I suppose at this point the graph falls apart since even the prospect of writing new perl code does not sound fun to me but I am sure could be fun to someone.
Yeah, I wouldn't want to be writing new Perl either. It was actually Ruby's "optimized for developer happiness" mantra that triggered my comment.
Sometimes I mumble under my breath that Ruby is Perl for the 21st century - gleefully creative hackers leaving behind overly clever code for far less happy developers to deal with.
A culture of optimising the elegance of the API over the simplicity and maintenance of the implementation.
I disagree. With practice, you can write very clean Perl code that's easy to follow along because all the modules play along nicely. Ruby OTOH has a culture of gems doing all kinds of mysterious black magic that make it difficult for a maintainer to figure out the data flow.
Source: Maintained a 300 kLOC Perl application for 3 years, now working with legacy and new Rails applications.
Yes, you're right that it is in an early stage of development and doesn't yet support Windows (to be fair, even Rust added full Windows support fairly late in their development cycle to 1.0). Crystal is probably in a slightly earlier stage than Nim, though not by much. Both are very interesting languages that I think would plot in similar locations on the chart. D is also a lot of fun. I think all are competing in this same native compiled realm that seems to be popular right now.
Rust had "windows support" far before 1.0, though it was through minGW only for the first few years. MSVC support was added four days after 1.0 was released.
The whole axis is so subjective. It really depends what you are working on. For certain tasks you can even find C more fun than other languages.
I think what they were trying to show is whether languages are high or low level (i.e. how far they are from machine code). In C, for example, for every statement (except maybe switch) you can guess what assembly code it would translate to. Not so much with languages on top.
I will have to try Go again. It seemed really awesome at first then quickly seemed like a regression in a lot of PL design things (which is good in some cases). I personally like rust but maybe I am a glutton for type based punishment.
Solution: design the language for large code bases
This seems crazy but whatever works. I would assume that would only buy you some wiggle room inside whatever order of magnitude of committers you have. It seems like eventually you would need to split up the code base if you are having contention issues.
I once tried to convince an enterprise java developer to give golang a try. The guy passionately hated it and the reasons were very very petty. The other younger engineers who did not have prior bias loved golang and they were productive so fast.
The person truly had a java supremacy attitude that was very difficult to deal with. Golang is a kind of shift in thinking that you have to first unlearn your existing ways of thinking and then you will have a place for it. Some people are not willing to take that leap of faith unfortunately.
> you have to first unlearn your existing ways of thinking and then you will have a place for it.
Unlearning is not always acceptable, especially when you have to unlearn sound and proven practices, which Go often requires you to do.
I think it really depends where you're coming from: people coming from dynamically typed languages like Python and Ruby are quite happy with Go since it's a small ramp up on the type ladder, but anyone who's used to static types and generics will usually see Go as a step back and refuse to take that step (for good reasons in my opinion).
To draw an analogy, imagine a Go developer is being asked to switch to Python and in order to convince them, you tell them they just need to unlearn a few things. To them, you are asking them to give up types and other practices that makes their code more robust, so it's not an acceptable argument.
> especially when you have to unlearn sound and proven practices, which Go often requires you to do.
Such as? I'm not really sure what "sound and proven practices" a Java developer would have to "unlearn" to adopt Go. Most of the differences between Go and Java amount to removing features that 20 years of Java experience have proved to be unsound or unnecessary (inheritance and exceptions, for example). From a feature perspective, Go is mostly a subset of Java. The features Go adds are mostly related to concurrency, and I've not heard anyone say Java does concurrency better than Go.
Maybe if you are taking Java the language in isolation, but if you consider the JVM ecosystem then I'll absolutely say the JVM does concurrency better. The JVM offers something like Quasar (http://docs.paralleluniverse.co/quasar/), which can match Go's goroutines and channels plus an actor interface, or the really excellent Akka (http://akka.io/) for a non-blocking async actor framework, offering a lot more choice than golang's single concurrency pattern. I've written major applications with both Go and Akka using Java (and Scala), and while both were successful, stable and performant, it's hard to argue with the JVM's concurrency story.
Java for sure has good tools for concurrency, and you listed some of the very interesting choices!
However, the flexible nature of Java concurrency also has the drawback that you might need to integrate different libraries that utilize different concurrency solutions (blocking threads, actors, non-blocking event loops) with each other. This can end up being a lot more effort than if you try to use multiple libraries in an ecosystem with a single opinionated concurrency solution (like Go for blocking IO or node for pure nonblocking IO).
The weakest ecosystem regarding this is imho C++, where most concurrency/IO solutions only work well if you don't try to also use other solutions in parallel (e.g. QT plus glib plus boost asio plus libuv...).
I'll say that Java doesn't do concurrency worse than Go, that's for sure. It has all of the primitives in whatever arrangement you want to put them (Javaflow and now Coroutines if you want go-I'm-sorry-coroutines, native threads if you want those, and Go channels can be implemented in maybe two dozen lines), more flexible, battle-tested abstractions (such as Akka offering you an asynchronous, message-passing actor model, which could be written in Go but seems in practice to be passed up in favor of channels), and tooling around these that I find to be head-and-shoulders better than anything Go has (like multi-threaded debugging).
I've basically (willingly or unwillingly) turned into a Ruby person over the last few years, as neither Go nor the JVM really have a ton to offer me right now, but I don't think a fair comparison of concurrency-related stuff, either in terms of tooling, libraries, or the language itself (I'd give Go this, except that you can't build an unbounded buffered channel and at that point the use of channels for what I write rapidly approaches zero), is nearly as clear as you assert.
Java's concurrency story is weak overall. Nearly all code still uses the old-style "synchronized" blocks, rather than ReentrantLock. This shouldn't be a big surprise, considering that ReentrantLock was only introduced recently. With synchronized blocks, you don't have any way of releasing the lock other than by exiting the block, which leads to some very contorted-looking code.
The fact that you can synchronize on literally any object means that your object lock is effectively part of your public API. Some other piece of code can easily grab your object, synchronize on it, and then start calling your methods, assuming that this will be atomic. And if you change to use a different lock later, it will break.
Sure you could use BlockingQueue to get some of the benefits of Go channels. But the standard library and pretty much any software you'll interact with were written before BlockingQueue existed, so they won't make use of it. You will have to fight your lonely crusade to use message passing on your own. Which in practice means that you won't be using message passing, just plain old mutexes and volatiles.
In Go, all code runs in goroutines which get multiplexed to kernel threads. In Java, nearly all code is blocking and uses an entire kernel thread. Sure you can use NIO to write an event loop-- just as long as you're careful to never, ever call a blocking function. But nearly every interesting function in Java can block. Including the DNS lookup functions Java provides.
> The fact that you can synchronize on literally any object means that your object lock is effectively part of your public API
I'd argue that the fact you can lock on any object is a strength. These days, hardly any Java developer will use synchronized on methods and instead prefer the idiom:
public class A {
    private final Object lock = new Object(); // a dedicated lock object ("" is interned and shared JVM-wide)
    public void foo() {
        synchronized(lock) {
            // critical section
        }
    }
}
This allows Java code to be extremely granular in what gets locked, which has enabled very powerful multithreaded constructs and libraries such as ForkJoinPool and many others described in the Java Concurrency In Practice book.
So this doesn't map to anything I see in the JVM. It's super interesting that you say it (and I want to stress I'm not calling you a liar or anything, it's just different experiences). Personally? I haven't used synchronized blocks since college (so ~2010 or so). I've been using NIO about that time. I don't feel like I'm swimming upstream using it.
Debugging is definitely a valid counter-point. Go's debugging story is rapidly evolving, but it's not particularly friendly or settled at this point.
Goroutines are subtly different than coroutines, mostly in that goroutines are not bound to the OS thread they're created on, and also that they don't need to be explicitly yielded--any I/O or lock-blocking (including reading or writing on channels) are potential interrupt points. Channels are probably also less easy than you might think, particularly the nuances around Go's `select` keyword.
I'm not sure where exactly Java stands on all of this. I've heard that the Quasar library comes close to bringing Go's concurrency tools to the JVM, but it's not widely used and any non-Quasar I/O is likely going to block your OS thread. At any rate, while I agree that Java can theoretically come very close to matching Go's concurrency story, it falls very short in practice.
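On the select point above, a small example of the kind of multiplexing that has no one-line Java analogue (the channel names are made up):

    import (
        "fmt"
        "time"
    )

    func await(results <-chan string, errs <-chan error) {
        select {
        case v := <-results:
            fmt.Println("got result:", v)
        case err := <-errs:
            fmt.Println("worker failed:", err)
        case <-time.After(5 * time.Second):
            fmt.Println("timed out waiting for workers")
        }
    }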
> you can't build an unbounded buffered channel and at that point the use of channels for what I write rapidly approaches zero
If you're using Ruby, I'm guessing you don't need thread safety, so you probably just want a slice.
> I don't think a fair comparison of concurrency-related stuff ... is nearly as clear as you assert
Like I said, I think libraries like Quasar have a lot of potential, but they're not widely used. If it was the de-facto solution for concurrency in Java, I would agree that "concurrency in Java is no worse than in Go" (and I would have to, since Quasar's stated purpose is to bring Go-like concurrency to Java).
> goroutines are not bound to the OS thread they're created on
Neither is a coroutine in Java using the Coroutines library (Javaflow used thread-locals, but Coroutines doesn't), or a Lua-based coroutine...I'm not sure what you're driving at here?
> If you're using Ruby
Sorry, this was inartfully said. If I am using the JVM, i.e. I want to be using something where I can be bombing around on multiple threads etc., I'm probably going to want my inter-thread channels to be unbounded. BlockingQueues in Java give you that; Go channels don't.
> Neither is a coroutine in Java using the Coroutines library (Javaflow used thread-locals, but Coroutines doesn't), or a Lua-based coroutine...I'm not sure what you're driving at here?
I'm not familiar with Coroutines or Lua-based coroutines; coroutines are almost always bound to the thread on which they're created, I was pointing out that this is a primary difference between goroutines and coroutines--goroutines are M:N threads.
> BlockingQueues in Java give you that; Go channels don't.
Go channels aren't meant for this purpose; they're synchronization primitives, not dynamic data structures. You can easily build a BlockingQueue in Go.
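A minimal sketch of such a queue (unbounded, blocking take) built from a mutex and a condition variable:

    import "sync"

    type queue struct {
        mu    sync.Mutex
        cond  *sync.Cond
        items []interface{}
    }

    func newQueue() *queue {
        q := &queue{}
        q.cond = sync.NewCond(&q.mu)
        return q
    }

    func (q *queue) put(v interface{}) {
        q.mu.Lock()
        q.items = append(q.items, v)
        q.mu.Unlock()
        q.cond.Signal() // wake one waiting taker, if any
    }

    // take blocks until an item is available, like BlockingQueue.take()
    func (q *queue) take() interface{} {
        q.mu.Lock()
        defer q.mu.Unlock()
        for len(q.items) == 0 {
            q.cond.Wait()
        }
        v := q.items[0]
        q.items = q.items[1:]
        return v
    }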
> coroutines are almost always bound to the thread on which they're created
Can you provide a cite for this? I'm not saying you're wrong, just that I've never heard this assertion before (and I have been doing really stupid stuff with coroutines for way too long). My understanding of a coroutine is just that it's just a cooperatively yielding function where a yield returns a continuation for later resumption.
I don't have a cite for this; almost no coroutine implementations are multiplexed across threads. I spent quite a while Googling around, and I wasn't able to come across anything. The stuff that I did come across (without searching for 'goroutine') was comparisons of coroutines and goroutines, in which one of the defining characteristics seems to be the ability (or inability, in the case of coroutines) to be multiplexed across OS threads.
In general, the term "coroutine" has come to be pretty watered down.
> Most of the differences between Go and Java amount to removing features that 20 years of Java experience have proved to be unsound or unnecessary (inheritance and exceptions, for example).
The good programmers I know are not attached to their tools. They prefer to use the right tool for the job. Many other programmers want to solve the problems using tools they know. There is nothing wrong with that. A company with good programmers who could do similar things more efficiently will add to competitive advantage.
If you need to drive a screw and you have a choice between
1. A screwdriver
2. Electric drill A
3. Electric drill B
you will certainly look funny at someone considering the screwdriver over the alternatives.
Someone who automatically narrows this choice between the two electric drills is not "attached to their tools", as you say. They are just picking the better tool.
So as someone who's written some Go, I'd argue that Go is the screwdriver - it's one of the only modern languages which explicitly refuses to tackle the error handling problem, which has resulted in some of my code being more about the failure case than the success case.
Of course, others will disagree - fine. But to argue that e.g. Java is definitely the screwdriver is a subjective judgement.
To add to this: Go's inexpressivity (hi, generics!) rules out common patterns that I see in Kotlin, Java, and Scala (as well as Rust, off-JVM) and makes error handling a complete bear, to the point where my eyebrows are really raised at vertex-four being downvoted for this.
The use of please-check-this error conditions instead of something like a Try<T, E> (Result<V, E> in Rust) and an inability to just map over these as 0- or 1-element collections is such a huge pain, and it certainly does matter when you're piling up multiple error-handling cases. Even more when you'd otherwise use Scala's `recover` to get back on a happy path. Go's inexpressivity directly impinges on one's ability to get things done.
(Food for thought: forgetting to handle the error case from Rust's Result<V, E> is a compiler error. Is "if err != nil" really that good an idea in such a universe?)
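Spelled out, the shape being complained about looks something like this (the Report type and all the helper functions are invented):

    func load(url string) (*Report, error) {
        data, err := fetch(url) // hypothetical helpers throughout
        if err != nil {
            return nil, err
        }
        parsed, err := parse(data)
        if err != nil {
            return nil, err
        }
        report, err := transform(parsed)
        if err != nil {
            return nil, err
        }
        return report, nil
    }

Forget one of those checks and the code still compiles and runs.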
I'd use a different analogy - there are definitely circumstance where you'd want the screwdriver over the electric drill. Screwdrivers can't run out of batteries, are lighter weight, and give you more feedback and control while being used. If I was going to be in a crawlspace for three hours doing wiring and knew that I'd need to drive in a screw at some point, I'd rather have the screwdriver on me than lug around or go back for the drill.
Nothing wrong with using a screwdriver when you do cabinet work, as opposed to using electric drills for carpentry. Manual and more precise work calls for a different tool. Electric drills will strip the heads of the screws over time. And every time I need to use an electric drill, its battery needs to be charged first, sending me back to the manual screwdriver.
IMO switching an enterprise developer from a language such as Java to Go is like asking someone who has a very developed vocabulary in English to try Toki Pona[1].
Yes, it is simple, and you can learn it fast, you can also communicate with it, but you will often have to fight with the language to express what you want. That person generally won't be satisfied.
Go's shortcomings wouldn't be so bad if in exchange, the language would ensure that your code is less prone to errors, but that doesn't seem to be true. In fact Go programs from my experience appear to be slightly below average in terms of stability and robustness compared to other statically typed languages.
Your analogy seems weak at best. Go is mostly a subset of Java, so there's almost no difficulty in a Java developer learning Go. A Java developer can immediately read 90% of Go programs, and within a couple of hours, he can write real, interesting programs. I'm not sure that a good analogy could be made incorporating English, nor do I think there's value in doing so.
I don't think you understood my analogy. Go is supposed to be Toki Pona.
I'm saying that once you are used to a language that's more powerful you feel constrained.
I know many languages and some bring interesting things to the table; go doesn't really deliver (at least based on the hype). The concurrency is supposed to be the killer feature, but it is limited to specific cases.
> I'm saying that once you are used to a language that's more powerful you feel constrained.
I did misunderstand your point, though I don't find this one more compelling. While Java has more features than Go, the only one I would consider to be "more powerful" would be generics.
> The concurrency is supposed to be the killer feature, but it is limited to specific cases.
How do you figure? To which specific cases is it limited? How is Java better?
> BTW: Go is not a subset of Java; if anything it's very similar to Algol-68
My argument wasn't that Go is most similar to Java; only that Go's featureset is almost a strict subset of Java's. Go gives you tighter control over memory and a better concurrency story, but most of the rest of it looks the same.
Your opinion on this is a couple of standard deviations away from the popular one. Is there a specific example where a program written in golang lacked compared to other statically typed languages?
I program go full time and have for a couple of years. My biggest complaint about it is that there is some magical shift in thinking required to use it.
The only shift I've found is moving on when the Golang way isn't as good as other tools you are used to. Because the ecosystem & language really are not on par with other environments I've worked in.
Agreed--the culture always strikes me as a weird one and, having just had to dip back into Go for a project recently, your observations ring really true. I feel like there is a strong sense of epistemic closure around Go advocates (as separate from the Go team, I've had interesting and good conversations with a couple) that lead to contortions like "magical shifts of thinking" rather than an acceptance of problems and a desire to fix them. The advantages of a tool that is unmistakably disadvantaged in some areas (like Go's not-great ecosystem and relatively weak expressiveness compared to many of its competitors) may outweigh those disadvantages, but the amount of aggressive you-don't-need-thatting and the insistence that the emperor has the finest clothes is...weird.
This feeds into the language supremacy mindset. There is no universal best language, nor will there ever be one.
Right tool for the job is a better flexible mindset. If you are a master painter, you could paint something amazing with anything you have got. Same thing applies to programmers.
> There is no universal best language, nor will there ever be one.
I never made that claim, I just emphasized the widely accepted fact that having types is better than not having them.
To a Java developer, Go feels like the Java of ten years ago in that respect, so you will encounter some justified push back.
> Right tool for the job is a better flexible mindset
Of course, but not all tools are equal. In programming languages, languages that have a static type system have an insurmountable advantage over dynamically typed ones.
> Of course, but not all tools are equal. In programming languages, languages that have a static type system have an insurmountable advantage over dynamically typed ones.
Your response:
javascript and python would disagree with you.
My question:
How so?
Since the claim is about the advantages of static type systems, I'll respond on that claim alone. The primary advantage of static type systems, particularly expressive ones, but even of less expressive ones like C, is that a large class of errors can never occur in run-time code. You cannot possibly, without deliberate effort to defeat the type checker, do the following in a statically typed language without at least a compile-time warning (allowing for C, with its weak type system and occasional implicit casting):
Define a function in your language of type: int -> int -> int [or (int, int) -> int]. Pass in something that is not an int to either parameter.
Python will happily accept this code:
    def add(a, b):
        return a + b

    # ... some context
    add("aoeu", 3)
And not tell you until that add call occurs. A run-time error. Could be very infrequent, which makes it really hard to reproduce.
In C:
    int add(int a, int b) {
        return a + b;
    }

    /* ... some context */
    add("aoeu", 3);
the add call won't even make it past the compiler.
JavaScript is even worse: You won't get an error at all!
    function add(a, b) { return a + b; }

    // ... some context
    add("aoeu", 3)  // results in "aoeu3" as the return!
Dynamic and weak typing!
This doesn't mean python and javascript are bad. But it does mean they possess disadvantages relative to statically typed languages. Their type systems mean that significant testing has to be put in to verify/validate your program for guarantees that are baked into statically typed languages (caveat for implicit conversions of certain types, again, in languages like C, but this usually gets at least a warning if not an error).
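For completeness (the parent comment doesn't show it), the equivalent mistake in Go is also caught at compile time. A minimal sketch:

    package main

    import "fmt"

    func add(a, b int) int {
        return a + b
    }

    func main() {
        fmt.Println(add("aoeu", 3)) // does not compile: "aoeu" is not an int
    }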
It's not simple when you have to maintain it or debug it.
There are many classes of errors that can occur when writing software. Languages with implicit variable creation like Python obscure errors like mistyping the name of a variable (versus an explicit declaration, as in C or an ML, where the mistyped name will immediately result in an error, modulo name conflicts). It looks correct at a glance, but:
    def foo(a_name, b_name):
        # computations
        a_nam = ...  # some more computations
        # computations using a_name, not a_nam, return
        # erroneous values
(NB: The above is bad practice anyway; avoiding it is an advantage of the single static assignment of the dynamically typed Erlang.)
Oops, we forgot an 'e' at some point. Now we have a new variable, but we used the old variable name for future computations and returned a result based on that.
Type errors, I've already discussed.
Logic errors like:
if(a < b) // when we meant a <= b
Are universal to all languages; no language can eliminate these. Actually, this leads to a major gripe I have with C: the duplication of meaning for = as both initial assignment and later reassignment, paired with the use of non-zero values to indicate true.
if (a = b) // well, shit. a has a wrong value, and we go
// down the wrong branch now depending on the
// value of b.
Good practices only get us so far. Moving those good practices (static typing paired with type inference for simplicity, single static assignment or immutability by default, etc.) into the language does add mental overhead to programming. But it also produces final products with fewer errors.
As a guy who writes software that can literally save or kill someone depending on how well or not it functions, I'm in favor of better languages.
Per your typo point, this has already been solved by linting. If I made a typo like that, any decent editor (Sublime in my case) would draw a big red box and complain at me for using an undeclared variable. In the case of a typo on assignment as in your example, the linter would report a variable declaration without usages.
Per your testing point, so what? Doesn't everyone strive for 100% code coverage anyway? One of the big advantages of dynamic languages is that more functionality can be implemented in less code which in turn makes it easier to hit that 100% coverage.
Good point, one just has to every now and then open all files in Sublime and check for red squiggles :)
And not everyone is striving for 100% code coverage unless it really matters (e.g. SQLite). A beneficial activity becomes harmful if taken to extremes.
> Right tool for the job is a better, more flexible mindset. If you are a master painter, you can paint something amazing with whatever you have got. The same thing applies to programmers.
I find the opposite for practical arts: The better the practitioner, the more reliable their tools must be, not less. A beginner painter might not be able to point out a bad brush from a good one; a master absolutely will. Furthermore, a bad brush won't necessarily hinder a beginner painter, but it will absolutely hinder a master painter. There are simply some techniques that the master will not be able to execute unless the brush is of a good enough quality.
I don't agree with this analogy, because languages overlap a lot more. You could choose one of several different languages to build a web app (Python, JavaScript, Ruby, PHP, etc.).
Only particular tasks have languages that fit best. Many other tasks can be solved in multiple languages. In that case, the best tool for the job is the language you know best.
I've been running into this phenomenon a lot and it's not isolated to Java developers. We recently had a class on Clojure 101 and the main audience was Obj-C/Swift developers. A lot of the developers went into the class actively trying to prove Clojure was dumb and that the way they were already doing things was better.
I think any language, once it reaches critical mass, attracts people who are not problem solvers but memorizers. There is a correlation between people who rely on copy-pasting existing code and SO answers and people heavily invested in their language.
Point being most people who have trouble adopting other languages tend to be memorizers vs problem solvers and become insecure when working in a poorly defined environment. Pulling them into something new after others have solved the hard problems and created best practices tends to be easier and more productive for everybody.
Not necessarily. I think that by providing native slice and map types, Go already reduces the need for generics by a large margin. Other things that often use generics (higher-order functions, future types, ...) are not idiomatic Go, which leans more toward the imperative way of doing things. In total I have not really missed generics in Go so far (but I have only written about 20kloc in it), while I certainly missed them in early Java and C# versions.
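To make that concrete, here is a tiny sketch (the example names are made up): the built-in map[K]V and []T types are already parameterized over their element types, which covers a lot of everyday container use without user-defined generics.

    package main

    import "fmt"

    func main() {
        // map and slice types are parameterized by element type out of the box.
        scoresByPlayer := map[string][]int{}
        scoresByPlayer["ann"] = append(scoresByPlayer["ann"], 10, 12)
        scoresByPlayer["bob"] = append(scoresByPlayer["bob"], 7)
        for name, scores := range scoresByPlayer {
            fmt.Println(name, scores)
        }
    }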
As long as your needs are sufficiently basic that you never need to create data structures then what's in Go can be ok. People who are fans of Go seem to be people who don't know what they're missing in more advanced languages. This seems to include C programmers and dynamic language programmers. Programmers used to better type systems are generally not happy with Go.
I think I've done things in most well-known programming languages that are quite far from basic - from dynamically typed languages up to statically typed functional languages. And I'm not a fan of dynamically typed languages.
But I still stand behind my opinion from the parent post: Go's type system is quite primitive, but in combination with the typical way of doing things there it is sufficient for most cases. Whereas other languages need generics a lot more, especially functional languages where monadic types are often used or languages that don't provide builtin list and dictionary types.
This comment is patronizing and implies that liking Go makes someone a "junior varsity" programmer, or ignorant of alternative programming models. It doesn't. I'm well-versed in half a dozen other languages, many of which include generics. I like Go just fine. Yes, there are cases where it is not the best choice. That's fine, too.
Bad programmers are always going to be bad programmers, no matter what their age. Bad programmers are inflexible and unadaptable; unable to keep up with new languages or idioms. Ken Thompson is about as old school as they come and he wrote much of Go.
I think dismissing anything invented after 1960/not invented at Google counts as "inflexible and unadaptable; unable to keep up with new languages or idioms".
I don't think anyone considers Python or Java modern.
Nevertheless, this isn't about where adoption comes from, it's about how the language design was influenced by the advancements in language design in the last 40 years.
> I don't think anyone considers Python or Java modern.
This is why I put "modern" in quotes; these languages are "modern" relative to the 1960s-era languages.
> Nevertheless, this isn't about where adoption comes from, it's about how the language design was influenced by the advancements in language design in the last 40 years.
Precisely. The OP implied that Go programmers are "bad programmers" because they can't adapt to post-1960s languages. I countered his hypothesis by pointing out that the lion's share of Go developers were previously competent Python, Ruby, JavaScript, or Java developers. If his hypothesis were correct, one would expect the Go community to be primarily C expats.
For whatever reason, a large swath of developers find the features Go adds to be more useful than the "advancements" Go omits (or perhaps they just find value in the omission of those "advancements" altogether). At any rate, Go's popularity can't be reasonably attributed to graybeard developers who can't grok Java.
How do the bullet points in "Why does Go leave out those features?" address why Go leaves out the features on the preceding slide?
All it talks about is clarity (important but not the only important thing) and I just don't see how any of the left-out things are inherently unclear. I think you can write clear and unclear code alike with all of those left-out features.
In regard to your last comment: abstraction is a good thing, but there's a conversation to be had around the quality of abstraction. Good abstraction doesn't age, or at least ages very slowly. Lack of abstraction in languages leads to innovation and iteration, which then leads to good abstraction. We are generally too quick to assume that a new thing is good abstraction. Better, in some cases, to leave abstraction discovery in userland, as good abstraction is rare. The cost of poor abstraction within a language is API and cultural lock-in when better solutions are found.
I had a PHP program that processed HTTP requests and stored some data in a local database. I needed to rewrite it for various reasons, so I decided to choose Go. Some points I recall:
* Static typing is good.
* As I expected, the standard library and other packages available had the http & routing stuff I needed, which is all good.
* I like that errors are specified in function signatures, unlike exceptions in languages like ruby/python.
* I don't like errors being easily ignored, and return values being assigned default or arbitrary values (see the sketch after this list). I may also have once accidentally used the wrong equality check against nil.
* Defer is nice, but would be better if it were based on the current {} scope.
* Append on slices has very bizarre semantics, sometimes mutating the original and sometimes returning a different reference.
* Initially I ran into trouble reasoning about how to use an sql package, and hit "invalid memory" dereference issues or some such when passing a reference. Thus, I'm skeptical about "memory safety."
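On the ignored-errors point above, a minimal sketch (using strconv as a stand-in, since the original file/SQL details aren't shown): discarding the error silently leaves the result at its zero value.

    package main

    import (
        "fmt"
        "strconv"
    )

    func main() {
        // The error is discarded, so n quietly stays at its zero value.
        n, _ := strconv.Atoi("not a number")
        fmt.Println(n) // prints 0, with no indication that anything went wrong
    }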
This was only a simple program though and turned out to be worthwhile for me in the end.
Memory safety implies that if you do something bad you will be stopped (with "invalid memory" errors, for example), not that it is impossible to write something bad.
Can't speak for Perl because I have only seen some horribly complicated code in it (which probably speaks more to the author than to the language itself), but what's not fun about JavaScript?
- many mistakes are silently ignored. If you mistype an object field name, divide a number by zero, add a number to a string, or access a missing array element, no error is raised
- no proper OOP and classes
- no type hints for variables, function arguments or return values (well, there is TypeScript but it is another language)
- package manager (npm) loves to create deep hierarchies of folders
I used Perl for years, and it's still my go-to language for quick text parsing. I stayed away from the bizarre "object oriented" syntax (they're not classes, they're packages, just called with arrow operators instead of like normal functions; but being Perl, you can just use the package and call the functions yourself, because TMTOWTDI! (Blech.)). Also, Perl's support (or at least Perl 5's (does 6 even exist? It's a unicorn like Duke Nukem Forever or Guns-n-Roses' Chinese Democracy, only those eventually got released)) for complex data structures (including multidimensional arrays) requires explicit reference instantiation and dereferencing, like C pointers. It's the biggest pain point.
Especially with React Native, NodeJS and Electron... you can pretty much write anything you want, easily, that will run on a bunch of platforms. I still prefer Python/Ruby/Perl... over JS. But I would pick JS anytime over Java.
As an Erlang fan, my guess is that Go is a bit faster for many things, whereas Erlang's concurrency, fault tolerance, and distributed stories are "better". Not everyone needs all those though. Go is probably a bit more 'generic' - you can do anything with it. Erlang has always felt like it's a tool designed to do a few things really, really well.
I've not used Erlang, but I've also heard comparisons that prefer Go to Erlang for its familiar syntax and comprehensive standard library. Obviously I can't vouch for these claims.
When you want to hire good developers, pick Erlang, Elixir, Rust, Haskell, ML, or LISP. Chances are they went out of their way to learn the language, as it is not taught in depth in US universities. They are probably curious, good learners, and could probably learn or adapt to other new things thrown at them.
I am at a shop where we use Erlang, and the syntax has not fazed new people that much. If typing . instead of ; is a big deal, then what are they going to do when a netsplit hits or they have to understand other concurrency issues?
I don't know, I find Erlang's syntax rather nice, if anything it tells my brain "this is not C anymore, don't think like C". Works for me, any way.
There is Elixir of course if syntax is an issue. Elixir has other useful features as well but still runs on the same battle proven platform.
I've also found, though, that these languages attract prima donnas who care more about "attractive" code and getting to play with cutting-edge technology that might not be ready for prime time than they care about providing business value.
I don't think it's about features, but rather a combination of fashion, chance and some other key element (such as the author, supporting company, etc.) that, in addition to the technical merits, leads to a piece of technology becoming cool and liked.
Interestingly, Go programmers have turned lack of features into a defining feature in itself and are proud of it. If you want to be part of the in crowd you have to use that signal.
The last place I worked, we built a system that included Erlang. When it was time to hire someone, we found a guy who was curious about Erlang, and although he didn't know it yet, he was a good hire, as he was a smart guy, a hard worker, and curious not only about Erlang, but about a lot of stuff.
That's the kind of person you want to hire anyway, unless you are super crunched for time in getting someone up to speed.
Never had a problem finding or training great Erlang engineers, and even had a few cases where the absolute best of the candidates were only interested because they'd get to use Erlang.
To those replying with anecdotes about how you managed to hire or train an Erlang developer, you missed this part of the presentation: "Go is an answer to problems of scale at Google."
You are scared of Erlang because it has uglyish syntax. Only reason I can think of. The "need to do computation" argument is silly. Just call into a C lib from Erlang.
I've found dropping down to C to be a pain to set up the first time in any language (Ruby, Tcl, etc...) but once you have the tooling, it's just another workflow. When you think about it, even JavaScript takes a lot of setting up these days (preprocessors, linters, endless debates about why coffeescript is still a good language (hint: it's the existential operator)). But, you amortize the cost of this setup over the lifetime of the project so they are actually quite small.
But, it does seem that there are not regular blog posts about how to drop into C to make even the worst scripting language performant. Seems like maybe it's a dying art???
Dropping down to C is pretty cool in a language like Tcl. To toot my own horn, a bit, I worked on those chapters here: http://amzn.to/1U6sFPN
Erlang is a bit different though: if you're really concerned about being 'robust' with it, you don't want to link in some random C code in the main node. Not only could it crash, it might simply block for an unacceptable length of time, messing up the system's internal scheduler and making it unresponsive to events.
So with Erlang, to do things properly, you need to get the architecture right as well, which makes it that much trickier.
See my other reply though: it's not just about the API, which is nice, but about getting the architecture right. It does make the system more robust, but it requires more thought and planning.
Sure. Once you cross that barrier, you now have unlimited ammo and all the feet that ever were or will be, so you have to be a lot more diligent about what you're doing.
I guess my point is that in, say, Tcl, you pretty much just worry about calling out to the C code and if it takes 2 seconds to process an image or whatever, so be it. You merrily go about your business once the call returns. In Erlang there's more to think about.
I think that's technically true in Erlang as well... it's just that it makes all the other stuff that people expect from Erlang to mostly stop working smoothly. :-)
I think it's fairer to say that "Go is better at that". "Bad" depends on the user and their needs. In some cases, Go is "bad" at that too and you should use C. In that case, you could always connect the C code to either Go or Erlang, although that introduces complexity into the system you're creating.
Such as what exactly? "Computations to return results" is rather generic, to say the least (and, if ‘computations’ includes handling requests, something that servers tend to always do).
C is best at problems that are CPU and memory bound, Go can handle problems that are CPU bound-ish and goroutines help with IO bound problems. Erlang handles problems that are MASSIVELY IO bound.
I won't downvote you because you are certainly entitled to your opinion, and there is no way I'm going to get into an Erlang vs Go For Writing Servers argument, but if you immediately write off Erlang/OTP because of how it looks, you are going to miss out on some pretty amazing server writing functionality.
Different strokes for different folks. Looks fine to me. But I'd prefer you don't try to offer advice if you don't actually know Erlang in the slightest.
I wonder what unit is being counted here. I don't think it's possible to actually review and rethink 50% of what has been created before. That's just not sustainable.
That doesn't matter. Only growth of manpower matters. The fundamental problem stays the same regardless of how many people you have.
If you add code and revise 50% of it every month, the code base is bound to grow and the share of time spent on maintaining the old code grows as well until development of new code grinds to a halt.
Unless of course there is massive growth in hiring. But that isn't sustainable.
[Edit] Well, I forgot one possibility: Deleting code.
One important niche I see that Go serves very well is in distributed, fault-tolerant deploy platforms (aka schedulers), like Kubernetes or Mesos. If you look at the amount of tooling that uses Go, you almost feel there just is no other choice out there.
I would not venture to say state-of-the-art schedulers would not have been possible without Go, but Go certainly fits the requirements pretty well.
Rust's type system is a bit less awful, and it's better oriented towards immutability-by-default. Its concurrency model is more explicit (which is to say, more verbose to use, but gives you a lot more control).
In a situation where Go is acceptable (i.e. GC is acceptable) you should probably be looking at OCaml (probably what I'd recommend; http://roscidus.com/blog/blog/2013/06/09/choosing-a-python-r... may be worth a look) or Haskell rather than Rust though. If you're talking about a long-running server process then I'd add Scala/Ceylon/F# to the list.
I did write a CRUD app with Go and after that experience, IMHO, for CRUD apps, one is better off with a dynamic language (JS, Ruby, Python) or a static one which supports generics (C#, Java) or Macros (Nim).
Right on. Go is best suited for network-heavy, concurrent applications (infrastructure systems, databases, OS-tools), or software that needs to be dead easy to set up / deploy (single static binary, free cross-platform compilation).
If you want to go build a CRUD app, use RoR/Django/Whatever(tm) and be happy with it
Well, I can't do that (haven't used any of them enough for that) but I find it telling that Rust is missing from the slide where they compare the languages... :)
I think the tl;dr of this (often raised...) argument boils down to:
Don't use rust if you're asking that question.
If you could implement your solution in go or rust, then go is probably a more appropriate choice; it's a good high level solution for high level problems.
Rust is not a good solution for high level problems; it's a good solution for low level problems where go would be a terrible choice; and it's a good solution for high level problems where other choices are even worse (eg. C++).
They're both good languages, and you'll learn way more from picking up rust than you will from picking up go, so if you're just screwing around and want to learn a language this year, absolutely, pick rust or clojure. Go isn't on the list for 'interesting programming languages'.
...but if you have an actual problem you're trying to solve, I would be hard pressed to enumerate the reasons why you would pick rust if the problem was solvable using go. Maybe because your C dependencies would be easier to call from rust than go? That's all I can think of.
> Rust is not a good solution for high level problems
Who is saying that? Not the Rust team, nor the Rust users …
Rust is a general-purpose programming language, and it's pretty high-level (in terms of features; think about functional programming, for instance). The only reason I wouldn't advise everyone to write their stuff in Rust atm is the youth of the ecosystem (which is growing rapidly, but is still a bit too early-stage): there is no intrinsic limitation in the language that makes Rust «not a good solution».
The piston developers are doing what right now? Writing a new programming language (dyon). Why are they doing that? Because rust is great for prototyping games in? no.
...rust is verbose. It is statically typed, it is less productive than some other languages and it is hard to learn.
Now, you get a whole lot of other benefits in exchange for that, absolutely, and technically speaking, rust is a super awesome and sophisticated language.
...but right now, it has neither the ecosystem nor simplicity for picking up and practically solving high level problems.
If you have a problem, right now, you want to solve: pick go.
...unless your problem is something that rust would be better at, and those things are low level things, like building game engines, not high level things like building web applications and cross platform desktop chat applications or machine learning solutions to driving cars.
You can build high level applications in rust, and in C++; the point I'm making is that unless you actually have a reason for picking them (and there are plenty...), don't.
Pick the tool for the job; rust is a great tool for the right job.
Go is a better tool for a lot of applications; and a totally useless one for others.
/shrug.
We don't need to pretend rust is a general high level language you should pick up and use for any problem domain. It's not. You're shooting yourself in the foot if you use it as though it was.
> The piston developers are doing what right now? Writing a new programming language (dyon). Why are they doing that? Because rust is great for prototyping games in?
I agree with that, Rust is not a scripting language. Dynamically-typed scripting languages have proven their efficiency for prototyping.
> rust is verbose. It is statically typed, it is less productive than some other languages
Than scripting languages, yes, but it's on exactly the same side as Go on this point. Scripting languages are extremely convenient for small projects, but are more difficult to maintain in the long run if the project grows too big. (I personally love coding JavaScript, and as the project grows we are progressively adding a static-typing layer (flowtype.org) to our code for the sake of maintenance.)
> it has neither the ecosystem nor simplicity for picking up and practically solving high level problems.
There is a real trade-off between scripting languages (Python, Ruby and Node.js) and statically typed ones (Java, Go, and Rust), but it's not related to being high-level or not. In terms of abstractions and features, Rust is at least as high-level as Java and Go.
> If you have a problem, right now, you want to solve: pick go.
We are using some Go at my company because we wanted to follow the trend, but frankly, unless you want to build a simple system with no dependencies, Go is not ready to solve problems «right now» either, because the ecosystem is still really poor. If you have a problem, right now, that you want to solve, you should probably still pick Java over Go, even if a lot of people (me included) don't like Java …
Basically, I think Java is the most important factor in the success of the Go language: Go feels like Java, but in a younger and trendier way.
> We don't need to pretend rust is a general high level language you should pick up and use for any problem domain. It's not. You're shooting yourself in the foot if you use it as though it was.
Rust is a «general high level language», with the same high-level generality as Go or Java.
I bet one could take any* Go or Java code sample and rewrite it in Rust with around the same amount of code and without introducing any memory issues (which you wouldn't be able to do in C or C++). I think that's a good illustration of Rust being a «general high level language».
*unless it depends on a library that has no equivalent in Rust (as I said before, the youth of Rust's ecosystem is the reason not to use Rust in S1 2016).
I would, and many other people would too, but for sure it would be harder/more complex in C/C++ than in Go. You wrote that as if it would be impossible to do in C or C++, which is not true.
I wrote that you can do it with memory safety, which is possible, BUT it's more complex in languages like C/C++ that do not guarantee memory safety (it's your responsibility). If it were not possible, you would not have anything on your screen right now. I am writing this because someone not familiar with those languages would get the impression from your comment that memory safety is impossible in C/C++, which is not true.
Verbose low level memory safety and a high level trait system is a massive step up from C++/C. It's of mixed value if you're doing something you could do in java or go. It's of questionable value if you're doing something you could just whip up in python or js.
Perhaps you're right; rather than just straight out saying go, I'll preface my answer next time I hear the go/rust question:
'Should I write my next project in go or rust?'
'If you didn't use either, would you write it in C++?'
'...no?' -> 'Then pick go.'
'...yes?' -> 'Wtf dude, there's no way you could do it in go then. use rust.'
Because Go is a garbage collected language, programs will often require more memory than they would in Rust, although pretty often that's an acceptable trade-off. If the trade-off is acceptable though, it might be worth looking at Nim and D too.
When you need to write high performance code this is a great maxim. I enjoy the simplicity of Go and the guarantees it provides. Being able to reason about code and not having to guess is a win for any development team.
IME, clarity and reasoning are weak points of Go, relative to other systems programming languages (but perhaps not to dynamic languages):
1. Slices make it hard to reason about aliasing. bar = append(foo, val): does bar now alias foo? The answer is the worst possible: "sometimes." (See the sketch after this list.)
2. Closure semantics make it hard to reason about thread safety. I converted this serial loop to parallel using goroutines; did I introduce a race condition? I have to look at each variable to decide. (Any sort of const capturing would go a long way here).
3. Goroutine leaks can be hard to reason about. For example, a channel that is not sufficiently buffered can result in a leak.
4. Nullable maps and channels reduce clarity.
5. The "redeclare" semantics means := sometimes does not introduce a new variable
I didn't mean to gloss over the goroutine problems. You're absolutely correct; the very existence of tools like the race detector makes it evident that you can write incorrect concurrent programs. Becoming proficient with the concurrency patterns in Go takes time, but it is an advanced topic.
Those aren't common mistakes, just things he dislikes about the language. For example, I don't care whether := sometimes does not introduce a new variable (have never had a bug related to that). I don't find the closure semantics any worse than Java's (sure Java requires captured variables to be final, but it doesn't require them to be immutable).
I find append's semantics to be pretty intuitive. But then again, I'm familiar with realloc in C, which is where it came from. At any rate, slices are references to an underlying array. If you are making one slice from another slice in a way that potentially doesn't involve copying, you should expect the new thing to alias the old thing.
People often complain about channels having limited buffer sizes. But if channels had unlimited buffering, they'd complain about memory leaks and inefficiency.
A list of common mistakes in Go would be interesting. It would probably start with the "assigning a typed nil value to an interface leads to interface != nil" wart.
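For anyone unfamiliar with that wart, a minimal sketch (myError and mayFail are made-up names): returning a nil concrete pointer through an error interface produces a value that compares non-nil.

    package main

    import "fmt"

    type myError struct{}

    func (*myError) Error() string { return "boom" }

    func mayFail() error {
        var e *myError // nil pointer
        return e       // non-nil interface wrapping a nil *myError
    }

    func main() {
        if err := mayFail(); err != nil {
            fmt.Println("got an error?!") // this branch runs
        }
    }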
I use Go exclusively for command-line applications, previously using Perl (ducks). It's a fairly simple language, you can pick it up quickly, and gofmt/godoc/etc are useful utilities in reducing friction.
I really like go. I just love that it compiles to a native binary and is so easy to distribute. I love the way interfaces work, and that types satisfy interfaces automatically without explicitly declaring that they do (see the sketch at the end of this comment).
I love the "strictness" of the language - for example the code won't compile if you declare a variable and not use it, or import a library and not use it. I love that there is a standard gofmt which means code auto formats to a standard format. These features really help set some "discipline" when working in a team.
I love the way concurrent code can be called easily and the use of channels. I love the performance - it has been more than fast enough for my use cases so far. I love that I can get started with an HTTP server using just the standard library, and the most popular web frameworks in go are micro frameworks.
Overall, there's a kind of a simplicity about the language that underlies all of the above things, and that is what makes me excited about go.
I have used go in some minor projects that have been running peacefully for months without any hitches, and am using it in a big project mostly in the form of microservices and scripts. It has become my favorite language now.
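A minimal sketch of the implicit interface satisfaction mentioned above (the Greeter/English names are made up): English never declares that it implements Greeter; having the right method is enough.

    package main

    import "fmt"

    // Greeter is satisfied by any type with a Greet() string method.
    type Greeter interface {
        Greet() string
    }

    type English struct{}

    func (English) Greet() string { return "hello" }

    func main() {
        var g Greeter = English{} // compiles; no "implements" declaration needed
        fmt.Println(g.Greet())
    }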
"Sometimes this means writing out a loop instead of invoking an obscure function."
I can't help but think this is specifically a dig in C++'s direction. Since C++11 lambdas, I've been using <algorithm> a lot more, and I don't think you could get me to go back at this point... Yes, I had to learn exactly what a few methods do, but now I have beautiful straight-line code...
I've been experimenting with this concept with C# recently [0], where I have a small backend written in C#, exposing a simple, RESTful HTTP server, that automatically finds itself a local port to run on and opens the default browser to a default page.
It's actually kind of nice. Until I did this the first time, I hadn't realized just how much bullshit I had previously put up with, with setting up local web servers, trying to get configurations down, etc., etc. At some point, I think most web framework's configuration options just got too complex to be considered configuration options and became weird, poorly defined scripting languages for defining web servers. Having a real programming language to do that instead is just a wonderfully smooth experience.
Some things I plan on implementing with it:
* local file system access, to ultimately implement an FSN [1] clone in my WebVR project.
* my own Leap Motion WebSocket service, because the default one doesn't use the latest Orion beta and its associated JS library is complete garbage.
* A similar dude for MS Kinect data.
* Ultimately, get the previous two to run over WebRTC instead (not easy, there is no WebRTC library for Windows outside of major browser implementations) to be able to stream their respective camera data.
* Live raytracing of model textures for baked lighting in scenes in the WebVR session.
Right now, it's just a source file I drop into a standard C# console project. I'm thinking about making it a full-on library, though at this point there isn't much need.
"Server" is a pretty loose term. Most servers these days require some sort of full stack, with frontend, ORM, etc... Go adds a lot of development time if you need all of that. ...On the other hand, for one-off tiny microservices, it's absolutely great!
The slides are awesome and I really am fond of go, but the examples using channels are all considerably more code to write than what I'd write in C# or JavaScript with async/await, and not any more robust or safe.
Go is great for actor based systems where you model things using channels and goroutines for what they stand conceptually - not when you use it to simulate Task.WhenAll/Promise.all with a timeout.
I think _that's_ what they should be selling - that your server's architecture should typically be different.
Yes, but in Go you don't have to deal with the [colored function problem][1], and the code to parallelize I/O is no different than the code to parallelize computations. I agree that these examples don't do justice to Go's concurrency facilities--the most compelling examples are probably too complex for a slide deck.
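For concreteness, here is a minimal hand-rolled Go approximation of the Task.WhenAll/Promise.all-with-a-timeout pattern discussed above (the fetch helper and the durations are made up for illustration). It is indeed more lines than async/await, though there is no function coloring:

    package main

    import (
        "fmt"
        "time"
    )

    func fetch(name string, d time.Duration) <-chan string {
        out := make(chan string, 1) // buffered so the goroutine can't leak if we time out
        go func() {
            time.Sleep(d) // stand-in for real I/O
            out <- name + " done"
        }()
        return out
    }

    func main() {
        a, b := fetch("a", 50*time.Millisecond), fetch("b", 80*time.Millisecond)
        timeout := time.After(200 * time.Millisecond)
        for i := 0; i < 2; i++ {
            select {
            case r := <-a:
                fmt.Println(r)
            case r := <-b:
                fmt.Println(r)
            case <-timeout:
                fmt.Println("timed out")
                return
            }
        }
    }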
The code to make I/O concurrent _should_, in my humble opinion, look different from the code to parallelize computations, since in I/O you only care about concurrency and in computation you care about parallelism.
I don't think Swift is particularly mature in this area yet, and its concurrency depends upon Grand Central Dispatch. Go, on the other hand, is rather mature, especially for its age.
Other than that, it really depends on what the server is for, but in general I would say Go is probably a better choice.
If you're interested in something similar to Swift for the server, I think Kotlin w/ Spring Boot would probably be a safer bet at this point.
Swift is a much more expressive language, designed by folks who accept that the world of mainstream programming paradigms has progressed, and it is sponsored by IBM for server programming.
Swift is most likely not ready for this until the 3.0/SPM release, but once that lands it'll be very competitive. Swift's generics are very good and Go's are not. There's no language-level concurrency, but there's GCD and https://github.com/VeniceX/Venice
Go is a phenomenal systems programming language and becoming quite useful as a general programming language too. It's clear from the projects that are now coming into existence that Go lends itself well to the world of distributed systems and from the language design you can see that it was created with network programming in mind. The fact that concurrency is built into the language and errors are treated as values that should be dealt with just highlights those facts.
We used Go at Hailo for our microservices platform and it served us incredibly well. I've gone on to create an open source project called Micro https://github.com/micro/micro that builds on those past experiences. It's just a joy to write micro services in Go.
The presentation focuses a lot on [web] servers and google scale, but I found that Go also works quite well for applications/services on embedded linux systems.
Main pros for me there are:
- Easy to cross compile and deploy
- Daemons often need to do a lot of communication (some also for providing web APIs) and need to embrace concurrency. Both are covered very well by Go's ecosystem.
- Compile-to-binary eases distribution concerns in cases where you want to avoid publishing all source code (and thereby know-how), compared to VM languages or scripting languages.
Golang is great. I use it to do 3 things thus far: a RESTful API server (net/http, gorilla mux), a dynamic web server (net/http, amber, sql), and a websocket server (net/http, gorilla websocket, redigo/redis). The libraries are well implemented, the syntax is beautiful IMHO, and I'm able to quickly write code similar to interpreted languages like Ruby and Python, but scale much higher. I used to do LAMP, then shifted to Python Tornado, Ruby Sinatra, and nodejs/expressjs, but find golang to just be more compact and fast. My Sinatra environment required rbenv, gems, and I just wasn't impressed with it.
What I like the most about golang is that the end result is a binary, so my production server doesn't need to have any dependencies except for the ability to run ELF binaries. I like having this option, but in reality the binary size gets pretty unwieldy for upload, so I actually end up doing a pull on the source code, compiling, and starting up.
Package management has not been a problem for me.
I do find the HTML template packages to be a bit deficient; amber, ace, and the ports of haml and jade all seem pretty half baked. I had to have a lot of hacks in my code to get this stuff working.
It also sucks that there isn't a standard ORM, but I can hang and keep up with raw SQL.
The language's expressiveness is not as convenient as, say, Ruby's, but it's pretty close.
A perfect example of this is HashiCorp's products: Vagrant aside, they are 100% written in Go.
Having a single executable tool that you can download and run anywhere is super powerful. You develop once and you build it for every system. It's the definition of delight to me.
Which kind of applications does one write in Go? Asking this from perspective of a developer working mostly on business apps with Angular frontend and .NET (C#/F#) backend.
If you use C#/F# you don't need Go; .NET is coming to Linux, by the way, so you definitely don't need Go.
With Go you'll basically have to rewrite ASP.NET from scratch if you're used to that, because frankly the ecosystem is poor if you don't stick to data transformation/marshaling with an HTTP server. No full-featured ORM, no good logging library, no Razor-like view layer, piss-poor web frameworks, and an extremely rigid language with a rigid type system if you are used to F# and C#. The only advantage of Go is the fact that you can deploy a single executable with no dependencies on a server. That's it.
Go is a fine language and a very good run time, though having written a large program with it I've learned its warts well enough to not want to use it again personally.
Not that I know of. That sounds like it goes against much of what Golang is built around, and in the areas where I've had to hook into low-level facilities such as the kernel's Netfilter, I had to go down to C.
I think "Program your next server in Go" is a little too broad; the specific language features that Go explicitly leaves out make it hard to build an extensive backend server. Go is best suited to the use cases listed in these slides: simple services that do very focused things.
I love Go and used it to build some very useful web hook and CLI tools. It just doesn't lend itself to something where you expect to have a vast set of APIs under one Go project.
Here are the problems I had when I tried to write a simple CLI utility (a tool to run any program in a seccomp-bpf based sandbox) in Go:
- using the case of an identifier's first letter as a public/private flag. You end up with half the names starting with a lowercase letter and half with an uppercase one (the code looks inconsistent), and you forget how to spell them. And you have to rename a function everywhere when you decide to change it from private to public.
- no official package manager. It is unclear how to add external libraries to your project and how to pin the specific version you need. I ended up adding the necessary files to a separate folder in my project.
- the Go manual suggests you have a single directory for all projects and libraries. That was inconvenient because I develop on Windows and use Linux only to test and run code in a /tmp directory; I do not keep the code there. And why would I want to keep unrelated projects inside the same directory anyway?
- no rules for how to split constants, types and functions into files and folders. For example, in PHP there are certain rules: each class goes into its own file, and you always know that class Some\Name is stored at src/Some/Name.php. Easy to remember. In Go you never know what goes where. Large projects probably look like a mess of functions scattered around randomly.
- no default values for struct members, no constructors
- no proper OOP with classes
- standard library is poor
- open source libraries you can find on GitHub are not always good. I looked for a library to handle config files and command-line arguments and didn't like any of them.
- standard testing library doesn't have asserts
- easy to forget that you need to pass structures by pointer (in OOP languages, objects are passed by reference by default). And generally the use of pointers makes the code harder to read and write.
- weird syntax for structure methods. They are declared separately from the structure.
- Go has 2 assignment operators (= and :=) and it is easy to use the wrong one
- having to check and pass error values up through function calls instead of using an exception. So most functions in your code will have two return values: a result and an error
- no collections library
- simple things like reading a file by lines are not so simple to implement without mistakes (see the sketch after this list)
- static typing is good, but sometimes you cannot use it. For example, I wanted to have the options in a configuration file mapped to the fields of a structure. I had to use reflection, and every mistake led to a runtime panic. And you cannot use complex types like "pointer to any structure" or "pointer to a reflect.Value containing a structure" or "list of anything" or "bool, string or int".
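On reading a file by lines: the stdlib way is bufio.Scanner, and the mistake-prone part is remembering to check scanner.Err() afterwards (the file name here is made up). A minimal sketch, which also shows the result-plus-error return style mentioned above:

    package main

    import (
        "bufio"
        "fmt"
        "log"
        "os"
    )

    func main() {
        f, err := os.Open("config.txt") // hypothetical file
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        scanner := bufio.NewScanner(f)
        for scanner.Scan() {
            fmt.Println(scanner.Text())
        }
        // Easy to forget: Scan() returning false can mean an error, not just EOF.
        if err := scanner.Err(); err != nil {
            log.Fatal(err)
        }
    }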
Of course Go also has many good parts that might outweigh its disadvantages, but I am not writing about them here. For example, I have not used goroutines, but they look like a simple solution for processing async tasks or writing servers.
I think Go is not ready yet for writing large applications. It might be OK if you write a small utility, but I cannot imagine an ORM like Hibernate or a web application written in Go.
Also I took a look at the code in the presentation. I wouldn't want to write such code. For example, here https://talks.golang.org/2016/applicative.slide#20 they use static methods (http.HandleFunc(), log.Fatal()) instead of instance methods. So you cannot have two logs or two servers. Using static methods everywhere is bad especially in large applications. Google itself uses Go only for small utilities like simple proxy servers.
I'm afraid HN has a serious problem with downvoters. Why, in heaven's name, is the above a question that deserves downvoting?
UPDATE: hooray - I got downvoted too. Gee, man. Just not worth it. Bye, and thanks for the fish.
I'd venture to guess because it has an unnecessary negative tone to it. Specifically the "Sounds stupid" part. That two word postfix is unnecessary to ask the question and adds a negative and dismissive attitude to the discussion. A good gauge for any comment here is "Would you say this at a business meeting?" I believe the answer here is that the snide remark "Sounds stupid" would be left off when speaking amongst professional company.
I don't know what your original comment was because it's been flagged. But, this slideshow did not work as I expected when I first opened the page. Honestly, if I have to think about how to get to the next slide, your software is worse than PowerPoint. Forgive me for being so harsh.
It's not for you. You are not the primary audience. The user of the software is the person giving the talk. The fact that the slides are online is just a bonus.