As I've said before, Go has the advantage of mediocrity. It's boring as a language, but it does automatically most of the things you need for web back-end stuff. It's garbage-collected and does subscript checking, so you're covered on memory safety. There are stable libraries for most things you need in a web server, and those are mostly the same libraries Google is using internally, so they're well tested. The green thread/goroutine approach means you don't have the problems which come from "async" and threads in the same program; there's only one concurrent task construct.
There's a lot to be said for that. You can put junior programmers on something and they'll probably get it more or less right.
Rust is very clever. The borrow checker was a huge step forward. It changed programming. Now, everybody gets ownership semantics. Any new language that isn't garbage collected will probably have ownership semantics. Before Rust, only theorists discussed ownership semantics much. Ownership was implicit in programs, and not talked about much.
Tying locking to ownership seems to have worked in Rust. A big problem with locking has been that languages didn't address which lock covers what data. Java approached that with "synchronized", but that seems to have been a flop. (Why?) Ada had the "rendezvous", a similar idea. Rust seems to have made forward progress in that area.
I used to say that the big problems in C are "How big is it?", "Who owns it for deletion purposes?", and "Who locks it?" At last, with Rust we see strong solutions to those problems in wide use.
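For contrast, in Go (as in C) the link between a lock and the data it guards is still just a naming convention that the compiler never checks. A minimal sketch of that convention (names are illustrative, not from any real codebase):

```go
// By convention, mu guards n -- but nothing enforces it.
package main

import "sync"

type Counter struct {
	mu sync.Mutex // convention says: hold mu before touching n
	n  int
}

func (c *Counter) Inc() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.n++
}

func main() {
	var c Counter
	c.Inc()
	c.n++ // compiles fine, silently bypasses the lock
}
```

Nothing stops that last line; in Rust, putting the value inside a `Mutex<T>` makes the equivalent bypass a compile error.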
Much work has gone into Rust, and it will have much influence on the design of later languages. We're finding out what happens with that model, what's useful, what's missing, and what's cruft.
In the next round of languages, we'll probably have to deal directly with non-shared memory.
Totally shared memory in multiprocessors is an illusion maintained by elaborate cache interlocking and huge inter-cache bandwidth. That has scaling limits. Future languages will probably have to track which CPUs can access which data. "Thread local" and "immutable" are a start.
> You can put junior programmers on something and they'll probably get it more or less right.
I see junior programmers screw up Go catastrophically. Thinking green threads are magical, they forget the existence of mutexes and the resulting code has data races left and right. Languages like Haskell combine green threads with carefully controlled mutation (be it IORef or STM) to avoid this. Go doesn't, so you still need low-level knowledge like how to avoid deadlocks.
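A minimal sketch of the kind of code I mean (made up, but representative): goroutines are so cheap to spawn that nothing reminds you the shared counter still needs synchronization.

```go
// Data race: 1000 goroutines do an unsynchronized read-modify-write
// on the same variable, so updates get lost and the final count is
// unpredictable.
package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	counter := 0
	for i := 0; i < 1000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			counter++ // racy: needs a mutex or an atomic
		}()
	}
	wg.Wait()
	fmt.Println(counter) // often prints less than 1000
}
```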
As I've said before, Go gives you the veneer of an easy language when it's in fact full of traps for those who aren't experts.
Most companies, in my experience, want tools that are easy to get right in 80% of cases, and in 20% of cases require experts. Go has the benefit that most benign work is deadass simple to get right. The things you referenced in your post are obviously the remaining 20%.
Rust, as an example, is a pretty difficult language to become productive in. Sure, once you've invested the time, you're equipped to handle a wider array of problems, but most organizations don't particularly care about the top. They care about becoming relatively productive relatively quickly.
Go's approach to certain problems requires expertise; it is not a silver bullet. But there are no silver bullets, and the examples you gave require expertise in any language. The issue is that you've kind of just cherry-picked those three concepts as proof of complexity, ignoring the boatloads of other examples in which Go's simplicity does add value.
> Rust, as an example, is a pretty difficult language to become productive in.
Perhaps it's experience with other languages, or the fact that I grew up when that sentence started with C++, but I honestly found Rust pretty easy to get up to speed in.
Am I an expert? No, and until I get the chance to use it for a living that won't change. But getting from "I want to do X and Y with this language" to having done so was straightforward enough.
Stripping a language down to the point we can learn it in a week is a bad tradeoff, because we'll never improve after that. The toolbox will always be empty, best treated as a codegen target for abstractions in some more insightful language.
There are plenty of languages where you can go from "Never touched" to "Having contributed something" in a week. Are you saying that those languages aren't productive or valuable?
A language is productive if it reduces the effort to solve your problems. I think some people feel productive as soon as they're able to solve a problem by cranking out a lot of code, but I view that as the language failing to help them.
Totally agree. I never understood why "Go is boring" gets treated as a demerit ... as if it were some slight to programmer swagger. Most engineers prefer simple. Beginner programmers may screw up Go, but in my experience it's more a limited-gestalt issue: they can't wrangle the requirements, the domain types, or the code structure (or tell whether those are in good shape or not) ... Consequently they react to the code only in terms of how it comes to them.
> I see junior programmers screw up Go catastrophically. Thinking green threads are magical, they forget the existence of mutexes and the resulting code has data races left and right....
When I worked at a company that used Go, I saw senior programmers with decades of C++ experience screw Go up in exactly this way.
Go is comparatively easy. However, if you think it is very easy, you will very probably make many mistakes using it.
I have said it many times: thinking Go is easy to master should be considered harmful. Holding that opinion will make you understand Go only shallowly and make many mistakes in it. On the other hand, if you learn Go seriously, Go can help you avoid those mistakes easily (and with a pleasant programming experience).
I see this in Elixir all the time too. Lots of people seem to think that immutability combined with message passing makes their program immune to races. As a consequence I see all sorts of designs with races all over the place.
Yes. Go's locking is really no better than C/C++. There are queues, and there are mutexes. Everybody has those now. Go just has a new story for them. The whole "share by communicating" thing tends to lead to people just putting "tokens" on queues, not the actual data.
But Go does have a run-time race condition checker.
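For anyone who hasn't used it: the checker is enabled with the -race flag on go run/go test/go build. A tiny illustrative program it will flag:

```go
// race_demo.go -- run with:  go run -race race_demo.go
// The detector reports a DATA RACE for the two unsynchronized
// writes to shared.
package main

import "time"

func main() {
	shared := 0
	go func() { shared = 1 }() // write from a second goroutine
	shared = 2                 // concurrent write from main
	time.Sleep(100 * time.Millisecond)
	_ = shared
}
```

Note that it only catches races that actually occur during a run, so it complements tests rather than replacing careful design.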
Yes, in retrospect Go's push to "make concurrency easy" ended up being one of its worst attributes; Go's real value is in its simplicity, efficiency, and boringness, all of which concurrency runs counter to. On the other hand, it's questionable whether Go would be as popular as it is today if it hadn't hyped up the concurrency angle.
> "it's in fact full of traps for those who aren't experts." This is true for pretty much every languages.
Yes, but every language has a different number of traps, and each trap has a different difficulty, and each person has a different likelihood of encountering each trap.
So when someone says that, they mean that it's worse than the average (well known) language on one or more of those axes.
Mutable parallelism is indeed hard, but you don’t have to be an expert to know the basic rules (lock mutable things) and to design your application to minimize concurrent mutation. By contrast, learning Haskell well enough to be productive takes weeks if not months. Haskell certainly addresses parallelism, but it introduces far more significant problems with respect to application development. There are reasons it isn’t widely used in production.
> Tying locking to ownership seems to have worked in Rust
I'm not sure whether I would call it "locking to ownership". But the thread-safety guarantees that Rust provides through the type system (`Send`/`Sync`) are probably the main reason I would consider using it even for applications that would otherwise favor something along the lines of C#, Java, or Kotlin. Those languages are, for me, super productive, and with them you don't have to worry about memory safety either. However, they provide no safety net against threading issues.
And unfortunately threading issues are far too common - basically every time I see an entry- to mid-level engineer using threads, there is at least one issue that will cause some pain later on (because it's invisible, will break in weird ways, and will be hard to find). With Rust the compiler will tell people that something lacks synchronization.
There will obviously still be issues - e.g. if someone tries to be clever with atomics and misses the correct dependencies and ordering. Or just because some architecture is not perfect and things deadlock.
But there are a lot fewer of the "oh, I didn't know this required synchronization at all" issues.
> In the next round of languages, we'll probably have to deal directly with non-shared memory. Totally shared memory in multiprocessors is an illusion maintained by elaborate cache interlocking and huge inter-cache bandwidth. That has scaling limits. Future languages will probably have to track which CPUs can access which data. "Thread local" and "immutable" are a start.
Interesting prediction. It is precisely the threading model that Nim follows: global variables are thread-local, and sharing them is restricted. I'm not sure if the Nim designer thought of it in the terms you've described, but I will certainly point this out to him :)
It's the route JavaScript is taking too. JavaScript has traditionally been single threaded, so all the existing types are single-thread only. Multi-threading is being introduced (still very experimental atm) and the only way to have shared memory is through dedicated "shared" types.
I've settled into sort of a hierarchy. If I really want to hack out a small prototype quickly, I use NodeJS. Not having types helps me change things around quickly. If I already have a pretty good idea of my data model but still want to develop quickly, I'll use Go. If it's something 1.0.0+ and I want to make it as reliable as possible, I'd use Rust. My problem seems to be few of my projects ever get to that stage, so I'm mostly writing Go these days...
You still program with types when hacking out a small project with NodeJS. How does moving type errors from compile time to run time help you move faster? It always made me move slower.
To each their own. For small projects, I feel faster with a scripting language. When prototyping with Go, I constantly have to move back and forth between the type definition and the places I'm using the type. With JS you just use it. Lots of things like manipulating JSON or sending requests have less boilerplate as well. Like I said, once the data model is more firm I move faster with a typed language.
> When prototyping with Go, I constantly have to move back and forth between the type definition and the places I'm using the type.
I do the same but I don't feel slower doing it. I mean I have to move around between function bodies and callers, for example, when I'm hacking in python, so the two don't feel that different to me (moving between a user and a provider or definition of something when hacking on code).
But what does feel different to me is that if I hack up some Go and get a compile error, that feels MUCH faster and nicer to me than hacking up some Python, running it for a minute, and halfway through hitting a runtime type error.
There's at least some speed up when prototyping - think having n functions, and wanting to do some sort of refactoring. You might want to iterate on some refactoring idea, and test it out on the most problematic of those n functions. Dynamic typing gives you the possibility to avoid refactoring the whole codebase (the parts touched by your refactoring, I mean) and test out the refactoring idea on just that function.
I'm not sure I'm convinced, though the idea is interesting.
I feel like if I was refactoring a function and that refactor had a small ripple effect - say I didn't have to change any data structures outside the function and didn't have to change many other functions (callers or callees) - then I don't think there would be much speed-up.
And if this was a small refactor on a larger function, or a larger refactor which included changing data types used in other places, then I want to be sure everything I'm about to run as part of my experimental partial refactor was updated.
Sure, if the type system complains about some unrelated function which no longer compiles then I might lose some time commenting out that function or fixing it up, but also the type system may point out a spot I didn't think was part of my experiment but which really was, and I'll save time on runtime errors or debugging. Or it may point out an issue in unrelated code that I didn't think about but which may make the refactor infeasible.
In my experience this kind of thing is a wash, but I would not be surprised if other people's experiences differ.
> It's garbage-collected and does subscript checking, so you're covered on memory safety.
Go is only memory safe in sequential code, or concurrent code that never accesses shared data from multiple threads. It does not protect against data races as Rust does.
Go needs synchronized and immutable collections, or at least to start trusting users to roll their own. Concurrent slice appends will overwrite each other unpredictably, and concurrent map writes and reads can panic.
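A hedged sketch of the map case (illustrative numbers), which the runtime usually detects and kills the whole process for:

```go
// Concurrent writes to a plain map: the runtime typically aborts with
// "fatal error: concurrent map writes" -- not a recoverable panic.
package main

import "sync"

func main() {
	m := map[int]int{}
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			m[i] = i // unsynchronized map write
		}(i)
	}
	wg.Wait()
}
```

The usual fixes are a sync.Mutex/sync.RWMutex around the map or sync.Map, all of which exist but are opt-in.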
I’ve always found it better to avoid concurrent mutability as much as possible. That has worked very well for me, and I don’t generally have the negative experiences other Go programmers have complained about. Not as nice as static type system guarantees, but it’s available today.
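One hedged sketch of what I mean by avoiding concurrent mutability (names are made up): give the mutable state to a single goroutine and have everyone else talk to it over a channel, so nothing needs a lock.

```go
// The map is confined to the owner goroutine; other goroutines only
// send requests, they never touch the map directly.
package main

import "fmt"

type setReq struct {
	key, val string
}

func owner(sets <-chan setReq, done chan<- map[string]string) {
	state := map[string]string{} // only this goroutine mutates state
	for req := range sets {
		state[req.key] = req.val
	}
	done <- state // hand the final snapshot back once sets is closed
}

func main() {
	sets := make(chan setReq)
	done := make(chan map[string]string)
	go owner(sets, done)

	for i := 0; i < 3; i++ {
		sets <- setReq{key: fmt.Sprintf("k%d", i), val: "value"}
	}
	close(sets)
	fmt.Println(<-done)
}
```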
Yeah, you’re absolutely right that there is always some situation where some amount of shared mutable state is necessary, but there’s still a lot of fat to trim in most cases.
What you say is prone to make people who are not familiar with Go think there is no safe way to access shared data from multiple threads in Go. That is not true. In Go, multiple threads can access shared data safely: when the code is implemented correctly (which is easy), those threads will never access the shared data at the same time.
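For readers who don't know Go, a minimal sketch of what "implemented correctly" typically looks like (illustrative counter, not from real code):

```go
// The shared counter is only ever touched while holding mu, so the
// goroutines never access it at the same time and the result is
// always 1000.
package main

import (
	"fmt"
	"sync"
)

func main() {
	var (
		mu      sync.Mutex
		counter int
		wg      sync.WaitGroup
	)
	for i := 0; i < 1000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock()
			counter++
			mu.Unlock()
		}()
	}
	wg.Wait()
	fmt.Println(counter) // 1000 every time
}
```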
Rust just makes this guarantee at the language level (at compile time), which is different from most other languages, but at the cost of slow compilation, rigidity, and a steeper learning curve.
BTW, what you say shows you lack even basic Go knowledge.
> As I've said before, Go has the advantage of mediocrity. It's boring as a language, but it does automatically most of the things you need for web back-end stuff.
Given that Go is essentially a safer, smarter C, it boggles my mind to see it being used for applications where getting the business logic right is much, much more important than performance or complicated bit-twiddling.
I echo this. I've been exposed to some Go at work recently, and to Rust in my free time. I've been learning both over the last few months.
Go is like C. More like C than any other mainstream language I use. IMO this is bad. C is very verbose and hard to read. Rust is more like C++: close to the hardware but with many high-level language features.
I could rant a lot about what I don't like about Go, but it boils down to this. No generics, bad package system, bad error handling, bad code generation, bad GC, limiting syntax encourages copy paste and bad abstractions, defying common conventions to be cute (like capitalization in variables), mediocre standard library.
I love Rust more as I get better. I already hate Go for some of the above reasons, I wouldn't even use it if it wasn't for work.
Google hypes Go like crazy; I'm convinced that's the only reason it's popular. I think there are many better options, even Java, that don't have many of the shortcomings listed above.
I have written a _lot_ of business logic in Go, and I can say that imo it is a superb language for doing so. Vastly better than C, and I’m coming around to it being better even than my beloved Python.
Mainly that it’s just so aggressively boring (which I love). No exceptions means you have to deal with your errors all the time, and it’s just generally very easy, when dropped into a random spot in the code, to figure out what’s happening. Very little magic.
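A hedged sketch of what that ends up looking like (the file name and function are made up):

```go
// Errors are plain return values, so every call site has to decide on
// the spot what to do with them -- verbose, but there is no hidden
// control flow to trace.
package main

import (
	"fmt"
	"os"
)

func loadConfig(path string) ([]byte, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("loading config %q: %w", path, err)
	}
	return data, nil
}

func main() {
	cfg, err := loadConfig("app.conf") // illustrative path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("read %d bytes of config\n", len(cfg))
}
```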
I am looking forward to the generics 2020 edition actually making it, as Go is becoming harder to avoid in my domain anyway.
I think ultimately Rust's biggest contribution will be pushing GC languages (tracing GC or RC) to also adopt some kind of early reclamation, similar to Swift's ongoing approach; we will reach a good-enough situation and that will be it.
This is true in many respects. Go shares a familiar feel with many other popular languages.
However, there are also things in Go that are not mediocre. Being an almost entirely static language that is nevertheless as flexible as (sometimes even more flexible than) many dynamic languages is one of them.
Beyond a certain scale there are by definition no good programmers or bad programmers; there are only average programmers. So it seems pretty clear to me that depending on them to be experts is a losing proposition.
But which vendor is going to convince the industry to produce software that doesn't depend on the legacy cache coherence stuff so they can leave it off the die? I'm reminded of Itanium where "sufficiently smart compilers" to take advantage of VLIW seemed plausible but never really arrived.
It doesn’t speak well for Itanium if they shipped before they could deliver better bang/buck (even after customers’ porting costs) than the commodity architecture.