
That was the second bug. The first bug was this one (CL sent on 2014-08-19 and merged on 2014-09-24):

https://github.com/golang/go/issues/7978 https://codereview.appspot.com/131910043

I particularly like this comment:

"Ping. Could somebody review this?

Internal Google users suspect this is the cause of some of their problems."

I guess once fixing Google production, always fixing Google production (I ran into this at my next job after leaving Google).

But after that first bug, and having to dive deep into the golang runtime... it just didn't look well engineered to me, at all. Things like architecture-specific details scattered, open-coded, all through stuff like the stack tracer (IIRC when they added ARM support they had to add an `lr` argument to a ton of codepaths). That first app was doing some Cgo stuff, and after looking carefully at how Cgo is supposed to work, how it actually works, and what guarantees there are... it was just a mess. The only way to guarantee memory wouldn't be yanked out from under C was, apparently, to use malloc() and copy, which means you cannot do zero-copy calls into C code. There was no formal specification of what memory pinning is or isn't guaranteed.

This is a general theme in Go. It feels like an ad-hoc language with a bunch of interesting design decisions, but largely superficial; in the end the deeper you dive into it, the more the answer to how things work is "shrug" (and "may change in the future"). It's clearly written by C people with a C mindset, trying to make a "better C" without letting go of the bad habits that C brings with it, and this is more evident the deeper you look. And I say this as primarily a C and Python coder.

Then there's the whole "reinventing libc" insanity, even on macOS (where the raw syscall ABI was never guaranteed to be stable, but they did it anyway, and that ended up with a macOS update breaking all Go apps). On Windows they can't get away with that, so they use Cgo instead, and then we're back at the Cgo mess/overhead. This design decision is also, ultimately, how the vDSO problem happened.

I've also seen a tendency towards bloat in Go (see: the stories about Go binary size forever increasing). I find it particularly crazy that these days they need to have metadata that describes stack layout at every possible instruction in the program, to make their GC work.

I don't mean all of this in a "Go is a bad language that should go away" sense; it has things it does well, and if I ever have to put together a high-performance concurrent network server with no external dependencies I might choose Go. No language is perfect. I just find that, after ending up deep in the bowels of Go, I'm not really inclined to default to it for anything that isn't very clearly its forte.

I'm really looking forward to properly learning Rust, which seems much more serious in this regard, but I had a very different problem there: after writing a relatively simple Rust app once, I tried to engineer a more complex application in it, and ended up unable to wrap my head around how to make an abstract interface work with proper ownership rules. I should go back to doing something in Rust some day, something a bit less ambitious...




Or maybe go back to the ambitious application and take a fresh run at it. Maybe first identify one or two of the aspects you struggled with most and brush up on those; the Crust of Rust series does some nice worked examples (streamed live, but recorded) of some of the harder-to-grasp features: https://www.youtube.com/playlist?list=PLqbS7AVVErFiWDOAVrPt7...
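
If the sticking point was the "abstract interface with ownership" part, the usual starting shape in Rust is a trait plus owned trait objects. A minimal sketch (all names here are invented for illustration):

    trait Storage {
        fn put(&mut self, key: &str, value: Vec<u8>);
        fn get(&self, key: &str) -> Option<&[u8]>;
    }

    struct MemStorage(std::collections::HashMap<String, Vec<u8>>);

    impl Storage for MemStorage {
        fn put(&mut self, key: &str, value: Vec<u8>) {
            self.0.insert(key.to_owned(), value);
        }
        fn get(&self, key: &str) -> Option<&[u8]> {
            self.0.get(key).map(|v| v.as_slice())
        }
    }

    // The application owns the backend outright; everything else only
    // sees the trait, so ownership lives in exactly one place.
    struct App {
        storage: Box<dyn Storage>,
    }

    fn main() {
        let mut app = App { storage: Box::new(MemStorage(Default::default())) };
        app.storage.put("answer", vec![42]);
        assert_eq!(app.storage.get("answer"), Some(&[42u8][..]));
    }

Sharing or borrowing across that interface (Arc, lifetimes on the trait itself) is usually where it gets harder, and that's the kind of thing worth brushing up on first.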

Still, even if Rust is "much more serious", there are coherency problems right at the heart of how we still program computers. Notice that some vital elements at the core of Rust are basically hand-waved, for example the details of how memory works. Other "serious" languages like C hand-wave this stuff too, typically describing a non-existent abstract machine and saying your program will perform as if it ran on that machine, which is fine right up until your low-level code could not exist on such a hypothetical machine and instead depends very much on the details of the real machine the program runs on.
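
A concrete example of that mismatch: the abstract machine has no concept of a memory-mapped device register, so code like this (register address invented for illustration) only makes sense in terms of the real hardware, and volatile is how you stop the compiler from optimizing as if the abstract machine were all there is:

    // Hypothetical device status register; nothing in the language's
    // abstract machine says anything lives at this address.
    const STATUS_REG: *const u32 = 0x4000_0000 as *const u32;

    fn controller_ready() -> bool {
        // read_volatile: this load has effects the compiler can't see,
        // so it must not be elided, merged, or hoisted out of loops.
        unsafe { (core::ptr::read_volatile(STATUS_REG) & 0x1) != 0 }
    }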


It's been a long time, so I'm sure Rust is nicer now too, though that project is well down the back-burner stack at this point, so I'd probably want to find something else anyway (also, in the meantime, a platform that would make a nice base for that project is already being developed... with a Rust codebase... so I might as well wait for that to make progress).

And yes, you can shoot yourself in the foot in any language. The deeper you go into memory models, the more bizarre everything gets. I spent two weeks trying to understand ARM memory ordering guarantees, and that actually wound up with me finding an on-paper bug in Linux's atomic operation implementation on ARM64 (written by the ARM employee doing all this formal memory model work!), complete with a formal proof of the bug (though I'm not sure if any real CPU actually triggers it, so it hasn't been fixed yet, but the maintainer at least knows about it now).
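
For anyone wondering what "ordering guarantees" means in practice, the classic case is publishing data behind a flag. This has nothing to do with that specific Linux bug; it's just a sketch of the pattern where acquire/release (and the hardware actually delivering it) is what keeps things correct:

    use std::sync::atomic::{AtomicBool, AtomicU32, Ordering};

    static DATA: AtomicU32 = AtomicU32::new(0);
    static READY: AtomicBool = AtomicBool::new(false);

    // Writer: the Release store on READY forbids the DATA store from
    // being reordered after it.
    fn publish(value: u32) {
        DATA.store(value, Ordering::Relaxed);
        READY.store(true, Ordering::Release);
    }

    // Reader: the Acquire load pairs with the Release above, so once
    // READY is seen as true the DATA store is guaranteed to be visible.
    fn consume() -> Option<u32> {
        if READY.load(Ordering::Acquire) {
            Some(DATA.load(Ordering::Relaxed))
        } else {
            None
        }
    }

Get either ordering wrong, or have the architecture-level implementation not actually provide it, and the reader can observe the flag as set without the data store being visible.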

To be clear, C is a bad language in this day and age too; I continue to use it because it's supported everywhere, I'm familiar with it, and, like any language, it does some things well. The reason I don't use Go is that I find it doesn't fix enough of C's problems, and it introduces some of its own. Rust does seem to do a much better job here, making much harder guarantees and having more carefully thought-out design decisions. I find it particularly unfortunate, as someone doing low-level embedded work, that Go is completely unfit for that purpose, since it has a hard requirement for an OS. Rust does not, which means I will be able to use it in almost any context where I use C today.
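
For the embedded point specifically: a freestanding Rust binary is mostly a matter of dropping the standard library, roughly like this (a sketch assuming a bare-metal target and panic=abort; the entry symbol name depends on your linker setup):

    #![no_std]   // only `core` remains, which needs no OS at all
    #![no_main]  // with no OS there is no standard `main` entry point

    use core::panic::PanicInfo;

    #[panic_handler]
    fn panic(_info: &PanicInfo) -> ! {
        loop {}
    }

    #[no_mangle]
    pub extern "C" fn _start() -> ! {
        // hardware init and the main loop would go here
        loop {}
    }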


Definitely post on the /r/rust subreddit if you're new to Rust and struggling to architect your code. They're super helpful, and you often get responses from well-known figures in the Rust community.

Perhaps also of interest: the Generic Associated Types (GATs) feature is due to land on stable soon (hopefully before the end of the year), which should bring quite a bit more flexibility to trait definitions, including around lifetime issues.
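
The canonical example of what that unlocks is a "lending" iterator, where each item borrows from the iterator itself. A sketch, using the syntax from the stabilization proposal (details may still shift before it ships):

    // The lifetime parameter on the associated type is the GAT part: it
    // ties each Item to the particular borrow of self that produced it.
    trait LendingIterator {
        type Item<'a> where Self: 'a;
        fn next<'a>(&'a mut self) -> Option<Self::Item<'a>>;
    }

    // Reuses one internal buffer, so each item borrows from the iterator
    // and must be dropped before the next call to next() -- something
    // today's Iterator can't express without allocating per item.
    struct Lines<R> {
        reader: R,
        buf: String,
    }

    impl<R: std::io::BufRead> LendingIterator for Lines<R> {
        type Item<'a> = &'a str where Self: 'a;

        fn next<'a>(&'a mut self) -> Option<Self::Item<'a>> {
            self.buf.clear();
            match self.reader.read_line(&mut self.buf) {
                Ok(0) | Err(_) => None,
                Ok(_) => Some(self.buf.trim_end()),
            }
        }
    }

Until it lands, the usual workaround is to return owned values (i.e. allocate per item) instead.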



