> *Even if we are using a language that manages memory itself, like Erlang, nothing prevents us from understanding how memory is allocated and deallocated. Unlike the Go Language Memory Model documentation page, which advises "If you must read the rest of this document to understand the behavior of your program, you are being too clever. Don't be clever.", I believe that we must be clever enough to make our system faster and safer, and sometimes that doesn't happen unless we dig deeper into what is going on.*
I dunno, article seems right about this. Golang docs sound condescending.
Golang isn't a language for the person writing code today—it's a language for the person who has to come along and take over their codebase after they leave.
The farther away I get from "fresh CS grad", actually, the less I give a shit about catering to the person writing the code and helping them express themselves optimally, and the more I care about how much hair I'll lose trying to figure out someone's meta programming or shoehorned-in-because-it's-fashionable functional crap. Either of which I can write just fine, thank you very much, but god knows I'm sick of inheriting such codebases, to the point that these days I basically just won't do that anymore.
You have a Rails "app" that's been through three development teams? Cool, good luck with that, next opportunity, thanks. You're not only using React, but your codebase is purely functional (I mean, your functions are objects, you know, but that's quibbling, OK, sure, it's purely functional conceptually-speaking) top-to-bottom and you've got all kinds of supporting libraries to make Javascript contort into kinda-halfway working for this pattern? And you're thinking about porting to Purescript so you can better express yourselves with types? That sounds very nice, but I'm sure it's just too smart for me, thanks for the offer.
You're using Go, Typescript where you have to, and your mobile apps are boring, straightforward native code? Oh thank god. When do I start?
> it's a language for the person who has to come along and take over their codebase after they leave.
Hard disagree. I'll grant that in the trivial case, basic CLI apps that have to do one thing, the maintainability of Go is high, maybe even for a very simple microservice.
But once you get something more inherently complex, especially if it must do concurrency in all but the most trivial of things (especially as you get concurrent requests that must handle faults, say from a crappy network or a 3rd-party system), the language is far, far less easy to get a mental model of, much less debug in anger (or with a system in prod), or even write tests for. In my experience, Go tests are terribly illegible; often my Go coworkers just don't write them, which is worse.
Maybe I've just been spoiled by elixir (relevant given the OP topic). And for the record, I've inherited a codebase that has gone through three-ish teams. I work on my corner and I can be fairly sure that the changes I make, features I build are isolated and safe. And this is even with parts of the codebase that I cringe at (and over time I gradually refactor them, with confidence). I have all the tools to ameliorate the team code, and even for the crap parts, reading isn't hellish, and refactoring feels good.
In JS, functions are objects, yeah, and most frontend functional languages still require some amount of FFI, so we haven't totally lost touch with reality. One could make the argument that JS was one of the first truly mainstream languages with significant FP features: first-class functions, lambdas, and closures. So why the hell are there people out there modeling everything in objects? JS is a multi-paradigm language; my feeling is that as long as you're meeting the needs of the business and users, the paradigm you use should really be up to the engineers.
I’m sure there are people who do functional programming who fit your description and I'm sure I would also find them quite annoying, but as far as the community goes, I believe them to be on the fringes. I don't see them at FP conferences, meet-ups, or in chatrooms. Many of us are motivated by the same reasons you stated, we want code and test suites that we find straightforward and easy to maintain.
I'm not sure how fashionable these languages really are, even talking to experienced engineers, when I say I prefer to work in Haskell, often they've never heard of it and think I've said "Pascal".
Of those only Java (and, by association, Kotlin, since you'll likely be interacting with Java) strikes me as often-hard-to-read, mostly because of the cultural love of using ten layers to do what two or three layers could. But the tools are good enough that it makes it merely annoying, rather than productivity-torpedoing.
What's so hard to understand? Go combines static typing, green threads, garbage collection, and lower-level control over memory [1]. These aren't new ideas. It could have been more ambitious. But it's the only mainstream-enough-to-be-useful language that offers that combination of things, making it useful for certain classes of problems (obviously not your problems).
To compare to the "PhD level languages", last I checked:
Kotlin: no green threads or low-level memory control
Swift: no GC, no green threads, no reflection
TypeScript: no threads of any kind or low-level memory control
Objective-C: I've used this extensively. It was even more verbose than Go, the non-C parts were less efficient, and it even had a similar primitive type system (although it got a few more features at the end of its life). Also no GC or green threads.
I get that you hate using Go, but your Go-developers-are-just-idiots take that you post in every thread is basically just flamebait at this point. It's currently a useful tool for problems where Kotlin is too high level but that don't need C++. In a few years the JVM will probably have value types and fibers, and Go will have generics, and maybe then it will be possible to compare apples to apples.
[1] value types, explicit pointers, default buffer/list type that can refer to arbitrary memory ranges, etc
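Concretely, that footnote maps to something like this (a trivial example of my own, not from anyone's codebase): value types live inline and are copied on assignment, pointers are explicit, and a slice is just a view over an arbitrary range of backing memory.

    package main

    import "fmt"

    type Point struct{ X, Y int } // value type: stored inline, copied on assignment

    func main() {
        backing := [8]int{0, 1, 2, 3, 4, 5, 6, 7}
        window := backing[2:5] // a slice is a view over a range of the array
        window[0] = 42         // writes through to backing[2]

        p := Point{1, 2}
        q := &p // take an explicit pointer when you want sharing instead of a copy
        q.X = 10
        fmt.Println(backing, p) // [0 1 42 3 4 5 6 7] {10 2}
    }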
"The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt. – Rob Pike 1"
"It must be familiar, roughly C-like. Programmers working at Google are early in their careers and are most familiar with procedural languages, particularly from the C family. The need to get programmers productive quickly in a new language means that the language cannot be too radical. – Rob Pike 2"
Most of that could apply to Kotlin also (familiar from Java, etc). Go's relative lack of features seems more to do with their idiosyncratic programming philosophy [1]. The tone is a bit condescending, but they clearly consider it good enough for themselves, and it has usually become the primary tool of choice when Go team members work on new projects.
Personally I can see some appeal of fewer features. When trying to solve some medium-sized problem in Go, I often end up working through in my head 5-10 ways to structure it with Go's existing features (and almost always one of them ends up being satisfactory). If I had a lot more features, I could work through 20+ ways, but what good would that do?
Of course, I'm looking forward to user-defined generics (although their absence hasn't been critical for me). I used them in C#, but I would run into limitations [2]. Although it's taken a long time to arrive, I'd prefer to work with the system that the Go team has designed (and now accepted). To me this doesn't make them look bad?
People complain about nullability etc, but I just don't spend a lot of time on those bugs...
Let's say I just don't understand other languages. What language should I use then? I need memory control (value types), reasonable perf, GC pauses under 2ms if there's a GC at all, and cross-platform support. That seems to mostly leave Rust and C++... are the extra features really worth giving up GC over?
That isn't a rhetorical question by the way, that's how I ended up with Go.
If your position was "most projects should use Kotlin and not Go" I think that's defensible. But you constantly saying Go programmers just can't understand other languages (despite plenty of contrary evidence) gets tiresome.
Possibly, but the other side of the spectrum doesn't look any prettier either: people need to spend nearly a decade in the industry just to be able to have a decent grasp of some of these languages, because someone, somewhere decided that bundling a bucket, a rocket, an anti-gravity device, and a Stanley knife was a good idea. I've gone through the majority of the comments in this thread and it's still the never-ending search for the holy grail of programming languages.
Google is an enterprise. Go is an enterprise language. For the usual enterprise reasons: bean counting, internal politics, and cubicles.
It is a great language because it solves Google's problems. That's true of all Google tools and services. To the degree a person has the same problems as Google, their tools are great. Most people and organizations don't have Google's problems to a significant degree.
> Yet the majority of Google's problems are solved with Python, Java, Kotlin, and C++, not Go.
That's not very informative... Google had millions of lines of code in Python, Java, and C++ before Go hit 1.0 (to the extent that Kotlin enjoys wide adoption inside Google, it could be explained by compatibility with existing Java code, especially the Android niche). It would be more interesting to look at Greenfield projects which don't depend on existing Google libraries, although even then Go would have to be quite a lot better than the competition in order to get a team with a long history of proficiency in Java to migrate to a new, relatively unproven language.
> Go doesn't address any Python issue, besides AOT compilation, easily solved if PyPy had more community love.
Are you suggesting that if PyPy had more investment it would become an AOT compiler rather than a JIT compiler? Anyway, "if only the community invested in it more" is like the whole problem with the Python ecosystem. If the community invested in solving package management, Python wouldn't have such an abysmal package management story. If the community invested in a smaller C extension interface, PyPy could be fast and compatible and CPython would be freer to optimize without worrying about breaking compatibility.
I've been working with Python professionally for ~15 years now and I'm pulling for it, but come on... The idea that Go is only better with respect to AOT (ignoring that AOT is not an end to itself) is out of touch with reality. Go is 2-3 orders of magnitude more performant than Python, it's statically typed (yes, mypy exists but it's still alpha quality and I'm happy to back up that assertion if necessary), it has a sane package management story (yes there are warts, but that's still far better than the Python package management story), documentation is much better in every respect (the tooling, the centralization, the readability, the accuracy, etc), etc. These aren't small things--I've seen projects fail for poor performance, and I've spent weeks trying to find a way to get fast, reproducible builds (before giving up).
There are certainly areas where Python bests Go (e.g., data science ecosystem, web frameworks, etc), but the idea that Go only has AOT over Python is absurd.
Had Docker not happened, and had the Kubernetes team not gotten some new members who pushed for the Go rewrite of the original Java prototype, we most likely wouldn't even be talking about Go.
No doubt those are prominent applications in the Go ecosystem, and thus I can understand why people with little Go experience (or little experience circa 2012-2015) have this perception, but Go was gaining momentum in the devops and network-services spaces before those apps came to fruition.
Personally, I started using it because it was the only language at the time that made it easy to build and deploy static binaries--I didn't have to learn a DSL to download and wire together a bunch of dependencies (nor run a flaky daemon in the background because the performance of the CLI is so poor) or cargo cult the incantation to package them into an archive or figure out how to make sure that my targets have compatible versions of the target runtime or etc. Similarly I didn't have to figure out how to wire together a third party testing library and integrate it into the build system. Similarly I didn't need to worry about packaging and hosting my libraries or documentation. I also didn't need to operate an external web server like uwsgi or jetty--Go's standard library has a production-quality HTTP server builtin (and its concurrency features made it fast and scalable). The language features were also a pretty big boon: it had value types, first class functions, type inference, "duck typed" interfaces, goroutines, etc and best of all: an ecosystem free from the pitfalls of OOP (inheritance, banana-gorilla-jungle architecture, etc).
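For the static-binary-plus-builtin-server point, this is roughly all it takes (toy handler, obviously; the path and port are placeholders): go build spits out one binary with no external runtime, and net/http serves each request on its own goroutine.

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        // Standard library only: no external web server, no framework.
        http.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "hello")
        })
        log.Fatal(http.ListenAndServe(":8080", nil))
    }

Cross-compiling is just a matter of setting GOOS/GOARCH at build time, which was a big part of the easy-deployment story for me.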
Frankly, it's just an amazing little language. I do wish I could take its runtime, tooling, familiar syntax, etc. and plug in a different type system, mostly one with Rust-like enums and generics; however, the particulars of a static type system are relatively minor factors in the outcome of a given project.
At the time Go was getting started, most software was either Java, which had to coordinate with the JVM installation, or scripting languages, which required you to manage a bunch of packages. Static binaries were virtually a feature unique to Go among the popular languages in web dev and devops.
"Most people and organizations don't have Google's problems to a significant degree"
Except in this particular case, I believe the average junior Googler is far more capable of handling an advanced language than the average junior employee of any regular corp.
Which means Go's tradeoff toward "simplicity" is probably the wisest for them as well.
No, I'd argue that both are incapable; you can't teach experience. Junior Googlers are just more "dangerous", as they probably start their careers on bigger problems while sharing a similar set of "unknown unknowns" with the average junior.
Enterprise languages, like Java and Go (in contrast to Scala or Perl), are designed to keep people on rails. This can be accomplished by, for example, limiting the number of ways a problem can be solved by the primitives of the language.
This is why the "move fast and break things" paradigm at Facebook didn't align with me. They basically announced that they're not concerned about these rails. Not to say it's a bad approach, FB is a very successful software company, just one that doesn't align with _my_ values. I think they've since moved away from this.
I get that Go doesn't have generics and is generally in the C-family of languages, but beyond that there's not much to compare--I'm shocked that anyone would even make the assertion. Go has first class functions, value types, generic slice/hashmap/channels/etc, goroutines, and type inference, and it lacks inheritance and constructors. Those different feature sets predictably breed very different programming paradigms. Notably, I recall early Java was all about inheritance and distributing your object graph arbitrarily among constructors.
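A contrived dozen-line sketch of that feature set (names made up): value types, implicitly satisfied interfaces, first-class functions, type inference, the built-in slice, and goroutines talking over a channel, with no inheritance or constructors anywhere.

    package main

    import "fmt"

    type Greeter interface{ Greet() string } // satisfied implicitly ("duck typed")

    type User struct{ Name string } // value type, no constructor

    func (u User) Greet() string { return "hi " + u.Name }

    func main() {
        users := []User{{"ana"}, {"bob"}} // built-in generic slice, types inferred
        shout := func(g Greeter) string { // first-class function taking an interface
            return g.Greet() + "!"
        }
        ch := make(chan string)
        for _, u := range users {
            u := u // capture the loop variable for the goroutine below
            go func() { ch <- shout(u) }()
        }
        for range users {
            fmt.Println(<-ch)
        }
    }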
Honestly I'd argue that that is more of a Google/large corporate problem. The thing about junior devs is that in order to hire a bunch of them and get anything productive out of them, you need a ton of people above them. You need to spend more time on design to give them more clear requirements, you need more senior devs to provide mentorship, you need more management.
Smaller dev shops don't have that, they can only bring on 1-2 juniors at a time and hope for the best.
That's a very weird reference. The Go memory model docs aren't describing garbage collector implementation behavior. I think they're correct that if you need to study up on the happens-before spec, you're on thin ice.
I think you are confusing memory management with the memory model. Memory management is about garbage collection, refcounting, malloc/free, and allocations. Memory models are about what happens when you read and write shared mutable memory. I'm not an Erlang programmer, but in general the actor concurrency model does not support shared mutable memory. Don't be clever.
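To make the distinction concrete, here's a toy example (mine, not from the docs) of what the Go memory model is actually about: it's the channel operation, not the garbage collector, that guarantees the write to msg is visible to the reader.

    package main

    import "fmt"

    var msg string

    func main() {
        done := make(chan struct{})
        go func() {
            msg = "hello" // write to shared memory
            close(done)   // the close happens before the receive below completes
        }()
        <-done // without this synchronization, reading msg would be a data race
        fmt.Println(msg)
    }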
your browser extension sounds so sick (heard about it 20 minutes ago on the Crystal post) - any chance you'd make it available, or are you concerned some villain would use it to perform title based post manipulation?
I use Refined Hacker News[0] to get a list of past threads at the bottom of the page. dang himself has said[1] it overlaps surprisingly with their internal extension, so it might prove useful
The bottleneck is that it's a single codebase in which moderation functionality and ordinary functionality (browsing, voting, commenting, editing) are blended together. I'd need to tease those apart in order to release a public version, and it's not a high-enough priority compared to the other things that need doing. One of these days! The upside will be that when it does finally get released, it will be the fruit of many years of development, so there will be a lot of goodies in there.
EDIT: how to do it for yourself: register, bookmark the page in your browser and then open quarchive.com to see the comments on hn/reddit etc. If you have ideas send me an email
I think it's also super interesting how Erlang gives you hooks in its FFI to create and destroy C ABI entities that are refcounted and thus tied to the GC.
I had a waveform animation that would get jerky periodically due to the GC sawtooth. Given a relatively fixed cost per frame, the GC would tend to come every N frames, which is the sort of pattern the human brain is good at picking up on. This meant the behavior was perceptually 'worst' when you just sat and stared at the screen, doing nothing else.
So I just started forcing a GC any time the buffers were full at the end of the previous frame. The next frame could be drawn entirely from buffer, which only took a couple of milliseconds, reducing the potential jitter from a few frames to maybe half a frame. You'd still get random stutters but there was no pattern to it anymore, unless the network was being very badly behaved (which in this case was useful information).
Not only did that smooth out the display, but it let us increase the max number of things you could display by a little over half.
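In case it's useful to anyone, a rough sketch of the idea in Go; the original could have been any GC'd runtime, so runtime.GC() and the frame-loop names here are just stand-ins for whatever forced-collection call and render loop were actually involved.

    package main

    import (
        "runtime"
        "time"
    )

    func main() {
        buffer := make([]float64, 0, 4096)
        for frame := 0; frame < 120; frame++ {
            buffer = append(buffer, make([]float64, 1024)...) // samples arriving
            drawFrame(buffer)                                 // cheap when drawn from buffer
            if len(buffer) >= cap(buffer) {
                // Enough is buffered for the next frame, so pay the GC pause now,
                // at a boundary we chose, instead of wherever the collector's own
                // schedule would land it (which is what produced the visible rhythm).
                runtime.GC()
                buffer = buffer[:0]
            }
            time.Sleep(16 * time.Millisecond) // stand-in for the frame budget
        }
    }

    func drawFrame(samples []float64) {} // placeholder for the real draw call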