I don't doubt that it has good CPU performance and concurrency; the developer experience just seems so frustrating.
Go was built for software development, and it addresses real, practical problems.
(That said I really didn't like Java either, and it later became my favorite language for a decade or so.)
(I started liking Java a lot more after Java 8 came out.)
I can relate to (and feel the same about) several of the points you made in your blog post.
You actually put a smile on my face :-)
But when you consider how many developers still have a hard time learning Git after so many years, you realize why companies need something as simple as Go.
Another reason is the rise of new specialized engineers (DevOps, data scientists, SREs, etc.) for whom programming isn't the main focus, so they look for easy solutions and languages that are easy to pick up.
Key words being to design and maintain concurrency as compared to C++ or Java. Go is everything but "backwards" in that regard; it's fantastically efficient and actually fun to code. It builds upon CSP (Communicating Sequential Processes), a formal language designed in 1978 for concurrency, and it's Pike's third time (third time's the charm!) in that regard. They aced it, seriously, and thus scaling is part of the core paradigm (if you want to build for that).
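For readers unfamiliar with CSP, here's a minimal sketch of that style in Go: goroutines communicate over channels rather than sharing memory and locking (function names are illustrative, not from any real codebase).

```go
package main

import "fmt"

// squares feeds 1..n through a worker goroutine over channels and
// collects the squared results, in order, on the other side.
func squares(n int) []int {
	in := make(chan int)
	out := make(chan int)

	// Worker, CSP style: it owns no shared state, it just
	// receives on one channel and sends on another.
	go func() {
		for v := range in {
			out <- v * v
		}
		close(out)
	}()

	// Producer: feed the input channel, then close it so the
	// worker's range loop terminates.
	go func() {
		for i := 1; i <= n; i++ {
			in <- i
		}
		close(in)
	}()

	var res []int
	for sq := range out {
		res = append(res, sq)
	}
	return res
}

func main() {
	fmt.Println(squares(3)) // [1 4 9]
}
```

No mutexes, no condition variables: the channel operations provide both the communication and the synchronization.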
The trade-off is in elementary complexity: a small core and little syntactic sugar (the "absence" of generics, etc.). But in actual practice, people are hard-pressed to find cases where Go applies (we're talking server / CLI / infra software, mostly backend and middleware) and where the absence of generics is actually an issue. It turns out, in actual practice, you can very pragmatically approach the problem from the Go programming paradigm's standpoint and solve it quite elegantly (if a little verbosely sometimes, but it's nowhere near C++ or Java either once you factor in everything else).
The "absence" of a feature in a nonetheless 'complete' programming language is only a lack if you want to think this way and use said abstraction; the makers of Go argue that in a number of cases, it's not just possible but better to keep it simpler at the unit level, because you massively improve readability, and simplicity makes performance and security also simpler.
Keep in mind, when they made Go, they never thought it would become so big, nor was that the intent. They made Go for Google to solve Google-y problems of scaling. It just turns out, as Docker illustrated, that many other companies and problems could benefit from Go. Hence, it has its place. It'll probably never be "THE" programming language like Python is, but it doesn't try to be either. I find there's elegance in that restraint, a certain nobility in staying true to the UNIX philosophy of solving your problem space, no more no less, and doing it as efficiently as possible.
As for "frustrating", one of the language's explicit goals is to "make programming fun again" — back to sitting in a Google office to solve tech-giant scaling problems without going insane thanks to C++. Maybe they failed for you : ) But for many people, beginners notably, the language is really fun — if you forget everything else and strive for "idiomatic" Go, in syntax and patterns and architecture. The tooling is young but great: things like integrated testing, benchmarking, documentation, and a fast compiler.
Sorry for the long piece. This low-profile language is a jewel if you don't expect it to be Python or JS. I hear the same enthusiasm from the community (let's call it: love) as I used to hear from Ruby lovers, for instance — the "match" is real between the Go philosophy and some people.
I'll add that Go's community has real lasting power for a number of positive reasons, and it seems to attract HN-levels of smart, so you learn a lot, quite fast.
Does anyone know if there's any big picture work on GC throughput in progress? The idea of the ROC (fast freeing of memory never shared between threads on goroutine exit) was canned since it regressed on some workloads and I'm having trouble finding signs Austin's idea with hashing memory landed either. A moving GC or a read barrier seem mostly out because they can hurt other things (C interop, performance of code that doesn't allocate much). Though they seem to have addressed pauses quite well, and lots of code out there isn't GC-heavy, it still seems like a key area where it might be possible to give a "free" throughput win to existing code.
Are we behind the curve?
Python on the other hand yielded almost 200.
This is in the US, not sure if it's the same in other countries.
They will also often have projects in "support languages" for internal tools and services, often in Python at least, and more and more in Go. Those are often unlisted.
Edit: Just noticed you said in your country. I suspect this is true broadly but what do I know?
Try searching for "golang".
This page shows job postings per programming language from searches on Indeed.com: https://www.codeplatoon.org/the-best-paying-and-most-in-dema... Go isn't on the list. The TIOBE index has Go at #17: https://www.tiobe.com/tiobe-index/
There is an absolutely mind-boggling amount of Ruby jobs in the US.
If a skill commands a higher salary, does that mean I can actually get paid for it more in Houston, or does it mean that it's only used in tech hubs which pay way more to keep up with cost of living?
Also, for a sense of scale, this rewrite took about 4 years to complete from first concepts.
The most important advantage of Go was that it is statically typed and there was a broad agreement that we should use a typed language to prevent errors. The other advantage was that Go is faster than Python, allowing us to move logic out of the C codebase and unify the high-level operations in a single codebase. The other other advantage of Go was that it is not C++, for which we were all grateful. As with most real projects, the rewrite occurred concurrently with adding features and tightening coding standards — `go fmt` was a big motivator and the Go codebase had much more comprehensive test coverage than the Python one.
>Also, for a sense of scale, this rewrite took about 4 years to complete from first concepts.
That’s not quite accurate. In fact the Python production code was largely ported by mid-2015, and the Go library was feature-complete by 2016, albeit still linking to C. However, there was a lot of non-production prototype code, implementing more complex optimizations and written in a few additional languages, which took another two years to port, and also to be made production-ready. Perhaps what’s important is that we were able to keep adding new improvements while still “porting” this code. (Real life is a poor laboratory.)
>effectively never updated the library.
It is, or was, still maintained. Most of the very low level file operations had been optimized to death already. Higher-level functionality was slowly moved out of the C library. I left Salesforce in 2017 so I’m not sure what happened next. (Incidentally I left not only Salesforce but tech entirely - I’m now at grad school for medical physics.)
>I imagine it is easier these days to find Go devs than C devs
Maybe? None of us knew Go when the porting began. I think if you know at least 2 programming languages and one of those is C-like, Go will be a piece of cake. I was hired with no Go experience and ramped up quickly.
That said, considering the requirements this thing sounds like it had to meet, perhaps Go was the right choice after all. Unlike certain other projects I know about, where they HAD to write everything in golang/microservices/k8s/etc. because it HAD TO SCALE: that took 18 months instead of the 3 months it would have taken with Rails, but credit where credit is due, those 2 or 3 requests a minute (peak) are handled very, very quickly.
Yes, in this particular case I don't think 'the new hotness syndrome' applies. The performance limits of Python, given that it's dynamic, and its multithreading problems are fundamental to the language.
The only thing lacking is more community love for PyPy and similar endeavours.
Implementing compilers for dynamic languages is no trivial feat. The amount of labour that went into something like V8 to get it to its current performance is astonishing.
This is from personal experience where Go is used when performance is needed, and the end result is a single program, vs a C lib and a Python wrapper, gaining some level of simplicity along with the needed performance.
We don't get quite the performance of C, but we also don't need to include memory management. Sort of a happy medium between Python and C.
Not really, actually: there are still loads of C devs (or C++ devs at least) and a lot more jobs in C and C++ than in Go.
Having made engineering decisions between C++ and Go, my key reasons for picking Go over C++ are:
* Simplicity when multi-threading
* It's much easier for someone who knows neither language to become productive in a professional environment in Go than in C/C++
* Fast compile times - no more typing "make" and then going off to get a coffee
* Lots of modern niceties that are more fragmented in the C world: third-party vendoring, unit testing, style guides, etc.
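On the first bullet, here's a minimal sketch (names are illustrative) of what "simplicity when multi-threading" tends to look like in practice: fanning work out across goroutines with `sync.WaitGroup` and collecting results through a buffered channel, no manual thread management or locking.

```go
package main

import (
	"fmt"
	"sync"
)

// sumConcurrently sums each chunk in its own goroutine and
// aggregates the partial sums through a channel.
func sumConcurrently(chunks [][]int) int {
	var wg sync.WaitGroup
	// Buffered so no goroutine ever blocks on send.
	partial := make(chan int, len(chunks))

	for _, chunk := range chunks {
		wg.Add(1)
		go func(c []int) {
			defer wg.Done()
			s := 0
			for _, v := range c {
				s += v
			}
			partial <- s
		}(chunk)
	}

	wg.Wait()
	close(partial)

	total := 0
	for s := range partial {
		total += s
	}
	return total
}

func main() {
	fmt.Println(sumConcurrently([][]int{{1, 2}, {3, 4}, {5}})) // 15
}
```

The equivalent in C++ (threads, joins, a mutex or atomics for the accumulator) is doable but noticeably more ceremony and easier to get subtly wrong.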
Mixing C and C++ like you do makes little sense, they're very different.
Maybe the Blub paradox has been hitting its limits. The point of using leftfield expressive languages was that developer time is orders of magnitude more expensive than machine chugging time. But maybe web-scale-big-data-etc. needs enough (latency) juice that the scales tip and the comparative advantage is clearly on the side of adopting "industrial-strength" enterprisey technologies again.
The argument is nothing to do with machine chugging time, and is entirely towards developer time. The problem with expressive languages like Lisp, Ruby, Python, etc. is that the language ends up varying from person to person - the more expressive the language, the more variance there is. This is a feature when you're a small team because the abstractions you build let you move quickly, but it is a bug when you're a large team maintaining a piece of software over years, where developers have come and gone. The ramp-up time to learn and understand the various abstractions that people have built over the years ends up accumulating and cancelling out the gains that those abstractions gave earlier on.
Blub languages on the other hand tend to be more uniform, so it's easier for someone who isn't very familiar with the code to dive in and understand what is going on.
And yet Java somehow manages to be both a boring language and still have too many things to learn. This achievement is probably not appreciated enough by those who criticize it.
While Go doesn't have formal semantics that let you specify whether a value is allocated on the heap or stack, it seems like it would be easy enough to create your own. The compiler has flags that make it report the sites where it heap-allocates. If you add a comment such as `// noalloc` above a function or an allocation site, you could write a "linter" that compares the compiler's reported allocation sites against those comments and errors out if a noalloc site allocates.
In lieu of allocation semantics, this seems like a better approach than writing performance tests for each of these sites.
Perf-sensitive code can often be that way. Some innocent coworker comes in to add a feature, doesn't get why two idioms are not equivalent from the compiler's perspective, and so they change things.
Meanwhile, you're busy doing something else. The code is still correct from a testing standpoint. It still satisfies your code standards. But it no longer meets your response time expectations. It's a lot of work to maintain the sort of toolchain that lets you reliably spot this sort of thing at build time or even in pre-prod environments, and it's difficult to identify places where those tools are failing to detect problems until there's been a problem.
I worked on this project; my comment is here:
When I replied to their survey that I wanted "tech articles written by other developers", I was imagining a platform for Stack Overflow authors to contribute longer-form work -- an idea that's been floated by staff for most of the life of the site! I wasn't expecting random cross-promotional content.
The truth is probably rather ordinary: even dull corporations want to look cool.
Python doesn't handle threads well; that can be true. But if you are straining Python's threading system, you ought to take a look at your own abstractions and design. Chances are you won't get very far before you hit the same bottleneck in a different language: if Python gave up at X and you can achieve 100X with Go, you will almost certainly hit the same issue when your data grows 100X. (I am aware computational complexity is a factor, but the author claims that part is handled in C; from what is described, it looks like they replaced Flask with Go.)
Actually, having worked with some great type systems in my life (especially in the ML family of languages) I feel they can assist with design and thinking through a problem by explicitly stating your intentions without being tedious. Obviously type systems like C++/Java do not fit this bill.
But lastly, they are even more useful in my experience when dealing with other people's code. Types can be self documenting to an extent and provide certain guarantees (or lack of guarantees) about the code itself. Of course how much information the type system conveys varies wildly based on the power of the type system (Haskell vs. C++) and the desire of the original developer to communicate intent through types.
Once you have types in your source code, automatic refactorings are not just blazingly fast, they are also safe.
These days, dynamically typed languages have realistically zero advantages over a modern statically typed language.
From my experience with Java/C++, your data model is sacrosanct; it cannot be touched, especially if you have untraceable downstream dependencies.
Python is a bit like a high-interest credit card for coders. You can use it to make it look like you just whipped out a huge project from scratch, but you will spend the following decade paying down the tech debt.
Although it's not for everyone, if you're talking about large projects, you could use metaclasses to enforce type checking during instantiation.
Tech debt can sometimes be unavoidable. I often find that a project stops progressing due to fear of breaking something and not knowing how to fix it. In Python this fear is reduced by almost an order of magnitude compared to other programming languages.
Also, in typed languages used in large projects, the project itself behaves like a standard or a protocol: people memorize the values of the bytes each request contains, people start maintaining informal conventions for accessing the project, or it becomes some hobgoblin of Word documents, PDFs, and Jira.
And God forbid you need to change the data model to a more optimized one: you would most likely have to throw away most of your code, which you won't do, because solving all the build issues and waiting for a full recompile doesn't go on the clock as time spent coding something useful.
Static typing allows for a lot of static decisions, which in general brings better performance for a language. CPython pushes so many decisions to runtime that it takes a very significant performance hit.
But the article cites more significant challenges related to typing. Not performance, but design:
> "First, Python uses loose typing, which was great for a small team rapidly developing new ideas and putting them into production –but less great for an enterprise-scale application that some customers were paying millions of dollars for," he writes.
Cue the reminder that Python is in fact strongly (but dynamically) typed. But LeStum's point stands: dynamic typing hurts developers trying to read and write unfamiliar Python code. Static typing with mypy should help out a lot, but I don't think it's very popular yet.
For performance, IMO in this order you should consider (1) PyPy, (2) multiprocessing, (3) cython and/or c-extensions. (and I suppose implicitly (0) analyze your algorithm, exploit numpy where possible). If you exhaust those, Go seems like a great alternative.
They are all very limited in what they can do (none can replace the others) and bring tons of complications.
Modern languages can give you all of that without big issues.
The article never claimed this:
There are tradeoffs with both static and dynamic typing. Know the tradeoffs, know your business, then make the right decision for your use cases.
As much as I hate JS, Node.js is actually better for one of these servers. Then again, it's mostly the database that determines the schemas.
Maybe they'll end up rewriting large parts of their system, if not in Go then in some other fast, statically typed language like C# or Java.
Code review is supposed to catch people trying to mix return types, mess with or overload parameters in unexpected ways.
Static typing removes this need but adds a million more.
If your people aren't catching this stuff in review, they're probably not catching other stuff either; that is, static typing isn't going to save you like you think it is.