It's written with knocking down a very specific set of straw men in mind, but rather carefully avoids coming anywhere close to addressing the legitimate criticisms of Go as a language. One of the things that's most irritating about Go enthusiasts is the way they try to close ranks on legitimate critique and reframe their language's warts as "simplicity".
Also: 20 bonus blub points for pulling the old 'I don't need generics, therefore NOBODY does' gambit.
> rather carefully avoids coming anywhere close to addressing the legitimate criticisms of Go as a language.
The criticisms of Go are addressed every time the language is discussed: in practice, generics are rarely missed. They would be nice to have, but their absence is outweighed by other benefits of the language.
Critiques of Go tend to be principle-based: "Go doesn't have features X, Y, and Z, therefore it cannot be a good language. QED." Praise of Go, on the other hand, tends to be pragmatic: "We built something in Go, features X, Y, and Z weren't missed and we enjoyed features A, B, and C, which the language's detractors oddly refuse to acknowledge." Which view carries more weight is left up to the reader.
I know this is heresy, but has this article really held up well over time? Since 1995, when ViaWeb was founded, how many other companies have been able to run rings around their competitors by using Lisp or something similar? Are we really going to base our arguments on a sample size of 1? How many counterexamples are there?
It's clear that if a powerful language was such a competitive advantage, languages like Haskell would rule the world - instead they're hardly used for business projects.
Along comes Go and in 8 years people have built more production code with it than probably all the functional programming languages of the world combined. If that's not true yet, it will be soon the way the trend is going. And those languages have been around for many decades.
I think this means Paul Graham was wrong - a powerful language isn't a killer advantage. Other things matter more. Like simplicity that means being able to read other people's code and cooperate on a large code base. Strong language tooling. A batteries included standard library. Being able to hire developers not skilled in language X and get them up to speed quickly.
Outside of very specific fields, language doesn't matter.
I was quite strongly attacked on another thread for implying that WhatsApp is just another CRUD app.
The thing was that I wasn't bashing their dev team. They could have done a crazy amazing job, and it helped their company take off, but what was their secret sauce?
The ability to have an (almost) free SMS/MMS app which worked the same across all devices and worked with numbers rather than names/user IDs, followed by network effects.
They could have written it in PHP and had it take off. And a competitor could have written it in Haskell and had it fail.
(FB is written in PHP, MySpace was written in CF, Friendster was written in JSP, so logically PHP > CF > JSP. Therefore, PHP > Java. QED?)
Twitter didn't fail because it was written in Ruby. It failed because of business.
Diaspora isn't FB, not because of tech, but because of business.
I don't think you deserve to be attacked for implying that about WhatsApp. You're right, in that the truth is that in the vast majority of cases, programming language doesn't matter all that much.
Perhaps people disagreed with you because it seems to me that WhatsApp is an outlier where a specific language and runtime (Erlang & OTP) provided a clear advantage in a specific domain (realtime networked messaging). In this specific instance, the language provided facilities that made it easier to solve a specific type of problem than it would have been in other languages. It helped them get to market more quickly, and in that way could have directly contributed to the application's chance of success. I suppose what I'm getting at is that I think there's a decent case to be made that WhatsApp is one of those very specific instances you mentioned where the language is part of the company's secret sauce.
That's still no reason for anyone to attack you, though! It's easy enough to disagree in a friendly way. :)
The problem is that WhatsApp really isn't just another CRUD app and it's very unlikely that you could pull off an engineering feat of that scale with a different technology. It is the exact counterexample of what you say.
The thing is that it is a CRUD app. You send it data (text, images or video) and get back data (text, images or video).
That's it.
It's not that different from Facebook.
Now, to get it to scale it may pay to use a high-reliability language. To make it secure you may not want to write it in C. But it would have taken off the same had it been written in Perl, PHP, Lisp, C, Assembly or Erlang.
It's nothing against Go, but we have to be realistic that having Google's weight behind it significantly increases its marketing while also making business owners a little more comfortable (the modern version of "nobody ever got fired for buying IBM").
There are a lot of frontend frameworks out there but React and Angular dominate largely because of Facebook and Google's influence.
That said, every language has tradeoffs. All you have to be able to do in order to have a rational conversation is to acknowledge those tradeoffs. Go has some tradeoffs but there's a general feeling that the resulting balance is worth it.
Yes. A toolchain that's easy to code in, easy to test, and easy to hire for provides more business value, especially as teams get larger. Why this is assumed to be mutually exclusive with advanced or clever programmers is really too bad, because in my experience the same toolchain actually supports keeping more of the solution in the advanced programmer's head than a complicated toolchain does.
It does seem like Rails -- which relies heavily on Ruby features for speed of development -- has been a big part of the strategy of Silicon Valley startups for many years.
Or Haskell is just too different from what most programmers are familiar with. It's not really Go versus Haskell; it's Haskell/OCaml/ML/Lisp versus mainstream languages.
It's also not like Go is the only popular language. Other popular languages like Javascript, C# or Python have plenty of features and magic. Also, Elixir seems to be doing pretty well, so maybe that's a way for functional languages to gain traction.
Elixir, being newer, was made with modern environments and tooling in mind. Go has the same advantage.
And really, it's Go versus C++, C# and Java that's the actual comparison, not Haskell. That's what Go competes with, and then Python, Ruby, PHP and Node on the web server side.
Lauding Go's success over Haskell is not really saying much.
> I know this is heresy, but has this article really held up well over time?
One immediate thought that I have - programming is now much more accessible. High-quality compilers are much easier to get, good documentation is much easier to find. Far fewer people have done nothing but work on one language, one technology stack, etc. They still exist, of course, but they're far fewer in number.
This means that people are more likely to choose the right tool for the job, which kinda defeats the point of the Blub paradox - that people's perspectives on problems are constrained by the languages that they know.
Seeing as how a lot more people are dicking around with Haskell and Lisp in college and in toy projects, I don't think that this is as much of an issue. You know what generics are, and you've used generics in other code, but you're making the conscious decision not to use a language that has them in order to get better traits in other areas.
Assuming the number of programmers choosing Go eclipses those choosing languages with generics in them, like C# or Java, or duck typed languages.
I don't think it does. So maybe the people picking Go have a preference for minimalism, or they pick Go despite its lack of generics and other features, because of performance, or popularity, or tooling.
This is ridiculous. Where are the strawmen? The article makes very few value judgements about the Go language at all! It's recounting one person's experiences writing Go, but it's not defending any specific features of Go.
He makes one generalization (in the "Overall Simplicity" section) about Go's simplicity being a useful trait, and he's right. You could make a similar observation about Java, but about very few other languages — C, C++, Scala, Ruby and Python all have a lot of pitfalls, hidden side effects etc. that Go doesn't suffer from.
The rest of the article is about how they did testing, how it's one monorepo, how they manage dependencies, etc.
> 'I don't need generics, therefore NOBODY does'
The author said no such thing. He wrote: "Only once or twice did I ever personally feel like I missed having generics". That's all. No generalizations about other people.
I can't believe that I read a nice article about working on a massive project in Go over a couple of years - a meaningful experience that we could probably all learn from - and the top comment doesn't build on the content of the post at all, but is rather a thinly veiled accusation of "Go is a terrible language".
Over time, I've come to notice that once any online forum reaches a certain critical mass of programmers, nearly any mention of a specific programming language will devolve into a flamewar about that language.
I find it frustrating, but I also suspect that it's been that case since approximately forever ago. One day I plan to don a flame retardant suit and wade into Usenet archives to see if programmers were as prone to language flamewars 30+ years ago as they are now.
> ... if programmers were as prone to language flamewars 30+ years ago as they are now.
I'm guessing the answer is probably yes. :)
Your guess is spot on. Not only languages (C vs Pascal, "why doesn't anyone use Ada?"), but probably best remembered for the vi vs emacs crusades. Though about 20+ years ago there were some pretty heated exchanges regarding OOP which seem to still smolder to this day (see HN history for evidence :-)).
There's a reason why Godwin's law[0] came about :-D
The author never said that; he only said that he personally missed generics very rarely.
What other criticisms did they dismiss/reframe?
Lack of stack traces in errors is mentioned and mostly dismissed, and error handling in general is reframed I guess... but these claims are backed up by evidence that it works. Crashes and NPE are extremely rare, and for those standard errors are sufficient to locate problem areas.
But then package management is listed as an honest pain point.
The Blub Paradox argument basically states that you must use the most powerful language because it's the only one that'll let you have a broader perspective to judge all the other programming languages.
Ironically, this kind of perspective is very narrow itself. You gotta consider that the ultimate goal of writing software is to generate solutions that successfully solve users' problems. For complex problems that require lots of people working together, the effectiveness of that collaboration is usually the hardest problem of all, well above the technical problems.
Most modern programming languages can solve all the same problems, in different ways. So the criterion for selecting a language is not power, but how well it facilitates existing and new members of a team making progress.
I agree with your framing that the collaboration aspects of large projects are probably harder than the technical aspects.
However, I think there are still lingering questions about whether the "power" of a language has a noteworthy interaction with the difficulties of collaboration. For example, some might argue that the guard rails put up by FP languages allow them to reason better about interactions with colleagues' code. Talking to colleagues about code interaction is a human collaboration problem too.
If that's the case, then part of choosing a programming language is also in service to mitigating the difficulties of collaboration.
The mentioned paradox is simply not true. How would being fluent in Haskell make you appreciate the need for assembler programming in, say, math optimizations in Go crypto packages, or any low-level stuff?
I would like to hear this claim with more specificity, like if the claim is that learning Clojure will help you predict how the language atoms of Go might lead to some structures, or more narrowly that if Clojure has more supported concurrency models, then it helps you judge Go vs Elixir from a concurrency perspective, or if learning a multi-paradigm language helps you judge a language with a paradigm focus.
You're now the third person in the past couple of weeks I've seen insisting that the only explanation is that Go language users must be ignorant of all these other wonderful features.
But I'd say that in order for that to be true, you must believe that there is a large number of Go programmers who either learned Go for their first language, or their other languages they know are all bereft of these features.
That sounds superficially more appealing in the abstract, but when you try to concretize what languages the Go programmers could possibly be migrating from that don't have these features it gets a lot harder. When I tried this recently on /r/programming, I had at least one commenter suggest that maybe a lot of people learning Go might be coming from Visual Basic. I'm not sure how serious they were being, because I honestly couldn't tell if they were being funny or seriously reaching for the argument that Go programmers must all be coming from Visual Basic.
Because the only practically-likely language that fits the bill is C, and I still don't think there are all that many Go programmers who came to Go from a position of knowing only C.
Personally, I use Go a lot at work, tend not to be too pissed off about it (though certainly every once in a while, I am), and I know Haskell. I don't just mean "I can write the fib function", I mean, I wrote code that used conduits, lenses (the "real" ekmett version, and a non-trivial use where I passed lenses themselves around as first-class objects, not just as a field-accessor library), forkIO, STM, a custom monad, typeclasses, and a type family, so not just a casual "I can write the hi/low guessing game in Haskell".
The key is that what I'm looking for in a work language diverges from what I'm looking for in a personal fun language. I've actually learned (from experience!) to be really nervous about the code that FP enthusiasts and snobs produce in a professional context... FP code that strives to use every cool feature possible is often quite poor by most objective standards. Since I actually understand the paradigms in question, I can often see that they are either using features where they don't belong, or even at times outright misunderstanding and misusing them. On a couple of occasions I've even had to clean it up, so if I sound a bit bitter here, well... it's earned. There is a time and a place for code that uses the minimal feature set possible to get the job done, even if that means a few more lines of code and even if it means some programmers might find what they are writing distasteful to their refined palates, and generally large-scale collaborative code is such a place.
I know what masters can do with functional programming. I've seen their work in the Haskell community. (I fall into the set of people who looooove the ekmett lens library, which even a good chunk of the Haskell community finds "a bit much".) There's a lot more people who can create a snarled complicated ball of "features" than there are those masters, though, and if you work with more than a couple of carefully-selected people, you'll be working with them, not the masters.
I'll submit another explanation for your consideration: A lot of us do understand those other features, and we actually, truly do find that where we are using Go, it is not catastrophic that they are missing. Instead of ignorance, what if instead it is a matter of different priorities and different problems resulting in different optimal solutions?
>... Go language users must be ignorant of all these other wonderful features.
All of your comments are reasonable but I don't think it addresses what grabcocque was complaining about.
It seems grabcocque was criticizing the writing about Go but others seem to be interpreting it as an insult to Go programmers. They are 2 different things!
(My guess is that the word "blub" triggers the misinterpretation.)
All of the following can be true: 1) Go is productive and helps get real world work done; 2) Many Go programmers are happy with it; 3) Go's feature set matches the type of work Go programmers are using it for
However, all those positives are orthogonal to writing flawed arguments about Go. For example, the author writes: "Because Go has so little magic,..."
Whenever I see a writer talk about "magic", "simplicity", "spooky action at a distance", etc it usually turns out that the author picked adjectives that sounded good but not well-defined enough for readers to learn anything useful from it.
Consider the well-known C language function "printf()". Is that "magic"?!? No? Why not? On MS-DOS, its assembly language source code ultimately invokes INT 21h with AH=09h[1].
If we want to be simple & explicit with no magic, why don't we insert that INT 21h/AH=09h inline assembly whenever we need to display a string? What about the "return" keyword that "magically" moves the x86 stack pointer (ESP) back?
Why are those abstractions not derided as "magic"? What is the computer science theory that classifies these things over here as "explicit" and "simple" but those other things over there as "magic" and "complex"?
If an evangelist describes programming languages with words like "magic", you're not educating me because I have no idea what your threshold for perceiving it as such is.
tldr: Golang is a great language but that doesn't excuse the flawed intellectual writing about it. (And not to pick on just Go because some of the Rust essays also suffer the same flaws.)
> Whenever I see a writer talk about "magic", "simplicity", "spooky action at a distance", etc it usually turns out that the author picked adjectives that sounded good but not well-defined enough for readers to learn anything useful from it.
A valid objection.
Here's my personal definition for "magic", which is still a bit fuzzy around the edges, but much more solid than most loose definitions of it: A bit of code is magic when it is not sufficient to examine the tokens the code is made up of and go back to the static textual definitions of those tokens to understand what is going on.
In Python-esque pseudo-code, this is magic:
    class Something:
        def method(self): return 1

    s = Something()
    s.WhereDidThisMethodComeFrom()
and the last line does something useful, rather than crashing with a method-not-found error. You follow s back to its definition, you see it's a "Something". You follow that back to the Something class... and there's no "WhereDidThisMethodComeFrom". Something came along later and added it. Who? Where? Oftentimes these are so magical that grepping over the entire code base for "WhereDidThisMethodComeFrom" may not help, because the name itself may be constructed.
In more Pythonic Python, the following is middling magic:
    @somedecorator
    class Something:
        # entire rest of code example pasted here
Following back to "Something" you can at least see that something has decorated it, and it's a good guess that that must be involved. Still, it's a bit subtle, and decorators aren't generally supposed to do that.
Not magic at all:
    class Something(Superclass):
        # rest of example follows
Ah, WDTMCF comes from the superclass. Inheritance is defined as doing that in the language spec so you can follow it up the inheritance hierarchy. (But note this holds for statically-coded inheritance; the fancier your dynamic setting up of the hierarchy gets, the more magical it gets.)
Go is entirely non-magical by this definition. The two closest things to magic are having an interface value and calling a method on it, where you can't statically determine exactly which method will be used, and struct composition making methods appear on the composing struct, but both are in the spec and can still be traced back. (The only tricky thing about the struct composition is if you compose in multiple things you might have some non-trivial work to figure out which thing is providing which method.) Haskell, perhaps surprisingly given its reputation, is mostly unmagical. (The OverloadedStrings and friends extensions make it a bit magical, and there is some syntax you can bring in via extension which can be tricky. But otherwise you can, if you work at it, pretty much just use term rewriting by hand to understand anything Haskell is doing.) Python can be magical, though the community tends to avoid it. Ruby and certain chunks of the Javascript community can be very magical. (No non-esolang mandates magic that I can think of. INTERCAL's COME FROM operator/statement/whatever it is may be the epitome of magical.)
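To make the struct-composition case concrete, here's a minimal sketch of method promotion (the `Logger` and `Server` names are invented for illustration); the promoted method isn't declared on the composing struct, but the embedding that provides it is visible right in the struct definition, so it can always be traced:

```go
package main

import "fmt"

// Logger provides a method that will be "promoted" onto any
// struct that embeds it.
type Logger struct{ prefix string }

func (l Logger) Log(msg string) string { return l.prefix + ": " + msg }

// Server embeds Logger, so Server values get Log for free.
// Log isn't declared on Server, but the embedded Logger field
// is right there in the definition, so the method is traceable.
type Server struct {
	Logger
	addr string
}

func main() {
	s := Server{Logger: Logger{prefix: "srv"}, addr: ":8080"}
	fmt.Println(s.Log("started")) // promoted from the embedded Logger
}
```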
And like... OK, that's clever, but when I'm debugging the fooClicked() method 4 months from now, I can't just do a Find All for "fooClicked" and track down where it's being called from.
Ask a dynamic language enthusiast about that. Dynamic languages don't have "proper generics" but what they do have is all the generic use cases covered... so... which is more important, having a particular feature, or being able to do all the things that the feature can do?
A generics enthusiast can argue that the type safety is a fundamental difference. A dynamic language enthusiast is obviously going to take issue with that. I'll let the two of you fight it out.
From a Haskell perspective, dynamic types and the sort of generics that Java implements are probably closer together than you might think. Haskell expects your "generics" (which are actually the type classes; what Haskell calls "generics" is more like what other languages call "reflection") to be able to provide mathematical guarantees like "If I have an A, I am guaranteed to be able to produce a B for that", which is much stronger than what imperative generics tend to be able to guarantee.
>Ask a dynamic language enthusiast about that. Dynamic languages don't have "proper generics" but what they do have is all the generic use cases covered... so... which is more important, having a particular feature, or being able to do all the things that the feature can do?
Well, the "things that the feature can do" in a dynamic language you can do in Go with interface{}.
It's the one thing you can't do (in Go or a dynamic language) that Generics are all about: type-safe generic code.
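A minimal sketch of that trade-off, using an invented `Stack` type (this reflects Go before generics, where an `interface{}` container accepts anything and callers must type-assert on the way out, deferring type errors to runtime):

```go
package main

import "fmt"

// Without generics, a "generic" container holds interface{}:
// anything goes in, and callers must type-assert on the way out.
type Stack struct{ items []interface{} }

func (s *Stack) Push(v interface{}) { s.items = append(s.items, v) }

func (s *Stack) Pop() interface{} {
	v := s.items[len(s.items)-1]
	s.items = s.items[:len(s.items)-1]
	return v
}

func main() {
	var s Stack
	s.Push(42)
	s.Push("oops") // compiles fine: nothing says this is an int stack

	fmt.Println(s.Pop().(string)) // only checked at runtime; prints "oops"
	n := s.Pop().(int)            // a wrong assertion here would panic
	fmt.Println(n + 1)            // 43
}
```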
Well, what's more important is what dynamic languages don't give you: knowing what the hell the type is supposed to be able to do when you maintain the code.
I worked in C++, C#, and Java for 15 years before my work in Go. I've used generics. I will freely admit to not having a lot of experience in functional or ML style programming languages a la Rust. I'd love to spend some time getting up to speed on Rust, it seems like an interesting language, and at the very least, a great learning experience.
I would definitely recommend looking at Rust (and the ML family).
I have spent most of my programming life in C++, Java, and Prolog (probably in that order). Two years ago I switched to Go for most of my work, because I needed a native language, C++ was just becoming too much effort, and Rust was still changing monthly. I have written some substantial projects in Go, including a neural net natural language parser and a part-of-speech tagger, both used to annotate web corpora.
Since 1.0 has been released I have slowly transitioned into Rust and am now using it for most new work. What I strongly like about Rust above Go: parametric polymorphism (which I do use on a daily basis), limited operator overloading, sum types, RAII, the borrow checker, Cargo, quickcheck [1]. What I strongly dislike compared to Go: compile times.
Deep in my heart, I like ML more than Rust ;), but having a quickly-growing ecosystem is also important.
[1] There is property-based checking for Go, but in my experience its usefulness grows with the strength of the type system.
I really want to learn Rust. It's been on my next-to-learn list for a while now. I have limited time because of young kids and getting embroiled in politics (actual politics). Someday :)
Rust is not a functional programming language :-) it has features which are ML-like, and it has first-class functions, but it doesn't really encourage functional idioms like the ML languages do.
If you want a great intro to ML-style statically typed functional programming (without the Haskell-style monads and functors jargon), check out https://www.coursera.org/learn/programming-languages/ . You'll get a lot out of even just going through the (10 minute each) lecture videos, Prof. Dan Grossman is a great explainer and it's a treat finding out the cool syntax, semantics and idioms of an ML language.
You're missing his point. Maybe you should consider that people who don't share your opinion ("Go doesn't have this feature, so it sucks") differ for valid and rational reasons, not just out of ignorance.
I actually lean in your direction. I've strongly headed back to static typing in the last 3-4 years after a lot of dynamic typing. I'm just not in the mood to have that argument with dynamic typing enthusiasts right now. :) I've done my time on that battlefield.
I write Go all the time. The number of times I've said "I wish I had generics" is 0. It's absurd how goddamn often this gets said ON EVERY GO THREAD EVER.
There is absolutely 0 chance that in the time you've written Go you haven't encountered a problem that would be best solved by generics. Chances are you're just implementing them in a different way. I say this as someone who uses Go.
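One common "different way" is simply writing the same function once per element type. A hedged sketch (the `MaxInt`/`MaxFloat64` names are invented; this is the pre-generics idiom, not anyone's specific codebase):

```go
package main

import "fmt"

// MaxInt returns the largest element of a non-empty int slice.
func MaxInt(xs []int) int {
	m := xs[0]
	for _, x := range xs[1:] {
		if x > m {
			m = x
		}
	}
	return m
}

// MaxFloat64 is the identical algorithm, duplicated because
// without generics there is no way to write it once for both types.
func MaxFloat64(xs []float64) float64 {
	m := xs[0]
	for _, x := range xs[1:] {
		if x > m {
			m = x
		}
	}
	return m
}

func main() {
	fmt.Println(MaxInt([]int{3, 1, 4}))          // 4
	fmt.Println(MaxFloat64([]float64{2.5, 9.1})) // 9.1
}
```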
If your project has high turnover (say 20%+) then a blub language makes a lot of sense.
Any gains from using a richer language are dwarfed by extra time spent getting new developers up to speed.
I've worked on projects with an average tenure of 9 months and projects with an average tenure of 4 years. The approach you have to take is completely different.
There is just so much variation in our industry.
As an example, the most important work I did on one project was setting up and maintaining a prebuilt dev environment image. It saved many dev-years worth of effort. We were adding ~15 new devs/month.
On the other hand I've also worked on projects where setting up and maintaining a prebuilt dev environment would be a complete waste of resources. We were adding a new dev every ~18 months.
The blub article assumes that the computer is the ultimate arbiter of language utility:
> But Lisp is a computer language, and computers speak whatever language you, the programmer, tell them to.
A program, especially a large one, spends far more time being read by humans than being written. Computer languages need to be readable by humans, they need the right abstractions, and the minimum of those, so that there is little to learn, little to forget, little to be confused by.
If your language lets you build abstractions only your present-day self can grok, it is less useful than a more limited one which lets you build something you and your colleagues can easily understand and modify in the weeks and months ahead.
Doesn't Graham in his Lisp book talk about how using macros properly leads to more readable and maintainable code, and that Lisp is great at producing that kind of code in general because you can mold it to closely fit the problem domain?
Well exactly, this is why languages like Lisp are so seductive and powerful - they let you write languages within the language. When you have a really flexible language like Lisp, it's tempting to produce a meta-language to describe your problem which is specific, terse, and powerful. However there's an interesting trade-off here.
At first this seems like a wonderful solution, but if you ever work with a few other people, and you don't want to be constantly code-switching between your own thoughts today, Bob 6 months ago, Alice 5 years ago, and Bob last year, you don't really want your language to have so much expressive power. It's exhausting and ultimately fruitless in the long term and across teams; what feels powerful and expressive today as one person writing code can change to being a heavy burden when everyone is inventing their own language and forcing you to speak it, or worse when you attempt dialogue with your past self of 2 years ago and have no idea what you were talking about...
Now that's not an argument for lower expressive power always being better, or against generics (for example), but it does behove language designers and users to consider the limits of complexity, as well as the limits of simplicity.
That's a legitimate concern. But then again, the world is full of programming languages. Sometimes you really appreciate what R gives you when you need to do a lot of statistical computing and data wrangling, or what C or Rust offers for systems programming, and so on.
And speaking of DSLs, the Pandas and Numpy libraries in Python are super useful, and they're possible because Python offers enough metaprogramming facilities to create such libraries, which go a long way toward offering the kinds of programming you find in R or Matlab.
So maybe the Lisp way wasn't wrong? Instead of a bunch of Lisp DSLs we end up with a bunch of programming languages.
I think the point is that Lisp lets you create an ad-hoc, poorly-specified version of any language you like, or perhaps several mashed together, which feels great at the time, but feels horrible when you come back to it later.
DSLs/Jargon/New Languages are constructing a new world, and you need to be really sure the costs of that abstraction are outweighed by concrete and lasting benefits in the domain (sometimes they are, oftentimes they are not), crucially, this sort of language is often best used for a very specific domain and nowhere else (say R for statistics).
The DSL doesn't go away. Your only choices are to implement it, or else read and write the boilerplate you get by imperfectly compiling it in your head.
I agree we're just shifting complexity around, and sometimes it's a matter of taste where it goes, but very flexible languages do allow you to construct a world that is very difficult for others to understand or trace execution in.
That same argument could be applied to functions (or procedures, if you prefer) as well as macros, and yet I don't believe anyone sane is arguing against functions/procedures. Bad code is bad code, just as bad metacode is bad metacode.
I don't believe that Lisp macros are particularly tricky for a professional-level programmer. Yes, students can use them improperly — but students can and will use anything improperly! One grows out of it eventually.
Why is this a surprise? Microsoft employs a fair chunk of the researchers currently working on functional programming languages in industry. The ideas get exchanged between Haskell, F#, and C#. Look at C# 7.0, where many of the new features seem to be an effort to make the language friendlier to Haskell programmers.
At least, that's my perspective as someone who uses Haskell.
Actually, I can't deny that I'm a language snob. I also don't think that's a bad thing. I find Go's magical maps and slices, its null pointers, its verbose error handling and its lack of generics to all be in fairly poor taste.
+1 for admitting to snobbery; it is an important step!
Different problems (and people) have different solution domains. For instance, the Union-Find algorithm [1] is straightforward to implement in an imperative language with mutation. It is significantly harder to achieve an optimal immutable version, as demonstrated by a paper from 2007 [2].
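To illustrate the imperative side of that comparison, here is a minimal textbook union-find with path compression in Go (a generic sketch, not the cited paper's version); the whole point is that `Find` mutates the parent array in place, which is exactly what a purely immutable version must work around:

```go
package main

import "fmt"

// UnionFind is a disjoint-set structure backed by a parent array.
type UnionFind struct{ parent []int }

// NewUnionFind creates n singleton sets.
func NewUnionFind(n int) *UnionFind {
	p := make([]int, n)
	for i := range p {
		p[i] = i // each element starts as its own root
	}
	return &UnionFind{parent: p}
}

// Find returns the root of x, compressing the path as it goes.
// The in-place mutation is what makes this so easy imperatively.
func (u *UnionFind) Find(x int) int {
	if u.parent[x] != x {
		u.parent[x] = u.Find(u.parent[x]) // path compression
	}
	return u.parent[x]
}

// Union merges the sets containing a and b.
func (u *UnionFind) Union(a, b int) { u.parent[u.Find(a)] = u.Find(b) }

func main() {
	u := NewUnionFind(5)
	u.Union(0, 1)
	u.Union(1, 2)
	fmt.Println(u.Find(0) == u.Find(2)) // true
	fmt.Println(u.Find(0) == u.Find(4)) // false
}
```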
There are real trade-offs between features in terms of what is easy to express. Your favorite way to program may be more difficult in Go. You think Go's features are in "poor taste." However, it is totally reasonable that different people with different problems might actually like the language. I myself have programmed in many other languages including SML and Scala and believe that for my current problems Go is a good fit.
That said, whenever I have to do even a little bit of numerical work (as I currently am doing) I miss a language with good numerical options (Python, R, Matlab, Julia, ...). Go stinks for numerical work and none of your criticisms have anything to do with why it stinks. Not having a nil pointer would not suddenly make Go a great language for numerical work.
Language choice is once again about trade-offs. I will happily take the trade-off of poor numerical support (less than 1% of my code) for concurrency primitives, compilation to native code, easy C integration, memory safety, and garbage collection. There are things to like about Go and things to hate. I do hate the way errors are dealt with, the poor support for writing collections, etc. But just because I don't like those things doesn't mean it can't "get the job done."
Few people would accuse airline pilots, surgeons, or construction engineers of snobbery for demanding high standards.
When it comes to software, challenging this "everything goes" culture is called "snobbery".
People are called "senior engineer" after 2 years of copypasting JavaScript from Stack Overflow, and become "proficient" in a language in a week.
And yet we wonder why so much software is bloated, unreliable, insecure, and overly expensive to develop.
> and become "proficient" in a language in a week.
I'd argue that this might not be a stretch in some situations. If you have a lot of software development experience in varied environments (back end, front end, desktop, command line tools, embedded, etc.) and with various dissimilar programming languages (static/dynamic, compiled/interpreted, Algol-inspired/not-Algol inspired, etc.), you should at some point be able to pick up a language at a decent rate.
And even though you won't master its idioms, unlike a total newbie, you'll be aware that you don't know its idioms. A sort of "known unknowns", if you will.
There's having standards and there's snobbery. Rejecting a language simply because it doesn't fit your conception of what a language should be is snobbery.
I've chosen Go for projects specifically because it produces better software (within certain contexts). A fast simple binary, a simple language (so co-workers can look and hack on the code), etc.
To act like people are picking Go because they are ignorant or stupid or lazy (which is what the notion of "blub" always implies) is itself unfair and lazy.
I think you and I have different definitions of 'magic'. I agree that Go does not have user-defined generics. I don't think of a few blessed data-structures as 'magic', but I won't argue semantics.
Philosophically, I agree. Pragmatically, generics seem to tempt programmers into writing really weird code in pursuit of some purist notions of terseness or reusability. After having used Go as well as lots of languages with generics, I will say that most Go programmers write very similar code for very similar problems, and it does make it easier for programmers to understand each other's code. There are a few times that user-defined generics would be handy, but those times are really very rare and I'm not convinced that adding generics to the language would be a net positive. In fact, I think the case for algebraic data types is stronger than the case for generics.
I need a tree. Sorted lg(n) structures are incredibly useful, and maps aren't sorted, but as it stands I can't have one without risking runtime type errors.
Static typing that doesn't cover common use-cases is worse than useless. It gets in your way.
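The runtime-error risk being described looks like this in pre-generics Go (a hypothetical sketch: the `Node`, `insert`, and `get` names are mine). A sorted tree has to store `interface{}`, so the compiler can't stop you from mixing value types, and every read needs a type assertion that can panic at runtime.

```go
package main

import "fmt"

// A binary search tree in pre-generics Go: the value must be
// interface{}, since there is no way to parameterize the type.
type Node struct {
	key         int
	value       interface{} // can't declare "value is always a string"
	left, right *Node
}

func insert(n *Node, key int, value interface{}) *Node {
	if n == nil {
		return &Node{key: key, value: value}
	}
	switch {
	case key < n.key:
		n.left = insert(n.left, key, value)
	case key > n.key:
		n.right = insert(n.right, key, value)
	default:
		n.value = value
	}
	return n
}

func get(n *Node, key int) (interface{}, bool) {
	for n != nil {
		switch {
		case key < n.key:
			n = n.left
		case key > n.key:
			n = n.right
		default:
			return n.value, true
		}
	}
	return nil, false
}

func main() {
	var root *Node
	root = insert(root, 2, "two")
	root = insert(root, 1, 99) // compiles fine: nothing prevents a mixed tree

	v, _ := get(root, 2)
	s := v.(string) // runtime type assertion; panics if the value isn't a string
	fmt.Println(s)
}
```

The compiler happily accepts the mixed-type insert; the mistake only surfaces when an assertion like `v.(string)` hits the wrong entry at runtime.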
> I need a tree. Sorted lg(n) structures are incredibly useful, and maps aren't sorted, but as it stands I can't have one without risking runtime type errors.
I agree. See my previous comments.
> Static typing that doesn't cover common use-cases is worse than useless. It gets in your way.
I agree. Fortunately Go's static typing covers the most common use cases. If your primary deliverable is typesafe tree algorithms, Go might not be for you.
I think algebraic data types and not having nil would be a huge improvement to Go. I would agree with you about generics if Go had a good macro or templating system. Having to generate code feels a bit silly.
Grabcocque is not advocating for less in the marketplace of ideas. He's advocating for quality criticism.
But what are you trying to say? That grabcocque doesn't think there's more than one way to do things? That he's a snob? That he should get over himself? Why do you hide your derogatory speech behind implications and snideness?
Grabcocque is not advocating for anything, but attacking the article and Go enthusiasts. No criticism of Go as a language is presented, only criticism of those who claim to be productive in it.
In fact in its original form the comment contained only the link to the blub article and nothing else. This isn't enlightened discourse, this is a knee-jerk reaction :)
I just found it problematic that aaron-lebo was specifically and snidely attacking a user's personality, whereas grabcocque was attacking an article author.
At this juncture the conversation had shifted from Go to a question of conduct.
I've edited the comment and see that this discussion is not productive, but to defend myself, I see no difference between a simplistic "this is blub" response and "attacking a user's personality".
The discussion had nothing to do with the article which is why I responded the way I did.
http://paulgraham.com/avg.html