> "We also weren't sure if we would be able hire top talent if we chose Go, but we soon found out that we could get top talent because we chose Go."
I feel that a smart/talented C/C++/anything developer can go from someone who has never seen or heard of golang to a proficient and productive Go developer in a matter of a few weeks, maybe even _days_, if not less.
That's how long it takes to go through the following materials (and fav some for later reference) and play with the language a bit.
And some more similar things, most of which you can get to from the golang.org site. The language, and even its website, are so concise that you can literally just go through everything there, one thing after another.
This is my personal opinion, based on playing with Go over the last few weeks/months, and I'd love to verify this theory. Go is not yet the primary language I do things in (I use C++11 atm), but for all my side tasks it has proved indispensable, and I found it very easy to pick up. I can't wait until I start doing all my work in Go; that will be the true test of its productivity.
It's quite strange to me that people would identify as or look for a "[language] programmer". Sure, I happen to write more C++, Python, and C than anything else, but I've dabbled in just about everything and could reach comfortable proficiency in a matter of weeks. Most of programming and all of computer science is universal.
Any serious programmer should be a polyglot by default.
> Any serious programmer should be a polyglot by default.
It depends on whether you're going to spend years training someone or whether you need an expert right now.
My experience is that it is impossible to maintain expert level skills in more than one or two language + library environments. You can remain familiar with other environments but you don't have the time to be an expert.
While I sometimes switch between C-family languages for different projects, it can take months to get up-to-speed with the changes in a language and its environment since you last programmed in it. I'm talking about situations where I know the language well but the environment has changed. Languages change a little but the libraries they use can change dramatically. And along with the change comes a whole body of implied knowledge about how to safely and effectively use it all and this impacts on how you use the language itself.
If you've literally never programmed in a language before, it can be a few years before you know about all the eccentricities, before you understand why the language follows certain patterns, before you understand the risks with certain behaviors.
If you need an expert now, not in 12 months' time, then they need to know both the language and the environment. If you can wait a couple of months, then they still need to know the language.
"It depends if you're going to spend years training someone or if you need an expert right now."
I'm not going to claim that there are no legitimate reasons to hire people who are narrow experts in Blub (and only Blub), but I'm having a hard time thinking of any.
At an established company, you've got the luxury of time -- there's rarely a good reason to "need an expert right now" that isn't just a contractor. At a startup, where hiring the wrong person is a disaster, hiring an "expert right now" is like holding a loaded gun to your head. Ideally, you should be hiring "T" people -- lots of breadth, with lots of depth in at least one area.
A truly good programmer will pick up your language/framework in days, and be at full productivity in months, even from scratch. It's hard to do better than that.
Even at an established company, a really smart language newbie could write a bunch of ad-hoc code that does almost the same thing as well-understood libraries that everyone more experienced in the language uses. Even if his code was relatively high-quality, now you have a bunch of extra stuff to maintain, and everyone who interacts with it will have to learn this thing instead of just using the library everyone knows. I've done this in languages I'm unfamiliar with and will probably continue to do so.
Not Invented Here syndrome goes away with general experience. It doesn't come back every time you switch to a new language.
Just because I've never written Erlang doesn't mean that I will automatically try to write a random-number generator (say) the first time I need one in Erlang. I have enough experience to look for a library function first.
Empirically, NIH tends to be more common in single-language developers, not less. People who place a lot of value on their "expertise" in Blub tend to do so because they're over-weighting the importance of their memory of the API details. When they don't automatically remember something, they leap to the conclusion that it doesn't exist. They're also typically a less-experienced cohort than people who have written in lots of different languages.
I wasn't talking about NIH syndrome, I was talking about "I don't know that a common library exists for this standard use-case so I'm going to write my own one-off because I have a job to do". I mean, you can google for libraries but sometimes you just don't find them and then find out a few weeks later what you should have used.
"I don't know that a common library exists for this standard use-case so I'm going to write my own one-off because I have a job to do."
That's called "laziness", and is even more likely than NIH syndrome to be mitigated by experience. Certainly, having lots of coding time with a single language doesn't make you less lazy, and having lots of experience with different languages doesn't make you lazier.
It's certainly possible that you'll miss some oddball idiomatic way of doing things in a new language (e.g. Python itertools or using C++ STL algorithms or something like that), but this is rarely a real problem. The job gets done correctly -- and in any case, we're all learning new ways of doing things. It's not as if you gain total prescience after N days on the job with Blub.
The point isn't that the generalist programmer will be 100% correct in all the details in new language, it's that they'll be able to quickly (and correctly) implement the important parts in whatever language you're using. Idioms tend to be the low-order bit of a solution anyway.
Things like the Counter in python collections. Actually, the python collections library in general. You don't need that stuff to do the job, but it makes the code a lot clearer.
It probably took a couple of months full time in python before I had enough understanding of the std lib to know where to find everything (I sat down and read through a description of most of the functions in there).
There are still plenty of libraries that would have made my life easier had I known about them, and I'm sure there are plenty more.
Having said all that, that's just one example. When it comes to hiring I'd still pick a good experienced developer over a specific language expert for a full-time position. For me it's not so much the understanding of a specific language as the appetite for understanding computers and systems in general that makes truly good developers.
Ultimately the task is going to dictate the best type of hire. Short term contract, get a specific expert. Long term hire, find someone who actually enjoys developing software :)
I think there is a fine line between using a library and writing your own to better understand the domain.
It's something to do with how critical the library functions are to you / your system. I would never write my own compression software, but I can see why people would just to learn about the trade offs.
That's where code review can be such a valuable tool. I was learning Python coming from Ruby about a year ago and, sure, the languages are similar and it wasn't too hard to get started, but having code review sped up how quickly I became proficient by a lot. It's great having a bunch of people around you that can say things like "There's this thing that exists, don't do that." or "It's more idiomatic to do this that way instead of how you're doing it." or "People hate that because, do it like this.". It makes a huge difference.
It depends. If you're the only one knowing the language at this company... it's a bad idea, because the bus factor becomes unacceptable.
Otherwise you have a methodology problem: you don't do daily standups, where you'd have the opportunity to say "Yesterday, I started implementing X because I couldn't find a library" and more experienced coworkers would tell you "Use library Y instead".
Being a programmer is not about knowing everything. With experience one should be able to separate library functions that should always exist from those that are unique for each language. Everything else should just be a matter of information searching.
You talk as if we develop code by chiseling it out in stone. We write it in text files, usually with IDEs. No one needs to learn the new libraries the newbie made, they can talk to the newbie, point out what the library already gives (and where to look in future) and write to well known interfaces.
Any project should have time allocated for "tech debt" and that's where this sort of thing gets addressed.
"I'm not going to claim that there are no legitimate reasons to hire people who are narrow experts in Blub (and only Blub), but I'm having a hard time thinking of any."
Blub is a very useful tool but has a bunch of quite esoteric gotchas that an expert will be able to avoid due to long experience.
An expert has used almost all of the Blub framework over the years, including the more popular third-party add-ons, and already knows what's best and fastest in a variety of situations without having to think about it much.
"A truly good programmer will pick up your language/framework in days, and be at full productivity in months, even from scratch. It's hard to do better than that."
Yeah, but then HR won't be able to tick off the 'Blub expert' box and, well, nobody would ever get hired! Or something...
By this definition, Ruby/Rails are pretty crappy (I kind of disagree). As a newcomer, the amount of stuff going on in a Rails app and the stack trace when there's an error are pretty overwhelming. Meanwhile, people expound on how simple and elegant Rails is. Lately I've been thinking that this is because those people started using it 5 years ago when it was small and their knowledge has built incrementally with the environment.
My other theory is that it's a lot harder to track down bugs for a newcomer because like C (and unlike Python in most cases), importing functionality from another file is implicit. That is, when you 'require' a bunch of files, there's no indication which functions are coming from where. For me, this is one of the things Python solves marvelously (it's generally considered bad form to 'import *').
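Go (this thread's subject) happens to sit on the explicit end of that spectrum too; a toy sketch, just to illustrate what "you can always see where a function comes from" looks like:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Every imported identifier is qualified by its package name, so
        // there's never a question of where Split or Println came from.
        parts := strings.Split("a,b,c", ",")
        fmt.Println(parts)
    }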
Rails is pretty crappy if you ask me. It's "omakase" which is Japanese for "acts according to what DHH wants despite what the community wants". And there's a lot of magic happening that isn't explained very well. There are better frameworks in Ruby. Ruby itself doesn't take all that long to be an expert at.
Though I could argue that all the languages pretty much take the same amount of time to become an expert. Just because some languages are supposedly higher-level than others doesn't mean that the complexity they allow you to tackle (and associated challenges as a programmer or "expert") is any less.
Especially since being a good programmer is more about design, choice of interfaces, reactivity to change, etc. which are by definition language agnostic.
I firmly disagree that Ruby requires any less effort than say C, C++, Java or LISP to become an expert.
I didn't mean to say that Ruby was less complex than C; I meant to say that if you take out the magic it is comprehensible, and one can get good at it quickly enough, just like a straightforward language like C. I agree that learning design etc. is the real key in any language, thanks for pointing that out, because that's what everyone should focus on.
That's kind of what I'm talking about. I've never heard of Pry. With Rails I have to learn about the 100 most common gems (devise, paperclip, mongomapper or mongoid), Pry (thanks for that), bundler, rvm, and ActionEverything before I can be productive (or understand a simple app). With Node.js, something newer with less "maturity", I figure out npm and I'm good to go.
I really don't think it's FUD to say that Rails has gotten much bigger in the past 5 years, and it's definitely not FUD to say that as codebases and tooling grows, so does barrier to entry.
The specific thing I guess you're objecting to is that it's harder for a noob to understand implicit imports and where something is coming from if you don't know much about what you're importing. If you use a tool to solve that language deficiency, that doesn't remove the deficiency from the language. By that logic, adding an IDE to Java makes it a very concise language.
Until your form isn't coming out quite right, and you need to start digging around in the source for the form class to figure out why. The happy path generally doesn't need expert-level skills, but debugging often results in a quick deep plunge.
A seasoned Java developer is expected to have worked with one of the mainstream ORMs like Hibernate or Ebean. But a Go developer gets away with not having to know an ORM because there is no mainstream ORM in Go. :)
So what, the lack of a mainstream ORM implies "So there is really nothing to learn besides the language itself"? I don't think so.
And there's probably more to the lack of an ORM than "Go is immature." It's a fairly common opinion in the Go community that ORMs are not worth their complexity. I tend to share that opinion myself, after having worked with a few in a couple of different languages.
I've moved away from ORMs in Java because they don't really solve the problem at hand. Hibernate could be a really good solution for providing a blanket system for SQL interaction, except for the performance issues. If one were really using domain objects (not POJOs or dumb DTOs), the typing ability of Hibernate would be great. Unfortunately, even for simple CRUD apps the issues presented by Hibernate (like loading it in an EAR) are such that it requires too much maintenance time on complex projects.
To be clear, I've used Hibernate for years. I've used it with DTOs. I know that it does provide a certain benefit for simple reads and writes. Unfortunately, Hibernate was designed to work in a disconnected manner, like the Web. As a result you get into complications of session management. This is just one example of Hibernate's maintenance costs.
Now I use Spring JDBC. This removes the noise of checked exceptions, connection management, and transactions, just like Hibernate does. But I write simple or complex SQL in one place and map in and out contentedly.
Well, I can't speak to Java here but NHibernate works fantastic. I've checked the generated SQL on complex joins and it always ended up writing what I would have by hand.
In C# the session management is no big deal: you access everything via the Repository pattern, turn on trait injection and just make sure any repository method that will need a session has a transaction attribute. If you need to do a series of read/writes in one transaction you throw that whole method into the repository class and put a transaction trait on that method.
It's not. I create a DTO and I set up an exact mapping of how that appears in the database. Then I create a type generic Repository pattern that makes the DB access explicit as opposed to NHibernate's magic database access. Then the rest is just one time wiring I set up so I don't have to worry about actually connecting to the DB in my code.
In any solution you come up with you'll either need a repository that handles sessions and so on, or you'll have to explicitly connect to the db every time you need it. Either case will be more work than what I've done with this solution.
What's wrong with using SQL in your program? As long as your database layer is able to perform parameter substitutions to avoid SQL injection, this is a pretty efficient way to get stuff out of the database (and only the stuff you want). Why would using an ORM be a 'requirement' for OO-oriented languages?
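In Go terms, a minimal sketch of what I mean with the standard database/sql package (the users table and the mattn/go-sqlite3 driver are just placeholder assumptions here):

    package main

    import (
        "database/sql"
        "fmt"
        "log"

        _ "github.com/mattn/go-sqlite3" // driver choice is an assumption
    )

    func main() {
        db, err := sql.Open("sqlite3", "app.db")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // The ? placeholder is filled in by the driver, so hostile input
        // is never spliced into the SQL string itself.
        name := "o'brien'; DROP TABLE users;--"
        rows, err := db.Query("SELECT id, email FROM users WHERE name = ?", name)
        if err != nil {
            log.Fatal(err)
        }
        defer rows.Close()

        for rows.Next() {
            var id int
            var email string
            if err := rows.Scan(&id, &email); err != nil {
                log.Fatal(err)
            }
            fmt.Println(id, email)
        }
    }

You get exactly the columns you asked for, no session cache, no lazy-loading surprises.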
There are quite a few problems using SQL directly in your program. Separation of concern issues, typing issues (e.g. the compiler can't tell you the USER table isn't appropriate here because it has no way to see what you're doing).
In a multi-paradigm language you wouldn't be tied to using an ORM, but you should still use something that provides some kind of anti-corruption support. In an OO language you can have nothing but objects, so there is nothing else your database library can return. It may as well return objects that represent the records rather than, say, a list of strings.
Even though it is terribly out of fashion, a database access layer of interface classes that wraps and abstracts SQL prepared statements and/or stored procedure calls is almost always much more efficient than an ORM, provided the SQL is written by someone minimally competent in SQL. ORM can be much faster to implement, though. Ah, classic tradeoffs.
I find a lot of people who use high-level languages are terrified of tediously direct contact with the machine, and a lot of people who use low-level languages are terrified of the performance costs of abstraction. I'm terrified of both.
I think that in general, C++ is a lot harder to get your head around than C, being for all intents and purposes a superset, and easily a five times bigger one at that.
That's how you gauge the experience of a programmer: how much of the field she/he's terrified by.
EDIT: I originally meant this as a joke, but seriously, if someone accurately knows where the gotchas are, that's valuable. Also note if they're biased to false positives and/or false negatives, and by how much. Are their heuristics for dealing with unknown territory efficient and likely to converge on good approximate results?
Thanks guys, I guess I really have had some rough times. I like how Lisp and Assembler at the top of the hierarchy capture the two extremes. Some hypothetical language in the middle would be great, but maybe the best we can do is to straddle that point, e.g. with C++ and Python.
Rather than straddle the middle, I think I'd go with as high-level a language as I could get, coupled with a simple low-enough-level language to get whatever performance benefits I needed. Some combo like Python/C, Clojure/Java, or maybe some other lisp dialect and C.
http://benchmarksgame.alioth.debian.org/u32/benchmark.php?te... might be a better link to show how Ada performance compares to C: ranging from twice as fast to three times as slow, with a median in favor of C. It's true that it's pretty close. I don't know if those numbers are representative of how performance works out in the real world; any insights from your experience?
I derive that the Nth most popular language on GitHub is used in 24% × N^-0.81 of projects, with an R² of 0.93. This suggests that Ada should be in use in about 0.98% of projects on GitHub, which makes me wonder why https://github.com/languages/Ada/updated?page=10 can only find 200 Ada projects that have been updated in the last nine months. (JS, the #1 language, has 200 projects updated in the last 22 minutes.)
C++ and Python pretty much run everything, everywhere. Standard, open languages are hard to beat when you want to fully control your development stack and not worry about future control issues. I wish Go were an ISO standard like C++; I'd be more interested if it were.
RPG was more oriented towards mini-computers than mainframes, from what I recall. Its niche was the IBM S/36, S/38, AS/400 and iSeries boxes. I'm sure you probably could get RPG for your S/360 or S/390 or whatever, but from what I've seen over the years, it was mostly COBOL, PL/I, and MVS Assembler on the mainframes.
I agree with the premise that, as a professional software engineer, it is my responsibility to be a polyglot.
As for myself, when I started writing Python, I mainly wrote C or Java code in Python - in a similar way, perhaps, how early C programs were littered with __asm__() constructs.
It took a long time to learn how to write things in (though I hate the term) a "pythonic" way. That is, to learn the common language idioms that are not taught in any tutorial (or are buried in PEP 8), which illustrate common patterns in Python code, the ins and outs of PYTHONPATH, etc., etc.
So while I agree that a reasonably proficient programmer can, in a weekend, pick up a working knowledge of a given language, being a "Python Programmer" to me means that one has developed an intuition for the common patterns, libraries, pitfalls, platforms, and clever specific features of a given language and its ecosystem.
I agree, but I think the age of the language and community is another factor. Java, Python, C++, these are all old languages with decades of history and habits. Newer things like Node.js and Go have no history, no baggage to learn or avoid. I think starting in something with such a clean slate is somewhat easier because there is less ecosystem to learn.
I once heard a quote like this "programming languages are frozen knowledge about software engineering". (Anybody got a clue who said something like this?)
New languages usually improve upon older languages by making certain errors impossible by design. For example, Go allows pointers but not pointer arithmetic. With C we learned that pointer arithmetic is often harmful (though sometimes necessary), so Java removed pointers from the language (not completely, though). Go takes a sensible middle way, because having no pointers at all is sometimes ugly. Such a clean slate is good, because with older languages programmers fight their language's deficiencies with habits (e.g. if (5 == x) instead of if (x == 5) to prevent if (x = 5)).
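A quick illustration of that middle way, as a sketch:

    package main

    import "fmt"

    func main() {
        x := 5
        p := &x // taking an address is fine
        *p = 10 // so is dereferencing
        fmt.Println(x) // prints 10

        // p++         // compile error: no arithmetic on pointers
        // if x = 5 {} // also a compile error: assignment isn't an
        //             // expression, so the defensive if (5 == x)
        //             // habit from C isn't even needed
    }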
On the other hand, we will find the pitfalls and dark corners of Go over time, but currently we do not know them enough. Go will acquire its own conventions, but we do not know all of them yet. This is the risk of a clean slate.
> On the other hand, we will find the pitfalls and dark corners of Go over time, but currently we do not know them enough. Go will acquire its own conventions, but we do not know all of them yet. This is the risk of a clean slate.
That's bullshit. Go is not a "clean slate", and a number of design issues in Go are known and have been known since the first release, regardless of its designers' refusal to acknowledge them. We know pervasive nullable types are a source of errors, we know shared-memory concurrency is an error-prone default, we know a lack of generics makes userland code painful and that generics are hard to retrofit into an established language (and even Go's designers know it; why do you think they built special-case generic collections into the language itself?), we know allowing errors to be implicitly ignored is a bad idea, and making errors easier to ignore than to handle also is. These are not recent discoveries; they're well known, and there are a number of possible strategies for handling them.
And Go's worst sin, to me: we know that foisting complexity and repetitiveness upon the user leads to forgetting, and forgetting leads to mistakes. And that's exactly Go's approach to errors, resource management, and shared-structure mutability. Human error is something you can very reliably bet on; human infallibility... not so much.
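To make the error-handling complaint concrete, a small sketch; both of these compile without so much as a warning:

    package main

    import "os"

    func main() {
        // os.Remove returns an error, but nothing forces you to look at it:
        os.Remove("/tmp/scratch") // result silently discarded; compiles fine

        f, _ := os.Open("config.json") // explicit discard, just as easy
        _ = f                          // note: f is nil if Open failed
    }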
This is why Joel Spolsky correctly observed that the replacement of C/C++ and functional languages with Java in university CS curricula is a tragedy. If you don't understand pointers or recursion, you are not a polyglot and you cannot pick up just any language in a matter of weeks.
The "[language] programmer" (where [language] usually = Java or .NET) trend is a misguided attempt by industry to commoditize programmer talent.
My intro programming course (in high school) was in C++, and I think it would've scared me off programming entirely if I hadn't picked up mIRC-script (a very different language) on my own as a hobby. There is so much accidental complexity in C++ that it's a pretty terrible introductory language. We literally spent several weeks of the semester on how to get input and output to work properly through the giant mess that is iostreams.
At the introductory level, I think the biggest issue is teaching "computational thinking" or "procedural literacy": getting people thinking about the idea that you're writing specifications for a machine to carry out computations. From that perspective, it's best to pick a language that lets you get to algorithmic logic as quickly as possible.
Me too (well, OK, my first programming class was in Fortran, taught by an 85-year-old man who spent most of the time telling us about how much harder it was back when you had to use Fortran).
I hated C++. I still think it's a fairly terrible language. But, for fun I took the Harvard CS50 course to refresh my knowledge of C and I found that WAY better than my C++ course. I think C is brilliant for introducing programming because it's very very simple, yet also very very difficult. There's not much to learn, except a lot of concepts (memory usage, data structures, etc).
I also think Objective-C is a really great language though, so I might be crazy. But, you give me a choice between C++ and a language that is basically C with a few additional keywords and garbage collection, and I find that an easy choice...
C++ is terrifying, and teaching an introductory programming course in C++ is an awful idea. Using plain C would make much more sense. C is also a better choice than Java because Java doesn't force you to learn about pointers, or memory in general, really. Countless Java bugs are introduced by programmers who don't understand what operations give you a copy of something, and which ones give you a reference to something (especially if that something is mutable).
I also think Objective-C is pretty good though, especially with the addition of ARC. It's like a better C, with Smalltalk-style objects and a lot more (mutable and immutable) datatype options. Much saner and less painful than C++, but more challenging than Java.
I think he was talking about pointers as an "idea", not as a language feature.
References in high-level languages are mostly hidden pointers. A good knowledge of how pointers work is really important for a good understanding of how high-level languages work (a lot of Java programmers don't really understand the distinction between value and reference types, for instance).
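Go at least makes that distinction explicit; a toy sketch of the idea:

    package main

    import "fmt"

    type point struct{ x, y int }

    func main() {
        a := point{1, 2}
        b := a           // b is an independent copy
        b.x = 99         // ...so a is unaffected
        fmt.Println(a.x) // prints 1

        p := &a          // p refers to the same point as a
        p.x = 99         // now a itself changes
        fmt.Println(a.x) // prints 99
    }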
Likewise, being at ease with recursion is also very important, as many complex problems are recursive by nature and so are way easier to solve using functional techniques.
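The classic case is a tree walk, where the recursive version reads like the problem's own definition; a sketch:

    package main

    import "fmt"

    type node struct {
        val         int
        left, right *node
    }

    // A tree's sum is its value plus the sums of its subtrees,
    // and an empty tree sums to zero. The code is the definition.
    func sum(n *node) int {
        if n == nil {
            return 0
        }
        return n.val + sum(n.left) + sum(n.right)
    }

    func main() {
        t := &node{1, &node{2, nil, nil}, &node{3, nil, nil}}
        fmt.Println(sum(t)) // prints 6
    }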
And I think that is a valuable lesson too. As a developer, the end-goal is to produce something of value, working software, not spend hours frowning over pointers and references. That was my early experience with C/C++, in any case. I did end up finishing my assignments (ranging from a fractal generator via a microcontroller operating an RC boat to a simple 3D FPS game), but I did feel a lot of overhead with non-functional things.
Of course, you get the same thing in Java when you first encounter a NullPointerException ;)
I agree that any serious programmer should be able to be productive in any language, but mastery of a style of programming takes time.
It's easy to move about in the same style of programming, but trying to move someone from enterprise-y OO (Java, C#) to functional (Haskell, Lisp) or even Go is a bit of a leap. Concepts often just don't cross over. A Go channel, for example, makes sense to an Erlang developer, but not a Java developer. Goroutines don't make sense to someone familiar with traditional threads.
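For example, a minimal sketch of the channel style:

    package main

    import "fmt"

    func main() {
        ch := make(chan string)

        // You don't manage a goroutine like a thread; you launch it and
        // communicate over a channel instead of sharing state and locking.
        go func() {
            ch <- "hello from a goroutine"
        }()

        fmt.Println(<-ch) // the receive blocks until the send happens
    }

Someone used to mutexes and thread pools has to unlearn before this feels natural.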
There's a certain amount of rewiring that needs to be done. Looking for a "[language] programmer" is strange, but looking for someone proficient in the style of programming used makes sense to me.
> It's quite strange to me that people would identify as or look for a "[language] programmer".
I understand people doing this.
First, you can be more sure of what you're getting. It's sad that tests like FizzBuzz (sketched below) are so useful, but it's a fact. If I hire a Java developer for a Java position, I can figure out their Java skills. If I look at a PHP developer, it's more of a crapshoot. They may have a lot of PHP experience, but can they convert that to Java, or have they just been doing cargo-cult stuff? This would be a smaller issue with something like C#, which is closer to Java.
Second, and I suspect more common, is time investment. It might take someone quite a bit of time to switch languages. If they haven't done it before, you might find out it's a weakness for them. In my experience, at least in smaller companies, the fact that you're hiring means you need someone now. That extra time could be killer, because you waited too long to start hiring.
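(For anyone who hasn't seen it, FizzBuzz really is this small, which is exactly why it works as a floor test; one Go version:)

    package main

    import "fmt"

    func main() {
        for i := 1; i <= 100; i++ {
            switch {
            case i%15 == 0:
                fmt.Println("FizzBuzz")
            case i%3 == 0:
                fmt.Println("Fizz")
            case i%5 == 0:
                fmt.Println("Buzz")
            default:
                fmt.Println(i)
            }
        }
    }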
I would at least expect candidates to look into what we're using. When I changed jobs 2 years ago, one place I applied to was a Python shop. I've never used Python professionally, but I've tinkered with it. If you're applying for a Java position, I expect you to have at least looked at Java before the interview. Sadly, I bet candidates show up without having done even that a non-trivial amount of the time.
I didn't get hired for the job; after the first interview I bowed out. I ended up in another mostly-Java position (although we've also started doing some Obj-C).
I had 4+ years of working on a reasonably sized and complicated Java application. In addition, a few years before I used Python for personal projects so I could show some experience.
But it turned out the position was largely for front end development, and not some of the more back-end stuff I'm interested in. I think they liked me (don't know how I compared to other candidates) but I thanked them for their time and told them the position wouldn't be a fit for me.
If your code in that language is good enough then you can usually get a job for it. Of course, I'm talking about places where programmer/HR department have a big overlap. I don't much like working for the other places.
I think to completely explain this phenomena, one would have to make references to people's skill levels, and it's hard to explain it in a few words. So I'll just not say anything at all on this subject beyond that.
If only the morons that hire could figure this out. I almost NEVER use Java, but obviously I have and CAN use Java. Yet interview after interview demands that I use Java day to day in my current job to be considered. And this is for data transformation/analysis jobs, in which my use of Python makes me dramatically faster and more productive than my Java using colleagues.
Generally I don't understand "C++ programmer" to mean a programmer who is an expert in C++, and C++ only, but rather a programmer who may be comfortable in many languages but has a special expertise in C++. All my team identifies as C++ programmers, but we all speak Python and Java, and some can do Haskell, etc.
It takes a long time to become really good at C++.
"Any serious programmer should be a polyglot by default."
The language itself is not usually a problem for me. I mean, an array is an array in any language. It's the environment. Example: C# (which I think is MS's best work, by the way). C# was easy to pick up; the .NET framework was another matter. That took time. Knowing where the resources are and, hell, even getting comfortable with the documentation and the IDE took time.
So now, when I hire a programmer, I'm not too interested in his language skills. I assume he can code well enough not to embarrass himself. It's his knowledge of the environment that we run that counts. That's what I look for.
>Any serious programmer should be a polyglot by default.
My own experience is that there are PLENTY of programmers who have used, say, PHP and Python, but who you wouldn't want to touch your C codebase in a pair-programming session.
If you've used C++ and C, then sure, you can probably jump to just about any language. Someone who's only used Python (or worse, Java or PHP) will likely be a danger to themselves and others for the first year or three using C.
I doubt that. Learn a language's syntax and semantics, perhaps, if it's not C++. Learn its standard library well enough to use it aptly rather than re-invent it: less likely, depending on the size of that library. Become proficient in the language's idioms, know its gotchas, and know which of the choices it presents are the more efficient ones: less likely still.
Getting things done means using the best tool for the job. Sometimes that's a language outside of your immediate preference, because that's what the best tool/library/platform is written in.
Today I wrote Java (an LDAP server plugin) and assembly (for unit testing some code that has to minimally interpret x86 assembly). I'll also occasionally need to write some JS/HTML/CSS for the web, work on systems programming for other architectures (ARM, AVR), write kernel code and drivers (C/C++), write/extend a Python script (HTTP test client), write Tcl (to drive simulavr), write ObjC (iOS apps), and occasionally write some functional code for fun (I have no business case for it; sometimes it's just nice to work on something clean).
I've done and do all those things (and more) not because I have 'tool tricks', but because in nearly every case, I needed to use the tool most suited to getting things done in that problem domain.
Programmers that refuse to adapt to the given problem domain do their users a disservice. It's like the old joke about which nationalities run heaven and hell -- the worst possible people set to tasks for which they're innately ill-suited. However, programmers can learn more -- if they're willing -- and adapt to better suit themselves to the problem at hand.
> I feel that a smart/talented C/C++/anything developer can go from someone who has never seen or heard of golang to a proficient and productive Go developer in a matter of a few weeks, maybe even _days_, if not less.
Agreed! My career is mostly database development. Very high level. Yet I felt comfortable in Go in a few weeks. In a few months I was handling low-level stuff (to me, anyway) like tweaking Go's web server. Go's official online documentation is excellent, although I wish it had lots more examples. The next best learning resource I have on it is the book Programming in Go (also get the Go Programming Language Phrasebook).
I think that's a reasonable statement. My issue with it, however, is that after just a few weeks (or even a few months), said developer might still not know about various design pitfalls or best practices in the new language.
In short, they might not be writing idiomatic code, and you'll end up maintaining it for a long time. IMHO, becoming basically proficient with a language is very different to being able to create software which will scale well to large teams, while also aging gracefully over time.