I think the exact opposite advice is far better: never assume the way you know how to do something is best, and always be on the lookout for what others are doing that might be better.
Here's the thing about learning: the more you learn, the easier learning the next thing becomes. You form links, insights on relationships, new concepts that apply to old things, etc. If this guy thinks learning is such a burden, it's probably because he refuses to learn anything in the first place.
If he thinks he's a poor programmer, it probably has less to do with innate ability and everything to do with the attitudes he offers as "advice" in this blog.
I am of extremely average intelligence. On the absolute top of the bell curve, looking down at the rest of you. I am extremely tired of the whole attitude that _anything is possible if you just put your mind to it_. I put my mind to it. That is how I got through CS in university. I worked at least twice as hard as the people that said _just put your mind to it_ and then cruised through even the hardest course without ever studying more than an hour a day and then doing an all-nighter to get an essay in.
Working hard as hell to understand what some low-effort-high-intelligence "just put your mind to it" brat understood as soon as it left the professor's mouth is hard. It just shows me again and again that I drew the short straw in the gene lottery. I have to work a lot harder to do the same things.
Being told to just try harder by someone whose idea of intellectual work is "wherever my curiosity takes me" is really friggin taxing on my self-esteem.
Now, he was drunk and angry, but there is something to it. I think both he and I agree with you on a matter of principle, but there is more to it than that.
Some people start programming at a very young age; by the time they're in college they have the ability to relate all kinds of concepts. An ability that I have as well now, which I gained after my bachelor's and used during my master's.
I have a bachelor's degree, I'm 36 years old, and I feel like I have reached my ceiling.
I still think that I make progress, but it is so slow compared to some really good programmers at my job.
It sometimes makes me feel insecure, and I think of quitting my job, because it feels like I will never be able to be as good as others.
On the other hand, maybe I should just accept that this is it, and it won't get any better.
CS is an extension of math, a hard science where you seek validation from reality instead of from others. If you take the style of the mathematician, everything comes easily. CS is also an extension of linguistics, and there might be a style that works there too, but there are also the lost people who count snowflakes instead of noticing that snow is all about the same.
Example: Learning CoffeeScript was a total waste of time. Learning jQuery helped me for a few years, but now jQuery is basically useless to me.
Based on past experience, I strongly suspect the same will happen with React, Rust and a bunch of other new exciting tech. There are countless examples besides the ones I mentioned.
But on the other hand, the time I put into mastering SQL or Unix will probably continue benefiting me for the rest of my career. Even C will continue to benefit me, even though it'll only ever be a small part of my job.
So I would modify his rule: Avoid Learning Anything New – Learn Something Old
The failure there is learning something that is not useful to you right now, not "learning something new".
Are there opportunity costs associated with learning one thing over another and making the "wrong choice" in terms of which stuck around longer? Sure. But you can't predict the future. Focus less on what you learned that didn't get much mileage, and focus instead on what you can learn next. Learn new things and old things - just keep learning! In the end, there are very few things you can learn that are a total waste of time - the more things you learn, the more perspective you have, whether or not you are specifically using the exact thing you learned.
And recent experience has shown me that you can be both working with the worst that legacy code and CGI have to throw at you and also having to learn pre-release ECMAScript and TypeScript, to use on the same project. Some skills transfer — I’ve never had to fully relearn how to loop through something in a new language once I managed to master both for loops and map/each functional approaches, for example.
Other things are maddeningly poorly developed — tools to help you understand and refactor code are still very much language-specific at the moment and it’s always a one-off to port out-of-fashion languages and syntax to newer ones. Let’s not start on how testing guidelines and TDD haven’t significantly changed since 2003 unless you maybe include SRE work and better “testing in production” techniques... Some knowledge seems evergreen... or stale.
But it’s amazing how when you do keep an open mind, and focus on how something really works at a lower level, it still pays off over time. I still haven’t deployed anything in production with Docker and K8S but that doesn’t mean it wasn’t worth learning, it helped clarify that there’s more to production reliability than simple deployment scripts or imperative commands vs declarative repos — simply knowing it helped me better understand related topics, including additional reasons why immutable, fully-reproducible build systems are a good thing, or how deployment can be seen as a complete system instead of simply “what version of the code is on the servers now?” It also provides a promise of a cloud-agnostic future, and an alternative look at why 12-factor was good, but incomplete.
And there, again, one might say, why did you bother learning 12-factor when most places let you store log files on local EC2 drives, for example? Well, because if you can get over the information overload, having multiple ways of doing things means you’re more likely to be flexible in your approach, or ask “why” when someone says “just do it this way”. It of course has to be balanced with healthy pragmatism, to both ignore the “best option” when it’s too much work, or to recognize that there’s always a chance to continue improving things later on, or as a team.
What’s key is trying to think strategically about your code rather than tactically. A tactician would say “learning nothing new outside of what I need at this moment gets the job done faster” or “this fixes the bug” while a strategist recognizes that there’s always a future opportunity cost, or asks, “but what caused the misunderstanding that allowed it to occur in the first place?”
Also, it’s inevitable that things will change, so you should always either keep up with your specialty or keep expanding your generalist skill sets. And even your knowledge isn’t constant — there’s always something to re-learn if you haven’t used a thing in awhile or in some new way...
(Now, of course, I may be wrong on the cost-benefit analysis here. I don't think I am, and the ubiquity of the new stuff implies that most of the industry agrees... but no matter the answer, that is the way to think about whether learning something is worth it.)
Out of all of the languages I have learned over the years, only one has stayed important in every job and mostly static - SQL
There are many languages and libraries, but at this moment in time, all of them have to talk to a database at some stage, and in order to do that all of them convert the request to SQL.
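As a minimal sketch of that point — Python's built-in sqlite3 is used here purely for illustration, not because the comment mentions it — whatever ORM or query builder sits on top, what ultimately reaches the database engine is SQL:

```python
import sqlite3

# In-memory database for illustration; any SQL engine accepts similar statements.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("ada",))

# Whether the caller is an ORM, a query builder, or hand-written code,
# the request is converted into SQL like this before the engine sees it:
row = conn.execute("SELECT name FROM users WHERE id = ?", (1,)).fetchone()
print(row[0])  # ada
```

The same SELECT has worked, essentially unchanged, across decades of languages and libraries — which is the commenter's point about SQL staying static.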
Besides, having problem context makes the learning much easier, vs. just saying... "I'm going to learn React today" in a void.
Learning an applicable skill is almost never useless.
While the payoff of learning old > new is typically much higher (search: Taleb Lindy effect), I think matching your learning to the 'human api' is more important. For me, learning is very emotional: when I feel a sense of curiosity and intrinsic drive to know, I'll follow my nose, spend my time where it takes me. I want to spend this kind of energy in a certain way.
When I find myself with an instrumental cause or external need to know, it's most likely going to be because it's something typical, something old, in which case learning about it and being useful with it will require less of my spirit and drive to crack.
New language/tech fanatics tend to pressure through both sources (a la "look at our ingenious design breaking paradigms" and "it's so good your boss will want you to work in it (well, soon)"). Often the former argument is stronger, so it appeals to your curiosity - in which case you're best off searching for the useful kernel, followed by a swift exit in order to preserve your sense of discernment. Should you return there, provoked by your general interest, that ought to be your indicator of importance. How hyped you felt that one afternoon you learned about it after reading HN comments is likely not.
On the other hand, the 'should learn' brigade will tend to target the latter source of pressure (employability & centrality). If you find yourself feeling resistant to this and force yourself into it anyway, you'll easily burn out and douse your curiosity for the day. I've arranged learning resources for languages I find fundamentally dull multiple times, and made only very shallow dives into them.
When choosing what to learn, your built-in heuristics will tend to serve you much better than either a long list of common 'shoulds' or a tangle of overhyped 'musts'. Let natural forces do their thing in shaping what things will be presented to you: avoid paying much attention to the loud people where marketing, shills and zealots tend to roam.
I'm generally the guy who knows the boring shit. C, networking, 'Linux', hardware, RF... But I keep telling myself I'm going to jump on the next bandwagon just to see how the ride goes.
That said, some technologies are fads, and learning them for the sake of learning doesn't have a big payoff. I learned Tcl at one job and have never used it again. There will always be things you learn for software development that prove to be duds. Get used to learning, and be happy when something has long-term success. It is very hard to predict what that long-term success will be.
Probably not, because even if you don't use it today, the problems and solutions you learned while using it probably made you a better developer and you know what was good and not very good about it, so even if you are making something with React, you probably remember the "old jQuery days" and don't make the same mistakes.
That is exactly how I understood it.
Then you see that syntaxes and framework API changes but it’s not a big deal because you already know what you want to do, look it up in the docs, and can pick up new languages in a few days.
For example learning high level concepts like filter-map-reduce with the appropriate data structures is relatively language agnostic, so I have the same reasoning between languages and all I see is syntax choices. It greatly improves learning speed and then I am free to choose the best tool for practical considerations for each project.
In what way? I too jumped on the CoffeeScript train, but considered it, on the whole, time well spent. First of all, I wrote a couple of useful things with it, so there is that. But more importantly, having gotten used to the CoffeeScript way of doing things made it much easier to jump to ES6, and many of the concepts from CoffeeScript could easily be reused.
He said he was annoyed that he always had to manually merge other people's code. They would send him the files they had changed, and then he would search for what changed and add their changes to his own code.
I asked why they didn't use source version control like Subversion. He asked me what it was. He was so happy to hear that such tools exist, and was going to look into it further.
And I, for my part, was perplexed that none of those 4 programmers knew about it.
If you don't regularly read shit to improve yourself, maybe it's time to find another job.
When you go that low-level, everything else just translates to "overhead", and it's always a trade-off (for example, exchanging performance for "fast coding" or "ease of use").
At that point, you know there is nothing better, if "better" is understood in terms of performance (I develop firmware), so there is little room for improvement. Maybe Rust? I don't know. It may just be another trade-off.
"Ease of use" and "coding speed" are things that should be interesting to our bosses in order to save a few bucks.
> others are doing that might be better
Of course, in my case I am always open to new techniques: I'd rather be looking at and understanding how John Carmack splits and scans BSP trees than thinking that X language will do it for me (at the cost of "insert trade-off here").
But no. Just as the author, I'm not going to learn node.js because "new".
I would make no such assumption. Most C++ programmers I've known would have a hard time being productive in a language where they had no collections, templates, virtual functions, destructors, lambdas, exceptions, etc. Trading deep knowledge of how C++ manages memory for knowing how to do it yourself might be even more of a challenge. A lot of C++ programmers know C because they were around as the two evolved, but at this point there are many who entered the profession with C++ and would find C almost as confounding as COBOL.
So yeah, somebody who knew C++ in 1990 was probably within spitting distance of knowing C at the time ... but now, they know it the same way I know Lisp or Prolog, since I used those long ago too. But if they've stuck with C++ ever since, do they effectively know C today? Could they be productive in it more quickly than they could in Rust or Go which they'd never even seen before? That's not clear at all.
(I was looking at the source because the stupid thing would disconnect whenever the PPP connection went down; it was over-reacting to EHOSTUNREACH errors on the socket, which are recoverable; I fixed that, at least for myself.)
Here it is: https://github.com/marado/netkit-telnet/tree/master/netkit-t...
Focus on platform languages and be a late adopter; it saves a lot of the frustration of advocating stuff that eventually never takes off, or that happens to die and leaves one stuck doing maintenance work.
ALAN is probably a very good heuristic for precisely the type of person who reads HN. I would bet most of us err too far in the direction of learning new stuff, while letting the projects we've started stagnate.
> After one semester at the University of Illinois at Urbana-Champaign, he transferred to Reed College in Portland, Oregon, where he received his BA in physics in 1985, and then received his PhD in computer science from the University of Illinois, Urbana-Champaign in 1991. He then joined the faculty at Indiana University as an assistant professor. From 1994 to 1996 he was a visiting professor at Cornell University. He then joined the faculty at the University of Utah, where he taught until 2008, when he joined NVIDIA as a research scientist.
I think he probably knows more than you give him credit for.
It is phrased a bit ambiguously for shock value, I think.
There are some things that are relatively easy to learn and would make things so much easier for you. How, as a poor programmer, can you afford not to learn them?
Something like: "there are more new languages than you have time for. Choose wisely."
Poor programmers do things like allowing an incoming request to spin up unlimited concurrent threads. Poor programmers erroneously throw exceptions on any operational deviation - even if it can be handled without error.
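One conventional fix for the unlimited-threads problem is a fixed-size worker pool. A minimal Python sketch (`handle_request` is a made-up stand-in for real request handling):

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(i):
    # Placeholder for real work (I/O, parsing, etc.).
    return i * 2

# A fixed-size pool caps concurrency: 100 submitted "requests" share at
# most 4 worker threads, instead of each request spawning its own thread.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(handle_request, range(100)))

print(len(results))  # 100
print(results[:3])   # [0, 2, 4]
```

Excess requests simply queue up behind the pool rather than exhausting the machine's thread limit.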
Most importantly - poor programmers do not learn from their mistakes and are unable to see that they are poor programmers.
Another trait is the belief that they either never need to touch their code after they write it, or worse, that only they will ever touch their code. It fires me up to see code that was written with the complete intention of never coming back to it, despite the fact that it has to be maintained to keep the business running - reports have to use the data it generates, and other applications have to use its services because we can't afford to keep re-implementing everything. I also really hate seeing code that looks like the dev locked themselves in their basement for a decade to write it. All sorts of custom switches, hardcoded configuration values, and experimental tech embedded so deep that the only graceful way to relieve that debt is to nuke it.
When I started programming seriously I had a mentor who gave me a long list of my shortcomings. It's taken years of hard, dedicated work to rectify most of those problems and I still have a huge distance to cover to be as good as I want to be.
Being a good programmer is a skill set acquired with hard, concentrated effort over time; not just a good attitude with some self-awareness.
Being aware of what you are lacking allows you to improve on those things, work around those issues, and accept when a colleague is actually good at stuff you are not good at.
All of these examples make sense in the context of your business, but in some environments these might be best practice ;)
What are those environments? Can you be more specific?
High-risk systems where a software bug could result in loss of life - aviation, submarine tech, defense, medical etc. Edit: This is for throwing exceptions on undefined behavior.
For unlimited threads, I imagine scenarios exist, particularly now that scalable computing is so popular: large high-traffic web stores such as Amazon/eBay, financial institutions, etc. Either way, the problem shouldn't be solved by an individual developer on a project making things up as they go - it's a problem that probably needs to be defined at a framework layer, and discussed within the team of developers and requirements definers.
Moreover, I trust the advice of someone who rates themselves poorly more than someone proclaiming that they're a hotshot.
Terrible advice and mindset.
It's good to learn for pleasure or curiosity.
This way, you can enjoy reading SICP, Effective Java, Code Complete... Or you can use a new system: Linux, Mac, Android, iOS.
Doing it, you shape your thoughts and mind, and learn new ways, practices, or patterns. You don't have to use them just because you learned them, but even so, they will be useful to you.
Taking some lessons from BJJ (Brazilian Jiu-Jitsu): when you compete, you go in with your A game - the things you have practiced, the things you have made work over and over again, your high-percentage moves. Over time, you add things to your A game as you gain experience making those techniques work. You may occasionally find yourself in an odd situation where some "new" technique is screaming to be used, and you might try it out. But usually, when you encounter a situation you don't really know, you start working at changing the problem into something you do know, then throw your A game at it.
I think programmers should understand what their A game is.
It sucks when this happens to a production system, and then maybe it's a mentoring & leadership problem instead.
Recursion shouldn't even be considered advanced, though, any more than classes or first-class functions should. I'm all for making code as boring as possible, but to me, understanding at least these concepts is a requirement for entering the profession of software development.
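As a small illustration of recursion as an entry-level concept — a function defined in terms of a smaller instance of the same problem, with an explicit base case (this `depth` helper is invented for the example, not from the thread):

```python
def depth(node):
    """Nesting depth of a list-of-lists, e.g. [[1], [2, [3]]] -> 3."""
    if not isinstance(node, list):
        return 0  # base case: a plain value has no nesting
    # recursive case: one level deeper than the deepest child
    return 1 + max((depth(child) for child in node), default=0)

print(depth([[1], [2, [3]]]))  # 3
print(depth(5))                # 0
```

Nothing about this should feel "advanced" to a working developer: identify the base case, reduce the problem, combine the results.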
If this is what we mean by, "being a bad programmer" I guess I get it now. A refusal to actually track the state of the industry, instead being told by employers what matters.
Yeeees... so instead of being "told" by your employers, you're "told" by the hype-train? How exactly is this better?
Being fed up with the constant churn in JS frameworks is an entirely valid position.
Sure, maybe it only takes a month to learn the latest framework. Maybe it only takes a couple of weeks. But maybe I'd rather spend those 2 weeks doing something else that I consider to be more valuable (it's called opportunity cost).
And what about all the gotchas and quirks that every framework has? The pathological performance edge-cases, and suchlike? The ones you only discover after weeks and months of in-depth use? I'll have to learn a whole new set of those.
And what about my "legacy" codebase that used the last framework du jour? Do I just ignore it? Do I convert it? Hmm. Wonder how long that will take, and what else I could be doing with that time.
Maybe you enjoy the churn: endlessly learning useless knowledge that will be of no value to you in a few short years because it's no longer trendy. Lucky you. For me it got boring, because I've got stuff to build.
Because occasionally our peers are right? Do you really have so little respect for everyone in the industry around you that every new piece of tech that comes along, you just assume it is a mass of incompetence and marketing?
A diversity of perspectives, ideas and approaches is a fertile ground for personal growth. There are never any shortages of such hype trains.
And at least they're made by fellow software engineers. Not, you know, corporate hiring committees.
> And what about all the gotchas and quirks that every framework has? The pathological performance edge-cases, and suchlike? The ones you only discover after weeks and months of in-depth use? I'll have to learn a whole new set of those.
Getting domain specific, you'd be doing that anyways because of how rapidly browsers are growing and changing.
> And what about my "legacy" codebase that used the last framework du jour. Do I just ignore it? Do I convert it? Hmm. Wonder how long that will take, and what else I could be doing with that time.
> Maybe you enjoy the churn: endlessly learning useless knowledge that will be of no value to you in a few short years because it's no longer trendy. Lucky you.
Why is it that the value of software is defined by if it is trendy 5 years later? That's a conflation of concerns I can't follow.
> For me it got boring, because I've got stuff to build.
For me, squatting on one stagnant pile of never-really-that-good technology building the same boring things over and over again at the behest of others is equally boring.
You suggest all the frameworks are poorly designed hype, but then decide you want to take an arbitrary moment in time (when you showed up, that fated day) and freeze everything there.
That reasoning seems unconvincing.
Lest we forget, the opening post advised you never learn anything new.
It's a strategy for a big corporate hiring committee, not an individual worker.
Of course the actual successful software companies "build the next proven thing" so even for them, this "wait and see" strategy is clearly not effective.
There's always an exploration/exploitation tradeoff when choosing your tech stack, and in my opinion most programmers overinvest in exploration, given that the terrain is infinite and covered in roughly equal local optima.
Also, having spent time on frameworks before React/Vue makes you really appreciate what React/Vue do for you.
But in reality, all advice depends: on what you are doing, what you know, what your experience is...
No, sorry. Invalidates the entire post, #3 notwithstanding. Such a cheap, low effort, lazy cop-out when writing about any topic.
What the hell was this guy thinking? I'm certain he's got better wisdom to share than this.
If he were to say that most coding advice was noisy... I mean, I get that feeling too. There's lots of different approaches to the same problem that work in different scenarios, different work environments, domains of expertise, constraints, etc. And yet lots of coding advice is similar to, "<always/never> do X". Unilaterally. Period. And you will end up receiving lots of conflicting advice.
It seems like the smarter thing to do is take everything with a grain of salt. You don't have to change everything you do as soon as you read a new blog post, it's just a different approach you could add to your toolbox, then use it someday if it makes sense.
Unit tests and sensible comments are a couple of the key differentiators between a workaday jobber in it for the money and a badass. And now what? While I kind of agree that some people might be in the wrong job, that's true for every profession and the comparison seems lacking. If we're talking day job programming, a badass is neither a requirement, nor desirable in most environments honestly.
> Only learn something new when forced
I think there is a balance between always doing things in a new way and always doing things as you've done before. When engineers are pushed too hard on deadlines, some will avoid learning new things as a short-term approach to quick delivery. If you're in that environment, you aren't going to grow.
> Avoid linking to other software unless forced. It empirically rarely goes well.
Source? The rapid growth of npm, rubygems, and other ecosystems suggests otherwise.
I was hoping this would talk about how to support your co-workers (code review, culture, cohesion) or how to succeed at non-engineering tasks that other 'good' programmers may overlook.
Probably referring to the dependency hell that can occur. Things work great when you first choose your libraries, but then months or years later, while maintaining the code, you find that feature x is deprecated, or lib y no longer plays nicely with lib z, or lib a depends on lib b version 2 while lib c needs lib b version 3, etc.
Learning how to use something like Vue when you already know how to use React (or vice versa) is stupid because they both solve the exact same problem in a reasonably similar way with a reasonably different API.
A better example might be something like Postgres and DynamoDB, since it can go either way. If your problem is 'I need a database for a CRUD app' then learning the second one is stupid because they both solve that problem just fine. But if your problem is 'I have a complex use case, my data is in a bad format for the one I'm using and I'm taking a huge hit in performance' then learning the other one is a reasonable choice and probably not a waste of time.
Basically whenever you take the time to learn something, make sure you're getting something out of it in terms of end results. It feels good to just learn more of the same tech, and if the API is different enough it'll feel like you're making progress, but you're probably not.
Settling on Mithril as my front-end library was a long journey from framework to framework. If I had stopped at Vue or React, I'd be much worse off for it.
Really, if I had stopped at the first front-end library/framework I used in web dev, I'd still be using PHP. Sometimes you need to move on, and if later asked about your technical choices in a professional environment, you need to have a professional answer that comes from wide experience.
Well, I did specifically give an example of two techs that do similar things that might be worth learning if they're different enough that one solves a problem the other doesn't.
I don't know anything about Mithril, but if you're right that you'd be 'much worse off' then that would be a similar case, no?
The second part, about linking to other software, is valid. What if the thing you are linking to is not going to be maintained? What if it has a security issue? Now you will probably have to either link to something else or replicate the functionality. Better to link only to things that are essentially hard to replicate, instead of blindly linking to everything else.
In the above examples it is more of striking a balance instead of mere absolutes.
This gave me the feeling that the truth is somewhere in-between.
Maybe NPM packages are overall low quality and it would be okay to use more of them if they were better, but I'm now just installing a package when I don't have the time or skills to code it myself. This saved me from dependency hell, but it also helped me to move faster than doing all on my own.
This could be devastating, especially in a small or medium company. I know of successful companies (I mean companies with successful products) with a C/C++ stack they considered "good enough", so they didn't change anything: the C++ standards, architecture, or data structures. Often that line of conduct was supported by management that viewed any change or improvement as a cost. The result is always the same: one day they wake up and realize that the "product" is a pile of crappy legacy code. I know of some cases. One of those companies was bought by a bigger company that asked them to bring the stack up to modern standards, with disastrous results, because the programmers weren't skilled enough to port the code base to modern standards/architectures. In another case, the owners sold the entire division to a company that was more interested in the clients than in the product, and after checking the status of the code base, the buyers hired a group of consultants who rewrote everything in Java, with results that you can imagine.
For example, say you write good code now using C++14. If it's good code now, it will continue to be good code 20 years later. There's no property that adds bugs or "crappiness" to the code as the years go by. The invention of a new "C++40" standard doesn't obsolete old code or turn it into "crap".
KISS => everyone should do this
YAGNI => 95% of the shit I add is ignored (and I've got fairly objective proof that I can develop good projects)
ALAN => Master SQL, one scripting language, and how to use Stack Overflow. You'll be the most useful dev on the team.
I agree with most of the rest.
Bonus point: never be afraid to tell a business person that what they want is an exceptionally bad idea. It usually is.
The advice about KISS is something many "good" programmers would benefit from following. Likewise, I've seen good programmers recommend using arrays as your first attempt at a data structure as well (e.g. Jonathan Blow, https://www.youtube.com/watch?v=JjDsP5n2kSM )
So I’m working on getting better and getting more confident.
Don't. There are so many poorly performing queries and databases in the world that you could make a very tidy income just specializing in sorting them out. Even if you knew nothing else, that skill alone will keep you employable for decades to come.
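The usual first step in that kind of work is asking the engine how it plans to execute a query. A hedged sketch, using Python's sqlite3 purely for illustration (in Postgres or MySQL the tool is EXPLAIN / EXPLAIN ANALYZE, and the plan text differs by engine and version):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER)")

q = "SELECT * FROM orders WHERE customer_id = 42"

# Without an index, the plan shows a full scan of the orders table.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + q).fetchone()[-1]
print(plan_before)

# Add an index on the filtered column and the plan changes to an index search.
conn.execute("CREATE INDEX idx_customer ON orders (customer_id)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + q).fetchone()[-1]
print(plan_after)
```

Reading plans like these and knowing which index (or rewrite) removes the scan is exactly the evergreen, engine-portable skill the comment is describing.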
I suspect there is quite some business in helping companies that want to transition from Oracle or MongoDB to PostgreSQL.
If the administration part is a necessary tax to be paid, then so be it, as long as I get to do the other part.
FWIW, I'd say the most important characteristic of terrible programmers is their supreme unearned confidence!
Which is to say: Focus on getting better. The confidence is secondary.
My secret trick is simply to ask people for help when I'm stuck.
I was a professional programmer (now retired), and not a very good one. I'm familiar with Dunning-Kruger; I've worked with good programmers and bad ones, and I can tell the difference. Very good programmers are few and far between.
I noticed that most of my colleagues were keen to learn new shit, like new JS libraries, new languages, new source-code management systems and so on. I think I lost interest in newness (for its own sake) about 15 years ago; I got turned over one time too many by a vendor that decided to withdraw support for a programming language that I had committed myself to.
I recommend retiring from programming. You don't have to keep up with the young whippersnappers any more, you can carry on coding in bash, you don't have to use git, Docker, or weird NoSQL systems. I realise that some of this new-fangled stuff is better than FORTRAN or VB6 or whatever; but learning a new programming system every 6 months is a total waste of time and effort. Get to be good with a few useful tools, then concentrate on people skills.
Or give it up completely, and learn cooking, or drumming, or interior-decorating.
I think there may be some rationale behind ageism in software development. For the first 25 years or so I got better at it, but I think after I turned 50 I started getting worse. Or at least, I got better slower. It took me longer to learn new tricks.
But I really think that some of those new tricks were not worth learning - for example, you can stuff Node and that ridiculous dependency system where the sun don't shine. JS is a very clever language; but cleverness isn't always best.
You're really taking embedded computing to the next level
He's got one thing right. He's a poor programmer. "Success" must be mighty loosely defined here.
$7k per annum take-home pay.
That's heartening. I bet a very bad doctor or lawyer can still do some good somewhere too, as long as they don't convince themselves they are better than they are.
On a larger scale, it's remarkable that most general programming advice given today is a heuristic or guide rather than a hard-and-fast rule, yet many people still assert that their advice is The One Right Way To Do Things. It's important to understand that the vast majority of this advice comes from real examples of what worked and what didn't. So perhaps the best thing one can do as a programmer, in any domain, with any technology, whether you're a great programmer or a poor one, is to listen to it all with zen and a grain of salt.
I guess what he's trying to say is Avoid Learning Anything New if it's ancillary to your job, and instead focus on your core competences.
Yeah, I don’t know. The cloud bill is going to be expensive.
"We're doing a combo-box component, but to keep it simple let's not support multiselect."
Retrofitting such a feature when You Eventually Do Need It (YEDNI?) is neither pleasant nor simple.
My take is that one's skill level is not the most relevant thing - we have code reviews to deal with exactly this problem.
What ultimately matters is whether you're adding or subtracting value.
I've worked with people who were aggressively incompetent. As in: they had bad ideas and were insistent on implementing them, even going as far as bypassing the regular review cycle.
Learning fundamental knowledge such as programming paradigms, data structures, and algorithms is something that will likely not become obsolete on a year-by-year basis. You should totally learn this if you have the time.
Now, memorizing every API in a framework that is likely going to change in 6 months is probably not going to be very useful in 5 years (but it can be beneficial for achieving your short-term goals and moving your career forward).
It's not about "avoid learning anything new", it's about being tactical about what to learn.
UPD: just realized who the author is. He _does_ learn a lot of new things, just not in software development, because it's not the focus of his career.
I had a famous cryptographer professor. He refused to learn anything past Pascal. He freely admitted he was no longer a programmer and that he shouldn't be doing it.
That seems like a different message from the messages presented here.
Except for the ALAN thing. I think that should be modified with a few qualifications: don't learn anything you will use fewer than 10 times, or something along those lines.
Sounds like it's advice on how to sandbox yourself in the interest of the org.
The others are arguable. But, in 2019, this is flat out bad advice.
Your "go to" data structures should be hash tables about 70% of the time and vectors about 30% of the time. In 2019, memory and CPU are so stupidly abundant that the abstraction costs nothing in 99.9% of all cases. The programmer gain for not having allocation, dereference, fencepost, and invalidation errors is enormous.
But, then, this is hardly surprising advice from someone who only learned about a "scripting language" in 2015. The rest of us realized that those silly "scripting languages" were better than C++ for 90+% of our problems way back in 1995.
And anyone who has used a "scripting language" realizes extremely quickly just how stupidly useful hash tables are.
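As a minimal illustration of why hash tables feel so natural in a scripting language, here's a word-frequency sketch in Python (a hypothetical example, not from the thread; it assumes nothing beyond the standard library):

```python
# Count word frequencies with a hash table (a Python dict).
# There is no allocation, sizing, or index arithmetic to get wrong:
# the dict grows as needed and keys are looked up directly.
def word_counts(text):
    counts = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts

print(word_counts("the cat sat on the mat"))
# {'the': 2, 'cat': 1, 'sat': 1, 'on': 1, 'mat': 1}
```

The equivalent in C++ with manually managed arrays means choosing a size, growing the storage, and writing the lookup loop yourself; the dict makes all of that disappear.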
I literally just wrote a comment elsewhere about how hash tables are obscenely overused and cause measurable performance degradation in many situations.
However, the number of times I see people hit a bug because they fenceposted or flat out overflowed a fixed size array VASTLY outnumbers the times I have seen people have to redo their underlying data structure because it just wasn't fast enough.
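A toy sketch of the fencepost point (hypothetical, in Python): the fixed-size buffer invites off-by-one index bookkeeping, while the growable vector has nothing to overflow.

```python
# The classic off-by-one trap with a fixed-size buffer.
def fill_fixed(size):
    buf = [0] * size
    for i in range(size + 1):  # off-by-one: should be range(size)
        buf[i] = i             # Python raises IndexError on the last write;
                               # the same bug in C silently corrupts memory
    return buf

try:
    fill_fixed(4)
except IndexError:
    print("fencepost error caught")

# The growable alternative: no size, no indices, nothing to overflow.
vec = [i for i in range(5)]
print(vec)  # [0, 1, 2, 3, 4]
```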
Depends on your program. In my world, it's vectors about 99% of the time when it's not a fixed-size array.