Or perhaps it's a luxury: you have to make the right career moves and only allow yourself to work for organisations and people who actively support you while you develop your skills. You need to convince people to really invest in you, that is. And that's hard to do.
Think, though: what's the alternative? If you work for employers who believe that everything you do is always going to be a bit shit, what can you expect from them? You can certainly expect your pay to be a bit shit too, and so will your say in anything that matters. Way to go to become a useless, incompetent, irrelevant cost-center drone, there, buddy!
I'm not saying you should strive to be a rockstar. But you can get better at what you do, and one day, even get to be good at it. Like, really good. People do that in every craft, in every trade, in every art, even. Why is programming any different?
Well, it ain't. The "nihilist" programmer in the article is really a pessimist programmer. The stack is always half full for them (get it?). But if it wasn't for "optimists" we'd never have tail-call optimisation. See? It gets better.
When I started as a self-taught programmer I was basically fighting the code all the time while reading about all these better practices. I followed a top-down learning approach, focusing first on the deliverables. As I learned more and more and kept struggling with my own code, the ideal implementation (of course involving OOP) seemed further and further off, always elusive. You could always add an extra layer, an extra abstraction here and there.
At some point I realized I was lost in a sea of abstractions and not getting anything done. I abandoned some of the projects at mid-size and made https://picnicss.com/ as an example of the opposite. Oh boy, what a difference: a single stylesheet made in a few days, with great adoption and feedback from the community. I just saw a video from Google I/O 2017 and they used it there in a demo! It also helped that I switched from PHP to Node.js around that time.
So I kind of got hooked on that. I have made quite a few tiny, one-off projects ( https://github.com/franciscop/ ) and learned a lot about this quick-iteration style of coding. I wouldn't say it's the same as the OP's description of the nihilist, since that seems to be based on a large codebase. What I made was nihilist in the sense that some projects superseded the previous ones. Example: first I made https://umbrellajs.com/ , then decided to re-implement it all and created https://superdom.site/ with an alternative syntax.
Now I think I've found a balance in the middle as I'm finishing https://serverjs.io/ ; the library's public API should be fairly stable, but the implementation details can have their shortcomings and be kind of messy in places. To finish off with a great saying for the situation:
Perfect is the enemy of good.
Could not agree more. Self-taught programmer here too. For such a long time, chasing "perfection" kept me stuck, not finishing any projects.
Taking a step back I decided that finishing projects is what really gives me happiness. They don't have to be perfect or world changing. That coupled with detachment from how the project is received has helped me so much.
Congrats on all your projects. Inspiring!
Nowadays I see more beauty in the end product and refactoring. I still keep that idea of perfection in my mind and do my best to write good code from the start, I just don't let any of it become a blocker.
I'm more efficient, less stressed and more productive as a result. And I don't feel as repulsed when working with other people's code.
Seriously, the only kind of perfect code is the one that has been proven correct. Or in fact the proof itself.
Otherwise you're talking about "code we think does the thing and looks nice".
There are two distinct kinds of quality here:
1) The quality of the end product (what the target audience sees and interacts with)
2) The quality of the underlying implementation (the hidden stuff that makes it work)
They are not completely independent (e.g. uncertainty in the product design has strong implications about time spent polishing the code). But the distinction is important, and it can be worthwhile to analyze them separately.
Practices tend to emerge within a specific project situation as a pragmatic way of addressing concerns. When the practices are communicated across projects, a game of telephone is played: the nuances are lost, and the meaning is eventually replaced with dogma.
Consider which would be more useful: a workout log that suggests how much additional difficulty you should add each week, or a fixed plan from a magazine that prescribes the exact workout for each week?
This is the struggle I think programmers really face, because the codebase at any moment in time needs an appropriate "workout plan" to successfully reach the next stage. Sometimes a form of cheat can be used to accelerate it towards a goal, but the concrete progress is reliant on a similar formula to progressive overload cycles.
This focus on feedback also guides healthy cutoffs between prototyping and production solutions: the production solution only makes sense once prototype learning has been done, and the prototype likewise stops making sense when it conflicts with the demands of feedback.
I would even go as far as to say that, if you do the same thing over and over again, you can only avoid getting better at it if you try, really hard, to be worse at it.
What I think it means is that you have to know your objectives, as everything is a trade-off. Time you spend refactoring is time you don't spend testing or documenting it, for instance. Or it might be better to have two features at 90% "perfection" than one at 99%. Or spend that time with better marketing/landing page/etc.
Sometimes management dooms a project, and as a programmer there is only so much you can do to move it forward. Interestingly, poor technical quality does not prevent a project from being commercially successful.
Other times, especially at the start of a project, or as a project manager, you have the ability to make it right. Do not pass on it! Have realistic expectations about how far people will go with recommendations and best practices, but recognize opportunities to improve a design, as they are rare and valuable.
tl;dr: be the nihilist 95% of the time, but do not miss the optimist's opportunity that will happen 5% of the time!
Isn't that a good way to approach software development? You probably shouldn't make decisive decisions, you probably shouldn't freeze your project, and if you define it you risk missing opportunities outside that definition.
Not if your software needs to actually meet users' needs--in other words, not if you want to make a living at it. To meet users' needs, your software has to ship, which means you need to make decisive decisions, you need to freeze your product, and you need to define what it does.
What you don't need to do is stop doing those things once you've done them once. You can always update your software, you can always make new decisions, and you can always redefine what your software does, in response to changing user needs or changing understanding on your part of what user needs actually are. But the only way to do that is to do it, which means you have to start somewhere. And never making decisions, never freezing anything, and never defining anything means never starting anywhere.
If you make decisions reversible or abstract the interface away from the implementation, you may be doing it for nihilistic reasons, but it's still a decent idea.
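As a minimal sketch of that idea in JavaScript (the names `makeMemoryStore` and `rememberVisit` are hypothetical, not from any project mentioned in the thread): application code depends only on a tiny interface, so the backend decision stays reversible.

```javascript
// Hypothetical sketch: code against an interface, keep the backend swappable.
// Any object exposing get/set is a valid store; app code never names a backend.
function makeMemoryStore() {
  const data = new Map();
  return {
    get: (key) => data.get(key),
    set: (key, value) => { data.set(key, value); },
  };
}

// App logic depends only on the get/set interface, not the implementation.
function rememberVisit(store, user) {
  const visits = (store.get(user) || 0) + 1;
  store.set(user, visits);
  return visits;
}

const store = makeMemoryStore(); // could later be swapped for a file or DB store
console.log(rememberVisit(store, 'ana')); // prints 1
console.log(rememberVisit(store, 'ana')); // prints 2
```

Reversing the storage decision later means writing one new factory function; `rememberVisit` and everything like it stays untouched.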
Also, "good enough" is underrated.
The "optimist" described here is a person who sees software in a broader sense: functional AND non-functional requirements. Non-functional requirements include maintainability, scalability, performance, security, configuration, etc., and the optimist will strive to implement them.
Now, my opinion:
My name for the "nihilist" programmer is "feature fairy" or "duct tape programmer". The problem with feature fairies is that they create more problems than they solve, and never volunteer to fix them.
Feature fairies like to get credited with completed features, but never with the defects associated with their contributions. Therefore they will usually play dumb when a bug happens, or an incident is declared, and make someone else clean up after them while they implement the next feature.
So after a couple of years, you have someone credited with a lot of features, and a team of people who have been cleaning up after that person. The duct tape fairy is now a 10xer, a rockstar whose time is so valuable that they need to be paid more, even promoted, even though this person is responsible for wasting 90% of the engineering payroll on fixing trivially avoidable defects.
The way to prevent that is to leave a trail of evidence that links commits to bugs. When an incident happens, make sure to identify the commit ID causing the problem and put it directly in the ticket. Make it very clear where the defects are coming from and who they're coming from.
Never volunteer to clean up after a feature fairy. By the time you do, the feature fairy has already marked their task as complete, and in the eyes of management you would be wasting your time working on a completed task. Rather than doing that:
- When the feature fairy wants to take on the next task, ask if they have fixed one of the defects they caused, as evidenced by the commit ID.
- Rather than opening a new ticket, reopen the original ticket. This better reflects the situation: you are completing the work the duct taper failed to complete, not adding new work. This also denies the duct taper their prized fake task completion.
When a feature fairy volunteers to be on a hiring committee, prevent it at all costs. The last thing you need is having to clean up after more people.
Be careful while doing this to not be perceived as negative.
If you are taking less time than the rest, there are a couple of possibilities:
- You are a legit virtuoso of software. <- this is possible, but less likely.
- You are cutting corners and leaving work for others to do. <- this is much more likely.
In the first case there's no problem. In the second case you are not truly finishing your work and the team has a legitimate reason to ask you to finish your leftover work.
If everyone starts contributing code leaving left-overs to inflate their productivity, it quickly becomes a race to the bottom: technical bankruptcy.
If you are not able to explain what your code does, word by word, that means, for a start, that you did not read the documentation: you repurposed code you don't really understand. Your time savings therefore come at the expense of quality.
If you didn't handle errors, if you didn't write tests, if you hardcoded magic numbers and strings, if you didn't test your code, if you didn't stick to the coding standard, if your identifiers don't reflect the code they represent, if you coupled unrelated code and performed actions at a distance... you are also saving time at the expense of the product quality because you cheated to finish earlier. All those problems add up to technical debt.
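To illustrate just two items from that list, magic numbers and missing error handling, here is a hypothetical JavaScript sketch (the function `expiryFromWeeks` and its constants are my own invention, not from the thread):

```javascript
// Duct-tape version (commented out): what do 7 and 86400000 mean?
// What happens when weeks is negative, NaN, or a string?
// function expiry(w) { return Date.now() + w * 7 * 86400000; }

const MS_PER_DAY = 24 * 60 * 60 * 1000; // named constant instead of 86400000
const DAYS_PER_WEEK = 7;

function expiryFromWeeks(weeks) {
  // Validate input instead of silently producing NaN timestamps.
  if (!Number.isFinite(weeks) || weeks < 0) {
    throw new RangeError(`weeks must be a non-negative number, got ${weeks}`);
  }
  return Date.now() + weeks * DAYS_PER_WEEK * MS_PER_DAY;
}
```

The duct-tape version "works" on the happy path and is quicker to write; the difference only surfaces later, as a bad timestamp bug someone else gets to debug.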
I've had a lot of discussions and arguments with the 'pragmatists' you call feature fairies over the years, and the impression I get is that their worldview is skewed to protect their own egos. They don't create bugs, some of them will tell you; it's supposed to work that way. They assume there is a magic "later" when they will get to fix all the problems they created (though some will complain about having "nothing to do" when there's a gap in the schedule). They plan and write features as if their luck will hold.
The optimists in the article on the other hand tend to be closer to realists. They don't traffic in luck. They know that people make mistakes. They see all the time frittered away on tech debt (we call it tech debt because it is accidental complexity that slows us down) and they know if you don't chip away at it constantly you'll drown. Fight the chaos or lose your way.
Or have a code review step in the workflow, so that the buggy commits don't get merged in the first place?
If they are reasonable and take feedback well, you can work with them and iron out any issues effortlessly.
But if they're toxic, don't antagonize them.
If I can offer an anecdote which is tangential: I like to say that I'm actually an Optimist, but it's just that I've been disappointed so many times that I may sound like a Pessimist.
Honestly, I think it requires a certain amount of optimism/hope for the future to keep working in this industry. The people are usually great, but the systems (both technological and organizational) are mostly just awful. Awful, awful, awful. I take a little solace in the fact that I can (most of the time) fix the technical systems.
I don't think there's a deductive chain from those axioms to that choice of action. If you're a "nihilist programmer" (not a fan of this usage of nihilist/optimist) in this sense, there's nothing in your ethos that says it can't be better. Sure, it will always be crap, but you can make it better crap. You can do that in small bits or big bits.
The analogy I use for legacy production software is that it's a tire fire. One should endeavor not to make it worse (by unnecessarily adding more tires or fanning the flames) and, if possible, to make it better (spray water on the thing), but it's never going to be an Eden even if you manage to put out the fire, since you've still got a stack of tires that could reignite at any moment. If you start out with an Eden, maybe you can preserve it, but even if that's ever been done, it isn't done most of the time.
The best is the enemy of the good, but the good is the enemy of the better. There are few things more irritating to someone with a "nihilistic programmer" mindset than seeing some self-satisfied "optimistic programmer" with their Good crap, which is still crap and could be made better. Seeing no chance or value of improvement from good is the inverse of seeing no chance or value of improvement from broken; the problem with either view is that doing better isn't tried.
If it ain't broke, don't fix it.