Ruthlessly focus on the minimum viable amount of code for a given problem, or on whether the problem can be avoided entirely. Minimalism in code and software design is a beautiful thing, but it doesn't get a lot of press, because it competes with "the new hotness", "grow or die", "we have to do something", and so on.
I prefer working with the programmers who write a lot of code (often with bugs) and then refactor into something minimal. The advantage is that they produce something early that we can get feedback on, and we can spiral in on the right solution, both architecture- and feature-wise.
The problem with the "code a lot, then refactor" approach is that it takes a lot of different skills to actually pull off. Some of the pitfalls I have seen are:
1. You need an unusual amount of perseverance, or the feedback of very knowledgeable, outspoken and candid coworkers, to actually perform an acceptable refactor instead of the usual brushing of dirt under a thin (leaky) abstraction. Some of my best work I have done while challenged by colleagues who did not allow me to get away with the "good enough" version I initially wanted to settle for.
2. You need to command an amount of social capital that is not normally granted to mere programmers in order to tell the higher-ups: "Yes, it seems to be working, but no, it is not production ready yet. Do I need to remind you, again, that you agreed to create a throw-away prototype from the very beginning?" In my experience, experimental code - no matter how rough - gets promoted to official status the very minute it's pushed into your source control system.
3. Even if you can navigate #1 and #2 successfully, you still need a lot of forethought in order to design a robust API that will allow your coworkers to interact with your module without being disrupted by your ongoing refactoring.
3.1 As a corollary to the previous one, the effort to communicate the robust API from #3 grows super-linearly with the size of your team. Not only does the probability of someone being left out of the loop grow with N, but so does the probability of some idiot^H^H^H^H^H misguided person deciding to ignore all warnings and rely on some implementation detail to accomplish some short-term milestone.
4. Finally, you need to actually catch the bugs in order to justify all of the above. This is less a single skill than a multiplier: any problem you have gets amplified by any defect or omission in your testing strategy.
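Point #3 above (a stable API that shields coworkers from your ongoing refactoring) can be sketched roughly like this; all names here are hypothetical, invented for illustration:

```python
# A thin, stable facade over internals that are still churning.
# _LegacyParser stands in for the messy prototype; Tokenizer is the
# only surface coworkers are told to depend on.

class _LegacyParser:
    """Prototype internals -- free to be rewritten at any time."""
    def parse(self, text):
        return [token for token in text.split() if token]

class Tokenizer:
    """The contract (tokenize(str) -> list[str]) stays fixed even
    while the implementation behind it is refactored."""
    def __init__(self):
        self._impl = _LegacyParser()  # swap in the refactored version later

    def tokenize(self, text: str) -> list[str]:
        return self._impl.parse(text)

print(Tokenizer().tokenize("a b  c"))  # ['a', 'b', 'c']
```

The underscore-prefixed class is the Python convention for "implementation detail, don't touch" - exactly the warning the misguided person in #3.1 ignores.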
Refactoring is the most fun part! It's lovely when things become tidy, straight, and clean.
and new bugs get introduced into erstwhile fully validated code ;-)
I learned this when I dabbled in real time systems. Those dudes can crank out some hella reliable code. At first I thought it was despite the cumbersome dev processes that old school RT software uses. Now I realize it is because the process imposes a high cost on each line of code. So, there's less code. Less code = fewer bugs.
I work in embedded, and the more code there is in there, the more chance a bug will happen. With embedded we are not just talking about pure software bugs; we are talking about hardware bugs that get root-caused to software functions. Sometimes they're easy to find, sometimes very difficult.
1. Do not write *any* code (can't argue with that)
2. Write "good" code
3. Insert paradigm here (like TDD, etc)
I've worked with some incredible engineers so I'm going to share what I've noticed and learned over the years:
1. They write code which is elegant and easy to read (cyclomatic complexity is "magically" low).
2. They know their data structures and algorithms, and use them when required -- that is, when there isn't a nice, clean implementation of something, they aren't afraid of writing their own version.
3. They write unit tests. They don't shy away from creating test infrastructure even if this means a lot of work on its own. The stuff built is then easy to use by future contributors.
4. They're aware of new features and frameworks.
5. They're generally friendly and are willing to explain stuff to others.
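Point 1 (the "magically" low cyclomatic complexity) often comes down to small habits. A made-up sketch of one such habit, replacing a branch ladder with a lookup table:

```python
# Hypothetical example: same behavior, fewer decision points.

def shipping_cost_branchy(region):
    # Four branches -> cyclomatic complexity of 4.
    if region == "us":
        return 5
    elif region == "eu":
        return 8
    elif region == "apac":
        return 12
    else:
        return 20

# Table-driven version: the data carries the cases, the code has
# a single path.
_COSTS = {"us": 5, "eu": 8, "apac": 12}

def shipping_cost(region):
    return _COSTS.get(region, 20)

assert shipping_cost("eu") == shipping_cost_branchy("eu") == 8
```

Adding a region now means editing a dict entry, not growing the branch ladder.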
Most of the very best developers I have had the opportunity to work with had this quality. They are very comfortable in what they know and what they don't know.
Nearly every cocky programmer I have ever worked with was average at best. And you could never teach them anything they didn't "already" know.
So true, and this is applicable in almost every discipline, really:
"Nearly every cocky x I have ever worked with were average at best. And you could never teach them anything they didn't "already" know."
It would be more useful to have a reference of quality techniques that have at least some data behind their claims of efficiency. I know of this list:
I would prefer something with more citations. Does anybody have something like that?
I don't know what your programming experience is, but if you've done some moderately complex projects this might sound obvious but is overlooked by a lot of people.
Source: Personal experience which led to a lot of sleepless nights
Bob Martin's seems the most timeless approach to me, building things in small pieces and testing everything is as applicable now as it has ever been.
Knuth's documenting everything is still absolutely valid, but very few people are in a position to go months without testing, nor to offer bounties on bugs. Those things sort of require writing software that doesn't depend on other people's libraries. That used to be the norm decades ago, but it's really rare now, and most often not remotely possible.
Dijkstra's approach is now only metaphorical. The example is mathy, and if you're writing mathy code, proving things about it makes sense, but writing large async web projects with a lot of UI & network communication... by all means you want to think about how to cover all your bases, but proving things mathematically about that kind of code isn't realistic or practical, if it ever was.
When you are writing async code, it's more important than ever to try to think with the proving mindset, and at least consider in your mind why you won't get deadlocks or race conditions. It's more important in async conditions because you have no chance of finding all the deadlocks and race conditions through testing alone.
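A toy sketch of why testing alone can't catch these (Python here, purely illustrative): the unlocked counter below can pass thousands of test runs and still hide a lost-update interleaving, while you convince yourself the locked version is correct by reasoning, not by running.

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_add(n):
    # Read-modify-write with no lock: two threads can both read the
    # same value and one update is silently lost. Whether a test run
    # catches it depends entirely on scheduling luck.
    global counter
    for _ in range(n):
        tmp = counter
        counter = tmp + 1

def safe_add(n):
    # This one you argue correct: the lock makes each
    # read-modify-write atomic, so no interleaving can lose an update.
    global counter
    for _ in range(n):
        with lock:
            counter += 1

threads = [threading.Thread(target=safe_add, args=(10000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert counter == 20000  # holds by reasoning, on every run
```

That asymmetry is the point: a green test run tells you almost nothing about `unsafe_add`, but a two-line argument settles `safe_add` for good.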
(Also, you misunderstood Dijkstra's approach. He didn't prove all his code, nor did he teach people to do that. He merely wanted them to have the mindset of trying to think of everything that can go wrong).
And my larger point still stands - Dijkstra's quote is anachronistic. Having a mindset of trying to think of everything that can go wrong can no longer in today's world be done without debugging or by "not introducing bugs to start with". That used to be possible with a proving mindset, but it's not anymore. A proving mindset is still valuable, but testing and debugging is now invaluable because we rely more than ever before on software we didn't write. Nobody can reason conclusively about async software that relies on dozens of npm projects.
* edit: I just thought of a better way to state what I mean: Having a proving mindset in a modern development environment means debugging and testing. The only way to cover all your bases and prove your code in all cases these days is to run it in all cases.
I'd never trust someone who claimed they proved their code in advance and didn't need to debug it because they don't write bugs in the first place. If someone said that to me today, it would be a like saying out loud that they have the exact opposite of a proving mindset, and then I'd prove it by running the code and finding the bugs.
Hmmmm, let me try to understand what you are saying better, then.
> Nobody can reason conclusively about async software that relies on dozens of npm projects
Indeed, as we've seen, a single simple dependency can break a lot of things, and in general it's not practical to go through all the dependencies in npm and make sure they are ok.
> And my larger point still stands - Dijkstra's quote is anachronistic "not introducing bugs to start with"
Dijkstra (especially after "goto considered harmful") had a habit of making hyperbolic quotes that didn't match what he actually believed. To understand what he actually believed, and what he actually did in his own programs, it's necessary to dig deeper.
Fortunately Dijkstra wrote a lot, and he wrote a text book which he used to teach beginning programmers the 'ideal' way to program. In his textbook, it's true, he began by teaching students how to prove program correctness. However, he quickly moved on from that, at one point saying, "we gain nothing here by going through the work of proving this program correct."
In other words, he was not trying to tell everyone to program by proof, nor did he do it in his own work; rather, he wanted people to have the proving mindset (as opposed to the "hey, it works, ship it!" mindset). Dijkstra was aware of Knuth's famous quote, "Beware of bugs in the above code; I have only proved it correct, not tried it." In the OP I tried to show that all three of these programmers have that mindset, but it manifests in strikingly different ways. There are many techniques that can be used, but the underlying principle is the same for all of them.
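The proving mindset in miniature might look like this (my own toy example, not from Dijkstra's textbook): state a loop invariant and argue from it, with assertions as the executable form of the argument.

```python
# Division by repeated subtraction, Dijkstra-style: the invariant
# n == q * d + r is stated up front and holds on every iteration,
# so correctness follows from the invariant plus the exit condition.

def divmod_by_subtraction(n, d):
    assert n >= 0 and d > 0
    q, r = 0, n
    while r >= d:
        assert n == q * d + r              # loop invariant
        q, r = q + 1, r - d
    assert n == q * d + r and 0 <= r < d   # invariant + exit condition
    return q, r

print(divmod_by_subtraction(17, 5))  # (3, 2)
```

The assertions don't replace a proof, but writing them forces you to know *why* the loop terminates with the right answer, which is the mindset in question.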
And I think you will agree that NPM is just crying out for a little more formality :)
Then maybe we've learned that choosing to use hyperbolic quotes for blog posts can lead to unnecessary disagreement, or perhaps in our case, violent agreement? ;)
> And I think you will agree that NPM is just crying out for a little more formality :)
Oh hell yes, and how.
>or perhaps in our case, violent agreement?
I updated the essay to hopefully make it more clear, as a result of our discussion.
And then there are the programmers that actually write code on big, complex projects. Somehow you don't hear about these people very much.
Don't get me wrong, good theory matters. But good, clean design also matters. Clarity of thought (about the problem space as well as the theory) matters. Good tests matter. (Someone said, "It's amazing how many bugs a proven-correct program can have.") Good process matters. It all matters.
Also, some don't care to write or proselytize.
1. George W. Bush, Caesar Augustus, Genghis Khan, Alexander the Great, Qin Shi Huang
2. Bob Martin, Donald Knuth, Edsger W. Dijkstra
Why do you think it is awful? The parts of his code that I've looked at (admittedly mainly qmail) have been great.
You remember wrong, I had no trouble understanding qmail, neither the structure nor the code itself.
Most programmers I know simply demand that the requirements be "clear". Unfortunately that often doesn't work, as it would require explaining half a year's worth of economic theory, and then another half a year detailing a client firm's experiences selling X / manufacturing Y / ...
There's just no working with that. You cannot think of all the possibilities, because you cannot attach a reasonable chance of actually happening to any of them.
When starting an engagement I always make a point of spending at least a few days manually solving the problem the software I write will solve. Doing so lets me know what I know and don't know, and it informs me of both the shortcuts used in practice and the political problems / attitudes towards automation.
Without reasonably correctly assessing all of this a software design needs to get very lucky to work at all.
Work on the server side.
Avoid anything that is multi-threaded.
Always read an order of magnitude more code than you write.
(Any code, in any way. Debugging included.)