In my first corporate job out of college (a NOC at an ISP) I was asked to update the documentation for troubleshooting quality of service issues. I checked our wiki for what was already there, and it was horrendous. I started to mentally thrash the person and was going to go confront them about it. When I checked the edit history, I was greeted by a single edit, under my username, dated a week after I started the job.
I learned a great deal of humility and compassion in that moment.
I experienced this same moment when I was ~24 years old and digging through a codebase I'd written a few years prior after returning to a former employer. Once you've hit rock-bottom there's no place to go but up :P.
I've subsequently noticed that those who are quickest to talk trash about nuanced engineering decisions and minor bugs are often the ones with the most fundamentally-indefensible coding practices (5000-line source files that throw innumerable compiler warnings, using deprecated frameworks, explicit silent failure modes, etc). Latent insecurity is a very real phenomenon.
My approach is to just do what I can to unfuck the code without wasting my life on it and then reflect on what went wrong so I can catch the pattern before it gets committed again. I just managed to stop a co-worker from making a mistake that cost my previous company an entire dev's full time attention for years.
Find incomprehensible code. Get the email address from git blame. Search for them on LinkedIn. If they're working as a developer in some senior capacity, then the code is probably a learning opportunity for me. If they're working as a scrum master, the code is probably as bad as it looks.
I dunno; I've known horrible senior devs who wrote solutions that were clearly resume driven development who ended up as tech leads elsewhere when there was a regime change and actual scrutiny began to be applied to them.
I had this happen _several times_ at an agency I used to work at. I stumbled across some really bad code in their main product, got irate and asked "who wrote this shit?". Ran git blame and discovered it was me. Ok, but this is still s#*t, I need to fix it. Spent a couple of hours and found no way of improving it, figured "well, it's ugly, but I guess it is what it is" and moved on.
A few months later I came across the same chunk of code... "Who wrote this shit?", and off we go again. I must have gone through this loop 4 or 5 times with the same piece of code.
Yeah a "why didn't I" thought occurred to me while I was writing the comment above. All I can say is that it was a long time ago now (15 years), and the notion of leaving notes in the code for future versions of myself (and others) is a much bigger part of my way of working these days.
More than once I've googled how to do something new and found the answer on Stackoverflow, written by me. I haven't even written that many answers on Stackoverflow.
It can sometimes be misleading. If I'm refactoring large parts of the code (e.g. splitting a large file into smaller ones), the original logic written by someone else is sometimes left intact, but git blame will point at me because I was the last one to touch those lines.
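For what it's worth, `git blame` can often see through this kind of refactor: `-w` ignores whitespace-only changes, and `-C` (repeatable to widen the search) traces lines that were moved or copied out of other files. A rough sketch in a throwaway repo (file names and contents are made up):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp" && git init -q
# alice writes the original logic in one big file
cat > big.c <<'EOF'
static int compute_quality_of_service_score(int latency, int jitter) { return latency + jitter; }
static int normalize_packet_loss_percentage(int lost, int total) { return lost * 100 / total; }
EOF
git add big.c
git -c user.name=alice -c user.email=a@example.com commit -q -m "qos scoring"
# bob splits the file without changing the logic
grep normalize big.c > small.c
grep -v normalize big.c > big.c.tmp && mv big.c.tmp big.c
git add big.c small.c
git -c user.name=bob -c user.email=b@example.com commit -q -m "split big.c"
git blame small.c            # plain blame pins the moved line on bob
git blame -w -C -C small.c   # traces it back to alice's original commit
```

Note that `-C` only matches chunks above a similarity threshold, so very short moved lines may still be attributed to the refactorer.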
Completely agree. My manager at the time thought it was a brilliant idea to have new people write the wiki when they started, because “they just learned the right way to do it from a senior person!”. This is not something I took with me into my own transition to management.
And sometimes the horrendous code really is written by someone else. I recently came across some horrendous code [1] that needlessly wrapped syslog(), and separated the format string from the point of use (the format string was a #define).
The thing is, I know the developer (he left our company over a year ago), and he's a great guy (former jazz musician), but let's just say he had some questionable coding practices.
The only thing I could think of is a centralized location of all the format strings could make things easier to translate to another language. But as I stated in my post, it's an internal tool; we don't offer it to other customers as something to install, all our developers speak English, and there are other ways to mark strings for translation (at least on Linux, which this currently requires).
Technically, a define is the same as hard-coding the string. Practically, at the point of use a define looks like a variable, which might suggest it's ok to use an actual variable there. Now it could take something from untrusted user input and you wouldn't notice.
On the topic of "who wrote this shit", I'd really like to plug the idea that some of the most high-impact documentation you can write is a good commit message.
Say you track down a bug, find a line of code that makes no sense, and `git blame` it, to discover that you wrote it yourself, 2 years ago. If the commit message is "bugfix flaky builds", good luck figuring it out.
If, rather, the commit subject is "bugfix flaky builds", followed by a message that explains what the flakiness was, why you think the change will fix it, what other bugs or limitations you were working around, and what upstream changes you might have been waiting on that prevented further work, you're in a much better position. Suddenly you have a lot more context on what you were doing, why you were doing it, why you didn't do it better at the time, and in some cases it can even keep you from making an obvious but subtly-wrong mis-step.
Similarly, if someone's confused by your code during code review, that's a great opportunity for either in-line comments, or commit messages, as appropriate.
Unlike PR discussions, tickets, emails, slack threads, wiki pages, or photos of whiteboards, commit messages + git blame has an uncanny ability to be exactly the documentation you need exactly when you need it. Good git history practice can be one of the highest returning investments.
What has gotten me the most value is having either the branch or the commit message tie back to a ticket somewhere. -That- has the original bug, the comment thread that led to the decision around why this particular fix, any additional comments around tradeoffs we were aware of, and what other options we dispensed with, etc.
A well written commit message might explain what the issue was, but it won't have anywhere near the context the ticket and resulting comment thread should have.
> What has gotten me the most value is having either the branch or the commit message tie back to a ticket somewhere. -That- has the original bug, the comment thread that led to the decision around why this particular fix, any additional comments around tradeoffs we were aware of, and what other options we dispensed with, etc.
That works until the bug tracker goes down or the company decides to use a different bug tracker and the import doesn't preserve information, or the link in the commit message doesn't resolve to the corresponding ticket in the new bug tracker. This is far less likely to happen to the git history given that it's distributed.
That being said, adding information to the merge commit message linking to the discussion or actually summarizing it in the commit message itself would definitely be an improvement. The merge commit has references to the commit the branch is based off of and the head commit of the branch, so you can limit git log output to just commits in the branch long after it has been merged.
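To make the last point concrete: a merge commit records the pre-merge tip as its first parent and the branch head as its second, so a plain revision range recovers the branch's commits even after the branch ref itself is deleted. A sketch in a throwaway repo:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git -c user.name=dev -c user.email=d@example.com commit -q --allow-empty -m "base"
git checkout -q -b feature
git -c user.name=dev -c user.email=d@example.com commit -q --allow-empty -m "feature work"
git checkout -q -
git -c user.name=dev -c user.email=d@example.com merge -q --no-ff -m "merge feature" feature
git branch -q -d feature             # the branch ref is gone...
git log --oneline HEAD^1..HEAD^2     # ...but its commits are still addressable
```

This only works for merges made with `--no-ff` (or merges that weren't fast-forwardable); a fast-forward leaves no merge commit to hang the parents on.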
These two aren't mutually exclusive. Tickets, however, have lower long-term survivability (in my experience). Outsourcing, migrations, there are many scenarios in which the original tickets become inaccessible over time - and some codebases do last for years and years. Meanwhile the repository content (and thus the complete version history) usually survives as-is.
I didn't mean to imply they were mutually exclusive; just that in terms of "most high-impact documentation you can write", I find ensuring I link the ticket higher than making sure I have a thoughtful commit message, for the reasons listed.
Fair that it can disappear eventually if you change ticket trackers or whatever; that's a risk of changing ticket trackers. Hopefully you maintain both for a bit, and once you're six months out or whatever and retire the old, you don't need as much context since things have moved on (and there's a generational effect in tickets akin to that in garbage collection; you tend to need recent things more often than old things, and the older, the less likely you are to need it).
But just in terms of "what would I rather have", a link to the ticket every time. And in terms of "what am I more likely to provide", a link to the ticket every time as well, since all the communication on the ticket came about out of need, while writing a thorough commit message is done out of preparation, and I, like everyone else, am WAY better at consistently doing things I need to do than preparing for possible future things.
> But just in terms of "what would I rather have", a link to the ticket every time
In practice over the past 20+ years, I've had to rely on commit messages far more than tickets, but a well-written ticket is definitely awesome to have. When I ran Engineering for a startup, one of the things we invested a lot of time in was making sure commits had good messages, tickets had good writeups, and the two were linked. We required a pull request to close a ticket, and our CI system would automatically append a link to the ticket to the PR when it was merged. It was such a level of awesomesauce.
Obviously my experience is my own, but in many cases it was the ticket system that changed vs. the version control system, which is why history wasn't always there. A lot of my early experience was at startups and I think I saw a version control migration only happen once (VSS to Git). I've even seen a couple places that didn't even have a ticket system. Unsurprisingly, those no longer exist.
In any case, I think the "correct" answer is proper commit messages AND solid issue tracking. My preference for commit when looking in the past was more around trying to understand particular changes to specific files or lines of code, which are more easily navigated in source control. A good commit message helps narrow down things when there is a long history, but a link in that message to the actual ticket would be a dream since that would likely have the larger context.
All that said, I have spent some time at a FAANG and neither commit messages nor tickets were useful at all there. Commit messages were usually along the lines of "fix a bug" or "add a feature" and the tickets rarely had more detail than "fix X" or "add Y". That was more of a symptom of the "go forward" culture there. Little time was spent making it easier for the next person since that wasn't really rewarded in the performance process.
The commit message idea always felt a little strange/off to me. It's a string that you can't (generally) fix/extend later for those who may seek this information. Also nobody except the committer can write them. (Imagine an explicit @docsguy role for documenting commits along with writing ticket-based documentation.)
What if VCSs used a single file or folder, like .gitcommits, where anyone could append any sort of info in the same commit, so it would be part of it? Then, when you commit a feature, you add to this file (or files):
@@ @@
+---
+added websocket support to the server
+ /ws - main socket
+ /ws-events - events socket
And a few commits later you decide to extend it, editing the same record:
@@ @@
+---
+added json-rpc over websockets
+ /ws-json-rpc
And the VCS would then extract these records for `git log`:
...
4509812 added websocket support to the server
0732691 <no .gitcommits message>
8712389 added json-rpc over websockets
A few commits later, @docsguy expands on the json-rpc entry:
@@ @@
---
-added json-rpc over websockets
+added lifetime-related json-rpc over websockets
+task: ./tasks/1873.md
+supports 'start' and 'stop' methods: ./doc/ws-lifetime.md
/ws-json-rpc
@@ @@
+---
+enhanced commit descriptions
A tasks/1873.md
A doc/ws-lifetime.md
4509812 added websocket support to the server
0732691 <no .gitcommits message>
8712389 added lifetime-related json-rpc over websockets
6034007 enhanced commit descriptions
Full commit messages would then be just diffs. Also, one could write a commit message gradually, alongside the sources being modified. Or write two commit messages at once (because we all do commit two+ changes sometimes):
@@ @@
+---
+refactored foo bar heavily, @docsguy please expand
+---
+fixed a bug in baz, didn't care to backport
...
0923423 refactored foo bar heavily, @docsguy please expand
fixed a bug in baz, didn't care to backport
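As an aside, stock git already has something close to this in `git notes`: annotations stored under a separate ref (`refs/notes/commits` by default), so anyone can attach or amend a description after the fact without rewriting history. A quick sketch (names and messages are illustrative):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git -c user.name=dev -c user.email=d@example.com commit -q --allow-empty \
    -m "added websocket support to the server"
# later, a @docsguy annotates the commit without touching it:
git -c user.name=docsguy -c user.email=docs@example.com notes add \
    -m "endpoints: /ws (main), /ws-events (events)"
git log -1    # the note is shown beneath the original message
# notes are editable and appendable after the fact:
git -c user.name=docsguy -c user.email=docs@example.com notes append \
    -m "see also: /ws-json-rpc"
```

The main practical caveat is that notes refs aren't fetched or pushed by default, which is probably part of why the feature never caught on as shared documentation.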
A well-written commit message should explain "why", this may partially consist of linking to external things (bug tickets, whatever). Although it is probably important to have enough of a "why" that the reviewer can make an informed decision if they need to go check the external reference or can continue with the review as-is.
Code review discussions are precious context. Unfortunately, git does not keep them. This is a major shortcoming of git. We need a new source code management tool that stores code review comments. It can lower the cost of software maintenance.
> If the commit subject rather, is "bugfix flaky builds", followed by a message that explains what the flakiness was, why you think the change will fix it, what other bugs or limitations you were working around, and what upstream changes you might be waiting on that prevented further work, you're in a much better position.
Those are very different needs.
"Why?" belongs in code as a comment. "How?" only sometimes belongs in a comment--generally if the code is "clever".
"What?" generally belongs in the commit message as it can touch multiple files and subsystems.
"Who?" and "When?" generally belong in your ticketing system.
On the other end, it's fun when you encounter a bug and go search for an answer and end up reading an answer on Stack Overflow that _you wrote_ 5 years ago and totally forgot about it.
Haha, this happened to me for the first time recently. Asked a question, didn’t get any answers, ended up updating my post with the answer I eventually found. Even had a couple of responses thanking me for the follow up. Five years later, ran into the same problem and forgot the answer, my answer was the top result on google.
I was once looking to see if there were any projects or discussions regarding <insert niche domain> via the google search "<niche domain> Hacker News". I clicked on the top link, read the comment, and was annoyed that it didn't actually address the <niche> things I wanted it to.
Then I realized I wrote the comment. I didn't address the <niche> things because I was looking for them then, and I'm still looking for them.
Equally fun is using a package or a part of it that you haven't used for a few years and going to read the docs for a certain feature... and realising you're the one that wrote those docs :)
I heard a saying once: "all developers should be embarrassed about code they wrote more than a year ago" as a (cheeky) measure of ongoing growth and development. :-)
Interestingly, I recently looked back at an old disk drive with my first "real" unfinished side-project which I started as a teen learning to code.
And, to my surprise, it was not _that_ bad.
Sure, it was full of naively implemented stuff that could have been implemented way better. But, even a decade and a half after, it was pretty clear to read, and it was decently organized.
And, in some ways, I preferred this code to the one I'm writing professionally. I was honestly prouder of myself as a teen than myself as a professional programmer.
The difference, I think, is that I had a clear idea of what I wanted to achieve for this project to be considered definitely finished. Since I started working on code (for money) I've always been working on never-ending projects. This industry (and me, as a professional) is obsessed with writing code for long-term maintenance and evolution instead of effectively finishing products.
So, to get back on topic, I'm not sure you really write better code with time. I mean, yes, you totally do, but it's not as important as learning how to code for others to be able to understand and modify it with constraints you can't even imagine.
And I saw a lot of people writing super nice open source side project and then, when you work with them on some professional level, well, they still write the same shitty code as everyone else.
I think you progress mainly by learning which kinds of code not to write. There is a large space of valid programs that can achieve your desired result, and it takes time to know which of them are unnatural or express "awkward" programs with undesirable properties.
It's like when you learn a natural language's vocabulary, but still need to figure out which conjunctions of words are not intelligible or natural phrases to fluent speakers: phrases that have ambiguous meanings or otherwise generate confusion.
I think there's a comparable notion of fluency in programming that goes beyond what most people mean when they say they're fluent in a programming language.
The code you wrote a year ago should have good points and bad. If it is just bad, then you either started programming a year ago, or should perhaps start thinking about changing jobs.
If you are actually improving then there should be some good in there (you got better from two years ago right?). If you can't identify that good, then either you are just chasing fashion (that perfectly good construct from two years ago is now out of fashion so it is "bad"), or you haven't improved your ability to tell good code from bad and just calling it bad because you are unfamiliar with it.
If you want to actually get better at code then you should do a real review of the code you wrote a year ago. Where did bugs occur in the code you wrote? Is there something you could have done to make it less error prone? Is there code you wrote that is easy to understand? What design choices can you make so that more code ends up in the good-code camp? How were the bugs detected? Can I integrate that into what I do for testing? Then actively practice moving your code in the good direction and away from the bad.
Anything less is chasing fashion, treating familiarity as good and unfamiliarity as bad. "As is" it makes for a pithy Coding Horror blog post, but it won't get you there. Pursuing greatness requires directed improvement.
Pretty funny: about 15 years ago I started looking back at myself a year prior, and decided that as long as that dude from a year ago looks like an idiot, all is well with my life. Hasn't failed yet.
I honestly find that kindof frustrating. I look at code I wrote a year ago and think - why didn't I do it this (better) way. And I also think - haven't I learned to write decent code by now? Why do I keep writing code like that when I should know how to write it better.
I don't mean this sarcastically, and you're probably a better coder than this comment implies, but what you described is exactly the mental content that should motivate you to write "decent" code.
Do you only feel this motivation when you look back at old projects? Perhaps the improvement could be in organization (code-level or otherwise) such that you do not feel lost or confused looking at your old stuff.
I've read that and think the saying is fundamentally wrong. If you're constantly embarrassed at code that is only a year old, I think it's much more likely that you are chasing fads rather than actually improving your code quality.
I would go as far as to say that it's a negative signal, because instead of improving code quality you are just changing things that don't really matter.
I've seriously considered removing some of the older semi-popular projects from my github solely because I would be embarrassed to show them to a prospective employer, or even other developers.
In the end I think it's actually kind of fun to look back through the code though, and think "surely I should have known about X back then... Why didn't I just do that instead."
As someone who looks at GitHub accounts as a resume, I can tell you that improvement over time is amazing to see. Assuming it’s clear what’s most interesting or recent.
After about 20 years of doing this (not particularly as a solopreneur), I understood the following: I sucked really bad yesterday compared to today and today I'm sucking so much more than I will tomorrow. And it makes me really happy that I can still say this. The moment I won't be able to say this anymore, is the moment I'm done and I can finally die.
Well, the fun thing is that at the moment you expire you can be sure that you sucked the least in your life but some young and inexperienced coder will still look at your code and ask “Who wrote that shit?” :-)
While I am not solo, I have still been around longer than anyone else and am the primary maintainer of a few repos. I tend to skip straight to "WTF was I thinking", because if it's weird, it was definitely me.
I am working on a fairly ambitious project. It is in the stages where we are nearing the end, and the tofu is getting firmer.
I had an issue, last week, where a bug was caused by an omission I left from an SDK that I wrote, maybe, eight years ago. It was a simple thing. I didn't add a "RefCon" to a server interaction, so I couldn't attach a reference context to a callback. This is a classic pattern, that I've used for decades, and I have no idea why the hell I didn't add it to this SDK. I suspect that it might be that the SDK was for a fairly wide audience, and I probably thought that they wouldn't understand it. So here I am, paying the price for underestimating my user base (which, so far, is Yours Truly).
Anyway, the "fix" was to implement a semaphore, and treat the call as blocking, which is awful. I really need to go back and change the SDK, but that's a fairly fundamental change, that I can't afford, right now (the SDK tofu is rock-hard).
I'm not a solopreneur but instead a corporate drone, and every time I git-blame questionable code, it's mine. But like others have said, if you're not embarrassed of something you wrote a year ago, that's not good.
That's another old saying - you should try to make code readable not only for the sake of other developers who will work on it in the future, but also for your own sake...
I once worked as a freelancer for a tiny company that at some point refused to pay me for one day of work. One of their reasons was that my code wasn't up to their standards. I could see in git that the code they complained about was actually written by their lead (and only) developer. They still didn't pay, though.
I loved those moments, the work was done, you moved on, and they came with all the ammunition they collected to wiggle out of payments.
The trick is to give them easily defusable bad ammunition when they "sneak" around to collect. If you disassemble all their arguments and have emails where they were informed, you can trash them thoroughly.
Unpaid, though. Because even good arguments can't help with broke little shops.
I talked to a lawyer, and it just wasn't worth going after them for this amount. I wrote it off as a valuable lesson and went to make more money elsewhere.
Unless you are a citizen of the same state the corporation is, you’ll need a lawyer when they file a removal petition to have the case moved to federal court on the basis of diversity jurisdiction.
And if you don't consult a lawyer to prepare for small claims court, you stand a decent chance to be blindsided by that or something else.
Do it anyway. For them to hire a lawyer to determine this, they have to pay quite a bit of money. It is quite possible they might be bluffed into paying up, or decide it is cheaper to pay up, and then actually do so.
If the initial action costs nothing, you have nothing to lose, but they will almost certainly lose something. If you do nothing it will cost you nothing also, but you have no chance of gaining anything at all.
They don't need to hire a lawyer to determine it, the junior paralegal in the in-house counsel’s office with a checklist will do just fine, since all the information needed to make the determination will be on the papers they are served with.
> If the initial action costs nothing, you have nothing to lose
Not at all true; one of the other standard techniques for getting a case out of small claims court is for the defendant to find anything about the interaction in question that they could countersue over that would raise the amount in controversy above the small-claims limit, even if it is something they wouldn't otherwise sue over. (The requirement that linked counterclaims be handled in the same case, combined with the small-claims limit, means the case gets moved when this happens.)
Haha, yeah. My reaction to the title was "my past self, usually." I've seen utterly horrible code written by people who I consider myself leagues beneath. Sometimes git blame is a lesson in humility.
It is kind of mentioned in the article but I think a lot of developers don't realise that there isn't just good code and bad code, there is a whole spectrum and the position on this spectrum is dictated by skill, experience, time pressure, money pressure and shifting requirements (assuming you ever had any!).
We pine for the perfect green field where all things are good, but there are probably zero companies where all the developers are experts, where the solutions are all unique and unambiguous, where the trade-offs between maintainability/performance and speed of coding are completely set in stone, where nothing has ever changed in strategy or framework, where no framework update has ever broken something and needed a dirty hack to work around it, and where you don't have managers come and go who are not 100% helpful or useful.
So better to look forwards always. Don't try and fix what is there unless it needs changing to move forwards. A lot of the code I have wanted to rewrite in my current company will be toast in the next 2 years as we are writing new apps so just don't lose sleep over it.
It's important to have empathy for those that came before you, but I do also try to be mindful of those that come after.
A comment explaining "why?" A quick refactor of your patch to make it easier to understand. A small README update. Those kind of little things add up and pay dividends.
Best write code with empathy for those that come after you -- including your future self.
Developers like you are few and far between. I find it painful when others devs constantly re write working code, instead of moving forward. It’s so easy to trash what is there. Very few have the maturity to work with existing code without complaint.
It was me ... several times I have found a bug or code smell and then been surprised that I was the original author. For the last fifteen to twenty years, I've generally found looking at code I wrote six months ago equally distasteful. So now my default behavior is to assume the code met the business need at the time, acknowledge that I'm continuously improving at my craft, and, finally, I've gained a joy in spending part of my time doing maintenance programming.
I'm a strong believer in continuous refactoring. Improve existing code when you touch it. Defer architectural choices until the moment you have enough info. And leave cleaning up for the moment it starts becoming messy, not before.
That implies code is never perfect. Not even good. It's clunky, cobbled together, experimental, or just plain stupid. But always just about 'good enough' to solve the issue at hand.
I'd just add that the urge to refactor whenever possible has definitely introduced bugs for me, because refactoring also has to be done with consideration; some things are weird for a reason you don't see at first glance.
And refactoring can also hurt you, or another person who was used to that code in its old shape and now misunderstands it.
I'm an avid TDD developer. Red-green*refactor*. The latter, often overlooked, is IMO by far the most important part of TDD. But the refactor is only possible because of the tests you wrote, asserted, and tested (testing the tests) in the red-green phase.
It gets hairy when a refactor needs to also refactor the tests - often a sign that the tests were lacking in the first place (and the more reason to refactor them). In which case I try to refactor them not in lock-step but decoupled: first refactor the tests without touching the SUT. Then refactor SUT (and then, most likely, another round of this)
There is no fool-proof way. There will be bugs. There will be regressions. But that is an artifact of "change", not of refactoring. I'm also a believer in "never fix something that ain't broken". Which, unfortunately, only works for software that is never upgraded, has no dependencies, runs on hard- and software stacks that never change, and has no changing business needs ever. I.e.: non-existing software.
>But always just about 'good enough' to solve the issue at hand.
I think this is context dependent. I generally agree with this statement on relatively low-risk projects. The problem with "good enough" is that it often becomes a rationale for our cognitive biases to take the easier route. I don't want someone doing that on, say, safety-critical code. Maintaining high standards is a way of buffering against those cognitive biases.
I suppose, but by that same token standards would be the definition of pre-defined "good enough". In my experience, the nebulous nature of the term is usually a means of rationalizing a sub-standard effort. The benefit of defining that threshold upfront is that it's hopefully more objective, before you let cognitive biases influence your decision making.
It really comes down to understanding why the goalposts have moved. Is it because you have more information to reduce the uncertainty about the risks that standards are meant to be mitigate? If so, great! If not, it's a red flag you may be responding to something else that increases risk, like schedule or cost pressure.
The point is that there is no one predefined "good enough", but rather the level of quality that code needs to reach is context dependent. That is why you end up constantly refactoring a little each time you touch the code as that context has usually shifted.
I disagree. There are lots of examples of standards that define what is "good enough."
For example, NASA has different standards depending on risk categorization and the predefined threshold of quality gradually gets higher as the use gets riskier. A business application is held to a much lower level of quality than software for a robotic mission which is lower than a human rated development effort.
We are laying out an approach to software development and refactoring, and a model for how to view old code you come across that seems bad.
You seem to be stuck on the term "good enough" and arguing semantics that don't make sense. Yes, sometimes there are standards that you need to meet. Sometimes just meeting the official standard is not "good enough" and you need to do more. "Good enough" is inherently contextual; you seem to agree with this, but still seem to be arguing against using the term?
Good enough is usually an excuse for vague and ill-defined practice. If you don't have a well-defined "good enough" you probably don't have a mature process. If you don't have a mature process, you probably shouldn't be writing critical code. Hence my original comment that it's not a good mindset for high-risk applications.
Well defined "good enough" often looks like a standard. Those standards should be risk based so that one person's biases don't result in a different level of risk mitigation than another person's. That risk is what contextualizes what is "good enough". I'm sure if you asked the Boeing managers, they felt their CST-100 software was "good enough", but the relevant safety standards say it wasn't "good enough". Since both sides can use the term, it makes the term somewhat useless. Like you say, there is no singular "good enough" so the question becomes: Good enough...for what? Good enough to meet schedule, or good enough to not risk crashing into the ISS? My main issue is that "good enough" often means "undefined". When people say "good enough", I've found it often means "we don't know what we need, but I'm sure we'll know it when we get there." I think that can be a bad approach to software development when the risks are high because it opens one up to cognitive biases that lead to subjective and poor decision making.
If you're saying "good enough" is precisely defined and based on risk, then I agree. But that is not how I've ever seen the term used in practice. It's almost always a nebulous term, which means you've only vaguely defined the risk. Poorly defined, subjective judgement belongs more to art than engineering, and certainly not to safety-critical engineering. "Clunky, cobbled together, experimental or just plain stupid", as the OP said, just doesn't cut it in critical applications, even if the developer claims it's "good enough", and doesn't strike me as a professional mindset.
You have a highly specific definition of good enough that doesn't match my experience with its usage. Every time I have discussed whether something is "good enough", the discussion centers on what the relevant risks, needs and priorities are. Indeed, adherence to relevant standards should be part of the "is it good enough" discussion, but following standards alone doesn't absolve you as a developer from assessing the current contextual needs and risks.
Indeed, blind adherence to standards is bad, as standards are not perfect and are designed to fit a general use case. You need your developers/engineers to think about the full context and assess whether their design will hold up under real-life conditions and not just those that were prevalent when the standards were created.
I think you might be subjectively reading too much into it to make a point that doesn't need to be made.
Look at the actual wording of the post I originally responded to:
>"But clunky, cobbled together, experimental or just plain stupid. But always just about 'good enough' to solve the issue at hand."
Can you imagine a discussion about relevant risks and priorities that uses that definition of "good enough"? I can't, especially with safety-critical code.
I'll give another example: the NTSB report on the Uber autonomous driving accident gives a good breakdown of events. Through that report you can see the developers programmed a delay (they call it an "action suppression") due to nuisance braking etc. It's hard for me to imagine a software engineer programming a static delay into a time-sensitive, safety-critical system if they understood the risks (even if the mitigation was the human driver, they didn't seem to have a good understanding of human factors engineering). Yet someone along the way thought the software was "good enough" for production. It's speculation on my part, but I doubt you'd find a good FMEA or hazard analysis on that system. This is my big worry as SV mindsets get into safety-critical systems: the general "move fast and break things, because it's 'good enough'" attitude doesn't translate well to systems where lives are at stake. That is what I was responding to: the overgeneralization that clunky code (in the OP's words, not mine) is "good enough".
> Can you imagine a discussion about relevant risks and priorities that uses that definition of "good enough"? I can't, especially with safety-critical code.
That's not a definition, but a description. It was pretty clearly not a description of "safety-critical code".
You've inserted a context of safety-critical vehicle control systems into a comment responding to an anecdote about writing PHP4.
You say things like:
> Poorly defined, subjective judgement belongs more to art than engineering
That only seems true if you are extremely lucky and/or early in your career. It is extremely common for software engineers to face poorly defined, nebulous problems that you simply don't have the information to solve in an objective manner. The frequency with which this happens is why the approach described by the top comment is so effective. It is a process of continual improvement where you try to avoid making unnecessary decisions until you have better information to make them with.
What changes with safety-critical code is how you gather that information (and what other processes you build to supplement developer judgment). You try to gather that information with as little risk as possible. Experimental, clunky, cobbled-together code has a place in this process, but not as a part of live, uncontrolled testing. You run it against models as you prototype solutions, and then you refactor or rewrite that code to be good enough to test in riskier situations.
The quality of the assessment matters, but there is really no problem with people making an assessment of whether the code was "good enough" for its context. In fact, I would refuse to work with a developer who refused to make such assessments. Standards and outside analysis are important, even in non-safety-critical systems, but they are no substitute for a developer making careful assessments of whether code is good enough.
This is part of the point the article and top comment are making. You can assume that the person who "wrote this shit" is an idiot and mock them, but you will learn more if you try to understand the context that drove that person to make the decision, how well that decision worked out, what it cost them and what it gained them. This is how you avoid cognitive biases, not by refusing to accept code that is truly "good enough" in some quixotic pursuit of impossible-to-achieve perfection.
> That is what I was responding to: the overgeneralization that clunky code (in the OP's words, not mine) is "good enough"
I think you are tilting at windmills here. There is no such broad generalization. Clunky code is often not good enough, which is why it needs to be refactored "the moment that it starts becoming messy" (which is, I'm sorry to tell you, a context-dependent, subjective judgement call).
But clunky code can be fine or even great. I'll take a defect free clunky code base that solves a stable problem over an elegant rewrite that adheres to the latest coding standards any day.
Just out of curiosity, what do you think standards are meant to address?
In my experience they are meant to mitigate risk. Now maybe that risk is not credible on a particular project which means that standard doesn't apply. But in all other cases, not adhering to standards means you are incurring additional risk, by definition.
Now maybe you're just saying, "Yeah, but those are acceptable risks", in which case I don't really think we're saying anything different. My experience working on safety-critical code uses standards that explicitly state what risk is acceptable, so there isn't much wiggle room for wishy-washy statements like "good enough". They aren't esoteric, abstract standards of practice (and maybe that's where our personal experience diverges). It becomes relatively clear, with a good testing plan that maps to said standards, whether that risk threshold was met.
It's easier to illustrate with hardware, but the same principles apply. Say there's a standard that states each critical component must have a specified reliability level. You could either install a single component that meets that reliability level or design redundancy so the overall reliability meets the standard requirement. What you can't do is install a lower-reliability component and claim it's "good enough" unless you change the definition of critical. And that's what sometimes happens in practice; people get through a design/build and realize they didn't meet the pre-defined/agreed upon standard and so they perform mental gymnastics to convince themselves and others that the component isn't reaaalllllly critical as originally defined. And that discussion shouldn't be based on subjective judgement. As the sign above my old quality manager's office said "In God we trust, but all others must bring data."
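The redundancy arithmetic above is easy to sketch. The numbers below are hypothetical, since the comment doesn't give actual reliability figures; the point is only that redundant components can meet a threshold that a single component cannot:

```python
def parallel_reliability(r: float, n: int) -> float:
    """Overall reliability of n identical, independent components in
    parallel: the system fails only if all n components fail."""
    return 1 - (1 - r) ** n

# Hypothetical standard: each critical function must reach 0.999.
required = 0.999
single = 0.99  # a component that does NOT meet the requirement on its own

assert parallel_reliability(single, 1) < required   # one component: not good enough
assert parallel_reliability(single, 2) >= required  # two in parallel: 1 - 0.01**2 = 0.9999
```

This is the data-driven version of the discussion: either the installed configuration meets the predefined threshold or it doesn't, with no room for "it's probably fine".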
>I'll take a defect free clunky code base that solves a stable problem over an elegant rewrite that adheres to the latest coding standards any day.
This might be part of where our opinions diverge. My experience in hearing "good enough" seems different than yours. It sounds like you're using it as "it solves the problem, so it's good enough". My usual experience is more along the lines of "it doesn't meet the standard, but it's good enough." The issue in the latter case is that I think there's some hubris in assuming one fully understands the problem. If you do, then you should have no problem bringing data to support that claim and we'd have no qualms. But if you can't, one thing standards are good at is helping make you pause to consider all the aspects of the problem you didn't think of. Part of that hubris is the assumption that it's a stable problem. Standards capture the lessons learned when people realized it's not so stable. So clunky code may be good enough to solve your conception of the problem, but that still may not be good enough if your conception of the problem diverges from reality (see: 737 MAX MCAS, Uber, CST-100, etc. as already brought up).
Like I said earlier, you got fixated on your own understanding of what "good enough" means and didn't actually pay attention to what people were actually talking about. Instead of learning something, you went on a diatribe and repeatedly misrepresented what people were actually saying.
I've seen people adhere blindly to standards and I've seen people ignore standards without a good reason. Both are failure modes that can increase risks.
I also think you are grossly simplifying what caused the engineering failures you mention. They have a lot more to do with systemic pressure and misplaced priorities than they do with engineers making contextual assessments of risk beyond what is stipulated in the standards.
I don't think I was misrepresenting. I think it just boils down to we both read the OP differently. It's possible to have different takes without it being ascribed to malice or deliberate misrepresentation.
>I also think you are grossly simplifing what caused the engineering failures you mention.
I don't know how you arrived at this conclusion. I am in no way simplifying. I said those types of systems are complex to the point that subjective determination of good enough isn't adequate, and that standards help fill those gaps of understanding. I've literally worked on some of those systems and have listened to people at the highest levels of some of those organizations about the nature of those failures. I've withheld approval of plans on one because I witnessed firsthand how external pressure corrupts what is meant by "good enough". If you know more intimate details on any of those examples, I'm all ears.
>They have a lot more to do with systemic pressure and misplaced priorities than they do with engineers making contextual assements of risk
This is the exact point I've been making, but I think the two are intertwined. Those competing pressures make fertile ground for rationalization and cognitive bias to influence decisions to change the definition of good enough more in line with the verbiage of the OP (again, there was no discussion of risk in that post, you shoehorned that into your interpretation. There was only discussion of clunky, stupid code which was blessed as good enough). You seem to imply risk understanding occurs in an objective vacuum, and I disagree. That's why I think subjective determination of good enough falls short in some scenarios. I'm not sure if you've been so focused on being right that you've ignored that central point, or if I'm just not communicating it effectively, but it's not really worth belaboring further.
> It's possible to have different takes without it being ascribed to malice or deliberate misrepresentation.
Which I did not do. I don't think you are doing it deliberately or I would have ended the conversation long ago.
> You seem to imply risk understanding occurs in an objective vacuum and I disagree.
Not at all, where do I imply that? It is actually the opposite. I am arguing against your position that risk assessments should happen in a vacuum and be based purely on standards with no need or room for subjective reasoning.
> again, there was no discussion of risk in that post, you shoehorned that into your interpretation
While the top comment did not explicitly mention "risk", the commenter did reply to you, saying:
>> Isn't that by definition what good enough means? That on safety critical code "good enough" is a very different level than on a throwaway-script?
> That's why I think subjective determination of good enough falls short in some scenarios.
I've repeatedly said that subjective risk assessment is usually not enough:
>> Standards and outside analysis are important, even in non-safety-critical systems, but they are no substitute for a developer making careful assessments of whether code is good enough.
>I'm not sure if you've been so focused on being right that you've ignored that central point, or if I'm just not communicating it effectively
I think your communication issues are on the listening side, as you keep projecting a strawman onto people rather than actually listening to what they are saying.
But since we both appear to feel that the other one is not listening, that is probably a clue this conversation should end. I do encourage you to take some time carefully re-reading the thread to see if you can figure out why you seem to misinterpret so much of what people say.
What an odd and condescending take, even as you try explaining to me the failure modes of a system I actually have firsthand experience with. Doesn't that seem like something that should give you pause to self-reflect?
I understand your point. What you seem to be missing is that we're talking about two different things. I agree that decisions should be made in the context of the risk of the engineering application. That's trivially apparent, to the point where it's almost confusing that you would feel the need to bring it up (ad nauseam). It's also not particularly interesting, because just about everybody will agree with that. What I'm talking about is when people fall prey to cognitive biases to the point where they can no longer make accurate risk assessments. That's a much more interesting problem, because the engineering world is full of cases where otherwise smart engineers make terrible judgement calls, all the while telling themselves that they understand the risk. I literally brought up cognitive biases in my first post, and instead of responding to what I'm actually discussing, you just keep underscoring a trivially simple point.
I think you're reading your own interpretation into what I'm trying to say and then somehow twisting it into being a miscommunication on my part. When I'm saying subjective judgement can lead to bad decisions, I am not saying "we take all the unique and contextual facts into consideration and arrive at a reasonable subjective risk assessment for this scenario". I'm saying people's cognitive biases can lead to them discounting risk without good evidence because it results in a decision they are emotionally attached to. E.g., "I don't want to miss schedule and look bad, so let's rationalize away this risk that really wasn't mitigated." That is not an objective risk-based decision, it's a biased emotional one. They may think it's "good enough" to get the job done, until it's not (as in the cases I specifically brought up).
I already explained this but it didn't seem to sink in, so I'll reiterate one last time:
You seem to say your definition of "good enough" is based on good, risk-based judgement. I already said if that's the case then we don't disagree. But I also said that is not the context that the term "good enough" is generally used in, in practice. In my experience, it's used to justify a sub-standard effort, and I've given you concrete examples of that. That point of divergence between what I'm saying and what you're interpreting seemed to fly right by you because you're more concerned with arguing, and there's some irony in you pointing out that someone else isn't listening.
> You seem to say your definition of "good enough" is based on good, risk-based judgement.
Not at all. I said that "good enough" is a contextual, subjective judgement and is a critical part of software engineering. The idiom "good enough" says nothing about the quality of that subjective judgement, despite your insistence that it does.
> I'm saying people's cognitive biases can lead to them discounting risk without good evidence because it results in a decision they are emotionally attached to
Of course cognitive biases (and all sorts of other things) can degrade judgment. That doesn't mean that we should try to get by without it. We seem to be in agreement on this.
> that is not the context that the term "good enough" is generally used in practice.
Here you are simply wrong. "Good enough" means "adequate but not perfect", not "sub-standard". While it is possible you have been operating in a cultural bubble where the term is only used to mean "sub-standard", in this context the meaning of "good enough" being used has been clarified repeatedly, but you insist that only your experience with the term matters and thus everyone must use your definition. Instead of working to understand what people are saying, you assume they are using your definition. Perhaps this sort of assumption explains why you somehow missed noticing everyone who uses the term in the normal way. Seriously, go look at some definitions and try asking people what they mean when they use the term.
> That point of divergence between what I'm saying and what you're interpreting seemed to fly right by you
Another example of you not really listening. I've repeatedly pointed out this exact divergence.
Note that you specifically left out the contextual clue where I said "in my experience" that is not how it's used in practice, as an immediate follow-up to that sentence about usage. I am not making a general case; I am talking specifically about safety-critical code from the very start. I am not saying everyone must use my definition in the general case. I'm saying that in a very specific subset of cases, there is an objective definition, for good reason.
Let me try a different tack to see if we can get off this pedantic merry-go-round. You've agreed that cognitive biases affect decision-making. So let's say as a developer you are working on safety-critical code that is in danger of being over schedule and over budget. Your manager says that if the project isn't successful, your company will lose future work to a competitor, and that might leave you out of a job. But if it is delivered on time, your company will get a massive windfall in terms of future contracts and profits, likely leading you to a big promotion. What do you do to ensure those cognitive biases do not influence you to incorrectly discount risks and ship the software early, before the risks are properly addressed?
(Btw, it's a really bad method of communicating to use absolute terms like 'everyone'. For one, it makes it look like you think you're smarter than you are and more importantly, it's easily falsifiable. That type of communication belongs more on r/iamverysmart than HN)
> always just about 'good enough' to solve the issue at hand
I've seen this approach really bite teams that only focus on the cost of implementing application behaviors rather than the long-term costs in terms of maintenance and systems complexity. If too many poor design decisions are made when a project is greenfield under the premise of "good enough" it can create technical debt so bad that the devs can't extricate the debt from core product features further down the line.
Me too. But that is typically a problem with how they define "good enough". And how that evolves over time. If "good enough" means "what we decided on 7 years ago" or "It works on my machine" then certainly that term is not covering what it seems to cover.
"Good enough" should, obviously, take future maintainability, security, new hires, evolving standards, moving business-cases and changing markets into consideration: i.e. overall complexity, reusability, consistency, maintainability etc.
Or, to put it differently: if your "Definition of Done" is not evolving or changing over time, you can be sure that the "quality" part of that DoD will be sub-par in a few years and your project's development will grind to a halt. (Edit: that, or it is so vague and open to interpretation that any new hire or insight can already change it. Which may be a good thing, IDK.)
There's a bunch of little scripts I've had to write for my workflow. Time is short, so I've taken parts and pieces from experiments, treated them as black boxes, and written interfaces to make them handle way more than I ever anticipated. It usually looks bad and has little useful documentation for the heavy math inside, but it works and will work for years. I try to spend time on making the interface clear and friendly for anyone else who uses it after me, but the internals are a nest of wires and obtuse mechanisms that I can barely remember how they work.
We had an Excel spreadsheet that was used for at least a decade to process some measurements. There were a lot of magic numbers shoved into the equations to handle some issues with the equipment, along with math that I'm surprised Excel can handle. The results were correct, but it was a total black box and the steps to get to them were nearly incomprehensible. A coworker decided to spend some time upgrading it to Matlab to allow for better interfacing with new test equipment we were buying. She thought it would take a week; it took months. Talking to the original author was useless, as he couldn't remember how he built any of it. Finally she got it working with both the old and new test equipment and has now properly documented everything, so it explains the use of any magic numbers. I did not envy that job at all.
I was honestly surprised the author's answer was "my friend Torben" and not, "oh, it was me".
Part of the reason I like to write as little code as possible is that anything over a year old has some antipattern that I've come to loathe. I can't hate code that didn't get written in the first place.
Many people in this thread are saying they are surprised by their own shitty code from 6 months ago. I read this everywhere on the web. It's as if I, too, should find my code from 6 months ago horrible.
I don't know. I tend to remember what code I wrote, and recognize my own code when seeing it, even years later.
My code from 6 months ago looks good to me. My code from 10 years ago looks "reasonable, if a bit messy". I remember what I was trying to achieve with what level of knowledge I had, and often what I had in mind at this moment (sometimes including unrelated feelings).
I'd probably write this code differently today and it's clear I learned things in the meantime, but a little bit of linting helps turn this code into "reasonable, if a bit less messy" (mainly limiting line length). I definitely find my way in this code and even find it kind of enjoyable. Sure, there are some details I don't remember but things are mostly here.
Do I have an exceptional memory, an exceptional tolerance to execrable code or both?
Re: the article, I've definitely felt "who wrote this shit" but I'm past it. Most surprising things have an explanation and this explanation should be sought before the final verdict… which is often not needed anyway. It's just a negative feeling that achieves nothing.
I suspect everyone was writing badly informed code at one point - just for some of us it was a long time ago, and we were young.
For example, I wrote a javascript image editor that stored images as hex strings internally, and saved them to file by screenshotting them. Completely mad design. But that was ~23 years ago, long forgotten by everyone except me, and I was age 14 at the time.
This is a young and growing industry; someone who coded like a 14-year-old ten years ago might simply be a 24 year old today.
I still don't get this. Not knowing everything I've learned over the past years did not prevent me from writing clear code back then. I hope I'm writing better code now because I actively reflect on how to write correctly, but that doesn't prevent my old code from being reasonable.
If I found my code from a year ago bad despite all these years of programming, I'd be worried I'm not in fact any good. The whole point of taking care to write good code is that it will still be readable and maintainable (i.e., good) in a year and more, and I'm sure I can achieve this, and could back then.
I'm also able to recognize good code from other people from years ago; if I systematically thought my code from one year ago was bad, that would mean I'm consistently bad too.
I was "not any good" relative to how much better I am now. I'm happy and fortunate that trajectory is continuing. Thirteen years and counting, in fact.
Who knows, maybe you're a better coder than I am and you just "got" it instantly! It's immensely important for me to keep growing, so I guess it's a good thing I may be so much worse than you?
I have both experiences. Very broadly speaking, it's code I write for work where I have the "this is terrible" experience (twice something has annoyed me so much I trash-talked it in the team chat, then ran "svn blame" and found I'd written it ~4 years prior), while code for personal projects in general I don't.
I suspect it's having to work around teammates' styles and integrate my own into them that causes friction at the edges, but I'm not really sure.
It's a bit of confirmation bias though: code you wrote well is less likely to come back and bite you in the ass. 20 years on, I still feel the 6 months ago thing.
I did some freelancing (of the "setup an ecommerce site on a vps with some customisation and plugins for mom-and-pop shop" variety) in the 90s. No big jobs just lots of small ones to make some beer money.
The usual process was install osCommerce, add some extras, zip it all up and manually deploy it to a VPS and transfer the credentials to the owners.
The work was mostly found by word of mouth so it wasn't unusual to get emails asking for assistance/to do work out of the blue.
Had one such email to change an existing shop up a bit. Probably no more than a few days' work. Received their credentials and ssh'd to the host machine to take a look. Scanning the source files of the plugins had me head-scratching in a couple of places; I decided this was put together by someone terrible. Scrolled some more and saw the author's details. Oops. So that's how they got my email address.
I find it a refreshing experience to look at your own code from 10 years ago, or even 5 and think "Who wrote this shit?". If I ever look at some of my old code and feel good about it, that means I have failed to improve.
I find that half the time I'll wonder who wrote it, realize I wrote it, start to rework it, and then realize why I wrote it that way in the first place, add a comment and move on.
I prefer to add comments which contain some rationale, like "Changing this causes race condition headaches in the frobulater." Or "Ugly... but it handles the edge case of XYZ in the foobar config"
If I don't then the comments either have the effect of a) scaring future me, or b) getting ignored because I think that I can do better now.
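As a sketch of what such rationale comments can look like in practice (the function, config keys, and incident referenced here are all made up for illustration):

```python
def frobulate(batch, config):
    # NOTE: sort a copy rather than sorting in place. In-place mutation
    # caused race-condition headaches in the frobulater's worker threads,
    # so don't "optimize" this away without re-running the concurrency tests.
    ordered = sorted(batch)

    # Ugly, but it handles the edge case of a missing "foobar" section in
    # the config, which some legacy deployments still ship without.
    threshold = config.get("foobar", {}).get("threshold", 0)

    return [item for item in ordered if item >= threshold]
```

The comments explain *why* the code looks the way it does, so a future reader (including future you) is neither scared off nor tempted to "fix" something that was deliberate.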
I had a nearly identical experience a few months after starting my first job out of college. Huge software firm. I was looking through code with a senior colleague, and something looked really off to me. I literally said something like, "How could the person who wrote this be so stupid?" My colleague replied calmly, "I wouldn't presume to know the mental capacity or state of the person who wrote this code at the moment they wrote it."
What he said humbled me and I basically never said anything like that ever again.
I also had the opposite experience. Two years in, wrote some code for a library that I thought was pretty clever. One day I got pulled into an online chat with a couple principal devs -- phenomenal engineers, respected the hell out of them -- and one was asking the other about this piece of code. He said something like, "Who wrote this shit? It's so complicated, I can't figure it out."
So from that moment I understood that you have to be careful not to be too clever when you write code. Changed my life.
To this day I'm so grateful to have had such amazing, patient mentors early in my career.
Any time you start having negative sentiment, anywhere in life, directed at the creations of other humans or the humans themselves I have two phrases I use. Four words. Be humble. Be curious. It’s saved me a lot of angst. I think as a developer this is a great mindset, it has helped me a lot, at least.
One of my first jobs while in college was adding features to a VB6 application using .NET 2.0 interop. The developer who wrote the majority of the VB code had passed away from cancer a couple of years prior. There were many times when I would stop and think "who the hell would write this?", and then pause to reflect that the code I was complaining about was likely written in a completely different mindset, amid battles I had no idea about. That shut me up pretty quickly. We're human after all, and humans have many flaws.
When it's written by someone else, I start out assuming it's not shit at all, and I simply don't understand it yet. So I ask the person who wrote it (if available), and often they tell me it was a hack and please fix it.
Oftentimes the what is not known by anybody but the who, and if you're lucky, the who is still around to tell you. If they're not, quite often you're screwed, because that shit is the only place where the business rules are captured.
This is a good mind set to cultivate, but then I remember working on non-legacy projects.
My current project was started in the middle of 2021. The same people have been working on it for 6 months. I've been working on a related project and shifted over at the beginning of the year. I know the constraints and there haven't been deadlines.
The staff engineer decided to pick technology by what's cool. They haven't invested in development workflows. The infrastructure is held together manually by those who have admin access. Subsystems that need to communicate don't.
I have much sympathy for legacy projects, but those projects got to where they are because people made poor decisions. My current project is well on its way to being a legacy project in just 6 months.
The team I'm on today doesn't say no to requirements and scope creep. They are too invested in tech and aren't adjusting. They don't cultivate truth about how parts of the project aren't coming together.
I blame problems on leadership much more than I do on individual contributors. I wish the commit log included the tech lead and managers on the project when code was written.
This field is growing so quickly the ratio of experienced to new devs is out of balance. It takes a critical mass of experienced devs on a team or in an org or the scenario you're in is the default.
And even that critical mass won't be enough if leadership implicitly or explicitly reward shiny visible progress and ignores structural work and integrity.
The experiences we've probably all seen of new buildings going up quickly and then looking like trash just a couple years later are great examples. The problem is that approach "works" during boom years because everyone can move on fast enough to make the problem someone else's.
Am I the only one who just... writes good code? I look back on code from when I was a junior and sure, it's bad. But code from 4 years ago? It's still perfectly fine. I don't believe that it's normal for long-time seniors to think their code from only a year ago is consistently bad, yet there are a lot of comments in here to that effect.
Let's face it: often (a) getting it done is more important than (b) getting it right. Often (a) is not only faster but also the only way to get to profitability. Unless you write code that is touched many times or is performance-critical, (b) will only add costs in the short term. Even if you spend more time fixing (a) later, it's often worth it because it made more money than it costs to fix; your company wouldn't even be there if all code had to be of type (b).
Haha I’ve been working at the same place for 9 years. Last year I came across some code and thought “who the fuck wrote this shit”. Looked at the history. I wrote it 7 years earlier. Always fun to dunk on old code but still appreciate it pays the bills.
If you hang around long enough you'll end up with code you once wrote that was maybe a quick fix, maybe you thought it was brilliant, but now appears to you as a steaming pile of crap.
Sometimes it may however still be the best solution but your future self was unable to figure out why because your former self was too lazy to properly document the code.
I maintain an analytics tool for a large newspaper. I wrote the code for it in PHP 13 years ago. The app has been running without interruption for 13 years and is used by hundreds of employees every day. Still, it needs a bit of maintenance (APIs change). The code is scary and I will probably maintain the project for the rest of my life because anyone else would pull their hair out. I'm not proud of it and write better code in the meantime, but rewriting all the code from 13 years ago would be way too expensive for the company.
TBH this is a big motivator for my code comments - “Ideally this code would do X, but because of constraint Y we are settling for Z. If you can think of a way to achieve X without the compromise then by all means burn this module with fire, and add me as a code reviewer so we can celebrate together.”
We have a rule that if you have to leave shit code as it is for a serious reason (time constraints, shifting requirements) you must leave a TODO in the code which points to a freshly created issue in the tracker which explains what's wrong with the code and how it can be fixed. The ideal is that these issues eventually get fixed, which is often not the case (new features are prioritized over tech debt etc.), but at least new devs will immediately see that it's a known problem and that there are known solutions.
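A minimal sketch of what such a marker could look like (the tracker ID, function, and surrounding code are all made up for illustration):

```python
# TODO(TRACKER-1234): error handling here is too blunt -- every exception is
# treated as retryable, so permanent failures burn all attempts before giving
# up. Known fix: classify errors first and only retry transient ones. See the
# tracker issue for the full write-up. (Issue ID is hypothetical.)
def fetch_with_retry(fetch, attempts=3):
    """Call fetch() up to `attempts` times; return None if every attempt fails."""
    for _ in range(attempts):
        try:
            return fetch()
        except Exception:
            continue  # the shortcut the TODO complains about
    return None
```

The value is that the comment names the problem and the fix, while the long explanation lives in the tracker where it can be prioritized.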
> We have a rule that if you have to leave shit code as it is for a serious reason (time constraints, shifting requirements) you must leave a TODO in the code which points to a freshly created issue in the tracker which explains what's wrong with the code and how it can be fixed.
This seems like a really sane thing to do!
In addition, if you want to keep track of the commits and the context behind them, i've found that merge/pull request descriptions are also really nice for this!
Back when i had to struggle with an Eldritch DB schema that someone wrote and had to patch in new functionality, i ended up painstakingly mapping out how it corresponded to the business concepts/objects (which was pretty loosely) and threw that diagram into the merge request, because sadly otherwise the schema still wasn't all that clear...
...just to have that very same diagram save my hide when i had to go back to it months later to update some of the code, which necessitated rediscovering how everything works.
Now, whether things belong in the issue tracker or somewhere that's more close to the code repo is probably just a cultural question, but i'd say the main thing is to have some place to store information like that.
Staying close to the code where possible always wins, I think. One code base I worked in had a particularly complex state machine, and right above its main function was a giant ascii art diagram of said state machine. It was perfect documentation.
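A toy version of the same idea (the states, events, and diagram are invented for illustration):

```python
# Hypothetical job-runner state machine -- the point is that the diagram
# lives directly above the code that implements it:
#
#            submit           start            finish
#   PENDING -------> QUEUED -------> RUNNING -------> DONE
#                      |                |
#               cancel |           fail |
#                      v                v
#                  CANCELLED         FAILED
#
TRANSITIONS = {
    ("PENDING", "submit"): "QUEUED",
    ("QUEUED", "start"): "RUNNING",
    ("QUEUED", "cancel"): "CANCELLED",
    ("RUNNING", "finish"): "DONE",
    ("RUNNING", "fail"): "FAILED",
}

def step(state, event):
    """Return the next state, or raise if the diagram has no such edge."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition {event!r} from {state!r}")
```

When the diagram and the transition table sit next to each other, a stale diagram is much more likely to get noticed and fixed in the same commit.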
Oh, definitely! I've had similarly positive experiences with temporal algebra - having some simple explanatory graphics of how two time spans would overlap is really nice!
Something I've always tried to drill into ambitious intermediate devs - you're writing the legacy shit of tomorrow, today! And when you're a senior, the next generation of ambitious intermediate devs will wonder aloud at wtf you were thinking when you wrote it, and it's just part of the software developer maturity cycle.
Code-bases grow through different phases along with the company - there's the "we need to ship the MVP, so just comment that out" codebase, then there's the "we're starting to understand the problem domain better" phase, followed by the "I just read a book by Martin Fowler/Uncle Bob, and I'm going to fix all the things", then the "wait, the problem domain is hairier than we thought, let's iterate on this", then perhaps, depending on company dynamics, the "a charismatic senior developer convinced enough people to use <technology X>, so we started moving towards it" followed somewhat later by "well, the senior dev left, and everyone decided that X was bollocks" moving away...
Or perhaps the entire model of the system changed. Your batch ETL pipeline delivered yesterday's data in time for start of today's business, and that was fine for a few years, but now the sales team want today's data refreshed twice a day, hang on actually, we want it updated every hour, now we want it updated within five minutes.
Code written for old paradigms always looks like crap when all you know is the new paradigm.
I'm in a constant state of this in the codebase I inherited. The previous solo front-end developer was the manager of R&D so he couldn't be fired easily, he was the only person willing and sort of able to do front-end (PHP + JS/Dojo), he was super productive (most code was written in 2012/2013), but not a very competent or self-critical developer.
Think a back-end that concatenates XML into a big global string over the span of thousands of lines of code, then passes it to a function that parses it and outputs it again as JSON.
Think functions spanning a thousand lines with triple-nested switch/case and if/else blocks
Think a front-end where JS is used to concatenate HTML, CSS and more nested JS together into a string
Said front-end will save and reload the currently active page on change of any form field, and there's dozens of form fields across dozens of dialog screens.
When I joined I was given free rein to rewrite it in the technologies I thought would suit best. It's been two years, at a stretch I'm about 20% of the way there. It's a project that needs one or two fully staffed development teams, but we have the budget for two people because our management resists faster growth or investments.
> Think a back-end that concatenates XML into a big global string over the span of thousands of lines of code, then passes it to a function that parses it and outputs it again as JSON.
> When I joined I was given free rein to rewrite it in the technologies I thought would suit best. It's been two years, at a stretch I'm about 20% of the way there.
This is the danger of rewrites (assuming this was not actually greenlit as a 10 year project)!
I'm talking out of my ass here, but it doesn't sound like you are trying to rewrite it, but rather refactor it in place. This often takes much longer than a proper rewrite in my experience.
I have a healthy habit of just assuming that if something is bad then i probably wrote it.
That way once you realize it's someone else's fault you feel slightly relieved and less prone to talk poorly about them.
It's self-deprecating, sure, but I've never been a believer in the whole "believe in yourself"/pro-self-esteem mindset; if you are mentally strong enough it really doesn't have an impact on how you operate or think. There's always someone more brilliant, and someone more stupid, than you; that poor deadline-based decision has been made by both you and the person who wrote that piece of crap code. We are all the same (barring some exceptions).
Though I will say, I hate what modern program "design" has become. Anytime someone mentions "sprints" you know that any maintenance you do on that codebase will be a fight uphill the whole way. There's no excuse other than an exec wanting something in half the time, leaving the poor sods who come after doomed to months of poor progress reports while trying to fix the garbage code they are tasked to maintain.
I love legacy software so much - all the trouble and care I put into making sure it continues to work while I fix and improve it, is like operating on a live patient. I see no point in judging; I like to dig and to understand the particular choices. Seeing how much trouble companies have with legacy code, I tried to market myself as a consultant around those kinds of problem: say, adapting a legacy system for a more reliable and pleasant development process, or things like this.
Unfortunately I found that it's hard work institutionally. Even where I found gratitude and respect from the people whose quality of life working with the software in question improved, I was still plagued by the problems of blame-assignment and career-making but futile huge rewrites orchestrated by people for whom destroying my work and denying its value was highly beneficial. I don't know; I burnt out on this last time so hard I've been taking kind of a vacation.
As much as I love sustainable software development, it’s a losing battle and probably a futile goal in most of the industry. I’m just trying to accept this and move on.
I've found a love for legacy code too - after working on a greenfield project and seeing it slowly but surely turn to rubbish, making bad old code good again is a net positive.
I'm pretty sure the guy maintaining the code I wrote in my first (well, every...) job would get irritated at the style. Man, I can't even look at that trash now. I had the same expectations for other senior devs at my company when I started my new job and thought "why would they write such convoluted stuff?". Months later, I was writing similar code because I now know better.
I was working on some markdown related code years ago, and didn't test against utf-8 characters. I put a comment in there saying as much, and "I hope this doesn't come back to bite me."
Sure enough, a few years later, I was diagnosing a bug in this software, and realized it only happened in documents with some utf-8 in them. After digging a while, I found that comment.
Ah yes, one of the first things I learned when working on my own big code base: I can no longer blame those other idiots for their stupid design decisions. Because suddenly it is all on me.
And that probably made me actually grow as a developer. Because I write good code. I write bad code. Depending on the time of day, my mental condition and external pressures. The same like everyone else.
And I also was once placed in front of a half finished but abandoned PHP project, for me to finish it. That surely was no fun.
That code was not good. And I was stressed and angry with it.
But today I would no longer direct my anger at that actual person. He also just did what he could with the given resources.
And venting anger might be therapeutic in some instances, but I am not sure it helps get stuff done. And it definitely makes for a bad social dynamic.
It's an exhausting cycle of writing code, hiring new people who say it's shit, who write their own code that next year the new batch of hires says it's shit again and needs to be rewritten. Of course all of these developers are too good to write a comment because their code is so easy to read that it's 'self documenting'
For the programmers working with the code, it definitely should be, but please let me know if you think there is a better place for explanations of code functionality other than the code itself. People like you are the reason why code becomes unmaintainable. You think some document separate from the code is a substitute for comments? It is not, your code is not as good as you think it is.
If you have an example on github of something commented the way you think is 'correct' please post it. I hope you're not confusing high level app documentation with code comments.
All I did was say comments shouldn't be the default for documentation. To which you reply with this.
"People like you are the reason why code becomes unmaintainable."
So ... what do you think, I could say of people like you and something with the internet?
In any case, I do use comments in code, btw. But way less often nowadays. Because comments have a tendency to be ignored and still remain there, despite the code for that comment having changed long ago or even been removed. That can happen with any documentation, sure - but comments are notorious for it.
No comment is way better than a wrong, misleading comment.
I say 'people like you' because this isn't the first time I've heard arguments like 'comments have a tendency to be ignored and still remain there' and 'no comment is way better than a wrong, misleading comment'.
Those two statements combined are your justification (excuse) for never writing comments, and why the code you and people like you write is unmaintainable.
Just write the damn comment. Your code isn't as good as you think it is. And no one can read your mind as to your intent when you set upon writing the code.
See, this is why comments are often so bad. They get ignored (and then forgotten) most of the time anyway. Like you did with my comment, where I explicitly stated that I do write comments. Just less than I used to.
To which you just made a strawman argument of
"Those two statements combined is your justification (excuse) for never writing comments and why the code you and people like you write is unmaintainable."
You do not know me, nor my code - yet you judge about it, claiming it unmaintainable. Well, what more is there to comment on? Probably nothing.
You have some sort of defeatist attitude with comments saying they're ignored/forgotten. I wish I knew your life experience to see where this happened and how you formed this opinion. Is it because YOU ignored them and didn't update them with code changes? Or others did so and it got past a code review? You don't maintain your comments with your code, given your attitude it sounds like you don't.
Your attitude is infectious. It causes new developers to adopt the same unfounded opinions and not write comments themselves. The cycle of unmaintainable code continues.
Erm, you realize that it was YOU who ignored the essential parts of my comment, proving my point? Apparently not.
Otherwise, sure:
"Is it because YOU ignored them and didn't update them with code changes? Or others did so and it got past a code review? "
Both happened. And I doubt this never happened to you. And if it really never happened to you - then you either are a superhuman - or never had to ship lots of code with a tight deadline.
My entire argument is that, on average, there are way too many dead comments around, and I would rather recommend using comments as the exception, where the code is not clear enough, to decrease the burden of maintaining them - and instead focus on keeping the proper higher-level documentation up to date and readable.
When I was a junior engineer, a long time ago, I wrote a change that someone approved and merged.
A few weeks later a senior engineer saw the code in passing, proceeded to rewrite all of it, and submitted a PR with a 2-page description tearing into the original code, explaining why it was terrible and unacceptable. He then posted the PR into our team's slack channel with some comment like:
"@here everyone please read this PR as an example of terrible engineering"
The code was indeed quite poor, and the lessons were valuable, and I took them to heart. I also spent the next 18 months actively avoiding requesting feedback, in fear of this happening again.
I think as a senior, this kind of behavior tends to really leave a lasting impact on starry eyed juniors.
I once tried to do regular postings of 'funny bits of code' in the slack at my company. It was mostly meant to be in jest, but also had the side aim of trying to get the team to aim higher.
The first few editions were snippets of my own code. I got a few token lols from other devs.
I then saw a senior had committed what looked like a gem... something obviously contorted but fairly short, and understandable if they were coding on autopilot:
`if day.isWeekday() && !day.isWeekend() && day.isMonday() { // this is a monday`
The day I posted that to slack was the day I learned never to criticize in public, even non-personally and even in jest.
If someone did this on my team, I'd publicly tear into them if I wasn't their manager for being not just unprofessional, but cruel.
If I was their manager, they'd be forced to apologize for how it was handled to the person with myself and HR on the call, and then I'd force them to read a productive feedback book.
I'd also expect they quit. I usually find people overly cruel at work are especially insecure and in need of counseling. Usually something else driving that behaviour. Most people aren't just assholes.
I've had this exact experience, except I didn't see Torben's name, I saw my own name. And there was no excuse about deadlines and whatnot, I was just awful.
10 years from now, I'll be saying the same about the code I write now. And that's okay.
This is also true outside of software, in functions like finance. We inherited an important spreadsheet from the contingent finance team brought in to hold things together while the company went through a structural transition. The spreadsheets were horrible - bad logic, lots of "Easter eggs" (points where people hard-coded a number in a sea of formulas where you would have expected a calculation, which is very hard to catch), and overall poor incremental design that didn't take much into account except fixing an immediate problem.
It was a pain but all the collective griping and work to improve it made us stronger as a group and also made us way better excel designers.
I think it is generally a sign of maturity when a developer recognizes the flaws with their software and desires to improve them rather than piling on new features. Cynically asking "who wrote this shit?" is probably not a very mature way of expressing that, though. In my experience, finger pointing does more harm than good. If a particular developer is consistently producing bad work that is hurting the project, that should be evident by reviews of their new work - not their legacy code. IMO once code has passed review and entered master/trunk/production it should be considered the team's code - not an individual's.
I have often said to other devs that if you write code long enough, you’ll eventually find some crusty piece of your own code that will make you want to throw up.
I always try to approach all code, even my own code as “lets improve this”.
I tell the junior developers to “Write code like you have to come back in a decade with no context”
When mentoring I also suggest the most important quality is a "thick skin and an open mind".
Your code always sucks. It's a time-vs-constraints issue, as the author mentions. As context changes, code must adapt. That's why I'm not worried about AI any time soon.
I get this. It's true, a lot of code is written and we don't know the circumstances that led to it. Those circumstances could have led to monstrosities -- I have created a good many myself. I try to empathize with those people and commits of the past.
HOWEVER, sometimes we see things that no mess of external pressure and crazy circumstance could have produced. No, these gems are born out of individual madness (maybe I've even been lucky enough to produce some myself, one can hope).
Example, no. But here's a clue: they are usually accompanied by an equally insane commit message. "Magic." "Kill me." "Why not?" or the ultimate "".
One technique that helps me keep myself sane: every commit message should describe "why", everything else is in code. I like to think it prevents a lot of future problems.
I was once challenged on who authored a game (Empire) that another person was claiming to have written. The person judging this turned to a particularly messy section and asked the other guy to explain it. He mumbled and stumbled. I could explain it, and why it was so weird. The other guy caved and apologized.
There are several insidious problems with trashing code:
1. It becomes an impulse and ends up cropping up in places where code is not just perceived as shit, but also misunderstood. And once devs aren’t taking the time to fully understand code before judging it all is lost.
2. It takes mental resources to form the useless judgment and clouds the vision with its bias once it is made. Suddenly it’s more difficult to see the nuanced angles in the code since you’ve got a useless judgement taking up mental space and resources.
3. Again it’s habit forming and becomes a barrier to thinking freshly and creatively when faced with new code.
4. It’s negative.
5. As seen here in all the comments, it is ridiculously faulty.
6. ALL old code gets to be shit with varying degrees of speed. ALL code bases get old. If you want to work on an old codebase, it comes with the territory. You can be a grumpy old man about it, you can be a grumpy old man about anything, but the net effect is just making you a grumpier shittier developer.
In a similar way I have some coworkers who I know do not code as well because they don’t have the patience to be detail-oriented or pursue optimal solutions by any measure from code efficiency to maintainability. That’s about the worst thing I can think of to say about another programmer… and here’s the thing, they still occasionally write solutions borne of their own perspectives that I can learn from, their code still needs to be maintained, they sometimes do improve etc. If I let the author factor in I’m still going to miss things even if the prejudgment is right 90% of the time.
Finally, I think it is healthy to like people and find the good in them even if they are paddling against the boat, if you can’t lift them up or fire them it’s making the best of the situation. If you can understand that they are humans with their own things going on, that other humans could look at you exactly the same way, I think it lifts us all up.
Always useful to keep in mind the concept of Chesterton's fence: https://fs.blog/chestertons-fence/ If something looks stupid or weird, you may not have all the relevant information to judge.
I usually say that when I am pretty positive it was my code that's now... shit :)
Unfortunately, I was still warned that this may portray an atmosphere of non-acceptance when done in "public" channels since readers might not be aware that this was my code and that I am making a self-deprecating remark.
Honestly, I am not sure whether to keep doing it or not. I like the relaxed and jovial atmosphere that comes out of it (it's more of a joke that all of our past code is shit, and ultimately, that what we are writing today is the shit of tomorrow), but I struggle to come to peace with the PC crowd. Am I really messing it up for someone else?
Oh, man. There's some bad code out there with my name on it. It took me years to move beyond the stage where my opinions meant more to me than those of my teammates. I am sad that I was such a boor for as long as I was.
Anymore, when I find something stupid in the codebase, I close my eyes and say what I've taken to calling the Engineer's Serenity Prayer: "It was the right decision at the time."
Yeah, nothing beats good documentation or comments - quite often you find some code that doesn't make any sense but you can see someone made quite an effort to do it in this particular way. You do your best to understand why it was done this way, but if the comments/bug tracker/wiki links/people with the context are not there - you just shrug and move on.
I sometimes call out the CFO at my company, who still works on the codebase (it's very spaghetti-like).
He is not a trained programmer, but his code runs the company and we are the leaders in our industry. He apologises for his older work, but without it I guess the company may not have existed in its current form.
The board also thinks the codebase is a potential liability, since it's all ColdFusion.
The most prolific writer of absolutely shit legacy code, in my experience, was always me. I was happy I had evolved to at least be able to recognize it as shit. Sometimes I didn't yet have a better idea!
Sometimes I knew it was shit when I committed it, too. Deadlines, frustration, tip-toes, and maybe even imposter syndrome contribute to that.
At one of my early software engineering jobs, my team was generous enough to entrust me with building a fairly complex component from start to finish. What I made worked reliably but the underlying code was spaghetti. I hope they felt no qualms about trashing it and have had an opportunity to refactor it since.
When I was the first engineer at one job my manager (who joined after I did) anticipated others saying this kind of thing about me, and told me to always remember that "the reason any of us engineers have jobs is because of your code".
It's a good reminder, and I appreciate others taking care to build others up.
I've been dealing with the legacy part of our app for a couple of days now, and I'm pissed.
But not at the code, I fully understand how code can become hairy, deadlines can be tight, etc. No. I'm pissed at my team (I'm new) who didn't have time in the past 5 years to even attempt to clean this shit up.
In the jobs I’ve had, management is responsible for not allocating time for maintenance. It’s difficult to justify allocating that time without the numbers on a spreadsheet to prove it’s needed.
I was surprised you didn't find your name, because I did and it was like 6 months ago and for the life of me I couldn't understand why I did this stupid thing I did then.
I found that usually code written around and after 4pm tends to be like that and generally try to avoid more serious stuff in late PM.
legacy software or shit software? The two are not the same. So be clear with your statement, to circle it back to your intro, you're the creator of the _shit software_ -- right? That's what this is about? See how difficult it is to admit it. You couldn't even do it and you're writing the blog post about it. The ego is strong. But, I guess you're on the path towards this acceptance, you kinda semi-admitted to it. You have much more work to do but one day you'll finally understand this distinction and it'll be good for you and all the folks that have to maintain your shit. Keep improving your critical thinking and software engineering skills. The buck ends up with you. It's your choice what you produce and your standard of excellence. And you know what, it may be the case that the sooner you become a manager the better for everyone. That may also be a tough pill to swallow but fear not, you'll be happier... and also some content for a future blog post!
I've got similar stories to others here about dunking on code/documentation that it turns out I wrote. These days I try to assume that I wrote all of the bad code and just can't remember why. Doesn't always work, but tends to take the edge off my sassiness.
Do not do this to the CTO about their own code is a lesson I learnt painfully many years ago. And today as a technical lead there's far too much wince-worthy code I come across that I was personally responsible for a few years back. Onwards and upwards.
Feel like there have been increasingly more cuss / inappropriate words showing up on the top page of HN. I recommend HN to my kids. Not that they are not exposed to these words online, but I would rather it not come from something their parent recommended. Not sure how to feel.
Edit: To add more to this, feeds train our brains to consume content we don't even need. Search, on the other hand, is inherently required when we are working on something cool.
I'd suggest that if you don't want kids to read the word "shit" you keep them off the Internet entirely for life rather than running around trying to impose your morality on everyone else.
I’m not worried about my kids seeing curse words. I am concerned about the downward pull normalizing unprofessional writing has on my profession.
Does it make our whole environment less professional? Does it influence us to be unprofessional in other parts of our craft? Do outside observers view us as less a profession because of it?
I don’t know the answers but I certainly worry about it. Though perhaps I’m just aging out of this audience.
This audience is at a particular peak of glibness and superficiality this week. It's interesting to think about what kind of place it would be if there were a couple artificial norms of decorum, like "no swearing" (I'd find that difficult to adhere to, at least at first). It might be better; sometimes rules like that can subtly remind people that there's a minimum threshold of thoughtfulness required before chiming in.
I think there is something to worry about, but it's not 'unprofessional writing' or any kind of external downward pull, I'd say a lot of the problem is how much time professionals now spend in unprofessional contexts, even during working hours (he says, typing while clock-watching...).
This is a post on someone's personal blog with some reflections on something they saw at work. I'm not sure why we should expect him to write professionally (leaving aside the question of whether swearing is acceptable in professional writing or not).
I also wouldn't necessarily expect HN to only carry professional content - although some of what's shared here might be interesting from a professional viewpoint, it's not a portal for professionals (what even is 'hacker' as a profession? What percentage of the HN community are amateur hackers rather than professional hackers?).
Ultimately, it's a discussion forum on the internet, where self-proclaimed hackers share things they think are cool, and if others think it's cool too, then it gets shown to more and more 'hackers' until not enough new people think it's cool enough, at which point it gets shown to fewer and fewer people.
Isn't there a minimum age requirement to be on HN? I mean I don't see anything obvious.
That said, we're all adults here, mostly liberal leaning, and we all grew up with the internet; a bit of potty mouth won't hurt anyone. I mean you can take the moral high ground and consider anyone using swear words as immature and move on, I guess.
Yeah, and sometimes you can lose meaning too. If I were implementing this, I would bleep out things like slurs, and words like 'shit' would be replaced with their (arbitrarily less crude) counterparts - 'crap' in this case.
There are existing software and methods to get around this problem of accidental censorship. Someone mentioned maintaining a professional vocabulary, and I am inclined to agree: The more you read people writing with crude language, the more likely you are to accidentally let it slip around your parents, or someone like a small child.
Anyway, on the subject of the actual post, comments are you and your future self's mutual friends.
There is a risk if you stay in one place (or work on the same FOSS codebase) too long, because the answer to "who wrote this shit" often turns out to be "wow, what was I thinking?"
At the end of writing a feature, I always think "I know this is shit, I know there is a better way to do this, but I did what I could with the resources I had."
I can't count the number of times I complained about the code I saw, did a git blame to see who the hell wrote that and then found my name in the commit logs.
I once inherited a complicated codebase and had an extremely friendly and downright great person walk me through it. There were some things that bothered me (of course - entry level and Dunning-Kruger). But I never uttered a word; the dev was so technically competent and such an overall personable guy that I tried my best not to be a jerk. He might have had his reasons - we all get lazy sometimes - yet the contributions we make go beyond that day and stay forever in the codebase.
That day I learned that soft skills are the most important thing when it comes to interacting in a team. Suppress your feelings and think of the other guy. See beyond the code.
To Woods’ Maxim — “Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live” — I usually add the clause “and 90% of the time that maintainer will be future you!”
TLDR: I don't think that poorly written code is as common as devs like to think it is. Context is just as important in our general perception of how well some code has been written/designed. There are several ways we can and should document our code, including descriptive variable and method names, commit messages, tests and PR comments.