In my experience it's more often the other way around.
Most projects I've seen with actually readable code and a consistent overall structure have been written (mostly) by a single coder, usually with contributions from others, of course, but not as real 'team work'. There are also messy projects by single authors, and readable code bases by teams. But in the latter case: the more the responsibilities are spread, the messier the outcome (IME at least).
I think in the end it comes down to the experience of the people involved.
And then of course there's personal taste, one person's readable code is a complete mess to another.
In any case, the post reads like the author stumbled over one messy project written by a single author and extrapolates from there to all other projects.
In fact, the biggest software atrocities I ever saw were team-based, with people having different opinions and wanting to modify the architecture every six months. And getting away with it because there was no vision.
This is where a good team lead or technical lead helps, or even Fred Brooks' "surgical team", or your example of a single developer with contributors: have one person with the vision making the difficult architectural decisions and you'll get some conceptual integrity.
What I see a lot is people with little experience who learned things one way and become unable to understand or respect working code and want to change everything purely for personal preference. Maybe this is where the bias against lone developer code comes from.
This is how I have worked most of my career. It takes checking everyone's ego, but it turns out great software. Also, the same person doesn't have to be the surgeon on every project. If the project is small enough that 1 person can lead with the vision and everyone else helps, it produces a great outcome.
I think the flaw is that someone turns up to a project and, in reviewing code to fit it to a mental model they understand, believes they're improving the readability for others who think like them.
In reality it becomes design by committee, worst of all worlds.
Slightly tangential, but I'm about to leave a project that gets the review process badly wrong - anyone can and will comment on a review, but only a select few can merge. They have a massive bottleneck of reviews, with many held up by conflicting style preferences or by comments on neighbouring code that wasn't even modified, none of which can be resolved by agreement because no one is willing to take authority over any one view. Even the simplest one-line change can't get merged within a month, which essentially means development has stalled for everyone except the few who have merge rights.
Give me a lone developer's codebase to wrestle with any day.
> They have a massive bottleneck of reviews with many held up by conflicting style preferences
Stuff like this is so frustrating. And it's not only the reviewers' fault. I don't consider myself a great coder, but a skill I do have is being able to fit my changes into the code's existing style. Consistency trumps any partial incremental improvement. Unless I have the time and desire to fix the entire code base, new code needs to fit in with the old.
For some reason, I've seen many programmers have a hard time fitting the style of the existing system. This makes every small change slightly different from the others and, over time, makes the entire system harder and harder to reason about.
I've been places where this has happened, but it's usually a fairly easy fix. Two things that are usually not that controversial: 1) Comments on unmodified code are either deleted/ignored or you create a ticket for the actual issue (and whoever made the comment is reminded this isn't helpful or appropriate in a review). 2) Nitpicky/stylistic things are handled via prettier, precommit hooks, etc. so that everything is consistent and largely doesn't need to be thought about.
And the potentially controversial one: anyone can merge into [almost] any branch, and you merge your own code - so quite often you'll get a PR approval and might wait a day or two to merge it until you decide it's ready. We let anyone merge into any branch but our production branch, however that gets merged automatically and deployed daily unless it's proactively cancelled.
> I'm about to leave a project where it gets the review process badly wrong - anyone can and will comment on a review, but only a select few can merge.
Ugh, I definitely recognize this. There is often a lack of management behind those things. My heuristic is: If I can’t do my job, I tell my manager and ask for help. If they can’t fix it, there’s no point in going on.
There is a large cohort of skilled developers that are desperate for a competent technical leader to come in the room, give them a well-defined work area and let them build widgets exactly to contract specifications. For many, it feels really good to complete these quests. Being placed in front of a blank canvas is overwhelming for most. It takes a lot of patience and experience to start from literally zero with no constraints in sight.
As a technical leader / architect, you don't have enough time to realize your vision throughout. Others have to help. This is exactly what team work looks like.
If some developer is not happy with "helping", then they are welcome to try and carry the entire double-edged sword that is owning 100% of the product architecture. Being king is fantastic when the peasants and customers are happy. It is the worst experience on earth when there is unrest.
Let's be clear - design by committee works (engineering standards bodies, web standards, et al.), but this path is usually not competitive in a typical startup environment.
The capacity for originality, the ability to write original software and document it, is more a matter of personality than intelligence. Due to cultural norms and selection bias, most people will believe the opposite, because education and institutions filter only on intelligence.
I recently learned that conscientiousness correlates negatively with intelligence at about -0.27, which means more than a quarter of the population can organize and architect better than highly intelligent people. This can explain why reliance upon frameworks and third-party tools grows proportionally with the size of the team. A single author solving original problems will be less inclined to maintain and juggle concerns beyond the scope and control of their focus area, whereas larger teams are more inclined to prioritize administrative concerns, for example configuration and dependency management, over solving for originality.
Education and institutions often filter on certain subsets of intelligence or just good memory.
A lot of exams are a memory test. A lot of courses related to software development are either out of date or don't model the current trends / best practices.
The role of education is to feed the demands of industry... whatever those are. There is still need for COBOL. There is still need for C and C++.
The problem is expectation management. A bachelor's in CS teaching OOP and SQL will not prepare you to write functional code, event logic, transmission control systems, and so forth. Not at all. For some godforsaken reason, people believe an educational credential is like checking a box on a job application otherwise already in hand, and that said education prepared them for anything.
Most people lack the skills required to perform as most software platforms are designed. Remember, it's all about institutionalization and intelligence only. That doesn't work. There simply aren't enough 145 IQ people (2-5% of the population) in the world to fill the demand required to self-train across multiple paradigms of application design architectures. The solution provided by industry is to make it more stupid (easy) so that lower-IQ people, as low as 115, can participate (about 35-40% of the population). This occurs because both education and industry only filter for intelligence, and high intelligence does not correlate with organization, self-discipline, or any other form of conscientiousness.
The reality is that if industry filtered for personality in preference to intelligence, they could choose from 25% of the population without dumbed-down (the need for easy) software platforms. That means less tech debt, better documentation, faster time to market, more valuable/flexible products, less burnout, happier employees. It would also mean they wouldn't have to pay underperforming junior developers as much as doctors, and it would make expert developers more identifiable, as determined by performance compared to peers, when there are fewer restrictive constraints (guardrails imposed by industry's artificially easier platforms).
In my own experience, dealing with software written with "forced collaboration" (meaning bringing in more than one developer just for the sake of saying it was done by a team) usually results in more difficulties in later maintenance than the other way around. Especially in micro/tiny services, where sometimes one person can do most of it quickly.
I agree with your take and had the same feeling that the author just seems to have faced one of those cases and extrapolated.
I'm guessing that lone developer projects have a higher variance as they are a direct reflection of the skill of the single developer. Team projects trend to the average.
If you have a very good developer doing a lone project, it's probably going to be good, because they have full control. But I doubt an unskilled developer will have a very good lone project. You want them in a team environment for the structure and mentorship.
Finally, it's worth mixing your developers, as their combined effort is worth more at the macro level than at the micro level.
If you have one developer, he or she will be the expert of that project. If you want a team of multiple developers, and it's infeasible to have any one of them be an expert of the entire system (let alone all of them), then the amount of planning and standardization needed increases quickly. The bigger the team is, the stricter the rules for style need to be, the better the documentation needs to be, etc.
I would expect the cleanest projects to be ones written by single developers because A) these projects will typically be simpler than the ones that require teams, by nature, and B) they will be undistracted / avoid the flaws of design by committee. Think cinema auteur vs. a Marvel or Disney film with an army of writers.
Go lang is a good example of identifying and addressing the problems that appear when your code is managed by a large team.
Fred Brooks had a term for this: Conceptual Integrity. When there's one mind working on the codebase, it's natural that it will have a higher degree of Conceptual Integrity.
A team can achieve this, but they need to have been working together a long time or have a very, very strong technical lead.
For the single developer case, I think maintainability and scalability then comes down to whether the developer follows good practices or not. Commenting, refactoring, reorganizing code, unit testing, etc. But a product or codebase with high Conceptual Integrity is generally easier to expand on because there are common patterns throughout the code, even if it's lacking in comments.
This has also been my experience but I can understand that not all lone developers follow similar conventions.
It can take a while to get used to someone else's conventions, but once you do, the one advantage of a lone developer is that the conventions can be generally consistent. I say "can be" because that has been my experience in many cases, but not all – especially for long lived projects where the conventions evolved as the developer either gained experience or was influenced by other conventions. But, even then, their evolution still has personality to it, that is easy to perceive.
PS: Thinking about it, larger teams with conventions (when they're followed) sometimes achieve a similar consistency to lone developers. So: thank you to all those lone developers who have left their code in a state that has helped me thrive, not fail :-)
Similarly, my experience is that it requires a single person on the team who cares “too much” about code quality, consistency, etc. They basically forced everyone to comply. This is not intended to be a negative.
Does not fit my experience. Best software I've seen was always single good developer. Plenty of great single-person open source software out there to prove it.
The problem is - good devs are actually very rare. A single not so good developer doesn't get any feedback, yet they have all the clarity and context in what their code is doing, so they can take their terrible code quite far.
As a sole dev you are also pushed towards simplicity, because your time and scope are so limited.
So a larger team might build some intricate DDD-microservices architecture with a services bus and a complex SPA frontend because, why not? That's what everyone else does.
As a single developer, managing multiple microservices or separate backend/frontend codebases is a lot of overhead (unless it's a learning project). You have to do the simplest thing that will work. So if you can get away with server-side rendering, do that. A monolith makes more sense when you don't have multiple pizza teams, and so on.
Or in other words: Conway's Law makes single developer code bases less complex, since there's no communication and responsibility boundaries reflected in it.
That said, I've seen a lot of small teams create a ludicrous amount of microservices. One per independently working team is my usual heuristic.
There is a problem in the tech industry where people's pattern matching isn't terribly good. They will have worked for a big company, or have read literature produced by people working for big companies, and concluded that "microservices" (to pick one pattern) are the best practice, because it worked for $BIGCORP, or they expect to be $BIGCORP 2.0 at some point, so why not be ready for that?
But, as you point out, microservices are the result of Conway's Law and shipping your org chart, not necessarily the best way to build software all other things being equal. If your org chart fits comfortably inside a broom closet, then maybe you're not quite ready yet for microservices.
In Apple development, there's actually a couple of programming methodologies that were specifically designed to make the code more complex, so it can be farmed out to multiple devs.
In my experience, it's a real good idea to stick to writing basic MVC, when working alone, on a UIKit codebase. If I was working with SwiftUI, I would probably consider MVVM, but I don't really like to take it much past that.
My experience too - if I am the guy for that piece of software (supporting testing, fixing prod issues etc.) then I have all the motivation in the world to make my work as simple and effective as possible. No place for fluff just because it's in vogue and the 'big beards' of the industry talk about it.
The measure of success of software is pretty straightforward in a normal situation - users are happy using it, and they don't care one bit about technical details. That, and the efficiency via simplicity above, is how you do this for a decade, instead of building 1-man cathedrals with practically inevitable results.
I think the only scalability dead end you'll end up in is people-related. That is, you won't be able to scale further because it would require using patterns/solutions that balloon the complexity so much you now need a team to keep track of it, and/or it would require more money than you can manage handling on your own (e.g. you would need to turn your project into an actual business in order to fund it further - at which point you need to start hiring people, to either do the businessy bits for you, or the technical bits, whichever your preference).
Obviously experience will vary from person to person.
But your point is well made. The quality of the code depends a lot on the quality of the programmer. Better programmers write better code.
Of course very few of us are great programmers on day 1. We learn and get better. I'm spending a reasonable amount of my time now re-writing code I built 25 years ago.
Equally, we grow better, and learn faster, when we get feedback. All too often the lone programmer is not reading code written by others, and is not getting feedback on his own code. So growth is slowed, or in some cases stopped for decades. Bad habits from 20 years ago still exist because there's no-one to rail against them.
So yes, lone programmers can be great, especially if they are outstanding to begin with. But the vast majority are mediocre and need the assistance of peers, and seniors to grow.
On the other hand those that grow up under seniors with bad habits, who _enforce_ those habits, are screwed.
Whenever I'm doing bug fixes, I dive deeper into "how someone else would approach this".
It is actually quite easy to keep improving on your own. I read books on refactoring, clean architecture, etc. as part of my daily routine. The time not spent debating with another dev is spent learning.
I guess as a lone dev it is easier to do no self-improvement and keep apps chugging along until something terrible happens.
One way to put it is that single developer code will have more variance, when compared to code developed by a group. This variance has a range of effects, and will sometimes be positive, sometimes negative. Certain idiosyncrasies are unlikely to survive work in a group, which can be good for the group. But in the case of a single talented unicorn who is inclined to write well-architected code, a group will inhibit that, as each individual pulls in different directions.
This is key. Good projects are based on good ideas. Good ideas take a lot of time to develop, and often involve back and forth, iterations, friction and failure.
If you have a good data model and technical architecture, the code almost becomes good on its own, even if it’s implemented by more people who understand the model. The problem is that it’s irrational to spend that amount of time, and it’s directly opposed to incrementalist mainstream paradigms of software development. It’s a process that almost has to happen outside of companies, because management would never let projects be executed in such a way. But when there are personal drivers (either by the creative aesthetic types or sometimes the hacker tinkerer types) you can sometimes, depending on the domain of the problem, get some really coherent systems that make people go “this makes sense, why would it be done any other way?”.
To me, the story of git has many of those traits, especially when comparing to what existed before.
I have a different take on this. If my project somehow becomes wildly successful and is acquired, the next team is not going to want to maintain my code anyway. How do I know that? Because no one wants to maintain someone else's code. No one wants to deal with someone else's abstractions and bugs or anything else.
100% of the time if it's feasible, the next version will be a rewrite of all or a significant portion. Especially for relatively small projects in terms of lines of code, which all of mine are.
Now the fact that it's plain JavaScript and messy, console.logs all over the place, I would argue, actually helps that new team. Because it makes it easy for them to trash my code. Which they definitely will want to do anyway. Why? Because they don't want a back end in JavaScript. Actually they didn't want it in TypeScript either. They wanted it in Python.
What if it was in Python? Well chances are they wanted it to use a different Python framework. Or actually Rust.
But what if the front end was very very well factored React. AND the front end team loves React. Well sorry but it uses React Hooks which they hate. And they really switched to Svelte months ago actually.
I would argue the part that matters the most is just not having a lot of code, which you accomplish by not reinventing the wheel. Another big part is having relatively small functions and low cyclomatic complexity, and descriptive but not overly long names.
So in my mind, although my code is a mess on the surface and easy to trash, and could be organized better, it will be easier than average to rewrite the way they want. And easy to trash means less likely to be stuck maintaining someone else's code for an extended period of time.
What I care about is getting a usable product out that has features that provide value before I run out of money. I could do half the features and MAYBE, if I am lucky, they won't trash my code... but they will actually be worse off, because then they will be stuck maintaining a code base that isn't using the frameworks they preferred anyway.
In 23 years of software development I've picked up or handed over, both solely or as part of teams (mainly the latter), dozens of projects across seven companies. Some of these were projects bought as part of acquisitions.
In only three cases was a rewrite performed, and in only two of those cases was it justifiable (both of which involved creating a new version of the product with substantially better capabilities, and on a completely new technical foundation without which those capabilities couldn't have been enabled, nor the commercial value they unlocked).
None of the acquired projects were rewritten even though they'd mostly been built by single developers.
The idea that developers want to rebuild everything from scratch is a pretty tired cliche. Maybe it's true in some contexts but it's certainly far from a universal law.
Something I’ve repeatedly observed in my career: the rewrite is started, but it never gets finished, and then the rewrite and the original coexist long-term
“Never gets finished” takes different forms. Two I’ve observed: Form (1): half the product gets rewritten in the new language/framework/stack, but the other half stays in the original - for whatever reason, rewriting the other half never happens. I once worked on a product where the UI was a mix of Java Swing, Struts, and Angular. The Angular UI opened the Struts screens in new browser windows. I wanted to switch to IFRAMEs to make it more seamless, but that never happened. For the Swing UI, you had to download and install it - I daydreamed about embedding it in the web UI using a JavaScript-based VNC client (e.g. noVNC) but that never got beyond daydreaming. I suppose it would have made more sense to just rewrite Swing+Struts in Angular but nobody was volunteering for that painful grunt work.
Form (2): new customers go on the rewrite but existing ones stay on the original. Sometimes the easy customers get migrated (small scale implementations with few customisations), but the massive customers with heaps of customisations don’t. “If it ain’t broke don’t fix it”, and people are scared that the migration will go bad, the customer will get upset and you’ll lose them. Eventually they’ll churn, or embark on some radical change in direction that requires reimplementing it anyway, or maybe even M&A with another customer and want to merge their instances.
Sometimes this is called the “lava layer anti-pattern”
Another common issue is the product team just going crazy with new features on the rewrite. Which makes sense from a business perspective but turns it into a Frankenstein-rewrite and dramatically increases the chances of "never getting finished".
Ah, yeah, I've had something similar to that situation as well: I worked a place where you could see the different layers of technology, like different strata in rocks (or layers of lava, to use your metaphor).
You could see the SQL Server 7 era database with table names taken directly from AS/400 mainframe fixed field width text files. Then you could see the mid-to-late noughties era SQL Server 2005 tables, as part of the same actual database, with their more human readable table and column names, but it was effectively a copy of the old database. All the business logic was triggers and sprocs.
And then the latest iteration, which I was working on, had been broken into "micro"-services, with our service using a Couchbase cluster (which, by the way, was a bag of spanners, and which I will never use again if I can possibly help it) to store yet another partial copy of the same data, and then all the business logic at the "application layer" in the closest thing I've ever seen to an actual seven layer architecture. They were pretty keen on enforcing those layers so what you had was just a ton of boilerplate function calls. The cognitive load of understanding what was going on was completely unnecessary for a system that basically just stored names, email addresses, addresses and phone numbers. Actually it didn't store addresses: it stored references to addresses that were stored by a different micro-service. These kinds of decisions meant that most function calls that actually did anything were also RPCs. You can imagine how this all performed.
Latterly I discovered that the blasted AS/400 mainframe system also still existed when I had to work with some files it was spitting out, and had to write an EBCDIC to UTF-8 converter (not actually difficult at all) to do so with 100% reliability.
The main reason for all of this was dependencies: to retire any given iteration of the system you'd have to migrate all its dependencies (of which there were many) to use the new variant. This sometimes meant we still had to extend the capabilities of the old versions of the systems, even though almost nobody understood them. So yours truly ended up having to write a stored procedure that updated a bunch of both the SQL 2005 and SQL 7 era tables to get a particular business process to work, and this sproc was called by our new microservice.
Even with the number of people they had, migrating dependencies was a huge undertaking that they were only just starting to look at when I decided to tap out. An interesting project and a valuable experience for someone, certainly, but something I realised I didn't have the patience or fortitude for. They weren't idiots though: they knew they had problems, and that they were being choked by complexity, but there's no easy route to getting out of that kind of situation.
Fully agree. It's always amazing how some people can make up plausible-sounding reasons and then, based on no data, argue it must be like that.
OK, maybe that's unfair and it is based on some experience, but mine is that code sticks around no matter how bad it is.
Most people and most of the industry have grown up and fear a total rewrite unless it really, really hurts, and this is sometimes taken to absurd levels where every dev knows it should be done, but management is like: but it works, just add me that tiny (lol) feature.
On the other hand, a single dev's code is usually quite good, otherwise they wouldn't have gotten that far... and it usually also has some consistency in itself. I'd say 70% good, 10% too-clever guy with code that no one understands but is amazing to look at, 10% too-inexperienced student but still good enough to gradually improve on, and 10% bug-ridden shit.
Who cares about frameworks that much nowadays, except the to-be-feared teams described below (and unless you have to integrate and there is a conflict)?
But what has to be feared is the code base of a mediocre team with a lacking vision and a lacking lead; that's where, for me, the horrible code bases that need an immediate full rewrite have always stemmed from (:
It’s a function of experience. The more experienced you are, the less you want to rewrite. Sometimes you start in an experienced team and this comes for free.
There probably also is a bit of a difference depending on the company's business context. A rewrite in my company needs to be argued for and needs a solid business case, i.e. you need to show that the burden of the current code base is choking the dev team and that a rewrite is a better alternative to plain refactoring.
Essentially most rewrites I am aware of were either of really small tools or when a project had been written in an esoteric programming language.
That is not my experience at all. People are reluctant to rewrite it all due to the sheer amount of work, and people who take over others' software keep maintaining it. The rewrites I have seen happened after years of talking about rewrites, or after the original code was utterly unmaintainable.
> Now the fact that it's plain JavaScript and messy, console.logs all over the place, I would argue, actually helps that new team. Because it makes it easy for them to trash my code.
That being said, your code is going to be rewritten, but you are not making it easier. No one needs you to write crap to be able to trash your code and people do not need to trash your code to rewrite it.
But messy code is much harder to rewrite, while clean code is much easier to rewrite. Especially if your goal is a different technology, having the original clean means that you spend a lot less time puzzling over what it does.
Exactly. All I care about is getting things done quickly and moving on. At university I was taught about "maintainable code", but in real life, I have never seen such code.
I have inherited code written by teams and it was not good quality. I just rewrote each section when I needed to change something. This seems like a much better approach than trying to write maintainable code - write code that can be thrown out and rewritten.
I like writing legible code for its own elegance and for my own sanity, but given that you found your way to be productive, I admire your radically honest (and accurate) understanding of what happens to received code no matter what!
This might be unusual but I would prefer to read someone's mental model of how the code works than the code itself. With thousands of files and thousands of lines of code it's difficult for me to work out how the code fits together.
If you document your mental model of how the code works, I can probably map the code to your mental model and understand the code.
I want/like people to create "entrypoint" packages/folders where entry points and component registries are found, especially int main() {}, so I can follow the control flow of the software.
I think too few people understand cornerstone technologies, including compilers and browsers.
"Show me your flowchart and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won't usually need your flowchart; it'll be obvious." -- Fred Brooks, The Mythical Man Month (1975)
It's always mystifying to me how often programmers try to show me a new concept in their code starting with some complex functions. Start with your data structures. What data are you storing, why, and when? If you show me that, the code that actually moves that data around is usually pretty obvious. (And if not, your code will be 10x easier to understand once I understand the context).
> If you document your mental model of how the code works, I can probably map the code to your mental model and understand the code.
I agree.
This is something that I tried to do with my best project. [1] Only time will tell if I succeeded because it's really hard to document a mental model; it's like trying to explain to yourself what water is when you're a fish. [2]
This is great! I don't know if I'll have a project that I expect to last long enough to worry about bus factor, most of the work I do is either throw away stuff for myself or already part of a large shared team effort. But I love the idea. It's long but I skimmed it quickly and I think it's got a very good level of detail and is overall very readable with just the right amount of snark. :)
I wrote it in snatches of time nearly always late at night, so the snark just...happened. I was worried how it would be taken, so thank you for the encouragement, including about the level of detail.
Level of detail is something I think a lot about lately. Interviews go off the rails with candidates who answer in too much detail or not enough. Meetings go off the rails with non-actionable proclamations or tedious line-by-line details.
Machine generated / doxygen / javadoc documentation is the worst. I'd rather read the code, but code can also be difficult to read even if it's functional, especially when it's optimal. So having a guide to the code written by someone who knows how it is all laid out is pretty much the best case scenario.
That's why I hate Spring and similar technologies. They autostart things everywhere. Even from library jars. So terrible.
If I were writing a Java program for myself, I wouldn't use Spring. I would use dependency injection, but I would construct all objects manually, calling their constructors and setters. I've done that; it works, and it's maintainable. Some repetitive code, but it's so much easier to reason about compared to reflection monsters or autogeneration nightmares.
> I would prefer to read someone's mental model of how the code works than the code itself.
I find this is often true for me. If there's a bunch of code that I'm having a particularly hard time understanding, it's usually because I don't actually have a working mental model of what the thing is supposed to do.
Problem is, with complex systems often the way to record the mental model is something like a design document, and getting programmers (including myself) to write thoughtful natural language to accompany our code is a constant uphill battle against business pressures to ship more.
> Problem is, with complex systems often the way to record the mental model is something like a design document, and getting programmers (including myself) to write thoughtful natural language to accompany our code is a constant uphill battle against business pressures to ship more.
In my experience, the more common issue isn't that an initial design doesn't exist, but that it was never touched again after being approved before the implementation, so issues that came about when actually trying to implement it and the corresponding changes to the design aren't captured anywhere, let alone changes made during the ongoing maintenance after it's actually put to use. The only thing more frustrating than a lack of intent written down somewhere is an intent written down that doesn't actually match what's going on, and then having to try to track down if the intent _ever_ matched the actual design, and if so when (and if you're lucky, why) that changed.
I think the success of notebooks can provide a positive influence here.
They give text and readability a higher priority than traditional source code artifacts, and it really might only take a fresh IDE/plugin and comment meta-syntax to make that more normal in production code as well.
Agreed. It seems to me that doctests (i.e. code examples in docs that also count as tests, common in Python and Rust codebases) can possibly cover parts of this.
Agreed, that sort of documentation is pure gold when done well.
It's something I always try to pay forward by doing in my own code. For example, one of my own lone-developer projects was an STB-style single-header <canvas>-like rasterizer library for C++. I started the implementation half of the library with a short overview of the rendering pipeline's dataflow and the top-level functions responsible for each stage.
I’m sure there is a real name for this, but I tend to call it a “Document of Intent”. It isn’t meticulously trying to explain every nuance, it’s a coarse treatment of the purpose of things and what they’re meant to do.
The bigger problem is when the code works (ish) but it’s clear that the lone developer didn’t even have a mental model of how the system works.
It’s usually very easy to see in a codebase when someone was flailing, and they wouldn’t be able to explain how or why this line of code works any better than you could. They just have strong conviction that this line of code needs to be there, because it was there at the stage in their flailing that things started working.
And to be clear, this can probably happen just as easily with a codebase with multiple developers.
> This might be unusual but I would prefer to read someone's mental model of how the code works than the code itself.
The hypothesis in Peter Naur's (the N in BNF) "Programming as Theory Building" paper is that transmitting the mental model (by talking) is far more important than any other format of documentation for understanding a system.
I do this for all code I add devs to or hand off to. For instance, startup code I walk away from and hand over to contractors. A markdown doc with some high-level concepts + a hierarchical bulleted list explaining the module structure goes a long way.
it's amazing how uncommon this is in the big business world. Individuals have pride, cogs do not. I've been in both camps. Currently I'm a cog because it's better for my life at the moment. I'm frustrated by the lack of simple documentation - but at the end of the day I'm not really interested in fixing it.
I started doing that with my own project after I reverted a (correct) change a few weeks later because I forgot to write down the original reasoning anywhere. No other contributors so far, so I can't say how much it helps on that front, but it has helped me a lot and I feel safer making big changes without fearing I'll break anything.
This article makes an explicit assumption: that code that is easy for you to read at a glance is inherently "good". The article then makes weird quasi-moral judgements like, maybe if you are a single person writing difficult-to-read code then "maybe you just don't need to write good code". Code that is optimized to be easy to read often has lots of duplication and very little abstraction.
This is even a well-understood tradeoff, and is why people working as cogs in massive organizations are often expected to code in languages like Java, Go, or C, which all purposefully offer very little in the way of abstraction, so the code is somehow more "obvious" and easier to read by whomever comes into the codebase next and needs to rapidly make some kind of change without breaking stuff.
The problem is... this is actually really shit code. The more duplication you have and the less abstraction you have, the more places there are for bugs to creep in. Yes: having the weird unified mechanism that no one is used to for handling memory management or authentication might seem a bit annoying when you sit down with someone else's code, as they are effectively using a bespoke framework.
But, let's reframe that: they are using a "framework"! And it probably has some kind of internal logic, and was likely designed for a reason to solve a real problem! If you take a few moments to orient yourself in the codebase, rather than assuming "I don't understand this so I guess it is bad code" you might come to understand it and then appreciate the work that went into the abstraction.
The alternative to this is frankly code that I enjoy, as I'm a security engineer looking for bugs: I'm going to look for things that are duplicated for bugs that were fixed in one place but not the other or for correct--yet somehow slightly different--behaviors which lead to parser differential vulnerabilities; even just boilerplate: the chance you got it right everywhere is about 0.
>Code that is optimized to be easy to read often has lots of duplication and very little abstraction.
Absolutely wrong and people need to stop pushing this narrative or put their money where their mouth is and use assembly. The people repeatedly saying this have probably experienced bad abstractions and gross OOP spaghetti code and improperly generalized this to "DRY bad, abstractions bad".
It’s much easier to sit down and start typing. And it’s somewhat easy to read that code and know what it does, line by line.
Abstractions inherently push away the details that enable this kind of lower level understanding. They have baked in assumptions that may or may not be explicitly documented.
Good abstractions provide leverage, but they are mini languages. They have their own vocabulary, their own execution model, extension mechanism and so on.
Good abstractions are clear and well factored pieces, but that doesn’t mean they are or should be easy to read without making an effort to understand their meaning. That’s not necessarily what abstractions are about.
That’s why tutorials and guides are so important. People need examples to ease into a new vocabulary and mental model.
Assembly (and JVM bytecode, WASM etc.) is very easy to read and understand. You learn these languages in what, an hour and a half?
But their vocabulary speaks about things you don’t necessarily care about when writing a web app or an ETL pipeline. People use abstractions to express something in a particular mental model or domain. Only in specific cases that means it’s optimized for ease.
It's not even well understood what abstractions are, in the first place. It usually ends badly when one starts with the premise of "I must abstract this, I must generalize this" etc. Most "abstractions" are terribly leaky.
I find it much more helpful to approach it like "This is what needs to happen, now how do I split this into different parts in a way that minimizes the interactions?".
I like how A Philosophy of Software Design discusses this issue you described:
> An abstraction that omits important details is a false abstraction: it might appear simple, but in reality it isn’t. The key to designing abstractions is to understand what is important, and to look for designs that minimize the amount of information that is important.
> Code that is optimized to be easy to read often has lots of duplication and very little abstraction.
I would disagree. I don't think you understand abstraction, which is what Object Oriented Programming addresses. A class written once, deployed many times, which can include things like screen control resizing rules, or classes which handle reading and writing to disk, or other things like that.
A class is a classic form of abstraction and thus can easily be read.
What I have not seen mentioned once in these discussions, is different coding styles.
I've seen UI design documentation, but I've never seen coding style frameworks apart from Hungarian Notation in Windows [1]: things like variable name formats, or whether the code should be in classes, routines, or procedures.
Having worked on a number of different projects, I've worked on source written by single/sole dev's and I've worked on source written by multiple dev's from all around the world.
They all have their different styles of programming, they all have their quirks, their eccentricities and the current batch of programming tools including visual studio enables these differences which make debugging or securing code more difficult.
I don't think many managers or bosses understand programming either, which is why coding style is not something that's been mentioned. Even teams using tools like GitHub, where there is discussion taking place around the code, will force a group style onto the code.
Being the only person able to program in a small team, I know I am the target of the criticism in this post.
I think it is mostly true, I wish there were other people looking at my code, but for years I've been working on projects alone.
What I came to realize is that there are some kinds of software where quality is paramount, mainly the code that will carry on for years, and require constant changes.
But there are some software that only exist to fix a particular issue that doesn't require change over time. The typical Perl script that lives unchanged for years doing a single task. For this particular case, quality matters less than portability and relying on solid dependencies that do not require maintenance.
In the end, management does not care about code quality, as long as money is flowing.
For lonely programmers, identifying the code that should be high quality (the one you will be looking at continuously) and the one is only there to fix a single problem is really important.
> For lonely programmers, identifying the code that should be high quality (the one you will be looking at continuously) and the one is only there to fix a single problem is really important.
I feel your pain. I am the main coder, in a small team. There's one server guy (part time), and one semi-technical graphic designer (also part time), a couple of admin/marketing folks (also part time), and me (full, full time).
My software is really good, and I don't need anyone else to tell me it is or isn't.
That said, I am also quite aware that I have to make compromises, and that there are things that I'm not good at.
For example, I'm a highly advanced Swift programmer. I write Swift, every day (like, seven days a week, 52.14 weeks a year). I'm good with UIKit, and reasonably good with the other frameworks.
Server-side, I've been writing PHP for over 20 years, and never really got full mastery of it. I can write some very performant and robust server infrastructure (a lot of it lasts a long time, too), but I don't pretend that some kid couldn't code me into a corner.
But they might have trouble doing that, with Swift.
Doesn't stop them from being real judgmental, though. I suffered from that, myself.
Here's an example of stuff I do, now, that I would have sneered at, just a few years ago[0]:
let distance: CLLocationDistance = CLLocationDistance(rawMeetingObject["distance"] as? Double ?? Double.greatestFiniteMagnitude)
That uses a nil coalescing operator (??). In some cases, I can go through several of them, on one line (cascading nil coalescing operators).
That says that I need to get something from the "distance" key of a parsed JSON Dictionary, and, if it is not available (either as a key, or as a value that cannot be coerced into a Double), then I should return the maximum Double value (so it will blow up the distance sorters).
Here's a slightly more involved example (from a proprietary app):
If there's no default value in either the runtime, or the app defaults, then I use the hardcoded default. The reset is always false, after the first run.
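Since that example is from a proprietary app, here's a minimal sketch of the same cascading pattern, with made-up key names and a made-up hardcoded default (not the original code):

    import Foundation

    // Hypothetical keys and default value, purely to illustrate the pattern.
    let runtimeOverrides: [String: Any] = [:]       // e.g. values injected at launch
    let hardcodedDefaultTimeout: Double = 30

    // Cascading nil coalescing: runtime value, else the stored app default,
    // else the hardcoded default.
    let timeout = (runtimeOverrides["timeout"] as? Double)
        ?? (UserDefaults.standard.object(forKey: "timeout") as? Double)
        ?? hardcodedDefaultTimeout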
That's actually fairly typical advanced Swift. I used to rail against it, but now, I do it all the time. I also tend to use trailing closures a lot (that's when you write the closure argument after the closing parenthesis of the call, as a trailing block). That's also something I used to rail against.
If someone wants to maintain my code, then they need to have some decent chops. Usually, that's me. I will often revisit code that I wrote, six months ago (or more), and have learned how to read it. I write code that I want to see again; not some junior programmer.
Usage of the nil coalescing operator is not advanced. You've simply used it in a way that makes it less comprehensible to anyone not already familiar with the operator. Any actual Swift developer should be able to understand what's happening. But frankly, this isn't good code, and not just because you're using coalescing like that.
1. You should never have to use a raw object like that. Use a Decodable type and get a real Double from the parser. Among other things, you have no error handling here to tell you when that as? Double cast fails. First thing you do in Swift is embrace the type system (see the sketch after this list).
2. Fallbacks like that probably shouldn't be done inline. Instead, there's usually a facade on top of things like UserDefaults to properly name and handle different properties consistently. Using rawValue that much should be an obvious smell.
3. Use of untyped Any is almost always a bad idea in Swift. Create a proper type that encapsulates exactly what you should see and you'll likely write less code with fewer bugs.
4. A CLLocationDistance created from .greatestFiniteMagnitude is obviously an invalid value, so I assume you're checking it further down the line to see that. Better to just produce an error or a nil value when you don't have an actual distance than to pollute your system with invalid data.
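To make points 1 and 4 concrete, here's a rough sketch of what I mean (the type and field names are made up):

    import CoreLocation
    import Foundation

    // Hypothetical payload type; the fields are illustrative only.
    struct Meeting: Decodable {
        let name: String
        let distance: CLLocationDistance?   // nil when the server omits it
    }

    // Decoding failures surface as thrown errors instead of silent fallbacks.
    func parseMeetings(from data: Data) throws -> [Meeting] {
        try JSONDecoder().decode([Meeting].self, from: data)
    }

    // Sorting treats "no distance" explicitly rather than relying on a sentinel value.
    func sortedByDistance(_ meetings: [Meeting]) -> [Meeting] {
        meetings.sorted { lhs, rhs in
            switch (lhs.distance, rhs.distance) {
            case let (l?, r?): return l < r   // both known: nearest first
            case (nil, _?):    return false   // unknown distances sort last
            case (_?, nil):    return true
            case (nil, nil):   return false
            }
        }
    }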
Cool. If I'm doing it wrong, I wouldn't mind seeing it done right. I'm always willing to learn new stuff, and one thing that I've learned about Swift, is that there's always more/better ways to do stuff.
The obvious issue with not using rawValue is that we don't use an enum, and that's what I usually see. I like to use enums, where others use static lets, because I can do things with enums, like limit the number of values (good for switches), and repurpose them, from time to time.
Also, since an enum is a type, I can add things like functions to do cool stuff, like extract and interpret the state, or do some good debug stringing. I could also use a struct, but why, if the enum will do it?
And, of course, since it's a type, I can extend it for special applications. I do that, a lot.
One of my "desmeller" exercises, is to go through my code, looking for "magic number constants," and see if I can replace them with enums.
The reason for the Any is that the code extends another SPM module that I wrote, which forms an infrastructure for preferences, and this is just the bit inside the implementation.
No, the highest Double isn't an invalid value (that's actually the point). It's just one that won't apply to the filtered set that I produce, later on. That means I don't have to put in special branches (keeps the CC low), to look for invalid values.
I could also clamp it, but that can have unfortunate side effects, and I can add some assertions in the else clause.
Sometimes, there may be reasons for people doing stuff, other than they suck. I don't suck. I'm not being snooty, but I do this a lot, and have a lot of pretty good stuff, out there.
Maybe, and I know this is just crazy talk, I might actually be a halfway decent programmer.
As a lone developer, there is some truth to this. But a good lone developer is still consistent, more consistent than a team. The problem with the lone developer is they often get their efficiency by being "weird". However, I still code review my own code and catch issues with it. To be blunt, most people (developers included) don't actually learn new things easily, and when they see a "weird" code base their learning mind shuts down. They aren't used to efficient patterns; they are used to developers writing dumb CRUD code that is easy to "understand" but in truth does almost nothing of value.
I don't think this is really a "lone developer" problem. If you jump into any large-scale application codebase you'll be lost and confused, even if it was developed by hundreds of developers.
Maybe the reason people perceive this as a "lone developer" problem is lone developers usually share their code for free (or for equity), so anyone trying to get up to speed on it won't be paid for their efforts (at least not right away).
So I've noticed the same underlying phenomenon here: looking at code I wrote 1 year ago and being kind of disgusted. What's interesting is it seems the author came to a completely different conclusion from me.
For me I didn't think "oh, so 1 man team === bad code" - I instead thought "ok so I have no excuse. I have to focus on keeping the code non-confusing and writing comments even when alone". I've found that if I put in the effort, I can write code that's less disgusting 1 year later than if I hadn't cared.
So if you ask me, I don't think there's that much of a correlation between number of people on a team and code "quality". I don't even know what code quality is, honestly, because it's not all methods being <= 10 lines like some linters might have you believe.
For me personally, I have the best time reading code with comments. The self-documenting code thing just hasn't panned out for me. And of course I don't mean "var x = 1 // set x to 1" style comments - I mean as others posted the kinds of comments that just explain the code author's mental models or reasons for doing things.
> I have to focus on keeping the code non-confusing and writing comments even when alone
This is the key. When I was younger, it was drilled into my head "be kind to your future self". My future self is either maintaining the codebase, or ensuring that someone else is - and the best way to be kind to myself is to make both of these things easy.
I comment my code, even when it's not a difficult segment. I try to capture my thinking process - in the most organizationally friendly way. It makes a world of difference even to me, should I edit my own code a year later. I try to instill this in everyone I mentor.
The article kind of touches on this when it says scrambling for the first 10 users.
For me the big realisation was that my company was more likely to fail because of lack of product market fit or demand, rather than unmaintainable code. Once I had that realisation code quality went out the window.
Just get something half working as fast as possible, then put it in front of customers. Probably the customers don't want what you've built, so throw it away and build something else. Repeat until a customer gives you money. When you're getting a reasonable MRR then start thinking about long term maintenance.
When you're solo developing, the laws of physics are different when it comes to code quality and refactoring. The scaling of the cost of technical debt is different depending on the size of the team.
In a team, large refactoring efforts are slow and difficult to do incrementally. The best way would be for one guy to lock the mutex and just go, come out however much time later with something cleaner.
If you have 10 people waiting for a week-long refactoring of some central aspect of the code, that's 1 week put in and 9 weeks of work put on hold. It's so inconceivably expensive it's usually not even an option. So instead the code must stay squeaky clean at all times.
On the other hand, if you're just one guy and nobody is waiting for you, that would-be impossibly expensive big-bang refactoring task that would have cost the team 10 weeks is just 1 week of work like any other.
If the primary outcome is not the code itself, but the software it compiles into, I think you're an absolute fool if you don't lean into this as a solo dev. You can be so much faster than a team dev. You can code dirty, cut all manner of corners and fix it later if and when it becomes an obstacle exactly because large code base changes are on a massive discount.
Although, as the article notes, it's difficult when the team of 1 grows to more, or the project changes hands, as the solo dev may have known all the weird jank and how to fix it, but the new developers won't. That said, refactoring is cheap, so as long as no buses have been involved, a hand-over is very doable given some preparation.
Yeah maybe "must" should be "ought to if you want to stay productive".
You can absolutely accumulate technical debt as a team; I think it's common and, if it's just superficial, fairly fixable. But I also think there's a tipping point where it just spirals out of control and more developer time is spent putting out fires than fixing root causes.
Maybe someone will contradict me, but I don't know of any examples where a large multi-developer code base has had systemic technical debt where this has been "fixed".
I for one won't contradict you. I believe it's a rite of passage for a senior developer to realise this, among other things.
I remember trying to push for refactoring back when I was just starting my career and being frustrated with the pushback. Little did I know the cost of doing that was even larger than interest on the tech debt we were dealing with, which in hindsight was still manageable at the time.
By the time I started participating in 20+ person monstrosities I was already aware of this and figured that once tech debt in such projects starts accumulating, there's no going back and my task is to do my best, but start sending out CVs.
In my experience coding clearly in order to keep code alive in the long term as various contributors come to it is a skill all its own. Developers who integrate writing documentation and tests into their coding write much clearer code whether they are integrated with a team or working on their own.
The particular related point that strikes me is that current hiring processes are dramatically against this. Writing clear code with documentation and tests takes extra time and consideration and often goes slowly. But what gets people hired is quick leetcoding. Reverse the linked list quickly, or write a sort routine that will go fast, and be as quick as you can! Does that imply good coding skill? Maybe. Does it imply the skill to write clear code and the patience and communication to produce documentation and tests along with the code? Absolutely not. Leetcoding is the opposite of clear coding for long term utility and the ability to readily share the code. So organizations want mature and responsible coders but hire twitchy hyperactive kids instead, and then wonder that the results are not as expected.
When I was a teen a couple decades ago, I taught myself Java from a book I bought at a library. I forget the name of the book and the author, but the last chapter included advice on how to program in a team. And the advice went like this: never give negative feedback unless there is a very important reason to do so. Appreciating others' styles and choices reinforces a positive culture and makes everyone in the team happy. This is more important than what background color has been chosen for a UI, or what design pattern has been used for some code. I have followed this rule all through my life. This might make me a non-challenger, but I know from experience that feeling good about the people you work with is what produces the best results.
*I actually am a very annoying reviewer, but it always is due to code tidiness (comments, indentation, simplicity, etc), rather than my personal view on what pattern or design needs to be implemented
I'm honestly shocked how some companies I've seen rely on single sourcing
They'll have one (arguably very smart) developer make a component that is crucial to multiple projects. Documentation? Non-existent. Coding style? Unreadable mess.
They just hope that this one developer is never sick, never takes holidays and hopefully never retires.
> Relatedly, it's easier to have bad ideas if you don't have to explain them.
This resonates a lot.
So many ideas seem to make sense at first (you're having the ideas in the first place, after all), but start crumbling even with simple rubber ducking. Taking some distance to try to explain it to a third party, even if they don't exist, is underrated.
Funnily enough, writing comments and documentation doesn't trigger that switch. There might be some hidden assumption that the reader is supposed to "get on board" and come share the same mental model.
For example the project "dear imgui" has been created almost by a single person, Omar, and I find the code amazing.
I have learned a lot from individuals like Norvig, Knuth, Carmack, Jonathan Blow or Casey Muratori or Rich Hickey or Paul Graham. Just reading the code they write.
Pair programming? Seriously? Pair programming only works for me if the person you program with is already an expert. A mediocre one is going to make your program mediocre. Two dull knives don't make for a sharpened one.
Some create job security through ivory tower structures. A surreal masterpiece conceived while failing to cure their schizophrenia by licking hallucinogenic frogs while on LSD.
Others in small firms sometimes notice a "team" is not beneficial... especially if some have seniority, rotten intent, and a string of prior failed launches.
Finally, one realizes it doesn't matter, as you're billing by the hour. A grim apathy replaces ambition, as one starts to fantasize about being a plumber. =)
I’ve experienced this first hand. I think the trick is to encode the experience of writing the code. That takes different shapes in different languages, but a test suite that illustrates the problem and the solution is a good example.
You don't just want the finished film, you want the making-of companion piece.
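To make that concrete, here is roughly the kind of thing I mean: a tiny, made-up pytest example (the billing module and prorate function are invented purely for illustration) where the test names and comments carry that "making of" story:

    import pytest

    from billing import prorate  # hypothetical module and function, for illustration only


    def test_prorate_handles_31_day_months():
        # Why this test exists: the first implementation divided by a hardcoded 30
        # and overcharged anyone who signed up on the 31st. Keeping the failing
        # case around documents both the problem and the fix.
        assert prorate(amount_cents=3100, days_used=31, days_in_month=31) == 3100


    def test_prorate_rejects_zero_day_month():
        # Guard rail discovered while fixing the bug above.
        with pytest.raises(ValueError):
            prorate(amount_cents=100, days_used=1, days_in_month=0)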
If you're a sole developer you can do quite a few things short of bringing in other developers: good test coverage, linters and formatters (in git pre-commit hooks or CI) etc.
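For example, a minimal pre-commit hook might look something like this - a sketch only, assuming a Python project where black, ruff and pytest happen to be the tools (swap in whatever your stack actually uses), saved as .git/hooks/pre-commit and made executable:

    #!/usr/bin/env python3
    # Minimal pre-commit hook: abort the commit if formatting, linting or tests fail.
    # The specific tools (black, ruff, pytest) are just an assumption; adapt to taste.
    import subprocess
    import sys

    CHECKS = [
        ["black", "--check", "."],  # formatting
        ["ruff", "check", "."],     # linting
        ["pytest", "-q"],           # tests
    ]

    for cmd in CHECKS:
        if subprocess.run(cmd).returncode != 0:
            print("pre-commit: '" + " ".join(cmd) + "' failed, aborting commit")
            sys.exit(1)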
For example, I try to aim for 100% test coverage in solo projects - that might be overkill in a project with multiple team members all reviewing and testing each other's code, but it keeps me from making stupid mistakes when there's no-one else looking over my shoulder.
Another thing is how much you dog-food your project - if you are building something you use yourself every day, you are going to surface bugs and feature requirements better.
This note serves mostly to stand vigil against project requirements for 100% coverage, which I've seen a few clueless leads or managers attempt to implement for teams.
100% code coverage as a requirement is usually a waste of time and resources. Test code, especially units, at their edges. This frees you to change implementation details while preserving functionality. In two decades of experience, the best codebases I've worked on have around 50-80% coverage in unit tests. 100% is a false sense of security as bugs still present themselves to users. I've also seen devs burn days of effort chasing code coverage metrics, days of nearly zero-value add to the quality of the project.
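To illustrate what "at their edges" tends to mean in practice (the function here is invented): pin down the observable behaviour of the public interface and say nothing about the internals, so the implementation stays free to change:

    from myapp.text import make_slug  # hypothetical public function under test


    def test_slug_is_lowercase_and_hyphenated():
        # Only the contract is asserted; whether make_slug uses a regex, a
        # translation table or something else entirely is free to change.
        assert make_slug("Hello, World!") == "hello-world"


    def test_slug_strips_leading_and_trailing_separators():
        assert make_slug("  --Already Trimmed--  ") == "already-trimmed"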
I completely agree that "good test coverage" is critical; I've just not seen that it extends to 100%. Pre-commit hooks for linting and formatting and automatic test runs are all best practices. Dog fooding does some amazing things in terms of usability.
I mean sure, in principle I'd agree with the 50-60% for the average team as a decent metric (and as a caveat, targeting the most business critical or complex parts of the software). As a single developer (and time and needs allowing, of course: a throwaway little project might not warrant any testing) I aim to get as high as possible just because I don't have that coverage that comes from having other people looking at my code.
I know there will still be bugs, particularly in harder-to-test things like the UI, browser issues etc, but it gives me more confidence to move a bit quicker when I don't have others to check me.
Come on, really? There are plenty of tools to keep code legible in any language: linters, documentation comments, design patterns, idioms from authoritative references, etc.
Every time I see code that doesn't really "fit in" with more experienced code in that language I already know it's gonna be wrong in other ways despite technically "working".
I too have suffered from the neuroticism, imposter syndrome, and egotism of being new at something, but damn, can't we just admit this? Guaranteed everyone who isn't a developer sees it and has to deal with it even outside the code itself.
I've definitely been on the receiving end of experienced (10+ years) and competent (or, at least, clearly not incompetent) developers solo-developing inscrutable code that their colleagues had to pick up later. In contexts where they contributed code to bigger projects, their code showed more focus on readability.
I'm sure I've inflicted my share of hard-to-understand code too, most likely out of not wanting to spend extra time on some byzantine section of legacy code.
I’ve recently joined a team where the vast majority of the code used across different projects has been written by a lone developer. The code quality is quite good, however, nothing is documented thoroughly and it’s mostly understood by a single person.
We’ve started to code review all new code being merged into the codebase, we’ve been pair programming occasionally, and I’ve also been writing documentation about how to use the code as I get a handle on it. It’s been helpful so far.
Please document your code. Please provide examples.
What does well documented code look like for you? For me if I look at a new (webapp) repo I start from the controller level and look at the services and models. Would a good swagger page count as good documentation? My worry with documentation is that it can lie but the code doesn't.
I agree with this perspective. Documentation outside of code is almost never worth it (you cannot capture all the nuances or current bleeding edge - only code can) and usually just an excuse for someone to throw up their hands and not understand something.
There is value in guides, and good comments. For a library, all the external functionality should be described (including prescriptively). For a big ol' code base? Read the code and follow references!
The good documentation I’m envisioning refers to standardized comments throughout the codebase (we’re working with cpp, so doxygen in this case), with code examples within the header showing how the code should be used.
I’ve found through my discovery of this codebase that there are so many custom classes that it’s not quite clear how to use them or where to start, so having code examples along with comments describing their usage is pretty useful.
It’s also nice to utilize version control for the comments, which makes them easy to contribute to, as well as hopefully inspires others to contribute their own thorough comments when designing and contributing new classes to the codebase.
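We're in C++ with doxygen, but the shape is the same in any language; as a rough analogue (and purely as an invented example, not our actual code), one of those documented classes might look like this in Python-docstring form:

    import time


    class RateLimiter:
        """Token-bucket rate limiter (hypothetical example class).

        Why it exists: callers should never juggle timestamps themselves;
        they just ask allow() before doing rate-limited work.

        Example:
            >>> limiter = RateLimiter(rate=2, per_seconds=1.0)
            >>> limiter.allow()
            True
        """

        def __init__(self, rate: int, per_seconds: float) -> None:
            self._capacity = rate
            self._tokens = float(rate)
            self._refill_per_second = rate / per_seconds
            self._last = time.monotonic()

        def allow(self) -> bool:
            """Return True if a request may proceed, consuming one token."""
            now = time.monotonic()
            elapsed = now - self._last
            self._tokens = min(self._capacity, self._tokens + elapsed * self._refill_per_second)
            self._last = now
            if self._tokens >= 1:
                self._tokens -= 1
                return True
            return False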
Documentation regarding process, coding guidelines, and the likes are better suited for a platform like confluence or some equivalent. We do that as well.
I think you can beat the lone developer if the original developers take the time to thoroughly explain their code, at minimum in the comments, where it is good to point to papers explaining algorithms and techniques used. But also lots of example use cases explaining the various features of the code base -- not just "here is something cool you can do" examples, but "here is how these various functions can be used together, and this example shows how you can generalize to a larger problem." White papers can be useful for larger projects. There also needs to be some teaching work done: one-on-one conversations with colleagues for small projects, online videos presentations, and/or attending conferences for larger frameworks.
If you don't put effort into explaining your project to others, very few people will take the effort to learn it on their own, since the effort to understand your code might cost more in man-hours than the effort required to program everything from scratch.
Life is too short to poorly build a house, a house that will break down, with the windows and doors in the wrong place.
Likewise, life is too short to write poor code. Either you already know from the outset it's going to be used, or you don't think it is going to be used and life surprises you. In both cases, you will want to have written good code. Only the edge case of 0 users is compatible with poor code, and that is perhaps code that should not be written at all, because your time is limited on earth.
Exception: code to learn.
N.B.: There is beautiful code written by single people (e.g. SQLite, Redis, Dave Hanson's LCC compiler and his beautiful "C Interfaces and Implementations" library [1], Stanford GraphBase) and ugly code written by groups (not giving examples here so as not to offend folks - some of it is also still very useful, and used by me!).
Unless you're disciplined or have strong external motivations, there's no incentive to write code that's easy to understand for other people. Everyone struggles with this in all sorts of places in life. Diet, exercise, keeping your house tidy, and so on. Few are self disciplined enough, some need to exercise with a friend, some need a personal coach, some need to have friends over to visit.
While it helps to have other eyes judging your work, expectations can also fade once your relationship with them starts to get closer. For example if you have a relationship with someone where you're both messy, you might be trying to make things tidy for each other in the beginning, but as you start trusting each other and letting your guard down things become messy.
Perhaps something like copilot that focuses on judging you rather than doing the work for you would help. Like a laundry list of areas in the code that are difficult to understand.
This is something I learned the hard way. The value of software frameworks such as Java's Spring or Python's Django is that they enable teamwork. At least in theory.
They have higher costs when scaling vertically but as a trade-off, it's easier to horizontally scale by the number of developers.
When I realized this is when I started to enjoy dependency injection tools.
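To sketch the idea (all names invented): with constructor injection, each collaborator sits behind a small interface, so different developers can own different implementations and tests can swap in fakes without touching the calling code:

    from typing import Protocol


    class PaymentGateway(Protocol):
        # Interface one teammate can own end to end.
        def charge(self, user_id: str, cents: int) -> bool: ...


    class EmailSender(Protocol):
        # Interface another teammate can own.
        def send(self, to: str, subject: str) -> None: ...


    class CheckoutService:
        # Dependencies are injected, not constructed here, so this class can be
        # written and tested without knowing anything about the real gateway or SMTP.
        def __init__(self, payments: PaymentGateway, email: EmailSender) -> None:
            self._payments = payments
            self._email = email

        def checkout(self, user_id: str, cents: int) -> bool:
            if not self._payments.charge(user_id, cents):
                return False
            self._email.send(user_id, "Thanks for your order")
            return True

A DI container mostly just automates the wiring of those constructor arguments.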
Another solution you can use to protect you from yourself here is using a relatively mature framework, and keep to its conventions.
To underline -- the older, more stable, and more boring, the better!
What you have then is at least some well understood, well documented foundation for others to immediately understand the basic structure and flows of the code.
The hardest thing here is being disciplined about sticking to the conventions, even when they seem suboptimal for your 'special' usecase.
Really try hard to not do things like pulling in 'community plugins' and the like, using language features the framework was not designed to work with, or in the worst case forking the framework or going off on your own path with custom code that reimplements some of its features! Stay as boring as possible!
I have had experiences that contradict this! At the same company there was a ~30kloc frontend built by one developer and a ~20kloc backend built by four developers and the second was much harder to maintain and improve. Multiplying developers on a project can enforce some rigor and extensibility, or it can simply multiply the amount of time and effort available to just hack the next feature in. A single developer can create a mess only they can understand, or they can end up self-enforcing rigor and extensibility in order to handle the scale of the project. (Multiple devs certainly increases the oversight so you’re more likely to know if a mess is being made, so there’s something to be said for this approach).
I would not generalize and say that "a lone developer will always produce unreadable code". It's just that it requires a somewhat rare skill.
Think about it: writing readable code requires putting yourself in the shoes of a future reader/reviewer, while writing a complex piece of code, and often under tight time constraints. That is not easy.
It's also difficult to measure consistently. Lack of clarity has a delayed impact in months or years in the future, while it has some immediate benefits (more features produced more quickly, right now). Often the original author doesn't feel the impact at all, since they have moved out to a different project/team by that time. It is externalized to others.
Because of all this, code maintainability is generally not incentivized at the workplace (while producing new features quickly is). The programmer who "produced a single feature, with very maintainable code" quickly learns that he's less valued than the programmer who "produced 3 features very quickly". Code maintainability is not even in the picture.
I know some very talented developers that really struggle with code readability. For those, pairing with someone else can help. But that other person also needs to have the right skills. They must be methodical and organized - that is not rare in programmers. But they must also not be afraid of asking the right questions and pointing out contradictions, "slowing down" the other developer. That is less common.
I haven’t experienced this much, as a frontend developer. Not in my own code from years ago and not in other’s code on projects I’ve inherited.
I’ve joined some teams where the code was horrific after rounds of patchwork by people who didn’t understand the system. But by and large, systems made by one person have been really easy for me to work in. Except for the various cases of overengineering by senior engineers or when junior engineers try to apply too many ‘cool’ frameworks that aren’t necessary.
Maybe it’s because most webapps solve simple problems in simple ways. There are collections and entities and relationships and almost everything is a page or a list or a form.
I suspect most of these "problems" aren't really problems. If you're a lone developer and you want to make a 'great' product, then you really don't need to be concerned with writing great code. You just need it to be good enough, just readable enough, and - most importantly, an actual product that people are using.
95% of good software is just well-marketed software. Part of being well-marketed is filling an actual need, and doing it well. When it works, no one cares what's happening under the hood unless it causes a problem.
Part of the issue here is most people do not know how to actually attack a problem together in a non-personal way. What I've noticed is one of the following issues:
- Can't accept criticism of their solution. Too much ego.
- Don't really care about finding the best solution.
- Can't communicate well enough to discuss the problem at hand in the moment.
- Don't have enough domain knowledge to add meaningful feedback.
In essence, a lot of features are built by the lone developer. The only saving grace is the process: breaking a task up into smaller chunks and code review.
I think the lone developer problem is really, in truth, the lack of good documentation. I have taken over plenty of code that was well documented where I definitely wouldn't have written it that way, but I understood it. In a lot of cases I rewrote parts and made sure that what I changed was well documented.
I have also worked on code where functions or variables were labelled a, b, c or aa, bb and orange! I don't recall who said it but one of the hardest problems in computer science is naming things.
I build everything alone and the way I've solved this is:
1. Use consistent patterns for how you implement things. Don't mix and match; nearly all implementations in a given context/domain rely on the same mechanics.
2. Document the patterns you use. If they're your own, take the time to write up a blog post or tutorial on the how/what/why of them.
This has enabled a surprising number of people to pick up a project I've worked on with very few questions and little effort.
Code has a strong dependency on context. It is not understandable without a context. Hence reading code is much harder than writing it.
More often than not, that context is something you need to guess - by reading the code - and then establish for yourself. Often it's a best guess, because the author did not explicitly document the mental and technical context in which the software was written.
In terms of coding standards and diverging designs, as other people have mentioned, this comes down to both tooling and leadership. It's an integrity question, really: do you wish to write software that other people understand?
It's a pretty well-established fact that the brain only has so much cognitive capacity, so writing software that the brain can absorb is of course key. One way of doing this is writing small, well-documented, context-based code.
I highly recommend that if you are in a team, you set up a ruleset for how things should be and have your tooling enforce it: class lengths, spaces, variable names, null reference checks, etc.
It's easy to be a lone developer. Everyone can write code. If you can get someone else to understand it while reading it, you're one step closer to being a really good developer!
The overriding premise seems to be: get feedback. Always a good reminder.
However, this can be taken too far and contributed to me leaving my last job. I was pushed for review on too many things, in my opinion. It got to the point where it was a waste of time trying to decide anything for myself. I had to invite people from multiple disciplines in the off chance a good idea would surface. Not only did it demoralize the experts, it slowed progress from weeks to months with no appreciable difference in quality. Most meetings were met with little or no feedback because the people involved had no experience in the stack. Most suggestions were to solve the problem within their own area of expertise, the web service guy would always suggest a web service, the DBA would always suggest TSQL, etc.
What I would recommend is developing the ability to recognize when more feedback would be beneficial, scheduling a review or meeting for that, and acting like a working professional who takes responsibility when things don't go perfectly. Which suggests the real root of the problem and the need for getting feedback all the time...
The lone developer problem I see is that it's cheap to start things, get them working and then it's really hard and boring to maintain things. So it's just easier to make more stuff.
Like consumerism, development creates stuff that just won't rot away, a lot like polystyrene cups. Code lives too long and of course it gets hard to understand as the domain experts retire or move on.
I work on a legacy software stack that started just before php was invented; parts of it are php2. There's some aspx junk in there and a bit of crazy c-sharp inside sql server procs. It's wild.
However, I'm learning the business logic as we repair it and port it to a more standard stack. The young people who work with it are also learning it and that helps everyone understand what was going on.
If this project had only lasted 5 years, the cowboy shit that was done in the 90's wouldn't matter. But here we are, fixing things and writing tests and putting it in CI/CD so we can get a more orthodox, boring solution, because I know it's easily got another 20 years on the books.
This is something that the folks currently refusing to work with me can look forward to.
It's ridiculous. We are highly unlikely to one day wake up a different race, or [slightly more likely] wake up a different gender, but we will all become "old" (the alternative kinda sucks).
I suspect a lot of SV folks are already experiencing the "hoist by their own petard" problem. This will become a really big deal, with all these layoffs.
Also, the code I write isn't "overengineered," or "spaghetti." It's advanced code. Many modern corporations are obsessed with hiring the least possible qualified candidates, and throwing them at advanced ship projects. In these cases, someone right out of JS bootcamp, is supposed to take over a native Swift iOS app. They won't have fun.
Too cheap to hire the right people? You get what you pay for. Too bigoted to hire experienced people? You get what you ask for.
Depending on what your objectives are, I would argue that code is usually a means to an end, rather than the product. It's great if you enjoy writing beautiful code, but usually shipping it is what counts. It's fun to maintain beautiful code, but there won't be any maintenance if it never ships.
As a single developer, the #1 objective (unless you're working on fun/hobby projects) should be to ship, not to consider future maintainability. People have this perception that code "is permanent", which may be the case for code, but not for the environment it's sitting in. Everything changes, all the time. Perfect code will be perfect but isolated from its surrounding progress, if not rewritten, thrown away, redone over and over again.
In most cases, I'd rather ship a product people use, albeit with horrible code, than glance at my Github repo of beautiful code optimizations that have so far gathered zero interest by the rest of the world.
I mean sweet Jesus, it's 2023. Write some fucking documentation. There are also these things called comments that you can put in your code so you can explain why you're doing what you're doing.
The code shows what, the comments show why. And it's the why that's important.
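A made-up example of the difference: the code already says what is happening; the comment is only worth anything because it records why.

    def dedupe_orders(orders: list[dict]) -> list[dict]:
        # Why: the upstream API sometimes returns duplicates across page
        # boundaries, so we deduplicate by id here instead of trusting the
        # pagination. Removing this looks safe but quietly brings the bug back.
        return list({order["id"]: order for order in orders}.values())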
Anyway, the more code you read, the better you get at figuring out what code does. But instead of reading the code try to understand the intention behind the code. Think of it as you're reading the mind of the person writing it. Are they lazy? Bad? Overly complicated? Weird structure? What do they always do? What are they bad at? How do they approach what they're doing?
There are developers who are too clever by half, and you can see it in the code; they do something in a complicated way instead of doing it the simple way. They do things that look interesting but which have no real benefit.
Unmentioned reason: for the lone developer, the temptation to be "clever," or do interesting experiments is harder to resist. For me, sometimes that's driven by boredom. I've occasionally found myself switching paradigms ("This time I'm going to try a functional style (in python)...") just to keep things interesting, even if in the long run it might not be ideal. Even in the same project! Or, without the guardrails of peer criticism, I'll think "ok, this is a great time to try new library/framework X."
IMHO, there are some benefits to the above! Experimentation and trying new techniques/frameworks is how we learn and progress. The danger arises when the experiments congeal into half-baked production code.
I've often noticed developers seeing a project for the first time and not long after proclaim it as "bad code" and that it needs to be refactored or rewritten (with zero thought as to whether it's good for the customers or the business.)
Heh - I understand that impulse, but when I thought that, I think I was less rational. Why is it bad? I have seen bad codebases (whether for real or imagined reasons), but to unravel something that is bad can be a real big chore. There was one case where I felt, and still feel, a certain project was over-engineered, but rewriting it (which I took a little time doing) is a lesson in futility, particularly if there is no real technical or business impulse to undertake such a mission.
From experience, those announcing that a codebase is bad sound more resume-driven than technical. It sounds like they want to make a mark more than make a functional difference. "Bad code" that happens to have a knowledgeable team around it is not that bad, it's just not understood by outsiders immediately. Also, given the rate of turnover in teams and companies, large and significant rewrites and changes would probably not tend to be seen through to their fullest conclusions, in that one rarely gets to see the long-term effect of a large rewrite.
I think if a codebase is actually bad, and if codebase aesthetics are an important aspect of one's career, finding the next company with a potentially better codebase is probably a better thing to do than making blanket and uninformed critiques of other people's work. :)
> From experience, those announcing that a codebase is bad sound more resume-driven than technical. It sounds like they want to make a mark more than make a functional difference.
Graybeard here (been doing development for money for 40+ years). It's one thing to say a codebase is bad because of style, aesthetics or lack of comment; it's another to try to enumerate what that means. To me, a codebase is bad when it's hard to modify and hard to unit/integration test because of the architectural decisions that were made (which boil down to it not being properly layered and modular, overusing inheritance vs composition, using inscrutable variable names, etc).
One way to mitigate this as a solo developer is to step away from the code for a while and forget about the structure. You become the facepalming teammate at that point and start doing a better job of documenting and organizing your code for the day 6 months from now when you need to upgrade something or fix a bug. Eventually you start building with less abstractions, and more clearly worded variables, and better folder structure. Unlike teams where someone can keep making mistakes that impact others, you are your own worst enemy if you have to keep dealing with your own code over time. The solution to incompetence or laziness is ownership.
* If you don’t treat maintainability as a requirement, you usually won’t write maintainable code.
* If you don’t know how differently other people model ideas, you usually won’t write maintainable code.
* If you don’t know how differently you’ll model ideas in the future, you usually won’t write maintainable code.
There’s nothing about being a lone developer that means you won’t write maintainable code, but gaining the perspective necessary to do so requires either experience or feedback. So the lone developer problem is only really a problem for early-career developers and incurious ones.
Hm, I would not call this 'The Lone Developer Problem' but maybe the 'Future Comprehension Anomaly', wherein an author today understands their rationale for a design implementation, but months down the line it is as alien to them as it might be to a completely new set of eyes.
One solution is to leave meaningful comments that explain a thought process or design decision where it counts. This nudges future readers of the code in a direction towards total comprehension of the source.
i write a lot of stuff solo. i also realize that 75% of writing software is making sure others can read it. so what I usually do is this:
- write documentation upfront (with Hugo or something like that) to explain what I'm building, why, how to use and/or compile it, and how to test it. i do this before i lay down any code. this also helps me re-justify the time investment i'll be spending on writing the thing. (it is much much easier to maintain documentation before you start coding than after. once you're in the zone, you're in)
- write behavior/end to end tests that describe what major components of the software will do and how they will interact with each other. these tests usually change, but I consider this a second layer of documentation
- make code as readable as possible, i.e. small functions, high modularity, reasonable var names, etc. i also use linters heavily.
- i try to break up major pieces of work into pull requests despite me being the only contributor and explain what happened and what the work introduces within them. i'll also add comments to sections of the code that don't make sense. i use PRs for this instead of git commits or (only) comments because i've found them to be much better at digesting large codebases, and they are great spaces for discussing changes within context.
This was a poorly written blog post, framing this particular situation as a "problem". It becomes a problem when someone acts like this inside a team or professional organization, but the article is about single-developer projects. There can be many good reasons why code quality doesn't matter in such instances, but framing it from the start as a problem is more akin to clickbait. I would expect higher quality content to get upvoted on HN.
Writing readable, maintainable and future-proof code is a distinct skill, widely different from writing code per se. One-time contractors with fixed, result-based pay tend to ship "just working" code, thus most closed-source commercial one-developer code is utter garbage. Looking into open-source projects may give a totally different perspective, because OSS authors actually care what others will say.
I always consider this a corollary of Conway’s law.
In the absence of any organizational division of labor, there is no communication structure which the system is forced by Conway to mimic, so instead other attractors in architectural space dominate - notably the ‘big ball of mud’ which is where code wants to end up unless you continually fight the entropy gradient with frequent refactorings and trimming and cleaning internal boundaries.
Single developer work is akin to a single point of failure. This article brings to mind a famous essay by an author whose name eludes me at the moment. It has a line that goes along these lines: "With enough eyeballs, all bugs are shallow". You wouldn't board a plane that was built by one single guy, right?
This is my experience as well. I built an app in Go from scratch without prior experience in Go. I have 5 years experience in general, but in other languages.
The start was not bad and I think the code was pretty good, but now I feel like I lost my way at some point, and the code has started degrading.
I hope when we hire new developers they will not hate me as much as I think.
This is the power of teams of software developers. When it comes to maintenance, if each team member remembers the same 40% and a few team members each uniquely remember 5%, then the team has only forgotten a small percentage of the code and can work together to produce those "Oooh now I remember what this is" moments.
I generally agree but for my own side projects, I like to keep a lot of code in one class if it makes sense to be there so I'm not jumping around a lot. For work code, I follow SOLID, DRY and 12 Factors so multiple people can be working in multiple classes and not usually running into conflicts.
Too many times have I found myself looking at some code I wrote a couple of months back that doesn't really make any sense, and then proceeding to refactor it, only to find out that the original code was there for a reason… documentation is the key.
I think this is true. Single developer code is often hard to read and understand. However, team-developed code is also often hard to read and understand.
Does this apply so much when there's model boilerplate and folder structure tooling?
What if a lone developer does things like design documents, flow charts to explain infra, and documentation?
I think the lone dev would be a problem if they didn't set up with a commercial interest. Not to mention the time to set up everything properly instead of just getting things working.
Software should be developed by teams, period. Ideally with pair or mob programming, but with some sort of code review process for each commit at least. This is how you knowledge share and prevent understanding from being siloed in the mind of a single developer.
A good example from the open source world is the works of Fabrice Bellard. Fabrice has a tendency to one-and-done his legendary feats of coding, moving on to the next thing once he deems something complete. For projects like ffmpeg and qemu, maintenance teams have stepped up to keep them alive, but the first, daunting task is always figuring out how the hell he did everything.
Maybe you didn’t mean to be so absolute in the first sentence? Our field is an art. An unmediated connection between one’s inner process and the code one writes can produce very compelling work that stands apart from others. In some cases, that work can prove “legendary”. That it may later take some support, documentation, study, and unwinding to make it maintainable doesn’t mean that the solitary effort shouldn’t be pursued in the first place.
A lot of celebrated software began as a guy in a room, and later was adopted by others because they thought it could solve their problems. The software never would have existed in the first place were it not for the lone person in the room.
If software is constrained to resemble the communication structure of the organization that produced it, well then programmer politics are _boring_, while an individual's mind is _novel_.
That's a recipe for the worst code base. Pair programming is designed to level up someone while bring down the other which is helpful in some situations but not in most. If you have two people who know little perhaps together they can produce something. Large teams break up tasks so individuals can work on pieces that fit together. Imagine a team of 20 sitting around debating that next line.
> Pair programming is designed to level up someone while bring down the other
Wow, that is not what pair programming is "designed to do" at all. I never felt like pair programming brought me down. One of the best ways to learn something is to try and explain it to others. I am not talking about pair programming where one person just types like a robot and another one explains what to do. I'm referring to the situation where two people work on a problem that one of them is having. I think both participants benefit. In explaining the problem you have, you learn. In offering possible solutions, especially ones that don't work, you also learn. When you solve it together, you build teamwork. I really don't see any downsides. Especially with remote work, it's a great way to share knowledge and build rapport with your team mates. What real project is specified to the point where all programmers are interchangeable and can work separately without EVER talking to each other?
Yeah, not all people work optimally that way, and many of those that don’t do uniquely good work regardless. There’s room for all of us.
> What real project is specified to the point where all programmers are interchangeable and can work separately without EVER talking to each other?
I have no idea what this has to do with anything that anyone said here. But lots of projects don’t require a lot of ongoing collaboration. You may just be working in a very particular sector if you’re kicking around all these narrow absolutes. It’s a really big industry with lots of different practices and projects.
I agree it may not be optimal for everyone, and I might be more social than some. I've just learned a lot working with people who knew more or less than me at the time and some of my more fun+stressful experiences involve spending hours in collaboration fixing a problem on a tight deadline.
> I have no idea what this has to do with anything that anyone said here.
The original article talks about a hypothetical single developer project vs one with code review/pair programming and suggests that mob/pair/reviewed programming leads to better outcomes. I didn't think about solo open source projects which might be great work by one person, more like things that a business is built on which might be the responsibility of only one developer. I work on commercial software mostly for small companies with over stretched development teams. I've seen this kind of solo project built many times, and after the developer leaves it's abandoned or rewritten, which is just a huge waste of time. So "optimal" in the larger scale.
I do like pair programming, which I would define very loosely as real time collaboration on a single shared problem whether that's in person, in a zoom call or just over slack. I was trying to imagine a software development situation with multiple programmers and NO collaboration. Punch a clock, pick up jira tickets and never talk to a peer? I wasn't presenting that as a realistic scenario.
I admit I am prone to hyperbole, but however you define pair programming or code review, I think all software benefits from collaboration between programmers, users, designers, architects and product/business people. One of the more intense forms of that is pair programming, and I happen to enjoy it.
I might need you to explain how your API works, but I cannot write code while I’m listening to you talk, I need peace and quiet (or at least headphones).
Oh, absolutely, I can't listen to a book on tape and code. I mostly listen to death metal or electronic music with no (intelligible) words. I do think programming engages the language portion of the brain, which is why it can be so rewarding and draining.
But traditional pair programming, like actually sitting next to someone at one computer? I honestly enjoyed it! I might just be old? :) These days IntelliSense is effectively my pair programmer. TypeScript and VSCode allows me to actually enjoy front end dev work for once, fixing the squiggles is fun. But in the old days before Stack Overflow, time spent co-working next to someone who really knew an API or a library or a language was very much worth it.
My current team does do a shared screen pair programming thing a couple of times a week, there's a video call that we can drop in on while we are working on something.
Maybe it's more inspired by Twitch live coding and less like traditional pair programming but I still think it counts? It wasn't even my idea, but I participate because I enjoy it.
Mob programming is such an asinine concept to me. It's like all the people saying holotropic breathwork cures cancer. It's just developer mysticism. There is no amount of hemming and hawing that would get me to believe that treating one guy, capable of his own expression, as a code-input robot while developers around him bark commands at him is actually productive. Each line of code is so expensive, and the probability of any one of the people barking out the code remembering how something works is pretty low. There is significant evidence that the act of typing/writing something produces better memory.
It works when you're helping a junior. It's insulting to anyone else. Part of our jobs as programmers is to be able to tease apart another developer's mental model and learn the abstraction. If you cannot do this, the codebase is either unsalvageable or you are. The only codebases I have been unable to untangle in my long career have been either written by contractors or by a pseudo-intellectual moron developer who believes they are the second coming of Turing. Everyone else's code can usually be parsed with some patience, and a debugger for more complicated tasks. I even (gasp) use pencil and paper to keep track of code flow often.
> as a code-input robot while developers around him bark commands at him is actually productive.
That has nothing to do with mob/ensemble programming. In the end, it's a discussion that a) makes code easy to understand, b) creates shared knowledge, and c) avoids unnecessary work.