1. Solves the task.
2. Does so within the necessary performance constraints.
3. Uses commonly accepted approaches, libraries, and style; does not reinvent things which are not necessary to reinvent.
4. Readable by programmers who'll work with that code in the future. That could be juniors or seniors, depending on the particular company and project.
5. Extendable for changes which are likely to happen in the future.
6. Not extendable for changes which are not likely to happen in the future.
Every item is necessary, and going down this list requires more skill and more humility. It's easy to write complicated code. It's hard to write simple code that is as good as the complicated version where it matters.
That's my opinion, and that's what I strive for.
That said, I might be completely wrong about it, as I see great churn in modern software; things are being rewritten over and over again. Code maintainability seems not to matter as much as it did in the past. Maybe that's my bubble. Sometimes I think that a good programmer is one who can spew JavaScript nonsense faster than his peers can review it, because who cares, that stuff is going to be rewritten tomorrow with a revolutionary new-age framework.
> does not reinvent things which are not necessary to reinvent.
This is one of those "eye of the beholder" things.
If I had a quarter for every time I've heard "That's a solved problem," with a reference to a dependency, somewhere, I'd be a rich man.
I tend to really avoid external (not written by me) dependencies, because I have had many problems with other people's code. Fixing someone else's badly written open-source code usually takes longer than just writing the small part of their library that I need, and it doesn't involve having to argue with them, or have my PR rejected, because I have "too many comments."
I know that's not a popular stance, around here, but it works for me. I use a ton of dependencies, in my work, but I wrote most of them.
Even within the "batteries included" aspects of the .NET 6 framework, we insist on doing certain things our way.
Some of the biggest examples are the AspNetCore logging and JSON-style configuration abstractions. We completely rip these out and do it our way, in code. I cannot account for the exact number of hours we wasted trying to do [x] the 'official' way only to be burned at deployment & framework-upgrade time. It took at least 100 hours just to realize JSON-based runtime config is definitely not for us.
Separately, it took me fewer than 30 minutes to write a dumbass static logger that fills a SQLite database with entries and have it wired up throughout the application. I don't think you could even read and understand that AspNetCore logging article in 30 minutes, much less have functional code in place that you can hang your hat on for years to come.
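For a sense of what that looks like, here is a minimal sketch of such a static logger, written in Java with the org.xerial sqlite-jdbc driver rather than the commenter's C#/.NET; the class name, table schema, and database file name are all invented for illustration:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Statement;

// Hypothetical minimal static logger: one table, one connection, no framework.
public final class Log {
    private static final Connection DB;

    static {
        try {
            // Requires org.xerial:sqlite-jdbc on the classpath.
            DB = DriverManager.getConnection("jdbc:sqlite:app-log.db");
            try (Statement init = DB.createStatement()) {
                init.execute("CREATE TABLE IF NOT EXISTS log ("
                        + "ts INTEGER NOT NULL, level TEXT NOT NULL, msg TEXT NOT NULL)");
            }
        } catch (SQLException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    private Log() {}

    public static synchronized void write(String level, String msg) {
        try (PreparedStatement st = DB.prepareStatement(
                "INSERT INTO log (ts, level, msg) VALUES (?, ?, ?)")) {
            st.setLong(1, System.currentTimeMillis());
            st.setString(2, level);
            st.setString(3, msg);
            st.executeUpdate();
        } catch (SQLException e) {
            e.printStackTrace(); // never let logging take the app down
        }
    }

    public static void info(String msg)  { write("INFO", msg); }
    public static void error(String msg) { write("ERROR", msg); }
}
```

Call sites are one-liners (`Log.info("server started")`), and the whole thing fits in a single file you fully understand.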
If not reinventing the wheel is about saving time, then perhaps we need to iterate the aphorism. Maybe we flip it around: Only reuse other people's code when it's for something that you don't really care about or doesn't frustrate your project's long-term ambitions.
Reinventing the wheel can also lead to difficult-to-find bugs that explode years down the road. Yeah, you could reimplement the parts of zlib you need in a couple hours, but just use zlib, even if it's a core part of your functionality.
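To make the "just use zlib" point concrete: in Java, for instance, zlib already ships wrapped in the standard library, so the battle-tested path is only a few lines (a sketch, with error handling kept minimal):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.DeflaterOutputStream;
import java.util.zip.InflaterInputStream;

public class ZlibDemo {
    // Compress with zlib (DEFLATE plus the zlib header and checksum),
    // via the JDK's built-in wrapper around the zlib library.
    static byte[] compress(byte[] input) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (DeflaterOutputStream z = new DeflaterOutputStream(out)) {
            z.write(input);
        }
        return out.toByteArray();
    }

    static byte[] decompress(byte[] input) throws IOException {
        try (InflaterInputStream z =
                new InflaterInputStream(new ByteArrayInputStream(input))) {
            return z.readAllBytes();
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] data = "round-trip through zlib".getBytes(StandardCharsets.UTF_8);
        System.out.println(new String(decompress(compress(data)), StandardCharsets.UTF_8));
    }
}
```

Decades of accumulated edge-case fixes sit behind those two stream classes, which is exactly what a couple-hours reimplementation wouldn't have.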
However, as an industry we don't have a great way to know what is or isn't reliable. I've started publishing component datasheets with my utility libraries because that conveys the sort of commoditization I think we should strive for, but they haven't been very useful so far.
> However, as an industry we don't have a great way to know what is or isn't reliable.
I'm not sure it matters. You can assume it will be as reliable as your own buggy code that you wrote yourself. The more important question is how maintainable or replaceable is it. Once you include a dependency it becomes your code to maintain, so it had better be something that you're comfortable jumping in and getting your hands dirty with if need be.
That's fine if nobody else has to maintain your code.
If they do it's difficult to see how your solutions to solved problems aren't going to be significantly more problematic and risky than established libraries.
And that is all good if the developer is qualified and has time to review your work.
Far easier to use something that 1000's of others are using as well. Far more likely for edge case bugs to surface.
That said, I have to agree that it can be painful trying to get one of those edge-case bugs resolved if you are the only one being impacted. In that case it may be worth just implementing your own functionality and swallowing the future support. But I think it's got to be a backup plan, not your first response.
The real problem is that most developers don't have the experience or knowledge to go beyond accepting the "use xyz library" suggestion on Stack Overflow at face value, not having the ability to review xyz library for suitability. We end up with lots of code that is just collections of dependencies, where only a very small percentage of the functionality is used and no resources are put into ensuring that the dependency is not introducing more problems than it solves.
> And that is all good if the developer is qualified and has time to review your work.
That's me. I'm the person that most often needs to go back into my code, six months later. I write code that I want to use, and that I want to see, months afterwards. I actually don't give a damn if I never get a single star on my repos. In fact, the fewer people that use it, the better. I still write every library as if it will be used by first responders, as I have pretty high standards.
I'm also insane about testing. If you look at those repos, you'll see that the testing code (either unit, or harness), far outweighs the code under test. Most of my test harnesses are App Store-ready full-fat applications, with localization, and documentation.
But I am not reinventing Facebook. I write end-user native apps for Apple systems. I don't have the same needs as someone writing a massive social media server.
Like I said, I do what I do, and it works for me. Your mileage may vary.
> Far easier to use something that 1000's of others are using as well. Far more likely for edge case bugs to surface.
Dunno, I feel this tends to drive a lot of complexity. A lot of canned solutions are comically over-engineered and break fairly frequently (especially in the cloud-adjacent space, for some reason). Often the code needed to integrate such a library and make it do what you need may exceed the code of just doing it yourself in the first place.
What happened with Log4j should also be a lesson in why adopting needlessly complex standard solutions to problems that don't really need them is deeply problematic. Most of its users would probably have been served fairly well by a thin wrapper around sysout, possibly with a JSON serializer.
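For scale, the thin wrapper the parent describes could be a sketch like this (Java, dependency-free; the class name and JSON field names are invented, and the hand-rolled escaping is deliberately minimal):

```java
import java.time.Instant;

// A deliberately boring stand-in for a logging framework:
// structured JSON lines on stdout, nothing else.
public final class SysoutJsonLog {
    private SysoutJsonLog() {}

    private static String esc(String s) {
        return s.replace("\\", "\\\\").replace("\"", "\\\"")
                .replace("\n", "\\n").replace("\r", "\\r").replace("\t", "\\t");
    }

    public static void log(String level, String message) {
        System.out.println("{\"ts\":\"" + Instant.now()
                + "\",\"level\":\"" + esc(level)
                + "\",\"msg\":\"" + esc(message) + "\"}");
    }

    public static void info(String msg) { log("INFO", msg); }
    public static void warn(String msg) { log("WARN", msg); }
}
```

No lookups, no JNDI, no configuration language: nothing there for a Log4Shell-style exploit to latch onto.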
What you are saying absolutely makes sense to me for domains like device drivers.
I think it becomes a lot more questionable when your coworker wants to write their own JSON parser or HTTP server for use on a commodity x86 Linux box. It's an incredible waste of resources, and a major security hazard to boot.
I wrote a JSON parser and an HTTP client for a commodity platform, and they are 100 times safer than everything else, partly because they aren't overengineered beyond all reason.
Yikes, I think this is how teams end up with "we should just rewrite this whole thing because the guy who wrote it liked doing things his own nonstandard way and nobody else can understand it once he left".
Clever code that reimplements common packages just because one person thinks it's better for their particular use case might seem like a flex when you write it, but it's a pain in the ass for everyone after you... sigh.
Well, it's kind of too bad that everyone seems to have such terribly low confidence in their own abilities.
There's a lot of things that I'm not good at, and one thing experience has taught me is to own that.
But there's also a fair bit of stuff that I'm really quite good at; above average, even.
If I write something, it tends to work pretty well. If someone else does a better job than me, I'm happy to use their stuff, but I have higher standards than "Ooh, shiny!". I won't use just anything, and "Everyone else uses it!" is not really the most heavily-weighted coefficient in my calculation.
I invite people to see for themselves[0]. I have an enormous library of work out there. I don't give a damn whether or not anyone else wants to use it. I am my own best customer. I write code that I want to use, and I use that code. I have extremely high standards, and I insist that my work meet my standards.
I find it fascinating that folks are happy to cast aspersions on me and my work, without checking for themselves.
> Yikes, I think this is how teams end up with "we should just rewrite this whole thing because the guy who wrote it liked doing things his own nonstandard way and nobody else can understand it once he left".
I’d be willing to bet there’s more to the story in nearly every case. The non-standard way is often a deliberate choice to attempt to optimize for output, given unpredictable and unrealistic deadlines.
Then it proves futile when, for instance, the goalposts are moved again, and the last 25% of the implementation has to get rushed through. That's when it turns into a pile of shit, and that's when the person decides to leave. Not because of the deadline, but because of the incessant complaining, doubting, and negativity toward this developer, who got praise elsewhere for actually innovating. That's why you hired him, after all.
Nobody else apparently stepped up to help deal with the situation before it became a problem, hence the victimization. Companies like that are going to have a culture of finger-pointing and blame-throwing. Never taking responsibility for a situation they created. Problem-solvers need not apply.
This is how industries, not teams, fail. Good programmers know when to quit the rat race. Or they know how to choose companies that follow the golden rule. Take care of your people and they will take care of you.
But if someone is actually doing it to “flex”, they’re clearly not that experienced, in which case it’s also the company’s fault for giving them that much technical authority.
> I’d be willing to bet there’s more to the story in nearly every case. The non-standard way is often a deliberate choice to attempt to optimize for output, given unpredictable and unrealistic deadlines.
Yeah... I hit this... A lot, in the past couple of years. "just get it out, we'll polish it later". It was... primarily me in dev (I was doing about 90% of the app code), and there was never enough time to catch up. I've moved on, and there's now ~3 people, and they've been given more time and ability to reset schedules and... now it's all "why was this done so poorly? this is garbage", etc. I've pointed to my many messages asking for things to slow down, schedule more time, etc. Always a "later", and now that the personnel have changed, and there's "more time" for everything... most of what I did looks bad. I was the only one writing tests - left them with 500+ tests, some documentation and action/decision log docs - but no one looks at it. "This is bad". Well... sure, but it's tested, and you now can refactor, vs rebuilding from scratch (which... I'd lobbied to do 2 years earlier, before building on top of the very shaky MVP).
While those are all good guidelines (though I'm not too sure about 6), a 'master' programmer should probably know when to break each of those rules.
In the end programming is about telling a computer what it ought to do. Making this process as easy as possible is the sign of a good programmer, but there's no reason the process has to be limited to writing code, so talking about what code makes a good programmer seems reductive.
Also in this view 6 is usually a mistake as it makes it harder to communicate, not easier. That said some things don't make sense, and therefore probably shouldn't look like they make sense either.
One of the aspects of mastery (that I've observed in others) is being able to say yes to 99% of requests but knowing how to steer projects away from that one feature that will cause complexity to explode without delivering commensurate value.
5 and 6 are super hard decisions. You never really know what your code will be used for in the future, especially not if you make libraries.
I try really hard to do this, and over the years I've found myself more than once caught out by something I wished I had foreseen but did not; conversely, I have spent a lot of time preparing code for changes that seemed likely to happen but never did.
This is a really good point. Engineers are constantly trying to predict the future, but ability to do so often does not depend on engineering skills at all. Often it is completely out of your control.
Instead of trying to make code "extendable" I found that it is much better to make code "disposable".
Do not hope you'll make the right prediction. Expect to be wrong and be able to recover fast.
Knowledge of the domain and general experience with similar domains helps. My horse sense about what's likely to change has improved with time. As far as #6, you don't want to outright limit change because the future surprises, but use a YAGNI-ish view of unlikely changes to keep the system simpler. In other words, you don't make it hard to change, you just don't spend complexity on things unlikely to change. You only spend abstraction-related complexity on things likely to change.
> 4. Readable by programmers who'll work with that code in the future. That could be juniors or seniors, depending on the particular company and project.
Underrated. Layers of unnecessary abstractions or clever solutions (e.g. complex types in Python when simple ones would do) are a mark of a junior engineer.
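A contrived illustration of the difference (Java here, though the same pattern appears in every language; all names are invented):

```java
// Over-abstracted: three named things stand between the reader and one line of logic.
interface GreetingStrategy { String greet(String name); }

class DefaultGreetingStrategyFactory {
    GreetingStrategy create() { return name -> "Hello, " + name; }
}

class Greeter {
    private final GreetingStrategy strategy = new DefaultGreetingStrategyFactory().create();
    String greet(String name) { return strategy.greet(name); }
}

// Simple: same behavior, readable at a glance by whoever inherits it.
class SimpleGreeter {
    String greet(String name) { return "Hello, " + name; }
}
```

The abstraction earns its keep only if a second strategy is genuinely likely, which loops back to points 5 and 6 above.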
"Rewrite all the things in X" is how you get 1 year of experience 10 times, instead of 10 years of experience. It's also how your company burns through tons of money for little substantial result.
Are we talking about industry programming? A good programmer knows how to get in and out, retire early, and spend the rest of their days working on projects that fulfill their needs.
I would argue NOT to do this if you're writing Java. Java paradigms are awful, lead to way too many layers of abstraction, and code that is impossible to debug because your stack traces are just 30 calls of ".invoke()".
> Readable by programmers who'll work with that code in the future.
Java was the language of choice when I was at university. The way they described it to us was that everything about it was designed with teamwork (and large teams) in mind. I don't know how well it really did that, even by the standards of the era of its introduction, but it has the air of plausibility.
Certainly, I expect anyone who is used to that style to prefer to encounter and work with Java written in that style over Java written in the style of C.
> And yet Perl became popular.
As I recall, the joke being that it was a "write only" language.
> In todays field of software development, people don’t come into it
To be as polite as possible without being dishonest: wow, there are a few rather sweeping, unsubstantiated, boiling-hot takes embedded throughout there; I've just quoted one.
From my experience, most of what features here just simply isn't true, or is at least very anecdotal. Just as one example, I've never worked for any software company that "wasn't about the users". You really only need to look at how "UX Specialist" roles have meteorically exploded over the last ~decade or so, as companies compete to offer increasingly smoother user experiences; and it is hammered into the actual developers too, as a core operating principle, everywhere I've worked.
> I've never worked for any software company that "wasn't about the users"
Startups are going to tend to have devs focused on users and "quality is everyone's job" etc. In fact, you should never hear the phrase "that's someone else's job" in a startup.
Big corp jobs, especially at non-software companies will tend to treat people as isolated cogs in the big machine, a machine you're told by everyone is "someone else's job" to understand and steer.
The article isn't dishonest. It's that the experience of "being a programmer" varies very widely. Our field has no professional standards.
> Big corp jobs, especially at non-software companies will tend to treat people as isolated cogs in the big machine, a machine you're told by everyone is "someone else's job" to understand and steer.
In my experience at a massive company this is spot on; as a mentor, it feels like I spend most of my time teaching new hires to realize that we're not a "technology company that does X". We are, in fact, a boring X company that uses technology when it's needed. Not everything has a complex technical solution. Sometimes the best thing you can do, even as a software engineer, involves changing processes, organizational structure, or bad habits.
When I read these "software" articles and books it feels like a radically different world than what I do.
If you can coordinate a large social event, you have all the mental capacity required to be a good programmer/surgeon.
All you have to do as a surgeon is follow an extremely specific script, and the script does not change that often, since human bodies tend to be more static than operating systems, compilers, and web browsers.
Can you eat a steak? Well then you have the cutting skills you need. Have you ever taken needle and thread to do some sort of job? Well then you understand sutures.
Do you really need all As in high school, and to compete like crazy to get into med school, then learn all the Latin names and bone sequences, etc., to be a surgeon? I don't think so, because these are skills nearly everyone possesses.
Yes, they try to make it seem like you have to be really smart to be a surgeon, but if you scrape away all the complexity that is only there to protect the "smart people" who already are doctors, nearly anyone could be a good surgeon, and we would have many more good surgeons.
The profession would be far more inclusive, and all that shared culture and experience would just make medicine richer and better for everyone. We could finally do away with the insane expenses medical procedures have today.
So let us do away with the archaic ritual of a pointless medical degree and open it up for everyone.
This is a pretty bad comparison. Not only are doctors trained for years through a mentorship process, go on to teach others, and are tested against standards; every surgery they perform for the rest of their lives is subject to review which might result in an independent board of other doctors recommending changes to the process.
This is one of the things we're missing in software development (although retros are supposed to solve it), and the author points this out. Often, we get stuck in a bad framework or organizational structure that forces us to use complex CS ideas to solve the problem, when a better approach would allow for simpler solutions.
> Team A are those who work with metal utensils, team B work with wooden utensils, C use electrical appliances and D are in charge of all food heating. Trying to make sense of how one creates a crème brûlée with a worker grouping like that requires a lot of intelligence.
This is only tangentially related, but it dawned on me that we truly have a unique culture as programmers that you're basically forced to learn in the process of getting a CS degree.
And you can usually use cultural references (such as joking about 'Do you know how to close Vim?', or complaining about symbolic links, or talking about RMS) to tell who is a more traditional programmer versus someone who has had less exposure to programmer culture. (This obviously only applies to Americans; international hires will have their own programmer culture entirely.)
Another thing I noticed is how niche cultures tend to be less homogeneous, I feel like in all 'mainstream' cultures, the celebrities are almost always white, attractive, and socially confident. While in programming, our celebrities are pretty off-putting to the mainstream.
> You need to be really smart and have a scientific mindset. Most importantly you need to love learning new things.
I'm not so sure that these are required. I think they are more akin to "commonly found characteristics."
Especially the "scientific mindset" part. I have a disciplined mindset, but I suspect most scientists would take issue with me, calling it a "scientific" mindset.
In my experience, being a good problem-solver is more important than just about anything. Being able to "divide and conquer" a problem, sniff out root causes, and not settle for symptomatic fixes, etc. Stubborn might be more important than smart.
We also don't all have to be Mensa members. Many programmers are probably not much smarter than their peers in other disciplines.
But "love learning new things" is, while maybe not "required," at least quite helpful. It certainly applies to almost any vocation; not just software development.
But the real gist of her essay was that it helps if programmers are good with people, as much as with machines, because they connect better to the end-users of their work.
Whether or not being "good with people" is required is debatable, but I think that it is important for everyone to keep an eye on the final deliverable, and its operation in the context of the end-user.
In my experience, many organizations go well out of their way, to ensure that engineers never hear directly from the user. That's Marketing's job, and Thou Shalt Have No Other God Before Me... etc. I've watched Marketing people throw huge tantrums at engineers that "dare" to suggest that they might envision the way their work would be perceived by the end user.
I worked with someone who didn't have what I'd call a scientific mindset, and it can really be a limiting factor. They confused correlation with causation; in other words, they were quick to leap to theoretical explanations without testing or proving their thinking. When solving a problem, they tried tons of different approaches until they empirically found one that worked, without always fully understanding the cause of the symptom they were trying to remedy. They usually succeeded, but the cost was the time spent, and the complexity of the code that resulted.
There are a lot of ways to be a programmer, but that is not one of the better ways. The programmer was productive (prolific) enough, and the business was resource-constrained enough, that the quality of their work came nowhere near justifying firing or replacing them, but I'm convinced it cost the team and the company a lot of productivity and effectiveness over time.
Stubborn problem solving isn't always good. A problem can be solved in many ways, and some of them can lie in the realm of program architecture or even product strategy. If your solution conflicts with those, it will be thought of as narrow-sighted.
True. It’s also possible to go all “Captain Ahab” on a bug, and waste a huge amount of resources, trying to fix a minor problem, or refusing to use a perfectly acceptable workaround.
Communication skills, team participation, and 10000+ hours of practice with real-world problems. You wish that were true... ;)
People often choose careers for the wrong reasons, compete to be at the bottom, and end up miserable. If you are over 30, the burn-and-churn cycle you thought would be exciting turns out to be really destabilizing and lame. Consider being a plumber: you will make more money and not carry a student loan for years.
To be a good Programmer, you need to fantasize about being a Plumber every day. Even knee-deep in sewer water, people would hold you in higher regard, given that they could actually understand what you did professionally.
You sound bitter and resentful. Also, student loan debt is not a requirement for becoming a programmer. I have student loan debt, but that's because I decided to go to law school before switching to software development :)
Nope, I like what I do, and talking with the engineers I work with everyday. However, you are about to climb a mountain I've already ascended before, and can't understand my perspective yet.
You will start fantasizing about being a plumber soon... it is the logical choice after all.
I've no idea what you're talking about and I suspect other people you meet and interact with (at least online) will feel the same way. I know this might sound harsh, but that's my feedback in the event you're at all interested in communicating more effectively online.
Must agree with the other commenter: It's quite hard to understand what you're trying to say.
Also, comments like this are coming across as arrogant. You appear to have jumped to the conclusion that xwowsersx has little experience in the industry and thus come across as patronizing.
As to your original comment: As a SW engineer, I've never had to work more than 40 hours a week, have no student loan debt, and make a lot more than a plumber. If you find yourself forced to work a lot more, you likely have made poor career choices (not by picking the SW industry, but by picking the wrong sub-discipline and/or company). As such, the fact that you're doling out advice as if you know a lot looks even worse.
How many companies have you worked for in the past 30 years?
Although one's history is fairly meaningless when it comes to general opinions. Most professions don't have a sub-3-year churn rate at Fortune 500 companies.
We can have differing opinions, and from each perspective believe we are correct.
Yeah. In fact, I have ~15 years of experience (my current role is Director of Engineering). I note this just to point out how off Joel is. I offered my feedback on the off chance it might be well received, but it seems there's more going on with him. I feel that sympathy is the only correct response at this point.
Is it the same thing as being a good software engineer?
I've got relatively limited experience working as a software engineer, but to my disappointment, it seems more than 70% of my job isn't programming. Most of the work is discussing priorities, making tradeoffs, interacting with colleagues and management, writing design documents, giving feedback, writing docs, maintaining, benchmarking, and debugging services, learning proprietary tools and systems, reading documentation, figuring out how legacy systems work, interviewing candidates, and mentoring interns.
Programming happens on weekends and after working hours.
The single most useful thing I ever learned was how to break down a problem, make steps to a solution, make a flowchart of said steps, and annotate the flowchart with pseudo-code. After that, making the software was trivial. Even with that toolset I’m likely mediocre at best.
Yeah, to me, that's all programming is. Taking a task, breaking it down into tiny steps, and writing those steps down in a language a compiler/interpreter can understand.
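A toy example of that kind of breakdown, in Java (the word-counting task and the step comments are invented for illustration):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;

public class WordCount {
    // Task: report how many distinct words a text file contains.
    public static void main(String[] args) throws IOException {
        String text = Files.readString(Path.of(args[0]));   // step 1: read the file
        String[] words = text.toLowerCase().split("\\W+");  // step 2: split into words
        long distinct = Arrays.stream(words)
                .filter(w -> !w.isEmpty())                  // ignore empty splits
                .distinct()                                 // step 3: deduplicate
                .count();
        System.out.println(distinct + " distinct words");   // step 4: report
    }
}
```

Each step was a sentence before it was a line of code; the compiler just gets the final translation.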
The problem is trying to break down MASSIVE problems. Back when I was frequently dealing with newbie programmers, it was common to find ones that wanted to make an MMORPG while they could barely grok for-loops. They hadn't even completely understood what programming is, yet wanted to take on making a massive project that usually takes multiple teams.
How would you begin to understand programming (or anything for that matter) without first starting with a massive undertaking and breaking it down into smaller chunks that can be discovered? Give someone a for-loop in isolation and it means nothing. They will never really understand it because it doesn't relate to anything. Given something massive they will have to break it down, and in that it will dawn on them "Huh, I need to do this same thing over and over again. I wonder how I might do that?" at which point they will discover loops, and then it will make perfect sense.
This is how you end up with software that does the job it needs to do perfectly well when all of the expected inputs and environment are correct, but completely fails if anything unusual happens, because 'it wasn't in the flow chart'.
> I don’t care how good you are at chopping vegetables, how efficiently you handle the ingredients while cooking or how modern your kitchen appliances are – if the food tastes like crap, I’m not coming back to your restaurant!
This. Your users/customers don't care about your tech stack, pipeline, deployment model, or test coverage. They want software that works and doesn't get in their way.
Your tech stack, pipeline, deployment model, or test coverage might support producing software that works and helps the user, but it's a means to an end.
Yeah but if you're too slow at chopping vegetables and I have to wait 1 hour to get my dish, I'm not coming back to your restaurant either.
To produce software that works, doesn't get in the user's way, and doesn't introduce regressions every other day, caring about your tech stack, pipeline, deploy model, etc. matters.
Depends, IMO. I worked on a VERY specific program with lots of terms that just don't have a proper translation. And the program was internal to a company that only ever operated in one country and only ever will.
I have to admit, some of the absolute best programmers I've personally known, seemed to share these qualities:
- Natural curiosity, exploring all kinds of technical / scientific topics - not necessarily programming alone.
- Extreme eagerness to learn. I'm absolutely not the kind of guy that can just immerse myself in technical books, but these guys are the types that would dedicate their spare time to really read and understand relevant literature.
- Intellectual acumen to easily pick up concepts. This was especially apparent when you studied with them and could see how they compared to the rest. While others would struggle with some topics for weeks, even months, some of these people would pick them up in a matter of hours. But I also think this is equally due to having a strong background, and really knowing the fundamental stuff.
- Strong discipline. Showed up to work / school / etc. every day, put in all their hours. Little slacking around.
- Willingness to discuss and teach others. On the contrary to the "arrogant genius" stereotype, most of the really strong people I know have been eager to teach or show - at least as long as the interest has been genuine.
Probably forgetting a lot of stuff - but those have been some of the patterns I've noticed. The really good programmers and engineers I know have spanned from high-school drop-outs to research scientists.
Great article. I really think programmers should be more conservative when it comes to new technology. You don't always have to use fancy React stuff when making simple websites, and so on.
But I'm not sure about the whole "programming is simple, like planning a party" thing. When I think about my own projects with more than 150K lines of code, it's not like any party-planning process I have been involved in; it is actually a lot more complex. It is easy to forget how hard things are when you know them and have worked with them for many years.
Being a good programmer can also be situation dependent. The ideal author for code that powers a nuclear reactor may be different from the ideal programmer making a consumer product.
* Knows enough about what's actually going on under the hood to predict and avoid the relevant issues (SSD wear, odd cases where disk latency causes major slowdowns, etc.)
* Understands the relevant details of the domain, like algorithms and math if they do that kind of work, or common hardware issues if they do embedded, etc.
* Understands at least the basic level of application level stuff and what users expect.
* Does not do anything clever unless absolutely necessary, does not make themselves irreplaceable, finds ways to get rid of anything that is slightly interesting, writes code anyone can work on
* Avoids bikeshedding and going down rabbit holes. Doesn't spend 6 hours holding up the whole project because they want to mess with something "Really cool".
* Uses stuff people already know and is compatible with stuff people have. Doesn't reinvent bluetooth and MQTT and .csv files if they don't have to.
* Is comfortable using trusted libraries without fully understanding them, nobody fully understands every part of a large project, and code that already exists is likely code you don't have to maintain.
* Is comfortable letting frameworks make decisions for them, doesn't try to fight the tools they use by imposing a vision of how things should be on them. Don't download Ubuntu and then complain that it's too hard to swap the init system, that's not what it was built for.
* Doesn't secretly despise software, find ways to sabotage features, and wish we would all follow the guidance of Industrial Society and its Future.
I'm not going to go so far as to promote violence, but I disagree about despising software. There is a lot of software out there that should/could be replaced with simple process changes. If you despise software enough to know when it is not the right tool for the job, even though you are a programmer, you can save yourself, management, and users headaches by not writing a single line.
I'm not sure I've actually seen this IRL. I've seen cases where new custom software is the wrong approach (actually, that's most of the time), but usually it's because there's some existing software that can already do whatever it is.
There haven't been many times when I've seen something that people would normally use software for, and a process that isn't primarily software-driven willingly chosen. I think one of the only times is when dealing with physical tasks in personal life; I'll usually put up reminder signs like "Check lint trap" rather than use to-do list software.
Even then I'll use my Bluetooth printer to make the sign so it looks nice and neat....
The other case is tabletop RPGs, I've yet to see any tech at all that doesn't completely ruin immersion for non-scifi games, phones are very culturally charged objects that can instantly change the mood to "Screen staring club".
Shameless plug, but I've written a book about what differentiates a good programmer from a great one. [0]
In my opinion the soft-skills far outweigh a person's technical skills in importance when it comes to identifying whether someone is a good programmer who should be promoted to a senior position. Obviously the programmer needs to have strong technical fundamentals, but the essential soft-skills are things like:
* Good communication skills (speaking, writing, and also listening)
* Reading and understanding unfamiliar code
* Recovering from mistakes
* Understanding how to add value
* Understanding how to manage risk
* Dealing with conflict in a professional way
* Managing your time efficiently
* So much more
A lot of those things can be learned over time, but historically I’ve seen junior engineers put these things off until a few years into their career when they’re ready to make the jump to a senior role, only to find that they’re lacking in one or more of those necessary soft-skills needed to get the promotion. I’ve always advocated for juniors to start working on these skills as early as possible in their careers, because it will make you a better programmer in the long run.
> We need to stop admiring those smart enough to understand how the software works, and start making the software easier to understand in the first place.
Yes, I agree. But I think it does take a lot of design skill to achieve the latter. Making the complicated simple is not an easy task. Hoare described a similar problem like this[0]:
> There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies.
> The first method is far more difficult. It demands the same skill, devotion, insight, and even inspiration as the discovery of the simple physical laws which underlie the complex phenomena of nature.
I understand she is frustrated with the amount of new tech to learn, but why did she have to shit on teachers? That analogy was a mess of false equivalency and defective induction. Did it even prove any point (whatever the point was... "looking back on things," I suppose... oh, brother)?
What do we mean by "good" in this sentence? I've worked with plenty of programmers who output a lot of code quickly, but the code is difficult to understand and maintain; others whose output takes more time, but whose solutions are elegant, well structured, and maintainable. Coding is a craft, and assessing whether or not a programmer is good has as much to do with the audience as with any metric of quality.
I draw parallels to the world of literature, sure James Patterson sells a lot of books, but is he a "good" writer? I would say no, but the millions of copies he has sold make a counter claim that cannot be simply dismissed.
The most important quality that distinguishes a good programmer from a mediocre one is a very strong preference for simplicity.
To be more precise, they need to be able to cope with complexity in their environment, but be driven to simplify the systems that respond to that complexity. This is done by making things with simple parts that can be plugged together cleanly. Systems have to match the overall complexity of their environment, but small, composable parts and a careful buildup of layers can keep complexity manageable locally.
Once a system gets away, it is very hard to bring it back.
> Programming is writing down unambiguous instructions, and then grouping those instructions in meaningful ways. We’re coordinating work. It’s not rocket science.
I think missing here is that to be a good programmer, one must be good at dealing with ambiguity. The output might be unambiguous instructions, but the input is generally anything but. Working with product managers, designers, business stakeholders, etc., a good programmer can identify and resolve ambiguity in the requirements given to them in an effective manner. This involves asking the right questions and often doing discovery/research.
> How are we getting away with this? Isn’t someone going to catch on soon? How many “new platforms” do we have to make, before people start realising that rewriting everything, using “the latest tech” is not going to solve the problems that matter?
There's a very simple solution: The tools will need to change more slowly. This will happen when operating systems, frameworks, and programming languages mature so much that it's fundamentally difficult to make meaningful improvements to them.
We're not there yet; but perhaps it's time to start working in that direction?
Man, the most brilliant folks I've met in the software dev realm over the past 20 years were always, without exception, the least patient ones. They get bored very quickly with the mundane use of a stable set of frameworks and libs: doing yet another similar app, not something obscure using bleeding-edge stuff (and bugs). By bored I mean resigning, often immediately.
They are basically spoiled little brats thanks to their talent, and (some) companies tolerate that since they do deliver. But often they need to be babysat by managers who clear any mundane tasks from their path, so you sometimes have a little ecosystem of people or even whole teams working around one 10x dev. Mostly still worth it for the companies, but there is a hidden price that often shows very late, i.e. 6 months after they leave, when somebody has to change their brilliant cathedrals a bit. And one thing is as sure as death and taxes: changes will always keep coming.
What you describe would be hell for them; they would keep tinkering and keep creating new frameworks, languages, or even whole platforms just for a given problem, because that's fun for them. That's how it all started anyway.
> What you describe would be hell for them; they would keep tinkering and keep creating new frameworks, languages, or even whole platforms just for a given problem, because that's fun for them. That's how it all started anyway.
I think at that point the people you describe will move to a newer field where things are still changing quickly.
The traits that are favored in one type of programming are not favored in other types of programming.
If you do systems programming on scientific, medical, or defense systems, you will be risk-averse and conservative, take your time, and perhaps be successful. But those same traits will not be favored in, let's say, web development for a seed-stage startup.
Understanding things from the ground up can be good for some occupations; in others, all that matters is finishing mediocre CRUD logic as fast as possible, and taking your time to think will be seen as bad.
I think I understand where OP is coming from. We are still in the wild-west era of software development and most programmers are like pioneers -- few rules, learning as they go, etc. I agree that as the domain matures we will see different types of people become programmers, and the job itself will change to something more conventional, but more stable too.
“New technology!” has become an existential business threat to my team.
Some years ago the company went deep on polyrepo, semver'd multi-component development. Over time it has provided the same long-term efficiency improvements as putting a cup of sand in your gas tank. We have neither the breadth nor the depth of ability to make it work.
Good programmers, amongst other laudable traits, aren’t afraid to delete things.
> That’s what should matter. That’s the whole point of programming. The point of programming is to create software that delivers value to the users.
She's right.
A good programmer makes computers do good, useful things for people.
You could be a wizard at programming, and work to serve ads, or sell lootboxes, or skim pennies off high-frequency trades, but that wouldn't make you good.
I think the biggest indicator of a good programmer vs. a bad programmer is their propensity toward reading other people's code. We all love to write code, but it's only the best who read it, and by extension get better at reading it.
I enjoyed this article. I echo many responses here, and some of what's in the article, to reaffirm their importance in being not just good, but successful.
1. Clear communication
2. Thinking thoroughly about the problem and the solution
Good article to read; however, OP is missing a few points IMO…
“I hope we’ll manage to find a better balance between the eagerness to learn new tech, and the focus on actually delivering value.”
This is not always up to the dev; org-level changes to move to the cloud or use new tech happen at the mngr/arch/lead level. There needs to be trust in new tech, as well as doubt about whether it solves the problem. I totally disagree with this thinking against new technologies. Can you imagine not going to the cloud in 2022 and dealing with old server technologies?
I rarely use the cloud in my current role, and it was pretty awesome to see that the sysadmin skills I picked up when I started in 2014 were still relevant. Sometimes I wonder how insanely productive a PHP developer must be when he's given an Apache server and MySQL to build a CRUD app, after 20 years' experience with a single stack, compared to my eight years of completely random technologies.
Depends wildly on what you are doing, but tenacity, in my case (paid consultant for lots of different companies), is my greatest quality. Push through: carefully schedule external dependency promises and follow through.
Maybe this is more of a project manager answer - but a consultant programmer has to adopt the role of a project manager in lieu of a component one.
Anyone who can solve a Sudoku puzzle can learn to program a computer, and probably should, just like anybody can learn to read and write and do basic math (in which I include algebra, trig, calculus, etc.) However, there's a subset of people who are really good at programming, far beyond the level of the normal folk in the first group. (I say "normal" because the first group outnumbers the second by a large ratio, maybe 20:1 or higher.)
The only prerequisite for being a good programmer is, in a word, Logic. If you're good at logic you'll be good at programming, and if you're not, you won't. (It also helps to have a deep passion for it.)
In re: diversity, I assume that the distribution of normals:programmers is fairly constant across human populations. Yet, there's no denying cultural/gender/race aspects affect who actually becomes working programmers. (I don't know what to do about this.)
In re: all the burgeoning complexity, that's a side-effect of all the normals flooding into the field to make money. Up until roughly the Dot-com Boom, programmers were largely self-selecting. Once normals realized they could make money using computers and the Internet, they began to infiltrate the ranks and dilute the ancient hard-won wisdom of the greybeards (so-called because they are old, almost exclusively male, and typically very hairy; they literally have grey beards for the most part). Nowadays we have degenerated so far that we have insanity like server-side JS!
I call it "Eternal Eternal September." Most people can't remember, or never experienced, the staunchly non-commercial early phases of the Internet.
> The point of programming is to create software that delivers value to the users.
No, this is a side-effect of programming. If logic was entirely useless I would still spend the majority of my time doing it. It's beautiful.
The fact that a large number of human problems can be solved by means of digital logical programs is marvelous, but it's not the point of programming any more than the point of poetry is to sell Hallmark cards, eh?
> What would the field of software development look like, if “the best programmers” were not the most mathematically inclined neophile workaholics, but rather people who are good at organising (sic) work? Good at communicating clearly. People who care, not so much about which tools are used, but about using whatever tools are available to maximise (sic) the value of the product being made?
These folks are important and valuable. Call them Product Managers.
Ideally, the elite programmers would write flawless efficient software, and the Product Managers and users would take it and configure it to solve their problems.
(BTW, it is possible and economical to write flawless software. The fact that there are still so many bugs is itself another symptom of the primary problem: the confusion between elite "Real" programmers, Product Managers, and end users.)
I just want to say, I'm really glad that we are talking about "programmers". I hate that stupid term, "software engineer". What a ridiculous title. As if programmers do not also engineer software. And then what are "software architects" for if you already have "software engineers"?
In companies of a certain size, the differentiation between these (or similar) titles gets clearer. In my opinion they are also strongly linked with experience (in certain areas).