
Admittedly, my first days as a junior programmer were before some of you were born, but I'm thinking of a particular format here...

Learned as junior: If you report an OS bug or some other deep problem, seniors will not believe you and assume you're making excuses for your own bugs and lack of understanding.

Understood as senior: If a junior programmer tells me they found a system-level bug, I won't believe them and will tell them to go figure out what's wrong with their code.

Learned as junior: Legacy code is hard to read.

Understood as senior: Legacy code that I wrote myself is hard to read.

Learned as junior: Technical skills matter most.

Understood as senior: Communication skills matter most.

Learned as junior: New tech solves old problems.

Understood as senior: New tech creates new problems.




> code that I wrote myself is hard to read

This has happened more times than it probably should:

1. Arrive upon some code I wrote at some point in the near or distant past.

2. Review it to get some idea of what I was trying to do

3. Laugh at my young self for being so naive

4. Refactor or Rewrite

5. Re-realize the edge-cases and difficulties

6. Remember this being a problem

7. Refactor And Rewrite

8. Either `git reset --hard HEAD` or end up with a similar solution, but with better comments

Once in a while, I end up with a [simpler | faster | clearer | otherwise better] version, which makes this process worthwhile - even with the false positives.


Semi-related story of mine:

1. Stumble upon some specific problem with a web framework we use.

2. Jump straight to stackoverflow.

3. Sbd had a similar issue, nice.

4. Sbd wrote a very concise answer, nice too.

5. There's my nickname under the answer. Oh, wait...


I'm getting the statement about once a quarter at work of:

Yeah, I found this bug and worked through it, did a bunch of googling or internal searching, found a similar issue, and welp, it was a ticket from you (me) 2 years ago.


Which is why I am trying to get better at what I used to do: writing down and sharing problems I saw.


I have a giant dev file (well, 2 really: 1 for personal, 1 for work) that I use as a scratchpad. Anything I work on goes there. Makes it easy to find things.

Monthly I pull out useful things into notes about that topic. Run into a weird problem with spring framework? Copy out the relevant info into springframework.md.
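
A minimal sketch of how that monthly pull could even be scripted (everything here is invented for illustration: the dev.md scratchpad path, a "#topic:" tag on each entry's first line, and blank lines between entries):

    # Hypothetical helper: append tagged entries from a scratchpad into
    # per-topic note files (e.g. springframework.md). The "#topic:" tag and
    # blank-line-separated entries are assumptions made up for this sketch.
    from pathlib import Path

    SCRATCHPAD = Path("dev.md")
    NOTES_DIR = Path("notes")

    def pull_notes():
        NOTES_DIR.mkdir(exist_ok=True)
        entries = SCRATCHPAD.read_text().split("\n\n")
        for entry in entries:
            entry = entry.strip()
            if not entry:
                continue
            first_line = entry.splitlines()[0]
            if first_line.startswith("#topic:"):
                topic = first_line.split(":", 1)[1].strip()
                with (NOTES_DIR / f"{topic}.md").open("a") as f:
                    f.write(entry + "\n\n")

    if __name__ == "__main__":
        pull_notes()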


It drives me nuts when you find a forum post and they never reported back. Or it's a terse "I figured it out."

I try really hard not to do that for internal or public forums. The odds are better than you think that you'll stumble on the same topic a few years from now.


Or it says "Google it", and that's how you ended up there.


Wisdom of the Ancients

https://xkcd.com/979/


I try to at least move the problem forward. However, does anyone have a way they make sure to close those things out and circle back? I mean, other than just doing it?

On internal tools I do do that but that's a smaller number.

Worse is stuff like car and fridge repair. I have no idea what forum I find the questions on.


I'm sad that I cannot share my experiences the same way :-( 99% of my "interesting" bugs are Xbox One/PS4 related, so I can't write a blog post about "how I found out that Sony's implementation of file read is not compatible with regular stdio" - it's hugely interesting, but this stuff is NDA'd to such an extent that I wouldn't risk writing openly about it - but I'd love to.




I've found a good way to do that is running a personal blog. In fact, I write as if the audience of my blog will be myself in the future. It's nice to help out others who face the same problem tho.


Does "Sbd" stand for something, or is that your username?


The abbreviation "sth" occurs here a lot, probably because of people who learned English with the help of dictionaries that use "sth" as an abbreviation for "something". I suspect "sbd" also comes from the same source.


Somebody


Uh, OK. Never seen that before.

"Sby" would've made way more sense. Or heaven forbid we use one more character to make the nearly obvious "sbdy."


It's a recurring theme with me, though a bit different where my name is under step 3, and I only realized that when I tried to upvote the question and couldn't.


Heh, that happened to me too... I loved it.


THANK YOU!!!

I really thought this only happened to me :-)


Have experienced this too. Haha.


SO is my notebook.


This.

Why create my own silo of documentation when I can just put the answer where I'm likely going to find it (Stack Overflow)?


This is probably the primary actual use case for comments: explain why something is done the way it is, to justify to those who come after why Chesterton's Fence [1] should apply in this case.

[1] https://wiki.lesswrong.com/wiki/Chesterton%27s_Fence
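
For illustration, the kind of "fence label" comment being described might look like this (a made-up Python snippet; the API limit and incident are invented):

    # Invented example: the comment records the "why" that the code alone can't.
    def push_chunk(chunk):
        print(f"pushing {len(chunk)} orders")  # stand-in for the real API call

    def sync_orders(batch):
        # Why chunked: the (hypothetical) upstream API silently drops batches
        # larger than 500 items, so one big call looks fine in dev and loses
        # data in prod. Don't collapse this into a single call unless you've
        # confirmed the limit is gone - this is the fence's label.
        CHUNK = 500
        batch = list(batch)
        for i in range(0, len(batch), CHUNK):
            push_chunk(batch[i:i + CHUNK])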


Yes.

In a wider sense, you also want to explain why something is NOT done, i.e. why certain other ideas don't work, or why we don't offer certain features.


Same idea should apply to laws. The why gets forgotten a half century later because the problem was solved but not well documented. Because the problem is no longer happening, they think the law must no longer be necessary.


As a counterpoint, we often end up with laws, traditions, and social mores that long outlive whatever rationale they had for them in the first place. (Assuming they had a rationale, and weren’t just based on fear and superstition).


That someone being me, not knowing why in hell I would write this monstrosity... git blame + some archeology work to get the Jira ticket number (I put issue numbers in commit messages/branch names), and from that I know why.


Yes, naturally "those who come after" often includes the future you.


Much of my life as an older dev includes balancing my relationship with my past, current, and future selves. Be kind to your future self; be compassionate of your past self. And don't forget, they really are different people ;)


Here's what happened to me (A) and a friend (B) from a former workplace of mine (an open-source project):

B: Take a look at this shit code that I found.

A: Whoah, it really is shit. Blame it so that we can see what kind of genious is behind this.

B: ...

A: Well?

B: Apparently you wrote it and I reviewed/approved it.


The "who wrote this?" mentality is a trap that's good to avoid. Get comfortable with different ways of writing something that, while they might have different tradeoffs, accomplish the same thing, and try to see past that. Understand that most code wasn't written by anybody--lines 1 and 3 were written by Alice a year ago, line 2 was written by Bob 2 years ago, and line 4 was written by Alice yesterday. `git blame` is very useful for seeing the change in context, which can give a lot of insight into why it's written that way, but usually the author isn't very useful to know, unless you're planning to ask them about it. Sometimes it's useful if you happen to know that the author isn't very familiar with something when you're wondering why they didn't use it, but try to keep in mind what the actual benefits of your way are, whether they're especially pressing, and that the other person might have written it differently because they were thinking about other concerns that you forgot.


I agree that “who wrote this” is dangerous, and git blame is a terribad name. I will say though, if you can avoid value judgements, then knowing who wrote a block of code is super valuable in a legacy code base. I’ve found that every dev I’ve ever worked with has very real strengths and weaknesses. And knowing who wrote a piece of code can drastically reduce the time it takes me to find hard bugs. It often goes something like, so and so tends to forget certain kinds of edge cases, but they never seem to screw up this kind of logic... so I bet the problem is related to... ah found the problem. But never blame someone for creating bugs, unless it’s really egregious, and then, only if you can help them with better habits going forwards.


Yes, "blame" is not a good word.

Use "git annotate" instead.


I really like this approach of using git blame; it's original thinking and highlights how much of a human component there is in development.


While lots of legacy code emerges organically the way you describe, there are in fact many people in the industry who I'd call "legacy coders." People who saw Dijkstra's "Goto considered harmful" and scoffed, "all these 'for' loops are much less readable than my 'goto loop1' solution." People who use global variables because parameter-based implementations are "needlessly complex."

Basically, not everyone works for Google.


You say that, but Golang has gotos, while being a very minimal language. Not everyone at Google is an amazing developer that's fully up to date with best practices and patterns.

Not that that needs to be said, no matter the company (if it's of any decent size.)


It’s often helpful to know who wrote a line of code, because then I can put myself into their shoes and try to figure out what was going on in their mind when they wrote it.


I’ve always found the svn alias ‘svn praise’ pretty entertaining for this reason.


git has "git annotate"


Assuming those lines were getting out of hand, I think it would be a valid question as to why they were not tidied up during the review of the line 4 addition by Alice.


It’s Friday afternoon, you are exhausted after a busy week, the business is pushing for a fix before you leave and you have committed to be home by 6pm so your spouse can go out.


`git log -p` is vastly superior to `git blame` for determining why a file is the way it is.


That gives you all the history (or all the history of a file).

Git blame quickly gives you the commits you are likely interested in, then you can use them as a starting point for your git log digging.


I've had that happen several times, but once I actually did the inverse.

I was working on an extraordinarily bad codebase, and stumbled upon some modular, reusable code that made my life way easier. I wondered who wrote this rare gem in that pile of shit, and checked `svn praise` for a change.

It was me.

It would have been the highlight of my then short career, if not for the fact that it meant there were no other semi-competent people on that project.


This is really funny. So good. I ask all the time, "Who wrote this shit??" knowing it was probably me.


Honestly, this is usually a good sign. I'll code at the edge of my current skill. Six months from now, I hope I can look at that code and consider it primitive from where I hope to be then.


That is the only time I dare ask it. When else do I dare? Someone might take it the wrong way otherwise.


*genius


These days my rule of thumb is to not try to rewrite or generalize until I or my team has tried to do more or less the same thing three times. Before that, you just don't have a good feel for what is generalizable/edge case or what is invariant/variable in the problem space.

I've definitely run into this phenomenon of independently landing on the same design twice because of the same edge cases. At some point back in 2005 or so, I was working on a collision detection component for a physics engine. This was 1-2 years before the Box2D engine, so there was a significant lack of open source options, and I was rolling my own stuff that was quite similar to Box2D (but Box2D was written much better).

One year later, I came back, looked at the code, thought "this is unreadable!", refactored it, and sure enough, stuff was falling through floors. I went back to my old code, found several comments discussing edge cases having to do with discrete time step problems, and concluded that my old code was in fact the right approach; it just didn't put comments about edge cases high enough in the call stack.


Generalizing even when you are only using your solution once can sometimes be useful.

When you know that certain information should not be used in a correct solution, a more abstract approach can make sure that information stays hidden.

A really simple example: for-loops in Python 'leak' their index variable. Stick that loop inside a local function, and then you know that you can not accidentally make use of the index variable later.

A more complicated example is deliberately coding to an interface that carefully exposes only what should be exposed. Eg using a map or filter higher-order function.
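
A quick Python sketch of both points (do_stuff is just a placeholder here):

    def do_stuff(i):
        return i * i  # placeholder work

    def leaky():
        for i in range(3):
            do_stuff(i)
        return i  # 'i' leaked out of the loop: this runs and returns 2

    def contained():
        def run():
            for i in range(3):
                do_stuff(i)
        run()
        # 'i' does not exist in this scope; using it here would raise NameError

    # Or code to an interface that never exposes the index at all:
    results = list(map(do_stuff, range(3)))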


As a technique against this, at the point that you're writing the code to begin with, have you ever tried documenting the reasons you're doing something even if it's blatantly obvious at the time?

   // rather than use the API, we parse a scrape here with a
   // regex, because even though we signed up, the API doesn't have half the fields we want.
Like, totally obvious. Except six years later, when the regex stops matching, and you are already using the API all over the code anyway, and you get to this part with the scrape and you don't get it. Are we trying to get around the API request limit? Or what is the reason for this bizarre scrape?

A lot of people would refactor by seeing if they can put the API call in there, but this wastes massive amounts of time that you could avoid if you knew the reason for this in the first place. And maybe the API still doesn't have it, so you put the API call in, try to remember the reason for the scrape, and then realize that all this is still the best way to do it; you just have to update the regex to continue to match. Work that could have been saved by a simple comment telling you the reason it looks this way.


> code that I wrote myself is hard to read

1. Found some extremely cool code, marveled how amazing it works

2. Realized I wrote it as a teenager

3. Got depressed, questioning my life decisions


The best feeling for me is when I come across old code and then re-write it to make it [simpler | faster | clearer | better] .. It is tangible proof to me that I have improved in my craft.


I usually do this same thing. But not before mentally cursing out the programmer that wrote this poorly documented spaghetti code... After which I realize it was me.


Even well factored code looks like spaghetti at a casual read in a lot of frameworks/cases, and makes sense once you've swapped all that info back in.


>but with better comments

Communication, especially with your future self, is an important skill.


Code twice: once to understand, once to solve.

As in "42", the first bit is more difficult.


And each time this happens, you get better about writing readable comments up front describing edge cases and difficulties so that future self can avoid steps 1 - 6 with a head start on refactor ideas / feasibility.


> git reset --hard HEAD

story of my life


 
> git stash # i might need this one day

And I never need it.


I've started saving my stashes to branches instead.. adding a _ on the front of the branch name to remind me to delete the branch at some point


Out of curiosity, do you ever actually delete the branches? I would absolutely just end up with a number of _-prefixed branches on all my projects.


I don't allow myself to have more than 1 active stash at any given point on a project. You quickly learn to delete with no regret some code you wrote !


Better than the other way round.


> Re-realize the edge-cases and difficulties

I think comments can help avoid such scenarios.


This isn't quite the same situation but it reminds me a little bit of xkcd.com/1421


Oh this is such a good comment!

> Legacy code that I wrote myself is hard to read.

Sometimes I don’t even recognize me as the author for a while. Realizing I’m reading something I wrote and can’t understand it without studying carefully has been rather surprising and reminds me of the old Kernighan quote “Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?”

My goals used to be to write code that looked and felt cool to myself and others, to add features in clever ways with as little change to the function signatures and structure as possible so as to not disturb the architecture. While keeping changes small is a good goal to balance, it’s always possible to be too small and add unnamed concepts and fail to restructure around new concepts when you should. Do that a few times in a row (which I have done) and you end up with architectural spaghetti code that might look clean and modular at first glance but becomes a maintenance nightmare. My goals are now:

- to make code so easy to read it’s boring, because if it looks interesting, it’s probably too clever

- and to identify and name all the unnamed concepts and refactor often to accommodate new features.


> the old Kernighan quote “Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?”

This quote can be interpreted in an intelligence-positive way, to encourage you to learn by writing the cleverest possible code. Then, when you get to debug it, you will be forced to improve your skills. This interpretation is called Kernighan's lever [1] and it is very beautiful. The alternative is a life of boredom where you don't learn anything new.

[1] https://www.linusakesson.net/programming/kernighans-lever/in...


”Prod is on fire and nobody can figure out your burrito pasta”

”Isn’t this a wonderful learning opportunity for all of us”

Debugging time is never a good time to start honing new skills.


“Burrito pasta” is a great term! Is that something that you just came up with?

It sounds like spaghetti code but even worse because it’s covered with a messy layer of beans and guacamole and wrapped in layers of tortilla.


Just came up with it. The complexity of monads and other post-grad level CS stuff mixed with good old spaghetti code.

Next up: some doofus who learned higher-order Haskell in middle school and takes offense at me calling it complicated.

PS: https://blog.plover.com/prog/burritos.html


Of course, Kernighan's lever is most useful on individual projects. In an industrial context there are often other constraints. Nobody is saying that you have to always write the most clever code you can. But if you never do it, you will improve your skills very slowly, if at all.


Writing simple code is difficult, more difficult than writing clever code, and is a better skill to grow.


I would hope at least most of the debugging happened before the code made it to production.


I was just pondering this very issue. I think one of the reasons some teams end up with spaghetti code is because the developers are smart enough that they always muddle through it in the end. So when it comes to talking about technical debt, there's never a "it's too complicated," there's just "you're not trying hard enough." In that sense, it would help if the developers had a lower pain tolerance, because it would drive us to actually work hard to improve the ergonomics of our architectures.


> The alternative is a life of boredom where you don't learn anything new.

Now that's an impressive logical leap.


False dichotomy, anyone? I learn more from trying new ideas than debugging complex code.


I mean

When I try something new, usually what I learn is not that my assumptions were correct, but the more interesting part is where my assumptions were wrong. This usually only becomes apparent either when writing the code, or in some of the more interesting cases, when debugging.


> This quote can be interpreted in an intelligence-positive way, to encourage you to learn by writing the cleverest possible code.

I would claim that any clever final result can be accomplished without any need for intermediate cleverness. Things that look clever to other people but are intuitive to you might be OK. Things that look clever to you should be avoided when possible.


That is absolutely horrific. I don't want to spend my energy learning the special skills needed to deal with things I should have been smart enough to prevent from ever existing.


When seniors (and above) complain about the low quality of junior code, I tell them to go look at their own code from 6 months ago.

It's an endemic issue, core to the problem of poor software.

> My goals used to be to write code that looked and felt cool to myself and others, ... My goals are now to: - make code so easy to read it’s boring

Same for me, and I'm sure same to many of the folks that have advanced past senior level. The problem of other people, other senior people, needing to read, understand, and significantly modify your code is one of the reasons why you can't really advance past senior in a startup. There aren't enough experienced folks that you have to write "up to". Of course the other part of being post-senior is the ability to scale your expertise, but now I'm digressing quite far. Then once you're that advanced, you don't want to take a pay and scope cut to work at a startup. This is a major contributor to startup technical debt accumulation, one that can't readily be solved.

I still do write "cool code", for code that I will only use myself that doesn't go into production. But for all others, I write easy-to-read code. And I review code with that in mind. When comments have a typo that alters the meaning of the comment, I insist that it be fixed. Juniors hate me for being too picky. (And I hate them for being too sloppy.)

Remember, the most important part of your job as senior+ is not what you do yourself, it's how you guide others.


> seniors [...] complain about the low quality of junior code [...] their own code from 6 months ago.

What classes as 'senior' that their own coding style has changed that much in 6 months? O.o

> I still do write "cool code", for code that I will only use myself that doesn't go into production. But for all others, I write easy-to-read code.

Good dev. Remember, you are not your audience. Unless you're just writing play-code in which case go nuts and be as 'clever' as you want. :D

> When comments have a typo that alter the meaning of the comment, I insist that be fixed. Juniors hate me for being too picky.

Then they're wrong, and tbh that's not something I'd accept more than once from an employee.


> > seniors [...] complain about the low quality of junior code [...] their own code from 6 months ago.

> What classes as 'senior' that their own coding style has changed that much in 6 months? O.o

Your code style can remain exactly the same, and it would still happen. The reason is not that you would write the code differently today, but that you forgot the issues and edge cases that made you write it like that then, and that at the time you focused too much on the writing, not on the reading.


Here we get to the real heart of the problem. Engineers of all levels seem to default into “this code is shit - I could do better” instead of having some empathy and considering they don’t have the whole picture.

But the wrong lesson seems to be taken away from this. It’s not that all code is shit - it’s that you aren’t good at reading the code yet if you can’t see all the little hairs and bug fixes.


> Engineers of all levels seem to default into “this code is shit - I could do better” instead of having some empathy and considering they don’t have the whole picture.

Several projects I've come in to - yeah, the code was shit, and yeah, I could do better. And I've done better. By asking questions, documenting the answers, writing sample data, and writing tests.

I get that code can be sloppy, have edge cases, etc. Took over a project that was halfway migrated from CI to Laravel. The migrator had close to a year on this 'migration'. We had not one unit test case, no migrations, no seeders, no integration tests when we took over. What we had was piles of half-baked uncommented model code, over-reliance on magic methods, Laravel/CI models with the same names and method names often being used in the same request but with unintentionally dissimilar behavior.

The 'code' isn't the (whole) problem. All the other stuff around the code that provides the context is the problem. We had ~ 20 tickets in a tracking system with vague notes, and were given 5 email threads of discussion about functionality questions, none with actual resolution.

> it’s that you aren’t good at reading the code yet if you can’t see all the little hairs and bug fixes.

Or... the person writing it before you simply didn't know how to write/document.

Sometimes - really, honestly - you can actually "do it better" because... really, honestly, sometimes you are actually better - more competent, more experienced, more diligent, more professional - than the person who left the code you're working on. Not always, but not never.


Oh absolutely code can be shit. But I’ve also seen code that’s made the company millions of dollars and run flawlessly for 20 years be called “shit” because it doesn’t look like modern code. The replacement naturally consumes many times more resources and has bugs that were long fixed in the old code.

In my experience the latter case is far more common. But I suppose experiences will differ dramatically depending on what you work on.


I'd meant to add that part as well - code can be bad and still work, and work OK. The problems only come in when it needs to be changed.

I've advised a number of folks to care less about the code style, and focus more on making it at least understandable. I don't particularly care if you're using a factory pattern or not, but please do doc/comment someplace what the expected behavior for your 'backordering' logic is. I can fix things later if I understand what was intended, vs just what I have to guess at later.

Have worked on some projects in the last few years that are 'bad' from code perspective. One is bad, but the company as a whole operates... decently, and is improving, and more importantly, is providing a lot of value to their customers. The customers tolerate some bugs now and then because a) they still get value and b) the issues are addressed. There's a full process for changes/fixes/rollouts, and the team overall understands that there's tech debt to deal with. Some folks understand that they're still paying off tech debt from 3-4 years ago, and understand those decisions were bad, and try to avoid those same mistakes.

Hundreds of integration and unit tests (growing every week) help grow the confidence levels, and remove barriers to smaller refactoring efforts, because there's a reasonable way to detect the impact of changes. It's not perfect, but that's also understood and accepted.

Another one is the CI/Laravel situation from above. Small company, no real 'tech' people on staff - it's all 'outsourced'. They're frustrated because they see other companies progressing faster than they do, and everything seems to take 5x longer than they expect. It's because the code is bad (on many levels). If we were not trying to make any changes, and it just ran in its current state, it would still continue to make money for them, but they want new features, which requires actually understanding how everything fits together. It took two people several months to have a reasonable understanding of how all the running parts fit together (while also trying to add new features/etc), and finally get a small number of unit tests in place.


...you don't remember what you were doing 6 months ago? I'm sorry, that's still not a good excuse. Maybe you don't remember every tiny facet but hopefully you'd have a general idea. And above all that, if you're a senior dev then you'd understand that "this code is unfamiliar" does NOT mean "this code is bad."

I mean really, "this code is unfamiliar therefore it's messy therefore we should rewrite it" is a flaming red flag. Chesterton's fence, people!


One of my favorite things about comments in programming is that you can stick a label on that fence explaining why it's there. One of my biggest frustrations is that so many people don't bother, even when it would literally be just one little sentence.


> One of my favorite things about comments in programming is that you can stick a label on that fence explaining why it's there.

This is routine for ordinary cultural practices as well. However, the common explanation for a given cultural practice usually has nothing to do with the actual reasons it might be a good idea.


Unknown unknowns.

My coding style hasn't really changed in years - frankly, I don't write a lot of code, I do other things. But I often run into situations where I'm irritated at my own bad code from months earlier, when the shortcomings of the code are actually driven by things I know now that I didn't know then.


Review process surely mitigates differences between "junior" and "senior" code?


This also would require that devs who are on the PR reviews ACTUALLY look at the code. In about every job I've worked at in my short career, there are people I work with that I don't trust to actually review my code. I've come to accept that. I instead make sure the people I know will do a decent job are on the PR. Some people will just look at the diff, and an even smaller few will actually pull the branch locally so they can see the entire context. That being said, I always do my best to review other people's code regardless of whether or not they will review mine.


Not really. It just ensures no really atrocious code makes it into master.

Of course you can stall any junior merge request until it looks like senior code, but at that point you might as well write it yourself.


I don't know if it's stalling so much as taking the time to request changes / pair up and teach.

If bad code is getting merged, that's tech debt / time someone else is going to have to spend anyway, plus the time needed to identify the issue and triage down the line. I would think it's a better investment to use that time up front and help the junior level up too.


> Sometimes I don’t even recognize me as the author for a while.

This happened to me just yesterday! I was helping a co-worker with a problem, and I noticed some redundant code in the same function, so I told him he could simplify it while he was there. His response was, "...but you wrote that". (And as it turns out, only a few days prior!)


This is why I will point out issues in code only in the form of stating potential improvements as best as I can. I especially try to avoid hating on the author - it might have been me, or the boss who's standing nearby...

In fact, I often consciously refrain from using blame on an "interesting" piece of code because it doesn't matter who wrote it. Looking it up would just satisfy idle curiosity, but yield no insight into how to improve the code. In fact, I think that blame should just list the commits, but hide the authors by default. "What changed?" and "how?" are always much more pressing questions than "who?".


My experience varies a lot. Maybe it's just that my colleagues write better commit messages, but blame (and looking up the PR and code review) is often a good method of understanding why the code is the way it is.


Yes, but blame puts the focus too much on the authors instead of the changes themselves. I also use it to understand how code evolved over time, but only in circumstances where I suspect the history holds important clues.

Pinning bad work on a person does not make progress. Fixing bad code does.


I am coming back from vacation next week and have 2 PRs to finish (couldn’t merge them before because they are high risk). I am anticipating a lot of pain just to pick them up where I left off... I would rather do my tax returns instead lol


It always happens to me while I approve my own pull requests to master.. :’(


> Understood as senior: If a junior programmer tells me they found a system-level bug, I won't believe them and will tell them to go figure out what's wrong with their code.

My first job was in finance and I remember one time that I had a glitch on a complicated Excel spreadsheet. I checked it over, checked again and checked again until I finally concluded that the bug was in Excel itself. So I go to my boss and tell him that the data isn't ready because there's a bug in Excel. I was laughed at by the entire team. They agreed to give me $1000 right then and there if it was really a bug, but if not, I had to admit the shame. Well... of course it wasn't a bug in Excel, I just made a careless, albeit hard-to-find, mistake.

Lesson learned. If millions of people use something, that doesn't mean it doesn't have bugs. But it does mean that you probably aren't going to find those bugs unless you are doing something strange.

Oddly, I did once find an actual bug with indexes in Postgres. Because of my earlier experience, I spent a lot of time assuming it was me before I finally isolated it as being a bug in Postgres itself. I submitted a bug report and it was patched within a day. But still, 99% of the time, it's me.


It's unusual, but it does happen. My example is actually from hardware.

I was writing a Windows NT (what's that?) device driver for a communications board we developed for one of our products. I kept running into a problem and struggled with it for days. I was experienced enough to understand that the error was probably mine and the problem was so basic that if it was in the chip pretty much everyone who tried to use it in that mode would be screaming about it not working. The chip was an 8-channel UART (serial converter) and IIRC, the bug was in one of the FIFO interrupt modes.

Finally I gave up, got the phone number for the chip vendor's (I think it was Texas Instruments) local Field Applications Engineer and explained the problem to him. "Oh, yeah, we know about that bug, there's a new version of the chip about to be released. You guys are actually only the second customer to sample that chip. Lucky we found the problem before it went into production!"


Junior programmers find an amazing amount of system-level issues.

One noob came to me with a serious codegen bug in GCC, where even with `-O0` it would fail to correctly run a trivial for loop. Another found a huge security hole in `sudo` that gave everyone unrestricted access. My favorite was one who asked if the JDK standard library had any known bugs processing the letter "g".

They all turned out to be user error, if you can believe it.


There was a junior in one of the early companies I worked at who claimed to have found a JVM bug. He insisted JVM handled comparison with null incorrectly. He had a String variable that was null, then he guarded against NPE with "if (str == null) return -1" followed by code that dereferenced str. The code looked innocent at first glance, but somehow it failed with NPE. Finally it turned out the string was "null" not null. :D

But I also have a good counter-example story:

One day I found an HTTP RFC violation in one of the very popular OSS HTTP client libraries. I filed a bug report with a detailed reproduction. It was closed immediately, with "works as designed, you misunderstood HTTP". Then we had a debate in the ticket comments for many days and I couldn't convince them of the right interpretation of HTTP (I admit, the text is not easy sometimes). Finally I posted a message on the HTTP mailing list and Roy Fielding confirmed I was right. They reopened and fixed it. I must say it is a really hard thing to argue with somebody with an edge in experience and not come across as arrogant.

In particular - when somebody responds with "I have more experience / I've been doing it for 20 years, and you say I'm wrong?". How to best handle such cases?


>"I've been doing it for 20 years, and you say I'm wrong?" How to best handle such cases?

I wish I knew. I try to limit the discussion to the purely technical, or to barely acknowledge it as in "sure, but RFC123 says X and Y implements it that way as shown in Z".

Of course, that goes for when I'm the authority as well. I don't care who is correct, I care about what.


Haha, that last one is hilarious!

Even though I'm still juniorish, I still run into issues like this that stump me. But then occasionally you do find bugs with existing software which keeps you second guessing everything. Usually those bugs come from using two things in conjunction that haven't been well tested together.

Also sometimes, you find a bug that isn't accepted by the software vendor/owning team as a bug, because it has some sort of obscure workaround that would take you a week of tinkering to figure out. Those "aren't bugs", but yeah, they are bugs. Software vendors that also sell consulting and related services love to pull shit like that.


I actually found an OS bug in AIX on my first real I-designed-this code project. Worse, the bug was discovered only when the code went into production - it behaved differently on the test servers than on the production servers! The bug caused mmap() system calls to randomly overwrite certain pages in memory with NULLs. Yeah, that was fun. And it was caused by the order in which patches had been applied, which was why it behaved differently in production than in dev/test.

If some second year programmer told me that nonsense, I'd make them go back and either find their bug, or write a test program that exercised the bug in isolation (which was what I did).


Were you finally believed?


Eventually. I had to write a test program, as small and neat as possible, that demonstrated the problem on both test and prod systems. Then I got permission to send it to IBM (with my binary, my source code, and my logs). They eventually determined it was caused by the order patches had been applied on the servers - they had the same patches, just applied in a different order.


I just say, even if it's a bug, there's no guarantee it'll be fixed on a timeline that's agreeable to our deadlines. Find a workaround.


You can't tease us like that without explaining what the "bug" in Excel was.


As I said, it wasn't a bug in the end :) The issue was with my formulas, but I don't recall exactly what I did wrong. It was a tiny typo I made in a sheet with thousands of formulas, if I remember correctly.


> Understood as senior: Legacy code that I wrote myself is hard to read.

For me, any code that I wrote more than 3 weeks ago, I've forgotten. That's why I comment the hell out of my code. The younger programmers have routinely told me "commented code means the code isn't very good." I chuckle and ignore them and wait for them to hit their mid-30s and older.


Also related:

> Understood as senior: Communication skills matter most.

The reason people dismiss comments is usually that they or others around them aren't good at writing useful comments.

Especially when there are linter rules requiring comments, you'll have something like def open_thing(x, y): and a comment, "defines a function that opens thing."

Yes, those are pointless. Often what's going on is a person is dumping their stream of consciousness into the comment field.

It takes practice to understand what a reader needs to know. You have to actually practice reading comments and thinking things through (another reason code review is important in your team) to get good at understanding what you should write down.

All that said, if you truly hate commenting, at least build a habit of descriptive naming and exploiting your type system as fully as possible.
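
A small contrast in Python to make the point concrete (the names and the "why" detail are invented):

    from typing import TextIO

    # Noise: restates the signature and teaches the reader nothing.
    def open_thing(x, y):
        """Defines a function that opens thing."""
        ...

    # Descriptive names and type hints carry the "what"; the comment carries a
    # "why" the reader can't recover from the code (the incident is made up).
    def open_audit_log(path: str, append: bool = True) -> TextIO:
        # Append mode on purpose: several workers share this file, and opening
        # it with "w" once truncated a day of records.
        return open(path, "a" if append else "r")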


I'm actually thinking more of social skills and written language, not programming. I said something about this in a different thread earlier this week, and someone was baffled as to why I thought being able to write and sell was important, since you just wind up doing what your boss tells you to do anyway.

As opposed to telling the boss what you're going to do.


> Especially when there are linter rules requiring comments you'll have something like def open_thing(x, y): and a comment, "defines a function that opens thing."

I actually think those comments are useful in two ways:

1. The process of writing a comment will often help me rename the function/variables so e.g. “defines a function that opens thing” becomes something like “opens can_of_worms with the given instrument and restraints” for the method definition open_can_of_worms(instrument, restraints)

2. You can use variable/return value comments to further restrict the domain of values, e.g. non-null, positive or in the range 1-42 (arguably it would be better to express some of these in the type system, but that is a different discussion). These comments show up in my IDE when I try to call the code in a remote location, so I don’t have to guess or remember the constraints.
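
A sketch of what that can look like, reusing the names from point 1 (the constraints in the docstring are invented for illustration):

    def open_can_of_worms(instrument, restraints):
        """Opens can_of_worms with the given instrument and restraints.

        instrument: non-null, assumed to already be sterilized.
        restraints: count in the range 1-42; anything else and the worms get loose.
        """
        ...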

(edit formatting)


Comments rot, and details about what is going on are better incorporated using good variable names and functions that abstract aspects of a task from their implementation.

While I don't like comments that try to explain what code is doing (write better code), comments are very useful for annotating WHY code does what it does. They're also very useful for adding documentation references, code use gotchas and things that need to be addressed in the future.


Comments are critical for explaining the code that isn’t there. False starts, obvious optimizations that don’t actually work, etc.


I've heard these sentiments very frequently from junior programmers, and almost never from senior programmers.


I'm guessing most of the senior programmers you've interacted with are maintaining established software with low churn and high availability requirements.

I hear comment love very frequently from enterprise engineers working with 10+ year old Java codebases, but very infrequently from hackers working with young code bases in more concise languages (complex algorithms aside).


Complete opposite for me.


I think it's better to document the context/intention/business reasons and let the code speak for itself.


This assumes everybody reading the code at the company has the skill to read it.

Where I work, I can expect my Scala code to be read by people who can barely write a line of it, and I routinely read TypeScript and Go code while being totally inept at those.

I bless comments that are there to help the reader read, and I leave those behind too where there's some specialists-only syntax.


Early 30s here and I've realised that comments are worse than useless most of the time. Nothing enforces that the comment is correct, so a significant proportion of comments will be false, so no comments can be relied upon.

Descriptive types, clear tests, and sensible variable names are much more effective strategies for making code understandable. Comments should be a last-resort stopgap.


Honestly, I would add this whole comment to the list of "absolute truths" juniors unlearn as they get more experience. And I would also point to the original post's point that types of experience matter - just because you're early 30s doesn't necessarily mean you've had the right experience. If you still believe this, then - to be brutally honest - I would question the quality of the teams you've worked with.

Comments don't have to decay. Discipline is important. Culture is important. And yes, these have to be intentionally set and upheld.

If you set a culture of discipline around maintaining the comments with the code, and ensuring they are updated, then it's really not that hard to do it. If the developer doesn't remember to do it when making changes, then the code reviewer can catch it and enforce it.

And nothing really substitutes for an english language explanation of the "why" and the intention of a particular section of code. A good comment explaining why something was done a particular way, or what the code was intended to accomplish, can save hours of walking up and down call stacks. It's also something that cannot be communicated through unit tests, or even integration tests, a lot of the time. Those communicate the "what" and the "how" - not the "why".


> Comments don't have to decay. Discipline is important. Culture is important. And yes, these have to be intentionally set and upheld.

These are things that are often completely outside your control.

> If you set a culture...

At most shops, you don't get to set the culture. About the only time you do is if you're a founder or early developer. Otherwise you have to fit into the existing culture, or attempt to find a company that better reflects what you want. Sure, it's not hopeless; you can likely influence to some extent, but your influence is usually limited.

> And nothing really substitutes for an english language explanation of the "why" and the intention of a particular section of code.

I do agree with this. Any code that can't be written in a self-documenting way absolutely must be commented. However, if you find the need to do this often, it might be a sign that you should focus more on code clarity and less on (likely premature) optimization, or perhaps consider if you're really using the right tool (language, framework, etc.) for the job at hand.

I will admit that I probably comment less than I should, but I feel like the average is way too verbose, and that enough comments are out of date and incorrect (often in very subtle ways) that it adds significantly to my overhead when trying to understand someone else's (or even my own) code.


> If you still believe this, then - to be brutally honest - I would question the quality of the teams you've worked with.

To be equally brutally honest: right back at you. I would trust the quality of those I've worked with over those who believe in comments, any day of the week.

My point was simply that I started as a believer in comments when I was more junior, and became anti-comment through experience. So even if we believe senior people are more likely to be right than junior people (which I very much doubt, frankly), that tells us little about whether comments are good or not.

> If you set a culture of discipline around maintaining the comments with the code, and ensuring they are updated, then it's really not that hard to do it.

Human programmers have a limited discipline budget, and if you're spending it on keeping the comments up to date then you're not spending it on other things. Yes, you can use manual effort to keep code explanations up to date, just as you can use manual effort to ensure that you don't use memory after it's freed, or that your code is formatted consistently, or that the tests were run before a PR is merged. But you're better off automating those things and saving your manual effort for the things that can't be automated.

> And nothing really substitutes for an english language explanation of the "why" and the intention of a particular section of code.

Disagree; code can be much more precise and clear than English, that's its great advantage. As the saying goes, the code is for humans to understand, and only incidentally for the computer to execute. The whole point of coding declaratively is that the "why" is front and center and the "what"/"how" follows from that.


> The whole point of coding declaratively is that the "why" is front and center and the "what"/"how" follows from that.

I've been writing Lisp off and on since late last century, so I know full well the value of declarative code. Preaching to the choir, there! But I can also report that every real program I've ever written (i.e., that had at least one user) needed significant non-declarative parts.

And for those non-declarative parts, you need the "why". Why is this call before that one? Why is this system call used? Why is this constant being passed to the call? And so on. (It's because when you run it on OS ${a} version ${b}, there's a bug in the ${c} library that requires us to force the initialization of the ${d} subsystem before it can ... true story.)

The declarative parts of your program don't require "why" comments, and that's great, but a corollary to that is the parts that can be written in a declarative style aren't the ones that require a "why". Building a DOM structure manually takes a lot of lines of code, but it's all still quite simple, and requires no explanation. Writing a trampoline necessitates a bunch of "why"s, and there's no way to just substitute a declaration for it (without pushing the whole mess somewhere else).

Code is first for humans to understand, and that requires comments, because humans speak English (or some other natural language), and no programming language is yet powerful enough to efficiently (in time or space) express everything that English can.


> Writing a trampoline necessitates a bunch of "why"s, and there's no way to just substitute a declaration for it (without pushing the whole mess somewhere else).

I've got a trampoline in my codebase to avoid a stack overflow. The why is the test that a certain repeated operation doesn't stack overflow.

There are a number of places where it could've been implemented with one technique or another, but there's no particular reason that the approach I've taken should be better or worse than one of the other options. If there was, I'd want to formalise that (e.g. if I'd chosen one approach because it performed better than another, I'd want a benchmark test that actually checked that).
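
For what it's worth, a minimal Python sketch of that arrangement (countdown is a stand-in; the point is that the test, rather than a comment, records why the trampoline exists):

    import sys

    def trampoline(fn, *args):
        # Keep calling as long as the function hands back another thunk.
        result = fn(*args)
        while callable(result):
            result = result()
        return result

    def countdown(n):
        # Returns a thunk instead of recursing, so stack depth stays constant.
        if n == 0:
            return "done"
        return lambda: countdown(n - 1)

    def test_deep_repetition_does_not_overflow():
        depth = sys.getrecursionlimit() * 10  # far beyond the normal stack limit
        assert trampoline(countdown, depth) == "done"

    test_deep_repetition_does_not_overflow()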


> code can be much more precise and clear than English,

This doesn't address the parent's criticism. Clear, precise code only tells you what the computer is doing. What it can never tell you is why the computer needs to do it exactly like that.

Software breaks in weird ways when pushed to the limits. The fixes for these edge cases are not always obvious and may not be something that can be replicated with testing.

Without comments, some cowboy can come along and think, "it's flushing a buffer here? that's dumb. <delete>" The change gets put in, passes testing, spends four months in production, when a bug report comes in from a customer complaining about an issue that they had three years ago.

Now someone has to spend a bunch of time figuring out the problem, QAing the fix, then getting it back into production. It's thousands of dollars that the company could have saved if only there was a comment about why that buffer flush was there.

You might think this is some crazy edge case, but it's not.


This is my problem with this argument as well. English and other spoken languages seem first and foremost about conveying ideas. Programming languages seem first and foremost about conveying instructions to computers that don't comprehend "ideas".

Reconstructing the original idea or meaning can often involve far more context than local variable and function naming can provide.


I don't understand what sort of environment you work in where you don't encounter situations where comments could add clarity to the code.

Do you never see code that has global side effects? Or that is written a particular way to take advantage of the hardware that it is running on? Or any other of the many ways that the intention and meaning of a piece of code within the codebase it exists in can be not immediately obvious?


>Do you never see code that has global side effects?

The answer for modern languages and frameworks is "write pure functions."

>Or that is written a particular way to take advantage of the hardware that it is running on?

Move to service/helper/utility class for that particular hardware or with a name that clarifies it's for that particular hardware.

I find comments to be necessary very rarely. At the moment I'm looking at a codebase where they are made to cover up for a lack of desire to think.


> The whole point of coding declaratively is that the "why" is front and center and the "what"/"how" follows from that.

Declarative means that we specify the "what", and the machine deduces the "how".

There is no room for "why", because our present-day machines do not require motivating argumentation in order to do our bidding. They either need the "what" or the "how", or whatever blend of the two that we find convenient.

We need the "why" in the documentation. Such as: why is this program written in the first place? The "why" is not in the code. When code is clear, it's just easy to understand its "what" or "how", never the "why". Unclear code obscures the "how" or "what", but clear code doesn't reveal "why".

Every "how" breaks down into smaller steps. Of course, those smaller steps have a "why" related to their role in relation to the other steps; that's not the "why" that I'm talking about here. Of course we know why we increment a loop counter when walking though an array: to get to the next element. If you start commenting that kind of why, you will soon be flogged by your team mates.


> code can be much more precise and clear than English

Agreed, code is much more precise than English. But precision is not the same thing as being meaningful, and without context, precision is useless. Code generally sucks at context, which is why every programming language worth its salt has comments.


You are missing the point entirely. No matter how clear your code is, it is only expressing the “what”, not the why. I can see that you’re using a binary tree, but why a binary tree and not a hash table? Why a decision forest and not a linear regression? Why a regularization penalty of 0.1 and not 0.2? Why cache A but not B? Why create an object at startup instead of generating it on the fly? You need comments to explain these decisions.


If there's an important difference (e.g. a performance requirement that a hash table wouldn't meet), I'd have a test that checks that. If not, it's probably an arbitrary choice that doesn't matter. If the decision is worth recording, it's worth recording properly.
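
A hedged sketch of recording such a decision as a test instead of a comment (the workload and the latency budget are invented; in a real suite this would live alongside the other tests):

    import time

    def build_index(items):
        # The choice of structure is what the test below protects: if someone
        # swaps this for something slower, the assertion fails instead of a
        # stale comment quietly lying about the tradeoff.
        return {key: value for key, value in items}

    def test_lookups_meet_latency_budget():
        index = build_index((i, i * i) for i in range(1_000_000))
        start = time.perf_counter()
        for key in range(0, 1_000_000, 1_000):
            _ = index[key]
        elapsed = time.perf_counter() - start
        assert elapsed < 0.05, f"lookups too slow: {elapsed:.4f}s"

    test_lookups_meet_latency_budget()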


For the cases you mention a combination of package name, class name and method / function name could serve as a comment with the benefit of making sure any place referencing the code also "documents" why something is happening (tests for example, or callers of your methods).

This is not always possible, and in those cases I also strongly prefer well written, concise comments explaining what is going on and why, ideally with a link to a reference/source which explains the background.

Some examples of method names:

- generateTreeToAllowPartitioningOfItems(...)

- getMatchingRegularizationPenaltyForSpecialCaseX(...)

- getShortTermRedisProxyCache(...)

- createNewPrefilledTemplateObjectForXYZ(...)

I hope this doesn't sound snarky. But more often than not comments do date in my experience (and they don't handle refactoring well), while (compiler-known) names are handled as 1st class citizens by the current IDEs and thus are corrected and updated everywhere.

In code reviews we usually aim for "low comment" density; the implementer shouldn't have to explain what he was doing or why, the reviewer has to understand just from the code (as it would happen if she/he has to maintain it later on). The review is "not good" or even fails if the reviewer doesn't understand why and what is happening. The outcome will in most cases be an improved design, not more comments.


But those method names are still hard to relate to the business cases your customer requested. So you implemented something for some reason; your code and methods tell you what you implemented but not why... Why did you use a tree? I read your top-level code and think: dude, that would have been so much simpler and faster using a Wobble instead of a Tree! Then I try that and it turns out it has to be a Tree; you went through the same process, did not tell me why, and I lost a day retrying. For instance.

(assuming, which you should always assume imho, that you left the company many years ago when this event occurs)


If I wrote an explanation then what would check that explanation? Maybe I write "we use a Tree instead of a Wobble because Wobble doesn't support the APIs we need". But then maybe when you come to work on it, it turns out that the current version of Wobble does support those APIs. Maybe it's actually better at them than Tree. Whereas if I have a unit test around Tree that exercises the kind operations that we need to do, then you can just try dropping Wobble in there and see for yourself whether it works or not.


> code can be much more precise and clear than English. [my emphasis.]

Since a common feature of most higher-level languages is that they co-opt natural language terms (and also mathematical notation, which is an option in commenting) with the intent of increasing clarity, can you show us an example where code is clearer than natural language in explaining both what it is doing and why?

If you are working in something like APL, I can see there might be a case...

I am not so much interested in the precision issue, as both code and language can be very precisely wrong or right.


> Human programmers have a limited discipline budget

You think humans are bad, try working with Lobster programmers, they get work done, but their coding style is just horrible (they use tabs).


>If the developer doesn't remember to do it when making changes, then the code reviewer can catch it and enforce it.

They can. Just after correcting all the buffer overflows and before fixing all the use-after-frees. Then the comments can be consumed by all the other teams with the discipline and culture to avoid writing bugs for all time.


Nothing enforces that the code is correct, either, not even tests, as tests are also code, plus there is the utter infeasibility of exhaustive testing.

It does not follow from the possibility of error that a "significant" proportion of comments will necessarily be false. In my experience, that is most likely when an organization has commenting as a mandatory part of its process, which inevitably leads to most comments being trite, and some wrong. Outside of that, comments have not been a problem, mainly because they are almost non-existent, even when the code could benefit from them.


> Comments should be a last-resort stopgap

I'd add: comments should say _why_ this crazy method is here. You can always parse the code to figure out what it does. In a few months/years (depending on your memory) you will not remember _why_ this code was put in place.


Comments rot, but so does everything else such as type names, tests, variable names, field names, designs, architectures, etc.


Yep - it doesn't help much with a method that's named EmptyCacheToPreventBlugblagCongestion() if external circumstances have stopped the blugblag from ever congesting any longer. So discipline in maintaining the intent of the code is required even if you never write a single comment.


Tests, types and field names get checked on build.


If someone adds functionality to a type so the name isn't really applicable anymore I don't think the build catches that.


As soon as you form something that should conform to the type (according to its name) and find that it doesn't, you notice the problem, and then you fix it once and for all (because the type is defined in one place). So yes, you can have misleading type names in the codebase, but there's a natural pressure to correct them, in a way that there largely isn't for misleading comments.


Nothing enforces correct variable names and descriptive types either, why would you expect those to be more consistently accurate than comments?


They're amenable to automated refactoring, and if you change a type or variable name in one place you're forced to update it everywhere else that uses the same thing.


Comments, plus `git log -p <file>` to see what the comment originally referred to, are pretty useful.

My personal favorite comment style is to wrap a chunk of code in `#{` `#}` blocks and add a general comment of what that chunk of code is accomplishing. Sort of like an inline method.
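If I'm picturing it right, that style looks roughly like this (the data and names are made up):

```python
from datetime import datetime

raw_rows = [
    "alice,2018-01-29T10:00:00,3.5",
    "",
    "bob,2018-01-29T10:05:00,7.0",
]

#{ Normalize the raw rows into the shape the report step expects:
#  skip blank lines, parse timestamps, and key everything by user id.
report_input = {}
for line in raw_rows:
    if not line.strip():
        continue
    user, ts, value = line.split(",")
    report_input[user] = (datetime.fromisoformat(ts), float(value))
#}
```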


All those comments of "blah blah blah gets or sets a value" on my class properties, why do we add all that overhead to our projects to the point we have to use tools like GhostDoc to write our worthless comments? This industry is on crack sometimes.

I simply like comments for adding things like... so and so told me to do this... or simply documenting weird behavior or weird business logic.


(As others have pointed out here and everywhere) comments are NOT for making code understandable. The things you mentioned are for that.

Comments are for things like

1) explaining why this thing that looks wrong or dumb really isn't, and

2) explaining what a method/function/class/whatever is supposed to do, because code can be correct, understandable, and still wrong.


I'm 29 and can barely remember the code I wrote last week.

Comments don't get updated when the requirements for the code change, and more often than not they end up being misleading.

The only things worth commenting are actual libraries that are maintained, and 'magic values'.


> For me, any code that I wrote more than 3 weeks, I forgot. That's why I comment the hell out of my code.

I couldn't agree more.

A while back I got in the habit of trying to write code for "me, six months from now". So, if I think I can explain it to "future me", then I'm happy. Ever since I started doing that, I've been much happier with "past me"'s code.

In addition to comments (particularly around hard to grok code), I've also started trying to be as consistent as possible in code structure and naming schemes. This also helps a lot.


I write notes to my future self all the time.

Meta comment: This is bullshit and has problems with this that and the other thing. But to fix that I'd have to refactor this other module and I'm not going to do that now. And the other thing I'm drawing a blank.

Meta comment2: I don't think the code needs to do this here. But I can't prove it right now.

Meta comment3: We absolutely need to do this exactly as it is. Because otherwise bad thing happens, which you probably won't see until it hits production.

Meta comment4: This function name isn't correct. But I can't think of a name that is better.


> Meta comment: This is bullshit

I used to worry about putting emotional blurbs in comments or commit messages, but I'm starting to see their value. A commit that starts "This ugly writing is to appease Roger, the editor obsessed with AP style" lets me know three things:

- Who asked for the change

- The source of the content

- The fact I disagree but still do it, so future me doesn't pick fights present me avoided

Of course, it could also mean "TODO: revert this commit the minute Roger retires."


That's less about emotion and more about context, which absolutely is important to capture. Links to tracking systems, and in more dysfunctional environments, quotes from emails and water cooler conversations can help a lot when going back in time. Generally I put those in the commit message whenever it makes sense, but sometimes it's better to put it in the actual comments themselves.

On the other hand, on the rare occasion that I've commented or committed something based on emotions, I've always regretted it. Granted they never caused problems for me, just a source of internal embarrassment. Still a good enough reason to be thoughtful about what emotions you express.


> Meta comment3: We absolutely need to do this exactly as it is. Because otherwise bad thing happens, which you probably won't see until it hits production.

This is the highest purpose that a comment can fulfill - telling why you are doing something that looks stupid.


> This function name isn't correct

When dealing with articulate code I often rename the same thing multiple times as I understand it better and clarify its purpose. Also, I love how naming protects the purpose of a variable or method, mentally speaking.


Couldn't agree more, except with meta comment 3 it is very important to describe the bad thing, so that future me knows if he can safely rewrite this or not.


I write multi-line commit comments whenever I do something that's not trivial or has required a large amount of reasoning to perform correctly. As in: the first 80 characters explain the high-level change, with "(see details)" at the end, followed by a number of lines with more detailed reasoning.

Most such commits are never looked at again. But every other month or so, I come across a maintenance issue where I wonder about the context of something. In many such cases, I've saved multiple days of false starts or debugging. So it pays off in the long run even if it's only me gaining something from this. (Unlikely; we're a company of 80 developers).


Yes, this.

Sometimes you have a choice of a clever way to do something, which saves a few lines and uses neat language tricks that you rarely use * , or just doing things the boring way. As long as the boring way is obvious enough, it's often the better choice.

* I'm looking at you, Ruby... :)


There's truth to both sides. Depends on the comment really.

```
doesAThing() // does a thing
```

doesn't help anyone.

My rule: Code is for how, comment is for why.


> Code is for how, comment is for why.

Excellent. For interfaces, other code that uses the interface (perhaps even tests) can also help to document the "why".


Yeah, whoever told you that has never had the sinking feeling of digging into a 300 or 1000 LOC function with a pretty refactor in mind, only to see just how much of the system relies on that one function. It's really only an issue if you try to be diligent about testing the work you produce, in which case that little refactor could cost your team a week or a month of additional testing while they verify that you didn't break anything.

Or you could sneak one more little if statement or some copy/paste in there to fix it instead, and add a little comment that says "If you modify this line, please verify that your change doesn't impact Line XXX of file FFFF as well." And then you're done in less than a day and have saved a huge amount of testing.


This is especially problematic in machine-control code.

I've seen code from an otherwise highly capable developer that contained 1000+ LOC functions. When asked why he couldn't do a refactor the answer boiled down to fear. When the only real way to test the code is by physically running a machine through a number of scenarios, many of which are difficult at best to recreate, you become very reluctant to refactor or clean it up.

Like all problems, it's best to nip it in the bud before things get that far out of line.


Line numbers might not stay static. Perhaps referring to a particular function or variable might be better, as well as explaining what it might impact? That way, one can jump to the location, then inspect it to see if the potentially-impacting behaviour still exists.

Definitely useful in the case where it's near-to-impossible to DRY up something, though. Sadly, the limitations of an industrial C environment have led my code to contain a lot of annoying 'If you add something here, make sure to add it to X struct and Y function' comments.
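As a rough illustration of pointing at names rather than line numbers (everything here is invented):

```python
SUPPORTED_FORMATS = ("csv", "json")  # adding a format? also extend export_report(),
                                     # which switches on these values

def export_report(rows, fmt):
    # NOTE: kept in sync with SUPPORTED_FORMATS above -- a name, not a line number
    if fmt == "csv":
        return "\n".join(",".join(map(str, row)) for row in rows)
    if fmt == "json":
        import json
        return json.dumps(rows)
    raise ValueError(f"unsupported format {fmt!r}, expected one of {SUPPORTED_FORMATS}")
```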


That's also a reason I will include the unoptimized code, with its comments, as a comment whenever I write optimized crazy code.

That way, I can understand what is actually being done. And I can then re-analyze why I took the shortcuts to get to the optimized version.

But 99% of the time, we don't need to optimize. CPU/RAM is cheap. But that 1% of the time when you're going from N^2 to N^logN ... Welll.....
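A small sketch of that habit, with made-up code: the obvious version stays in a comment right next to the optimized one.

```python
def count_pairs_with_sum(nums, target):
    # Straightforward O(n^2) version, kept here so the intent stays obvious:
    #
    #   count = 0
    #   for i in range(len(nums)):
    #       for j in range(i + 1, len(nums)):
    #           if nums[i] + nums[j] == target:
    #               count += 1
    #   return count
    #
    # Optimized O(n) version: count complements seen so far instead of
    # scanning every pair.
    seen = {}
    count = 0
    for x in nums:
        count += seen.get(target - x, 0)
        seen[x] = seen.get(x, 0) + 1
    return count
```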


Did you mean N*logN? I don't think going to N^logN is what you want ;)


Sigh, yep!

That's what I get for trying to type it on a phone browser!


As I've matured as a developer I generally find it easier to read and understand code, whether it be my own or others'. As a junior this is something I definitely struggled with.


Lemma:

> Learned as junior: Legacy code is hard to read. Understood as senior: Legacy code that I wrote myself is hard to read.

Lemma:

> Learned as junior: Technical skills matter most. Understood as senior: Communication skills matter most.

Theorem:

Communication needs to target the people of the future.


Outstanding observation, thank you!


Way back in the day, my boss made a lovely observation - 'write your code like it's going to be maintained by an axe murderer who has your home address, and nothing to lose.'

Simple guidelines to live your life by ;)


Attribution: “Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live.” — John F Woods, 1991 in comp.lang.c++¹

(I do not know who first replied, ‘I do know where I live.’)

¹ https://groups.google.com/forum/#!msg/comp.lang.c++/rYCO5yn4...


Did he call it "his" observation? I have yet to find a likeable person quoting this.


All excellent points. I got used to filling my code with comments in the spirit of having a conversation with someone to whom I am explaining how and why. Thirty years later I can look at any of my code and quickly understand it. In sharp contrast to this, I find most open source code to be difficult and laborious to understand due to the almost total lack of useful comments.


Sometimes you encounter some questionable code and you wonder: "What idiot wrote this?" So you `git blame` and you find out "Oh, I'm that idiot."


I have been in this situation and tried to remember what was going on while I wrote the questionable code. It frequently came down to a day full of interruptions and context switching. Also motivation plays a tremendous role, I know that whenever I have to work on a code base I don’t understand and don’t want to have anything to do with it long term, it ruins my ability to focus.


> Learned as junior: New tech solves old problems.

> Understood as senior: New tech creates new problems.

Yep! When you use "new tech", you're making a bet, and not all bets pay off. If you're cautious, you'll hold off until new tech is proven (or pilot it cautiously before migrating). If you're possessed with good judgment, you'll be discriminating in which new techs you adopt. If you have both virtues, you may even gain a competitive advantage.


> Understood as senior: Communication skills matter most.

I've been of a similar opinion at some point, but now that I see people _optimizing for communication_ as juniors in lieu of actual technical skills, I say: both matter a ton. I hate dealing with a junior that is an extremely good people person but a terrible developer: they tend to think they got everything covered just because people like them so much, even when their actual solutions are terrible.


>Learned as junior: Technical skills matter most.

>Understood as senior: Communication skills matter most.

I see this often, but it needs to be said with a caveat - the second line presupposes the first. Without the first, the second doesn't really matter (or it does but you're in the wrong career).


Actually, I find it very dysfunctional to be in a company where communication skills are valued ahead of technical ones.


"Understood as senior: Legacy code that I wrote myself is hard to read."

Your comment came at the perfect time. I just finished debugging a "high urgency" problem with a program I developed and maintained for the past 10 years.

The program started simple with a small list of rules to apply against data sets. Over time the list became a tree of rules that expanded in both breadth and depth. It was refactored once to get the design in line with the rules of the time.

The "high urgency" issue turned out to be the program working correctly. But, the functional user wasn't able to keep in mind some of the rules he set. It took me a half hour, with lots of "why did I do that", to explain it again to the user.


> Understood as senior: Legacy code that I wrote myself is hard to read.

As it should be. If your code from a five to ten years ago doesn't make you cringe at least a little bit, the right way to view that is not that you were doing a good job back then, but that you haven't gotten any better since then.


There are absolutely things I was better at back then, mostly because then I spent all my time doing nothing but programming and now a lot of time goes to other activities (meetings, planning, writing, etc).


Well, yeah, good point. I guess I should temper that with: if you're still a full-time professional software engineer. I wouldn't expect it to apply to someone who moves partially or fully to a different type of job or activity.


It isn't as simple as being better skilled or not. Many times, having more understanding of the problem opens your eyes to a better or simpler design, or you no longer have the time pressure that fueled the bad design in the past.


I disagree with this perspective. You of yesteryear is in many ways just another programmer you have to work with; if that makes you cringe, you may be taking a bit too much pride in your work.


> You of yesteryear is in many ways just another programmer you have to work with

Taking that idea to the extreme of not having any feeling of ownership or pride (or lack thereof) in your past work seems rather silly to me. It wasn't just some other programmer, it was you.

I'm not saying you should cringe because the code is bad, but because you should have a sense of "well, I could have saved myself some trouble or made this cleaner/more obvious if I only knew then what I know now."


Or maybe you cringe at your old 'good' code because you spent too much time on things that ultimately didn't matter.


>Understood as senior: Communication skills matter most.

I can't speak enough on this one. In our craft, the better one's communication skills are, the more effective their technical skills will be.


There's also a strong correlation between strong communication skills and strong technical skills in our industry, which demands frequent learning of new skills. So not only does strong communication augment technical skills, it also signals their strength, which is also valuable to software developers.


> Learned as junior: If you report an OS bug or some other deep problem, seniors will not believe you and assume you're making excuses for your own bugs and lack of understanding.

My mum told me "if you think you found a bug in the compiler or OS... you're wrong". This advice applies until you're good enough to know it doesn't. She was right.


One of the first rules from the book The Pragmatic Programmer is "SELECT isn't broken". (It might even be the first rule.)


I wish my mother knew what a compiler was.


Heh, at 19 she was the PL/1 expert for the Asia/Pacific region. Mum's a badass.


> Learned as junior: New tech solves old problems.

> Understood as senior: New tech creates new problems.

This is one all the people who push "new and shiny" need to learn.


I mean, yes and no; sometimes an old system is so bad it really needs to be killed off and replaced. Or would you rather everyone stick to coding in VB6? I'd rather we all use C# instead of VB6 ;) I'm not implying we should only ever use C#, I know there are other languages; I'm just illustrating a shift in the MS Windows development ecosystem that was for the better.


VB 6 is probably not a good example.

Even nowadays many languages don't have features that VB6 offered incl. WYSIWYG. Debugging capabilities of modern languages/environments are still often not even close to what VB6 offered 20+ years ago.

C# certainly is outstanding but I think Microsoft made a gigantic mistake by killing VB6 the way they did.

Microsoft prevented a large number of people from writing applications, since a new ecosystem like C# or VB.net was significantly more difficult to learn and understand.

In retrospect Python or Node probably took VB6's place, so Microsoft just lost out on a huge market there. Bad management decision.


> I mean, yes and no, sometimes an old systems so bad it really needed to be killed off and replaced.

I don't believe this is the spirit in which this was meant. If the old system is out of date and there are buggy libraries that aren't being maintained, that is a WHOLE different issue.


> Learned as junior: If you report an OS bug or some other deep problem, seniors will not believe you

Same thing happens in reverse! ;)

A few years back I told 3 devs who reported to me that there was a bug in Laravel's database subsystem.

The bug was: if you used the word "returning" in any Laravel insert query, the system would crash (Laravel 3 & 4).

None of my guys would believe me!

I finally tracked the bug down to

> laravel/database/connection.php

```
public function query($sql, $bindings = array())
{
    // ... etc ...
    elseif (stripos($sql, 'insert') === 0 and stripos($sql, 'returning') !== false)
```

I sent an email to the Laravel team and never got a response... but the bug stopped happening some time after that ;)

I think it was related to the MySQL version as well.


> If you report an OS bug or some other deep problem, seniors will not believe you and assume you're making excuses for your own bugs and lack of understanding.

Similarly, one thing I learned is that if I find a bug with an OS or platform, 9 times out of 10 it's actually due to some problem in my code or my own lack of understanding :)


Can I suggest it's more like 999 times out of 1000?


> Understood as senior: Communication skills matter most.

Can you give examples?


I can try.

Teaching juniors can be more productive than coding. Being 10x by yourself is less good than 3x-ing a whole team. Even better, teach everyone to be as fast & good as you. Good teaching requires good communication and building trust.

Understanding priorities and goals is absolutely critical to making good choices while programming. Writing good code under reasonable deadlines in an organization necessarily involves a lot of discussion about what constitutes an acceptable solution, what doesn’t, how long it might take, how long is too long, what features are nice but not necessary.

Over-engineering, for example, is extremely common, and is caused in part by not correctly balancing goals and priorities with time budgets. It’s usually a symptom of mis-communication.

Can’t even count how many times I’ve seen a programmer go off the rails building stuff that wasn’t asked for, only to have a meeting several weeks later that invalidated weeks of work when the goals were clarified. (That includes me, btw.)

Making large changes and leading a group of programmers often requires a lot of convincing and rallying work along with the technical planning, sometimes much more than you’d expect. It also requires the ability to put yourself aside and allow others to contribute to the design, even when you think your technical solution is superior.

Getting promoted is, in my experience, most commonly a process of demonstrating to others that you listen well, organize well, work well with others, get things done under deadlines, understand and report what juniors are doing to management, budget well, internalize the organizational goals and contribute meaningfully to meeting those goals.

In short, it’s because teamwork is important.


Alright, I just failed to parse what communication could mean. I see it clearly now, leadership, team work, social skills, they do indeed matter a ton.


I once worked with a brilliant engineer. He was from Hong Kong. He struggled to express his ideas in English. We tended to let him show us in code instead.

Sometimes this worked. Sometimes it really, really didn't. It also meant we had a very difficult time discussing larger architectural questions with him or giving him useful feedback on his code.


I’ve recently had a similar problem, only in my case I’m the foreigner who can’t speak the local language. It’s fine except in meetings.

Junior dev me: meetings are a waste of time.

Senior dev me: meetings are the steering wheel, the developers are the engine (https://kitsunesoftware.wordpress.com/2018/01/29/utility-fun...)


Well that's half cultural, I meant in a context of shared native tongue. Now international work does create lots of hurdles because you can't translate things above casual smalltalk. I guess that's where maths could help.


Articulating requirements or architecture to either stakeholders or juniors is key to being able to actually do your job as a senior (whether that's coding yourself, code reviews, planning, interviews, etc etc).


If nothing else, it is a godsend when talking to PMs. A junior may be like "this code sucks, it is too complex!!" where a more senior can be more like "This code is not written in a good way. If we had some time to rewrite these particular parts, we could provide these features to the users".

Being able to couple value to what you do, or for that matter, to what might not be worth pursuing, is a very good skill. And be sure not to say no too often when a PM asks for a feature, but rather offer an alternative: "doing that is quite complex and might take us half a year. How about we do this other thing that gets us 90% of the way there, but only takes a week to implement?"


There's no hard-and-fast rule on this but generally junior projects are isolated where you may only have one stakeholder. Growing into a more senior role, your actions generally have greater breadth that affect more teams/clients/orgs. Technical chops are worthy but I can guarantee that your stakeholders would rather have amazing communication for things like progress, deployment, etc. than how efficient or beautifully written the project is.


It mainly comes up when discussing tasks and making architectural/design decisions. Juniors usually talk in technical/implementation terms, while seniors usually talk in broader/general/business terms.

Moreover, juniors usually have a hard time expressing technical problems.


> Understood as senior: If a junior programmer tells me they found a system-level bug, I won't believe them and will tell them to go figure out what's wrong with their code.

Me: 'Even a blind squirrel finds a nut once in a while'


Horses, not zebras. But yes, there are a few zebras in the world also.


Depends on what they are doing.

Oh you found a 'bug' in Mac OS, Windows, or Linux? Probably not.

Oh you found a 'bug' in some open source library? Maybe.

Or in our in house developed framework? Probably.

In house framework I wrote? Certain of that.


> Understood as senior: New tech creates new problems.

Related: There are no new problems.


Things I've learned: I'll never be a senior programmer.


> Understood as senior: Legacy code that I wrote myself is hard to read.

Ha, yes, I occasionally come across code I wrote years before and have a few "WTF moments"!


I always find myself saying “Who is the shithead who wrote... oh...”


It's a good sign - it means you are learning and getting better.


New tech solves your existing problems and creates new ones. I have a big pile of problems I have had for many years that I’d love to trade for some new ones. I’m happy to fix one problem and get 5 new ones. I’m happy to get rid of an easy problem and get a new much harder problem. The important thing is that I get some new problems, and nothing will do that like new tech.


> Understood as senior: If a junior programmer tells me they found a system-level bug, I won't believe them and will tell them to go figure out what's wrong with their code.

And, yet, it does happen. The important part as the senior guy is to make the junior guy create a test case and then cut it to the bone until it is obvious where the bug is.

Story time: It's mid-nineteen-ninety-mumble and your intrepid hero is a junior programmer handling multi-site integration and testing tool infrastructure. This being the time when the Swiss Army Chainsaw(tm) (aka Perl 4) is well and truly entrenched among sysadmin and toolsmith programmers, my technical superiors throw me a couple of Perl books, set my deliverable date impossibly soon, and tell me to get going post haste.

So, I code. It's not a lot of code, but it is parsing and matching files in 3 different formats from 3 different sites. Of course, I hear what you are saying: "Perl and parsing is like mixing ammonia and chlorine--and probably more painful." Yes, I concur. But, it is the tool at hand in the long forgotten mists of time when RAM was expensive and spinning rust still resembled an iron brick. So, off I go with regexes for parsing (Yes, I know, now I have 3 problems).

And everything worked quite swimmingly. Except for a bit of idiocy that nobody could track down that occasionally flagged a couple of records as mismatched when manual inspection showed they really were not. Nobody minded that much as the scripts got 99.99% of the job done and didn't give a false negative, so: "Ship it, junior."

And so we did.

However, the bug annoyed me because I had to go clean up the false positives when they fired. And, if I am anything, I am a VERY lazy programmer--and this was preventing me from being lazy.

So, eventually, while waiting for one of the other groups to deliver, I go spelunking for a test case.

Spelunking? HAH! Cave diving is a better analogy.

The program took the most inclusive of the syntaxes, used each record from that to build a regex to look for the corresponding record in the other syntaxes, matched what it could and flagged what remained.

So, the program was building a dynamic regex on-the-fly and then using it. Not a huge deal, but the regexes were larger than most people were probably comfortable with. No problem, I validated this on much smaller records out to REALLY big records, and they work well.

Except for those weird cases ...

So, I'm looking through a case that works and trying to compare it to a case that doesn't. And I accidentally fat-finger some character set match and delete a character that shouldn't matter.

And the regex fails ... provoking the Asimovean "That's funny ..."

So I delete another character ... and it works again. What?!?!?!

So, I add a character. And it fails. And I add another. And it works again.

I stared at that regex for what felt like EONS until the light bulb went on.

The one that worked? 511 characters or 513 characters. The one that failed? 512 characters.

So, yes, I, a total Perl n00b, managed to find a bug in Perl 4 in my first ever Perl program.

Sometimes the junior dude gets really unlucky and finds an actual system-level bug.



