Absolute truths I unlearned as a junior developer (monicalent.com)
1529 points by dcu on June 7, 2019 | 511 comments



Admittedly, my first days as a junior programmer were before some of you were born, but I'm thinking of a particular format here...

Learned as junior: If you report an OS bug or some other deep problem, seniors will not believe you and assume you're making excuses for your own bugs and lack of understanding.

Understood as senior: If a junior programmer tells me they found a system-level bug, I won't believe them and will tell them to go figure out what's wrong with their code.

Learned as junior: Legacy code is hard to read.

Understood as senior: Legacy code that I wrote myself is hard to read.

Learned as junior: Technical skills matter most.

Understood as senior: Communication skills matter most.

Learned as junior: New tech solves old problems.

Understood as senior: New tech creates new problems.


> code that I wrote myself is hard to read

This has happened more times than it probably should:

1. Arrive upon some code I wrote at some point in the near or distant past.

2. Review it to get some idea of what I was trying to do

3. Laugh at my young self for being so naive

4. Refactor or Rewrite

5. Re-realize the edge-cases and difficulties

6. Remember this being a problem

7. Refactor And Rewrite

8. Either `git reset --hard HEAD` or end up with a similar solution, but with better comments

Once in a while, I end up with a [simpler | faster | clearer | otherwise better] version, which makes this process worthwhile - even with the false positives.


Semi-related story of mine:

1. Stumble upon some specific problem with a web framework we use.

2. Jump straight to stackoverflow.

3. Sbd had a similar issue, nice.

4. Sbd wrote a very concise answer, nice too.

5. There's my nickname under the answer. Oh, wait...


I get this statement about once a quarter at work:

Yeah, I found this bug and I worked through it, did a bunch of googling or internal searching, found a similar issue, and welp, it was a ticket from you (me) 2 years ago.


Which is why I am trying to get better at what I used to do: writing down and sharing problems I saw.


I have a giant dev file (well, 2 really: 1 for personal, 1 for work) that I use as a scratchpad. Anything I work on goes there. Makes it easy to find things.

Monthly I pull out useful things into notes about that topic. Run into a weird problem with spring framework? Copy out the relevant info into springframework.md.


It drives me nuts when you find a forum post and they never reported back. Or it's a terse "I figured it out."

I try really hard not to do that for internal or public forums. The odds are better than you think that you'll stumble on the same topic a few years from now.


Or it says "Google it", and that's how you ended up there.


Wisdom of the Ancients

https://xkcd.com/979/


I try to at least move the problem forward. However, does anyone have a way they make sure to close those things out and circle back? I mean, other than just doing it?

On internal tools I do do that but that's a smaller number.

Worse is stuff like car and fridge repair. I have no idea what forum I find qqs on.


I'm sad that I cannot share my experiences the same way :-( 99% of my "interesting" bugs are Xbox One/PS4 related, so I can't write a blog post about "how I found out that Sony's implementation of file read is not compatible with regular stdio" - it's hugely interesting, but this stuff is NDA'd to such an extent that I wouldn't risk writing openly about it - but I'd love to.


I've found a good way to do that is running a personal blog. In fact, I write as if the audience of my blog will be myself in the future. It's nice to help out others who face the same problem tho.


Does "Sbd" stand for something, or is that your username?


The abbreviation "sth" occurs here a lot, probably because of people who learned English with the help of dictionaries that use "sth" as an abbreviation for "something". I suspect "sbd" also comes from the same source.


Somebody


Uh, OK. Never seen that before.

"Sby" would've made way more sense. Or heaven forbid we use one more character to make the nearly obvious "sbdy."


It's a recurring theme with me, though a bit different: my name is under step 3, and I only realized that when I tried to upvote the question and couldn't.


Heh, that happened to me too... I loved it.


THANK YOU!!!

I really thought this only happened to me :-)


Have experienced this too. Haha.


SO is my notebook.


This.

Why create my own silo of documentation when I can just put the answer where I'm likely going to find it (Stack Overflow)?


This is probably the primary actual use case for comments: explain why something is done the way it is, to justify to those who come after why Chesterton's Fence [1] should apply in this case.

[1] https://wiki.lesswrong.com/wiki/Chesterton%27s_Fence
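
A hedged sketch of what such a fence label can look like in practice (Python; the scenario and names are entirely hypothetical):

    import time

    def save_report(data, path):
        # Why the retry loop (Chesterton's Fence label): the network share
        # occasionally throws a transient PermissionError right after being
        # mounted, and removing the retries reintroduces sporadic failures.
        # (Hypothetical scenario, for illustration only.)
        for attempt in range(3):
            try:
                with open(path, "w") as f:
                    f.write(data)
                return
            except PermissionError:
                time.sleep(attempt + 1)
        raise RuntimeError("could not write report to " + path)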


Yes.

In a wider sense, you also want to explain why something is NOT done, i.e. why certain other ideas don't work, or why we don't offer certain features.


The same idea should apply to laws. The why gets forgotten half a century later because the problem was solved but not well documented. Because the problem is no longer happening, people think the law must no longer be necessary.


As a counterpoint, we often end up with laws, traditions, and social mores that long outlive whatever rationale they had for them in the first place. (Assuming they had a rationale, and weren’t just based on fear and superstition).


That someone being me, not knowing why in hell I would write this monstrosity... git blame + some archaeology work to get the Jira ticket number (I put issue numbers in commit messages/branch names), and from that I know why.


Yes, naturally "those who come after" often includes the future you.


Much of my life as an older dev includes balancing my relationship with my past, current, and future selves. Be kind to your future self; be compassionate of your past self. And don't forget, they really are different people ;)


Here's what happened to me(A) and a friend(B) from a former workplace of mine(open-source project):

B: Take a look at this shit code that I found.

A: Whoah, it really is shit. Blame it so that we can see what kind of genious is behind this.

B: ...

A: Well?

B: Apparently you wrote and I reviewed/approved it.


The "who wrote this?" mentality is a trap that's good to avoid. Get comfortable with different ways of writing something that, while they might have different tradeoffs, accomplish the same thing, and try to see past that. Understand that most code wasn't written by anybody--lines 1 and 3 were written by Alice a year ago, line 2 was written by Bob 2 years ago, and line 4 was written by Alice yesterday. `git blame` is very useful for seeing the change in context, which can give a lot of insight into why it's written that way, but usually the author isn't very useful to know, unless you're planning to ask them about it. Sometimes it's useful if you happen to know that the author isn't very familiar with something when you're wondering why they didn't use it, but try to keep in mind what the actual benefits of your way are, whether they're especially pressing, and that the other person might have written it differently because they were thinking about other concerns that you forgot.


I agree that “who wrote this” is dangerous, and git blame is a terribad name. I will say though, if you can avoid value judgements, then knowing who wrote a block of code is super valuable in a legacy code base. I’ve found that every dev I’ve ever worked with has very real strengths and weaknesses. And knowing who wrote a piece of code can drastically reduce the time it takes me to find hard bugs. It often goes something like, so and so tends to forget certain kinds of edge cases, but they never seem to screw up this kind of logic... so I bet the problem is related to... ah found the problem. But never blame someone for creating bugs, unless it’s really egregious, and then, only if you can help them with better habits going forwards.


Yes, "blame" is not a good word.

Use "git annotate" instead.


I really like this approach to using git blame; it's original thinking and highlights how much of a human component there is in developing.


While lots of legacy code emerges organically the way you describe, there are in fact many people in the industry who I'd call "legacy coders." People who saw Dijkstra's "Goto considered harmful" and scoffed, "all these 'for' loops are much less readable than my 'goto loop1' solution." People who use global variables because parameter-based implementations are "needlessly complex."

Basically, not everyone works for Google.


You say that, but Golang has gotos, while being a very minimal language. Not everyone at Google is an amazing developer that's fully up to date with best practices and patterns.

Not that that needs to be said, no matter the company (if it's of any decent size.)


It’s often helpful to know who wrote a line of code, because then I can put myself into their shoes and try to figure out what was going on their mind when they wrote it.


I’ve always found the svn alias ‘svn praise’ pretty entertaining for this reason.


git has "git annotate"


Assuming those lines were getting out of hand, I think it would be a valid question as to why they were not tidied up during the review of the line 4 addition by Alice.


It’s Friday afternoon, you are exhausted after a busy week, the business is pushing for a fix before you leave and you have committed to be home by 6pm so your spouse can go out.


`git log -p` is vastly superior to `git blame` for determining why a file is the way it is.


That gives you all the history (or all the history of a file).

Git blame quickly gives you the commits you are likely interested in, then you can use them as a starting point for your git log digging.


I've had that happen several times, but once I actually did the inverse.

I was working on an extraordinarily bad codebase, and stumbled upon some modular, reusable code that made my life way easier. I wondered who wrote this rare gem in that pile of shit, and checked `svn praise` for a change.

It was me.

It would have been the highlight of my then short career, if not for the fact that it meant there were no other semi-competent people on that project.


This is really funny. So good. I ask all the time, "Who wrote this shit??" knowing it was probably me.


Honestly, this is usually a good sign. I'll code at the edge of my current skill. Six months from now, I hope I can look at that code and consider it primitive from where I hope to be then.


That is the only time I dare ask it. When else do I dare? Someone might take it the wrong way otherwise.


*genius


These days my rule of thumb is to not try to rewrite or generalize until I or my team has tried to do more or less the same thing three times. Before that, you just don't have a good feel for what is generalizable/edge case or what is invariant/variable in the problem space.

I've definitely run into this phenomenon of independently landing on the same design twice because of the same edge cases. At some point back in 2005 or so, I was working on a collision detection component for a physics engine. This was 1-2 years before the Box2D engine, and there was a significant lack of open source options, so I was rolling my own stuff that was quite similar to Box2D (but Box2D was written much better).

One year later, I came back, looked at the code, thought "this is unreadable!", refactored it, and sure enough, stuff was falling through floors. I went back to my old code, found several comments discussing edge cases having to do with discrete time step problems, and concluded that my old code was in fact the right approach; it just didn't put comments about edge cases high enough in the call stack.


Generalizing even when you are only using your solution once can sometimes be useful.

When you know that certain information should not be used in a correct solution, a more abstract approach can make sure that information stays hidden.

A really simple example: for-loops in Python 'leak' their index variable. Stick that loop inside a local function, and then you know that you cannot accidentally make use of the index variable later.
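
A minimal sketch of both behaviors:

    def leaky():
        for i in range(3):
            pass
        return i  # 'i' leaked out of the loop and is still 2 here

    def contained():
        def loop():
            for i in range(3):
                pass
        loop()
        # 'i' was local to loop(); referencing it here raises NameError

    print(leaky())  # prints 2
    contained()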

A more complicated example is deliberately coding to an interface that carefully exposes only what should be exposed. Eg using a map or filter higher-order function.


As a technique against this, at the point that you're writing the code to begin with, have you ever tried documenting the reasons you're doing something even if it's blatantly obvious at the time?

   // rather than use the API, we parse a scrape here with a
   // regex, because when we signed up the API didn't have half the fields we want.
Like, totally obvious. Except six years later the regex stops matching, and you are already using the API all over the code anyway, and you get to this part with the scrape and you don't get it. Are we trying to evade the API request limit? Or what is the reason for this bizarre scrape?

A lot of people would refactor by seeing if they can put the API call in there, but this wastes massive time you could have avoided if you knew the reason in the first place. And maybe the API still doesn't have the fields, so you put the API call in, you try to remember the reason for the scrape, and then you realize that all this is still the best way to do it; you just have to update the regex so it matches again. Work that could have been saved by a simple comment telling you the reason it looks this way.


> code that I wrote myself is hard to read

1. Found some extremely cool code, marveled how amazing it works

2. Realized I wrote it as a teenager

3. Got depressed, questioning my life decisions


The best feeling for me is when I come across old code and then re-write it to make it [simpler | faster | clearer | better]... It is tangible proof to me that I have improved in my craft.


I usually do this same thing. But not before mentally cursing out the programmer that wrote this poorly documented spaghetti code... After which I realize it was me.


Even well factored code looks like spaghetti at a casual read in a lot of frameworks/cases, and makes sense once you've swapped all that info back in.


>but with better comments

Communication, especially with your future self, is an important skill.


Code twice: once to understand, once to solve.

As in "42", the first bit is more difficult.


And each time this happens, you get better about writing readable comments up front describing edge cases and difficulties so that future self can avoid steps 1 - 6 with a head start on refactor ideas / feasibility.


> git reset --hard HEAD

story of my life


> git stash # i might need this one day And i never need it.


I've started saving my stashes to branches instead, adding a _ on the front of the branch name to remind me to delete the branch at some point.


Out of curiosity, do you ever actually delete the branches? I would absolutely just end up with a number of _-prefixed branches on all my projects.


I don't allow myself to have more than 1 active stash at any given point on a project. You quickly learn to delete some code you wrote with no regret!


Better than the other way round.


> Re-realize the edge-cases and difficulties

I think comments can help avoid such scenarios.


This isn't quite the same situation but it reminds me a little bit of xkcd.com/1421


Oh this is such a good comment!

> Legacy code that I wrote myself is hard to read.

Sometimes I don’t even recognize me as the author for a while. Realizing I’m reading something I wrote and can’t understand it without studying carefully has been rather surprising and reminds me of the old Kernighan quote “Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?”

My goals used to be to write code that looked and felt cool to myself and others, and to add features in clever ways with as little change to the function signatures and structure as possible, so as not to disturb the architecture. While keeping changes small is a good goal to balance, it’s possible to go too small, add unnamed concepts, and fail to restructure around new concepts when you should. Do that a few times in a row (which I have done) and you end up with architectural spaghetti code that might look clean and modular at first glance but becomes a maintenance nightmare. My goals are now:

- to make code so easy to read it’s boring, because if it looks interesting, it’s probably too clever

- to identify and name all the unnamed concepts, and to refactor often to accommodate new features.


> the old Kernighan quote “Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?”

This quote can be interpreted in an intelligence-positive way, to encourage you to learn by writing the cleverest possible code. Then, when you get to debug it, you will be forced to improve your skills. This interpretation is called Kernighan's lever [1] and it is very beautiful. The alternative is a life of boredom where you don't learn anything new.

[1] https://www.linusakesson.net/programming/kernighans-lever/in...


”Prod is on fire and nobody can figure out your burrito pasta”

”Isn’t this a wonderful learning opportunity for all of us”

Debugging time is never a good time to start honing new skills.


“Burrito pasta” is a great term! Is that something that you just came up with?

It sounds like spaghetti code but even worse because it’s covered with a messy layer of beans and guacamole and wrapped in layers of tortilla.


Just came up with it. The complexity of monads and other post-grad level CS stuff mixed with good old spaghetti code.

Next up: some doofus who learned higher-order Haskell in middle school and takes offense at me calling it complicated.

PS: https://blog.plover.com/prog/burritos.html


Of course, Kernighan's lever is most useful on individual projects. In an industrial context there are often other constraints. Nobody is saying that you have to always write the most clever code you can. But if you never do it, you will improve your skills very slowly, if at all.


Writing simple code is difficult, more difficult than writing clever code, and is a better skill to grow.


I would hope at least most of the debugging happened before the code made it to production.


I was just pondering this very issue. I think one of the reasons some teams end up with spaghetti code is because the developers are smart enough that they always muddle through it in the end. So when it comes to talking about technical debt, there's never a "it's too complicated," there's just "you're not trying hard enough." In that sense, it would help if the developers had a lower pain tolerance, because it would drive us to actually work hard to improve the ergonomics of our architectures.


> The alternative is a life of boredom where you don't learn anything new.

Now that's an impressive logical leap.


False dichotomy, anyone? I learn more from trying new ideas than debugging complex code.


I mean

When I try something new, usually what I learn is not that my assumptions were correct; the more interesting part is where my assumptions were wrong. This usually only becomes apparent either when writing the code or, in some of the more interesting cases, when debugging.


> This quote can be interpreted in an intelligence-positive way, to encourage you to learn by writing the cleverest possible code.

I would claim that any clever final result can be accomplished without any need for intermediate cleverness. Things that look clever to other people but are intuitive to you might be OK. Things that look clever to you should be avoided when possible.


That is absolutely horrific. I don't want to spend my energy learning the special skills needed to deal with things I should have been smart enough to prevent from ever existing.


When seniors (and above) complain about the low quality of junior code, I tell them to go look at their own code from 6 months ago.

It's an endemic issue, core to the problem of poor software.

> My goals used to be to write code that looked and felt cool to myself and others, ... My goals are now to: - make code so easy to read it’s boring

Same for me, and I'm sure same to many of the folks that have advanced past senior level. The problem of other people, other senior people, needing to read, understand, and significantly modify your code is one of the reasons why you can't really advance past senior in a startup. There aren't enough experienced folks that you have to write "up to". Of course the other part of being post-senior is the ability to scale your expertise, but now I'm digressing quite far. Then once you're that advanced, you don't want to take a pay and scope cut to work at a startup. This is a major contributor to startup technical debt accumulation, one that can't readily be solved.

I still do write "cool code", for code that I will only use myself that doesn't go into production. But for all others, I write easy-to-read code. And I review code with that in mind. When comments have a typo that alters the meaning of the comment, I insist that it be fixed. Juniors hate me for being too picky. (And I hate them for being too sloppy.)

Remember, the most important part of your job as senior+ is not what you do yourself, it's how you guide others.


> seniors [...] complain about the low quality of junior code [...] their own code from 6 months ago.

What classes as 'senior' that their own coding style has changed that much in 6 months? O.o

> I still do write "cool code", for code that I will only use myself that doesn't go into production. But for all others, I write easy-to-read code.

Good dev. Remember, you are not your audience. Unless you're just writing play-code in which case go nuts and be as 'clever' as you want. :D

> When comments have a typo that alter the meaning of the comment, I insist that be fixed. Juniors hate me for being too picky.

Then they're wrong, and tbh that's not something I'd accept more than once from an employee.


> > seniors [...] complain about the low quality of junior code [...] their own code from 6 months ago.

> What classes as 'senior' that their own coding style has changed that much in 6 months? O.o

Your code style can remain exactly the same, and it would still happen. The reason is not that you would write the code differently today, but that you forgot the issues and edge cases that made you write it like that then, and that at the time you focused too much on the writing, not on the reading.


Here we get to the real heart of the problem. Engineers of all levels seem to default into “this code is shit - I could do better” instead of having some empathy and considering they don’t have the whole picture.

But the wrong lesson seems to be taken away from this. It’s not that all code is shit - it’s that you aren’t good at reading the code yet if you can’t see all the little hairs and bug fixes.


> Engineers of all levels seem to default into “this code is shit - I could do better” instead of having some empathy and considering they don’t have the whole picture.

Several projects I've come in to - yeah, the code was shit, and yeah, I could do better. And I've done better. By asking questions, documenting the answers, writing sample data, and writing tests.

I get that code can be sloppy, have edge cases, etc. Took over a project that was halfway migrated from CI to Laravel. The migrator had close to a year on this 'migration'. We had not one unit test case, no migrations, no seeders, no integration tests when we took over. What we had was piles of half-baked uncommented model code, over-reliance on magic methods, Laravel/CI models with the same names and method names often being used in the same request but with unintentionally dissimilar behavior.

The 'code' isn't the (whole) problem. All the other stuff around the code that provides the context is the problem. We had ~ 20 tickets in a tracking system with vague notes, and were given 5 email threads of discussion about functionality questions, none with actual resolution.

> it’s that you aren’t good at reading the code yet if you can’t see all the little hairs and bug fixes.

Or... the person writing it before you simply didn't know how to write/document.

Sometimes - really, honestly - you can actually "do it better" because... really, honestly, sometimes you are actually better - more competent, more experienced, more diligent, more professional - than the person who left the code you're working on. Not always, but not never.


Oh absolutely code can be shit. But I’ve also seen code that’s made the company millions of dollars and run flawlessly for 20 years be called “shit” because it doesn’t look like modern code. The replacement naturally consumes many times more resources and has bugs that were long fixed in the old code.

In my experience the latter case is far more common. But I suppose experiences will differ dramatically depending on what you work on.


I'd meant to add that part as well - code can be bad and still work, and work OK. The problems only come in when it needs to be changed.

I've advised a number of folks to care less about the code style, and focus more on making it at least understandable. I don't particularly care if you're using a factory pattern or not, but please do doc/comment someplace what the expected behavior for your 'backordering' logic is. I can fix things later if I understand what was intended, vs just what I have to guess at later.

Have worked on some projects in the last few years that are 'bad' from code perspective. One is bad, but the company as a whole operates... decently, and is improving, and more importantly, is providing a lot of value to their customers. The customers tolerate some bugs now and then because a) they still get value and b) the issues are addressed. There's a full process for changes/fixes/rollouts, and the team overall understands that there's tech debt to deal with. Some folks understand that they're still paying off tech debt from 3-4 years ago, and understand those decisions were bad, and try to avoid those same mistakes.

Hundreds of integration and unit tests (growing every week) help grow the confidence levels, and remove barriers to smaller refactoring efforts, because there's a reasonable way to detect the impact of changes. It's not perfect, but that's also understood and accepted.

Another one is the CI/Laravel situation from above. Small company, no real 'tech' people on staff - it's all 'outsourced'. They're frustrated because they see other companies progressing faster than they do, and everything seems to take 5x longer than they expect. It's because the code is bad (on many levels). If we were not trying to make any changes, and it just ran in its current state, it would still continue to make money for them, but they want new features, which requires actually understanding how everything fits together. It took two people several months to have a reasonable understanding of how all the running parts fit together (while also trying to add new features/etc), and finally get a small number of unit tests in place.


...you don't remember what you were doing 6 months ago? I'm sorry, that's still not a good excuse. Maybe you don't remember every tiny facet but hopefully you'd have a general idea. And above all that, if you're a senior dev then you'd understand that "this code is unfamiliar" does NOT mean "this code is bad."

I mean really, "this code is unfamiliar therefore it's messy therefore we should rewrite it" is a flaming red flag. Chesterton's fence, people!


One of my favorite things about comments in programming is that you can stick a label on that fence explaining why it's there. One of my biggest frustrations is that so many people don't bother, even when it would literally be just one little sentence.


> One of my favorite things about comments in programming is that you can stick a label on that fence explaining why it's there.

This is routine for ordinary cultural practices as well. However, the common explanation for a given cultural practice usually has nothing to do with the actual reasons it might be a good idea.


Unknown unknowns.

My coding style hasn't really changed in years - frankly, I don't write a lot of code, I do other things. But I often run into situations where I'm irritated at my own bad code from months earlier, when the shortcomings of the code are actually driven by things I know now that I didn't know then.


Review process surely mitigates differences between "junior" and "senior" code?


This also would require that devs who are on the PR reviews ACTUALLY look at the code. In about every job I've worked at in my short career, there are people I work with whom I don't trust to actually review my code. I've come to accept that. I instead make sure the people I know will do a decent job are on the PR. Some people will just look at the diff, and an even smaller few will actually pull the branch locally so they can see the entire context. That being said, I always do my best to review other people's code regardless of whether or not they will review mine.


Not really. It just ensures no really atrocious code makes it into master.

Of course you can stall any junior merge request until it looks like senior code, but at that point you might as well write it yourself.


I don't know if it's stalling so much as taking the time to request changes / pair up and teach.

If bad code is getting merged, that's tech debt / time someone else is going to have to spend anyway, plus the time needed to identify the issue and triage it down the line. I would think it's a better investment to use that time up front and help the junior level up too.


> Sometimes I don’t even recognize me as the author for a while.

This happened to me just yesterday! I was helping a co-worker with a problem, and I noticed some redundant code in the same function, so I told him he could simplify it while he was there. His response was, "...but you wrote that". (And as it turns out, only a few days prior!)


This is why I will point out issues in code only in the form of stating potential improvements, as best I can. I especially try to avoid hating on the author - it might have been me, or the boss who's standing nearby...

In fact, I often consciously refrain from using blame on an "interesting" piece of code, because it doesn't matter who wrote it. Looking it up would just satisfy idle curiosity but yield no insight into how to improve the code. I think that blame should just list the commits, but hide the authors by default. "What changed?" and "How?" are always much more pressing questions than "Who?".


My experience varies a lot. Maybe it's just that my colleagues write better commit messages, but blame (and looking up the PR and code review) is often a good method of understanding why the code is the way it is.


Yes, but blame puts the focus too much on the authors instead of the changes themselves. I also use it to understand how code evolved over time, but only in circumstances where I suspect the history holds important clues.

Pinning bad work on a person does not make progress. Fixing bad code does.


I am coming back from vacation next week and have 2 PRs to finish (couldn't merge before because they are high risk). I am anticipating a lot of pain just to pick them up where I left off... I would rather do my tax returns instead lol


It always happens to me while I approve my own pull requests to master.. :’(


> Understood as senior: If a junior programmer tells me they found a system-level bug, I won't believe them and will tell them to go figure out what's wrong with their code.

My first job was in finance and I remember one time that I had a glitch on a complicated Excel spreadsheet. I checked it over, checked again and checked again until I finally concluded that the bug was in Excel itself. So I go to my boss and tell him that the data isn't ready because there's a bug in Excel. I was laughed at by the entire team. They agreed to give me $1000 right then and there if it was really a bug, but if not, I had to admit the shame. Well... of course it wasn't a bug in Excel, I just made a careless, albeit hard-to-find, mistake.

Lesson learned. If millions of people use something, that doesn't mean it doesn't have bugs. But it does mean that you probably aren't going to find those bugs unless you are doing something strange.

Oddly, I did once find an actual bug with indexes in Postgres. Because of my earlier experience, I spent a lot of time assuming it was me before I finally isolated it as a bug in Postgres itself. I submitted a bug report and it was patched within a day. But still, 99% of the time, it's me.


It's unusual, but it does happen. My example is actually from hardware.

I was writing a Windows NT (what's that?) device driver for a communications board we developed for one of our products. I kept running into a problem and struggled with it for days. I was experienced enough to understand that the error was probably mine and the problem was so basic that if it was in the chip pretty much everyone who tried to use it in that mode would be screaming about it not working. The chip was an 8-channel UART (serial converter) and IIRC, the bug was in one of the FIFO interrupt modes.

Finally I gave up, got the phone number for the chip vendor's (I think it was Texas Instruments) local Field Applications Engineer and explained the problem to him. "Oh, yeah, we know about that bug, there's a new version of the chip about to be released. You guys are actually only the second customer to sample that chip. Lucky we found the problem before it went into production!"


Junior programmers find an amazing number of system-level issues.

One noob came to me with a serious codegen bug in GCC, where even with `-O0` it would fail to correctly run a trivial for loop. Another found a huge security hole in `sudo` that gave everyone unrestricted access. My favorite was one who asked if the JDK standard library had any known bugs processing the letter "g".

They all turned out to be user error, if you can believe it.


There was a junior in one of the early companies I worked at who claimed to have found a JVM bug. He insisted the JVM handled comparison with null incorrectly. He had a String variable that was null, then he guarded against NPE with "if (str == null) return -1", followed by code that dereferenced str. The code looked innocent at first glance, but somehow it failed with NPE. Finally it turned out the string was "null", not null. :D
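
The original was on the JVM, but here's a rough Python analogue of the same trap, assuming the value arrived as the literal string "null" (e.g. from sloppy serialization upstream):

    import json

    def user_id_or_default(payload):
        value = json.loads(payload)["user_id"]
        if value is None:  # guards against a real JSON null...
            return -1
        return value

    assert user_id_or_default('{"user_id": null}') == -1
    # ...but the *string* "null" sails straight past the guard:
    assert user_id_or_default('{"user_id": "null"}') == "null"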

But I also have a good counter-example story:

One day I found an HTTP RFC violation in one of the very popular OSS HTTP client libraries. I filed a bug report with a detailed reproduction. It was closed immediately, with "works as designed, you misunderstood HTTP". Then we had a debate in the ticket comments for many days, and I couldn't convince them of the right interpretation of HTTP (I admit, the text is not easy sometimes). Finally I posted a message on the HTTP mailing list and Roy Fielding confirmed I was right. They reopened and fixed it. I must say it is a really hard thing to argue with somebody with an edge in experience and not come across as arrogant.

In particular - when somebody responds with "I have more experience / I've been doing it for 20 years, and you say I'm wrong?" How to best handle such cases?


>"I've been doing it for 20 years, and you say I'm wrong?" How to best handle such cases?

I wish I knew. I try to limit the discussion to the purely technical, or to barely acknowledge it as in "sure, but RFC123 says X and Y implements it that way as shown in Z".

Of course, that goes for when I'm the authority as well. I don't care who is correct, I care about what.


Haha, that last one is hilarious!

Even though I'm still juniorish, I run into issues like this that stump me. But then occasionally you do find bugs in existing software, which keeps you second-guessing everything. Usually those bugs come from using two things in conjunction that haven't been well tested together.

Also, sometimes you find a bug that isn't accepted by the software vendor/owning team as a bug, because it has some sort of obscure workaround that would take you a week of tinkering to figure out. Those "aren't bugs", but yeah, they are bugs. Software vendors that also sell consulting and related services love to pull shit like that.


I actually found an OS bug in AIX on my first real I-designed-this code project. Worse, the bug was discovered only when the code went into production - it behaved differently on the test servers than on the production servers! The bug caused mmap() system calls to randomly overwrite certain pages in memory with NULLs. Yeah, that was fun. And it was caused by the order in which patches had been applied, which was why it behaved differently in production than in dev/test.

If some second year programmer told me that nonsense, I'd make them go back and either find their bug, or write a test program that exercised the bug in isolation (which was what I did).


Were you finally believed?


Eventually. I had to write a test program, as small and neat as possible, that demonstrated the problem on both test and prod systems. Then I got permission to send it to IBM (with my binary, my source code, and my logs). They eventually determined it was caused by the order patches had been applied on the servers - they had the same patches, just applied in a different order.


I just say, even if it's a bug, there's no guarantee it'll be fixed on a timeline that's agreeable to our deadlines. Find a workaround.


You can't tease us like that without explaining what the "bug" in Excel was.


As I said, it wasn't a bug in the end :) The issue was with my formulas, but I don't recall exactly what I did wrong. It was a tiny typo I made in a sheet with thousands of formulas, if I remember correctly.


> Understood as senior: Legacy code that I wrote myself is hard to read.

For me, any code that I wrote more than 3 weeks ago, I've forgotten. That's why I comment the hell out of my code. The younger programmers have routinely told me "commented code means the code isn't very good." I chuckle and ignore them and wait for them to hit their mid-30s and older.


Also related:

> Understood as senior: Communication skills matter most.

The reason people dismiss comments is usually that they or others around them aren't good at writing useful comments.

Especially when there are linter rules requiring comments you'll have something like def open_thing(x, y): and a comment, "defines a function that opens thing."

Yes, those are pointless. Often what's going on is a person is dumping their stream of consciousness into the comment field.

It takes practice to understand what a reader needs to know. You have to actually practice reading comments and thinking things through (another reason code review is important in your team) to get good at understanding what you should write down.

All that said, if you truly hate commenting, at least build a habit of descriptive naming and exploiting your type system as fully as possible.
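
A small sketch of that difference, with made-up names throughout:

    import io

    # Noise: the comment just restates the signature.
    # "defines a function that opens thing"
    def open_thing(x, y):
        return open(x, y)

    # Descriptive names and type hints carry the "what"; the comment is
    # reserved for the "why" (a hypothetical requirement, for illustration).
    def open_audit_log(path: str) -> io.TextIOWrapper:
        # Append-only, because audit entries must never be truncated.
        return open(path, "a")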


I'm actually thinking more of social skills and written language, not programming. I said something about this in a different thread earlier this week, and someone was baffled as to why I thought being able to write and sell was important, since you just wind up doing what your boss tells you to do anyway.

As opposed to telling the boss what you're going to do.


> Especially when there are linter rules requiring comments you'll have something like def open_thing(x, y): and a comment, "defines a function that opens thing."

I actually think those comments are useful in two ways:

1. The process of writing a comment will often help me rename the function/variables so e.g. “defines a function that opens thing” becomes something like “opens can_of_worms with the given instrument and restraints” for the method definition open_can_of_worms(instrument, restraints)

2. You can use variable/return value comments to further restrict the domain of values, e.g. non-null, positive or in the range 1-42 (arguably it would be better to express some of these in the type system, but that is a different discussion). These comments show up in my IDE when I try to call the code in a remote location, so I don’t have to guess or remember the constraints.

(edit formatting)
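
A minimal sketch of both points together, reusing the hypothetical names from point 1:

    def open_can_of_worms(instrument: str, restraints: int) -> bool:
        """Opens can_of_worms with the given instrument and restraints.

        instrument: non-empty tool name, e.g. "crowbar"
        restraints: number of safety restraints, in the range 1-42
        returns: True if the can opened, False if the restraints held
        """
        if not instrument:
            raise ValueError("instrument must be non-empty")
        if not 1 <= restraints <= 42:
            raise ValueError("restraints must be in the range 1-42")
        return restraints < 10  # placeholder logic for the sketch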


Comments rot, and details about what is going on are better incorporated using good variable names and functions that abstract aspects of a task from their implementation.

While I don't like comments that try to explain what code is doing (write better code), comments are very useful for annotating WHY code does what it does. They're also very useful for adding documentation references, code use gotchas and things that need to be addressed in the future.


Comments are critical for explaining the code that isn’t there. False starts, obvious optimizations that don’t actually work, etc.


I've heard these sentiments very frequently from junior programmers, and almost never from senior programmers.


I'm guessing most of the senior programmers you've interacted with are maintaining established software with low churn and high availability requirements.

I hear comment love very frequently from enterprise engineers working with 10+ year old Java codebases, but very infrequently from hackers working with young code bases in more concise languages (complex algorithms aside).


Complete opposite for me.


I think it's better to document the context/intention/business reasons and let the code speak for itself.


This assumes everybody reading the code at the company has the skill to read it.

Where I work, I can expect my Scala code to be read by people who can barely write a line of it, and I routinely read TypeScript and Go code while being totally inept at those.

I bless comments that are there to help the reader read, and I leave them behind too where there's some specialists-only syntax.


Early 30s here and I've realised that comments are worse than useless most of the time. Nothing enforces that the comment is correct, so a significant proportion of comments will be false, so no comments can be relied upon.

Descriptive types, clear tests, and sensible variable names are much more effective strategies for making code understandable. Comments should be a last-resort stopgap.


Honestly, I would add this whole comment to the list of "absolute truths" juniors unlearn as they get more experience. And I would also point to the original post's point that types of experience matter - just because you're early 30s doesn't necessarily mean you've had the right experience. If you still believe this, then - to be brutally honest - I would question the quality of the teams you've worked with.

Comments don't have to decay. Discipline is important. Culture is important. And yes, these have to be intentionally set and upheld.

If you set a culture of discipline around maintaining the comments with the code, and ensuring they are updated, then it's really not that hard to do it. If the developer doesn't remember to do it when making changes, then the code reviewer can catch it and enforce it.

And nothing really substitutes for an english language explanation of the "why" and the intention of a particular section of code. A good comment explaining why something was done a particular way, or what the code was intended to accomplish, can save hours of walking up and down call stacks. It's also something that cannot be communicated through unit tests, or even integration tests, a lot of the time. Those communicate the "what" and the "how" - not the "why".


> Comments don't have to decay. Discipline is important. Culture is important. And yes, these have to be intentionally set and upheld.

These are things that are often completely outside your control.

> If you set a culture...

At most shops, you don't get to set the culture. About the only time you do is if you're a founder or early developer. Otherwise you have to fit into the existing culture, or attempt to find a company that better reflects what you want. Sure, it's not hopeless; you can likely influence to some extent, but your influence is usually limited.

> And nothing really substitutes for an english language explanation of the "why" and the intention of a particular section of code.

I do agree with this. Any code that can't be written in a self-documenting way absolutely must be commented. However, if you find the need to do this often, it might be a sign that you should focus more on code clarity and less on (likely premature) optimization, or perhaps consider if you're really using the right tool (language, framework, etc.) for the job at hand.

I will admit that I probably comment less than I should, but I feel like the average is way too verbose, and that enough comments are out of date and incorrect (often in very subtle ways) that it adds significantly to my overhead when trying to understand someone else's (or even my own) code.


> If you still believe this, then - to be brutally honest - I would question the quality of the team's you've worked with.

To be equally brutally honest: right back at you. I would trust the quality of those I've worked with over those who believe in comments, any day of the week.

My point was simply that I started as a believer in comments when I was more junior, and became anti-comment through experience. So even if we believe senior people are more likely to be right than junior people (which I very much doubt, frankly), that tells us little about whether comments are good or not.

> If you set a culture of discipline around maintaining the comments with the code, and ensuring they are updated, then it's really not that hard to do it.

Human programmers have a limited discipline budget, and if you're spending it on keeping the comments up to date then you're not spending it on other things. Yes, you can use manual effort to keep code explanations up to date, just as you can use manual effort to ensure that you don't use memory after it's freed, or that your code is formatted consistently, or that the tests were run before a PR is merged. But you're better off automating those things and saving your manual effort for the things that can't be automated.

> And nothing really substitutes for an english language explanation of the "why" and the intention of a particular section of code.

Disagree; code can be much more precise and clear than English, that's its great advantage. As the saying goes, the code is for humans to understand, and only incidentally for the computer to execute. The whole point of coding declaratively is that the "why" is front and center and the "what"/"how" follows from that.


> The whole point of coding declaratively is that the "why" is front and center and the "what"/"how" follows from that.

I've been writing Lisp off and on since late last century, so I know full well the value of declarative code. Preaching to the choir, there! But I can also report that every real program I've ever written (i.e., that had at least one user) needed significant non-declarative parts.

And for those non-declarative parts, you need the "why". Why is this call before that one? Why is this system call used? Why is this constant being passed to the call? And so on. (It's because when you run it on OS ${a} version ${b}, there's a bug in the ${c} library that requires us to force the initialization of the ${d} subsystem before it can ... true story.)

The declarative parts of your program don't require "why" comments, and that's great, but a corollary to that is the parts that can be written in a declarative style aren't the ones that require a "why". Building a DOM structure manually takes a lot of lines of code, but it's all still quite simple, and requires no explanation. Writing a trampoline necessitates a bunch of "why"s, and there's no way to just substitute a declaration for it (without pushing the whole mess somewhere else).

Code is first for humans to understand, and that requires comments, because humans speak English (or some other natural language), and no programming language is yet powerful enough to efficiently (in time or space) express everything that English can.


> Writing a trampoline necessitates a bunch of "why"s, and there's no way to just substitute a declaration for it (without pushing the whole mess somewhere else).

I've got a trampoline in my codebase to avoid a stack overflow. The why is the test that a certain repeated operation doesn't stack overflow.

There are a number of places where it could've been implemented with one technique or another, but there's no particular reason that the approach I've taken should be better or worse than one of the other options. If there was, I'd want to formalise that (e.g. if I'd chosen one approach because it performed better than another, I'd want a benchmark test that actually checked that).
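
For anyone who hasn't met the technique: a minimal Python sketch of a trampoline (not the poster's actual code), where each step returns a thunk instead of recursing:

    def trampoline(fn, *args):
        # Run a tail-recursive-style function without growing the stack:
        # fn returns either a final value or a zero-argument callable
        # (a thunk) representing the next step.
        result = fn(*args)
        while callable(result):
            result = result()
        return result

    def countdown(n):
        if n == 0:
            return "done"  # final value
        return lambda: countdown(n - 1)  # next step as a thunk

    # Deep "recursion" far beyond the default interpreter stack limit:
    assert trampoline(countdown, 1_000_000) == "done"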


> code can be much more precise and clear than English,

This doesn't address the parent's criticism. Clear, precise code only tells you what the computer is doing. What it can never tell you is why the computer needs to do it exactly like that.

Software breaks in weird ways when pushed to the limits. The fixes for these edge cases are not always obvious and may not be something that can be replicated with testing.

Without comments, some cowboy can come along and think, "it's flushing a buffer here? that's dumb. <delete>" The change gets put in, passes testing, spends four months in production, when a bug report comes in from a customer complaining about an issue that they had three years ago.

Now someone has to spend a bunch of time figuring out the problem, QAing the fix, then getting it back into production. It's thousands of dollars that the company could have saved if only there was a comment about why that buffer flush was there.

You might think this is some crazy edge case, but it's not.


This is my problem with this argument as well. English and other spoken languages are first and foremost about conveying ideas. Programming languages are first and foremost about conveying instructions to computers that don't comprehend "ideas".

Reconstructing the original idea or meaning can often involve far more context than local variable and function naming can provide.


I don't understand what sort of environment you work in where you don't encounter situations where comments could add clarity to the code.

Do you never see code that has global side effects? Or that is written a particular way to take advantage of the hardware that it is running on? Or any other of the many ways that the intention and meaning of a piece of code within the codebase it exists in can be not immediately obvious?


>Do you never see code that has global side effects?

The answer for modern languages and frameworks is "write pure functions."

>Or that is written a particular way to take advantage of the hardware that it is running on?

Move it to a service/helper/utility class for that particular hardware, or one with a name that clarifies it's for that particular hardware.

I find comments to be necessary very rarely. Atm I'm looking at a codebase where they're made to cover up a lack of desire to think.


> The whole point of coding declaratively is that the "why" is front and center and the "what"/"how" follows from that.

Declarative means that we specify the "what", and the machine deduces the "how".

There is no room for "why", because our present-day machines do not require motivating argumentation in order to do our bidding. They either need the "what" or the "how", or whatever blend of the two that we find convenient.

We need the "why" in the documentation. Such as: why is this program written in the first place? The "why" is not in the code. When code is clear, it's just easy to understand its "what" or "how", never the "why". Unclear code obscures the "how" or "what", but clear code doesn't reveal "why".

Every "how" breaks down into smaller steps. Of course, those smaller steps have a "why" related to their role in relation to the other steps; that's not the "why" that I'm talking about here. Of course we know why we increment a loop counter when walking though an array: to get to the next element. If you start commenting that kind of why, you will soon be flogged by your team mates.


> code can be much more precise and clear than English

Agreed, code is much more precise than English. But precision is not the same thing as being meaningful, and without context, precision is useless. Code generally sucks at context, which is why every programming language worth its salt has comments.


You are missing the point entirely. No matter how clear your code is, it is only expressing the “what”, not the why. I can see that you’re using a binary tree, but why a binary tree and not a hash table? Why a decision forest and not a linear regression? Why a regularization penalty of 0.1 and not 0.2? Why cache A but not B? Why create an object at startup instead of generating it on the fly? You need comments to explain these decisions.


If there's an important difference (e.g. a performance requirement that a hash table wouldn't meet), I'd have a test that checks that. If not, it's probably an arbitrary choice that doesn't matter. If the decision is worth recording, it's worth recording properly.
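
For instance, recording such a decision as a test might look like this sketch (hypothetical numbers; timing assertions need generous bounds or they get flaky in CI):

    import time

    def test_membership_lookup_meets_latency_budget():
        # Why a hash-based index and not a list: a linear scan would blow
        # the (hypothetical) latency budget this test encodes.
        index = set(range(1_000_000))
        start = time.perf_counter()
        for i in range(10_000):
            assert i in index
        per_lookup = (time.perf_counter() - start) / 10_000
        assert per_lookup < 1e-4  # generous bound: 100 microseconds per lookup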


For the cases you mention, a combination of package name, class name, and method/function name could serve as a comment, with the benefit of making sure any place referencing the code also "documents" why something is happening (tests for example, or callers of your methods).

This is not always possible, and in those cases I also strongly prefer well written, concise comments explaining what is going on and why, ideally with a link to a reference/source which explains the background.

Some examples of method names:

- generateTreeToAllowPartitioningOfItems(...)

- getMatchingRegularizationPenaltyForSpecialCaseX(...)

- getShortTermRedisProxyCache(...)

- createNewPrefilledTemplateObjectForXYZ(...)

I hope this doesn't sound snarky. But more often than not comments do date in my experience (and they don't handle refactoring well), while (compiler-known) names are handled as 1st-class citizens by the current IDEs and thus are corrected and updated everywhere.

In code reviews we usually aim for "low comment" density: the implementer shouldn't explain what they were doing or why; the reviewer has to understand just from the code (as it would happen if she/he has to maintain it later on). The review is "not good" or even fails if the reviewer doesn't understand why and what is happening. The outcome will in most cases be an improved design, not more comments.


But those method names are still hard to relate to the business cases your customer requested. So you implemented something for some reason; your code and methods tell you what you implemented but not why... Why did you use a tree? I read your top-level code and think: dude, that would have been so much simpler and faster using a Wobble instead of a Tree! Then I try that, and it turns out it has to be a Tree; you went through the same process, did not tell me why, and I lost a day retrying. For instance.

(assuming, which you should always assume imho, that you left the company many years ago when this event occurs)


If I wrote an explanation, then what would check that explanation? Maybe I write "we use a Tree instead of a Wobble because Wobble doesn't support the APIs we need". But then maybe when you come to work on it, it turns out that the current version of Wobble does support those APIs. Maybe it's actually better at them than Tree. Whereas if I have a unit test around Tree that exercises the kind of operations that we need to do, then you can just try dropping Wobble in there and see for yourself whether it works or not.


> code can be much more precise and clear than English. [my emphasis.]

A common feature of most higher-level languages is that they co-opt natural-language terms (and also mathematical notation, which is an option in commenting) with the intent to increase clarity. Given that, can you show us an example where code is clearer than natural language in explaining both what it is doing and why?

If you are working in something like APL, I can see there might be a case...

I am not so much interested in the precision issue, as both code and language can be very precisely wrong or right.


> Human programmers have a limited discipline budget

You think humans are bad, try working with Lobster programmers, they get work done, but their coding style is just horrible (they use tabs).


>If the developer doesn't remember to do it when making changes, then the code reviewer can catch it and enforce it.

They can. Just after correcting all the buffer overflows and before fixing all the use-after-frees. Then the comments can be consumed by all the other teams with the discipline and culture to avoid writing bugs for all time.


Nothing enforces that the code is correct, either, not even tests, as tests are also code, plus there is the utter infeasibility of exhaustive testing.

It does not follow from the possibility for error that a "significant" proportion of comments will necessarily be false. In my experience, that is most likely when an organization has commenting as a mandatory part of its process, which inevitably leads to most comments being trite, and some wrong. Outside of that, comments have not been a problem mainly because they are almost non-existent, even when the code could benefit from them.


> Comments should be a last-resort stopgap

I'd add: comments should say _why_ this crazy method is here. You can always parse the code to figure out what it does. In a few months/years (depending on your memory) you will not remember _why_ this code was put in place.


Comments rot, but so does everything else such as type names, tests, variable names, field names, designs, architectures, etc.


Yep - it doesn't help much with a method that's named EmptyCacheToPreventBlugblagCongestion() if external circumstances have stopped the blugblag from ever congesting any longer. So discipline in maintaining the intent of the code is required even if you never write a single comment.


Tests, types and field names get checked on build.


If someone adds functionality to a type so the name isn't really applicable anymore I don't think the build catches that.


As soon as you form something that should conform to the type (according to its name) and find that it doesn't, you notice the problem, and then you fix it once and for all (because the type is defined in one place). So yes, you can have misleading type names in the codebase, but there's a natural pressure to correct them, in a way that there largely isn't for misleading comments.


Nothing enforces correct variable names and descriptive types either, why would you expect those to be more consistently accurate than comments?


They're amenable to automated refactoring, and if you change a type or variable name in one place you're forced to update it everywhere else that uses the same thing.


Comments and `git log -p <file>` to see what the comment originally referred to is pretty useful.

My personal favorite comment style is to wrap a chunk of code in `#{` `#}` blocks and add a general comment of what that chunk of code is accomplishing. Sort of like an inline method.


All those comments of "blah blah blah gets or sets a value" on my class properties, why do we add all that overhead to our projects to the point we have to use tools like GhostDoc to write our worthless comments? This industry is on crack sometimes.

I simply like comments for adding things like... so and so told me to do this... or simply documenting weird behavior or weird business logic.


(As others have pointed out here and everywhere.) Comments are NOT for making code understandable. The things you mentioned are for that.

Comments are for things like

1) explaining why this thing that looks wrong or dumb really isn't.

2) explaining what a method/function/class/whatever is supposed to do. Because code can be correct, understandable, and still wrong.


I'm 29 and can barely remember the code I wrote last week.

Comments don't get updated when the requirements for the code change, so more often than not they end up being misleading.

The only things worth commenting are actual libraries that are maintained, and "magic values".


> For me, any code that I wrote more than 3 weeks, I forgot. That's why I comment the hell out of my code.

I couldn't agree more.

A while back I got in the habit of trying to write code for "me, six-months from now". So, if I think I can explain it to "future me", then I'm happy. Ever since I started doing that, I've been much happier with "past me"'s code.

In addition to comments (particularly around hard to grok code), I've also started trying to be as consistent as possible in code structure and naming schemes. This also helps a lot.


I write notes to my future self all the time.

Meta comment: This is bullshit and has problems with this that and the other thing. But to fix that I'd have to refactor this other module and I'm not going to do that now. And the other thing I'm drawing a blank.

Meta comment2: I don't think the code needs to do this here. But I can't prove it right now.

Meta comment3: We absolutely need to do this exactly as it is. Because otherwise bad thing happens, which you probably won't see until it hits production.

Meta comment4: This function name isn't correct. But I can't think of a name that is better.


> Meta comment: This is bullshit

I used to worry about putting emotional blurbs in comments or commit messages, but I'm starting to see their value. A commit that starts "This ugly writing is to appease Roger, the editor obsessed with AP style" lets me know three things:

- Who asked for the change

- The source of the content

- The fact I disagree but still do it, so future me doesn't pick fights present me avoided

Of course, it could also mean "TODO: revert this commit the minute Roger retires."


That's less about emotion and more about context, which absolutely is important to capture. Links to tracking systems, and in more dysfunctional environments, quotes from emails and water cooler conversations can help a lot when going back in time. Generally I put those in the commit message whenever it makes sense, but sometimes it's better to put it in the actual comments themselves.

On the other hand, on the rare occasion that I've commented or committed something based on emotions, I've always regretted it. Granted they never caused problems for me, just a source of internal embarrassment. Still a good enough reason to be thoughtful about what emotions you express.


> Meta comment3: We absolutely need to do this exactly as it is. Because otherwise bad thing happens, which you probably won't see until it hits production.

This is the highest purpose that a comment can fulfill - telling why you are doing something that looks stupid.


> This function name isn't correct

When dealing with articulate code I often rename the same thing multiple times as I understand it better/clarify its purpose. Also I love how naming protects the purpose of a variable or method, mentally speaking.


Couldn't agree more, except that with meta comment 3 it is very important to describe the bad thing, so that future me knows whether he can safely rewrite this or not.


I write multi-line commit comments whenever I do something that's not trivial or that required a large amount of reasoning to perform correctly. As in: the first 80 characters explain the high-level details, with (see details) at the end; then a number of lines with more detailed reasoning.

Most such commits are never looked at again. But every other month or so, I come across a maintenance issue where I wonder about the context of something. In many such cases, I've saved multiple days of false starts or debugging. So it pays off in the long run even if it's only me gaining something from this. (Unlikely; we're a company of 80 developers).
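For illustration, such a commit message might look like this (contents invented):

```
Fix off-by-one in batch pagination cursor (see details)

The cursor compared offset <= total instead of offset < total, so the last
page was fetched twice whenever the row count was an exact multiple of the
page size. Clamping in each caller was considered, but fixing it at the
cursor keeps the invariant in one place.
```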


Yes, this.

Sometimes you have a choice of a clever way to do something, which saves a few lines and uses neat language tricks that you rarely use*, or just doing things the boring way. As long as the boring way is obvious enough, it's often the better choice.

* I'm looking at you, Ruby... :)


There's truth to both sides. Depends on the comment really.

```
doesAThing() // does a thing
```

doesn't help anyone.

My rule: Code is for how, comment is for why.
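A quick Python illustration of the rule (example invented):

```python
def retry_delays():
    # Why: the upstream API rate-limits bursts; spacing retries this far
    # apart was the only pattern that avoided throttling in practice.
    return [1, 2, 4, 8]  # the code itself already shows the *how*

def does_a_thing():
    pass  # does a thing  <- adds nothing the name doesn't already say
```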


> Code is for how, comment is for why.

Excellent. For interfaces, other code that uses the interface (perhaps even tests) can also help to document the "why".


Yeah, whoever told you that has never had the sinking feeling of digging in to a 300 or 1000 LOC function with a pretty refactor in mind only to see just how much of the system relies on that one function. It's really only an issue if you try to be diligent about testing the work you produce, in which case that little refactor could cost your team a week or a month of additional testing while they verify that you didn't break anything.

Or you could sneak one more little if statement or some copy/paste in there to fix it instead, and add a little comment that says "If you modify this line, please verify that your change doesn't impact Line XXX of file FFFF as well." And then you're done in less than a day and have saved a huge amount of testing.


This is especially problematic in machine-control code.

I've seen code from an otherwise highly capable developer that contained 1000+ LOC functions. When asked why he couldn't do a refactor the answer boiled down to fear. When the only real way to test the code is by physically running a machine through a number of scenarios, many of which are difficult at best to recreate, you become very reluctant to refactor or clean it up.

Like all problems, it's best to nip it in the bud before things get that far out of line.


Line numbers might not stay static. Perhaps referring to a particular function or variable might be better, as well as explaining what it might impact? That way, one can jump to the location, then inspect it to see if the potentially-impacting behaviour still exists.

Definitely useful in the case where it's near-to-impossible to DRY up something, though. Sadly, the limitations of an industrial C environment have led my code to contain a lot of annoying 'If you add something here, make sure to add it to X struct and Y function' comments.
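In less constrained environments, one alternative to such comments is a check that fails loudly when the parallel structures drift apart; a sketch in Python (names invented):

```python
# Two structures that must stay in sync (hypothetical example):
OPCODE_NAMES = {0: "NOP", 1: "LOAD", 2: "STORE"}
OPCODE_HANDLERS = {0: lambda: None, 1: lambda: None, 2: lambda: None}

# Instead of "if you add an opcode here, also add it there":
assert OPCODE_NAMES.keys() == OPCODE_HANDLERS.keys(), \
    "every opcode needs both a name and a handler"
```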


That's also a reason I will include the unoptimized code, with its comments, whenever I write optimized crazycode.

That way, I can understand what is actually being done. And I can then re-analyze why I took the shortcuts to get to the optimization.

But 99% of the time, we don't need to optimize. CPU/RAM is cheap. But those 1% of the times when you're going from N^2 to N^logN ... Welll.....
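A toy Python example of the pattern (invented; the naive version stays in a comment beside its replacement):

```python
def has_duplicates(xs):
    # Unoptimized original, kept for reference -- O(n^2):
    #     for i in range(len(xs)):
    #         for j in range(i + 1, len(xs)):
    #             if xs[i] == xs[j]:
    #                 return True
    #     return False
    # Optimized: a set comparison gives the same answer in O(n).
    return len(set(xs)) != len(xs)

print(has_duplicates([1, 2, 3, 2]))  # True
```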


Did you mean N*logN? I don't think going to N^logN is what you want ;)


Sigh, yep!

That's what I get for trying to type it on a phone browser!


As I've matured as a developer I generally find it easier to read and understand code, whether it be my own or others'. As a junior this is something I definitely struggled with.


Lemma:

> Learned as junior: Legacy code is hard to read. Understood as senior: Legacy code that I wrote myself is hard to read.

Lemma:

> Learned as junior: Technical skills matter most. Understood as senior: Communication skills matter most.

Theorem:

Communication needs to target the people of the future.


Outstanding observation, thank you!


Way back in the day, my boss made a lovely observation - 'write your code like it's going to be maintained by an axe murderer who has your home address, and nothing to lose.'

Simple guidelines to live your life by ;)


Attribution: “Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live.” — John F Woods, 1991 in comp.lang.c++¹

(I do not know who first replied, ‘I do know where I live.’)

¹ https://groups.google.com/forum/#!msg/comp.lang.c++/rYCO5yn4...


Did he call it "his" observation? I have yet to find a likeable person quoting this.


All excellent points. I got used to filling my code with comments in the spirit of having a conversation with someone to whom I am explaining how and why. Thirty years later I can look at any of my code and quickly understand it. In sharp contrast to this, I find most open source code to be difficult and laborious to understand due to the almost total lack of useful comments.


Sometimes you encounter some questionable code and you wonder: "What idiot wrote this?" So you `git blame` and you find out "Oh, I'm that idiot."


I have been in this situation and tried to remember what was going on while I wrote the questionable code. It frequently came down to a day full of interruptions and context switching. Also motivation plays a tremendous role, I know that whenever I have to work on a code base I don’t understand and don’t want to have anything to do with it long term, it ruins my ability to focus.


> Learned as junior: New tech solves old problems.

> Understood as senior: New tech creates new problems.

Yep! When you use "new tech", you're making a bet, and not all bets pay off. If you're cautious, you'll hold off until new tech is proven (or pilot it cautiously before migrating). If you're possessed with good judgment, you'll be discriminating in which new techs you adopt. If you have both virtues, you may even gain a competitive advantage.


> Understood as senior: Communication skills matter most.

I've been of a similar opinion at some point, but now that I see people _optimizing for communication_ as juniors in lieu of actual technical skills, I say: both matter a ton. I hate dealing with a junior that is an extremely good people person but a terrible developer: they tend to think they got everything covered just because people like them so much, even when their actual solutions are terrible.


>Learned as junior: Technical skills matter most.

>Understood as senior: Communication skills matter most.

I see this often, but it needs to be said with a caveat - the second line presupposes the first. Without the first, the second doesn't really matter (or it does but you're in the wrong career).


Actually, it is very dysfunctional for me to be in a company where communication skills are valued ahead of technical ones.


"Understood as senior: Legacy code that I wrote myself is hard to read."

Your comment came at the perfect time. I just finished debugging a "high urgency" problem with a program I developed and maintained for the past 10 years.

The program started simple with a small list of rules to apply against data sets. Over time the list became a tree of rules that expanded in both breadth and depth. It was refactored once to get the design in line with the rules of the time.

The "high urgency" issue turned out to be the program working correctly. But, the functional user wasn't able to keep in mind some of the rules he set. It took me a half hour, with lots of "why did I do that", to explain it again to the user.


> Understood as senior: Legacy code that I wrote myself is hard to read.

As it should be. If your code from five to ten years ago doesn't make you cringe at least a little bit, the right way to view that is not that you were doing a good job back then, but that you haven't gotten any better since then.


There are absolutely things I was better at back then, mostly because then I spent all my time doing nothing but programming and now a lot of time goes to other activities (meetings, planning, writing, etc).


Well, yeah, good point. I guess I should temper that with "if you're still a full-time professional software engineer". I wouldn't expect it to apply to someone who moves partially or fully to a different type of job or activity.


It isn't as simple as being better skilled or not. Many times having more understanding of the problem may open your eyes to a better/simpler design, or you no longer have the rushed deadlines that fueled bad design in the past.


I disagree with this perspective. You of yesteryear is in many ways just another programmer you have to work with; if that makes you cringe, you may be taking a bit too much pride in your work.


> You of yesteryear is in many ways just another programmer you have to work with

Taking that idea to the extreme of not having any feeling of ownership or pride (or lack thereof) in your past work seems rather silly to me. It wasn't just some other programmer, it was you.

I'm not saying you should cringe because the code is bad, but because you should have a sense of "well, I could have saved myself some trouble or made this cleaner/more obvious if I only knew then what I know now."


Or maybe you cringe at your old 'good' code because you spent too much time on things that ultimately didn't matter.


>Understood as senior: Communication skills matter most.

I can't speak enough on this one. In our craft, the better one's communication skills, the more effective their technical skills will be.


There's also a strong correlation between strong communication skills and strong technical skills in our industry, which demands frequent learning of new skills. So not only does strong communication augment technical skills, it also signals technical strength, which is likewise valuable to software developers.


> Learned as junior: If you report an OS bug or some other deep problem, seniors will not believe you and assume you're making excuses for your own bugs and lack of understanding.

My mum told me "if you think you found a bug in the compiler or OS... you're wrong". This advice applies until you're good enough to know it doesn't. She was right.


One of the first rules from the book The Pragmatic Programmer is "SELECT isn't broken". (It might even be the first rule.)


I wish my mother knew what a compiler was.


Heh, at 19 she was the PL/1 expert for the Asia/Pacific region. Mum's a badass.


> Learned as junior: New tech solves old problems.

> Understood as senior: New tech creates new problems.

This is one all the people who push "new and shiny" need to learn.


I mean, yes and no; sometimes an old system is so bad it really needs to be killed off and replaced. Or would you rather everyone stick to coding in VB6? I'd rather we all use C# instead of VB6 ;) I'm not implying we only ever use C#, I know there are other languages; I'm just illustrating a shift in the MS Windows development ecosystem that was for the better.


VB 6 is probably not a good example.

Even nowadays many languages don't have features that VB6 offered, including WYSIWYG. Debugging capabilities of modern languages/environments are still often not even close to what VB6 offered 20+ years ago.

C# certainly is outstanding but I think Microsoft made a gigantic mistake by killing VB6 the way they did.

Microsoft prevented a large number of people from writing applications, since a new ecosystem like C# or VB.net was significantly more difficult to learn and understand.

In retrospect Python or Node probably took VB6's place, so Microsoft just lost out on a huge market there. Bad management decision.


> I mean, yes and no; sometimes an old system is so bad it really needs to be killed off and replaced.

I don't believe this is the spirit in which this was meant. If the old system is out of date and there are buggy libraries that aren't being maintained, that is a WHOLE different issue.


> Learned as junior: If you report an OS bug or some other deep problem, seniors will not believe you

Same thing happens in reverse! ;)

A few years back I told 3 devs who reported to me that there was a bug in Laravel's database subsystem.

The bug was: If you use the word "returning" in any Laravel insert query the system would crash (Laravel 3 & 4).

None of my guys would believe me!

I finally tracked the bug down to

> laravel/database/connection.php

> public function query($sql, $bindings = array())
> {
>     ... etc ...
>     elseif (stripos($sql, 'insert') === 0 and stripos($sql, 'returning') !== false)

I sent an email to the Laravel team and never got a response... but the bug stopped happening some time after that ;)

I think it was related to the MySQL version as well.


> If you report an OS bug or some other deep problem, seniors will not believe you and assume you're making excuses for your own bugs and lack of understanding.

Similarly, one thing I learned is that if I find a bug with an OS or platform, 9 times out of 10 it's actually due to some problem in my code or my own lack of understanding :)


Can I suggest it's more like 999 times out of 1000?


> Understood as senior: Communication skills matter most.

Can you give examples?


I can try.

Teaching juniors can be more productive than coding. Being 10x by yourself is less good than 3x-ing a whole team. Even better, teach everyone to be as fast & good as you. Good teaching requires good communication and building trust.

Understanding priorities and goals is absolutely critical to making good choices while programming. Writing good code under reasonable deadlines in an organization necessarily involves a lot of discussion about what constitutes an acceptable solution, what doesn’t, how long it might take, how long is too long, what features are nice but not necessary.

Over-engineering, for example, is extremely common, and is caused in part by not correctly balancing goals and priorities with time budgets. It’s usually a symptom of mis-communication.

Can’t even count how many times I’ve seen a programmer go off the rails building stuff that wasn’t asked for, only to have a meeting several weeks later that invalidated weeks of work when the goals were clarified. (That includes me, btw.)

Making large changes and leading a group of programmers often requires a lot of convincing and rallying work along with the technical planning, sometimes much more than you’d expect. It also requires the ability to put yourself aside and allow others to contribute to the design, even when you think your technical solution is superior.

Getting promoted is, in my experience, most commonly a process of demonstrating to others that you listen well, organize well, work well with others, get things done under deadlines, understand and report what juniors are doing to management, budget well, internalize the organizational goals and contribute meaningfully to meeting those goals.

In short, it’s because teamwork is important.


Alright, I just failed to parse what communication could mean. I see it clearly now: leadership, teamwork, social skills; they do indeed matter a ton.


I once worked with a brilliant engineer. He was from Hong Kong. He struggled to express his ideas in English. We tended to let him show us in code instead.

Sometimes this worked. Sometimes it really, really didn't. It also meant we had a very difficult time discussing larger architectural questions with him or giving him useful feedback on his code.


I’ve recently had a similar problem, only in my case I’m the foreigner who can’t speak the local language. It’s fine except in meetings.

Junior dev me: meetings are a waste of time.

Senior dev me: meetings are the steering wheel, the developers are the engine (https://kitsunesoftware.wordpress.com/2018/01/29/utility-fun...)


Well, that's half cultural; I meant in a context of a shared native tongue. Now, international work does create lots of hurdles, because you can't translate things above casual small talk. I guess that's where maths could help.


Articulating requirements or architecture to either stakeholders or juniors is key to being able to actually do your job as a senior (whether that's coding yourself, code reviews, planning, interviews, etc etc).


If nothing else, it is a godsend when talking to PMs. A junior may be like "this code sucks, it is too complex!!" where a more senior can be more like "This code is not written in a good way. If we had some time to rewrite these particular parts, we could provide these features to the users".

Being able to couple value to what you do, or for that matter, to what might not be worth pursuing, is a very good skill. And be sure not to say no too often to a PM asking for a feature, but rather give an alternative: "doing that is quite complex and might take us half a year. How about we do this other thing that gets us 90% of the way there, but only takes a week to implement?"


There's no hard-and-fast rule on this but generally junior projects are isolated where you may only have one stakeholder. Growing into a more senior role, your actions generally have greater breadth that affect more teams/clients/orgs. Technical chops are worthy but I can guarantee that your stakeholders would rather have amazing communication for things like progress, deployment, etc. than how efficient or beautifully written the project is.


It's mainly used when discussing tasks and making architectural/design decisions. Juniors usually talk in technical/implementation terms, while seniors usually talk in broader/general/business terms.

Moreover, juniors usually have a hard time expressing technical problems.


> Understood as senior: If a junior programmer tells me they found a system-level bug, I won't believe them and will tell them to go figure out what's wrong with their code.

Me: 'Even a blind squirrel finds a nut once in a while'


Horses, not zebras. But yes, there are a few zebras in the world also.


Depends on what they are doing.

Oh you found a 'bug' in Mac OS, Windows, or Linux? Probably not.

Oh you found a 'bug' in some open source library? Maybe.

Or in our in house developed framework? Probably.

In house framework I wrote? Certain of that.


> Understood as senior: New tech creates new problems.

Related: There are no new problems.


Things I've learned: I'll never be a senior programmer.


> Understood as senior: Legacy code that I wrote myself is hard to read.

Ha, yes, I occasionally come across code I wrote years before and have a few "WTF moments"!


I always find myself saying “Who is the shithead who wrote... oh...”


It's a good sign - it means you are learning and getting better.


New tech solves your existing problems and creates new ones. I have a big pile of problems I have had for many years that I’d love to trade for some new ones. I’m happy to fix one problem and get 5 new ones. I’m happy to get rid of an easy problem and get a new much harder problem. The important thing is that I get some new problems, and nothing will do that like new tech.


> Understood as senior: If a junior programmer tells me they found a system-level bug, I won't believe them and will tell them to go figure out what's wrong with their code.

And, yet, it does happen. The important part as the senior guy is to make the junior guy create a test case and then cut it to the bone until it is obvious where the bug is.

Story time: It's mid nineteen-ninety-mumble and your intrepid hero is a junior programmer handling multi-site integration and testing tool infrastructure. This being the time when the Swiss Army Chainsaw(tm) (aka Perl 4) is well and truly entrenched among the sysadmin and toolsmith programmers, my technical superiors throw me a couple of Perl books, set my deliverable date impossibly soon, and tell me to get going post haste.

So, I code. It's not a lot of code, but it is parsing and matching files in 3 different formats from 3 different sites. Of course, I hear what you are saying: "Perl and parsing is like mixing ammonia and chlorine--and probably more painful." Yes, I concur. But, it is the tool at hand in the long forgotten mists of time when RAM was expensive and spinning rust still resembled an iron brick. So, off I go with regexes for parsing (Yes, I know, now I have 3 problems).

And everything worked quite swimmingly. Except for a bit of idiocy that nobody could track down that occasionally flagged a couple of records as mismatched when manual inspection showed they really were not. Nobody minded that much as the scripts got 99.99% of the job done and didn't give a false negative, so: "Ship it, junior."

And so we did.

However, the bug annoyed me because I had to go clean up the false positives when they fired. And, if I am anything, I am a VERY lazy programmer--and this was preventing me from being lazy.

So, eventually, while waiting for one of the other groups to deliver, I go spelunking for a testcase.

Spelunking? HAH! Cave diving is a better analogy.

The program took the most inclusive of the syntaxes, used each record from that to build a regex to look for the corresponding record in the other syntaxes, matched what it could and flagged what remained.

So, the program was building a dynamic regex on-the-fly and then using it. Not a huge deal, but the regexes were larger than most people were probably comfortable with. No problem, I validated this on much smaller records out to REALLY big records, and they work well.

Except for those weird cases ...

So, I'm looking through a case that works and trying to compare it to a case that doesn't. And I accidentally fat-finger some character set match and delete a character that shouldn't matter.

And the regex fails ... provoking the Asimovean "That's funny ..."

So I delete another character ... and it works again. What?!?!?!

So, I add a character. And it fails. And I add another. And it works again.

I stared at that regex for what felt like EONS until the light bulb went on.

The one that worked? 511 characters or 513 characters. The one that failed? 512 characters.

So, yes, I, a total Perl n00b managed to find a bug in Perl 4 in my first ever Perl program.

Sometimes the junior dude gets really unlucky and finds an actual system-level bug.
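The "cut it to the bone" loop from that story can even be mechanized; a toy Python sketch, where fails() is a stand-in for actually running the misbehaving program:

```python
def fails(s):
    # Toy stand-in: the real check would run the actual program on `s`.
    return len(s) == 512

def minimize(s):
    # Greedily drop characters while the failure persists.
    changed = True
    while changed:
        changed = False
        for i in range(len(s)):
            candidate = s[:i] + s[i + 1:]
            if fails(candidate):
                s, changed = candidate, True
                break
    return s

print(len(minimize("x" * 512)))  # 512: this bug fires at one exact length
```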


The big one for me was the realisation that the code doesn't matter. I mean sure, it does, to us. It's what we do. But really, code doesn't matter.

To the end user, what matters is that we solve their problem. We let them do their job, and we make that job as easy as possible. And that's what they pay us for.

And to the company we work for, what matters is that we solve the end user's problem, and that we do so in an expedient and economical manner, so that it costs less for us to do so than the customer pays us.

Of course, for us to continue doing this, in the long term, we need to innovate and architect and uphold standards and stay on top of technical debt. And all of that put together is "make the code good". But that's a means to an end, and the customer doesn't care what horrors awaken when they click that dialog box button as long as the result they needed pops out at the end.


This is why calling people "coders" annoys the hell out of me. There was a popular blog post a long way back[1] about how you shouldn't call yourself a "programmer" because that builds the expectation that programs are the key output of your work. "Coder" takes this problem even further — your job is no longer to build a program (with all the thought and design work that entails), but rather to type out code.

I'm not a coder, or a programmer. I'm a problem solver, and I tend to specialise in the sort of software infrastructure-y problems that are usually solved through code. If you think of yourself as a problem solver first and foremost, then you'll often realise that the best solution involves zero code.

This ${problem}-solver versus ${solution}-user distinction gets even worse when people go beyond that and define themselves as technology-du-jour users; it doesn't matter if it's Rails, React, blockchains, big data, ML, AI...

1. https://www.kalzumeus.com/2011/10/28/dont-call-yourself-a-pr...


Problem solver can be the title of pretty much any job though.

"I'm a problem solver, I fix problems by diving down and welding broken things at oil platforms"

"I'm a problem solver, I make sure the books are correct at the end of the financial year"

etc. I mean it is correct but if someone calls themselves a coder/programmer/software engineer (even if they are not real engineers) then I know roughly what they are doing each day even if the value they create and what they are working on can be wildly different. Just like a deep sea welder.


Yeah, I wasn't entirely clear on that bit :) There absolutely is plenty of room for a word more specific than "problem-solver". But our choice of words matters — words carry nuance.

In some ways I'd consider "software engineer" as equivalent to "novelist" or "journalist" where "programmer" maps to "writer" and "coder" corresponds to "typist". Software engineer, novelist, journalist all encapsulate a lot of responsibilities, where writer and programmer both talk just about primary means of achieving the job, and coder/typist bring it down to the mechanical skills you require to get the job done.


I actually go the opposite route; I hate calling myself an engineer, because that word actually carries weight due to older, more established professions. The tech/software industry wants the prestige of that title without the work and effort that goes into it.

We are not engineers. We have no standardized certification process or tests. We have no (or very little) accountability. We have no codes of ethics. We may or may not be following proper, accepted development workflows. We may not even know what industry standard processes are.

I am a software developer because I solve problems primarily via software. This can and does include many different responsibilities and skillsets, but at the end of the day I primarily architect and write code.

It's up to you to educate the layperson as to what a software developer actually does. Although I'm not a writer, I understand that a "writer" doesn't literally only write. People can understand better than you're giving them credit for, I think.


I think of myself as an "engineer", but more in the "keep the steam engine from exploding" sense.


it's fair to say that if you don't like it, then don't speak about it

that's your definition of engineering, or that of whatever officials define it

More so, who cares about your skills other than your employer? And if your employer cares about your skills, why should he care about your title?

Even if other programmers like to address themselves as software engineers, is it up to you to decide whether they can be hired?

It's just a title for god's sake


> it's fair to say that if you don't like it, then don't speak about it

I would like to see the status quo changed, which is why I do discuss it. Of course it's fair that I hold an opinion and discuss it. There is no obligation for you to respond if you disagree :-)

> that's your definition of engineering, or that of whatever officials define it

It's not my sole opinion:

> As with many other professions, the professional status and the actual practice of professional engineering is legally defined and protected by law in some jurisdictions. [3]

It's understood that if someone holds the title 'Engineer', they went through a certification process from a regulated body, traditionally. You can see this for example in countries and per state. [0] [1] [2]

> In Canada the designation "professional engineer" can only be used by licensed engineers and the practice of engineering is protected in law and strictly enforced in all provinces. [3]

In Canada (and I believe some states), it is illegal to sign off an email or other correspondence as a "Professional Engineer" if you're not actually licensed as such. [4]

---

> is it up to you to decide whether they can be hired?

When did I mention hiring? All I said was the term 'engineer' is loosely used in the software industry, and it has absolutely no standard around it.

> employer [...] why should he care about your title?

I wasn't talking about my employer at all. I only referenced skills because I was responding to a portion of the parent comment.

> It's just a title for god's sake

It isn't. How we frame something is very important in my opinion–just as important as the concept itself. [5] If it's "just a title", then people should have no problem calling themselves programmers or developers. However one can see that we call ourselves "engineers" because it sounds prestigious, despite the software industry being a total joke when it comes to standardization or even following basic modern practices consistently.

[0]: https://engineerscanada.ca/accreditation/about-accreditation

[1]: https://ncees.org/engineering/

[2]: https://www.ncbels.org/

[3]: https://en.wikipedia.org/wiki/Regulation_and_licensure_in_en...

[4]: http://www.occupationalhealthandsafetylaw.com/ontario-man-fi...

[5]: Consider for a moment the term "Global Warming" versus "Global Pollution Epidemic". I strongly believe if we had gone with the latter instead of the former, there would not have been pushback to the scale that we've seen. It certainly would have avoided the confusion of "oh, but this winter is so cold, global warming must be a hoax"! It also shifts the focus from an effect of pollution, to the pollution itself. This example is quite different than the software engineer/developer example, but I think it illustrates my point that how things are framed is very important.

We are not software engineers, and we won't be until regulatory bodies exist, and we develop codes of ethics.


> We are not software engineers, and we won't be until regulatory bodies exist, and we develop codes of ethics.

Nobody said that you are a software engineer; you can just call yourself whatever you want.

Those who call themselves software engineers are indeed software engineers; no one can forbid it.

A fancy term like "engineer" is for marketing purposes. Just like you said, it's prestigious; they use it because they want to impress people. Anything wrong with that? No, it's correct.

And it's also correct if anyone thinks they don't fit the title of engineer, because it's his/her opinion, about which the software engineers likely won't give a fuck.

No one can stop them from using the term.

As for Canada's law, the earth is bigger than that AFAIK. China and US software engineers are waiting for arrest, then. Except for Texas, FYI.

Standardization/basic practices mean shit, by the way; it's the research and development phase of engineering, and people can invent what they want in their own ways as long as their products are legit.

From Wikipedia:

> Engineers, as practitioners of engineering, are people who invent, design, analyse, build, and test machines, systems, structures and materials to fulfill objectives and requirements while considering the limitations imposed by practicality, regulation, safety, and cost.

Wikipedia can be sued at any time, as you like.


> It's understood that if someone holds the title 'Engineer', they went through a certification process from a regulated body, traditionally. You can see this for example in countries and per state. [0] [1] [2]

I think you're misinterpreting the intention of the title of "Professional Engineer". As far as I can tell, it's for accountability for public projects (buildings, power, etc.). It's not strictly limiting what job titles can have the word "engineer" in them.

Most engineers in the aerospace industry don't even take the FE exam. Would you refuse to call most aerospace engineers "engineers" then?


I’ve posted it before. But it gets confusing on the global stage. Software Engineer is a protected title in some countries. In mine it means studied a great amount of advanced math in college, almost nothing else. We usually prefix it with a word that tells if we studied 3 or 5 years. SE in US seems to mean just Software Developer I guess?


In Ontario, Canada you can call yourself a Software Engineer. It's prefixing that title with "Professional" that will get you in trouble if you're not licensed with the Professional Engineers of Ontario under the Professional Engineers Act.

Requirements are pretty high: http://www.peo.on.ca/index.php/ci_id/2057/la_id/1.htm

But I suppose industry/capitalism loves the fact that we don't require licensing in order to produce software, even when said software is supposed to be safeguarding health, property, economic interests, etc.


In Quebec, Canada Software Engineer is a protected title and you must be registered with the Ordre des ingénieurs du Québec (http://oiq.qc.ca)


I dunno. I’m a programmer because I like solving problems through code, and I’m not really interested in solving problems outside this toolset. Some artists were exclusively painters, and we still call them that.


Picasso could paint a whole house, in one day - two days tops if you wanted multiple colors.


To you that's great, but the article the parent was referencing is from a marketing perspective. He gives the example of quants: they solve financial problems with code. The same is true of data scientists; they just solve analytical problems with code.

In both of these situations they're programmers, but they sound cooler and will go a lot farther when applying for jobs or moving up in a company than the person who labels themselves as "a programmer".


A quant is somebody who specialised in solving very specific finance problems, and a data scientist is somebody who specialised in solving very specific statistics/data analysis problems. I wouldn't hire myself for a data science or quant role any more than I would hire a data scientist or quant for my role, because there's a whole bunch of ancillary skills that we each have that are required by our respective jobs that go above and beyond just programming.


I totally appreciate this attitude. One of the most important qualities of software development is to solve the customer's problem with as little code as possible. "I am coder and therefore the more code I write, the more productive and valuable I am" is a blinkered mindset.

At the same time, you have to actually describe your profession somehow. Everyone solves problems. Solving problems is definition of productive work. A carpenter solves problems by cutting pieces of wood apart and attaching them together. A bricklayer solves problems by stacking bricks on top of each other, with a bit of mortar in between to smooth them out. A car mechanic solves problems by fixing cars. A surgeon solves problems by cutting people open and fixing their insides. Eventually it turns into a bunch of meaningless MBA platitudes. "We don't 'build houses', we 'deliver solutions' to people's shelter-related problems."

If you go to a good surgeon with a problem that isn't going to be fixed with surgery, the surgeon is going to recommend a different solution and possibly even refer you to a different kind of professional. "Hey, you just need to rest that knee and maybe get some physical therapy. Here's a physical therapist I recommend." People understand that surgeons are smart and will occasionally listen to them, so surgeons can get away with this. More importantly, surgeons understand this. Surgeons don't go around thinking, "man, I need to perform even more surgeries because I'm a surgeon and that's what surgeons do" (or, at least, they shouldn't)--they have a deep appreciation of what surgery is and when it is or is not appropriate.

So I don't mind thinking of myself as a programmer, because I'm a programmer the same way a surgeon is a surgeon. If you come to me with a problem that cannot be solved by programming, I will tell you that, because to do otherwise would be a form of malpractice.


This is why I always respond to questions about what I do with "software engineer". I've noticed people seem to perceive "engineering" as more serious than "programming", and it helps to promote the idea that I build things to solve problems and don't just sit there turning designs into code.


From Wikipedia:

Engineers, as practitioners of engineering, are people who invent, design, analyse, build, and test machines, systems, structures and materials to fulfill objectives and requirements while considering the limitations imposed by practicality, regulation, safety, and cost.

If you’re doing that you’re an engineer. Hopefully we are all doing that.


I feel like in areas like software, hardware, electrical, and mechanical construction, there are two descriptors of practitioners, and they are at opposite ends: hackers and engineers. At least that's how I think about myself. Sometimes what I do is hacking together a proof of concept, and sometimes I'm engineering a product.

Of course, there is something in the middle, and for software, that is probably “coders” or “programmers”.


Unless you're in certain states where engineer is a protected term.


Engineer is not a "protected term" in any US state that I'm aware of. "Professional Engineer" has a specific licensed meaning but, if I have a degree in mechanical engineering I'm pretty sure I can call myself an engineer anywhere in the US without the licensing police coming after me. (Unless, of course, I imply that I'm licensed when I'm not.)


If you work in a music studio, and you call yourself a sound engineer, no one is going to ask to see your credentials because it's assumed that you engineer sound. The same holds true with software engineering. No one expects you to carry some sort of card. When it comes to structural or civil engineering, there is a far higher expectation that the things you design are not going to fall down. But we're able to understand this distinction. The idea that we should disallow computer coders/programmers from using the term engineer is not based on reducing confusion...


>When it comes to structural, or civil engineering, there is a far higher expectation that the things you design are not going to fall down.

The big thing is that you need a P.E. in many cases to do things like sign off on drawings for regulators. Some civil engineers, mechanical engineers, etc. have PE's and many don't. In Louisiana, I had business cards with an engineering title and definitely worked as an engineer. At some point, had I remained in the oil business, I'd have gotten a PE because I'd presumably have eventually been in a position where I had approval authority over designs submitted to various government agencies.


The term "engineer" is definitely protected in Texas, and I think some other states.


Someone ought to tell whoever put all these job listings up (including in Houston): https://www.rigzone.com/a-mud-engineer-jobs/

ADDED: TIL apparently Texas is indeed exceptionally restrictive (in theory) about the use of the term "engineer." [1] I'd be pretty certain this is widely ignored in practice. Leaving the oil business aside, I'm guessing that tech companies in Texas probably advertise engineering positions now and then. (Yep: https://jobs.dell.com/location/united-states-texas-round-roc...)

[1] https://www.statesman.com/news/20160903/their-name-on-the-li...


The oil industry is set up as nested subcontractors so that no one with any money is responsible for these kinds of screwups.

Go look into the actual law: you legally can't call yourself an engineer in Texas without a PE, barring a couple of tiny carve-outs (on the order of: you work at NASA, and NASA calls you an engineer).


Replying to your edit: there are no jobs on that Dell site in Texas with engineer in the title. It's including some from Minnesota for some reason.


The problem of linking to dynamic sites. See e.g. https://jobs.dell.com/job/austin/software-senior-engineer-re...

There are doubtless tens of thousands of Texas job listings for "engineer" positions in software and elsewhere. (Software is especially notable only because my understanding is that PE's in software have basically been phased out. So you basically can't get licensed in that branch of engineering even if you have an accredited degree and have met the other requirements.)


Not Texas iirc.


As I recall from my days in the oil industry, "engineer" was one of the most overused titles. We used to joke about "mud engineers" (reps of the companies that sold the ingredients that made up drilling mud) being, in fact, "mud salesmen." (If you were being generous, they were technicians, but they did mostly make recommendations and sell you stuff.)


Heh, on iron ore mine sites we have "hose technicians" whose job is to clean up any spillage with a high pressure hose. There are also jokes about "sanitation engineers" who clean the toilets.


Exactly. I would welcome changes that would bring software engineering to a state where it was a protected term. If someone used that term I could have a high degree of certainty that they would abide by certain standards, both professional and ethical, such as "certain percentage of test coverage" and "certain big-O tolerance" for different project levels, etc. Otherwise I'd write the person off as a coder.


Different places have different standards for engineers, and any other title. It doesn’t matter that you call yourself an engineer in any case. What matters in some cases are the standards you use; is it licensed by someone or something, or is it something else? That’s what you should be asking. What are the standards used?

You can do engineering without following any official standard, and anyone who does engineering is of course an engineer, so yeah, the protected title thing is just meh.


I've found myself saying "I, uh, build software" lately.

I guess because I don't like any of the established terms.


I tend to avoid the word "building" in the context of describing what I do. After all, the compiler does the building :-) "I design and develop software" is what I usually say.

Would be nice if there was a word that captures the idea of someone both designing and constructing the internals of a machine.


I try to avoid the word "design" because most people immediately think of graphic design and get the wrong idea of what I do. Even though it's not as technically accurate, I think "building" or "engineering" are better words to use for people not familiar with software development.


It's funny, I've gone from "I'm a coder" -> "I'm a software developer" -> "I'm a software engineer" -> "I'm a developer" -> "I make useful things."


And on the best of days, that involves deleting code.



My old boss was exceptional at this. He had hired a lead engineer to architect code to support some new features and hardware. This lead was working on one of the important modules. It was 1,300 lines of convoluted code and still not done. At that point the guy quit. My boss spent a day thinking about the requirements and wrote something that made perfect sense in 30 lines of code, all in one function.


Fun fact: Bill Atkinson (featured in this story) also wrote the round rect drawing routine for the original Mac.

Source: https://www.folklore.org/StoryView.py?story=Round_Rects_Are_...


…among many other things, including writing MacPaint, QuickDraw, and HyperCard.


You're spot on. Regarding your last point, though: my interpretation of the blockchain hype was that the business's problem was that it needed quick and easy money. Blockchain investors solved that.


Whether you perceive the problem as one worth solving or not, Bitcoin defined a very specific, if extreme, problem it wanted to solve, and proposed an extreme solution to that extreme problem. There's a few other projects (such as Ethereum) that are also trying to solve some very specific problems.

The issue is that the problems that Bitcoin and Ethereum are trying to solve imply almost generating money from thin air. That gave unscrupulous people a lot of silly ideas, on the one hand, and, on the other, gave the people desperate to prove they're "thought leaders" (whatever that means) a lot of different, but still silly, ideas.


I often say "That's not programming, that's just typing". (Stole it from someone, can't remember who now.)


this topic never stops, trust me

It's time for you guys to realize that the effect of calling yourselves "coders"/"programmers"/"problem solvers" is nothing but marketing.

It's just to impress those who don't know otherwise


> I tend to specialise in the sort of software infrastructure-y problems that are usually solved through code.

So basically a coder?


Back in the ice age we were called 'programmer analyst' or even 'system analyst'


machine tamer?


I would argue it's even worse than that. It's not just that the code doesn't matter, it's that code is a liability, and the more code you create the more problems you create.

Code has to be maintained, or it will eventually fail as other moving parts around it change their interfaces.

Code rots. Platform norms are always changing, so today's fresh new code becomes tomorrow's smelly old code.

Code interacts with other code. As the volume of code increases linearly, the number of these interactions increases exponentially. Eventually the complexity becomes unmanageable and the whole system has to be rebuilt. The more code you add to a system, the faster you hasten its demise.

All of which means that code is a liability, in the balance-sheet sense of the term. Throwing code at a business problem hurts the bottom line. The goal therefore becomes to throw just enough code at the problem to solve it, and not one line more than that.

This is the difference between inexperienced and experienced developers. Inexperienced developers handle code like it's spackle. Experienced ones handle it like it's uranium.


I'd agree with you here. I'm still a way off seniority, but for me the takeaway from the past 5 years is that good code is code that can be deleted with minimal work.

All code we write is going to be wrong in some way as a result of our understanding of the problem domain being incomplete so I'd rather see code with an obvious place to put an if statement than a lovingly constructed masterpiece of indirection and abstraction.

Obviously there are basic rules to follow (make it testable, don't mix too many concerns in one place), but I see so much bikeshedding and overcomplicated code when it just needs to not be obviously wrong, and easy to replace when needed.


The code doesn't matter to the end user. Sure. You can say that about any industry. Engines don't matter as long as you can get from point A to point B and gas is cheap, which it isn't anymore. Healthcare norms don't matter when you are young and healthy. You can get away with this kind of reasoning in the short term. Engineering and almost any industry have rigorous codes and norms you have to follow, but not software engineering. Many software engineers are not engineers, but more like unlicensed plumbers or electricians who can fix the leak, and perhaps throw in a bunch of extension cords to hook up some electrical devices. Extension cords may work well for a while because they fix the customer's problem, but in the long term they may cause outages and fires. I think Software Engineering as a whole would benefit from more rigor. Of course, there has to be the right balance, as too much rigidity can hinder creativity and productivity, but saying that code doesn't matter is the wrong message.


This is true with basically everything. Game development attracts more than its fair share of truly horrid code, to the point where success seems almost inversely correlated to code quality. If you decompile Terraria (using CIL, which preserves the object design), for instance, you'll notice that its main class is over 40kLOC and that basically all of the business logic is encoded in one great big chain of if-statements. However, the game has been extremely popular and successful (and is pretty fast, and looks really pretty).

Somewhere there is a balance, but for business purposes it leans closer to the "working ugly hack" side.


Maybe games can get away with more things because when they're done, they're (often) DONE.

There is no next release, maintenance, new features etc. Once Balloon Pirates is done it ships, and is never touched again.


I don't think that's been the case for a long, long time. Almost every new game on modern consoles and PCs will see one (or many) updates in its lifetime. Game cartridges, things we used to think of as immutable, are merely storage devices for delivery of the original game code. Two recent games I purchased for the Nintendo Switch, on cartridge, both required a download before I could play them.


It's more blurry, but games are still DONE in a way that ongoing projects where new features are constantly being added aren't. There are some exceptions like Minecraft, but on average most games cannot economically justify multiple full-time engineers working on them for over a decade.

There is also the side issue of game engines which many teams reuse from game to game so maybe that doesn't really apply.


Terraria has been getting updates for 8 years now. Maybe that's why it's such a mess.


This is very far outside my realm of expertise but isn't it possible they're running some sort of obfuscation on the binary before releasing? 40k LOC seems unbelievable to me, surely even scrolling around in a text editor would be laggy as hell.


> all of the business logic is encoded in one great big chain of if-statements

One of the big lies of OO design is that you can manage that kind of complexity better with objects/classes, that you should factor out functionality into tiny pieces, and so on.

Unless you have written (and debugged!) an actual video game, you should spare your judgement.


There are certainly disadvantages. For a start, it keeps freezing my VS Code :D

But modders also have a hard time adding features if the logic for all items is in a single function. This seems like exactly what OOP / interfaces were designed for.

Also, I'd love to see Terraria's actual source code. The decompiled versions had a ton of stuff like

> if (num1 != 109 && num1 != 110 && (num1 != 113 && num1 != 115) && (num1 != 116 && num1 != 117 && num1 != 118)) return;

or

> else if ((int) Main.tile[i, j].type == 19) Type = 94;

20x in a row, with different numbers each time, obviously. I hope the actual code was more readable and just lost a lot in translation, but most of the more readable styles should still be visible in IL (enums, constants, etc.). There might also have been some obfuscation (I can't remember), but it couldn't have been a strong one, since the names were preserved.
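
For what it's worth, a condition like the first snippet compresses into a structure that should survive decompilation. A hedged sketch (the IDs are copied from the snippet above, but what they mean, and the names I've used, are my invention):

    using System.Collections.Generic;

    static class TileFilter
    {
        // Same membership test as the if-chain, expressed as data.
        static readonly HashSet<int> Handled =
            new HashSet<int> { 109, 110, 113, 115, 116, 117, 118 };

        // Caller: if (!TileFilter.IsHandled(num1)) return;
        public static bool IsHandled(int num1) => Handled.Contains(num1);
    }

If the shipped code used something like this, I'd expect the set and the names to show up in the IL, which makes me suspect the chain really was written out by hand.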

But yes, there are also advantages to their style of code. And in the end they delivered a product and that's all that counts.


Exactly: having conditionals spread throughout the program is a great way to increase complexity.

Rules engines, statecharts and other forms of (declarative) behaviour modelling solve real problems.
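
For instance, the `type == 19 => Type = 94` chain upthread collapses into one declarative table. A minimal sketch (the 19/94 pair is from the decompiled snippet; the second pair is invented for illustration):

    using System.Collections.Generic;

    static class TileReplacements
    {
        // All the behaviour lives in one data structure instead of
        // being spread across if-statements throughout the program.
        static readonly Dictionary<int, int> Map = new Dictionary<int, int>
        {
            [19] = 94,  // from the decompiled snippet upthread
            [21] = 97,  // invented pair, purely for illustration
        };

        public static int Replace(int type) =>
            Map.TryGetValue(type, out var replacement) ? replacement : type;
    }

Adding a rule becomes a one-line data change, and the full behaviour can be read (or validated) in one place.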


Then you have technical debt. Wordpress appeared good to end users, but we know how it is underneath and all the problems that come with it[0]. It's a fine balance for sure, and forgetting that solving the problem is what matters is indeed a mistake.

[0] About that, there seems to be a 'foot in the door' effect at play. If you win users over with a good-enough but imperfect product, they won't mind too much if you take responsibility for problems when they arise. Another instance of worse is better.


Wordpress is a great example. It's the most successful, most loved blogging platform, and usually users' first choice.


Man, I cannot deny, and must submit to, the reality of Wordpress's dominance. That said, have you ever poked at Wordpress plugin code? I won't name the one I read, but it was a catastrophe. And not in a pedantic, purist sense: a real catastrophe, with dead code everywhere and while loops 8 screens long, copy-pasted above each other. I think it was the #2 most downloaded plugin at the time.


The code is crap. Wordpress is crap. It's unsafe, open-to-hackers crap. The plugins are crap.

But it solves the problem users want solved. It's lego-level blog assembly, which is as technical as many users want to get. A much "better" system - maybe a nice type-safe Haskell static HTML templater with a command line build system - wouldn't.

Flipping this around, from the user's POV, "I wrote some code that almost does Thing X" is not a solution - it's just some code that doesn't quite do the job.

It's nice that it uses functional programming or formal methods or $shiny_language or ML or whatever. But if it doesn't do the job, it doesn't solve the problem.


I don't agree with the analysis. It's mostly that mainstream life floats on ambiguity and undefined contexts. Wordpress does some things perfectly for users, but then bugs or leaks or plugin conflicts happen. Other platforms may have solved those problems beforehand, but users didn't anticipate them, so they didn't value the solutions.


He didn't say Wordpress solves everyone's problems. That it doesn't solve your problem does not negate his point.


Bugs, leaks, and upgrade conflicts are not just my problems, but every Wordpress user's problems.


And globals. Mutable globals everywhere. (At least as of the last time I looked; I ran away from it all after having to fix an EOL'd plugin 10 years ago.)

Never again.


Why don't they fix it? It's long-lived, it's popular, and they have to maintain it.

Yes, there's backwards compatibility to maintain, but surely some of it can be contained, maybe with shims, the way Windows does it.


> Why don't they fix it?

In all likelihood, because the people maintaining Wordpress are the people who created it in the first place, and they don't know any better.

Why did PHP apps suffer from SQL injection attacks, long after PHP supported prepared statements? Why do Java apps often have ridiculous classes like AbstractSingletonThingFactoryFactory? Why does the Javascript community (still) promulgate useless leftpad-like packages? Why do Ruby apps often end up so heavily metaprogrammed that you can't trust any line of code to do what it says?

Cultures are hard to change. The people who are turned off by Wordpress's programming style probably pick other projects and other communities. Lots of the folks working on Wordpress probably cut their teeth in that codebase. They might not like it, and they might realize there are better ways, but there probably isn't a critical mass of them who share a vision for getting out of the morass.

Personally, I took one brief look under the covers of Wordpress and immediately decided to avoid it.


Maybe an 'if it ain't broke, don't fix it' attitude, paired with 'let's make this new feature instead'.


There's such a thing as taking the backwards-compatibility meme too far, though. You can't keep catering to those who drag their feet. Release an update and say that the legacy version will no longer be supported a year from now.



Ugh. The submitter there uses "Some of you guys are killing people" figuratively. Are they not aware that bad software really does kill people?

I was expecting to see some ill-advised use of WordPress in a critical software system, but no, it's about inconveniencing web developers. </rant>


Debt (technical and otherwise) is leverage if utilized well, and potentially crippling if abused.


I knew a shop that refused to move its customer data out of an ancient MS Access instance until one day it just started shedding records.

That's what I think of when I hear technical debt.

What are examples that make sense and don't just explode in your face?


Whatever it is you're writing doesn't have to work for the next twenty years. If it's sufficiently simpler and cheaper, it's fine if that project only works for the next five years, and you have to start working on a rewrite in four. By then, you'll either have gone broke (in which case the extra scaling would've been useless anyhow), or you'll have four more years of experience to guide you in how to build the next generation.

In your particular case, that MS Access DB was almost certainly the right choice when it was written, and the gotcha in that story is that they didn't start planning the migration away from Access until it was too late — effectively, they didn't pay off the tech debt when they should have, and got foreclosed on.


An example of really bad technical debt from one of our projects: back when Facebook apps were a thing, another social networking site released an app platform that was very similar to Facebook's. Some companies had early partnerships and launched at release, but we weren't aware of it and were caught flat-footed.

Our code wasn't built to be generic enough to be able to work on anything but Facebook. So what we did was simple: copy the whole project, change the stuff that didn't work right.

The next time a social networking site released a platform, we refactored our main code and introduced the proper abstractions so we could use it on all 3 sites. That code ended up working for most of the other social networks that launched an app platform.


I'll use your example: I've seen companies start life as Access databases and make plenty of money, and then pay for the proper rewrite out of the revenues generated, instead of doing it the "right way" up front (when the cost of doing it up front may have resulted in a failure to launch).

I don't think it's the type of debt that's the issue, as much as how you manage it.


Wordpress is spectacularly useful and successful.


Note that I don't claim otherwise.


I think part of the problem is that humans are fundamentally present-biased, so low-quality code that avoids a schedule delay today is considered preferable to higher-quality code that prevents even larger schedule delays in the future.


> Another instance of worse is better.

I hate this term, because worse isn't better, it's worse. But good enough is good enough and technically-better-but-not-available-yet is worse than adequate-and-available-right-now.


Uh, the code does matter in almost all circumstances. That's because in most cases you're not so much solving something unsolved as providing a lower-cost solution via software. The cost of the software (both in terms of initial development and ongoing maintenance) is a factor.

If your problem domain is totally mapped out, maybe any solution that is quick to implement will work. If your problem domain is in any way nebulous or changing, well-architected software will save you a lot of money when responding to changes, and ongoing maintenance will obviously be cheaper if the software is easily testable and robust.


I think the parent's point (though I could be reading my own thoughts into it) is that the things you mention only matter to the user if they directly affect them. An end user couldn't care less what the maintenance costs are if they aren't passed on to them in some way. If they are, then it's completely in the company's interest to have easy-to-maintain software. The same goes if changes are time-sensitive and the company is unable to keep up with change requests. If all that's invisible to "users" (or internal stakeholders), it really doesn't matter how much the boots on the ground hate the software.

But I think "code doesn't matter" also really means that ideally there is no (new) code, because there is in fact an existing solution, but the person asking for the solution doesn't know it. This is more likely the case with internal stakeholders who ask for something to be built that does X, not realizing that there is readily available software or a library that does X (or something close to it). So part of our role is to know the landscape of what part of the domain really needs new (potentially bug-ridden) code to be written.


This realization is what has attracted me to the Erlang/Elixir world where fault tolerance > provable correctness.

The end user would rather have an application that is frantically logging errors in the background but working for their use case than a program that is fixated on being correct and refuses to run in the presence of errors.

Obviously quality code still matters, but at least in the business realm "does it do what I expect?" is basically your success/fail state.


This bothers me. As code continues to eat the world, this is akin to saying “building codes don’t matter”. That is dangerous. And it isn’t professional.


Experienced developers know when to spend time making sure code is actually good and when a terrible crufty hack is fine. If you think "All code must be perfect" then you're not a pro yet.


>If you think "All code must be perfect" then you're not a pro yet

Or they're a pro in a very different field than you.

When I worked in web development, I was gobsmacked at what professionals considered "good enough to ship". Now that I work in medical EHR, where lives could be on the line, the standards for "good enough to ship" are very, very, very different.

I imagine a NASA engineer creating famously low-defect code or perhaps an engineer creating control systems for a nuclear power plant would similarly look at our medical code and think we too ship far too many hacks and defects.


People in fields where they write code that has to be proven can still know when it's appropriate to write bad, hacky code. If they believe "I write proven code for my EHR system, therefore all code must be like that," I would argue they're not pros.

Being a professional is knowing how to do your job well. Applying a single heuristic to every problem isn't that.


> If you think "All code must be perfect" then you're not a pro yet.

> Applying a single heuristic to every problem isn't that.

How ironic :)


I'm surprised this hasn't come up yet, but the following should be required reading. I don't want to invoke "we follow orders or people die" (or blow up), but...

https://www.fastcompany.com/28121/they-write-right-stuff

https://en.wikipedia.org/wiki/Therac-25


That's the point. The context of the code matters. 99% of code is not critical to life. Applying NASA-level rigor to every line of code ever written is an indication of obsession, not professionalism.


I never, ever, said "All code must be beautiful." I never even implied it. Unless there is some general agreement that I am unaware of among architects and builders that building codes exist to make building plans "beautiful".

And about that implied ad hominem: it doesn't matter whether I'm a pro.

Building codes allow one to say whether plans conform or do not conform to code. That way you don’t have to be a “pro” to determine whether the plan is up to code. You compare the plan to the code. You didn’t write the code. You don’t need to think the code describes beautiful plans. You just compare.


And often business owners won't allow you to make that determination alone.


Like in any business area. But in the end, it's their business and their responsibility.


"code doesn't matter" is more like "code beauty doesn't matter", in the same way that beautiful foundations for a building do not matter (or are incidental) to its stability.


Quality matters.

There is quality in code, which you seem to be ignoring; that's troubling.

There is quality in delivering on time.

There is quality in the end-users' experience.

There is quality in the developers' experience.

There is quality in the investors' experience.

Quality is different from maximizing a metric (earnings per share, on-time deliveries, user experience survey score, code coverage).

There is quality in finding the right balance in any situation.

Many, many people want to maximize a single metric, tacitly expecting everything else to be great; that is a good lesson to unlearn. Switching from one quality target to another is not.


A corollary to this is: the code should match the problem domain.

If the business problem is complicated, trying to further simplify the code usually makes it awkwardly abstracted and extra brittle. The fix should not be _just_ cleaning up the code, but rather revisiting the original use case.


Hah, I had the exact opposite realization, actually.

Yes, if you boil it down to its essentials, you are right: the customer is the one with the money, so satisfying their needs/desires is what matters.

But, and that's a big but, we as devs are also humans. We also have desires and needs, and in the long run, companies that successfully balance the needs of their users with the needs of their workforce get the best workers, who solve the problems of their users better/cheaper/faster.

So it's more of an equilibrium kind of thing. If you stray too far in any direction, the organization tanks: either the users lose faith, or the people satisfying their needs do, which leads to failure just as surely.

The complexity of balancing this usually stems from the fact that both of those variables are relative to their respective environments: users can tolerate bad solutions if there are no alternatives available; same with devs.

And that is also subject to information availability: devs can be OK with their situation because they don't know what the salaries are at that other place, for example.


After 15 years in the business I think it’s worse than that: code is a liability. It matters - to have as little of it as necessary and not a line more.


Nope. Sometimes it's the right choice to leave something in that your heuristic would lead you to rip out.

Can we stop with these simplistic maxims? Engineering is about tradeoffs and dealing with complexity. Trying to reduce the decision-making process to a single sentence is silly.


True. Oh man, I was so angry about that. You read all the books, refactor everything, use that functional programming, but in the end: who cares! A shiny new button gets more attention than you refactoring that SQL statement.


If you get away from frontend fluff and work on serious backend systems involving large financial transactions, they will appreciate those more advanced engineering skills and disciplines.


Perfecto! As a programmer turned product manager, I see this first-hand every single day. The end user doesn't care what technology you use as long as it solves their problem(s). My managers don't care what technology my team uses as long as timelines and budgets are met. I'm astonished, and sometimes smile, at how my old programmer self wouldn't have understood this basic fact.


You're right that the end user doesn't care about what technology you use, but do they care about things not working? Do they care that a new feature takes 3 months to develop when it could have taken 1 month if the code had been better architected?

It's funny to see management types dismiss programmers as being "airy-fairy" when they talk about things like code quality and technology stacks, but then they wonder why things don't get completed on time, why bugs happen, why sites get hacked, etc.

Somebody else has mentioned this analogy before in this thread, but if you had a house built, would you say the same thing about the structural integrity? "As long as the floor can support me, that's all that matters." "House dwellers don't care what kind of structural beams are used to support the floor; they just want it to not collapse." That kind of mentality is OK for an average person living in a house, but you'd have to be off your rocker to hire a building manager who said that.


My bad if I gave you the wrong impression about management types. You're right that things should work. What I meant was this: assume you have technology stacks A and B, each with its own pros and cons (whether tech maturity, programmer skills, or what have you). Then higher management expects the lead programmer/engineering manager to review the pros and cons, make the decision, and deliver, keeping the end user in mind. Whether stack A has xyz features which will do blah-blah-blah for the team is less relevant than the deliverable. Hope that's clear :-)


I think I get your point. As someone who doesn’t have any particular specialty (I do webdev, “embedded” devices, image processing, tooling, design, etc.), I always hesitate when someone asks me what I do. I consider the things I can do as tools to get something done.

I interned at a research facility where I had to figure out obscure serial commands for a two-axis objective stage, interface with a microscope camera, and do image analysis. Then I wrapped it all in a Python library for them to use.

From my mentor's perspective, however, the goal was "simple": you find a microorganism swimming under the microscope and keep following it to record its trajectory. Make sure each tick is at most 10ms, because those little ones move FAST. They didn't care if I used 4-space indents or 2. They cared about getting a pandas dataframe so they could do all the analysis.

I can't explain this in a word or two, and I still don't know what to call myself! Am I a developer? I don't know.


> Am I a developer? I don’t know.

I would say the title "Research Engineer" could fit quite well based on what you describe!


> the code doesn't matter

It does if you want someone other than the person who wrote it to be able to fix bugs and add features.


I agree with you, but I can't bear to say code doesn't matter. Try not writing any and see how valuable software is without it. Better yet, delete it and see what happens :D. I expect "codeless" will become a thing one day, but until that day comes you will write code, because the code is necessary, and I have trouble saying that the code is needed but also doesn't matter. Least important necessity, maybe?


I've read on this website about people launching minimum viable products that appear to the customer to be some kind of automated service, but in reality are the owner manually fulfilling each order.

The software isn't necessarily what the customer values. You can provide a valuable service with 0 LOC.


What tool makes them, and what's the tool made out of? Go back far enough and someone had to write code for something. You're not wrong, though, but it sounds like a domain-specific thing too: vending machine as a service, essentially. Still, as long as people are writing code, it'll be because there's no choice but to write it for their use case.


I call this the intrinsic quality of the code. As you say, it isn't that it does not matter at all, just that many of us over-index on it.


> To the end user, what matters is that we solve their problem. We let them do their job, and we make that job as easy as possible. And that's what they pay us for.

If your code quality is bad, you can't do that efficiently. Your developers will hate their jobs and churn like crazy.

And by "bad quality" I don't mean "not unit tested" or "not commented" or "poorly formatted". I'm talking about code that is difficult to reason about, difficult to debug and difficult to refactor or extend. This is hard to quantify or even verbalize. It has to be felt.


And the business will feel it eventually. Developers have this magical ability to create invisible time-delayed catastrophes.

