Absolute truths I unlearned as a junior developer (monicalent.com)
1529 points by dcu 40 days ago | 511 comments



Overall a good article, but I completely disagree with the notion that "good enough is good enough".

I've been in a lot of code reviews where developers push back because it's "good enough". You need to maintain a defined level of quality otherwise codebases go to shit very, very fast.

I was recently told in a code review that a Cassandra read before a write (to ensure there were no duplicates) was "good enough" because a duplicate "probably wouldn't happen very often". Meanwhile, the consequences of a dupe would lead to a pretty bad customer experience.

I pushed back hard and forced the developer to rewrite the code entirely. Would "good enough" be okay in this situation? My bar is much higher than this developer's and I stand by my decision. We have the luxury of being tasked with solving customer problems, and if you only strive for "good enough" every time instead of "the best I can do within the constraints I'm given", then in my opinion your career won't be very successful. We always have to make the best tradeoffs when it comes to time and expense, but the best developers are the ones that come up with the best solution and the best code that fits within a particular set of constraints.


Hey! OP here. I definitely don't mean "good enough is good enough" as an excuse -- pushing for quality is extremely important.

My point was more about nitpicking line by line for perfection. What you're talking about sounds like a legitimate performance issue.

I think we're on the same page, but maybe my point wasn't clear enough. I tried to make it clear in my last point that code quality is important, but it's important not to confuse code quality with more minor things like idiosyncratic coding style.

Thanks for reading!


When I code review, I differentiate between "code quality" and "opinion". First off, there needs to be a coding standard guideline so that 80% of all "opinion" vs "coding standard" issues are well defined. This is one of the reasons I like Go: things like gofmt are the great equalizer.

I probably won't like the variable names people choose, but I won't comment on that because that's "my opinion". I will comment on even the smallest bug I see, because that's what we're paid to do. So line-by-line "perfection" is what I believe we need to strive for in terms of code quality. Maybe not so much "perfection" as "best practices" might be a better way of stating it. We always need to strive for best practices so that our code is predictably easy to maintain, read, etc.


Agree with this. I obviously comment on bugs and also convoluted code that I know will be hard to understand later. If it is just small details, it is easier to just change them myself when I inevitably have to revisit the code later on, rather than nitpicking and arguing during the review. If I don't have to revisit the code for any change, then just let it be. Out of sight, out of mind.


Another way to skin this cat I'd say is that the definition of "perfection" or "best practices" is different depending on the use-case, company, product, or mission. Making sure you have a very clear definition of quality and "done-ness" is really important here. For some products any bug is a no-go. For others, lots of small visual bugs are ok, and on and on. But once you agree on that definition of quality, I totally agree with you that everyone needs to champion that definition.


I'm going to challenge you here (admittedly without full context, but to make a point). It sounds to me like the problem you're describing requires a database with ACID transactions, which Cassandra is not. Therefore Cassandra is a poor choice, and in my view that is poor engineering. That being said, I'd err on the side of agreeing with the developer who determined that they are not capable of writing code that guarantees ACID transactions in Cassandra (that's a hard thing to do), and I'd really question the choice to use it to begin with.

Again without appropriate context, my bet here is that you guys are using Cassandra for other important features you won't get out of a typical RDBMS, and as such you made a trade-off to begin with and decided that Cassandra was "good enough".

Now the point I'm trying to illustrate (and I'm not just doing this to pick a fight, I promise) is that engineering is about trade-offs, and a big part of it is definitely related to the likelihood of a problem occurring.

I think it also completely depends on the domain of the problem, the criticality of the process you're building and the outcome of a major failure of your assumptions.

So I'd argue that "good enough" is an entirely appropriate answer in many contexts and domains, and it's important not to make a blanket assumption that it's wrong.


We aren't in disagreement, and I don't take your comment as a challenge; it was exactly something we went through with this feature. The point is, who gets to decide what "good enough" is? My "good enough" is a lot different from my coworker's "good enough", and I think that's what separates levels of maturity as a developer.

In fact Cassandra wasn't my first choice, but a strongly consistent database wasn't available to us. As I mentioned in another comment, making the very best decision you can given the constraints of your system is what one should strive for. Not stopping at "good enough" because of (poor) intuition that error conditions "probably won't happen".

We decided to go with LWT and eat the latency costs as a trade off to "stronger" consistency, realizing that Cassandra doesn't offer the same strong consistency as an ACID database. Not perfect, but it fit within our SLA, decreased the probability of encountering duplicate values, and if there was an error, it was easier to detect and the user could be directed to try again, vs having a completely silent error condition that would cause a small percentage of our users tremendous amounts of trouble.
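
For those unfamiliar with lightweight transactions, the pattern looks roughly like this. A minimal sketch using the DataStax Python driver; the keyspace, table, and values are invented for illustration:

  # Sketch of a Cassandra lightweight transaction (LWT).
  from cassandra.cluster import Cluster

  session = Cluster(["127.0.0.1"]).connect("shop")  # invented keyspace

  result = session.execute(
      # IF NOT EXISTS turns the insert into a compare-and-set (a Paxos
      # round under the hood), trading latency for duplicate protection.
      "INSERT INTO orders (order_id, user_id) VALUES (%s, %s) IF NOT EXISTS",
      ("o-123", "u-456"),
  )
  if not result.was_applied:
      # The row already existed: a detectable duplicate, so the user
      # can be told to retry instead of us failing silently.
      print("duplicate order, please retry")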


That was my thought, too. If you need ACID transactions, use an ACID database. There are rather a lot of them. And then mirror the data out to Cassandra after the fact, if you need it there. Ironically, Cassandra may not be the source of truth!

Way back in the stone age, MySQL did not yet have ACID transactions. They got them about the same time they stopped bragging about how much faster they were than Oracle, but I digress. Anyway, I had to write a bunch of transactional code around it. Drove the DBA and me nuts. We begged for Sybase (we both knew it well), but the startup CTO was an open source purist and hated his first contact with the Sybase sales machine.

Eventually they folded, and the point was moot.


> You need to maintain a defined level of quality otherwise codebases go to shit very, very fast.

And it's that level that we call "good enough". Or, I would say, acceptably bad.

One of the most important lessons I've learnt over my career is that there is no such thing as "good" software. Everything could always suck less — anything that takes over 0.0 seconds is bad, more than 0kB of memory is bad, more than 0 lines of code is bad. However, your level of badness for each of these metrics might be something you're willing to live with.

It's like hygiene. What you call "nice and clean" for your toilet is not clean enough that you'd cook on it, and even your "immaculate" kitchen is unacceptable for, say, an OR. Hygiene is always "bad", you're just looking for a point where it's no longer unacceptably bad for your purpose.


Codebases all go to shit pretty fast. That is really what you learn as a senior developer. The fully sustainable codebase is a myth. All codebases will inevitably get progressively more difficult to work on, no matter what you or anyone else does.

All software eventually gets rewritten, either in full or in parts. So "good enough" means "will this keep it going until this software, or this piece of it, is thrown away and replaced". Because that is typically much more economical than code review infighting that causes 2-4 rewrites of every feature until it's perfect. Or spending 3 times more time on a feature to make it perfect. Or having to hire very expensive developers who are capable of writing to that high standard.

There are obvious exceptions in specific industries, but this holds true for 80% of software.


I have experienced great difficulties with literal spaghetti code written in Delphi, which one can only follow through debugging, but I have also seen large codebases written in Kotlin and Python where almost everything is in layers and it is obvious where to add a new feature and how to find one that doesn't work. I was very afraid that the project would end up an entangled mess, but luckily it had the right mixture of individual services (for scaling), monoliths, and messaging mediums to not become one.


I understand what you are saying. I think what she's really referring to is: the perfect is the enemy of the good. You have to adhere to a certain level of standards, but at some point you do have to admit there is a difference between "this has problems" and "it's not how I would have done it, but it works and is more or less OK."


Good enough is by definition, good enough.

It's up to the team and customers to decide on that however. Database integrity is particularly important, so with limited information, I'd say you made the right decision. Therefore the first draft of the code was not good enough.

So, we should all be in agreement now, right?


Right, but who decides what "good enough" is? If I weren't code reviewing, then it would have passed as "good enough". And that's the point: just saying "good enough" isn't good enough. There should be a threshold that no one fights over.


He gave the threshold in his comment though.

> It's up to the team and customers to decide on that however.

You said yourself:

>the consequences of a dupe would lead to a pretty bad customer experience.

I've seen many situations where a duplicate wouldn't matter to a customer. It would matter to me because, like you, I'm a perfectionist, but at the end of the day, it's both the team and customers that decide together.


It's also a matter of dealing with the problem now while it's right in front of your eyes and you remember.

Generalizing here, but assume that 6 months later this rare duplicate happens for a very important customer, so you can't just brush it off; you now really have to fix it. By then nobody remembers this code review, so you don't even know if this duplicate is a one-off rare event or if it is going to affect all customers. Fixing it in code review might have taken one guy a few extra hours; now you've sent the whole team scrambling into weekend overtime just to find the issue and understand the implications.


> "good enough" because a duplicate "probably wouldn't happen very often". Meanwhile, the consequences of a dupe would lead to a pretty bad customer experience. [...] My bar is much higher than this developer...

I see a lot of posturing in this anecdote. What I think is missing is:

1 - an indication of how often a customer would experience the issue;

2 - how bad his "bad experience" would be;

Did you calculate the former and take into account the latter in forming your judgement?

Or, otherwise, was the "correct" solution simple and obvious enough that any non-junior developer would have picked that first without hesitation?


You're getting some flak because "good enough" is too loaded and everyone reads it a little differently. No one seems to be reading your qualifier, "the consequences of a dupe would lead to a pretty bad customer experience."

Sometimes, good enough is good enough. If you were able to push back hard in this case, I take it you are senior to the other guy and your decision is/was justified by the product/feature requirements. IOW, in this case, good enough was in fact not good enough.

> constraint

most important thing, right there.


I don't think anybody would argue that pushing bugs is good enough or qualifies as such under any circumstance. I think "good enough" means that no codebase is perfect, and working with nitpicky perfectionist types is the absolute worst, IMO. Not only that, I've seen those codebases too, and they are oftentimes just as convoluted and messy, although without trailing whitespace and missing semicolons.


There's the "it'll do" mean of "good enough", which really isn't good enough. Then there's the "of sufficient quality" meaning, which is by definition good enough. The latter is the true minimum bar, IMO, and what I believe the author means?


I also think that a blatant disregard for correctness is not good enough. For me, "good enough" is more about style and architecture. If there's an obvious bug, even if it only affects 0.1% of users, it should be fixed. After all, if you have a million users, your bug now affects 1,000 people.


Well, it's very clear that what the developer was trying to do was not "good enough."


> So imagine my surprise when I showed up at my first day on the job at a startup and found no tests at all. No tests in the frontend. No tests in the backend. Just, no tests.

That one hits home. I felt like I was being so helpful when, fresh-eyed after finishing some entry-level C language book, I suggested that we add unit tests to the legacy software.

The product lead, to his credit, did not chew me out, but stated very reasonably, "This software is old, relatively stable, and will likely be dead or sunset in five years. It's not worth the man-hours it would take to try retrofitting unit tests onto a piece of software this old, only for the product to be killed six months after we finish writing them."


Tests are a really important part of development, and there's nothing wrong with suggesting them. The wrong thing is to insist on them to the point that it creates conflict between you and your team. Either set an example by writing tests and seeing whether they create value and your team is OK with them, or make a convincing argument; or, if you think tests are needed very badly because everything is broken all the time and you can't convince anyone, get a new job. Sticking around and being bitter about it is the thing that's bad.

I usually ask upfront in an interview if the team writes tests and if they don't and I get the idea they don't want to I probably wouldn't work there.


An absolute truth I unlearned as a senior developer: your knowledge, hard work, and quality of work are not important. The relationship that you have with your boss and your bank account is what you need to focus on.


> your knowledge, hard work, and quality of work are not important.

Bingo. I used to be that guy who'd spend my weekends fixing the code that everyone else left messed up on Friday or at the bookstore reading up on the latest framework. And I thought that someday it would be recognized with promotions or more pay or even a pat on the back.

Nope.

> The relationship that you have with your boss ...

I should have been the guy who was always socializing with management at the office and happy hours. I should have been pushing my way into positions that were closer to the money itself - like getting contracts, attending conferences with management, architecture etc.

This is becoming even more apparent as I hit middle age, and it's harder to justify my high salary when the majority of our code isn't really all that complicated.

> and your bank account is what you need to focus on

But deep down I always knew this was true, so I lived frugally and saved most of my money. I honestly don't see a great future for guys like me who enjoy programming but are too introverted or just don't care to go into management positions. There are simply too many H-1Bs, too much foreign competition, etc., to justify highly paid developers in most companies that aren't doing Google-type development.


This is something I noticed early on, even as a junior engineer (maybe it's my general upbringing that highlighted this well-known fact of life). It becomes obvious quite quickly how little your knowledge and work matter. You can be incredibly smart and come up with wonderful solutions, but it won't mean a thing if your bosses don't like you.

For a profession that has frequently professed meritocracy, it is certainly not one. It's unfortunate.


You need to focus on all of them, not switch between one and the other.

It doesn't matter if your work is amazing if nobody realizes it (unless you are fine being a starving artist, no judgement)

The corollary isn't that you should make everyone think your inadequate work is amazing. I don't think you personally are making that judgement but there are people who would.


What a great article.

Definitely goes under the heading of "Truths So True They Seem Obvious."

As for "Disorganized or messy code isn’t the same as technical debt," I can't look at messy code without automatically cleaning it up. It's automatic as I read and understand. In the last 5 years I've probably checked in hundreds of PR's of cleanup. If this takes anymore time and/or effort on my part I don't notice it. The only thing is someone has to merge it.


I have a rule of thumb to assist with this: if I feel like I have to maintain some state in my mind to understand a particular piece of code, then I opt to clean it up, since it'll potentially save future developers that time.


The thing is, especially with juniors, cleaning up code leads to a few things:

- poorly executed cleanup (aka regressions)

- misunderstanding of requirements (new bugs)

- "standardization" of naming (now you have two standards https://xkcd.com/927/)

- restructuring of code/renaming of abstractions which then adds friction to original authors of code (provided they are still around)

Left unchecked, all of this can happen at the expense of feature work, with an unmerged PR resulting in missed deadlines and a frustrated junior. Alas, we usually learn best by making mistakes, and I find this one hard to teach to some juniors.

I would argue that a measured tolerance of ugly code (and sometimes bad code) is more important until you learn how to spot and write code that doesn't look wrong.

Two articles I found helpful:

https://www.joelonsoftware.com/2005/05/11/making-wrong-code-...

https://www.joelonsoftware.com/2000/04/06/things-you-should-...


I once read someone's estimate that the probability of introducing a bug when fixing a bug is somewhere between 20 and 50 percent. That probably also applies if it's not a bug - if you're just cleaning something up. That should make you think twice.

In particular, it might make you think about game theory. Is the expectation value of this change more bugs, or fewer? Will making this change have at least a 50% chance of preventing a future bug?
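
To make that concrete with invented numbers:

  # Back-of-envelope expected value of a cleanup change (numbers made up):
  p_introduce = 0.3  # chance the cleanup itself introduces a bug
  p_prevent = 0.4    # chance it prevents some future bug
  print(p_introduce - p_prevent)  # -0.1: expected bugs go down, but barely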


There is the debt that is localized to a singular place, and there is the debt that is all over the place and is a death by a thousand cuts. The former is easier to fix (and will probably be fixed by some enterprising person), but the second one is the more nefarious one that a lot fewer people actually take the time to do anything about.


As a Junior: Creating the most abstract grand architecture to apply to every facet of my application will solve all my problems.

As a Senior: Replacing all this architecture astronaut indirection with simple linear concrete code will solve all my problems.


Excellent article! Thank you, Monica. You just put into words a whole bunch of stuff that I always "sensed" but never "said".

After spending 7 million years (it sure seems like it) cleaning up the most vile garbage code you could possibly imagine, I'd like to elaborate on this:

> Architecture is more important than nitpicking. While a small line of code could be improved, the stuff that tends to cause bigger problems down the line are usually architectural. I should’ve focused more on the structure of the application than tiny bits of code early on.

Architecture = the sum of all those seemingly unimportant "tiny bits of code".

It seems like every time I have to refactor or (heaven forbid) rewrite, I have to start deep down in those tiny bits. I've worked places with all these "genius" architects, but when I dive deep down into the code, I find a sewer that couldn't possibly support software life as we know it, no matter how brilliantly it was conceived.

Fellow programmers, you probably know exactly what I'm talking about, all those cancerous tiny bits that kill even the strongest patients:

  - variables so horribly named that no one could ever interpret them
  - 800 line iterations surely destined to be broken by a maintenance programmer
  - conditionals so illogical that no one can tell if they ever worked
  - early exits to hell that can only be fixed by rewriting
  - double negative logic that could not never fail to break
  - 8 lines of code where 1 could do
  - <add your own>
Great architecture comes from both directions, "above" and "below". From my experience (unlearned as a junior developer :-) ), 90% of the problems have always seemed to come from below.

Get good at the trees and the forest will flourish.


From my experience of working at Fortune 100s for over 10 years as a senior engineer, I would say I have seen what you described, but rarely if ever is it the architects' fault. There are usually just a few of them and usually hundreds of developers constantly shoveling garbage into the codebase. They rely upon processes and such... but those all fall apart in practice.

I've come to the conclusion that good is good enough, and working is even better. Usually the business agrees.


> - early exits to hell that can only be fixed by rewriting

What do you mean by that?


  if (a) {
    do_something();
    return;                    /* early exit #1 */
  } else if (b) {
    do_something();
    if (something_else() == -1) {
      return;                  /* early exit #2, buried a level deeper */
    }
  }

  do_other_things();           /* reached by some paths above, not all */

  maybe_exit_here();           /* may end the whole thing here... */

  maybe_keep_going();          /* ...or keep going */
It's similar in spirit to code full of `goto label` jumps.


I'd argue that this sort of thing is fine, as long as the `return` is not nested deeply. A function which starts with a list of 'early exit' simple conditions is fairly easy to deal with, compared with one where you need to read the whole thing to check that the value assigned to the returned variable isn't reassigned later in a different condition. Early returns can significantly reduce cognitive load if used correctly precisely because they exit directly.

Once you have complex bail-out scenarios, the function needs breaking up or insanity follows... but that's true of any sufficiently confusing decision tree.

Generally I have found `else` to be something of an antipattern. I usually find it cleaner to write the code as 'this special early bail-out with a return, and the other case is just the rest of the function'. A bit like pattern-matching in Haskell, or a `switch` block in which the default case is the usual one. I also try to avoid reassigning variables, e.g. in TypeScript everything would be `const`.

But like all generalities, these have exceptions!
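
A sketch of the shape I mean, with invented names (guard clauses up front, then the rest of the function is the default case):

  def ship_order(order):
      if order is None:
          return "no order"  # early bail-out: nothing to do
      if not order.get("paid"):
          return "awaiting payment"  # early bail-out: precondition unmet
      # No else and no result variable to trace through reassignments:
      # everything from here down is the usual case.
      return "shipping " + order["id"]

  print(ship_order({"paid": True, "id": "o-1"}))  # shipping o-1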


GOTO return(); is the droid you're looking for. :)


Several of these points could be grouped under the general belief that "everything is equally important". The #1 change required for growing into a senior role is to form a habit of ruthless prioritization of how you spend your time (and your team's time, if applicable).

Often you end up in situations where all of the following are true:

1. Your teammate or colleague is designing or implementing something.

2. You have different ideas about how that thing should be done.

3. Your ideas would produce an objectively better system.

4. Even though (3) is true, the improvement to the system or design isn't worth the cost in your time, team velocity, team morale/development, etc.

In that case, the right move is not to intervene, and to let your teammate design/implement the system their way. This is hard to do, because most engineers are natural maximizers[1], but for most tasks you're much better off with a satisficing[2] approach. This isn't to say that you should never give feedback on designs or in code reviews - you absolutely should, but always remember that you have a limited budget for your own time and for team morale, and you should spend that budget on the feedback where it will make the biggest difference.

[1] https://en.wikipedia.org/wiki/Maximization_(psychology)

[2] https://en.wikipedia.org/wiki/Satisficing


This really hits home for me. I worked on a project where I encountered an almost identical situation to the one you described. I put my foot down and insisted we build the objectively better solution. We did, and only much later did I realise the damage I had done to my coworker's morale.

Looking back, I completely regret it. If I had my time again, I would stop after explaining the alternate solution. The time we would have had to spend fixing the problems with the original approach would have been worth it.

I've heard this repeated a lot, but I'm really only just starting to understand: Often the best leaders speak up less, rather than more.


> I also learned that job titles don’t “make” you anything. It’s kind of like, being a CTO with a 5-person team is different than with a 50-person team or a 500-person team. The job and skills required are totally different, even if the title is identical. So just because I had a “senior” job title did not make me a senior engineer at all. Furthermore, hierarchical titles are inherently flawed, and difficult to compare cross-company. I learned it’s important not to fixate on titles, or use them as a form of external validation.

Unfortunately, they do matter in the real world.


They certainly "matter" but I think the point more is that skills, responsibilities, ability from one place to the next with the same title is not a good comparison.


True, but the realisation above is key to understanding how career progression works once you hit your cap. A developer can become a senior developer and then a technical lead and then... what? A manager? You can't really go past technical lead while still staying in the trenches.

So what you need to do is be the tech lead for a team working on a bigger problem. Your job title doesn't have to change as long as you're working on a bigger job.


Well, depends. I know people going from CTO of a small start-up to individual contributor in a bigger company, probably for money.

Titles count a lot inside an organisation, though.


I used to work with one of those; he traded his title and pay for a work permit in another country.


> "Good enough is good enough"

+1000 to this. Please don't be the guy that slows me down and prevents me from delivering features because you insist on everything being perfect. I can't tell you how much time and money people who are obsessed with code quality waste. They spend months polishing code only to ship a feature that no one uses and doesn't matter. But hey, the code is "perfect"!


Who defines what "good enough" means? Is it basic functionality? What if that basic functionality contains security vulnerabilities? What if it is written in such a way that will cause potential, foreseeable problems down the road? What if it is written in such a way that it is only understandable to the author? What exactly is "good enough?"


I've found it's far better to err on the side of speed and work pragmatically on features rather than trying to deliver perfection. That's just me though. Nothing is going to be free of defects or perfectly secure.


> Documentation lies

This is a very true statement, especially concerning legacy codebases. I have worked on some projects that have had several developers make changes to them.

The original developers were great: they commented every class, had comments for all the methods, and added comments for any complex or funky logic.

Then the changes came. And the next developers hacked and slashed the existing code base to meet the new spec. Except they did not update any of the comments, so what was once true and reliable is now frail and questionable.

Now, whenever I inherit legacy code riddled with comments, the first thing I do is delete all the comments. This helps me focus on what the code is actually doing, rather than what someone thrice-removed said it should be doing.


I was thinking about this and because of the book _Clean Code_ I had already decided that comments will lie at some point.

So I moved towards descriptive variable and function names, but those could lie as well.

So I'm thinking, how could we ensure truthful intentions at all?

And I think only a combination of small pull requests, good variable and function names and a thorough review can save us here.

But I don't know, I'm just a junior developer.


Types can help. Your code simply won't compile if the types are wrong.

The catch is that you have to define way more types, but if the code is complex enough it's really worth it.
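
A toy illustration (Python type hints won't stop the program on their own, but a checker like mypy catches the mismatch before runtime; the function is invented):

  # The annotation documents intent and, unlike a comment, gets checked.
  def apply_discount(price_cents: int, discount: float) -> int:
      return round(price_cents * (1 - discount))

  print(apply_discount(999, 0.1))  # 899
  # apply_discount("999", 0.1)    # mypy error: "str" is not "int"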


Personally, I think the best thing is education. Now more than ever (in this agile / scrum world) devs are making small, quick changes to existing codebases. These small changes add up, and what was once a great, clean codebase is now riddled with hacks and mis-documented code.

I think educating the developer / team on the existing codebase first would be the right step. Spend a day just going through the code. Learn how it connects, learn the full scope of the project. After this, make your changes in a way that fit into the existing code.

I've had too many offshore developers re-implement existing functionality, like reading data from an Excel spreadsheet, simply because they did not look through the code that was already there. It cost them 2 hours of work to add another library and implement it, when the code they needed was already a simple using statement away.

In theory, this is why good orgs have senior devs: they know the codebases already. Furthermore, they should be reviewing these changesets to call out devs who mis-document or scatter about the code. But senior devs are people too, and they miss things. Additionally, some codebases are enormously complex, and it's virtually impossible for a single dev to understand them completely.


Variable names can also go stale - you should consider replacing all the variable names with placeholders. /s

Deleting all the comments is too far in the extreme. How about just reading them and realizing they could be stale?


I've been programming professionally for well over 20 years (and many years prior as a hobby), and held "senior" to "C-level" positions. In my eyes I'm still a junior kid learning new stuff all the time. When I stop seeing myself as that, I'll switch vocation.


"Loads of companies and startups have little or no tests" which should scare you, or at least it would scare me if I joined a team.

You can definitely over test, but how can you possibly know what you built works (or still works when you change it for the 50th time) if you have no tests? There's a trade-off with testing. Early in the dev cycle, not testing can make you go super fast (supposedly, this hasn't been my personal experience but in general it seems to be true for teams). But you'll plateau quickly and then at iteration 1+n you'll just come to a screeching halt because you introduce bugs or the new engineer isn't confident that they didn't break downstream things and needs to manually test everything. Testing early will cause you to go slower earlier (again, not my personal experience but seems to be generally true of teams) but you'll be significantly faster at iteration 1+n.

I usually summarize this as: testing early will on average make your development faster over the life of your codebase. Knowing this you can make trade-offs. Not testing early is probably better called prototyping. It's OK to prototype in production if you need to. But know what you're getting yourself into.


Tests are anti-agile. TDD is waterfall. Legacy tests add friction to making changes. Sometimes you want that friction, but in a frenetic prototyping phase it's detrimental. This is especially true when business goals are changing constantly (such as during early prototyping).

How do you know what you built works? The only thing that truly matters is that the user story is satisfied and to that end unit tests are terrible. At best you could make some integration tests but that is not the same thing.

I also think there are better practices than tests to keep iteration pain low, but that doesn't preclude tests, so I guess that's not an argument against them. That said, I prefer to focus on making composable, low-side-effect code rather than writing more tests.

Personally I hate tests although I come from games where you need human QA testers to test that your game feels fun anyway so I do admit that's a unique situation.


> Tests are anti-agile. TDD is waterfall. Legacy tests add friction to making changes.

Strongly disagree. Tests empower agile.

Here's a module/class/object/file. The unit tests test the externally visible behaviors of that code. Now stuff happens; the code needs to change. That's OK, we're agile, we can deal with it. I make the changes.

What did I break? I run the existing tests. Some break. Is that because the test doesn't reflect the new changes? I fix those tests. Or is it because I broke something? Good to know that early. I fix my code.

Next question: Did my changes do what they needed to do? For that, I write new tests.

Finding your problems early is a big part of agile. Tests are a big part of finding your problems early.

Now: If you're prototyping a new algorithm, would I write detailed unit tests before it gels? Probably not. (In XP, we called this a "spike" - trying to nail something down, not necessarily writing production code.) But even then, how do you know that your algorithm does what you need it to do? Maybe some tests?
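
To make the loop concrete, a tiny sketch (pytest style, with an invented slugify() as the unit under test):

  # The tests pin externally visible behavior. After a change, a red
  # test means either the behavior legitimately moved (update the test)
  # or I broke something (fix the code).
  def slugify(title: str) -> str:
      return "-".join(title.lower().split())

  def test_lowercases_and_joins():
      assert slugify("Hello World") == "hello-world"

  def test_collapses_whitespace():
      assert slugify("  spaced   out ") == "spaced-out"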


Tests might make you personally feel safer making changes, but they are not Agile(tm), because they attempt to predict needs instead of solving an explicit existing need. They violate YAGNI, and worse, there is work in removing and rewriting old tests.


I see absolutely nothing in the Agile Manifesto that precludes tests. Nor do I see any basis for saying that tests attempt to predict needs. They are for the future, true, but they're also for the present - as is every line of code you write.


>Nor do I see any basis for saying that tests attempt to predict needs

The prediction is that the next time you revisit this code your original assumptions will still be valid. In games, which is my background, game rules (our business logic) are constantly changing during development to the point that you're fighting tests constantly for no benefit. Games are an extreme case but you can extrapolate the experience.

You've already admitted that there is a point when tests are not worthwhile when you are iterating. My argument is simply that in my experience you're in that state more often than not. Ultimately user value is the only thing that matters and tests don't predict that.

And I'm not saying you're not allowed to write tests. If something helps you, do it. I'm arguing against code coverage and test enforcement.


> The prediction is that the next time you revisit this code your original assumptions will still be valid.

No. The tests will encode the previous assumptions. When you run them, and they fail, either you broke something or one of the previous assumptions is no longer valid. But the tests give you an automated way of recognizing which assumptions you'd better think about, to see if they are still valid.

> Games are an extreme case but you can extrapolate the experience.

No - no more than you can extrapolate my experience, which is having core logic that is (mostly) valid a decade later.

> I'm arguing against code coverage and test enforcement.

Or at least against those things in areas that are constantly changing. Even in your world, though, are there areas that change more slowly than the game rules? Would it make sense to have tests for those areas, and not for the game rules?


I still struggle to understand the real benefit of unit tests; the most intuitive tests that test real business logic that I've written always become something like semi-integration tests. Pure unit tests that test a small isolated function are almost never useful for catching bugs after they're written. The only legitimate use for them has been that it's easier to understand the purpose of the function and its edge cases. At times it makes me think about edge cases and identify bugs WHILE writing the tests, but once they're written, they're never useful unless someone's actively trying to understand that exact unit of code.


For unit testing, I suggest starting with parsers, or any string-parsing code you have. They tend to be nicely self-contained and have lots of edge cases that you can test in a data-driven way, using a loop.

Even if you're just using a regular expression for parsing, it can be worth moving the parsing code to its own function and testing it like a parser.
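
For example, with an invented parse_version() (run with pytest):

  import re

  # "1.2.3" -> (1, 2, 3); anything else -> None.
  def parse_version(s):
      m = re.fullmatch(r"(\d+)\.(\d+)\.(\d+)", s.strip())
      return tuple(map(int, m.groups())) if m else None

  # Data-driven: the edge cases live in a table, the loop stays trivial.
  CASES = [
      ("1.2.3", (1, 2, 3)),
      (" 10.0.1 ", (10, 0, 1)),  # surrounding whitespace
      ("1.2", None),             # too few components
      ("1.2.x", None),           # non-numeric component
  ]

  def test_parse_version():
      for raw, expected in CASES:
          assert parse_version(raw) == expected, raw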


Depends on how you define a unit and what you're willing to mock. I treat a 'unit' as a piece of functionality, so one method on an API is a unit. If it does complex stuff, then there may be more targeted tests down below. And depending on what sort of work it does, I may not bother with unit tests, because mocking a DB transaction isn't worth it.

In Java land, when I'm writing stuff I use unit tests mostly as a place where I can run code without having to compile the whole app. Any tests that come out of it just end up being a bit of regression protection. But there's always a bigger, slightly more complex unit test I write which tests the bigger unit of function. Those really straddle the line between unit and integration testing, but I find I get the most value from them, because once you lock in and publish functionality you have a contract you have to honor.


You say you've never understood the benefit of unit tests but you list three really great things: they help you debug stuff while you're writing it, they help you understand what you're writing and the edge cases, and they help others understand your code later. That's all great stuff.


The benefit is being able to make changes and sleep soundly at night.

I agree with the grandparent that they are more useful as a project matures than at the design stage. Starting small and adding tests as designs solidify is a good tradeoff.


you have a lot to unlearn, young padawan. you must first forget what you know ...


You are making the assumption that most people care about iteration n+1, where the code gets so bad it's hard to maintain.

- the average tenure for a software developer at a company is 1.5 - 3 years; by the time things get that bad, it's someone else's problem.

- the business folks at a startup just care about throwing something together long enough to get their next round of funding or the exit.

- no one gets promoted by having code that is easy to maintain over the long term. They get promoted by releasing the new and shiny - not maintenance work. See Google.


> how can you possibly know what you built works (or still works when you change it for the 50th time) if you have no tests?

Because software development did exist before the invention of JUnit et al. The old-fashioned way is 'user acceptance tests', and of course that's not perfect, but it's easily understandable, and nearly anyone can sit there and do it. And they might miss things or whatever, yadda yadda, but at the end of the day, in an environment where everyone is short of time and time is money, a bit of user testing can often be 'good enough'.


I’m not saying you need 100% code coverage, or 80%. Just some level of deliberate automation. In my personal experience, user testing as your only or primary form of testing gets painful fast.


Having lived through the screeching halt, I agree. There comes a point where you genuinely cannot make a change without breaking something on the live system, or without taking five times as long as is reasonable. You no longer have any idea what the real behaviour of the system is or why (doco is NEVER up to date). Not having tests was part of the reason I changed jobs.

I don't know why people assume that tests hold you back from changes. If they do, they were just bad tests which were testing implementation details rather than business logic. That's an argument not to write useless/stupid tests, not an argument not to write any tests at all. It just seems to be far more difficult for developers to decide what a good test should look like than to decide what any other good code should look like.
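
The contrast, with invented pricing code (the first test pins an implementation detail and breaks on any refactor; the second pins the business rule):

  from unittest.mock import patch

  def _apply_percentage(price, pct):  # internal helper
      return price * (1 - pct)

  def total(price, discount):
      return _apply_percentage(price, discount)

  # Bad: asserts *how* total() works. Inlining the helper breaks this
  # test even though the business result is unchanged.
  def test_total_implementation_detail():
      with patch(f"{__name__}._apply_percentage", return_value=90) as helper:
          assert total(100, 0.1) == 90
          helper.assert_called_once_with(100, 0.1)

  # Good: asserts *what* total() does; survives internal refactors.
  def test_total_business_logic():
      assert total(100, 0.1) == 90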


A workflow using the code is itself a type of test. If something breaks, the users will tell you. A lot of code ends up in that space because it was never asked for as an actual project, but now a team relies on it and no developer can be officially assigned to it.


"I don't always test my code, but when I do, I do it in production."


There are some programs that are not designed to last. It doesn't make sense to put in the time in that case.


About two years ago, I didn't get a promotion to "senior engineer" that I thought I was going to, and I threw a huge temper tantrum at my boss as a result (I'm still surprised to this day that I didn't quit on the spot, to be honest).

I was upset, because people that seemed to be contributing less and were less-qualified (at least from my admittedly-biased perspective at the time) were promoted to a higher level than me, and I got a fairly form-letter-esque answer of "we don't have the budget to promote you this time".

The next cycle, they corrected it, and I was officially a "senior engineer" on paper, and I realized how silly my hissy-fit had been. Sure, I guess having a bit more money was nice, but it's not like it radically changed the quality of my life, it's not like having a fancy title changed how people really saw me, and I didn't even bother updating the title on LinkedIn.

I don't think I was "wrong" in what I said. I do think that I deserved the promotion over someone else, but at the same time, I also tarnished a relationship with my boss and coworkers, and I let it get me far more depressed than it should have.

-------

I guess if any "junior" engineers are reading this, try and remember that a title is simply that: a title. They don't matter a lot, try to not get too upset over them, and obsessing over something so nominal is a great way to build up anxiety.

EDIT: Just a note, I absolutely think you should call out a company if you feel like you're being taken for granted. I'm not advocating complacency, just make sure that your hatred is directed to the right places and try to avoid getting too depressed.


Nah, f that mess. If you find yourself in a place that promotes the weak because it's a buddy system, it's not a place to work long-term. It means your boss is in 'don't rock the boat mode' and that's going to hold you back long term. Your boss should be fighting for promotions for their best workers and enabling growth in responsibility as well.


Oh, in fairness to my boss, he did make a strong effort in the next promotion cycle to get me promoted, which, to be fair, was only 6 months later.

The issue was that they had a limited budget for promotions, and basically limited it to one person per team. This by itself wouldn't have bothered me too much, since the person who deserved it most on my team (someone with more experience than me, who was definitely under-leveled) did get promoted that cycle.

What upset me most was that a person on another team (with 1/3 of my experience, with no additional education, and on a team that accomplished nothing (not just my opinion, that team was disbanded a year later)) got promoted to a level higher than me. My direct boss didn't have any control over that team.


Yeah, that clears things up. I think every organization has those dead-weight teams. As long as you put tons of points on your stories in Jira, you look super busy and management gets to see your pretty charts.


Remember: once you retire, no one cares if you were a director. When you are 60, you are just old. The people that do care, you probably don't want to be friends with them.


I currently want "senior" in my title because of a recurring issue when interviewing for new positions. I often get stuck talking to HR because from my years of experience they assume I'm MUCH less expensive than I am. I could use the title so that I stop running into this as much.


I've always found the obsession with titles somewhat strange in our business. In my 7-8 jobs since college, I've never really had a 'formal' title. It just seems like all 'developers' are thrown into the same bin, and the better ones are team leads who don't have any actual authority to do anything (e.g. order a new laptop or approve of a hire).

After about 10 years, I just started calling myself a 'senior software engineer' on my resume and nobody has ever called me out on it.

Must be a Silicon Valley thing tied to salary. But in the areas I've worked, titles don't seem to exist for non-management.


I think it's the same compulsion that makes many (though not all) people who get a PhD feel like they have to mention that fact everywhere (e.g. Dr. John Doe, Ph.D.).


My title is senior application engineer. I don't think I've ever actually said that out loud. I just tell people I'm a developer. Titles are meaningless beyond letting HR know what salary band I should be in.


Yeah, same here. I was excited to get the promotion, but when people ask what I do for a living, I typically say "eccentric" for a more casual conversation, and "software engineer" for something more serious. I realize now that if you have to rely on titles for people to take you seriously, you probably aren't making great points.


I recently changed job, and went from being a "principal engineer" to being an "engineer" (in a company that doesn't do titles). There's more to life.


Senior is as senior does.


The #1 absolute truth I unlearned was: The solution to your problem is the tech stack or framework you're not using. Everyone in this industry seems to have tech wanderlust; using the latest and greatest and unproven is seen as sexy.

* "Oh we could do that, but we're on python 2.7"

* "Oh we could do that, but we're using Java"

* "Oh we could do that, but we're using a relational database"


1. Code and communication are the most important. If you write extremely bad code, you will be annoyed by how much time you need to go through it to find a bug. The more you do it, the less you will want to touch the code, even though you wrote it yourself. (I have seen someone with this experience.) Understanding what your teammates are doing is also important, in order to support each other and provide valuable advice; in this case code is a communication medium.

2. To write good software, we cannot just be coders; we need to understand the business and the project management.

3. There are many seniors who became senior because of their age, not their ability. I am not talking about using a particular tech; I am talking about their mindset. Many of them still think like juniors even though they are in senior positions, and the way they work doesn't scale at all.

4. Fundamentals are very important. https://hackernoon.com/the-doctor-and-the-scalpel-78656f508c...


Great post. Lots of good stuff.

Just to comment on one part at random:

Code reviews would be more useful if review feedback was categorized:

1. Is this a matter of personal style?

2. Is this a judgement call?

3. Is this something that is just wrong and has to be fixed?

4. Is this in-scope, or actually a separate issue?

(This is off the top of my head, so this is more of an example of the kind of categorization I'm talking about than a proposal.)

It's useful because it helps to set the direction and expectations on what to do about an item of feedback.

E.g. if you have a lot of 1, then the right "fix" might be to spin off a task to develop a common style guide. (Or maybe fix an out-of-date style guide, or enforce an existing one so that these issues don't dominate code reviews, etc.) For 4, the resolution would be to open a ticket (or whatever the process is) so it gets proper consideration, prioritization, etc.

Where I am currently we spend a lot of effort figuring out what to do about review feedback (and I think we too often make non-optimal decisions which sucks time as well).


The last couple of teams I've been on have had the custom of prefixing any issue that is purely stylistic in nature with 'nit:'.


I think another truth to unlearn is "users know what they want, and can express what they want in precise technical terms."


And the complementary truth would be "I know what the user wants".

My team and I are kind of trying to break this up, since we suffer from this a lot.


One thing I learned as a developer of 12+ years is that titles are meaningless; where I work, many of the highest-impact developers would be considered 'junior' developers. I wonder why people have such an infatuation with the titles 'junior' and 'senior'. It seems to be more prevalent in European countries.


I don't know if I can call myself a senior (none of the companies I've worked for have used senior/junior in their job titles), but one of the most important pieces of information I've picked up is this:

The simplest way is usually the right way.

What I mean is that fancy algorithms, sweeping design patterns, and "clever" pieces of code are generally not the best approach to 99% of coding specific problems.

Example: nested for-loops, generally considered a bad choice. When encountering a nested for-loop, one may be tempted to try to refactor it into some sort of recursive, highly performant function with O(n) complexity, etc. You go through all that work and then realize that the most iterations that for-loop will ever see is ~10. You just wasted a ton of time writing code that is more complex, harder to debug, and generally more opaque.

In my experience, I've seen this a lot in the context of premature optimization. I've also been the perpetrator many times as well.
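
A sketch with an invented example; at ~10 items the quadratic loop is ~45 comparisons, so clarity wins:

  # Find pairs of meetings that overlap. O(n^2), but for ~10 meetings
  # the clever interval-tree refactor buys nothing except opacity.
  def overlapping_pairs(meetings):
      pairs = []
      for i, (start_a, end_a) in enumerate(meetings):
          for start_b, end_b in meetings[i + 1:]:
              if start_a < end_b and start_b < end_a:
                  pairs.append(((start_a, end_a), (start_b, end_b)))
      return pairs

  print(overlapping_pairs([(9, 10), (9.5, 11), (12, 13)]))
  # [((9, 10), (9.5, 11))]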


Premature optimization comes in many forms, and at least these three cases are doing more harm than good:

1: "It's good practice"

For example: wrapping every function implementation in a memoize function may seem like a good idea (it prevents redoing unnecessary work-heavy stuff), but it actually makes code both harder to read (more boilerplate to skip when reading) and in most cases slower (even when a function is executed only once, you're paying for extra checks and a function call). See the sketch after this list.

2. "We're gonna need this soon"

For example: many times I've implemented an abstraction that makes sense for sharing code with a feature that I know we'll be implementing soon, only to find out priorities have changed and that other feature never gets implemented. What's left is an unnecessary abstraction that only makes the code harder to read and maintain.

3: Optimising for speed/lines of code

Unless you're building a game engine, optimising for speed rarely gives bang for the buck until you have identified an actual bottleneck in performance.

Same for one-liners. It may be cool that you know how to write a 10-line function as a one-liner nested ternary, but your "clever" code is probably less readable and harder to maintain.
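
On point 1, a sketch of the cost (Python's functools.lru_cache stands in for the memoize wrapper; the function is invented):

  from functools import lru_cache

  # "Best practice": memoize everything...
  @lru_cache(maxsize=None)
  def shipping_label(order_id):
      return "LABEL-" + order_id

  # ...but if each order_id is only ever seen once, every call still
  # pays to hash the arguments and probe the cache, and the cache holds
  # every result in memory forever. Memoize the hot, repeated calls
  # that profiling actually points at.
  print(shipping_label("o-1"))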


Properly choosing an algorithm or data structure starts with knowing the nature of your data.


Sidenote: I believe that in general a senior developer has 8-10 years of experience, with an expectation that they will mentor junior developers.

YMMV depending upon region and company.


One thing the OP mentioned, but which I think isn't talked about much, is that everyone has their own style. Some people have great abstractions, others lots of tests, some more code comments or interesting variable name choices; some like functional style, or lots of frameworks & patterns, etc. On our team I can often see who wrote something just from how it's written.

As a junior I just wrote how I liked and hated everyone else's code. Then came a long awkward phase where I tried to fit in closely with other people's styles and figure out their intent and design - this is a very difficult way to write code and I wasn't very productive. I thought about this a lot, and now I just go ahead and blaze a trail and other people can figure out my style. Move fast and break things - or being reckless, it's often hard to tell, but you won't be a 10x dev by trying to be nice and fit in.


Interestingly, I'm at that unproductive phase right now. On one hand, I could fix this fast and ugly, but then my code is going to look no different than all the code I go home and complain about. On the other hand, I can try to make my code really nice and abstracted with well-thought-out design patterns like <golden-coworker>'s. So on one hand I'm a hypocrite, and on the other hand I'm slow and hardly productive, because I am just learning design patterns, and no matter how many times I refactor my code it never looks as good as <golden-coworker>'s. How did you get out of this stage? Please help.


This is surely debatable, but the best code is readable by Engineers of Tomorrow with the least amount of effort.

Getting code reviews from other people that understand this, or pair programming with them, is the best way to practice empathy for those future code readers (which very well may be you).

Don't focus on commenting about the how. They can read the code for that, and those comments almost always drift from the truth. Instead, focus on writing code and comments that describe the why. Consider describing what other approaches you tried, why you didn't use them, and why you went with the current implementation.

This especially applies if your solution may not be the obvious first answer.

Try to keep the Engineers of Tomorrow from repeating prior mistakes. Free them up to make new and grander mistakes, instead.
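
A small illustration of how vs. why (scenario and names invented):

  import time

  # How-comment (drifts as the code changes): "loop 3 times, sleep 2s".
  # Why-comment (stays useful): retry with a fixed 2s delay rather than
  # exponential backoff, because the upstream API rate-limits on a 2s
  # window and backing off further only added latency when we tried it.
  def call_with_retry(fn, attempts=3):
      for attempt in range(attempts):
          try:
              return fn()
          except ConnectionError:
              if attempt == attempts - 1:
                  raise
              time.sleep(2)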


I think that is nice, but may not be aligned with what the company wants. If <golden coworker> is respected, well paid, and promoted, then by all means try to emulate them. If <golden coworker> is gold-plating a lot of small projects that don't really help the business, and hackers are getting rewarded for delivering new functionality that the bosses want, then that route is better.


> Then came a long awkward phase where I tried to fit in closely with other people's styles and figure out their intent and design - this is a very difficult way to write code and I wasn't very productive.

My first gig was on a 3-person team, including me. The other two wrote their code in basically identical format. I ended up codifying a linter config to make a style guide that basically all existing code already fit into. I remember thinking, this is never going to be so easy again.

Your post makes me think I was right.


I believe in tests and documentation now more than ever.

When you have to support legacy code with no docs, no specs, and no tests, you change your mind pretty quickly about the value of tests and a good specification.

As a senior engineering manager, I push hard to get good requirements for my team. It's like being a salesperson with the business side.


One thing I would say about the whole "good enough is good enough" mentality is that if everyone in a company has that mentality, your code quality will decline so far that it will be impossible to get anything done in the future. This happened to my company: ten years ago people had this mentality and it did work for them, getting us a lot of market share and short-term success due to high profit margins. However, today we are suffering immensely from those decisions, losing customers and new hires because no one can get anything done except put out fires caused by critical customer issues.

I think a healthy mix of idealists and realists is necessary, with senior developers representing a good majority of the former.


I think she covered that by mentioning that architecture is more important as a senior developer. My understanding of what she means is not to focus on optimizing the hell out of code. Too often, people demand others use the most optimized piece of code, but if the code is only going to be called once or twice, you should focus your time on more important things.

I think your company is suffering from bad architecture design more than anything else. It seems like everything is tightly coupled with little room for modifications. This probably means those senior developers weren't really senior to begin with. This tends to happen at startups where business is prioritized and people get hired with inflated titles. I have seen that happen in a lot of startups.


What happened here was different. This is a top-25 Silicon Valley company with thousands of employees, but with poor software habits. The people who designed the architecture were always under tight time constraints from upper management, so they always accepted the good-enough approach.

But you are right: the company is suffering from horrible architecture, and any attempt to change it is near impossible because of the sheer amount of code and processes we have. It seems like a never-ending battle.


Developers are the new factory workers. Pay is good now because the industry is expanding and there aren't enough of us, but it won't be like that forever. And the low-end work is already being commoditized (the WordPress ecosystem, etc.).


From a hiring perspective, the problem is, and will continue to be, the quality of each worker. Although there are a handful of similarities between how the two professions operate, I think it is unwise to draw the factory-worker comparison.

The entire idea of a factory worker is that they can be swapped out for another and the output is maintained to a great extent. That simply isn't the case in software, or really any creative work. Even if we assume two workers have equal skill sets, something that is already dubious given the importance of cross-domain knowledge in software, the personality each brings to a team will ultimately alter the entire team, sometimes for the better, sometimes not. The social dynamic within groups is a critical element and something that cannot be commoditized away as in other professions.

As to the point on commoditization within old tech, I can only answer with a "well duh, it's tech!" The goalposts are always moving in this profession. There used to be a whole lot of jobs programming super low-level tasks; those are largely gone and have been replaced with new positions. Last year everyone worked on web apps, this year everyone was on mobile apps, and next year it's anybody's guess.


That's because our tools are not good enough, so we compensate for what we lack in tools with individual developer quality.

I spend a lot of time in the WordPress ecosystem, and I've seen people come to me with 20K-product shops they've built and maintained without a single line of code written.

I've seen companies operating online without having any developers, either on contract or in house.

Because for certain simpler scenarios, there are UI-based tools you can run your business on.

Sure, they come to me because eventually they need something they can't do with the tools they use, but those are becoming more and more high-level jobs.

Web development is a pretty much figured-out problem. You have legions of replaceable, low-value developers churning out pages, and you have a subset of highly paid specialists who can do more advanced things.

But those gazillions of web pages are doing their job: they drive sales, improve information quality, etc. So I could say digital workers are the factory workers of our time.

On certain jobs the line is blurry: designers can do basic WordPress, but they can also set up MailChimp, because the machine needs to keep humming.


Perhaps we are arguing different things. If by developer you mean specifically a web developer, then sure, they are akin to a factory worker in that they are, by and large, on the way out. I am arguing the broader sense of developer as a substitute for computer programmer.


Finance, accounting, engineering, and many other careers have been around longer, have been commoditized to varying degrees, yet still enjoy good pay.

I think developers will still make good money for decades. I could, however, see an issue with junior developers finding fewer opportunities in the coming decade. I think there will be less need for juniors, and the entry-level jobs will be harder to come by. Some of those skills are more easily "commoditized."


My junior thought: spending 3 days to optimize code to make the UI faster and use fewer resources will be appreciated by managers and users, so win-win.

My senior thought: spend 1 day to SOLVE the problem, and 5 minutes removing the code entirely.


Interestingly, some of these absolute truths are absolutely true at larger (50+ devs, or so) companies because forward progress can't be made otherwise. System not documented? Might as well not exist at all. Code is not tested? Well, then nobody really knows if it works or not after even the most insignificant and peripheral changes. People don't treat code quality as a priority? Welcome to days/weeks/months of yak shaving to fix even the simplest issues. Good at communication but can't code worth a damn? Consider becoming a manager, not a "senior developer".


No offence to other developers but this is the easiest read of any article I’ve ever read by a developer. I wish I could write like that. I also wish documentation was written by people with that talent.


What I learned as a developer and data scientist is: either you know what you're doing and can see the big picture, or you're walking blindly toward a "good enough" solution.

In the first case, I spend most of my time on architectural design and modeling. The implementation phase is rather quick, because everything is well defined and you can write a lot of tests up front.

In the second case, you’ll discover the big picture by facing the problems while trying to find and implement a working solution.

Getting experienced means getting faster at figuring out the big picture.


I think much of this depends what you are employed to do.

Sometimes it's about making things less complex: good practices, code quality, clean code, and testing mean you can afford a bit more risk.

Equally, sometimes you're employed to just get things done: make it work, even fake it till you make it, proofs of concept, minimum viable products, etc.

I've done both, I enjoy both, but they are two different modes of working.

Ultimately the answer is "it depends" and mileage may vary.

Quality is a journey not a destination.

Yes, put delivering value above anything else, sometimes that value is quality.


Great article. Related to the “code quality” point, what I have personally realized is that it’s easy to over complicate solutions. Often, a simple solution is better (especially when it is part of a larger, more complex system).

When I was younger, I liked clever solutions to show off my skills. Now I cringe at that. Other people have to read my code. And more importantly, “I” have to read my code tomorrow or next week. I’ve started to really prioritize simplicity. Which is ironically quite hard!


Good writeup.

>Imagine 50+ comments on your PR with all the semicolons you missed!

Depends. You should have a style guide, and it should document the expectation. If it was decided that semicolons are mandatory, then the code review should be failed. If they are optional, then the code reviewer shouldn't flag it. The alternative is to use a linter/formatter and either auto-fail during a pull request or have the tool fix it up.
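
For example, with ESLint (an assumption; any linter with equivalent config works), the decision lives in one file instead of in reviewers' heads:

    // .eslintrc.js -- a minimal sketch recording the team's semicolon decision
    module.exports = {
      rules: {
        // "always" makes a missing semicolon a hard lint failure;
        // "never" would flag the opposite style. Either way, CI
        // enforces it so humans never have to comment on it.
        semi: ["error", "always"],
      },
    };

Then the 50 semicolon comments become zero, and the review can be about the code.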


> So imagine my surprise when I showed up at my first day on the job at a startup and found no tests at all. No tests in the frontend. No tests in the backend. Just, no tests.

> Nada. Zip. Null. Undefined. NaN tests.

On my current project I have 100% test code coverage, which I believe is quite unusual. But I am pretty sure that if I gave a talk about how I did it, most people would be horrified.


I worked on a project that achieved 100% code coverage and still had loads of bugs. That’s precisely when I lost all faith in TDD.


Why would people be horrified?


Because it doesn't follow best practices.


Thoughtful and deserving of HN front page attention.


I disagree with the documentation one. I really think developers need to focus a whole lot more on documentation, possibly prioritizing it over code.

In fact, I think there should be something like documentation-driven development, where devs first change the description of what the code should do, then change or add the tests, and only then write the code.


I agree. Title doesn't matter. There is so much title inflation in the industry today.

For me, becoming a senior developer has allowed me to see code more objectively than before. Instead of following convention to a T, I can now look at code and make my own judgment about whether to follow the convention. It's a great sense of relief.


Over the course of my career, I've gone from deploying via scp, to svn-up/git-pull, to every variety of actual deployment pipeline imaginable with dedicated dev-ops teams. Now I'm the only engineer at a tiny 501(c)(3), git-pull straight from production again, and have never felt more free.


I love the uncluttered UI of the site you work for, https://sumup.com/. It sticks to the message for the potential customer instead of shoving lots of blog/jobs/other links all over the place in the top menu.


Good article; I agree with most of it. However, code reviews that only have comments about code style are kinda useless in my opinion. I'd rather get 50 comments about how to improve my code than about missed formatting (which can also be handled automatically).


That is the exact point the article is making. When you are junior you think that code style is the most important thing but it isn't. (Also it can be handled automatically as you say)


> "Not everyone will become senior during their career"

This. So much this. I didn't realise it until a little while ago, but when I did, it made me realise I need to understand what actually makes someone a senior. It's not just an automatic progression.


Absolute truths passed on by my various line managers:

- Assumptions are the mother of all fuckups.

- The optimal number of people in an organisation is 3. At 4 you start losing efficiency. At 80,000....

- The more you progress in the hierarchy, the more you realise it is the same idiots at every level.


We’re so far behind everyone else (AKA “tech FOMO”)

This one hit strongly for me because I didn't realize it until reading this...

I am sure our CTO and PM are tired of me pushing for the latest tech I read about on here all the time. I will hold back a little more from now on.


> I read “that orange website” all the damn time.

Yeah, don’t take the “orange website” too seriously.


Unless people are saying nice things to you on the orange website, then you definitely should listen!


I mean, it's often valuable to hear what people have to say about you, good or bad, but you also need to figure out when the feedback you're getting is not useful.


A good plan violently executed now is better than a perfect plan executed next week. - Patton

It's much better to get something, anything, on the screen and working than an elegant design constantly refined but never put to actual use.


Not just "anything". Patton did say a "good" plan. A bad plan can tank you just as much as elegant design that isn't used.


"It's about the code." That is probably the biggest untruth I though when I was a new developer (I dislike the term "junior").

The truth: "It's not about the code, it's about the people."


I think now, more than ever, there needs to be a bridge mindset between technical and non-technical. I learned that very early on, and even spent some time in a customer-facing, non-technical role to hone my bridge mindset. The ability to have empathy for the end user, and for everyone else upstream who will touch your logic, is paramount and takes care of a lot of issues.

It's up to us as technical professionals to care about both the "what" and the "how". I learned the ability to pan in and out on a particular need early on, and I cannot be more thankful for that.


Some of us technical people like to be client-facing. The key is to make room for us to do that. Let us handle the client/user and then take meetings with us so we can filter what's appropriate to you.

The biggest problem with "technical people" is the desire to retreat to a pair of headphones and not communicate with anyone all day. Set aside some consistent portion of your day where that's the only time you'll take meetings and we'll work around your schedule.


I agree. I enjoy speaking to prospects and clients, but I know not everyone feels the same way. I would say instead of letting technical people handle the client/user, it should be done in a collaborative way. Filtering shouldn't be necessary as that has the potential of keeping possible problems/solutions in the dark.

Technical and nontechnical people should collaborate face-to-face on a daily basis. That includes customer interactions.


As you get older, the thing you unlearn is that there are no absolute truths.


The only thing to do when you implement a linter is fix the 800 errors it generates. It shouldn’t take long. That’s absolutely the right thing to do, and I’d happily back anyone on my team who does that.
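
If it's a JavaScript codebase linted with ESLint (an assumption about the stack), most of those 800 errors are usually mechanical and the tool will clean them up itself:

    npx eslint . --fix

Whatever survives --fix is the short list actually worth a human's attention.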


How can you define seniors as pros? I know some senior devs that suck at programming, still coding in Perl using Notepad++ and their systems are full of bugs.


Perhaps the reason for "nobody writes tests" in the real-world isn't primarily because of time/cost, but because virtually no real potential bugs are testable. If you're writing a JPEG encoder or database model handler, yes, you can test that all day, but those things already exist and are well-tested for you. But if you're designing retail software or a web app, there are 2^10000 things that can go wrong, so most companies ignore automated unit testing in favor of human quality control.


This is all wrong. When you have enough people working on a codebase such that not everyone knows everything about it, tests are a great way to document the behavior. They're much better than something like a wiki, in fact, because nobody actually keeps that kind of documentation up to date.
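
A made-up sketch (applyDiscount and the pricing rule are invented): the test name itself records a business rule, and unlike a wiki page it fails loudly the moment it stops being true:

    import { applyDiscount } from "./pricing"; // hypothetical module

    test("a discount never drops the price below the wholesale floor", () => {
      const item = { price: 100, wholesale: 80 };
      expect(applyDiscount(item, 0.5)).toBe(80); // 50% off would be 50; the floor wins
    });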


I'm not arguing that tests are useless. I'm arguing that tests are usually impossible. I invite you to suggest a test for https://github.com/VCVRack/Rack for example that could either 1) have prevented a bug, or 2) document the software's behavior.


You only have 2^10000 if you have bad boundaries and overcoupling of your subsystems. Thus another benefit of unit testing is validating your architecture: bad architectures are inherently difficult to unit test.
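
A hypothetical sketch of what that looks like in practice: put the storage behind an interface and inject it, and the unit test needs a ten-line fake instead of a running database:

    interface Order {
      id: string;
      total: number;
    }

    interface OrderStore {
      find(id: string): Promise<Order | undefined>;
    }

    // Because the store is a parameter, a test can pass an in-memory fake.
    // If writing that fake is painful, the pain is your architecture
    // telling you the boundary is in the wrong place.
    async function orderTotal(store: OrderStore, id: string): Promise<number> {
      const order = await store.find(id);
      if (order === undefined) throw new Error(`no such order: ${id}`);
      return order.total;
    }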


>bad boundaries and overcoupling of your subsystems

And that's the overwhelming majority of ways to make money with software. Think of every video game, desktop application, and mobile app that people purchase. What you call a "bad boundary" isn't bad at all. The customers just expect the software to do 2^10000 combinations of possible things across N platforms on M hardware with integration with P external services. Imagine if you emailed a company's tech support saying that the CAD software you bought doesn't work with your Nvidia 1060, and they responded "sorry, that would make our subsystems overcoupled".


> Early in your career, you can learn 10x more in a supportive team in 1 year, than coding on your own (or with minimal feedback) for 5 years.

Spot on!!!


I've been doing dev work for 20+ odd years now, and I suppose I can relate to many of these realizations. A young developer's framework is full of sharp polemics that soften over time as we scrape against them through experience. It's definitely something I look for in junior devs - a few strongly held opinions and a willingness to back them up by referring to some coherent "moral" framework - as long as this does not descend into technical dogma or dismissal of other perspectives.

If I were to step back and try to characterize the growth in my understanding of software development in a single general statement, putting aside all the hard-tack technical experience, it's this:

I am beginning to understand the subtle and complex delineation between _good_ and _useful_, and the role that execution plays in that, with all its myriad parts: prioritization, management, technical risk mitigation and hypothesis validation, consensus building on technical direction, post-hoc validation, etc.

And usefulness has a lot of facets in execution: not just technical but social. A good project and good idea might come to nothing if you are oblivious to obtaining some organizational mandate for it, and it gets sidelined due to shifts in priorities. If this happens, it means a misstep was made earlier: either the project should never have been started at all - due to awareness of upcoming priority changes, or you should have done the organizational consensus building to ensure that the work had the runway it needed to complete.

The same goes for technical consensus among implementors. Sometimes this can be avoided by giving clear mandates to trusted individual leads, but some complex projects really need the input of multiple senior members.

I'm starting to find lately that many of the contributions I'm most proud of are the ones where I come to firm conclusions on what work _not_ to do: concluding that certain tasks were good but not useful enough, or determining ahead of time that certain planned implementation paths are not going to deliver what we might have expected, and thus we should scrap the idea. It has saved inordinate amounts of time that would _otherwise_ have been wasted.

The challenge for me has been reconciling this new understanding with my old methods for evaluating my own performance. I know these days, within reasonable bounds of arrogance, how to write decently complex software and understand it. As I move on to considering these higher level concerns, I find that I'm asking myself whether that opportunity cost is worth it: "I _could_ be spending time writing good code right now, instead of analysis and reports and meetings and planning. Is this new activity of mine _useful_?"

I'm still building my internal model for measuring my effectiveness in this new domain.. but it's clear that it's a profoundly impactful and worthwhile multi-factor optimization problem to tackle.


But seriously, put in your damn semicolons.


A paean to normalization of variance.


Learned as a junior: results matter.

Learned as a senior: stories matter.


Most of the time, legacy code and technical debt appear on the same project.


>I read “that orange website” all the damn time.

Is she talking about HN?



I must say this is a very well written blog post.


I just started my first job as a junior-to-mid-level dev last week. My team is really awesome and I've been learning a lot. I'm the only junior dev on a team with 3 mid-to-senior devs, so there's lots to learn.

1: A great quote from one of the most senior developers I look up to in town: a developer who's been developing for 1 year can have _far_ more experience than someone who's been doing it for 3 years. I find that people who introduce themselves as "developer for 15 years" aren't those types; rather, it's the ones who show innovation.

Another quote: to become a better developer, you need people at the same level as you, people more experienced, and people less experienced. So I've taken it upon myself to mentor junior developers in town. Other ways I've learned are by talking through technical challenges with developers from multiple companies in my region, doing technical and lightning talks, and participating in hackathons.

2: I wasn't entirely surprised by how little testing there was, even at companies that have been around for 3+ years. I had learned that at most companies in my area (even the ones with great developers) test coverage was poor. There is an opportunity cost in testing something that might change later. As Kent C. Dodds put it, get some test coverage on core features, but make sure you have integration tests.

3: I think working with legacy codebases is a great way to learn. The codebase tells a story about how the scope of the project changed, how workarounds were made to meet client expectations on short turnaround times, etc. These things you don't learn on your own. Being forced to break something down and build it into a better version is rewarding, so long as it's not everything you do.

4: My first code review was fairly informal; it was mostly pointing out everything I didn't consider when implementing a feature, e.g. deleting old unused code, formatting things semantically and expressively, etc. I was still setting up my dev env, so there were a few style errors that didn't match the ESLint style guide, but we had test runners for those.

5: Documentation has always been important to me, but I avoided forcing it on others as a way to bookmark things. I think it's far better to just note down your interpretation of how the code works on paper, or write your own documents in Confluence/team wikis so your senior dev can check your interpretation of how things work. As the first new dev on the team in a year, I found the documentation was out of date and each dev only knew specific parts of the codebase, so I've been documenting how everything works from an eagle's-eye perspective. Those notes have a lot of value for the next dev who gets hired, because they come from a clean slate like you did.

I also think documenting your pseudo-code inside tickets is important. I learned the hard way, by getting fired at a previous job many years back: you need to actively show your team what you do. Documentation is the lowest-hanging fruit; you need to think about what you're going to say in your daily standup tomorrow. Communication is essential.

6: For me, technical debt is really about being able to identify where the debt lies. For instance, Rails and Laravel have well-documented magic, but the path less traveled needs to be understood. My rule of thumb: if I'm going to write less-than-ideal code, it needs a comment block indicating why I did so (see the sketch at the end of this comment). Don't push code you yourself won't want to read 6 months later.

7: In regard to seniority, I think the best developers know they are junior in other areas and are multidisciplined in fields besides programming. I've picked up a lot of great wisdom from developers in my city's Slack channel just by watching conversations unfold. I know some first-year developers whom I already consider senior, and some developers who've been doing this far longer whom I do not.

I think the most important thing I've learned in these 2 weeks of my first dev role is this: you can be a mediocre dev who just copy-pastes things and still be a great asset to any team. There's a lot of value in documentation and communication alone.

Other things I learned: you can bother your senior dev if they just sat down at or stood up from their desk, or if their headphones are off. But experiment with communicating asynchronously in Slack and synchronously in person, and see what works and what doesn't, even if they sit right next to you.

At the crux of everything, it's important to have a certain growth mindset. I embrace the Japanese methodology called lean. Also, being okay with self-humiliation is a great way to learn. And force yourself to take breaks, by only using small water bottles or none at all.

Also, make sure you say hello to everyone in the morning. My office has 20+ mechanical/electrical/civil engineers and designers, and I make sure to say hi to them. I take the longer route through the front door every morning; my boss makes fun of me for that. Give the gift of giving to others in the community; it reflects well on you. My team had already heard of me, despite never having met me, because of how active I am in the local tech community. Rubber duckies are good, inexpensive programmer gifts.
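
P.S. Re: point 6, here's a made-up sketch of the kind of comment block I mean (the cart scenario, CartApi, and refreshCart are all invented):

    interface Cart { id: string; items: string[]; }
    interface CartApi { getCart(id: string): Promise<Cart>; }

    // HACK: re-fetch the whole cart instead of patching one line item.
    // Why: our cache invalidation misses concurrent updates, and a stale
    // cart can double-charge the customer. Cost: one extra round trip
    // per checkout. Remove once the cache layer is rewritten.
    async function refreshCart(api: CartApi, cartId: string): Promise<Cart> {
      return api.getCart(cartId); // slow path, but always consistent
    }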


Epoché


That's the kind of talk I always hear from mediocre devs once they reach their first management job.



