
Things I Learnt from a Senior Software Engineer - neilkakkar
https://neilkakkar.com/things-I-learnt-from-a-senior-dev.html
======
OJFord
> in my team culture it’s not frowned upon to “snoop behind” people writing
> code. Whenever I sensed something interesting going on, I’d roll around and
> watch what was happening.

Agh, I'd hate that. In fact if anything interesting had been going on on my
screen it would immediately stop, no way I could work with someone watching.

I struggle enough at desks with my back to a door or where people walk by,
just can't stop myself from being distracted and looking.

(It's not that I'm not working, or otherwise doing anything I shouldn't be,
it's just.. distracting is the best word I have. It's not where I'd sit in a
restaurant, and I've despised not having a choice but to sit at such a desk at
work.)

~~~
ken
I've read that in feng shui what you describe is called the "command
position".

I'm not able to work with my back to a crowd. That's a recipe for anxiety. I
quit a company when they moved my desk so my back was to a hallway, and
refused to compromise on this issue in any way.

~~~
mikekchar
If it happens again, I've been told that having a mirror on your desk so that
you can see what's going on behind you can help a lot.

~~~
JohnBooty
I tried this and it absolutely did not help.

In a "normal" situation where your desk faces the rest of the room, you would
become aware (in an "ambient" way) of people wandering into your field of
view.

But with your back to the room/hallway/whatever, you need to explicitly scan
the mirror frequently (sort of like polling, in computer terms) to know if
somebody is approaching.

It's like trying to work while also focused on a video playing in a desktop
window or something.

Of course, as with anything, some people are not bothered by this. (And, of
course, some of those people are fooling themselves into thinking there's zero
impact on their focus/productivity.)

~~~
IloveHN84
After we moved to another floor/office, I forced my team lead to move to the
desk with his back facing the door while I conquered the desk with a wall
behind me, after spending 4+ years with my back to the open office we had
previously.

My concentration and quality of work improved dramatically, and it turned out
that he had previously been spending large amounts of time watching
YouTube/news sites/shows instead of working. He's frustrated that he has to do
real work now.

------
teddyh
> _Naming your clusters? Naming them after the service that runs on them is
> great, till the point you start running something else on them too. We ended
> up naming them with our team name._

This is covered by RFC 1178¹, _Choosing a Name for Your Computer_ (from 1990):

 _Don't choose a name after a project unique to that machine._

 _A manufacturing project had named a machine "shop" since it was going to be
used to control a number of machines on a shop floor. A while later, a new
machine was acquired to help with some of the processing. Needless to say, it
couldn't be called "shop" as well. Indeed, both machines ended up performing
more specific tasks, allowing more precision in naming. A year later, five new
machines were installed and the original one was moved to an unrelated
project. It is simply impossible to choose generic names that remain
appropriate for very long._

 _Of course, they could have called the second one "shop2" and so on. But then
one is really only distinguishing machines by their number. You might as well
just call them "1", "2", and "3". The only time this kind of naming scheme is
appropriate is when you have a lot of machines and there are no reasons for
any human to distinguish between them. For example, a master computer might be
controlling an array of one hundred computers. In this case, it makes sense to
refer to them with the array indices._

 _While computers aren't quite analogous to people, their names are. Nobody
expects to learn much about a person by their name. Just because a person is
named "Don" doesn't mean he is the ruler of the world (despite what the
"Choosing a Name for your Baby" books say). In reality, names are just
arbitrary tags. You cannot tell what a person does for a living, what their
hobbies are, and so on._

1\.
[https://tools.ietf.org/html/rfc1178#page-2](https://tools.ietf.org/html/rfc1178#page-2)

~~~
stefs
in my home network i name my computers after animals, whereby the power/size
of the computer roughly resembles the size of the animal.

my beefy desktop may be the whale, the nas is the rhino, the notebooks are
roughly dogs, the raspberry pis are small animals like mice, the chromecast is
e coli.

bonus: i'm never going to run out of animal names.

~~~
zrail
I use Simpsons bit characters. Never gonna run out of those.

~~~
dillonmckay
At my first job, I named each web server after a different Simpsons character,
all attached to a big KVM, and I customized each desktop to match the
character so I could easily tell which server I was on.

~~~
AstroJetson
Some people thought it was annoying, but the desktop picture had the character
in the upper right corner, away from the icons, so you could glance and see
which one it was and go right to it.

I've also used Simpsons characters for test data. There are some built-in
structures that can be used: Abe -> Homer == Marge > Bart, Maggie, Lisa. Most
people remember them.

------
amzans
_> Good engineers themselves design systems that are more robust and easier to
understand by others. This has a multiplier effect, letting their colleagues
build upon their work much more quickly and reliably - How to Build Good
Software_

The advice above is probably one of the things which would have the most
impact across all your activities as a developer. When people talk about
simplicity in software, they don’t necessarily refer to ease of use or number
of lines of code, but instead it’s about how understandable a solution is
given their shared knowledge.

~~~
dxhdr
> When people talk about simplicity in software, they don’t necessarily refer
> to ease of use or number of lines of code, but instead it’s about how
> understandable a solution is given their shared knowledge.

Rich Hickey gave a great talk on the topic of "simple" vs "easy":
[https://www.youtube.com/watch?v=rI8tNMsozo0](https://www.youtube.com/watch?v=rI8tNMsozo0)

------
Everlag
The referenced idea of a 'human log' is great[0]. I started doing something
similar 4 years ago and it eventually evolved from per-project notes into a
full diary. Being able to search for 'August 24 2016' and know exactly what I
did that day is quite powerful.

I encourage everyone to take 10 minutes (or 30...) at the end of the day to
write up what they've done. Just a text file with minimal formatting has
scaled to 2.6MB of hand-typed text. Though, after a while, I've tended to
shard specific long-running topics out into their own files.

[0] [https://neilkakkar.com/the-human-log.html](https://neilkakkar.com/the-human-log.html)

~~~
ThrowawayP
And what does one do when one is struggling with burnout/depression/insomnia
and the only legitimate thing to write down is "Not a whole lot really." for
days in a row? (A problem a um, ... friend has, of course.) Risky habit to
have under those circumstances.

~~~
JohnBooty
That might be the _best_ time to take up this habit.

Write down _everything_ you do and honor it as an accomplishment.

Getting out of bed. Reading an article about your craft. Brushing your teeth.
Showering. Dressing. Reading somebody else's PR. Commenting on somebody else's
PR.

Every. Single. Thing.

Attending a meeting. Having a hallway conversation. Answering an email or ten.

Not taking your own life. Surviving. Not doing other self-destructive things.

Chances are, if you are able to get out of bed (not always a given) you
probably did a lot of "little" things. Those are achievements. And in your
depression those things may have taken more effort than running a 5-minute
mile.

Sure, the goal is to eventually accomplish _more._ That's fine. But honor the
things you're doing.

And hey... good luck.

------
noonespecial
_I was already forgetting things I learnt. They either became so internalized
that my mind tricked me into believing I always knew them, or they slipped my
mind._

The best time to write a tutorial is while you are learning the thing
yourself. You learn it better for doing it, and others benefit from it more
because it automatically comes from a beginner's perspective.

------
caymanjim
> I like a bit of humour in my code, and I wanted to name it GodComponent.

No. Just no. Do not ever be funny in code. No one else likes your humor, and
it's distracting. (Ignoring the other reasons "GodComponent" is a bad name.)

~~~
philwelch
I think it's okay to be funny in test cases. For instance, my test data has
included the user-agent string, "The Thrilla In Mozilla".

~~~
jchook
As long as you don't go into potentially offensive humor

~~~
lonelappde
Upthread we have someone boasting about naming their business machines after
porn stars.

~~~
jchook
I had to learn the hard way

~~~
Moru
I worked in home support for an ISP. A few times I was at a customer's place
helping with a router, and the mother would call her son to get the password
for the router. Not always a nice password...

~~~
philwelch
I very much believe in offensive passwords/passphrases (maybe not for wireless
routers) specifically because:

(a) You're not supposed to tell anybody what they are.

(b) Offensive passphrases are easier to remember.

~~~
ses1984
Also easier to crack if an attacker knows that detail.

~~~
philwelch
Not that it narrows it down that much.

~~~
ses1984
You could probably throw out 90% or more of dictionary words for your
permutations, I'd say that is significant enough to paint a target on your
back.

~~~
philwelch
If your password is derived from a four word phrase (per the XKCD formula,
which isn’t the only one), potentially all of the individual words could be
inoffensive in isolation. There’s no obvious way to operationalize the human
intuition of offense in a way that restricts the search space if you’re smart
about it.
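To put rough numbers on the disagreement above (a back-of-the-envelope sketch, not anything from the thread; the 7776-word Diceware-sized list is an assumption): if an attacker can discard 90% of the wordlist, each word loses about log2(10) ≈ 3.3 bits, so a four-word phrase loses roughly 13 bits.

```python
import math

def passphrase_bits(dictionary_size: int, phrase_length: int) -> float:
    """Entropy in bits of a phrase of independently chosen dictionary words."""
    return phrase_length * math.log2(dictionary_size)

# Four words from a full Diceware-sized list:
full = passphrase_bits(7776, 4)

# Attacker assumes only the "offensive" 10% of the list is ever used:
restricted = passphrase_bits(7776 // 10, 4)

print(f"full list:  {full:.1f} bits")        # ~51.7 bits
print(f"10% subset: {restricted:.1f} bits")  # ~38.4 bits
```

Whether the ~13-bit loss matters is exactly the disagreement here: ~38 bits is still far beyond online guessing, but within reach of a determined offline attack.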

~~~
ses1984
It all depends on if you have anything worth cracking. Sure, if your average
hn reader encrypts a password db with a dirty four word phrase, that reader
will be fine because no one is willing to rub two pennies together to crack
that.

On the other hand if you're protecting secrets actually worth something...

~~~
philwelch
Hey, for all you know I might find certain seemingly random 128-character
alphanumeric strings _very_ offensive ;)

~~~
ses1984
Seemingly random isn't random.

------
heinrichhartman
> When refactoring and preventing huge-ass PRs: “If I’d have changed all the
> tests first then I would have seen I had 52 files to change and that was
> obviously gonna be too big but I was messing with the code first and not the
> tests.” Is breaking it up worth it?

My 2 cents: There are two things to consider:

1\. reviewability

2\. deployment risk

If it takes a colleague 3 days to review your code, your PR is too big. If you
panic at the thought of ever deploying this, your PR is too big.

On the other hand:

\- Huge PRs that only change formatting are fine. Easy review. Low risk if
properly automated/tested.

\- Large PRs that are feature-neutral are acceptable as long as they are
reviewable.

\- PRs that refactor 2000LOC, fix 2 bugs and add 3 features are not a good
idea.

~~~
einpoklum
The thing is:

1\. Some people balk at you making lots of small PRs, because it feels like
noise to them.

2\. Some people/organizations make you do a lot of work on a separate branch
and then want you to PR just once for the final feature you're implementing.

~~~
lonelappde
1\. Commits can be merged/squashed when merged upstream.

2\. That's fine, your branch has the series of small PRs that people can
review.

------
honkycat
Just to toss in my two cents about testing:

Unit tests are for refactoring and declaring behaviour. Additionally, a lot of
people have made the same observation: Code that is easy to unit test tends to
be more modular and have better architecture. If you are struggling to test a
function, think about how you can change your design to make testing easier.
It will probably improve your code quality.

Integration tests are for finding bugs and exceptions and should be a part of
CI/CD.

You want both. The more testing the better.

Comments: I find comments, outside of docstrings, a smell. Even then:
docstrings often become a substitute for a decent type system, so maybe think
about what is happening there as well.

Documentation should be generated from your codebase. Otherwise, it will
inevitably become out of date as people forget to update it. If you need to
comment on what something is doing, usually you can move that behaviour into a
function and test that function.
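A minimal sketch of the "design for testability" point, with a hypothetical invoice example (the names and the integer `due_day` field are invented for illustration): the behaviour worth unit-testing is pulled into a pure function, leaving only a thin I/O shell for the integration tests.

```python
import json

def overdue_ids(invoices, today):
    """Pure core: ids of invoices past due. Trivial to unit test."""
    return [inv["id"] for inv in invoices if inv["due_day"] < today]

def print_overdue(path, today):
    """Thin I/O shell (file reading, printing); covered by integration tests."""
    with open(path) as f:
        invoices = json.load(f)
    for invoice_id in overdue_ids(invoices, today):
        print(invoice_id)
```

A unit test now needs no files and no mocks; it just calls `overdue_ids` with plain data.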

~~~
heinrichhartman
I have heard this so many times:

> Comments: I find comments, outside of docstrings, a smell.

But is this really the case? I find comments like these invaluable:

    
    
         /*
          * Mark walreceiver as running in shared memory.
          *
          * Do this as early as possible, so that if we fail later on, we'll set
          * state to STOPPED. If we die before this, the startup process will keep
          * waiting for us to start up, until it times out.
          */
         SpinLockAcquire(&walrcv->mutex);
    
    

[https://github.com/postgres/postgres/blob/master/src/backend...](https://github.com/postgres/postgres/blob/master/src/backend/replication/walreceiver.c#L192-L199)

Part of the job of a developer is to communicate

\- domain model (Concepts, Relations between them)

\- intention behind the code (Why is this code here?)

\- possible pitfalls (I tried this, it does not work because ...)

\- limitations (TODO: This special case does not really work, yet)

for your colleagues and future heirs.

How would you ever cram information like this into variable names and types?
(...without introducing so many abstractions that the code becomes
unreadable.)

~~~
honkycat
This is EXACTLY where comments should be used, great example!

Extremely high-performance database code is not what most of us are doing,
however. We are doing enterprise software development to send emails and
scrape money out of people.

I would want that function to be multiple smaller functions with docstrings
and such, but obviously, that would add stackframes to the call stack and be
slower.

Additionally, many modern compilers have an easier time reasoning about and
optimizing multiple smaller functions as opposed to huge functions with a ton
of variables and logic. Plus large functions have an impact on garbage
collection etc. etc.

~~~
heinrichhartman
> Extremely high-performance database code is not what most of us are doing,
> however.

That's true. But look at the overall coding style in that file:

Code is structured into cohesive blocks, each prefixed by a comment that
explains in plain English the intention, pitfalls, and limitations of the
following code.

I find this quite elegant and readable. An ideal to aspire to.

I am not advocating nonsensical JavaDoc:

    
    
        // Gets foo from bar
        private Foo get_foo(Bar bar);
    

If you are able to just use names and types to document your intentions,
pitfalls etc ... then by all means, do just this. E.g.

    
    
         for( person in address_book ) {
              body = template.render( name => person.name, ... )
              mail = Mail.new(person.email, body)
              mailgun.queue(mail)
         }
    
    

However, if stuff gets more nuanced, don't be afraid to use plain English to
explain what you are doing. Your future self will thank you :)

~~~
heinrichhartman
> I would want that function to be multiple smaller functions with docstrings
> and such [...]

I have tried that style in the past, and have reverted to writing longer
functions with intermittent comments. I now consider "single-caller functions"
a smell.

If logic is serial (do A, then B, then C) it's fine to have serial code:

    
    
       main() {
         ... A ...
    
         ... B ...
    
         ... C ...
       }
    

The issue with splitting the intermediate steps out into functions is that
the serial flow is broken. Instead, the logic now looks something like:

    
    
        A(...){ ... }
        C(...){ ... }
        main() { 
           A(...)
           ... B ...
           C(...)
        }
    

Where B might be a tiny task that is not worth splitting out. In effect:

\- The ordering of the logic is broken.

\- Passing arguments can be a pain if a lot of pieces are touched.

And what have you gained? I'm not sure there are many upsides besides the
debatable aesthetics of avoiding inline code comments.
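As a concrete (hypothetical) Python rendering of the style argued for here, with an invented `import_users` example: A, B, and C stay in reading order as comment-prefixed blocks, and the tiny step B never becomes a single-caller function.

```python
def import_users(rows):
    # A: normalize the raw input rows.
    users = [{"email": r["email"].strip().lower(), "name": r["name"]}
             for r in rows]

    # B: drop duplicate emails (tiny step; a helper here would only
    # break the serial flow and add argument-passing noise).
    seen, unique = set(), []
    for user in users:
        if user["email"] not in seen:
            seen.add(user["email"])
            unique.append(user)

    # C: assign sequential ids.
    return [{"id": i, **user} for i, user in enumerate(unique, start=1)]
```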

------
Ididntdothis
I think this is pretty good.

As a pretty senior dev myself, I always tell people to have an actual design
in mind. Know what you are trying to achieve, and when something goes wrong
you should be able to explain what you expected the system to do in that
situation. A lot of people don't seem to have a clear mental image of the end
goal and are overwhelmed.

Of course you should be flexible but you need to know where you want to be in
order to make good decisions.

------
ulkesh
I like the article. I just wish the senior software engineer had also taught
humility. The article isn't written with any ego, so I commend the author, but
it would be ideal for any mentor to also teach what to do when you're wrong,
when you make a mistake, when you cost the company money, etc.

------
contingencies
_The main value in software is not the code produced, but the knowledge
accumulated by the people who produced it_

Don't agree with this at all. If you're relying on people to maintain
knowledge then you're doing it wrong and setting yourself up for failure.
Document the why.

~~~
tehlike
You are thinking of just the running system. The decisions behind the why, the
quirks of the software, how it all fits together, and the knowledge of why you
should do certain things (like monitoring, experiments, and whatnot) are
important.

Good luck documenting all that. Good luck making that document discoverable
and readable.

~~~
seren
Often you can deduce the "why" from the code because it makes some sense, but
you can almost never find out the "why not".

Have they tried something else? Was it the best solution chosen after a
review, or something quickly put together? Likely you'll never know.

------
ztjio
Makes me a little ill that the author thinks Jeff Atwood coined that old-ass
joke.

~~~
neilkakkar
Nope, but it was the quickest reference I could find for the joke. I don't
like to put quotes in without a source.

~~~
LeonB
Hi Neil. The attribution is correctly given here:
[https://martinfowler.com/bliki/TwoHardThings.html](https://martinfowler.com/bliki/TwoHardThings.html)

(Hint... it's me.)

~~~
neilkakkar
Hey Leon, thanks!

I've updated the website :)

~~~
asgeir
Cunningham's law in action.

"The best way to get the right answer on the Internet is not to ask a
question, it's to post the wrong answer."

[https://en.wikipedia.org/wiki/Ward_Cunningham#Cunningham%27s...](https://en.wikipedia.org/wiki/Ward_Cunningham#Cunningham%27s_Law)

~~~
LeonB
You know it wasn’t Cunningham who said that? (Just kidding)

------
chaboud
“De-risking is the art of reducing risk with the code that you deploy.

What all steps can you take to reduce the risk of disaster?”

Honestly, I talk about de-risking mostly in two contexts in our projects:
schedule and invention.

Normally this means that de-risking is about frontloading unknowns and finding
pivot decisions that reduce the likelihood of getting too far in the wrong
direction or accumulating too much uncertainty.

I don’t use “risk” too often when talking about fault tolerant code or system
design. If you’re taking on “risk” in code, like the chance of a lost message,
a race condition, or corrupted data, I’m going to take away the keys.

Note: I encounter shoddy “we can go hours without a crash, so it’s good” work
more than I’d like to admit. It is possible to know why code and systems fail
(and account for it) much more than has become industry norm these days.

------
NotUsingLinux
The article, like the comments here, reads very strangely to me. Like a place
where there is no room for people. There can be space (physical as well as
cultural) for people; let's give team human a try. Please let's not be working
drones optimizing towards questionable goals...

[https://podcasts.apple.com/de/podcast/team-human/id1140331811?l=en&i=1000447434162](https://podcasts.apple.com/de/podcast/team-human/id1140331811?l=en&i=1000447434162)

------
7532yahoogmail
This article is fluff. Presented with a gosh-this-is-cool-isn't-it narrative,
nothing here suggests a senior engineer. Senior engineers must have a much
more nuanced assessment of the interaction between organizational norms,
teamwork, and software. There are no penetrating insights here. Software is
semi-formal at best, more of a hidden Markov chain. Indeed, the central
questions of software engineering are deferred as questions.

------
xuesj
A good programmer should also be good at writing essays.

------
greyskull
> In the end, we went with a database with role access control (only our
> machines and us can talk to the database). Our code gets the secrets from
> this database on startup.

How does the code get access to the database?

I've lived in AWS-land for so long, I don't know how the non-cloud world
manages secrets.

------
sidcool
There are other blogs by Neil Kakkar that I really enjoyed, including the one
on the Human Log. It's quite nice and practical.
[https://neilkakkar.com/the-human-log.html](https://neilkakkar.com/the-human-log.html)

------
greyhair
Requirements matter. Design matters. Data matters. Documentation matters.
Source control matters. Defect tracking matters.

All source code eventually "wears out" and has to be rewritten.

------
mleonhard
> I’ve come to love testing so much that I feel uncomfortable writing code in
> a codebase without tests.

This person became a good engineer in only one year! :)

------
aleksgrach
I would love to chat with a software engineer.

------
greenie_beans
my mentor writes bad code

~~~
greenie_beans
want to report back that i can't write good code either!

------
neilkakkar
Existing discussion on Reddit:

[https://www.reddit.com/r/programming/comments/cv6tnu/self_po...](https://www.reddit.com/r/programming/comments/cv6tnu/self_post_things_i_learnt_from_a_senior_software/)

------
einpoklum
> Things I Learnt from a Senior Software Engineer

I guess spelling wasn't one of them?

But other than that, it's a nice blog post. I like the "human log" suggestion
- should really pick that one up. I've been doing that already, but only for
my servers...

~~~
whitepoplar
What's wrong with the spelling? He's from the UK.

~~~
OJFord
I don't know what GP has a problem with, but I don't think the author's
British - 'internalize' in the opening paragraph.

~~~
Izkata
Probably "learnt" vs "learned".

------
adamnemecek
These posts keep repeating the same things.

"There are two hard things in computer science: cache invalidation, naming
things, and off-by-one errors. - Jeff Atwood"

This is not novel, nor insightful.

"Premature Optimization Is the Root of All Evil". I've reached a point where
every time I hear someone say this, I legitimately ignore everything else they
say.

~~~
lazzlazzlazz
You've become that unhelpful curmudgeon you used to dismiss.

